Agentic Separation of Duties
I'm sitting in the Google Cloud Next keynote, having just watched the Gemini Enterprise Agent Platform announcement. One underexplored risk in enterprise AI is the illusion of separation of duties.
Many companies will deploy multiple “agents” inside one platform and assume they’ve created checks and balances:
• one agent writes code
• one reviews it
• one approves deployment
• one audits controls
But if all of them run on the same underlying model, same orchestration layer, same permissions stack, same retrieval system, and same corporate culture, how independent are they really?
Changing prompts is not the same as changing incentives.
Changing personas is not the same as changing cognition.
Changing wrappers is not the same as changing governance.
This mirrors an old control problem:
Internal QA can be excellent, but it often shares the same assumptions, blind spots, deadlines, and normalized shortcuts as the builders.
That’s why serious institutions use external auditors, independent risk functions, and layered controls.
AI systems may need the same structure.
Real separation of duties in enterprise AI likely has three layers:
1. Role separation
Builder / reviewer / approver / auditor
2. Model separation
Different underlying models with different training histories and reasoning tendencies
3. Governance separation
Different owners, incentives, reporting lines, and authority to block decisions
Without that, a company may have many agents but one monoculture.
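The three layers above can be sketched as a simple independence check over an agent fleet. This is a minimal illustration, not any real platform's API; the registry fields and agent names are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical agent registry: fields mirror the three layers of
# separation (role, model, governance). Nothing here is tied to a
# real product API.
@dataclass(frozen=True)
class Agent:
    role: str        # builder / reviewer / approver / auditor
    model: str       # underlying model family
    owner: str       # reporting line that owns this agent
    can_block: bool  # authority to veto a decision

def independence_gaps(agents):
    """Flag agent pairs that share layers which should differ."""
    gaps = []
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            shared = [layer for layer, same in [
                ("model", a.model == b.model),
                ("governance", a.owner == b.owner),
            ] if same]
            if shared:
                gaps.append((a.role, b.role, shared))
    return gaps

fleet = [
    Agent("builder",  "model-a", "engineering", can_block=False),
    Agent("reviewer", "model-a", "engineering", can_block=True),
    Agent("auditor",  "model-b", "risk",        can_block=True),
]

# The builder and reviewer share both model and owner:
# many agents, one monoculture.
print(independence_gaps(fleet))
# → [('builder', 'reviewer', ['model', 'governance'])]
```

A real control would go further (checking training lineage, retrieval systems, and escalation paths), but even this toy check makes the point: role labels alone tell you nothing about independence.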
The next generation of enterprise controls may ask:
Not “How many agents do you have?”
But “How independent are they from each other?”
More thoughts from the show

