Ask any CX leader to produce a complete list of every AI agent currently running in their contact centre environment – what it is connected to, what data it can access, and who approved its permissions. Most cannot. This issue is growing by the quarter as major enterprise platform vendors ship autonomous agents directly into customer-facing operations.
Speaking on The Verge’s Decoder podcast, Okta co-founder and CEO Todd McKinnon outlined the issue:
“AI agents need to log into stuff. You need to have a system to keep track of them, define their role, define their permissions, what they can connect to and what they can do.”
For an industry that has spent the past two years racing to deploy agentic AI, the uncomfortable truth is that governance infrastructure has failed to keep pace with automation.
Why Is AI Agent Governance in the Contact Centre So Difficult?
The vendor deluge has arrived faster than most IT and CX teams anticipated. Microsoft has been pushing AI agents deep into Dynamics 365 Contact Center, equipping them with the ability to autonomously handle customer interactions across voice and digital channels.
SAP is automating ticket resolution through its own AI support capabilities, operating independently on support cases at a speed and volume that no human team can supervise in real time.
Salesforce, meanwhile, has been defending AgentForce directly to investors nervous about what agentic AI means for the future of SaaS, with CEO Marc Benioff framing the technology as an expansion of the platform rather than a disruption to it.
Agents are arriving from every direction simultaneously, each with its own identity model, data connections, and permission logic. No single team in most enterprises has the full picture across all of them.
What Happens When No One Is Tracking AI Agent Permissions?
The consequences of that visibility gap are not abstract. In a contact centre context, AI agents are routinely credentialed into CRM systems, customer data platforms, ticketing infrastructure, and telephony layers. They operate on behalf of customers, with access to purchase histories, account credentials, and personal data.
When the permissions governing those connections are distributed across vendor dashboards, IT procurement records, and individual team deployments, the risk is not just operational inefficiency – it is a customer trust exposure.
McKinnon is deliberately unsentimental about what happens next:
“Stuff is going to go wrong. There are going to be issues and threats and prompt injection.”
The question for CX leaders is not whether a misconfigured or compromised agent will cause harm – it is whether they will know about it quickly enough to act, and whether they have the mechanism to respond when they do.
This is the logic behind what Okta calls a “kill switch”: not shutting down the agent itself but revoking its access to every system it touches. “We’re pulling the access to everything the agent can access, not access to the agent,” McKinnon explains. “Almost like you would take a machine off the network.” For a contact centre running SAP’s autonomous ticket resolution at scale, or Microsoft’s Dynamics 365 agents handling live customer calls, that capability – and the speed with which it can be deployed – may prove to be an important governance feature in the stack.
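Okta has not published the mechanics of its kill switch, so the following is only an illustrative sketch of the shape McKinnon describes – revoking every grant an agent holds rather than stopping the agent itself. The grant store and all of its names are assumptions, not any vendor's API:

```python
from collections import defaultdict


class AccessGrantStore:
    """Tracks which systems each agent holds credentials for (illustrative only)."""

    def __init__(self) -> None:
        self._grants: dict[str, set[str]] = defaultdict(set)

    def grant(self, agent_id: str, system: str) -> None:
        self._grants[agent_id].add(system)

    def can_access(self, agent_id: str, system: str) -> bool:
        return system in self._grants.get(agent_id, set())

    def kill_switch(self, agent_id: str) -> list[str]:
        """Revoke every grant the agent holds, in one operation.

        The agent process itself is untouched – like taking a machine off
        the network – but every system it could reach now refuses it.
        Returns the list of systems whose access was revoked.
        """
        return sorted(self._grants.pop(agent_id, set()))
```

The point of the sketch is the asymmetry: access is granted system by system, but revocation is a single operation across all of them – which only works if the grants were centrally tracked in the first place.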
How Should CX Leaders Build an AI Agent Governance Framework?
McKinnon’s blueprint for what he calls the “secure agentic enterprise” translates into three practical obligations for CX technology buyers – none of which are vendor-specific, and all of which should be applied to Microsoft, SAP, and Salesforce deployments equally.
Build the inventory first. Before any governance framework can function, organisations need a system of record for every agent in the environment. “Just giving enterprises a list of the agents they have – sounds simple, but they need a list of the agents they have,” McKinnon says. For CX teams, this means consolidating platform-native agents, third-party CCaaS AI layers, and internally built automations into a single, maintained register.
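None of these platforms share a common inventory schema, so the shape of such a register is an open question. As a hedged sketch – every field name below is an assumption about what a governance review would want to see, not a vendor data model – a minimal system of record might look like this:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class AgentRecord:
    """One row in the agent system of record (illustrative fields only)."""

    agent_id: str                        # unique identifier within the register
    name: str
    source: str                          # e.g. "platform-native", "ccaas", "internal"
    owner: str                           # team accountable for the agent
    approved_by: str                     # who authorised its permissions
    connected_systems: tuple[str, ...]   # systems it holds credentials for
    reviewed_on: date                    # last governance review


class AgentRegister:
    """A single maintained register spanning every agent in the environment."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def add(self, record: AgentRecord) -> None:
        if record.agent_id in self._records:
            raise ValueError(f"duplicate agent id: {record.agent_id}")
        self._records[record.agent_id] = record

    def agents_with_access_to(self, system: str) -> list[AgentRecord]:
        """Answer the audit question: which agents can touch this system?"""
        return [r for r in self._records.values()
                if system in r.connected_systems]
```

Even a register this small can answer the question posed at the top of the article – what each agent is connected to, what it can access, and who approved its permissions.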
Define and control connection points. An agent’s value is proportional to its data access – but so is its risk. McKinnon argues that the long-term model is not consolidating everything into a single data warehouse and running agents on that, but ensuring that agents have precisely scoped access tokens to the systems they legitimately need. “There’s no good standard for how agents connect to a bunch of other systems they need to get their data,” he acknowledges – which means CX buyers must ask vendors directly how agent authentication is managed, and what audit trail exists.
Retain the means to revoke access quickly. The kill-switch model McKinnon describes – pulling an agent’s access to every system it touches rather than shutting down the agent itself – only works if those connections are inventoried and centrally controlled. Before deployment, CX buyers should establish how fast an agent’s credentials can be revoked across every system it is connected to, and who holds the authority to do it.
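Pending such a standard, the principle can at least be sketched: tokens that name one system and an explicit set of operations, checked deny-by-default on every call. All identifiers below are illustrative assumptions, not any platform's authentication API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ScopedToken:
    """An access token scoped to one system and an explicit set of operations."""

    agent_id: str
    system: str             # e.g. "crm", "ticketing"
    scopes: frozenset[str]  # e.g. {"read:ticket", "update:ticket"}


def authorise(token: ScopedToken, system: str, operation: str) -> bool:
    """Deny by default: the token must match the system and name the operation."""
    return token.system == system and operation in token.scopes


token = ScopedToken("support-agent-1", "ticketing",
                    frozenset({"read:ticket", "update:ticket"}))

assert authorise(token, "ticketing", "update:ticket")      # legitimately needed
assert not authorise(token, "ticketing", "delete:ticket")  # out of scope
assert not authorise(token, "crm", "read:customer")        # wrong system entirely
```

The design choice worth noting is that the token carries its own limits: an auditor can read what the agent is allowed to do without tracing the agent's behaviour, which is exactly the property a permission audit trail needs.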
The Governance Gap Is Growing
There is a harder truth embedded in McKinnon’s analysis. The behaviour of these systems is, by his own admission, “non-deterministic.” There is, as he puts it, “no free lunch” – you either grant agents sufficient data access to be genuinely useful, or you constrain them to the point of irrelevance. That tension is not going away. It is the defining operational trade-off of the agentic era in CX.
What separates organisations that navigate it well from those that face avoidable incidents is not the choice of platform vendor. It is whether they treat AI agents with the same governance discipline they would apply to a new human hire – knowing what access each one has, who authorised it, and how to take it back.
FAQs
What is AI agent governance in the contact centre?
AI agent governance is the process of tracking, permissioning, and controlling every autonomous AI agent operating within your contact centre environment.
Why do enterprises struggle to manage AI agent permissions?
Most organisations are deploying agents from multiple vendors simultaneously, with no single system of record spanning all of them.
What is an AI agent identity?
An agent identity is a hybrid credential – part human profile, part system account – that defines what an AI agent can access, on whose behalf, and under what conditions.
What is an AI agent kill switch?
A kill switch revokes an agent’s access to every connected system instantly, effectively removing it from the network without shutting down the agent itself.
How should CX leaders start building an AI agent inventory?
Begin by mapping every agent in your environment – platform-native, third-party, and internally built – into a single maintained register before addressing permissions or governance policy.
Is AI agent behaviour predictable?
No – Okta CEO Todd McKinnon describes agent behaviour as inherently non-deterministic, meaning governance frameworks must account for unexpected actions, not just intended ones.