Agentic AI is the buzzword at ServiceNow’s Knowledge 26. It signals a shift from AI that answers questions to AI that takes action.
That shift matters most in customer experience, where CX teams live and die by trust. They also carry the operational risk when automation makes a wrong call at scale.
In an interview, Heath Ramsey, GVP of Product Management, AI Platform at ServiceNow, warned:
“…the promise of AI is the value. Promise is the transformation, but the gap is the governance and the visibility, especially within the CX space. Because it’s such an important part to make sure that you get the experience right the first time.”
Ramsey’s point is pragmatic. AI can unlock creativity. It can reshape workflows. But CX leaders still have to align that power with the experience they want customers to have.
Why Visibility Becomes a CX Requirement, Not a Nice-to-Have
Governance starts with knowing what is running. Ramsey frames AI Control Tower as a way to inventory generative AI services across an enterprise, then bring that view back into a system teams can manage.
For CX, that matters because AI often enters the business in fragmented ways. Different teams adopt different tools. Vendors deliver different agentic capabilities. Without visibility, CX leaders can’t confidently assess risk or performance.
Asked what visibility and control organizations gain through AI Control Tower, Ramsey emphasized:
“We clearly inventory and understand the generative AI state across the enterprise. It’s not just Control Tower. It’s not just for ServiceNow. I can automatically discover lots of different things that are happening across Microsoft, across Amazon, all the different services that are available, and bring it back into the platform, so that you have this visibility overall.”
That ‘discover and bring it back’ idea is central. Ramsey is arguing that enterprises need a single operational view of AI, not a scattered set of pilots.
Audit Trails and Accountability Once Agents Start Acting
As agentic AI moves from supporting humans to executing tasks, the accountability question gets sharper.
If an agent changes a customer record, triggers a fulfillment action, or influences the next-best step in a journey, someone still has to answer for the outcome. Ramsey keeps returning to traceability, and to the importance of connecting AI actions to deterministic workflows and processes.
Looking ahead, Ramsey stressed: “Because our AI specifically is linked to the platform, because it’s linked to the deterministic workflows and processes, you’ve got the complete audit trail of what those AI agents have done, so that you can also understand where the responsibility lies in the agent and what it did.”
For enterprise buyers, that is a direct governance claim. An agentic future only works if organizations can show what happened, why it happened, and what it changed.
The Expectation Gap That Can Break CX Trust
Ramsey also highlights a customer-side challenge. Many people assume AI should be perfect.
CX leaders know how damaging that expectation can be. A single flawed interaction can undo loyalty. At enterprise scale, small failure rates can become visible fast.
Ramsey framed the challenge bluntly: “There is an expectation that AI is perfect; there’s this expectation that it’s going to do the right thing every time, it’s going to be 100% accurate. But it’s not.”
That is why governance is not a background function in Ramsey’s framing. It becomes part of how CX leaders protect the experience while still adopting automation.
The winners will not be the teams that deploy agents first. They will be the teams that can govern agents at scale, prove value, and maintain customer trust while automation takes on more responsibility.
CX leaders will be judged on a new metric. Not only what they automate, but how well they can explain and control what automation does.