How to Win Buy-In for Agentic AI Investments Across the Enterprise

Enterprises scaling AI in customer experience must align leadership, risk, and frontline teams to convert pilots into everyday operations

Sponsored Post

Published: April 9, 2026

Nicole Willing

As AI agents become more capable, many enterprises are discovering that automating customer interactions is no longer a decision for customer experience teams alone. 

Deploying AI in customer-facing roles touches brand reputation, operational risk, compliance obligations, workforce design, and customer trust. That breadth of impact means AI agent initiatives increasingly require oversight and buy-in from the highest levels of leadership. 

Enterprises rolling out AI agents must treat the decision as strategic, Martin Taylor, Deputy CEO and Co-Founder of Content Guru, told CX Today.  

“If you’re going to automate customer interactions, that’s a board-level decision. They’ve got to agree that this is what you want to do in pursuit of certain objectives and outcomes.” 

Part of the caution surrounding AI agent programs stems from lessons learned during the first wave of GenAI experimentation. 

“When we’re looking at agentic AI, there’s a more thoughtful approach,” Taylor said.  

Enterprises are reassessing how AI programs are scoped, governed, and evaluated before committing significant investment. But even with clearer evaluation frameworks, gaining executive alignment remains complex. 

Why Enterprise Alignment Is Critical for AI Agent Success 

Different leaders approach AI with very different priorities.  

For CEOs and boards, the focus tends to be strategic. Automation can support innovation in customer service that creates competitive advantage and long-term efficiency gains. 

For finance leaders, however, the conversation is often shaped by previous disappointments, Taylor noted. 

“The CFO is always demanding savings be made. They want efficiency.” CFOs are aware that, outside of customer experience, many AI projects have failed to demonstrate clear returns, so finance teams increasingly expect strong evidence before approving investment. 

“Although certain sectors, such as CX, have seen some impressive efficiency gains and ROI from the implementation of GenAI tools, the experience isn’t universal. As a result, the CFO community may be feeling a bit bruised and want a more reassuring experience this time.” 

Technology leaders bring another perspective. 

“The CIO’s goal typically is to try and consolidate the number of technologies that they’ve got going on around their organization,” Taylor said. 

Many CIOs are currently focused on reducing complexity across technology estates. AI agents can deliver powerful new capabilities, but they can also introduce additional integration challenges and security considerations. 

These different motivations around the leadership table shape how executives weigh the investment and operational implications of AI initiatives, and they explain why AI agent programs often require careful negotiation across the executive team before moving forward. 

Frontline teams have a stake as well. “Teams using AI on the front-line need to know that they’ve got solid technology to work with.” 

This need for layered buy-in also reflects a broader realization. Customer service automation once sat primarily within contact center operations, but AI agents now operate at the intersection of CX, security, legal oversight, and corporate strategy. 

Why AI Brings Risk and Compliance to CX Decisions 

AI agents are bringing a wider group of stakeholders into CX technology decisions. Legal teams, compliance specialists, and information security leaders increasingly play central roles in evaluating deployments before they reach production. 

“These things can’t go live unless they are given a seal of approval from across the organisation,” Taylor said. 

And the growing complexity surrounding AI agents also means ownership cannot sit within a single function. 

Financial risk, cybersecurity exposure, regulatory obligations, and operational performance all intersect within AI-driven customer service. 

There are potentially significant financial and information security risks if the ownership of AI implementations is not shared across departments. 

Data sovereignty has emerged as one of the most prominent concerns. Research sponsored by Arqit and Intel found that 62 percent of respondents see data sovereignty and privacy risks as the biggest factors slowing AI projects that use a public cloud. Respondents said those delays are affecting customer experience (45 percent), operational efficiency (53 percent), and competitive advantage (48 percent). 

Enterprises want clear visibility into how customer data is handled by AI systems, particularly when cloud infrastructure spans multiple jurisdictions. 

“People want to know where their customer data is going to be processed, by whom, who’s got access to it, and what jurisdiction applies,” Taylor said. 

Legislation such as the U.S. Clarifying Lawful Overseas Use of Data (CLOUD) Act or the Foreign Intelligence Surveillance Act (FISA) introduces further complexity for multinational enterprises. In some scenarios, data stored in one jurisdiction could potentially be accessed under legal orders issued in another. 

“There is no common global standard for what’s allowable in AI. Common standards don’t exist in the handling of customer data more generally, either.” 

“Data sovereignty is something that people weren’t even seriously talking about a year ago. But in my recent trip to CCW in Berlin, after agentic AI, it was the main thing people were talking about.” 

“It’s a very nuanced and complex picture, much more so than it was when we were just talking about answering a phone,” Taylor said. 

But enterprises need to balance the concerns of security teams with the need to maintain momentum when introducing new technology. 

“They will tend to be the slowest moving ship in the convoy and look at the very worst thing that could happen. When actually, very often the worst thing that can happen is the process goes back to how it was before.” 

Pilots as Organizational Proof Points 

When it comes to AI, pilots serve a broader purpose than technical testing. 

They help enterprises build internal confidence while providing evidence for finance, legal, and security teams that the technology can operate safely and effectively. 

Security leaders often want to see use cases proven in microcosm first; once results surpass agreed success criteria, they are more likely to approve a controlled expansion into other areas of the business, Taylor explained. 

“A good pilot sets out its objectives. It’s got a clear start point and end point and success criteria clearly defined and agreed by all concerned.” 

When pilots succeed, teams should already know what comes next. 

“When it’s proven, there should already be an exploitation plan of what’s phase two when this works,” Taylor said. 

That approach allows enterprises to scale AI gradually, expanding automation across departments, customer segments, or service channels once initial results are validated. 

Importantly, teams are also becoming more comfortable learning from unsuccessful experiments.  

“If you think of agile methodologies, where something doesn’t work, there’s a lot to learn there too. Why didn’t it work? What would we need to do in order to make it work next time?” 

For enterprises seeking to scale AI in customer experience, progress happens when leadership aligns around clear outcomes, pilots prove value, and deployment expands in controlled steps. Initiatives that treat AI as a shared responsibility across the enterprise are far more likely to move beyond experimentation and incorporate AI into core workflows at scale. 
