McKinsey’s State Of AI: The Scaling Gap Is Now CX’s Problem

From agent pilots to workflow redesign, how the ‘State of AI in 2025’ is showing up for CX operations in 2026


Published: February 23, 2026

Rob Wilkinson

As someone who’s watched ‘AI transformation’ turn into a stack of pilots and a few shiny demos, I understand why many CX and operations leaders feel conflicted right now.

The technology is moving fast, executives expect visible wins, and customers have zero patience for half-working automation.

McKinsey’s The State of AI in 2025 puts real numbers behind that tension.

On the one hand, AI is now mainstream. McKinsey reports that “88% of survey respondents say their organizations regularly use AI in at least one business function,” and “72% report using gen AI (up from 33% in 2024).” On the other hand, the scale story is still shaky. The report finds that “nearly two-thirds have not yet begun scaling AI across the enterprise.”

For CX and ops leaders, that gap is the point. Customers are increasingly moving through AI-mediated journeys, but many enterprises still lack the operating discipline to make those journeys reliable.

AI Has Moved Beyond Hype, and That Changes The Standard

One of the strongest positive takeaways from the analyst community is that the conversation has changed from experimentation to business transformation.

Gagan Bhasin, Founder & CEO of VAO, sums it up in a way that mirrors the report’s underlying message about ‘high performers’:

“AI is no longer a futuristic concept, it’s a critical driver of business transformation.”

That framing matters for CX leaders because ‘transformation’ in service is not a slogan. It means rethinking resolution, redesigning workflows, and deciding what humans should do versus what machines can safely do.

McKinsey’s own segmentation supports this. It defines a small group of ‘AI high performers,’ roughly 6% of respondents, as organizations that report significant value and attribute more than 5% of EBIT to AI.

That small number is both encouraging and sobering. It suggests real value is possible, but it is rare.

The Numbers Show Maturity, But They Also Reveal Fragility

If you want a single metric that captures 2025’s shift, it is that adoption figure: 88% of organizations now use AI.

That market maturity creates a new problem for CX teams. When adoption is nearly universal, ‘we’re using AI’ stops being a differentiator. Customers won’t reward you for having a bot. They will reward you for reducing effort, increasing clarity, and fixing issues on the first try.

McKinsey’s data suggests AI is already moving needles in areas CX leaders care about.

In Exhibit 6, the report shows that AI can improve customer satisfaction by 45%. But the report also shows how uneven progress remains: only 39% report any EBIT impact attributable to AI. That implies many organizations are still struggling to tie AI work to durable business outcomes.

In practice, CX leaders see the same pattern: visible activity, limited operational change.

The Report’s Biggest Future Signal: Agents That Do Work, Not Just Talk

The other major positive signal is the shift from chatbots to agentic systems.

Abhijit Verekar, Founder and CEO of Avèro Advisors, summarizing the report in a LinkedIn post, frames the shift clearly:

“Agents are the new frontier: We are moving from chatbots that talk to agents that do.”

This aligns with McKinsey’s own data. The report finds 62% are at least experimenting with agents, and 23% are scaling agentic systems somewhere. In CX terms, that’s a directional move toward systems that can complete multi-step workflows, not just generate text.

But agentic AI also raises the bar for service operations. When AI can act, errors can become actions. So the core leadership question shifts from ‘Can it answer?’ to ‘Can it execute safely, consistently, and with accountability?’

McKinsey’s risk findings suggest this is not theoretical. The report says 51% experienced at least one negative consequence, and that inaccuracy is the most common, experienced by 30%.

For service leaders, that’s an uncomfortable baseline. Inaccuracy is not a rare edge case. It is a normal operating risk.

The Critical View: Many Organizations Automated Tasks, Not Operations

Now to the harder part. Several critics argue that McKinsey’s data accidentally reveals how shallow many AI deployments still are.

Brianna Bentler, Co-Founder and CEO of Stealth AI, captures this critique bluntly in a LinkedIn post:

“Zero integration. They’d automated tasks, not transformed operations.”

Even without adding new data, this critique maps onto what McKinsey reports about scaling.

If nearly two-thirds of organizations have not begun scaling across the enterprise, then many deployments are likely isolated to teams, tools, or point use cases. That is where automation lives, and where transformation struggles to show up.

For CX leaders, zero integration often means AI that cannot see order status in real time, verify policy eligibility, update the CRM cleanly, or escalate without context, creating extra work for agents.

So the customer gets a fast conversation, but not a faster outcome.

The Value Gap Is The Story, and It’s Not A Small One

Another critique focuses on how rare true value creation is. In their article, MJ Smith, CMO at CoLab Software, highlighted:

“Only 5.5% of companies drive significant value from AI.”

That figure is consistent with McKinsey’s own high performer definition and reported size of the cohort. It also reinforces the operational reality many CX leaders already feel: pilots spread quickly, but scaled value is hard.

Teams can demonstrate productivity wins in pockets, but they can’t convert those wins into enterprise-level outcomes that show up in cost-to-serve, customer satisfaction, or risk reduction.

McKinsey’s data suggests what separates winners is not access to models. It is the redesign of work.

The Harshest Critique: Big Spending Does Not Guarantee Success

The sharpest critical perspective in this set is the idea that investment level can be misleading.

In an analysis titled “I Did The Maths McKinsey Didn’t…”, Jing Ho, Director at Ardonio, argues:

“Spending more on AI makes you twice as likely to fail”

The critique points to survivorship bias and correlation being mistaken for causation. McKinsey shows that high performers are more likely to invest heavily, but that does not mean heavy investment creates high performance. Many heavy investors may still fail.

For CX and operations leaders, this is actually useful. It reframes the executive conversation. Budget matters, but it is not the lever that fixes the core scaling constraints, like data fragmentation, workflow ambiguity, governance gaps, and change management.

In plain terms: you can fund pilots forever and still not change the way work gets done.

What The Past Teaches, and What The Future Demands

If we connect McKinsey’s findings with these expert perspectives, a clear lesson emerges.

The past two years taught us adoption is easy to celebrate. Transformation is hard to execute.

The next two years will be defined by whether organizations can operationalize three things:

1) Workflow Redesign Becomes The Real Transformation Work

McKinsey reports that AI high performers are 2.8x more likely to report fundamental workflow redesign (55% vs 20% of others). That is the cleanest ‘do this’ signal in the dataset for CX and ops.

For service organizations, workflow redesign looks like:

  • Redefining what ‘resolution’ means in each journey
  • Designing clean escalation paths and stop conditions
  • Structuring knowledge and policy so AI can ground to truth
  • Rebuilding agent desktops so humans handle exceptions, not repetition

2) ‘Human In The Loop’ Shifts From Safety Net To Operating Model

McKinsey reports high performers are far more likely to have defined ‘human in the loop’ validation processes: 65% vs 23%. That points to a future where humans supervise, audit, and intervene by design.

This is also how you protect trust. Customers will accept automation if it feels accountable. They will reject it if it feels like a wall.

3) CX Leaders Start Owning AI Trust As A Metric

McKinsey’s risk findings, especially the prevalence of inaccuracy, suggest CX leaders need to treat AI quality like a service reliability discipline. That means routine evaluation, monitoring, and visible remediation loops.

McKinsey’s Exhibit 6 includes customer satisfaction as a reported improvement area. But in 2026, the customer satisfaction question gets more precise: are customers satisfied with the outcome, and do they trust the system that produced it?

Closing Reflection: The Next Competitive Edge Is Operational Trust

McKinsey’s State of AI in 2025 is not just a snapshot of adoption. It’s a mirror held up to execution. AI is no longer a futuristic concept, and adoption numbers are genuinely staggering. But value remains rare, and critics are right to call out the gap between automation and transformation.

For CX and operations leaders, 2026 is the year we decide what AI will represent to customers: another layer of fast talk, or a real change in how reliably we resolve issues.

The organizations that win will not be the ones with the most pilots. They will be the ones that redesign the work, validate with humans where it matters, and make trust measurable.

That is how AI becomes a customer experience advantage instead of a new source of customer frustration.
