As agentic AI gains momentum in customer experience, vendors are rolling out new tools to help enterprises move from experimentation to real-world deployment of AI agents.
But despite strong interest, many organizations are still stuck in pilot mode.
Dialpad recently introduced new capabilities aimed at helping companies move agentic AI from proof of concept into production.
Building on its launch of AI agents last fall, the company is focusing on three practical challenges: identifying the right use cases, building and proving ROI before launch, and deploying governed AI agents across voice and digital channels without requiring code.
Why AI Agents Stall
The biggest barrier to AI adoption isn’t awareness; it’s trust. According to Shezan Kazi, Head of AI Transformation at Dialpad, “Organizations need to trust that AI agents will deliver on expectations.
“They also need confidence in their own ability to build, deploy, and manage those agents over time.”
That trust gap is sharpest in customer-facing interactions. Just as important, many teams doubt they have the internal expertise to build, test, and manage AI agents over time.
At the same time, executives face increasing pressure to automate and reduce costs. Nearly every company knows it should be doing something with agentic AI.
The problem is knowing where to start.
Three consistent questions emerge:
- Where are the highest-impact opportunities?
- How do we build these agents, and what skills are required?
- When is an AI agent truly ready for production?
Most organizations lack a clear, data-driven way to answer those questions. As a result, many early AI projects are based on assumptions rather than evidence.
According to Kazi, “everyone knows they want to do agentic AI, but they don’t know where best to apply it, and what workflows or use cases to focus on. Most customers can’t identify what types of AI agents to build based on their data, and it’s all guesswork right now.”
Covering the Entire Agentic Lifecycle
Dialpad’s approach is to help customers operationalize AI agents in a controlled, measurable way, addressing the full lifecycle of agentic AI in the contact center – from discovery to deployment to governance and ongoing optimization.
The starting point is discovery. Dialpad’s Skill Mining capability analyzes historical call transcripts in detail, evaluating conversations turn by turn rather than just categorizing them by high-level intent.
It clusters interaction types, assesses which ones are suitable for automation, and surfaces the opportunities most likely to drive measurable impact if they were handled by AI agents.
Instead of guessing which workflows to automate, teams get a clear, ranked view of where AI agents can deliver results, freeing human agents for more complex customer interactions.
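The ranking idea behind that discovery step can be sketched in a few lines. This is an illustrative toy, not Dialpad's implementation: keyword matching stands in for Skill Mining's proprietary turn-by-turn analysis, and all transcripts, intent names, and phrases below are hypothetical.

```python
from collections import Counter

# Toy transcripts; in practice these would be historical call transcripts.
TRANSCRIPTS = [
    "I need a return label for my order",
    "Can you send me a return label",
    "Where is my order, it has not arrived",
    "I want to reset my password",
    "Password reset please, I am locked out",
    "I need a return label, wrong size",
]

# Coarse intent lexicon (hypothetical phrases and intent names).
INTENTS = {
    "return label": "returns",
    "password": "password_reset",
    "where is my order": "order_status",
}

def classify(transcript: str) -> str:
    """Map a transcript to a coarse intent via keyword match."""
    text = transcript.lower()
    for phrase, intent in INTENTS.items():
        if phrase in text:
            return intent
    return "other"

def rank_automation_candidates(transcripts):
    """Rank intents by call volume: high-volume, repetitive intents are
    the strongest candidates for handing to an AI agent."""
    counts = Counter(classify(t) for t in transcripts)
    total = len(transcripts)
    return [(intent, n, round(100 * n / total, 1))
            for intent, n in counts.most_common()]

for intent, n, pct in rank_automation_candidates(TRANSCRIPTS):
    print(f"{intent}: {n} calls ({pct}% of volume)")
```

The output is exactly the kind of ranked view described above: the most frequent, most repetitive interaction type surfaces first.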
Once a use case is selected, Agent Studio provides a visual, no-code environment for building AI agents. Business users can configure workflows using natural language instructions without needing programming or conversational design expertise.
The platform includes built-in intelligence, proprietary AI models, and a connectors ecosystem that aligns agents with enterprise systems, security requirements, and compliance policies.
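A no-code agent definition of this kind is essentially structured configuration plus natural-language instructions. The sketch below is hypothetical: the field names, connector names, and validation check are illustrative and do not reflect Agent Studio's actual schema.

```python
# Illustrative no-code-style agent definition (hypothetical schema).
return_label_agent = {
    "name": "return_label_agent",
    "instructions": (
        "When a customer asks for a return label, confirm the order "
        "number, look up the order, and email a prepaid label."
    ),
    "channels": ["voice", "chat"],
    "connectors": ["order_api", "email_service"],  # hypothetical connector names
    "guardrails": {"escalate_after_failed_turns": 2},
}

REQUIRED_FIELDS = ("name", "instructions", "channels", "guardrails")

def missing_fields(agent: dict) -> list:
    """Basic build-time sanity check: list required fields that are
    absent or empty before an agent is allowed to deploy."""
    return [f for f in REQUIRED_FIELDS if not agent.get(f)]

print(missing_fields(return_label_agent))  # an empty list means the draft is complete
```

The appeal of this pattern is that a business user edits the instructions and connectors, while the platform supplies the models, security, and compliance machinery underneath.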
Before AI agents go live, Proving Ground automates testing. It generates hundreds of simulated scenarios, personas, and edge cases, then runs real-time evaluations to assess how the agent performs.
According to Kazi, “this replaces manual testing cycles and gives teams performance insights in minutes.”
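Simulation-based testing of this sort usually means crossing personas, scenarios, and edge cases into a test matrix and scoring each run. A minimal sketch, with an entirely hypothetical matrix and a deliberately crude pass criterion (a production evaluator would use learned judges and task-completion checks, not a substring match):

```python
import itertools

# Hypothetical test matrix.
PERSONAS = ["calm customer", "frustrated customer", "non-native speaker"]
SCENARIOS = ["return label", "order status", "password reset"]
EDGE_CASES = ["gives wrong order number first", "changes mind mid-call"]

def toy_agent(scenario: str) -> str:
    """Stand-in for the AI agent under test (illustrative only)."""
    return f"Happy to help with your {scenario} request."

def run_simulations():
    """Cross personas x scenarios x edge cases, run the agent on each
    combination, and record a pass/fail evaluation."""
    results = []
    for persona, scenario, edge in itertools.product(PERSONAS, SCENARIOS, EDGE_CASES):
        reply = toy_agent(scenario)
        results.append({
            "persona": persona,
            "scenario": scenario,
            "edge_case": edge,
            "passed": scenario in reply,  # crude evaluation criterion
        })
    return results

runs = run_simulations()
pass_rate = sum(r["passed"] for r in runs) / len(runs)
print(f"{len(runs)} simulated conversations, pass rate {pass_rate:.0%}")
```

Even this toy version shows why automation helps: three personas, three scenarios, and two edge cases already produce eighteen conversations to evaluate, far more than a team would script by hand for every release.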
In practical terms: Skill Mining identifies what to automate, Agent Studio builds it, and Proving Ground tests it before release. This is how customers operationalize agentic AI.
When Data Changes the Strategy
Early AI agent deployments show how often the data contradicts teams' assumptions about what to automate.
In one e-commerce deployment, Skill Mining found that 51.4% of interactions were candidates for automation. More specifically, 27% of calls involved generating return labels.
The system identified and mapped the required steps, allowing the company to quickly build an AI workflow that handles return label generation using its own data. Proving Ground then simulated customer scenarios to validate performance before launch.
In another case, an education provider offering language programs in various countries assumed product-related questions would dominate call volume.
Skill Mining analyzed the company's conversations and found that 37% of calls were about where students could do laundry after arriving in a new country.
That insight led to the creation of an AI agent focused on answering a question leadership had not considered a priority.
Similarly, a restaurant group initially targeted reservations as its primary automation use case.
However, Skill Mining found that there were roughly 25,000 monthly password reset requests following a new password rotation policy – significantly more than reservation calls.
Automating password resets using AI agents offered a clearer and faster return on investment.
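The ROI comparison behind that decision is simple volume arithmetic. In the back-of-the-envelope sketch below, the 25,000 monthly resets come from the example above; every other figure (handle time, labor cost per minute, automation rate, reservation volume) is hypothetical.

```python
def monthly_savings(calls: int, minutes_per_call: float,
                    cost_per_minute: float, automation_rate: float) -> float:
    """Estimated agent-labor cost removed each month by automating a call type."""
    return calls * minutes_per_call * cost_per_minute * automation_rate

# 25,000 monthly resets is from the example above; all other inputs are
# hypothetical placeholders.
resets = monthly_savings(25_000, minutes_per_call=4,
                         cost_per_minute=0.75, automation_rate=0.8)
reservations = monthly_savings(6_000, minutes_per_call=3,
                               cost_per_minute=0.75, automation_rate=0.6)
print(f"password resets: ${resets:,.0f}/mo, reservations: ${reservations:,.0f}/mo")
```

Under these placeholder assumptions the reset workflow dwarfs reservations, which is the shape of the conclusion the restaurant group reached: the higher-volume, more repetitive call type pays back first.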
Governance Built In
As AI agent creation becomes easier, governance becomes more critical. Dialpad embeds safeguards at both build time and runtime.
During agent development, COMPASS (Conversational Performance and Safety Supervisor) reviews each agent before release. It evaluates configuration logic, flags potential risks, and recommends corrections.
Think of it as a quality and safety check before deployment.
Once the agent is live, a Guardian model monitors conversations in real time. It can block interactions, enforce compliance guardrails, or escalate to a human agent when necessary. The goal is to reduce risk while maintaining performance.
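The runtime decision described above (allow, block, or escalate, evaluated per conversation turn) can be sketched as a simple policy function. This is an illustrative keyword filter only; a production guardrail like Dialpad's Guardian model is a learned system, and the topic and trigger lists here are hypothetical.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"

# Hypothetical policy lists for illustration.
BLOCKED_TOPICS = ("legal advice", "medical advice")
ESCALATION_TRIGGERS = ("speak to a human", "cancel my account")

def guard(turn: str) -> Action:
    """Decide, per conversation turn, whether to let the agent continue,
    block the response, or hand off to a human agent."""
    text = turn.lower()
    if any(t in text for t in BLOCKED_TOPICS):
        return Action.BLOCK
    if any(t in text for t in ESCALATION_TRIGGERS):
        return Action.ESCALATE
    return Action.ALLOW

print(guard("I want to speak to a human"))  # Action.ESCALATE
```

The ordering is the interesting design choice: compliance blocks are checked before escalation triggers, so a turn that is both risky and escalation-worthy is stopped rather than forwarded.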
Early Results
While many customers are still in deployment phases, early metrics are emerging. One organization is handling more than 90,000 calls per month with an AI agent resolution rate above 70%.
An e-commerce customer reports resolution rates exceeding 80% for order status and update inquiries.
Another operational benefit is reduced reliance on specialized AI talent. Non-technical CX leaders can configure and manage agents without hiring prompt engineers or conversational designers, lowering barriers to agentic AI adoption.
Moving from Experimentation to Architecture
As building AI agents gets easier, the harder problem shifts from creating them to scaling them responsibly inside a complex enterprise environment.
Kazi notes, “as organizations move forward with agentic AI, the focus is shifting from speed to sustainability.
“It’s becoming easier to build AI agents. The real question is how they fit into your enterprise architecture.
“Organizations should think about compliance by design, governance by design, and security by design. Speed matters, but sustainability matters more.”
Dialpad’s strategy centers on managing the full lifecycle – from identifying the right use cases to validating impact to enforcing real-time oversight.
For enterprises under pressure to show measurable ROI from AI investments, that end-to-end approach may determine whether agentic AI remains a pilot initiative or becomes part of the core customer experience infrastructure.