What does accountability look like when customer support shifts from generative AI copilots to autonomous, agentic systems that can act on behalf of an enterprise? In this CX Today interview, host Nicole Willing speaks with Vishal Sharma, Chief Technology Officer at SearchUnify, about the operational reality behind the hype and the guardrails enterprises need before they let AI agents interact directly with customers.
Sharma describes a three-stage progression that many support organizations are following: find, assist, and act. SearchUnify’s journey started with enterprise search, giving human support teams one place to find relevant internal information while handling tickets. From there, the focus moved to assistance, using AI to help draft responses, summarize cases, and support “swarming” by identifying the right internal experts. Now, with the “act” phase gaining momentum, Sharma argues that most enterprises are not ready to make a fully autonomous leap, even if the industry conversation is moving in that direction.
Sharma points to two readiness gaps. The first is process. Autonomous systems need workflows designed for AI execution from the start; bolting AI onto human-first processes is not enough. Sharma warns that AI will not magically smooth over broken operating models. Instead, it tends to magnify what is already there.
“For AI, at the end of the day, it is going to amplify your current system. So if you’ve got crap in place, it is going to make it worse. If you’ve got a well-designed system for AI to take advantage of, it’s going to amplify it and make it great.”
The second gap is tooling and architecture. Sharma notes that the support stack is increasingly being “consumed” by agents, not just people. That shift forces changes in knowledge bases, support consoles, and ticket workflows, including restructuring content so AI can reliably retrieve and use it. Sharma also suggests support environments should become more API-driven to enable fast, multi-step execution.
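To make the idea concrete, here is a minimal sketch of what an API-driven, multi-step support flow could look like. Every function and data structure here is an illustrative stand-in, not a SearchUnify or vendor API: the point is that each capability becomes a callable step an agent can chain, rather than a screen built for a human.

```python
from typing import Optional

# Hypothetical knowledge base, restructured into retrievable units
# with metadata, as Sharma suggests content may need to be.
KB = {
    "KB-7": {"title": "Fixing SSO login failures",
             "body": "Clear the identity provider cache and retry."},
}

def get_ticket(ticket_id: str) -> dict:
    # Stand-in for a ticketing-system API call.
    return {"id": ticket_id, "subject": "SSO login failure"}

def search_kb(query: str) -> Optional[dict]:
    # Stand-in for a retrieval API over the restructured content.
    for doc in KB.values():
        if any(w in doc["title"].lower() for w in query.lower().split()):
            return doc
    return None

def draft_reply(ticket: dict, article: Optional[dict]) -> str:
    # Stand-in for a drafting step; escalates when retrieval finds nothing.
    if article is None:
        return f"Escalating ticket {ticket['id']} to a human engineer."
    return f"Re: {ticket['subject']} - {article['body']}"

def resolve(ticket_id: str) -> str:
    # Multi-step execution: each step is one API call an agent can chain
    # quickly, with no human console in the loop.
    ticket = get_ticket(ticket_id)
    return draft_reply(ticket, search_kb(ticket["subject"]))
```

The design choice the sketch illustrates is the one Sharma describes: when every step is an endpoint rather than a UI, an agent can execute the whole sequence in one pass.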
When an enterprise decides an agent is appropriate, Sharma emphasizes layered guardrails, including grounding answers with retrieval, providing citations, defining when the system should say “I don’t know,” and ensuring a clear handoff to assisted support. Sharma also highlights security and privacy controls, including checks for personally identifiable information (PII), semantic protections like toxicity controls, and experience safeguards that prevent users from getting stuck in loops.
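As a rough illustration of how those layers might stack, the sketch below composes the guardrails Sharma names into one answer path. All names, patterns, and thresholds are hypothetical; a production system would use real PII detection and semantic retrieval rather than these toy substitutes.

```python
import re
from dataclasses import dataclass, field

# Toy PII patterns (illustrative only; real systems use dedicated detectors).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like
    re.compile(r"\b\d{13,16}\b"),           # card-number-like
]

@dataclass
class AgentReply:
    text: str
    citations: list = field(default_factory=list)
    handoff: bool = False

def redact_pii(text: str) -> str:
    # Security/privacy layer: scrub PII before the query goes anywhere.
    for pat in PII_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

def answer(query: str, kb: dict, history: list, max_repeats: int = 2) -> AgentReply:
    # Experience safeguard: if the same question keeps coming back,
    # hand off to assisted support instead of looping.
    if history.count(query) >= max_repeats:
        return AgentReply("Connecting you with a support engineer.", handoff=True)
    history.append(query)

    query = redact_pii(query)

    # Grounding layer: answer only from retrieved articles, with citations.
    # (Toy keyword match standing in for semantic retrieval.)
    hits = [(doc_id, body) for doc_id, body in kb.items()
            if any(w in body.lower() for w in query.lower().split())]
    if not hits:
        # Defined "I don't know" behavior instead of a guess.
        return AgentReply("I don't know. Would you like me to open a ticket?")
    doc_id, body = hits[0]
    return AgentReply(body, citations=[doc_id])
```

The ordering matters: loop detection and PII scrubbing run before retrieval, and the "I don't know" branch sits ahead of any generation, so each layer fails closed.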
As multi-agent orchestration becomes more common, Sharma says clarity becomes even more critical: each agent needs a well-defined job and outcome so the system does not become overly complex and unreliable.
Watch the full interview to hear Sharma break down the guardrails, workflow changes, and real-world readiness steps needed for agentic AI in customer support.