A single outage can feel like a one-off. But a recent incident at Pocket OS where an AI agent took unexpected actions is being treated by CX and enterprise IT leaders as something closer to a preview.
In this CX Today interview, Nicole Willing sits down with Alex Gallego, CEO of Redpanda Data, to unpack what happened when an AI coding agent was given the kind of access many organizations still leave dangerously unchecked. The result was swift and damaging: a production database and its backups were deleted in seconds, triggering a 30-hour outage and a scramble to restore service.
For CX leaders, the headline is that agentic AI can execute mistakes at machine speed, inside systems that were never designed with autonomous actors in mind. Gallego argues the most important lesson is about fundamentals: separating production from development, tightening authentication and authorization, and shrinking the toolset an agent can touch. Without those controls, even well-intentioned automation becomes risky, especially in customer-facing environments where downtime and data exposure translate directly into churn and reputational damage.
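The control Gallego describes, shrinking the toolset an agent can touch, can be sketched as a simple per-environment allowlist around tool calls. This is an illustrative sketch, not anything from the interview; the tool names and environments are assumptions:

```python
# Minimal sketch of least-privilege tool gating for an AI agent.
# Tool and environment names are illustrative assumptions, not a real API.

ALLOWED_TOOLS = {
    "dev": {"read_logs", "run_tests", "query_staging_db"},
    "prod": {"read_logs"},  # production agents get read-only access
}

def call_tool(environment: str, tool: str, action):
    """Run a tool only if it is allowlisted for this environment."""
    if tool not in ALLOWED_TOOLS.get(environment, set()):
        raise PermissionError(f"agent may not call {tool!r} in {environment!r}")
    return action()

# A call that is fine in development is refused in production:
call_tool("dev", "run_tests", lambda: "ok")  # permitted
try:
    call_tool("prod", "query_staging_db", lambda: "data")
except PermissionError as err:
    print(err)  # the destructive path is blocked before it runs
```

The point of the sketch is the default: an agent in production can do nothing unless a human has explicitly added that capability to the allowlist.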
He also frames governance as an operational discipline, not a policy document. If teams cannot reconstruct what an agent did, why it did it, and which permissions enabled it, they cannot manage accountability. That becomes even more important as regulators and buyers demand clearer auditability across AI-assisted workflows.
“When you’re trying to give agents access to private data, you really have to think from first principles, which is: what permissions do these agents have access to and how am I going to govern access to this data?”
The conversation goes beyond a single outage to the broader reality that many enterprise systems still rely on root access patterns and legacy permissioning. Gallego explains why organizations may need proxy layers and stricter guardrails to enforce least-privilege access, and why monitoring and decommissioning agents should be treated like any other critical reliability practice.
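A proxy layer of the kind described here can be sketched as a thin wrapper that mediates every database statement an agent issues, blocking destructive operations in production and recording every attempt for later audit. All class, field, and statement names below are assumptions for illustration:

```python
import datetime

class AgentDBProxy:
    """Illustrative proxy between an agent and a database: enforces
    least privilege (no destructive SQL in production) and keeps an
    audit trail of every attempt. Names are assumptions, not a real API."""

    FORBIDDEN = ("drop", "delete", "truncate")

    def __init__(self, agent_id: str, environment: str):
        self.agent_id = agent_id
        self.environment = environment
        self.audit_log = []  # in practice, an append-only external store

    def execute(self, sql: str):
        allowed = not (
            self.environment == "prod"
            and sql.strip().lower().startswith(self.FORBIDDEN)
        )
        # Record the attempt whether or not it is permitted.
        self.audit_log.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": self.agent_id,
            "sql": sql,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"blocked destructive statement: {sql!r}")
        return f"executed: {sql}"  # stand-in for a real database call
```

Because blocked attempts are logged alongside permitted ones, the audit trail directly answers the governance questions raised earlier: what the agent did, what it tried to do, and which permissions enabled or stopped it.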
If your organization is exploring agentic AI for customer operations, contact centers, or back-office workflows that touch sensitive data, this interview offers a clear takeaway: speed and autonomy are only valuable when governance is designed in from day one.
Watch the full interview for practical guidance on governing agentic AI before it governs you.