Who is Liable When AI Agents Go Rogue?

We often think of AI agents as just the next evolution of the chatbot, but Surfshark Information Security Manager Miguel Fornes argues this is a dangerous misconception.


Published: February 26, 2026

Rhys Fisher

In this CX Today interview, Fornes explains that while chatbots were like “parrots in a cage,” harmlessly mimicking language, agentic AI is a “wise owl” that has broken out of it, capable of executing actions, accessing bank accounts, and wiping data. The shift, he notes, is from content to consequence.

Fornes warns that consumers and enterprises are currently acting as “unpaid QA analysts,” handing over credentials to systems they don’t fully understand. He compares the current adoption of agentic AI to handing your wallet to a stranger at an airport simply because they hold a sign claiming they can book cheap flights. He also highlights a critical legal gap: if an agent “hallucinates” a transaction or causes a security incident, liability currently falls on the human who clicked “run,” not the software provider.

The interview also covers the unique challenge of securing these systems. Unlike traditional software, where access control is binary (granted or denied), AI agents operate on language and interpretation, making them vulnerable to prompt injection and manipulation. Fornes advises leaders to treat agentic AI like a “talented but reckless genius”: valuable for ideas, but never to be trusted with the keys to the company’s most critical data without extreme skepticism.
