Most Organizations Lack Effective Risk Controls For AI

Survey reveals 67% of companies deploy AI without adequate governance structures


Published: September 24, 2025

Nicole Willing

Most companies are rolling out AI in their customer service operations without governance structures, potentially opening themselves up to security risks.

A survey conducted by analytics software provider CallMiner revealed that almost all (96 percent) of CX contact center leaders view implementation of AI, including generative and agentic AI, as a key strategy for their business.

According to the annual CallMiner CX Landscape Report, released in partnership with Vanson Bourne, that is up from 87 percent in 2024.

“More organizations are progressing beyond pilots and proofs of concept, instead choosing to embed AI more deeply into their CX ecosystems,” the report found.

But alarmingly, while around 80 percent of companies have at least partially implemented AI and 71 percent of organizations have a dedicated AI governance function, a staggering 67 percent agree that they are implementing AI without the governance structures, policies, and processes needed to manage the associated risk.

In the rush to adopt AI and be seen as keeping pace with innovation, organizations are moving forward with execution and adoption before fully establishing a clear strategic direction.

Only 43 percent of organizations that do have formalized governance teams are focused on defining AI strategy. But current frameworks may not be comprehensive enough to adequately address concerns about AI implementations and guide their responsible use, according to the report.

This could reflect the challenge of new AI departments or oversight teams keeping up with technology that is advancing at an unprecedented pace. It could also be a symptom of a lack of internal urgency or inconsistent regulatory guidance.

The report notes that robust governance is key to implementing AI responsibly and effectively:

“Whatever the cause, establishing an AI governance team or hiring a lead alone isn’t enough.

“Governance must be robust, consistently applied, and fully embedded into the broader CX strategy to manage AI’s risks and unlock its rewards.”

More AI = More Problems

As recent security breaches indicate, implementing AI carries the risk of attracting increasingly sophisticated cyberattacks and social engineering schemes.

Hackers are exploiting AI software integrations and chatbot vulnerabilities to infiltrate companies and steal customer data, and AI tools are making them more efficient than ever before.

Indeed, nearly half (49 percent) of the CX leaders surveyed expressed concern that AI will expose their company to security and compliance risks, up from 38 percent in 2024 and 45 percent in 2023.

In addition, 52 percent had misgivings about AI giving customers the wrong answers or spreading misinformation, up from 44 percent in 2024 and 43 percent in 2023.

The leaders surveyed work across the healthcare, financial services, technology, retail, and business process outsourcing industries, all of which handle sensitive customer data that could be stolen, exposed, or manipulated by malicious actors.

Around half of the companies are using third-party AI software, which helps them implement and scale solutions faster.

Around 85 percent of organizations that have taken an entirely third-party approach by purchasing solutions from a vendor have at least partially implemented AI, compared with 75 percent of organizations that have taken a hybrid approach, and 71 percent that have opted to develop systems in-house.

Relying on third-party vendors can offer organizations access to advanced security infrastructure, regular patching to fix vulnerabilities and compliance with industry standards, which internal teams may lack.

However, it also introduces security risks such as increased exposure to supply chain attacks, reduced control over data handling, and a lack of visibility into the vendor’s security practices. This requires clear governance frameworks and ongoing oversight.

With growing AI adoption, organizations must recognize that ethical and risk-conscious implementation is crucial.

“It is essential to build dynamic, comprehensive AI governance frameworks that align with broader business objectives, aimed at scaling AI initiatives, safeguarding customer trust and driving growth,” the report concludes.

