The Risks of Using Generative AI In Your Business

From fabricated information to cybersecurity concerns, enterprises have lots to mull over when implementing generative AI

Published: April 21, 2023

Charlie Mitchell

When Italy placed a temporary ban on ChatGPT earlier this year, it cited concerns over the protection of personal data.

Did this stop people from using it? No. Indeed, sales of VPNs rose by 400 percent after the announcement.

Nevertheless, it did underscore the rising concerns over the security and risk of large language models (LLMs).

In the enterprise, these expand far beyond personal data protection.

Indeed, Avivah Litan, VP Analyst at Gartner, noted five significant risks that generative AI poses for businesses today.

1. Fabricated Information

Litan suggests that “hallucinations and fabrications” – false information presented as fact – are prevalent problems with generative AI. She stated:

Training data can lead to biased, off-base or wrong responses, but these can be difficult to spot, particularly as solutions are increasingly believable and relied upon.

For this reason, keeping a human in the loop is critical. Salesforce recently doubled down on this point when discussing the possible contact center applications of Einstein GPT.

2. Deepfakes

Much of the fear circling generative AI lies in how easily bad actors can use it to create content with malicious intent. This is not only a political problem but also an enterprise issue.

After all, people may use the technology to generate fake news and images that attack businesses – and their personnel.

A prominent example is the AI-generated image of the Pope wearing a white puffer jacket, which did the rounds on social media.

Referring to this, Litan noted: “It provided a glimpse into a future where deepfakes create significant reputational, counterfeit, fraud, and political risks for individuals, organizations, and governments.”

3. Data Privacy

Employees entering private data into LLMs is already a significant issue. In just one of many examples, three Samsung employees recently fed “sensitive database source code and recorded meetings” into ChatGPT.

Discussing the dangers of such actions, Litan stated:

These applications may indefinitely store information captured through user inputs and even use the information to train other models — further compromising confidentiality.

Moreover, a security breach – through a data extraction attack – could put that data in the hands of malicious actors.

As such, it’s no wonder Gartner recently warned against the dangers of contact center agents using ChatGPT for self-automation.

4. Copyright Issues

LLMs are trained on vast amounts of internet data, much of which is copyrighted material.

Therefore, some outputs may violate intellectual property (IP) protections and copyright laws.

After making this point, Litan delved deeper. She added:

Without source references or transparency into how outputs are generated, the only way to mitigate this risk is for users to scrutinize outputs to ensure they don’t infringe on copyright or IP rights.

Again, this underlines the need for human supervision to continually monitor and review content generated by applications such as ChatGPT.

5. Cybersecurity Concerns

Much of the press swirling around the dangers of LLMs centers on the possibility of enabling more sophisticated social engineering attacks.

Indeed, increased phishing threats are already a problem. Wired recently warned: “Brace Yourself for a Tidal Wave of ChatGPT Email Scams.”

Yet Litan cautioned that malicious code generation also presents a real risk. She said:

Vendors who offer generative AI foundation models assure customers they train their models to reject malicious cybersecurity requests; however, they don’t provide users with the tools to effectively audit all the security controls in place.

Such analysis will worry many enterprises, which often place total trust in their vendors to deliver holistic security.

How Can Businesses Combat These Threats?

Vendors often customize generative AI solutions. As such, giving generic advice is tricky. Nonetheless, let’s consider an out-of-the-box model that leverages LLM services as-is.

In such scenarios, Litan recommends reviewing all outputs from the model in a bid to detect biased or incorrect information.

Yet an overarching governance and compliance framework should guide such a strategy.

According to Litan, such a framework should include transparent policies that prohibit staff from asking questions that may expose business or personal data.

Sharing more guidance, she stated:

Organizations should monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations.

For instance, firewalls can block user access, security information and event management (SIEM) systems can monitor event logs for violations, and secure web gateways can screen disallowed API calls.
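
To make that concrete, here is a minimal Python sketch of the kind of screening Litan describes. The log format, the endpoint domains, and the function name are illustrative assumptions, and a real deployment would lean on existing SIEM and secure web gateway tooling rather than a standalone script.

```python
# Minimal sketch, assuming plain-text proxy logs with one "user url" pair per
# line and the (assumed) public LLM endpoint domains below.
LLM_API_DOMAINS = {"api.openai.com", "chat.openai.com"}

def flag_llm_calls(log_path: str) -> list[tuple[str, str]]:
    """Return (user, url) pairs from the proxy log that hit known LLM endpoints."""
    violations = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 2:
                continue  # skip malformed lines
            user, url = parts[0], parts[1]
            if any(domain in url for domain in LLM_API_DOMAINS):
                violations.append((user, url))
    return violations

if __name__ == "__main__":
    # Point this at whatever log your gateway or firewall already produces.
    for user, url in flag_llm_calls("proxy.log"):
        print(f"Possible policy violation: {user} -> {url}")
```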

There are also additional steps companies can take to protect the sensitive data used to engineer prompts on third-party infrastructure.

“Create and store engineered prompts as immutable assets,” advises Litan.

“These assets can represent vetted engineered prompts that can be safely used. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared, or sold.”
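
To illustrate one way of doing that, the hedged Python sketch below stores each vetted prompt under a content hash and refuses to overwrite it. The file layout, helper names, and metadata fields are assumptions for demonstration, not a prescription from Litan or Gartner.

```python
# Minimal sketch of prompts as immutable, content-addressed assets.
import hashlib
import json
from pathlib import Path

PROMPT_STORE = Path("prompt_assets")  # assumed location for vetted prompt files

def store_prompt(prompt_text: str, metadata: dict) -> str:
    """Save a vetted prompt write-once and return its content-hash ID."""
    PROMPT_STORE.mkdir(exist_ok=True)
    prompt_id = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:16]
    asset_path = PROMPT_STORE / f"{prompt_id}.json"
    if not asset_path.exists():  # immutability: an existing asset is never overwritten
        asset_path.write_text(
            json.dumps({"id": prompt_id, "prompt": prompt_text, "meta": metadata}, indent=2),
            encoding="utf-8",
        )
    return prompt_id

def load_prompt(prompt_id: str) -> str:
    """Retrieve a vetted prompt by its ID for reuse or sharing."""
    data = json.loads((PROMPT_STORE / f"{prompt_id}.json").read_text(encoding="utf-8"))
    return data["prompt"]

if __name__ == "__main__":
    pid = store_prompt(
        "Summarise the attached complaint without reproducing customer names.",
        {"owner": "cx-team", "reviewed": True},
    )
    print(pid, "->", load_prompt(pid))
```

Because each asset is keyed by a hash of its content, any edit produces a new ID, so teams always know exactly which vetted prompt they are reusing.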

Such advice is golden as businesses strive to leverage the benefits of LLMs quickly and securely to gain a competitive advantage.

Discover how they are already doing so in our article: OpenAI Has Released ChatGPT-4. Here Is How Brands Are Already Using It