The FBI Warns Salesforce Customers of Increasing Cyber Attacks

AI-driven software exploits and voice phishing attacks are on the rise. Here’s what CX leaders need to know to protect customers

Published: September 16, 2025

Nicole Willing

The US Federal Bureau of Investigation (FBI) has issued a warning that cybercriminal groups have been targeting organizations’ Salesforce platforms.

In the alert, the FBI warned of increased data theft and extortion intrusions from two specific groups.

One of those groups is behind the recent Salesloft attack that opened a backdoor into Salesforce.

Hackers exploited compromised OAuth tokens for the Salesloft Drift application, an AI chatbot that can be integrated with Salesforce via API.

OAuth tokens are digital credentials that authorize secure API access to specific user data or services. The hackers used the stolen tokens and the third-party app integration to access victims’ Salesforce systems and extract data.
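
To make the mechanism concrete, here is a minimal Python sketch of how an OAuth bearer token authorizes a Salesforce REST API query. The instance URL, token value, and query below are hypothetical placeholders for illustration, not details from the incident.

```python
# Minimal sketch: an OAuth access token is the only credential needed to call
# the Salesforce REST API on behalf of the connected app's user.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # hypothetical Salesforce org
ACCESS_TOKEN = "<oauth-access-token>"               # placeholder token value


def run_soql_query(soql: str) -> dict:
    """Run a SOQL query; the bearer token alone proves the caller's identity."""
    response = requests.get(
        f"{INSTANCE_URL}/services/data/v60.0/query",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"q": soql},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


# Anyone holding a valid token can pull data within the scopes granted to the
# connected app, which is why a stolen token sidesteps passwords and MFA.
print(run_soql_query("SELECT Id, Name FROM Account LIMIT 5"))
```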

Salesforce quickly worked with Salesloft to close the security loophole. On August 20, 2025, Salesloft revoked all active access tokens, stopping the hackers from breaching victims’ Salesforce platforms, the FBI said.

Yet, these criminal groups aren’t only exploiting software integrations; they are also conducting phishing attacks on CRM users, like customer support reps.

Indeed, the FBI warns that malicious actors have also engaged in social engineering attacks, especially voice phishing (vishing), to gain access to organizations’ Salesforce accounts.

Here, the attackers have called organizations’ contact centers posing as IT support employees responding to enterprise-wide connectivity issues. They trick customer support employees into taking actions that give the attackers access to their devices and then use API queries to steal large volumes of customer data.

In some instances, like Google’s recent Salesforce breach, attackers directly requested employees’ login credentials and multifactor authentication (MFA) codes, which they used to authenticate and add a modified version of the Salesforce Data Loader application via OAuth tokens.

By tricking employees into unknowingly installing malicious apps, attackers can bypass traditional security defenses such as MFA, password resets, and login monitoring. And because the tokens are issued by Salesforce, the connected apps can appear to be trusted integrations, and hackers can register them without a legitimate corporate account, making them difficult to detect, the FBI said.
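
One way security teams can surface such rogue integrations is to review which connected apps currently hold OAuth tokens in the org. The sketch below is a hedged example: it assumes the standard OauthToken object and the listed fields are queryable via SOQL in the target org and that the caller has the required admin permissions; the org URL and token are placeholders.

```python
# Hedged sketch: list OAuth tokens issued to connected apps so unfamiliar or
# rarely used integrations stand out for review. Object and field names are
# assumed to be available in the target org.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # hypothetical Salesforce org
ACCESS_TOKEN = "<admin-oauth-access-token>"         # placeholder credential

SOQL = (
    "SELECT AppName, UserId, LastUsedDate, UseCount "
    "FROM OauthToken ORDER BY LastUsedDate DESC"
)

response = requests.get(
    f"{INSTANCE_URL}/services/data/v60.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": SOQL},
    timeout=30,
)
response.raise_for_status()

for record in response.json().get("records", []):
    # Flag connected apps that nobody on the team recognizes or still uses.
    print(record["AppName"], record["UserId"], record["LastUsedDate"], record["UseCount"])
```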

Some victims have received extortion emails days or months later that demand payment in cryptocurrency to avoid the stolen data being published, indicating that customers need to remain on guard for extended periods.

Why Salesforce Customers Should Be on Guard

With CRM platforms like Salesforce at the center of organizations’ customer data strategies, a single breach can have a knock-on effect across entire ecosystems of partners and customers.

Reflecting the significance of this threat, the American Hospital Association released a statement drawing attention to the FBI’s warning, as many hospitals and healthcare systems use Salesforce Health Cloud.

Many organizations layer multiple AI tools into their customer service stack to help increase productivity, including chatbot integration, sentiment analysis, and automated case routing. But this can increase the attack surface, as the Salesloft example underscored.

In addition to the growing risk of compromised AI-powered app integrations, vishing attacks will also become more common as hackers increasingly use AI-generated voice synthesis to convincingly impersonate customers.

Cybersecurity software firm CrowdStrike’s 2025 Global Threat Report cites a staggering 442 percent rise in vishing operations between the first and second half of 2024, driven by generative AI.

Phishing no longer involves just sending fake emails that are often clearly suspicious. It now extends to full conversations with a realistic-sounding “customer.” Using readily available AI tools, attackers can clone a customer’s voice from a few seconds of audio, use the deepfake voice to call customer support centers and bypass authentication, and trick agents into changing passwords, disabling 2FA, or transferring sensitive data.

All this makes AI-enabled phishing attacks difficult to detect.

Human contact center agents are a prime target, and they can become a weak link if they lack the proper training to recognize and respond to social engineering tactics. Agents under pressure to take a high volume of calls, resolve issues quickly, and deliver a positive customer experience may inadvertently trust attackers who use AI-generated audio to impersonate real customers with convincing voices or urgent-sounding requests.

What Can CX Leaders Do to Protect Customers?

Organizations need to take a multi-pronged approach to securing their systems, rather than relying on software vendors and basic security protocols.

The FBI recommends six steps to defend against attacks like those targeting Salesforce:

  1. Train contact center employees to spot phishing and vishing attempts. Provide regular training on AI-powered threats like voice deepfakes, and establish clear protocols for verifying suspicious interactions to help agents avoid being manipulated.
  2. Apply MFA across the organization. Use MFA not only for customer accounts but also to secure internal systems like CRM software and employee logins.
  3. Implement authentication, authorization, and accounting (AAA) systems. Follow the Principle of Least Privilege so that users can only access the data and tools they need for their roles, to ensure minimal exposure in case of breach.
  4. Restrict access based on IP and monitor API activity. Use IP whitelisting and audit all AI integrations and API connections regularly to detect unusual or malicious behavior (see the monitoring sketch after this list).
  5. Track network logs and browser sessions for anomalies. Deploy anomaly detection tools to flag suspicious agent or customer activity that could indicate unauthorized access to data.
  6. Review and secure all third-party software integrations. Regularly rotate API keys, credentials, and authentication tokens for external software connections to reduce exposure.
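
As a starting point for steps four and five, the hedged sketch below pulls recent Salesforce LoginHistory records and flags source IPs outside an approved allowlist. The allowlist range, org URL, and access token are illustrative placeholders; in practice, teams would typically feed these records into a SIEM rather than print them.

```python
# Hedged sketch: flag recent logins whose source IP falls outside an approved
# corporate allowlist. Network ranges and credentials are placeholders.
import ipaddress

import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # hypothetical Salesforce org
ACCESS_TOKEN = "<monitoring-oauth-access-token>"    # placeholder credential
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # example corporate range

SOQL = (
    "SELECT UserId, SourceIp, LoginTime, Status, Application "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:1"
)

response = requests.get(
    f"{INSTANCE_URL}/services/data/v60.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": SOQL},
    timeout=30,
)
response.raise_for_status()

for login in response.json().get("records", []):
    try:
        source = ipaddress.ip_address(login["SourceIp"])
    except ValueError:
        continue  # some rows record labels such as "Salesforce.com IP" instead of an address
    if not any(source in network for network in ALLOWED_NETWORKS):
        print("Review login:", login["UserId"], login["SourceIp"], login["LoginTime"])
```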

Organizations should also look for transparency from software vendors like Salesforce about their AI security protocols and disclosure policies for data breaches.

Vendors, too, have a critical role to play. As AI adoption grows, these platforms will need to support real-time monitoring to detect AI-generated attacks, regular security audits and updates, strict vetting of third-party integrations to prevent vulnerabilities, and built-in tools designed to identify impersonation attempts.

Indeed, while Salesforce is under siege, these attack methods are a threat to every CX tech provider.

 

 
