LinkedIn Faces Class-Action Lawsuits Over Alleged Browser Extension Scanning

Two class-action lawsuits claim LinkedIn covertly scans browser extensions, raising privacy, consent, and customer trust concerns


Published: April 8, 2026

Francesca Roche


LinkedIn has been hit with two class-action privacy lawsuits over allegations that it scans users’ browsers to determine which extensions are installed. 

The claims were first made public in March in an investigative report titled BrowserGate, published by Fairlinked eV, an association of commercial LinkedIn users, businesses, and third-party tool makers that says its members are affected by the site’s data practices. 

The lawsuits come after LinkedIn promoted its anti-fraud and anti-scraping measures, and they pose a threat to customer trust in online communities. 

In its report, a BrowserGate spokesperson argued that LinkedIn’s position in the job market changes the impact of the extension scanning it describes. 

“In many industries, having a LinkedIn profile is not optional. It is a prerequisite for being hired,” they explained. 

“This means LinkedIn does not just know that someone has a religious browser extension installed. It knows that person’s name, employer, job title, department, location, and professional network. And it knows the same about every one of their colleagues who also uses LinkedIn. 

“That is not a privacy breach. That is an intelligence operation.”

LinkedIn Faces California Class-Action Lawsuits

Both class-action filings were brought before the U.S. District Court for the Northern District of California on Monday. 

The California-based plaintiffs, Jeff Ganan and Nicholas Farrell, allege that LinkedIn installed client-side code to scan browser extensions without users’ knowledge or consent. 

The plaintiffs claim that LinkedIn’s anti-fraud and anti-scraping efforts were a pretext for covertly scanning users’ browsers and transmitting extension and device data. 

This data could contain political, religious, health, or employment-related indicators, depending on the type of extension detected. 

The plaintiffs argued that the code operates without proper disclosure or opt-in consent and may violate privacy and security laws, amounting to the unauthorized collection of personal and device data.  

If upheld in court, these claims could establish violations of the Invasion of Privacy Act and the Consumer Privacy Act, as well as breach of contract if LinkedIn’s practices are proven inconsistent with its terms of service. 

Hidden Extension Scanning and Data Collection

Drawing on research into the social platform dating back to 2017, the BrowserGate investigation was published in March 2026 and alleges that LinkedIn’s website had been running hidden JavaScript code that scanned visitors’ browsers for installed extensions and collected detailed device data without clear user consent or disclosure. 

The scan list allegedly covers more than 6,000 Chrome extensions, some relating to sensitive and personal matters, and the script is also said to collect device and browser telemetry that can uniquely identify a user’s session. 

The association claims that the script attempts to detect whether specific extension identifiers are present in the browser by probing known extension URLs or their web-accessible resources. 
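The general technique described here is well documented: a web page can sometimes infer that a Chrome extension is installed by requesting one of its web-accessible resources at a `chrome-extension://` URL and checking whether the request succeeds. The sketch below illustrates that mechanism only; the extension ID, resource path, and function names are invented placeholders, not anything from LinkedIn’s actual code or Fairlinked’s probe list.

```javascript
// Illustrative sketch of extension detection via web-accessible resources.
// The ID and resource below are hypothetical placeholders.
const PROBE_LIST = [
  { id: "aaaabbbbccccddddeeeeffffgggghhhh", resource: "icon.png" },
];

// A page can only load chrome-extension:// URLs that the extension has
// declared web-accessible, so a successful load implies it is installed.
function probeUrl(id, resource) {
  return `chrome-extension://${id}/${resource}`;
}

// fetchFn is injectable so the detection logic can be exercised outside
// a real browser; in a page it would default to the global fetch.
async function detectExtensions(probeList, fetchFn = fetch) {
  const found = [];
  for (const { id, resource } of probeList) {
    try {
      const res = await fetchFn(probeUrl(id, resource));
      if (res.ok) found.push(id); // resource loaded: extension present
    } catch {
      // request failed: extension absent or resource not web-accessible
    }
  }
  return found;
}
```

Run at scale against a list of thousands of known IDs, a probe loop like this yields a fingerprint of which extensions a visitor has installed, which is the behavior the report alleges.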

Because LinkedIn is a platform where profiles are linked to real identities, Fairlinked argues that this allows the site to link extension and device data directly to named users, their employers, roles, and professional networks. 

As a result, Fairlinked argues that the practices it documented could violate multiple legal frameworks, including the GDPR, for processing sensitive data without consent, and the Digital Markets Act (DMA), for allegedly scanning for and undermining third-party tools while presenting itself as compliant to regulators. 

LinkedIn Defends Extension Scanning as Security Measure

Speaking with BleepingComputer, a LinkedIn spokesperson rebutted the claims, explaining that the scanning is a security measure targeting extensions used for scraping and other rule violations. 

“To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members’ consent or otherwise violate LinkedIn’s Terms of Service,” they said. 

“We use this data to determine which extensions violate our terms, to inform and improve our technical defenses, and to understand why a member account might be fetching an inordinate amount of other members’ data, which at scale, impacts site stability. 

“We do not use this data to infer sensitive information about members.”

LinkedIn also claims that Fairlinked is operated by the developer of a browser extension called Teamfluence, who previously took LinkedIn to court in Germany after the platform restricted their account. 

“For additional context, in retaliation for this website owner’s account restriction, they attempted to obtain an injunction in Germany, alleging LinkedIn had violated various laws,” the spokesperson continued. 

“The court ruled against them and found their claims against LinkedIn had no merit, and in fact, this individual’s own data practices ran afoul of the law.” 

Trust and Transparency Under Pressure in Social Platforms

The controversy is the latest in a series of episodes spotlighting tensions between platform security practices and user privacy expectations. 

Recent rulings against companies like Meta highlight how misrepresentations about safety and user protection can breach consumer protection laws and damage public trust; a New Mexico jury found that Meta had violated consumer protection statutes by misleading users about platform safety. 

This underscores that regulatory systems are increasingly willing to hold platforms accountable for how they communicate about safety and privacy. 

Furthermore, a similar lawsuit involving Meta and Google later that same week shows that platform safety and transparency issues carry not only legal risk but also operational and reputational risk for companies building customer engagement on those platforms. 

As a result, enterprises that rely on external platforms must factor in how platform trust issues affect broader customer relationships and experiences. 

How Hidden Data Practices Impact CX

If proven, the allegations against LinkedIn carry CX implications not only for the platform itself but for other social media communities, with customer trust at stake. 

With more customers now expecting platforms to be clear about what data they collect, how it is used, and why it matters, invisible browser extension scanning and extensive device fingerprinting without user consent can erode customer trust. 

When users believe that their privacy has been violated and that their digital environment is being covertly monitored, it undermines the sense of safety that is central to CX. 

Whatever the true purpose of LinkedIn’s extension detection, users may interpret hidden scanning as invasive if it lacks clear notice and opt-in mechanisms, creating a gap between stated purpose and user perception that can erode customer confidence and loyalty over time. 

These allegations highlight how unexplained or undisclosed data practices can damage customer trust and confidence, particularly when users discover behaviors that feel opaque or intrusive. 

With other major platforms now facing scrutiny for perceived transparency and safety issues, this controversy illustrates the tension between platform security practices and evolving expectations around privacy and CX. 
