
LinkedIn's AI Data Use Raises Privacy Concerns Amid Updated Terms

Illustration: the LinkedIn logo on the left, connected by digital lines to icons representing AI models, data, a padlock, and user profiles, on a blue-and-grey tech-themed background evoking concerns over data privacy and AI use.

Image Source: ChatGPT-4o

LinkedIn has come under scrutiny for potentially using user data to train AI models without first updating its terms of service. While the platform has since revised its terms, the delayed update has raised significant concerns about data privacy and user consent.

U.S. Users See an Opt-Out Option, But EU Users Do Not

Users in the U.S. can find an opt-out toggle in the “Data Privacy” section of their settings, which discloses that LinkedIn may use personal data to train “content creation AI models.” This option is not available to users in the EU, EEA, and Switzerland, however, likely due to the stricter data privacy regulations in those regions.

Although the toggle has been available for some time, a report by 404 Media highlighted that LinkedIn had not updated its privacy policy to reflect this data use until recently. Such updates are typically made before a new data use is introduced, giving users a chance to adjust their settings or leave the platform if they object. In this case, LinkedIn appears not to have followed that practice.

What AI Models Are Being Trained?

According to LinkedIn, the company is training its own models for features like writing suggestions and post recommendations. Additionally, the platform noted that third-party models, such as those from Microsoft, could also be trained using LinkedIn data.

LinkedIn explained in a Q&A session, “When you engage with our platform, we collect and use (or process) data about your use of the platform, including personal data ... This could include your use of generative AI features, your posts and articles, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to our teams. We use this data, consistent with our privacy policy, to improve or develop the LinkedIn services.”

Data Privacy Measures and Opting Out

LinkedIn stated that it employs “privacy enhancing techniques” such as redacting and removing sensitive information to protect user data used in generative AI training. Nonetheless, users who wish to prevent their data from being used in this way can opt out.

To do so, open the “Data Privacy” section of LinkedIn’s settings menu, select “Data for Generative AI improvement,” and switch the toggle off. LinkedIn also provides a more comprehensive opt-out form, but it warns that opting out won’t affect data already used in training.
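LinkedIn has not published details of how its redaction works. As a rough, generic illustration of what automated redaction of sensitive information can look like, the minimal Python sketch below masks two common identifier patterns before text is reused; the regexes and placeholder tokens are assumptions for this sketch, not LinkedIn’s actual pipeline.

```python
import re

# Illustrative PII patterns only; production systems typically rely on
# far more robust detection (named-entity recognition, validated
# matchers), not just regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched span with a bracketed placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-9999."))
# Output: Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```

The point of the sketch is simply that redaction strips identifying spans while leaving the rest of the text usable for training, which is what LinkedIn’s stated “privacy enhancing techniques” are meant to achieve.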

Calls for Regulatory Action

The Open Rights Group (ORG) has called for an investigation by the U.K.’s Information Commissioner’s Office (ICO) into LinkedIn and other platforms using user data for AI training without explicit consent. Mariano delli Santi, ORG’s legal and policy officer, stated, “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.”

Data Protection Authorities Respond

Ireland’s Data Protection Commission (DPC) confirmed that LinkedIn had informed it of updates to the platform’s global privacy policy, including an opt-out feature for users who don’t want their data used to train content-generating AI models. LinkedIn also clarified that it is not currently using data from EU or EEA users for these purposes.

Surge in Demand for User Data in AI

As generative AI models require vast amounts of data, more platforms are repurposing user-generated content. Companies like Tumblr, Reddit, and Stack Overflow have even begun monetizing this data by licensing it to AI developers. Such moves have led to user backlash, with some platforms facing protests and users attempting to delete their contributions in response.

For instance, when Stack Overflow announced its data licensing plans, several users deleted their posts in protest, only to have them restored and their accounts suspended.

Implications for Privacy and Trust

This situation underscores the delicate balance between AI advancement and user privacy. As platforms like LinkedIn draw on user-generated content for AI training, transparency and consent become increasingly critical. Failing to communicate changes to data-use policies before they take effect can erode user trust and set troubling precedents for how companies handle personal information, underscoring the need for robust regulatory oversight as data becomes an increasingly valuable asset in the digital age.