LinkedIn trains AI with user data – without changing the terms of use
The Microsoft subsidiary LinkedIn, which has over a billion users worldwide, has admitted to 404 Media that it uses user data to train its generative AI products. This is particularly delicate because the terms of use, which should actually document such a practice, do not currently reflect it.
“LinkedIn now uses the content of all users to train its AI tool – they just automatically signed everyone up,” said Rachel Tobac, CEO of the security company SocialProofSec. Anyone who does not want this has to opt out.
According to LinkedIn, the terms of service are to be updated “shortly”. This has not happened yet: members are still being shown the user agreement from February 2022, which contains no clause allowing the business network to use user data to train AI models.
That something is changing at LinkedIn became apparent on Wednesday, when, according to 404 Media, several users discovered a new setting that appears to indicate LinkedIn is using user data to improve its generative AI.
LinkedIn writers are already feeding the AI
LinkedIn has already introduced several AI-supported features for paying premium users, including automated drafting of posts and direct messages and an AI assistant that helps with writing. The AI is also meant to serve as a personal career advisor, acting as a coach within online courses: it processes and personalizes what is apparently “practical advice from industry-leading entrepreneurs and coaches”.
Not all AI features are offered globally; AI-powered insights into job postings, for example, are currently only available to paying users in the US.
Certain very active LinkedIn users are also repeatedly invited to contribute to longer collaborative articles and are lured with the promise of badges for frequent contributions. These articles (here is an example), each of which covers a specific subject area in detail, are credited as being provided “by AI and the LinkedIn community”.
Training on user data is problematic
Meta and X (formerly Twitter) are known to use user data to train their AI models. Both allow users to opt out of having their own data used to train the Llama and Grok models.
For Meta, the matter is so sensitive that Mark Zuckerberg’s company is not launching certain AI services, such as its chatbot Meta AI, in the EU for fear of violating the GDPR. It is known that users’ public Instagram and Facebook images have been used to enable image generation in the Llama models.