- Meta will soon start training its AI models on EU user data
- Meta AI will be trained on users' interactions with the chatbot and on public content posted on Meta's social platforms
- The Big Tech giant is resuming its AI training plan after pausing the rollout amid concerns from EU data regulators
Meta has resumed its plan to train its AI models on EU user data, the company announced on Monday, April 14, 2025.
All posts and comments shared by adults on Meta's social platforms will soon be used to train Meta AI, along with users' direct interactions with the chatbot.
This comes after the Big Tech giant launched Meta AI in the EU in March, almost a year after the company paused the rollout amid growing concerns from EU data regulators.
“We believe we have a responsibility to build AI that's not just available to Europeans, but is actually built for them. That's why it's so important for our generative AI models to be trained on a variety of data so they can understand the incredible nuances and complexities that make up European communities,” Meta wrote in the official announcement.
This kind of training, the company notes, is unique neither to Meta nor to Europe. Meta already collects and processes the same information in every other region where its services are available.
As mentioned above, Meta AI will be trained on the posts and interaction data of adult users. Public data from the accounts of EU users under the age of 18 will not be used for training purposes.
Meta also promises that private messages people share on Messenger and WhatsApp will not be used for AI training purposes.
Starting this week, all EU users will begin receiving notifications about the new AI training terms, either in-app or by email.
These notifications will include a link to a form where people can object to their data being used to train Meta AI.
“We have made this objection form easy to find, read, and use, and we'll honor all objection forms we have already received, as well as newly submitted ones,” the company explains.
It is crucial to understand that once your data is fed into an LLM's training set, you effectively lose control over it, because these systems make it very difficult (if not impossible) to exercise the GDPR right to be forgotten.
This is why privacy experts like Proton, the provider behind some of the best VPN and encrypted messaging applications, urge people in Europe who are concerned about their privacy to opt out of Meta AI training.
“We recommend that you fill out this form when it's sent to you to protect your privacy. It's hard to predict what this data could be used for in the future – it's better to be safe than sorry,” Proton wrote in a LinkedIn post.
Meta's announcement comes just as Irish data regulators have opened an investigation into X's Grok AI. Specifically, the inquiry aims to determine whether the Elon Musk-owned platform uses publicly accessible X posts to train its generative AI models in accordance with GDPR rules.