- WeTransfer users were outraged when updated terms of service seemed to imply that their data would be used to train AI models.
- The company moved quickly to reassure users that it does not use uploaded content for AI training.
- WeTransfer has rewritten the clause in clearer language.
File-sharing platform WeTransfer spent a frantic day reassuring users that it does not intend to use uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used to build or improve machine-learning tools.
The offending language, buried in the ToS, declared that using WeTransfer granted the company the right to use data “for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy.”
The machine-learning language, and the sweeping nature of the text overall, seemed to suggest that WeTransfer could do whatever it wanted with your data, with no specific guarantees or clarifying qualifiers to allay suspicion.
Understandably, many WeTransfer users, who include plenty of creative professionals, were upset by what the terms seemed to imply. Many began posting their plans to switch from WeTransfer to rival services. Others warned that people should encrypt files or fall back on old-fashioned physical delivery methods.
“It’s time to stop using @WeTransfer, who as of August 8 have decided they can use everything you transfer to power AI.” pic.twitter.com/Syr1jnmemx — July 15, 2025
WeTransfer noticed the growing fury over the language and rushed to put out the fire. The company rewrote the ToS section and shared a blog post explaining the confusion, promising repeatedly that no one’s content would be used without their permission, particularly for AI models.
“From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We’ve since updated the terms to make them easier to understand,” WeTransfer wrote in the blog post. “We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”
While still granting a standard license for improving WeTransfer, the new text omits any reference to machine learning, focusing instead on the familiar scope needed to run and improve the platform.
Privacy clarified
If this feels a bit like déjà vu, that’s because something very similar happened about a year and a half ago with another file-transfer platform, Dropbox. A change to the company’s fine print implied that Dropbox was taking user-uploaded content to train AI models. The public outcry led Dropbox to apologize for the confusion and fix the offending clause.
That this has played out so similarly is interesting not because of the clumsy legal language software companies use, but because of what it implies: an instinctive distrust of those companies to protect your information. Assuming the worst is the default response to uncertainty, and companies have to make an extra effort to ease those tensions.
Creative professionals are especially sensitive to any appearance of data misuse. At a time when tools like DALL-E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. With artists filing lawsuits and mounting boycotts over how their creations are used, not to mention broader suspicion of corporate data practices, the kind of reassurance WeTransfer offered will likely be something tech companies want to have in place early, lest they face the misplaced anger of their customers.