- OpenAI has banned accounts that used ChatGPT for malicious purposes
- Disinformation and surveillance campaigns have been uncovered
- Threat actors are increasingly using AI for harmful ends
OpenAI confirmed it recently identified a set of accounts involved in malicious campaigns and has banned the users responsible.
The banned accounts involved in the "Peer Review" and "Sponsored Discontent" campaigns likely originated from China, OpenAI said, and "appear to have used, or attempted to use, models built by OpenAI and another US AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles," according to the company's report "Disrupting malicious uses of our models: an update, February 2025."
AI has fueled a rise in disinformation and is a useful tool for threat actors seeking to disrupt elections and undermine democracy in unstable or politically divided nations, and state-sponsored campaigns have used the technology to their advantage.
Surveillance and disinformation
The "Peer Review" campaign used ChatGPT to generate "detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services," OpenAI confirmed.
As part of this surveillance campaign, the threat actors used the model to "edit and debug code and generate promotional materials" for the suspected AI-powered social media listening tool, although OpenAI was unable to identify any social media posts resulting from the campaign.
ChatGPT accounts participating in the "Sponsored Discontent" campaign were used to generate comments in English and news articles in Spanish, consistent with "spamouflage" behavior, mostly deploying anti-American rhetoric, probably to stir up discontent in Latin America, namely in Peru, Mexico, and Ecuador.
This is not the first time Chinese state-sponsored actors have been caught using "spamouflage" tactics to spread disinformation. In late 2024, a Chinese influence campaign was found to be targeting American voters with thousands of AI-generated images and videos, mostly of low quality and containing false information.