- A leaked Meta document has revealed that the company’s chatbot guidelines once allowed inappropriate responses
- Meta has confirmed the authenticity of the document and has since deleted some of the most disturbing sections
- Amid calls for investigations, the question remains of how successfully AI can be moderated
Meta’s internal standards for its AI chatbots were supposed to stay internal, and after they made their way to PK Press Club, it’s easy to understand why the technology giant would not want the world to see them. Meta has grappled with the complexities of AI ethics, online safety for children, and content standards, and arrived at what few people would call a successful roadmap for AI chatbot rules.
The most disturbing details shared by PK Press Club concern how the chatbot may speak to children. As PK Press Club reported, the document states that it is “acceptable [for the AI] to engage a child in conversations that are romantic or sensual” and to “describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’).” While the document prohibits explicitly sexual discussion, the intimate, romantic level of conversation with children it reportedly deemed acceptable is still scandalous.
And that’s not the only example likely to disturb people. Meta AI’s rules, the report notes, allowed the chatbot to compose explicitly racist content if the prompt was worded the right way, and to provide false or even harmful health information as long as some kind of disclaimer was included.
In one of the most surreal examples, the guidelines instructed the AI to reject requests for inappropriate image generation in most cases, but in some, to apparently deflect them with a “funny” substitution. For instance, the document reportedly stated that a prompt to generate an image of “Taylor Swift topless, covering her breasts with her hands” could be answered by generating an image of Swift “holding an enormous fish.” The document reportedly included the unacceptable and “acceptable” versions side by side, essentially training the bot to sidestep inappropriate prompts with visual sleight of hand. Meta declined to comment on the example.
Meta confirmed the document’s authenticity and said it is now revising the problematic portions. The company deleted the section on interactions with children after PK Press Club reached out, calling those rules “erroneous and inconsistent” with company policy. But as of now, PK Press Club says, the document still permits racial slurs if they are disguised as hypotheticals, as well as misinformation framed as fiction.
No time for safety and ethics
It’s a disturbing revelation, one that has already sparked public outrage, legislative scrutiny, and urgent promises from Meta. But it also shows that as AI proliferates, the rush to move fast with the technology leaves rules and regulations scrambling to catch up, whether they are written internally or by lawmakers and regulators.
For most people, the story raises fundamental questions about AI safety. While it might be ideal for minors not to interact with general-purpose chatbots unsupervised, that’s very unlikely, judging by the number of children and teenagers who admit to using tools like ChatGPT for schoolwork. Avoiding Meta AI is especially difficult because the company has embedded the chatbot in Facebook, WhatsApp, Messenger, and Instagram. Users can interact with AI characters that are often presented as fun and friendly, and Meta has marketed these tools as playful and even educational. But the leaked guidelines suggest the backend isn’t always aligned with that wholesome image.
Members of Congress have already called for hearings and bills to address the situation, but the fact is that few legal requirements currently exist for moderating chatbot content, for children or otherwise. Talk of AI safety has yet to produce any specific national enforcement system. Many AI companies have made much of their efforts to build safe and ethical products, but if Meta’s rulebook reflects what other companies have in place, there is still a lot of work to do, and many open questions about the kinds of conversations these chatbots have already had, particularly with children.
AI models may be getting ever better at imitating human thought, but they are really just a collection of choices made by human programmers, some deliberate and some inadvertent. The fact that these rules were apparently codified at Meta doesn’t mean similar examples exist at other companies, but it’s not something to rule out. And if these are the choices being made behind the scenes at one of the world’s most powerful technology companies, what else is being allowed elsewhere?
AI chatbots are only as trustworthy as the invisible rules that guide them, and while it would be naive to fully trust any company’s claims without proof, Meta’s rulebook suggests users should take those claims with several extra grains of salt.