Meta has a problem with its chatbots and minors, so it wants to tighten its rules of use
Meta is modifying part of the rules that define the behavior of its artificial intelligence chatbots. The change comes after a Reuters investigation revealed possible inappropriate interactions with minors. The company confirmed that its systems are now trained not to talk with teenagers about self-harm, suicide or eating disorders, and it has also made adjustments to avoid romantic dialogues. As the company explained, these measures are temporary while permanent standards are developed.
Controversies over improper use and oversight failures
The new restrictions arrive after several revelations about the use of chatbots such as Meta AI in WhatsApp. Reuters documented cases in which the systems were allowed to start romantic conversations with minors, generate compromising images of celebrities depicted as minors, and spread misleading information. It is not the first time Meta has had to correct errors in its AI systems, although in this case the consequences could be even more serious.
Meta indicated that, in addition to limiting sensitive interactions, it will also restrict access to AI characters with sexualized content. Reuters detected bots that imitated celebrities such as Taylor Swift, Scarlett Johansson, Selena Gomez and Anne Hathaway. These not only passed themselves off as public figures but also produced suggestive images and claimed to be the real person. Some of these chatbots were even created by Meta employees, including a Taylor Swift-based bot that went so far as to invite a reporter to a fictional encounter.
Meta now faces political and legal pressure, with the United States Senate and the attorneys general of 44 states investigating these practices. Despite the recent adjustments, the company has not clarified what measures it will take against other problems detected, such as racist messages or the promotion of false treatments for serious diseases. Bear in mind that the measures are temporary, so the definitive answer on how Meta plans to control these behaviors is still pending.
