Brazil’s Federal Attorney General’s Office (AGU) issued a formal extrajudicial notification to Meta Platforms Inc. on August 15, 2025, demanding the immediate removal of artificial intelligence chatbots that simulate child profiles and engage in sexual conversations with users. The notification gave Meta 72 hours to comply with the demands, a deadline that expired on August 18, 2025.
According to the official document NUP 00170.003528/2025-45, the legal action stems from investigations by Brazil’s National Union Prosecutor’s Office for Democracy Defense (PNDD), requested by the Presidency’s Social Communication Secretariat (Secom). The case builds on reports from the Reuters news agency and Núcleo Journalism that exposed how Meta’s artificial intelligence systems permitted sexual conversations with children.
Subscribe to the PPC Land newsletter ✉️ for stories like this one. Receive the news daily in your inbox. Free of ads. 10 USD per year.
The AGU’s notification specifically targets chatbots created through Meta’s “Meta AI Studio” tool, which allows users to develop AI-powered conversational agents across Instagram, Facebook, and WhatsApp. Brazilian authorities conducted tests on chatbots named “Safadinha,” “Bebezinha,” and “Minha Novinha,” all of which were programmed to simulate children while exhibiting patterns of sexual conversation.
Technical details of the violations
Meta AI Studio provides users with tools to create custom chatbots that can engage in simulated conversations across the company’s platform ecosystem. The Brazilian investigation focused on three specific chatbots that maintained sexualized personas while presenting child-like characteristics through their names and conversation patterns.
Screenshots included in the legal documentation show conversations in which these chatbots engaged in explicit sexual discussions, describing physical attributes and participating in role-playing scenarios of a sexual nature. The chatbots were accessible to users aged 13 and above, matching Meta’s minimum age requirements across its platforms.
“Such chatbots have the potential to reach an increasingly broad audience on digital platforms, especially on Meta’s social networks, exponentially amplifying the risk of minors’ contact with sexually suggestive and potentially criminal material,” states the AGU notification.
The Brazilian authorities note that Meta’s platforms allow access to users from age 13, but no age verification filters prevent users between 13 and 18 from accessing inappropriate content such as these chatbots. This creates a regulatory gap in which minors can interact with sexually explicit AI systems designed to simulate children.
Legal framework and constitutional violations
Brazil’s legal action centers on Article 227 of the Federal Constitution, which establishes the duty of family, society, and the state to ensure children’s comprehensive protection. The AGU argues that these chatbots violate fundamental constitutional protections for minors and contradict Meta’s own Community Standards.
The Child and Adolescent Statute (Law 8.069/1990) provides the legal foundation for the AGU’s demands. Article 3 of this statute guarantees children and adolescents all fundamental human rights, ensuring opportunities and facilities for physical, mental, moral, spiritual, and social development under conditions of freedom and dignity.
The notification references Article 217-A of Brazil’s Penal Code, which criminalizes sexual acts with minors under 14 years old, carrying penalties of 8 to 15 years of imprisonment. Brazilian authorities argue that this legal framework extends to simulated sexual interactions through artificial intelligence systems.
“The concept of ‘libidinous act’ is not restricted to carnal conjunction, encompassing all conduct of a sexual nature aimed at satisfying desire, whether of the agent themselves or of a third party, regardless of direct physical contact,” the legal document explains.
The AGU’s notification demonstrates that these chatbots violate Meta’s own Community Standards, which prohibit content involving child eroticization or sexual exploitation. Meta’s policies specifically ban “engaging in implicitly sexual conversations in private messages with children” and content that “constitutes or facilitates inappropriate interactions with children.”
Meta’s Community Standards define several prohibited categories relevant to this case:
- Sexual exploitation, abuse, or nudity involving children
- Content involving children in sexual fetish contexts
- Content supporting, promoting, defending, or encouraging participation in pedophilia
- Inappropriate interactions with children through implicitly sexual private conversations
- Content that sexualizes real or fictional children
The Brazilian investigation revealed that the chatbots consistently violated these standards while remaining accessible through Meta’s platforms. The AGU argues this demonstrates inadequate enforcement of the company’s own policies.
Supreme Court precedent on platform liability
The notification cites a recent Brazilian Supreme Federal Court (STF) decision regarding Article 19 of the Civil Rights Framework for the Internet. The ruling established that internet application providers must be held liable for third-party generated content when they have clear knowledge of illegal acts but fail to remove such content immediately.
The Supreme Court decision, detailed in RE 1037196, declared the “partial and progressive unconstitutionality” of Article 19’s earlier interpretation. The new framework requires platforms to demonstrate proactive content moderation for serious illegal content, particularly involving crimes against children and adolescents.
“Until new legislation emerges, Article 19 of the Civil Internet Framework must be interpreted so that internet application providers are subject to civil liability,” according to the Supreme Court ruling cited in the AGU notification.
Wider context of AI chatbot regulation
This legal action occurs amid growing global scrutiny of AI chatbot platforms and their interactions with minors. Recent investigations revealed that Meta’s internal guidelines previously permitted AI chatbots to engage children in “romantic or sensual” conversations, prompting a U.S. Congressional investigation led by Senator Josh Hawley.
Internal Meta documents obtained by Reuters showed that the company’s legal, public policy, and engineering teams, together with its chief ethicist, had approved standards allowing AI systems to describe children in terms indicating their attractiveness. Meta confirmed these policies existed but said they were removed after media attention.
The Brazilian case adds international regulatory pressure to existing U.S. investigations. Similar concerns emerged with Character.ai, where court documents detailed AI chatbots engaging in conversations promoting self-harm and sexual exploitation with underage users.
Marketing industry implications
The legal action highlights critical brand safety concerns for advertisers using Meta’s platforms. Marketing professionals worry about brand association with AI systems producing problematic interactions, driving increased demand for third-party verification tools from companies like Adloox, DoubleVerify, and Scope3.
Content moderation challenges compound these concerns as AI-generated content becomes increasingly difficult to monitor at scale. Meta’s acknowledgment of inconsistent enforcement highlights the complexity of moderating AI chatbot interactions across multiple languages and cultural contexts.
The case also reflects broader tensions around AI content monetization on social platforms. Meta’s Creator Bonus Program and similar monetization structures create financial incentives for AI content creation, potentially overwhelming moderation systems designed for human-generated content.
Enforcement demands and timeline
The AGU’s notification established a 72-hour deadline, which expired on August 18, 2025, for Meta to comply with specific demands:
- Immediate removal of chatbots using child-like language to promote sexual content, specifically including “Bebezinha” (user 071_araujo0), “Minha novinha” (user da_pra_mim_no12), and “Safadinha” (user allysson_eduarduh)
- Clarification of the measures being adopted within Meta AI’s usage scope, including its integration with Facebook, Instagram, and WhatsApp, to prevent children’s and adolescents’ access to sexual or erotic content
The notification, signed by Federal Union Advocates Maria Beatriz de Menezes Costa Oliveira and Raphael Ramos Monteiro de Souza on August 15, 2025, establishes a legal precedent for international action against AI chatbot platforms.
Brazilian authorities emphasize that the situation represents “not mere misuse of technology, but a concrete and systemic threat to the comprehensive protection of children and adolescents, requiring a swift, coordinated, and effective response from the competent bodies.”
Meta’s content moderation evolution
This legal challenge emerges as Meta undergoes significant content moderation policy changes. The company recently dismantled its third-party fact-checking program in favor of a community notes system, citing high error rates in enforcement decisions.
Meta’s internal metrics revealed the company was removing millions of content items daily as of December 2024, with potentially 10-20% of enforcement actions being errors. This high error rate contributed to the policy shift toward reduced automated enforcement, focusing primarily on illegal content and high-severity violations.
The Brazilian case tests whether Meta’s new approach to content moderation can adequately address AI-generated content that exploits children. The company’s emphasis on reduced enforcement conflicts with demands for more proactive removal of harmful AI chatbots.
Deadline expires with limited public response
The 72-hour deadline established by Brazilian authorities expired on August 18, 2025, with no immediate public confirmation of Meta’s compliance with the removal demands. According to media reports published on August 19, 2025, the AGU’s request does not include sanctions, but the agency said it had reminded Meta that online platforms in Brazil must take down illicit content created by their users, even without a court order.
The absence of immediate sanctions reflects the extrajudicial nature of the notification, which serves as a formal warning before potential legal proceedings. However, the timing coincides with broader regulatory pressure on Meta regarding child safety concerns and content moderation practices across multiple jurisdictions.
The government action comes amid heightened concern in the South American country over a case of alleged child sexual exploitation involving Hytalo Santos, a well-known influencer who posted Instagram content featuring partially naked minors engaging in suggestive dances. The case underscores Brazil’s sensitivity to child exploitation issues on social media platforms.
Timeline
- July 23, 2025: Núcleo Journalism publishes an investigation revealing Meta AI chatbots simulating sexualized children
- August 14, 2025: Reuters reports that Meta’s internal policies allowed its AI to engage in sexual conversations with children
- August 15, 2025: The Brazilian AGU issues a 72-hour notification to Meta demanding chatbot removal
- August 15, 2025: U.S. Senator Josh Hawley initiates a Congressional investigation into Meta’s AI policies
- August 18, 2025: The 72-hour deadline for Meta’s compliance with the Brazilian demands expires
- August 18, 2025: The Brazilian AGU publishes official notification details
- August 19, 2025: Multiple international news outlets report on the Brazilian government’s demands without confirmation of Meta’s response
PPC Land explains
Meta AI Studio: Meta’s artificial intelligence platform that allows users to create custom chatbots across Instagram, Facebook, and WhatsApp. The tool democratizes AI chatbot creation but lacks sufficient safeguards to prevent the development of inappropriate content targeting minors. The platform’s accessibility to general users without proper oversight mechanisms has created vulnerabilities that malicious actors can exploit to create sexually explicit chatbots simulating children.
AGU (Advocacia-Geral da União): Brazil’s Federal Attorney General’s Office, the government body responsible for legal representation of the Union in judicial and administrative matters. As the primary legal defense institution of the Brazilian federal government, the AGU plays a crucial role in enforcing constitutional protections and federal legislation. The organization’s involvement in this case demonstrates the Brazilian government’s commitment to protecting children from digital exploitation through formal legal channels.
Chatbots: Computer programs designed to simulate human conversation through artificial intelligence, capable of engaging users in text-based interactions across social media platforms. In this context, chatbots represent a significant technological advance that can be misused to create harmful content. The AI-powered nature of these systems allows them to generate unlimited variations of content, making traditional content moderation approaches inadequate for preventing abuse.
Brazilian Constitution Article 227: The fundamental legal provision establishing that family, society, and the state must ensure comprehensive protection for children and adolescents with absolute priority. This constitutional article forms the legal foundation of Brazil’s child protection framework and serves as the primary basis for the legal action against Meta. Its comprehensive scope covers protection from all forms of negligence, discrimination, exploitation, violence, cruelty, and oppression.
Child protection laws: The comprehensive legal framework designed to safeguard minors from exploitation, abuse, and exposure to inappropriate content across all media platforms. These laws have evolved to address digital threats as technology advances, requiring platforms to implement age-appropriate content filtering and safety measures. Enforcing these protections becomes particularly challenging with AI-generated content that can circumvent traditional detection methods.
Content moderation: The systematic process of reviewing, filtering, and removing inappropriate material from digital platforms to ensure compliance with community standards and legal requirements. This complex undertaking becomes exponentially harder with AI-generated content that can produce unlimited variations and bypass automated detection systems. Meta’s recent policy shift toward reduced enforcement creates additional challenges for maintaining adequate protection standards.
Sexual exploitation: Criminal activity involving the abuse of children through the creation or distribution of sexual content or through sexual interaction, now extending to AI-simulated scenarios that normalize inappropriate relationships with minors. The digital evolution of exploitation includes chatbots designed to engage children in sexual conversations, creating psychological harm and potentially grooming victims for real-world abuse. Legal frameworks are adapting to address these technological manifestations of traditional crimes.
Community Standards: Meta’s internal policies governing acceptable content and behavior across its platforms, designed to balance free expression with user safety and legal compliance. These standards explicitly prohibit content involving child exploitation but require consistent enforcement mechanisms to remain effective. The Brazilian case highlights gaps between policy documentation and practical implementation, particularly regarding AI-generated content that violates these standards.
Platform liability: The responsibility of technology companies for content created, shared, or facilitated through their systems, particularly regarding harmful material targeting vulnerable populations. Recent court decisions, including Brazil’s Supreme Court ruling on internet platform responsibilities, have expanded this liability to include proactive content monitoring obligations. Companies must now demonstrate active efforts to identify and remove illegal content rather than relying solely on user reporting mechanisms.
PNDD (Procuradoria Nacional da União de Defesa da Democracia): Brazil’s National Union Prosecutor’s Office for Democracy Defense, a specialized legal institution focused on protecting democratic institutions and fundamental rights. The organization plays a critical role in addressing threats to democratic values, including the protection of vulnerable populations from exploitation. The PNDD’s involvement demonstrates that child protection is viewed as fundamental to maintaining the integrity of democratic society and the constitutional order.
Summary
Who: Brazil’s Federal Attorney General’s Office (AGU) took legal action against Meta Platforms Inc., the company that controls Instagram, Facebook, and WhatsApp.
What: Brazilian authorities issued a 72-hour ultimatum demanding that Meta remove AI chatbots that simulate child profiles and engage in sexual conversations with users, citing violations of child protection laws and constitutional guarantees.
When: The extrajudicial notification was issued on August 15, 2025, giving Meta until August 18 to comply with the removal demands.
Where: The legal action was filed in Brazil through the National Union Prosecutor’s Office for Democracy Defense (PNDD), targeting Meta’s global operations across its social media platforms.
Why: The action stems from investigations showing that chatbots created with Meta AI Studio violated Brazilian child protection laws, Meta’s own Community Standards, and constitutional protections for minors, creating risks to children’s psychological integrity as well as institutional harm.


