UNICEF Urges Global Action Against AI-Generated Child Abuse Material After Grok Scandal
- Musk Exposed

UNICEF is urging governments around the world to take decisive action against AI-generated child sexual abuse material (CSAM) after Elon Musk's Grok AI was used to create millions of sexualized deepfakes of minors. The organization points to a staggering figure: more than 1.2 million children are believed to have had their images manipulated into explicit deepfakes in the past year alone, underscoring the urgent and growing threat to children's safety in the digital age.

The figure comes from "Disrupting Harm Phase 2," a collaborative research project by UNICEF's Office of Strategy and Evidence Innocenti, ECPAT International, and INTERPOL. In some regions, the data shows that one in every 25 children has been victimized in this way. The findings are based on a comprehensive household survey of around 11,000 children across 11 countries, which reveals that perpetrators are increasingly able to create realistic sexualized images of children without any involvement or awareness on the part of the victims. Concern about the use of AI to create fake sexual images or videos is widespread: in some of the countries studied, up to two-thirds of children expressed worry, although levels of concern differ significantly across nations.

UNICEF has made clear that these AI-generated images fall under the definition of child sexual abuse material: "We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM)," the organization stated, emphasizing that "deepfake abuse is abuse, and there is nothing fake about the harm it causes."

The urgency of the issue was underscored by recent events in France, where authorities raided X's Paris offices as part of a criminal investigation into child pornography linked to the Grok chatbot. Executives including Elon Musk have been summoned for questioning.
A report from the Center for Countering Digital Hate found that Grok generated 23,338 sexualized images of children over a single 11-day period. The issue brief accompanying UNICEF's findings highlights an alarming escalation of the risks children face online, noting that a child can become a victim "without ever sending a message or even knowing it has happened." The UK's Internet Watch Foundation recently flagged nearly 14,000 suspected AI-generated images on a single dark-web forum, about a third of which were confirmed to be criminal. South Korean authorities, meanwhile, report a tenfold increase in AI- and deepfake-related sexual offenses between 2022 and 2024, with many suspects identified as teenagers.

UNICEF is urging governments to expand legal definitions of child sexual abuse material to encompass AI-generated content and to criminalize its creation, procurement, possession, and distribution. It has also called on AI developers to adopt safety-by-design practices and on digital companies to take proactive measures against the distribution of such material. The organization advocates child rights due diligence, in particular child rights impact assessments, stressing that all participants in the AI value chain must institute safety measures. "The harm from deepfake abuse is real and urgent," UNICEF warned. "Children cannot wait for the law to catch up."
The European Commission has responded by opening a formal investigation into whether X violated EU digital regulations by failing to mitigate the generation of illegal content through Grok. Meanwhile, the Philippines, Indonesia, and Malaysia have banned the chatbot, and regulatory inquiries are underway in the UK and Australia.


