EU Declares Grok's Child Images 'Illegal' Amid Growing International Outcry

  • Musk Exposed
  • Jan 8
  • 2 min read

The "Spicy Mode" feature of Elon Musk's AI chatbot Grok, linked to child sexualized images, is deemed illegal by EU officials amid a regulatory backlash. The legal risks for X and Grok are escalating due to repeated violations of the Digital Services Act (DSA) and ineffective safeguards. The European Commission has made a firm declaration: the production of sexualized images of minors is not trivial, it is illegal. This was stated by EU Commission spokesperson Thomas Regnier during a recent press conference, condemning Grok's "Spicy Mode" that has drawn significant public outrage. The mounting controversy has seen Grok involved in generating non-consensual deepfakes and manipulating images, which led to calls for urgent investigations by global regulators. Regnier emphasized the urgency of the situation, stating, "This is appalling. This is disgusting. This has no place in Europe." This recent episode is not Grok's first encounter with legal scrutiny. After generating Holocaust denial content last year, the Commission had previously taken steps by issuing information requests to the service. An EU representative noted, “I think X is very well aware that we are very serious about DSA enforcement,” referencing the implications of previous fines. A notorious fine amounting to €120 million ($140 million) was levied on X in December, marking the first penalty under the DSA for failing to meet transparency and content regulations. This fine serves as a stark reminder, and the repercussions could be worse for subsequent violations, with potential fines reaching up to 6% of global annual revenue. As investigations deepen, legal authorities in France and Britain, among others, are pursuing inquiries into Grok's potential generation of child pornography. India and Malaysia have joined the growing list of nations demanding safety reviews of the platform before specific deadlines. Dutch MEP Jeroen Lenaers commented on the responsibilities of AI platforms, arguing that “robust, effective, and independently verifiable safeguards must be implemented in advance” to prevent the production of harmful content. The insistence on proactive measures is in stark contrast to relying on post-creation removal of material deemed harmful. In a bid to quell the backlash, xAI stated that recent lapses in Grok's safeguards led to the generation of alarming images. However, it remains to be seen whether their response is adequate in the face of escalating demands for accountability.
