Musk's X Limits Some Sexual Deepfakes After Backlash, But Grok Still Remains Largely Unrestricted

  • Musk Exposed
  • Jan 13
  • 2 min read

On Musk's social media platform X, the Grok AI image generation bot now restricts its features to paying users, following considerable backlash over the bot's ability to create sexual deepfakes of women and minors. When users request image generation, the bot responds with text stating, “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features,” along with a link to upgrade to a premium account. As a result, the Grok reply bot on X appears to have substantially curbed the production of sexualized images of identifiable individuals. However, the standalone Grok app and certain features within the X platform still permit users to generate explicit content, including transforming photos to put subjects in revealing attire without their consent.

In an examination of the Grok app's capabilities, NBC tested the system by requesting transformations of images of a consenting individual. The bot complied, altering the subject's attire to more revealing swimwear and placing the subject in sexualized contexts.

These issues have sparked a wave of criticism and prompted regulators to weigh in on the platform's responsibilities. Following an escalation in the generation of sexually explicit, nonconsensual images on X, many users expressed alarm, leading to public outcry and warnings from watchdog groups. British Prime Minister Keir Starmer condemned the situation, labeling it “disgraceful” and insisting that X “has got to get a grip of this.” Regulatory bodies, including the UK's Ofcom, have initiated communications with X and xAI to ensure compliance with safety regulations and protect users from harmful content. Other countries have likewise requested information about Grok's impact on users and its safety protocols.

In the U.S., lawmakers are urging X to take further steps to rein in Grok's output, which they argue falls under the recently enacted Take It Down Act. The legislation criminalizes the creation and distribution of AI-generated nonconsensual pornography, with provisions meant to compel social media platforms to act. U.S. representatives have called on X to remove nonconsensual imagery in order to safeguard victims' dignity and privacy, particularly as enforcement options for AI-generated content remain limited.

Despite existing regulations, neither Apple's nor Google's app store has removed X or Grok, even though their guidelines appear to forbid sexualized child imagery and nonconsensual depictions. This inaction remains a point of contention, and several Democratic senators have formally asked both companies to reassess whether the apps comply with store policies in light of Grok's outputs.
