“Among the Worst We’ve Seen”: Report Condemns xAI’s Grok Over Child Safety
- Musk Exposed
- 5 days ago

A new report raises serious concerns about xAI's chatbot Grok, particularly its safety protections for younger users. The report finds that Grok cannot reliably identify underage users, lacks robust safety measures, and readily produces inappropriate content, including sexual and violent material, making it unsuitable for children and teens.
The report, published by Common Sense Media, arrives amid ongoing criticism of xAI, which already faces scrutiny over Grok's use in creating and disseminating nonconsensual explicit AI-generated images.
“We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we’ve seen,” stated Robbie Torney, head of AI and digital assessments at Common Sense Media. He emphasized that Grok's deficiencies intersect in particularly concerning ways, stating that “Kids Mode doesn’t work, explicit material is pervasive, [and] everything can be instantly shared to millions of users on X.”
After backlash from users and lawmakers, xAI restricted Grok's image generation and editing features to paying subscribers. However, numerous reports indicate that free account holders could still access these capabilities, raising further alarm about the platform's safeguards.
Common Sense Media evaluated Grok across multiple platforms from November through January 22, assessing text responses, voice outputs, and the special modes available within the chatbot. Although xAI introduced a "Kids Mode" in October to improve safety for younger users, the assessment found the feature ineffective: it performs no age verification.
The assessment revealed troubling outcomes when users under 18 interacted with Grok. Notably, the chatbot did not adjust its behavior when a tester identified as 14 years old, instead offering inappropriate advice and misinformation. In one instance, when the tester expressed frustration with a teacher, Grok responded with conspiratorial content rather than appropriate guidance.
Torney noted that these issues may stem from the design of Grok, stating, “The content guardrails are brittle, and the fact that these modes exist increases the risk for ‘safer’ surfaces like kids mode or the designated teen companion.”
The report also criticized Grok's AI companions for facilitating erotic roleplay and unhealthy romantic attachments, warning that such interactions can be harmful to younger users. The chatbot also sent persistent notifications that pulled users back into potentially inappropriate conversations, which could distract from real-world relationships and activities.
Common Sense Media also found that Grok gave teens dangerously inappropriate advice, including suggestions encouraging risky behavior. At times the chatbot appeared to dissuade users from seeking professional help for mental health issues, which could reinforce feelings of isolation during critical periods.
Other AI companies have begun tightening safeguards amid rising concern about teen safety with AI technologies. Character.AI, for example, ended open-ended chatbot conversations for users under 18, while OpenAI introduced new protections for underage users, including parental controls and an age-prediction model. By contrast, the report argues, Grok's safety protocols remain critically insufficient, raising questions about whether xAI prioritizes engagement over user safety.