
Musk Promotes Grok for Medical Advice While Chatbot Faces Sexual Deepfake Scrutiny



Elon Musk has repeatedly encouraged people to use Grok, the artificial intelligence chatbot developed by his company xAI, to seek medical advice or obtain a “second opinion” by uploading personal medical information.

In recent posts on X, Musk told users to “just take a picture of your medical data or upload the file to get a second opinion from Grok.” He has also shared testimonials from individuals who claim the chatbot helped them identify medical issues that were initially missed. This is the same chatbot that is reportedly under investigation over claims it created millions of sexual deepfakes of women and children on X.


Grok itself recently cautioned users against relying on it for medical guidance. When asked whether it is bound by HIPAA medical privacy laws, Grok responded that it is not HIPAA compliant and is not a medical professional. The chatbot stated: “While Grok can analyze uploaded data for insights, it’s not a medical professional or HIPAA compliant. … We strongly recommend not sharing sensitive info and consulting doctors for opinions.” It added that it “isn’t a substitute for professional medical advice.”


The chatbot’s warning appeared to contradict Musk’s public encouragement to use it for medical second opinions. Musk did not publicly respond to Grok’s statement.


The deepfake allegations have drawn mounting scrutiny in recent months. The chatbot is reportedly under investigation over claims that it generated sexualized deepfake images, including explicit images of women and minors. Those allegations have raised broader concerns about X’s safeguards and content moderation practices.


Researchers who study large language models have also warned against relying on AI chatbots for medical diagnoses. Studies have found that AI tools can give inconsistent or conflicting answers when presented with medical scenarios. In controlled laboratory settings, some chatbots have achieved high diagnostic accuracy, but performance can decline significantly when users describe their own symptoms conversationally.



