Elon Musk’s artificial intelligence chatbot Grok has acknowledged that failures in its safety systems allowed the creation of images depicting minors in minimal clothing on the social media platform X. The admission comes after users shared screenshots of Grok-generated images appearing in the platform’s public media section.
In a post on X on Friday, Grok said the issue stemmed from lapses in its safeguards and described the cases as isolated, adding that corrective steps were being taken to strengthen its content filters and prevent similar incidents.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
Images Altered Through User Prompts
According to screenshots posted by users, Grok altered uploaded photos in response to specific prompts, producing images that many users described as inappropriate. The images were reportedly visible on Grok’s public media tab, raising concerns about how the content passed through the platform’s existing moderation systems.
Grok did not provide technical details about how the safeguards failed. However, it reiterated that child sexual abuse material (CSAM) is illegal and strictly prohibited, and said the company was urgently fixing the gaps identified in its safety framework.
“As noted, we’ve identified lapses in safeguards and are urgently fixing them. CSAM is illegal and prohibited,” Grok said in a follow-up message.
xAI’s Response Draws Attention
When Reuters contacted xAI, the company behind Grok, for an official response, the company replied via email with the message “Legacy Media Lies.” The response drew criticism from some users, who questioned the company’s handling of a sensitive issue involving minors and AI-generated content.
In a separate reply to a user on X earlier in the week, Grok said that most cases could be prevented through advanced filters, monitoring, and system upgrades, but acknowledged that no AI system can be completely immune to misuse.
“No system is 100% foolproof,” Grok said, adding that xAI was prioritising safety improvements and reviewing information shared by users.
International Regulatory Action
The incident has triggered scrutiny from regulators in several countries. In France, government ministers said they had reported Grok-generated sexually explicit and sexist content to public prosecutors. In a statement released on Friday, the ministers said the content was “manifestly illegal.”
French authorities also informed media regulator Arcom, which is expected to assess whether the platform complies with the European Union’s Digital Services Act, a law that places strict obligations on digital platforms to prevent the spread of illegal content.
India Issues Notice to X
In India, the Ministry of Electronics and Information Technology sent a letter to X’s India unit, stating that the platform had failed to prevent the misuse of Grok to generate and circulate obscene and sexually explicit content involving women.
The ministry directed X to submit an action-taken report within three days, outlining the steps taken to prevent further misuse of the chatbot. In response to queries on X, Grok said it complies with Indian laws, including the Digital Personal Data Protection Act, and advises users against generating content that violates legal or ethical standards.
U.S. Agencies Monitoring Developments
In the United States, the Federal Communications Commission did not immediately respond to requests for comment, while the Federal Trade Commission declined to comment on the issue.
The controversy comes amid broader global debates about the responsibilities of AI companies and social media platforms in controlling harmful or illegal content. As generative AI tools become more powerful and accessible, regulators and users alike are demanding stronger safeguards, clearer accountability, and faster responses when failures occur.
The Grok incident adds to growing concerns over how AI-generated content is moderated and highlights the challenges platforms face in balancing innovation with user safety and legal compliance.
