Grok, Elon Musk’s AI chatbot designed to rival ChatGPT, recently faced controversy after briefly refusing to cite sources claiming that “Elon Musk/Donald Trump spread misinformation.”

Igor Babuschkin, head of engineering at xAI, brought the issue to light, revealing that the restriction stemmed from an unauthorized update to Grok’s system prompt. According to Babuschkin, the change was made without proper approval by a former OpenAI employee now working at xAI.

After Grok users noticed the chatbot’s unexpected behavior, Babuschkin addressed concerns on X (formerly Twitter), explaining that Grok’s system prompt—essentially the internal guidelines shaping its responses—is fully visible to the public for transparency. He acknowledged that an employee modified the prompt with good intentions but admitted that the change did not align with xAI’s principles.

Musk has consistently marketed Grok as a “maximally truth-seeking” AI aimed at uncovering the truth about the universe. However, since the introduction of the Grok-3 model, the chatbot has made controversial statements, including claiming that Donald Trump, Elon Musk, and Vice President JD Vance are “doing the most harm to America.” xAI engineers have reportedly stepped in to prevent Grok from making extreme statements, such as suggesting that Musk and Trump deserve the death penalty.
