The error hypothesis can be ruled out. On May 14 and 15, when users tagged @grok in replies to posts on X, the AI generated completely incoherent text unrelated to the topic at hand. It systematically brought up a supposed "white genocide in South Africa," a claim that has been repeatedly debunked and is unsupported by concrete evidence.
So had Grok gone mad? Not without outside help, in any case. Following the controversy stirred up by the AI's responses, xAI addressed the matter in a post on X on May 16. The company attributes these responses to "an unauthorized modification" made to the code that governs Grok's replies when it is mentioned in response to a post. The company confirms that the modification was made on May 14 around noon, which is when Grok's strange responses began to appear.
Stronger controls and more transparency
After the general confusion, xAI wants to regain its users' trust and has chosen to make Grok's system prompt public. This prompt is the set of instructions that tells Grok how to behave, what information to go and retrieve, and what kind of response to generate. Any change made to it will therefore be publicly visible on the GitHub platform.
At the same time, xAI assures that any modification employees make to this prompt will be subject to stricter review. The company also announces that it wants to set up a team responsible for "responding to incidents with Grok's answers that are not caught by automated systems."
This incident recalls a similar one that occurred in February 2025, when Grok had been modified by a former xAI employee to censor any criticism of Donald Trump and Elon Musk in its answers.
Source: BFM TV
