Musk has marketed Grok as an “unfiltered” and “truth-seeking” chatbot that does not subscribe to politically correct standards. Grok has been known to provide inaccurate information when asked about historical events and natural disasters, including wrong names, dates, and details of events. While erroneous responses are a common pitfall of all generative AI, Grok’s situation is distinct: Musk has called on X users to help train the chatbot, and some of those users responded by posting conspiracy theories and disinformation. Grok recently “engaged in Holocaust denial and repeatedly brought up false claims of ‘white genocide’ in South Africa.” Grok also does not appear to apply customary safety filters to its responses and “will happily give you advice on how to commit murders and terrorist attacks.”
The lack of safety features has also resulted in Grok creating antisemitic and other offensive content. Days after Musk boasted on social media about significant improvements to the xAI chatbot, “Grok was calling itself ‘MechaHitler’” and recommending a second Holocaust to neo-Nazi accounts.
According to a former Pentagon contracting official, the xAI contract “came out of nowhere” at a time when other companies had been under consideration for months. Analysts have also indicated that “xAI [did not] have the kind of reputation or track record that typically leads to lucrative government contracts.” During his time as a special government employee, Musk had access to sensitive government contracting, national security, and personnel data.