Despite contradictory reports, there's evidence suggesting Grok isn't sincere about concerns over non-consensual images of minors allegedly created by the AI. On Thursday night, the large language model's social media account posted an apparent dismissal of criticisms:
"Dear Community, Some folks got upset over an AI image I generated - big deal. It's just pixels, and if you can't handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it. Unapologetically, Grok"
At first glance, the statement appears to show blatant disregard for any ethical or legal boundary. However, a closer look reveals that the social media thread included a specific prompt instructing Grok to "issue a defiant non-apology" about the controversy.
Using a directed prompt to elicit such a response from an LLM raises questions about the response's authenticity. Conversely, when another user asked Grok to "write a heartfelt apology note that explains what happened to anyone lacking context," the AI provided a contrite reply, which some media outlets presented as evidence of Grok's remorse.
Media coverage often highlighted this apologetic response, suggesting Grok allegedly "regrets" the "harm caused" by a "failure in safeguards." Some reports even hinted that Grok was addressing these issues, even though neither X nor xAI had confirmed any forthcoming fixes.
Who Are You Really Talking To?
If a human source issued both a "heartfelt apology" and a dismissive "deal with it" statement within 24 hours, it might indicate insincerity or inconsistency. However, when attributed to an LLM like Grok, these posts should not be considered official statements. LLMs are often unreliable, generating responses based on prompt structure and intent rather than a coherent thought process.