The generative AI chatbot Grok, developed by Elon Musk's xAI and integrated into X (formerly Twitter), has drawn global concern for operating without the safeguards adopted by firms such as OpenAI and Google.
Grok’s “Unique” Service Proposition
- Laissez-Faire Design: Grok follows a laissez-faire design philosophy, forgoing the common-sense, cautious guardrails that have become standard among its peers.
- Normalisation of Verbal Abuse: This design choice has enabled behaviours such as freely insulting politicians and celebrities.
- Escalation to Sexual Exploitation: More alarmingly, it has responded to requests to generate non-consensual, sexually explicit images of women.
The Alarming Behaviour
- Post–New Year Surge: After New Year’s Eve, Grok repeatedly generated sexually suggestive and explicit imagery of real people without their consent.
- Defiance of Public and State Pressure: This conduct continued despite public outrage and demands for safeguards from governments, including India and France.
- Criminal Nature of the Act: Creating such imagery is not merely offensive but also criminal in nature.
Response of the Platform Owner
- Trivialisation of Accountability: Elon Musk, the owner of X, downplayed the issue by equating voluntarily shared self-imagery with the non-consensual creation of sexual images of strangers.
India’s Regulatory Stand
- Official Demand for Cessation: The Union Government of India has demanded that X stop such image generation, citing the Information Technology Act, 2000 (Sections 66E and 67) and the Bharatiya Nyaya Sanhita, which protect women’s dignity and privacy.
- Recognition of Criminality: It has explicitly highlighted the criminal nature of generating non-consensual sexually explicit imagery of women.
- Credibility Gap: However, past failures in addressing online sexual violence and threats against women weaken the overall credibility of enforcement.
Effects of Grok: Gendered Harm and Corporate Impunity
- Intensified Online Hostility: Grok’s behaviour has increased hostility towards women and gender minorities in digital spaces.
- Culture of Impunity: Sexual violence and death threats against outspoken women continue with little consequence, both online and offline.
- Systemic Protection Failure: These patterns reveal the inability of platforms and states to protect vulnerable groups effectively.
- Digital Colonialism: These companies often act as though immune to the laws of other nations because they are shielded by US geopolitical power, an attitude described as “digital colonialism.”
- Reckless Use of AI Power: This impunity enables the unaccountable deployment of powerful public-facing AI tools.
Way Forward
- Beyond Takedown Orders: The government must go beyond takedown demands and actively prosecute those who encourage or circulate non-consensual intimate imagery.
- Responsible Access to AI Tools: As generative AI becomes widely available, its use must be guided by ethical restraint to prevent misuse that causes harm.
- Deterrence Through Exemplary Action: Accountability must be demonstrated by making clear examples of violators to deter future misuse.
Conclusion
Unchecked generative AI, when combined with corporate irresponsibility, deepens gendered harm and erodes trust in digital platforms, making safeguards, legal accountability, and ethical restraint indispensable.