As AI use grows, there is a debate over how India can balance non-intrusive regulation, user protection, and the development of domestic AI capacity without stifling innovation.
India’s Current Regulatory Approach to AI
- Regulation Through Existing Laws: India currently regulates the use of Artificial Intelligence by requiring platforms to exercise due diligence under the Information Technology Act, 2000 and the IT Rules, 2021.
- AI use is also governed by the Digital Personal Data Protection Act, 2023, as well as sector-specific financial regulations of the RBI and SEBI.
- Current laws regulate risks adjacent to AI, such as fraud and privacy violations, rather than harms arising from AI systems themselves.
- Lack of Explicit Duty of Care: India has not defined a duty of care regarding AI-related psychological harms. This creates regulatory gaps in consumer protection despite growing AI adoption.
China’s AI Safety Framework
- Focus on Emotionally Interactive Services: China recently unveiled draft rules targeting emotionally interactive AI services.
- These rules propose that companies must warn users against excessive use and intervene when they detect signs of extreme emotional states.
- Justification and Risks: These rules appear justified in addressing psychological dependence, which general content regulation does not adequately cover.
- However, requiring companies to identify users’ emotional states is itself risky, as it can incentivise more intimate and intrusive monitoring of users.
India’s Approach vs China’s Approach
- India’s Approach: India’s approach is less intrusive, as it does not mandate monitoring of users’ emotions. However, it remains incomplete: it relies largely on existing laws and lacks a specific AI safety net.
- China’s Approach: China follows a more intrusive approach that actively monitors users’ emotional states. It creates a specific safety regime for this purpose, and the state explicitly assumes a “duty of care” toward users.
Sectoral Regulators of AI
- Ministry of Electronics and Information Technology (MeitY): It has used the IT Rules to require platforms to curb deepfakes and online fraud. It has also moved to define “synthetically generated” content and mandate its labelling.
- MeitY’s approach has largely been reactive, which limits forward-looking governance of emerging AI risks.
- Reserve Bank of India (RBI): The RBI has set out supervisory expectations for governing model risk in credit.
- It has also developed the FREE-AI (Framework for Responsible and Ethical Enablement of AI) framework to guide responsible AI use in financial services.
- Securities and Exchange Board of India (SEBI): SEBI has pushed for clear accountability for how regulated entities use AI tools, ensuring responsibility for AI-driven decision-making in capital markets.
Way Forward
- Improve Upstream Capability: India should avoid choking upstream capability, since doing so would both stifle innovation and deepen dependence on foreign technologies.
- Some ways of improving upstream capability are:
- Compute Access: Provide better computational resources (GPUs).
- Upskilling: Train the workforce for AI development.
- Public Procurement: The government should buy from domestic AI startups to create demand for their products.
- Research to Industry: Translate lab research into market products.
- Regulate Downstream: India should regulate the deployment (downstream) stage in the following ways:
- Regulate Assertively: Regulation should not stifle innovation at the stage of creation, but it must be strict and responsible at the stage of deployment.
- High-Risk Contexts: Additional obligations should be imposed when AI is used in high-risk areas such as healthcare and lending.
- Incident Reporting: Companies should be required to report incidents involving harmful model behaviour and system failures.
- Duty of Care: Companies should be held accountable for user safety without resorting to emotion monitoring or intrusive surveillance.
Conclusion
India must pursue Sovereign AI while ensuring citizen safety, legally recognising psychological harms under a duty of care, and avoiding intrusive surveillance-state models that undermine privacy and democratic freedoms.