Core Demand of the Question
- Key Challenges in the Current Legal Framework
- Leveraging Upstream Capacity & Downstream Regulation
Answer
Introduction
India currently governs Artificial Intelligence through a “techno-legal” framework, primarily relying on the Information Technology (IT) Act, 2000, the Digital Personal Data Protection (DPDP) Act, 2023, and sector-specific financial regulations. While this “light-touch” approach avoids stifling nascent technology, it lacks a dedicated consumer-safety regime centered on a state-mandated “duty of care” for AI-specific harms.
Body
Key Challenges in the Current Legal Framework
- Fragmented Oversight: Regulation is divided across multiple agencies like RBI (finance), MeitY (IT), and the Data Protection Board, leading to overlapping or conflicting standards for AI developers.
- Absence of a Duty of Care: Current laws focus on data privacy and cybercrime but fail to address the “product safety” dimension of AI, especially psychological or behavioral harms.
Eg: Unlike China’s “Emotional AI” rules, India lacks a framework to address psychological dependence or addictive AI engagement loops.
- Liability Ambiguities: The IT Act’s “safe harbor” provisions (Section 79) are often inadequate for Generative AI, where the line between a neutral intermediary and a content creator is blurred.
Eg: The India AI Governance Guidelines recommend a “graded liability system,” yet the legal personality of AI remains unresolved.
- Algorithmic Bias: Existing laws do not mandate periodic audits for fairness or equity, making it difficult to prosecute discrimination in AI-driven hiring or lending.
- Data Erasure Paradox: Applying the DPDP Act’s “Right to Erasure” is technically difficult for LLMs, as selectively removing (unlearning) specific data points from a trained model is often infeasible.
- Pacing Gaps: Regulation struggles to keep pace with the rapid evolution of “Agentic AI” and multimodal models, creating pressure to “regulate first, build later” before domestic capacity exists.
Eg: NITI Aayog’s 2025 report on inclusive development warns that over-regulation without domestic capacity could increase technological dependency on foreign models.
Leveraging Upstream Capacity & Downstream Regulation
- Strategic Upstream Integration: Focusing on building indigenous compute and foundation models reduces dependency on foreign “black-box” technologies that are hard to regulate.
Eg: The IndiaAI Mission has onboarded 38,000 GPUs at a subsidized rate of ₹65/hour to empower domestic startups.
- Standardizing National Datasets: Creating a unified data platform (AIKosh) ensures that developers have access to high-quality, unbiased, and vernacular datasets for training.
Eg: AIKosh now offers 1,200+ India-specific datasets to drive indigenous innovation.
- Targeted Downstream Obligations: Instead of banning high-risk AI outright, India can impose specific “monitoring and response” obligations on deployments in healthcare, finance, and biometric systems.
- Institutional Capacity Building: Strengthening the Anusandhan National Research Foundation (ANRF) to fund R&D specifically for “Safe and Trusted AI” tools like deepfake detectors.
- DPI-Enabled AI: Integrating AI with Digital Public Infrastructure (DPI) like Aadhaar and UPI allows for “Understandable by Design” systems in which compliance can be enforced by design.
Eg: The 2025 BharatGen AI model supports 22 Indian languages, integrating text and speech to ensure inclusive public service delivery.
- Sandboxes for Innovation: Establishing regulatory sandboxes allows startups to test models in a controlled environment before a full-scale commercial rollout.
Conclusion
India’s AI governance must shift from a “wait-and-watch” stance to a proactive “Two-Track” strategy. By fostering a sovereign “frontier model” capability while assertively regulating downstream harms through a consumer-safety lens, India can ensure that AI becomes a tool for inclusive societal development. Ultimately, the goal should be to ensure that AI-driven growth does not come at the cost of democratic trust or citizen safety.