Military AI: Global Governance, Strategic Risks and the Future of Human Control

19 Feb 2026


Recently, the third global summit on Responsible Artificial Intelligence in the Military Domain (REAIM), held in A Coruña, Spain, marked a significant shift from broad ethical principles to practical operationalization.

  • A divide has emerged between global regulation and strategic security: India, along with the U.S. and China, abstained from the ‘Pathways to Action’ declaration, seeking to balance technological sovereignty, ethics, and national defence.

Key Outcomes of the Third Global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM)

  • Adoption of the ‘Pathways to Action’: The primary outcome of the 2026 summit was the adoption of the “Pathways to Action” declaration. 
    • This document was designed to translate the high-level principles of the previous 2024 Seoul Blueprint for Action into concrete national implementation steps.
    • Scope: It outlines 20 specific actions categorized into impact assessment, implementation strategies, and future governance.

    • Key Provisions: It affirms Human Responsibility over weapon systems, mandates clear Chains of Command, and calls for robust Article 36 Legal Reviews for all new AI-enabled military capabilities.
    • Signatory Drop: Unlike the 2024 summit, which saw 61 endorsers, only 35 out of 85 participating countries signed the 2026 declaration.
  • Launch of the Framework of Responsible Industry Behaviour: In a first-of-its-kind initiative, the United Nations Institute for Disarmament Research (UNIDIR) and the Office of the High Commissioner for Human Rights (OHCHR) launched a voluntary framework targeting the private sector.
    • Significance: Recognizing that most military AI innovation happens in the civilian tech sector, this framework provides a baseline for Responsible Industry Conduct.
    • Focus: It guides companies on ethical system design, data integrity, and post-deployment monitoring to prevent Algorithmic Bias and misuse.
  • The Great Power Abstention: A defining feature of the 2026 summit was the “Strategic Reluctance” shown by the world’s leading AI powers.
    • U.S. and China: Both nations sent significantly smaller delegations and declined to sign the Pathways to Action.
    • India: Continuing its policy of Strategic Autonomy, India abstained from the declaration, maintaining that legally binding instruments are “premature” given the evolving nature of the technology and its own national security compulsions.
    • Impact: This created a “Middle Power” leadership vacuum, filled largely by nations like Spain, the Netherlands, South Korea, and Canada, who are now driving the normative agenda.
  • Focus on ‘Responsible by Design’ (RbD): The summit institutionalized the “Responsible by Design” philosophy. This move shifts the focus from checking for ethics after a system is built to integrating legal and ethical constraints into the Software Development Life Cycle (SDLC).
    • Outcome: States were encouraged to adopt a RACI Matrix (Responsible, Accountable, Consulted, Informed) to clarify roles across the lifespan of an AI system, from initial R&D to decommissioning (a hypothetical example of such a matrix is sketched after this list).
  • Nuclear Firewalls and Escalation Risks: A critical discussion centered on the Nexus of AI and Nuclear Weapons.
    • Outcome: There was a strong consensus (though not a binding agreement) among signatories to maintain Human Control over Nuclear Command, Control, and Communications (NC3) systems to prevent “flash wars” or accidental algorithmic escalation.
  • Capacity Building for the Global South: Recognizing that many countries lack the technical infrastructure to evaluate military AI, the summit highlighted the role of Regional Centres of Excellence.
    • Significance: The outcomes emphasized the need for Knowledge-Sharing Hubs to ensure that developing nations are not left with a “security deficit” or excluded from future governance discussions.
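
A RACI matrix is, in essence, a lookup table from lifecycle phases to role-holders. The Python sketch below is a purely hypothetical illustration of what such a matrix might look like for an AI-enabled system; the phase names, actors, and assignments are assumptions for illustration and are not prescribed by the summit documents.

```python
# A minimal, hypothetical RACI matrix for an AI-enabled military system.
# Lifecycle phases, actors and role assignments are illustrative assumptions only.

RACI_MATRIX = {
    "R&D": {
        "Responsible": "Developer lab",
        "Accountable": "Programme director",
        "Consulted":   "Legal advisers (Article 36)",
        "Informed":    "Oversight board",
    },
    "Testing & Evaluation": {
        "Responsible": "T&E unit",
        "Accountable": "Programme director",
        "Consulted":   "Data scientists",
        "Informed":    "Operational commanders",
    },
    "Deployment": {
        "Responsible": "Field commander",
        "Accountable": "Chain of command",
        "Consulted":   "Legal advisers",
        "Informed":    "Ministry of Defence",
    },
    "Decommissioning": {
        "Responsible": "Systems engineers",
        "Accountable": "Programme director",
        "Consulted":   "Auditors",
        "Informed":    "Oversight board",
    },
}

def who_is(role: str, phase: str) -> str:
    """Look up which actor holds a given RACI role in a given lifecycle phase."""
    return RACI_MATRIX[phase][role]

# Example: who is Accountable when the system is deployed?
print(who_is("Accountable", "Deployment"))  # -> Chain of command
```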

About Responsible AI in the Military Domain (REAIM)

REAIM is a high-level, multi-stakeholder platform launched to address the ethical, legal, and technical challenges of AI in warfare.

  • Genesis: The inaugural summit was held in The Hague, Netherlands (2023), followed by Seoul, South Korea (2024), and Spain (2026).
  • Core Objective: To foster global consensus on maintaining “Meaningful Human Control” over weapon systems and ensuring that military AI complies with International Humanitarian Law (IHL).
  • Key Outcomes: The process has evolved from a “Call to Action” (2023) to a “Blueprint for Action” (2024), and most recently the “Pathways to Action” (2026).
    • The “Pathways to Action” (2026) seeks to turn high-level principles into concrete national implementation frameworks.

Significance & Need for REAIM

The urgency for global guardrails under the REAIM framework is driven by the transformative—and potentially destabilizing—nature of Artificial Intelligence (AI) in conflict. Consolidating the ethical, legal, and operational imperatives, the need for REAIM is underscored by the following factors:

  • Compression of the Decision Cycle (OODA Loop): AI can process massive datasets and identify patterns at machine speed, drastically accelerating the OODA Loop (Observe, Orient, Decide, Act).
    • Guardrails ensure that AI remains an augmentative tool rather than a runaway decision-maker.
  • Balancing Precision with Algorithmic Bias: While responsibly used AI can enhance accuracy and reduce Collateral Damage (unintended damage to non-targets), flawed or biased training data can just as easily produce systematic targeting errors, making guardrails essential.
  • Compliance with International Humanitarian Law (IHL): IHL mandates that any weapon system must adhere to the core principles of Distinction (the ability to distinguish between civilians and combatants) and Proportionality (ensuring military gain outweighs civilian risk).
  • Ensuring Meaningful Human Control and Agency: At the heart of REAIM is the preservation of Human Agency in lethal decision-making.
    • Meaningful Human Control: This principle dictates that humans, not algorithms, must remain morally and legally responsible for the use of lethal force. 
      • It ensures that a commander understands the “how and why” of a system’s target selection (countering the “Black Box” problem) before authorizing an attack.
    • The Moral Imperative: It is vital that the choice to take a human life remains a conscious moral decision rather than a cold mathematical output from a machine.

About Black Box Problem

  • The “Black Box” Problem refers to the inherent opacity of advanced Artificial Intelligence (AI) systems, particularly those using Deep Learning and Neural Networks.
    • In these systems, while the inputs (data) and outputs (decisions) are visible, the internal mathematical processes used to reach those conclusions are hidden or too complex for human comprehension.
  • In the high-stakes military domain, this “black box” nature creates a critical tension between Operational Efficiency and Human Responsibility.
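
To make the opacity concrete, the sketch below runs a tiny, randomly initialised neural network in Python. It is an illustrative toy, not a model of any real system: the point is only that the input vector and the output score are fully visible, while the internal weights offer no human-readable explanation of how one became the other.

```python
# Minimal sketch of the "black box" property: the input and the output are
# visible, but the internal computation is ~25,000 numeric weights that carry
# no human-readable rationale. The network is randomly initialised and purely
# illustrative; it is not any fielded system.
import numpy as np

rng = np.random.default_rng(seed=0)

# A toy feed-forward network: 64 sensor features -> 128 -> 128 -> 1 score.
W1 = rng.normal(size=(64, 128))
W2 = rng.normal(size=(128, 128))
W3 = rng.normal(size=(128, 1))

def target_confidence(sensor_features: np.ndarray) -> float:
    """Visible output: a single 'confidence' score between 0 and 1."""
    h1 = np.tanh(sensor_features @ W1)   # hidden layer 1 (8,192 weights)
    h2 = np.tanh(h1 @ W2)                # hidden layer 2 (16,384 weights)
    logit = float((h2 @ W3)[0])          # output layer (128 weights)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes the logit to [0, 1]

x = rng.normal(size=64)        # visible input: a vector of sensor readings
score = target_confidence(x)   # visible output, e.g. 0.42
# The "why" is buried in the weight matrices: nothing above tells a commander
# which features drove the score, which is exactly the explainability problem.
print(round(score, 2))
```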

Key Challenges in Governing Military AI

  • The Definitional Deadlock and Strategic Ambiguity: A primary hurdle in establishing international norms is the lack of a universally accepted definition for Lethal Autonomous Weapons Systems (LAWS).
    • Deadlock Mechanics: Without a consensus on what constitutes “autonomy” or “lethality,” drafting a binding treaty is nearly impossible.
    • The Strategic Split: 
      • Advanced Nations: Often prefer “high-threshold” definitions that focus on systems with zero human oversight, thereby keeping semi-autonomous or human-supervised systems legal and outside of regulation.
      • Non-Advanced Nations: Many seek an all-out ban or “low-threshold” definitions to restrict the development of systems that would place states without AI capabilities at a massive military disadvantage.
    • Killer Robots: Often referred to as “killer robots,” LAWS are defined by their ability to search for, select, and engage targets without further human intervention. 
      • The absence of a clear legal boundary prevents the categorization of these systems under current arms control frameworks.
  • The Dual-Use Dilemma and Verification Nightmares: The Dual-Use Dilemma highlights how AI technology designed for civilian progress can be seamlessly pivoted to destructive military ends.
    • Repurposing Civilian Tech: AI developed for benign purposes, such as Civilian Logistics, Self-driving Cars, or Autonomous Delivery Drones, can be easily repurposed into Loitering Munitions (suicide drones) or intelligent surveillance tools.
    • Chemical Warfare Example (2022): In a stark demonstration of this risk, researchers showed that an AI initially designed for Pharmaceutical Discovery (drug development) could be “inverted” to generate blueprints for 40,000 new chemical weapon agents in under six hours.
    • Compliance Nightmare: Because R&D for these technologies often happens in the private sector, verifying whether a nation is developing “responsible” civilian AI or “lethal” military AI is an arms-control nightmare.
  • Escalation Risks and the Nuclear-AI Nexus: The integration of AI into Nuclear Command, Control, and Communications (NC3) introduces an existential threat to global stability.
    • Accidental Escalation: The use of AI-enabled decision-support tools can create Automation Bias, where human operators over-rely on algorithmic outputs. 
      • An algorithmic error or a “hallucination” by a generative AI could trigger an accidental nuclear launch or a rapid escalation before human diplomats can intervene.
    • Strategic Stability: As states race to gain a “decision advantage,” the fear of falling behind—known as the Security Dilemma—may lead them to integrate untested AI into strategic warning systems, increasing the risk of “Flash Wars” (conflicts that escalate at machine speed).
  • The Accountability Gap and the “Black Box” Problem: If an autonomous system commits a war crime, the current legal framework for Command Responsibility becomes severely blurred.
    • The Responsibility Gap: It remains unclear who should be held liable: 
      • The Programmer (for a coding error), 
      • The Commander (for deploying the system), or 
      • The Machine (which has no legal standing).
    • Article 36 Reviews: Under Additional Protocol I of the Geneva Conventions, states are legally required to conduct Article 36 Reviews to determine if new weapons are prohibited by international law.
    • The Explainability Crisis: The “Black Box” nature of AI—where the internal logic of deep learning models is too complex for humans to understand—makes these legal reviews nearly impossible. 
      • If a commander cannot explain why an AI selected a target, they cannot guarantee compliance with International Humanitarian Law (IHL).
  • The AI Arms Race and Safety Shortcuts: The global competition for military AI supremacy has triggered an AI Arms Race that incentivizes speed over safety.
    • Capability Gaps: To avoid falling behind adversaries, nations are incentivized to cut corners on rigorous Testing and Evaluation (T&E) and safety protocols.
    • Algorithmic Bias: If training data is flawed or biased, AI could lead to catastrophic targeting errors or systematic discrimination on the battlefield.
      • Recent Example (2024-2025): The deployment of AI-powered targeting systems like Lavender and Gospel in recent high-intensity conflicts has demonstrated AI’s ability to accelerate target generation. 
        • However, these cases also sparked global outcry regarding high civilian casualty rates, illustrating the dangers of relying on machine outputs when Human Oversight is minimal.

India’s Initiatives

India has adopted a “Watch-and-Wait” approach internationally while aggressively modernizing its Indigenous Capabilities:

  • Defence AI Council (DAIC): This body provides policy direction and oversees the integration of AI across the Indian Armed Forces.
  • 75 AI Products for ‘Aatmanirbharta’: The Ministry of Defence (MoD) has operationalized 75 AI-powered products, including Autonomous Surveillance Drones and AI-based Language Translators for border intelligence.
  • Innovation for Defence Excellence (iDEX): This initiative funds startups to develop Dual-Use AI technologies, such as swarm drone algorithms and predictive maintenance for fighter jets like the LCA Tejas (Light Combat Aircraft).
  • India AI Governance Guidelines (2025-2026): India recently released a principle-based framework focusing on “Safety and Trust,” ensuring that AI innovation does not compromise national values or human safety.
  • Position at the UN CCW: At the United Nations Convention on Certain Conventional Weapons (CCW), India has chaired and actively participates in the Group of Governmental Experts (GGE) on LAWS, advocating for a balance between military necessity and humanitarian imperatives.

Global Initiatives & Best Global Practices

The governance of military AI is evolving through a mix of high-level political declarations, humanitarian advocacy, and “security-by-design” technical standards. These initiatives aim to bridge the gap between rapid innovation and the slow pace of international treaty-making.

  • U.S. Political Declaration on Responsible Military Use of AI (2023): Endorsed by over 50 states as of 2026, this U.S.-initiated framework is a leading non-binding effort to establish shared norms for the responsible military use of AI among endorsing states and their armed forces.
    • Core Tenets: It outlines 10 guiding principles, including the requirement that AI systems have explicit, well-defined uses and are subject to rigorous Testing and Evaluation (T&E) throughout their entire life cycle.
    • Senior-Level Oversight: It mandates that “high-consequence” applications—such as those involving the use of force—undergo senior-level human review before deployment to mitigate Automation Bias.
    • Global Best Practice: The declaration serves as a “community of practice” where endorsing states exchange methodologies for building Transparent and Auditable AI architectures.
  • The “Seoul Framework” and Blueprint for Action (2024): Emerging from the REAIM 2024 summit in South Korea, this framework moved the conversation from abstract ethics to national accountability.
    • Blueprint for Action: Adopted by 61 countries, it provides a roadmap for states to integrate ethical standards into their national military doctrines.
    • Key Innovation: It encourages states to adopt national “Codes of Conduct” that enforce transparency in military AI procurement, ensuring that private sector contractors meet the same ethical standards as the military.
    • Perspectives Inclusion: The framework was notable for incorporating “Youth and Peace” perspectives, recognizing that the long-term impact of AI warfare will be felt by future generations.
  • NATO’s “Responsible by Design” (RbD) Strategy: The North Atlantic Treaty Organization (NATO) has pioneered the “Responsible by Design” philosophy to maintain a technological edge without sacrificing core values.
    • Hard-Coding Ethics: RbD advocates for integrating legal and ethical constraints directly into the Software Development Life Cycle (SDLC). This means “firewalls” and “logic gates” are coded into the AI from the research phase (a hypothetical gate of this kind is sketched after this list).
    • Six Principles of Use: NATO’s AI strategy is built on six pillars: Lawfulness, Responsibility, Explainability, Reliability, Governability, and Bias Mitigation.
    • Certification Protocols: NATO is developing standardized Certification Procedures to ensure that an AI system developed in one member state is legally and technically “interoperable” with others in a coalition.
  • Article 36 Reviews (The Legal Gateway): Under Article 36 of Additional Protocol I to the Geneva Conventions, states have a legal obligation to determine if a “new weapon, means, or method of warfare” is prohibited by international law.
    • Mandatory Legal Audits: Some states are now institutionalizing these reviews specifically for AI, requiring legal teams to work alongside data scientists to verify International Humanitarian Law (IHL) compliance.
    • The Challenge of Learning Algorithms: A best practice currently being discussed is the “Continuous Review” model. 
      • Unlike traditional weapons, AI can “learn” and change behavior after deployment, necessitating periodic legal re-evaluations even after the weapon is in the field.
  • ICRC’s “Two-Tier” Recommendations: The International Committee of the Red Cross (ICRC) provides a moral and humanitarian compass for the global debate.
    • Tier 1 (Prohibitions): The ICRC calls for a total ban on Lethal Autonomous Weapons Systems (LAWS) that are unpredictable (black box) or designed to target humans directly without any human involvement.
    • Tier 2 (Restrictions): For all other autonomous systems, the ICRC advocates for strict limits on the Time and Space of operation to ensure that humans can maintain Meaningful Human Control and prevent unintended civilian harm.
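
The “hard-coding ethics” idea under NATO’s Responsible by Design strategy above can be pictured as a pre-engagement logic gate enforced in software. The Python sketch below is a deliberately simplified, hypothetical illustration; the `EngagementRequest` fields, the confidence threshold, and the checks themselves are assumptions for illustration, not the logic of any actual system.

```python
# Hypothetical "Responsible by Design" logic gate: legal and ethical constraints
# are enforced in code before an engagement can even be proposed to a commander.
# All fields, thresholds, and rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EngagementRequest:
    target_confidence: float      # model's confidence the object is a lawful target
    inside_authorised_zone: bool  # within the pre-approved time/space envelope
    collateral_estimate: int      # estimated civilian harm (proportionality input)
    human_authorisation: bool     # explicit sign-off by the responsible commander

def engagement_gate(req: EngagementRequest) -> bool:
    """Return True only if every hard-coded constraint is satisfied."""
    if req.target_confidence < 0.95:    # distinction: uncertain targets are refused
        return False
    if not req.inside_authorised_zone:  # time/space restriction (ICRC-style limit)
        return False
    if req.collateral_estimate > 0:     # proportionality placeholder: no civilian harm
        return False
    return req.human_authorisation      # meaningful human control is non-negotiable

# Example: a high-confidence, in-zone request is still blocked without human sign-off.
request = EngagementRequest(0.99, True, 0, human_authorisation=False)
print(engagement_gate(request))  # -> False
```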

Way Forward

As the global landscape of warfare shifts toward machine-speed operations, the traditional “wait-and-see” approach to regulation is becoming a liability. A balanced, phased roadmap—centered on human safety and strategic stability—is required to navigate the transition.

  • Phased Normative Evolution (From Soft to Hard Law): Given the current Definitional Deadlock and the rapid pace of R&D, seeking an immediate legally binding treaty is often counterproductive.
    • Voluntary Code of Conduct: As a first step, states should focus on a non-binding Code of Conduct to build trust. 
      • This includes common standards for Verification and Validation (V&V) of AI systems before they are deployed in a theater of operations.
    • Normative Frameworks first: Establishing global norms—such as a shared “taboo” against fully autonomous lethal targeting—prevents miscalculations while leaving room for technological growth. 
      • Once these norms are stabilized, they can serve as the foundation for future binding treaties.
  • Establishing Nuclear “Firebreaks” and “Firewalls”: The most critical immediate priority is the Nuclear-AI Nexus. AI must be strictly decoupled from strategic launch decisions to prevent algorithmic nuclear war.
    • Total Prohibition: A global agreement is needed to enforce a Nuclear Firewall, ensuring that AI systems are never granted autonomous control over Nuclear Command, Control, and Communications (NC3).
    • Safe Decision Support: While AI can assist in processing early-warning data, the final decision to escalate must remain a “Human-in-the-Loop” function.
  • Implementing a Risk-Based Hierarchy: Not all military AI is equally dangerous. To regulate effectively, states should adopt a “Matrix of Risk” that applies controls proportional to the system’s potential for harm, as sketched after this list.
    • High-Stakes AI (Lethal Targeting): These systems must face the strictest transparency requirements, mandatory human-on-the-loop oversight, and rigorous Article 36 Reviews.
    • Low-Stakes AI (Logistics & Maintenance): Applications like predictive maintenance for fighter jets or supply chain optimization can be governed with lighter frameworks to encourage innovation and Operational Efficiency.
  • Institutionalizing Lifecycle Article 36 Reviews: Traditional weapon reviews are “one-and-done.” However, “learning” algorithms change over time, necessitating a new approach.
    • Iterative Legal Audits: Nations should share best practices on how to conduct Continuous Article 36 Reviews. This involves re-evaluating an AI system every time its software is updated or its underlying model “learns” from new battlefield data.
    • Interdisciplinary Teams: Legal reviews must evolve from purely legal exercises into multidisciplinary audits involving data scientists, ethicists, and military commanders.
  • Confidence Building Measures (CBMs) to Prevent an Arms Race: To prevent an unregulated AI Arms Race driven by the fear of falling behind, states must engage in transparency initiatives.
    • Voluntary Data Sharing: States can share high-level data on AI safety protocols and Testing & Evaluation (T&E) processes without revealing sensitive technical secrets.
    • Hotlines for Algorithmic Errors: Establishing dedicated communication channels (similar to Cold War-era “hotlines”) to quickly clarify unintended AI behaviors or “flash” escalations between adversaries.
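
The “Matrix of Risk” proposed above can be pictured as a simple tiering rule that attaches heavier controls to higher-risk applications. The Python sketch below is a hypothetical illustration; the tier names, example applications, and control lists are assumptions for illustration, not drawn from any agreed framework.

```python
# Hypothetical risk-based hierarchy for military AI: controls scale with the
# system's potential for harm. Tiers, applications and controls are illustrative.
from dataclasses import dataclass

@dataclass
class RiskTier:
    name: str
    required_controls: tuple

HIGH_STAKES = RiskTier(
    name="High-stakes (lethal targeting)",
    required_controls=(
        "Mandatory human oversight of every engagement decision",
        "Strict transparency and audit logging",
        "Article 36 legal review repeated on every software update",
    ),
)

LOW_STAKES = RiskTier(
    name="Low-stakes (logistics and maintenance)",
    required_controls=(
        "Standard procurement and safety review",
        "Periodic accuracy and bias monitoring",
    ),
)

# Illustrative mapping of applications to tiers.
APPLICATION_TIERS = {
    "loitering_munition_targeting":   HIGH_STAKES,
    "early_warning_decision_support": HIGH_STAKES,
    "predictive_maintenance":         LOW_STAKES,
    "supply_chain_optimisation":      LOW_STAKES,
}

def controls_for(application: str) -> tuple:
    """Return the controls proportional to the application's risk tier."""
    return APPLICATION_TIERS[application].required_controls

for app, tier in APPLICATION_TIERS.items():
    print(f"{app}: {tier.name} -> {len(tier.required_controls)} controls")
```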

Conclusion

The integration of AI into warfare is an inevitable reality of the 21st century. For India, maintaining Strategic Autonomy means developing powerful AI capabilities while simultaneously leading the global conversation on Meaningful Human Control. By advocating for guardrails that protect International Humanitarian Law (IHL) without stifling innovation, India can ensure that technology remains a tool for security rather than a trigger for unintended catastrophe.
