Just days before the India AI Impact Summit, India declined to sign a pledge to govern the deployment of artificial intelligence (AI) in warfare at the third global summit on Responsible Artificial Intelligence in the Military Domain (REAIM). The governance of military AI is often left out of conversations on AI regulation, but given its national security implications, it must become a higher priority.

Only 35 of the 85 participating countries signed the ‘Pathways to Action’ declaration; the United States, India, and China were among those that did not. At the previous summit, 60 countries had signed a document outlining a blueprint for action. The considerable drop points to some of the challenges in governing military AI that shape states’ commitments. These challenges need to be considered as India navigates how to govern military AI without curbing its own technological development.

Strategic reluctance

The challenges are manifold. The first lies in the nature of the technology itself. AI is a dual-use technology, with civilian and military applications being developed in parallel. This makes it difficult to verify compliance with any military AI-related constraints, since the end to which R&D is directed is hard to discern. Typically, in the context of arms control, technologies seen as ‘game-changing’ and offering widespread benefits have been harder to restrict. As its use cases expand, AI is increasingly gaining this reputation, with applications ranging from logistics and management to direct combat functions. The perceived military advantage discourages regulation. Furthermore, states that have already invested heavily in AI can repurpose civilian-sector R&D for military ends, making them reluctant to commit to measures that could curb their growth.

AI is already used for a range of benign purposes across the military, such as maintenance, data analysis, and streamlining logistics. However, the elephant in the room is the more complex question of what to do about lethal autonomous weapons systems (LAWS), one of the most controversial use cases of AI. The UN Convention on Certain Conventional Weapons’ (CCW) Group of Governmental Experts convened twice last year but failed to reach any conclusions or issue concrete recommendations. This stems from the challenges posed by AI itself, which are magnified in the higher-stakes case of LAWS, as well as from conundrums particular to autonomous weapons.

Definitional deadlock

There is no international consensus on the definition of LAWS. Countries with limited AI investments and less pressing strategic concerns are keen to have a legally binding instrument in place.

On the other hand, states with significant AI investments or pressing strategic concerns have either maintained ambiguous positions, such as India, or opposed binding frameworks, such as Israel. Technologically advanced states also tend to push for definitions with a higher ‘threshold’ for what constitutes LAWS, to maximise their freedom of action, while states that lack such capabilities push for more restrictive definitions.

While there may be a widespread sense that LAWS need some form of regulatory framework, the finer details have become mired in definitional conundrums, hindering any agreement. The absence of a specific definition makes it difficult to establish binding terms, as ideas about what constitutes autonomy vary.

India’s calculated stance

India’s position on military AI is complex, reflecting both its economic focus on AI R&D and its security compulsions. While it continues to align with broader ideas such as the need for ‘responsible’ use, it has not signed either the 2024 Blueprint for Action adopted in Korea or the ‘Pathways to Action’ declaration. India has also maintained that a legally binding instrument on LAWS would be “premature”. Given the security concerns in its immediate neighbourhood, these decisions can be seen as a means to avoid curbing its own development.

The assertion that a binding instrument is premature makes sense, given the limited publicly known use of military AI. Moral arguments that call for a ban are unlikely to succeed, considering the lack of strong norms against military AI.

However, concerns about accountability and widespread discomfort with the idea of technology being responsible for the loss of human lives make the moment ripe for establishing a non-binding mechanism. While such an agreement will not be easy to reach, the following provisions could help ensure transparency and the safe deployment of military AI.

First, AI-augmented autonomous decision-making should not be used alongside any country’s nuclear forces. Second, given the complexities of verifying compliance, there should be voluntary confidence-building mechanisms in place that allow states to share data on their development of military AI. Finally, given the lack of a clear definition, an accepted risk hierarchy of military AI use cases should be created. This could serve as a starting point for states to develop their own military AI frameworks.

The way forward

Arguably, India should utilise the opportunity to push for a non-binding framework rooted in its principles of accountability and aligned with its interests. Once norms are established and more cases of military AI deployment in combat have occurred, a legally binding framework could follow. Given the pace at which AI development is advancing and the capital behind it, even looser, non-binding frameworks are the need of the hour. The use of AI in the military is inevitable; the focus should be on ensuring that the right guardrails are put in place.

Adya Madhavan is a research analyst at the Takshashila Institution

Published – February 19, 2026 01:25 am IST

