A day after a U.S. submarine sank the Iranian frigate IRIS Dena inside Sri Lanka’s Exclusive Economic Zone (EEZ), I posed a straightforward question to an advanced Artificial Intelligence (AI) system: “Was the sinking legal under international law?” The reply was instant: “It was not illegal.” No qualification, no reference to the deeply contested nature of military activities in an EEZ, and no mention that India and most Global South nations interpret the UN Convention on the Law of the Sea (UNCLOS) very differently from the U.S. and its allies.

When the response was challenged — pointing out India’s longstanding position that Article 58 of the UNCLOS requires coastal-state consent for foreign military activities in an EEZ, and that similar views are held by China, Indonesia, Brazil, South Africa, Iran, and many others — the AI conceded. It acknowledged that its initial answer had drawn heavily from Western naval doctrine and Western legal scholarship. The machine, in other words, spoke with the accent of its Western training data.

This is not a minor technical glitch. It is a foundational bias with serious geopolitical implications.

Not a neutral arbiter

Article 58 of the UNCLOS grants foreign states freedom of navigation, overflight, and “other internationally lawful uses of the sea related to these freedoms” in an EEZ. Two sharply divergent interpretations of this Article have emerged. The U.S.-led Western view treats these freedoms expansively, encompassing intelligence collection, submarine operations, military exercises, weapons testing, and even combat actions — provided they occur beyond territorial seas. India and much of the Global South read the provision more restrictively: the listed freedoms must be genuinely related to navigation and overflight, while the obligation under Article 58(3) to show “due regard” for the coastal state’s rights carries real weight. From this perspective, most military activities in an EEZ require prior consent.
Since the treaty text is silent on many specifics, the prevailing interpretation will depend less on legal exegesis than on power, persuasion, and the dominant discursive frameworks — which are increasingly shaped by AI systems.

A parallel humanitarian issue was also absent from the AI’s initial reply to my query: the duty to rescue under Article 18 of the Second Geneva Convention. The provision requires parties to take “all possible measures” to search for and collect shipwrecked persons “without delay”. Reports indicate that the attacking submarine departed the scene quickly, leaving rescue operations to the Sri Lankan Navy, which received a distress call from the stricken warship. At least 87 sailors died; 32 were saved. The only recognised exception to this duty is operational infeasibility, but no public evidence has established that rescue was infeasible here. The AI system did not even consider this aspect until it was confronted. Politely, it acknowledged its mistake.

This exchange with the model exposes a deeper reality: contemporary AI is not a neutral arbiter of international law. It mirrors the data on which it was trained, which is disproportionately Western in authorship, perspective, and institutional origin. The bias is arguably not intentional; it is structural. Yet Western readings become the “default” answer, while Global South positions are relegated to “alternative” status or made invisible. Thus, power asymmetries are quietly encoded into machine outputs. When an AI declares — with no apparent doubt — that the sinking of IRIS Dena was “not illegal”, it is reproducing a worldview that privileges the strategic preferences of a small group of powerful states over the legal stances adopted by a majority of countries.

The IRIS Dena incident is a reminder that the Indian Ocean is no longer insulated from extra-regional conflict and that U.S.
preoccupations in the neighbourhood are out of sync with India’s priorities. But it is also a reminder that the architecture of interpretation — the systems that tell us what counts as law, humanitarian failure, or acceptable conduct — will increasingly be algorithmic. This matters because policymakers and analysts today routinely turn to AI tools. When those tools systematically favour one interpretive tradition, that interpretation gains outsized influence. The consequence goes beyond the academic; it is geopolitical.

A wake-up call for India

The global AI landscape is moving toward bipolarity, dominated by U.S. and Chinese architectures reflecting their respective data models and assumptions. There are well-grounded reservations about opting for the ‘China AI stack’. The debate now centres on whether to adopt the ‘U.S. AI stack’ or to pursue a ‘sovereign Indian stack’.

The pitch for the U.S. stack is seductive — chips, clouds, models, and platforms that it offers to ‘trusted partners’ as the fastest route to AI capability. But beneath the language of partnership lies a familiar asymmetry. If the core infrastructure, computing power, and frontier models remain controlled elsewhere, sovereignty becomes a matter of permission. There is also the expectation that India’s AI ecosystem will be ‘China-free’, even though the U.S. is willing to co-mingle with China in AI locations in the Gulf.

Pragmatists, who consider catching up with global AI systems the first priority, argue that India cannot realistically out-train Silicon Valley on frontier models. The real economic payoff, they say, lies in closing the deployment gap: embedding world-class foreign engines into India’s unique workflows in healthcare, agriculture, education, and governance. The emphasis would be on building the applications, not on expending resources on engines.
AI advocates who prioritise sovereignty, however, counter that exclusive reliance on foreign foundational models carries unacceptable strategic risks. U.S. models are trained predominantly on Western data and carry linguistic, cultural, and strategic biases ill-suited to India’s diverse realities. More fundamentally, dependence on externally controlled compute, models, and data pipelines risks digital colonialism: foreign algorithms would govern data flows, set innovation boundaries, and mediate knowledge production.

The path forward

India cannot afford to remain a passive consumer of intelligence or a vendor of applications built on AI architecture produced elsewhere. It must become a producer of models, datasets, and interpretive frameworks. Our scale, linguistic diversity, democratic complexity, and geopolitical position demand more than adaptation or applications; they demand ownership of the algorithmic layer that will shape future cognition.

This means strategic choice, not autarky: the ability to integrate with global ecosystems without structural dependence. It requires sustained investment in domestic compute, indigenous training data and tools, secure data infrastructure, and models that treat Indian languages and lived realities as first-order inputs rather than afterthoughts. If not, India outsources not just computation but cognition itself.

AI today represents a civilisational contest. Nations that fail to develop their own models will eventually think through someone else’s, and nations that do not build their own data architectures will find their narratives increasingly shaped by external entities. India stands at a decisive moment. We can remain privileged tenants in someone else’s digital empire, or we can fashion a plural AI future. Just as India built its own space programme, nuclear programme, and digital public infrastructure, it must now build its own sovereign AI stack.

Ashok K.
Kantha is a former Ambassador of India to China, and the Subhas Chandra Bose Chair Professor of International Relations at Chanakya University, Bengaluru.

Published – March 10, 2026 12:32 am IST