Over the last few days, the U.S. Department of Defence unceremoniously cast out the AI firm Anthropic, which develops the coding assistant Claude, and designated it a “supply chain risk”, the kind of cattle branding reserved for firms compromised by hostile foreign states. The reason was simple: Anthropic refused to allow its tools to be used for widespread domestic surveillance and fully autonomous weaponry. The high-octane conflict with the U.S. government, which accused Anthropic of pursuing a “woke” and “radical” agenda, is a shocking escalation, coming despite Anthropic’s earlier concessions allowing the U.S. defence establishment to use Claude, which helps create and update code bases quickly.

The conflict also sends a chilling message: a great power can do anything, with or without safeguards, to attain a strategic upper hand. That is a dangerous message to send in a multipolar world where shared standards on safety are increasingly difficult to achieve. This is no longer the world of the Bletchley Park AI safety summit, a gathering that acknowledged the rapidly growing power of AI systems and the shared global imperative to mitigate high-stakes risks. What resonance does that worthy message have when the country on the frontier of AI development so publicly disavows any form of safety control in war, at a time when a reckless attack on Iran, reportedly with some assistance from Claude, grinds on?

Firms need to show some backbone when dealing with outrageous demands that could have chilling consequences in their home country and around the world. After all, if the U.S. demands policy space for domestic surveillance in such full-throated fashion, where does that leave countries where planting spyware on the phones of the political opposition is already the norm? Anthropic showed this backbone, and it deserved the solidarity of its peers. Sadly, that is not what happened: ChatGPT maker OpenAI appeared to give the U.S. defence department the flexibility it sought just hours after Anthropic became persona non grata. Despite OpenAI’s assurances that its agreement contains key safeguards, AI safety has been harmed, with the other superpower and a host of middle powers around the world watching closely. Firms may not be the ideal actors to take a stand, given their profit motives, but as strong institutions are worn down around the world, there are few places left to look for leadership on safety. When a firm with billions of dollars at stake says ‘no’, it is not a promising sign of things to come when another steps in to say ‘maybe, yes’.
Published – March 05, 2026 12:20 am IST