The story so far: The U.S. Department of Defense (styled as the Department of War under the second Trump administration) has entered into a public spat with the AI firm Anthropic, maker of the Claude AI products. The DoD has threatened to designate Anthropic a “supply chain risk,” dissuading a wide variety of firms that work with the U.S. government from patronising Anthropic’s products. ChatGPT maker OpenAI subsequently entered the picture, obtaining an agreement it said was not radically different from what Anthropic wanted.

What is Claude? Why is the US government attaching so much importance to it?

Claude is an AI chatbot that helps organisations and individual users create and modify code. Its Claude Code product has been received extraordinarily well for its capabilities. Claude Code is among the few AI products that run on extremely powerful large language models (LLMs) while also supporting on-device creation and editing of tools, once it has access to a range of software libraries to work with. The product is attractive to the defence establishment because it can iterate on high-tech weapons and defence systems. Recruitment of programmers for these systems tends to be slow, as any critical weapons system is protected by several layers of secrecy, necessitating security clearances that can be time-consuming to obtain. Claude Code has been a compelling proposition for the DoD as it likely allows it to iterate quickly on the programs that drive its technology. While Claude Code does not execute programming tasks perfectly every time, it performs well enough that development timelines have shrunk considerably in organisations that have deployed it widely, especially among personnel who are already experienced software developers.

Why did Anthropic clash with the DoD?
Anthropic was onboarded by the DoD as part of a $200 million contract last June, which allowed the U.S. government to use Claude’s services from dedicated infrastructure hosted by Amazon Web Services. The issues between the firm and the DoD started on January 9, when defense secretary Pete Hegseth published a memorandum entitled “Accelerating America’s Military AI Dominance,” in which he called for the elimination of “blockers to data sharing, Authorizations to Operate (ATOs), test and evaluation and certification, contracting, hiring and talent management, and other policies that inhibit rapid experimentation and fielding”. Anthropic has a much-publicised “constitution” for Claude that discourages the model from supporting widespread surveillance and enabling fully autonomous weaponry. Dario Amodei, the firm’s co-founder, insisted on strong language in the agreement between the DoD and Anthropic to bake in protections against domestic surveillance of U.S. residents and the enabling of fully autonomous weaponry. The firm was given until last Friday to relent and give the DoD completely unrestricted access to its models. It refused, saying in a blog post that it would help the DoD transition to a new provider. The DoD proceeded to classify Anthropic as a supply chain risk, a designation usually applied to firms whose practices are so dodgy that their products could give foreign adversaries a backdoor into critical systems. While this designation only bars DoD suppliers and partners from using Claude on systems dedicated to the DoD, there are concerns that executives may err on the side of caution and sever ties with Claude entirely.

What is OpenAI’s agreement? How is it different?

OpenAI negotiated an agreement with the DoD that the former claims contains the same protections against surveillance and fully autonomous weaponry that Anthropic sought.
It is not fully clear why OpenAI was able to land this deal while Anthropic was cast out. “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols,” a portion of the agreement made public by OpenAI says. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.” Anthropic is reported to have sought stronger legal language in the agreement that would prohibit the use cases described above even if they were later legalised. “We think our red lines are more enforceable here because deployment is limited to cloud-only (not at the edge), keeps our safety stack working in the way we think is best, and keeps cleared OpenAI personnel in the loop,” OpenAI said in a statement.
“We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”

Published – March 03, 2026 03:27 pm IST