New Delhi | Updated: Mar 18, 2026 09:37 AM IST
Amid reports that the United States military has been using artificial intelligence (AI) tools in its ongoing war on Iran, top AI companies such as OpenAI and Anthropic are looking to hire experts who can help devise guardrails for when their software is used in situations like military conflicts.
Anthropic, for instance, is looking to recruit a chemical weapons and high-yield explosives expert to try to prevent “catastrophic misuse” of its software. OpenAI is hiring a researcher in “biological and chemical risks”.
The moves come as AI systems rapidly become embedded in modern warfare, from intelligence processing to battlefield planning.
The growing use of AI in warfare has come under intense scrutiny after reports emerged that Anthropic’s AI model Claude was used by the US military during operations against Iran, even as Washington and the company remain locked in a bitter dispute over the technology’s military applications.
Anthropic vs the Pentagon
Claude, a large language model developed by the AI startup Anthropic, has been widely deployed across US national security agencies for tasks such as intelligence analysis, operational planning and cyber operations. United States Department of Defense systems have used the technology in modelling battle scenarios and analysing intelligence data.
However, tensions escalated earlier this year after the Pentagon designated Anthropic as a “supply chain risk”, effectively ordering federal agencies to phase out the company’s technology within six months. The decision followed disagreements over how the military could use Claude, with Anthropic insisting on safeguards preventing the AI from being used for mass domestic surveillance or for developing fully autonomous weapons systems.
Despite the order, multiple reports suggest that Claude continued to play a role in the US military campaign in Iran. The AI system is believed to have been used for tasks such as target identification, intelligence assessment and simulating possible battlefield outcomes during airstrike planning.
The revelations have sparked controversy because the alleged use of the technology came after the Trump administration directed federal agencies to stop using Anthropic’s AI tools, highlighting the military’s reliance on advanced AI systems for modern warfare.
What does this mean for future use of AI in wars?
The dispute reflects a deeper clash between Silicon Valley’s attempts to set ethical boundaries for AI and the Pentagon’s desire for unrestricted access to cutting-edge technologies. While the US military argues that it should be able to deploy AI tools for “all lawful purposes,” Anthropic has maintained that private companies should retain some control over how their models are used.
The fallout has also spilled into the defence tech ecosystem. Companies such as Palantir Technologies, which integrate AI systems into military software platforms, have acknowledged that their tools remain integrated with Claude even as the Pentagon attempts to transition away from Anthropic’s technology.
Meanwhile, Anthropic has challenged the Pentagon’s designation in court, arguing that the “supply chain risk” label is unjustified and politically motivated. At the same time, internal Pentagon memos suggest the department may allow limited exemptions where Anthropic’s tools remain critical to national security operations.
© The Indian Express Pvt Ltd