Pentagon Used AI ‘Claude’ in Iran Strikes After Trump Ban
A stunning twist in America's AI battle has just surfaced, and it raises serious questions about who really controls military technology in the United States.
Reports now indicate that the US military used Claude, the artificial intelligence model developed by Anthropic, during recent strikes on Iran, despite President Donald Trump ordering federal agencies to immediately cut ties with the company. The directive came just hours before the joint US-Israel bombardment began. Yet according to multiple reports, the AI system was still active inside military operations.
Claude is not just a chatbot. It is an advanced AI model designed to process vast amounts of information, analyze intelligence and simulate complex scenarios. In this case, it was reportedly used to assist with intelligence assessments, battlefield simulations and even target selection. That suggests the technology is deeply embedded in US defense systems and not something that can be switched off overnight.
President Trump publicly denounced Anthropic, calling it politically biased and announcing a full ban on its tools across government agencies. The clash escalated after Anthropic objected to how its AI had previously been used in military operations. The company maintains that its policies prohibit applications involving mass surveillance or fully autonomous weapons. Defense officials, however, argue that the military must have full access to lawful tools needed to protect national security.
This confrontation highlights a larger issue. Artificial intelligence is no longer experimental inside defense systems. It is operational. It is integrated. And in high-stakes scenarios like strikes on Iran, it may influence decisions that carry life-and-death consequences. The Pentagon has acknowledged that phasing out Anthropic’s tools will take time, reportedly up to six months, because of how widely deployed they have become.
Meanwhile, OpenAI, led by Sam Altman, has reportedly reached an agreement with the Department of Defense to supply AI systems for classified networks. So even as one AI company exits under political pressure, another is stepping in.
The bigger question is this: who sets the limits for AI in warfare? Technology companies? The White House? Or the Pentagon itself? As artificial intelligence becomes more powerful and more central to military planning, the balance between innovation, ethics and national security is being tested in real time.
This story is far from over. The legal, political and military consequences could reshape how AI is used in global conflict. Stay with us as we continue to track developments in this rapidly evolving intersection of technology and war.