OpenAI Signs Pentagon Deal Amid Anthropic Ban
OpenAI CEO Sam Altman announced Friday that the company signed a deal allowing its AI tools to be used in the military’s classified systems. The agreement comes hours after President Donald Trump ordered federal agencies to stop using Anthropic’s AI tools and labeled the company a “supply chain risk.”
Anthropic’s designation followed reported disagreements with the Pentagon over restrictions on autonomous weapons and mass surveillance. Altman said OpenAI’s agreement reflects similar guardrails, including limits on domestic surveillance and human responsibility in the use of force.
Altman also said OpenAI would deploy engineers to the Pentagon to safeguard model behavior. Anthropic said it plans to legally challenge the designation, which typically applies to firms with ties to foreign adversaries. Both the Pentagon and OpenAI have been asked to clarify how the two companies' agreements differ.
Altman said: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.”
Why This Matters
- The deal reshapes which AI providers can work within U.S. defense systems.
- The dispute highlights tensions over safeguards tied to military AI deployment.
- The “supply chain risk” label imposes significant compliance burdens on contractors.
What’s Next
- Anthropic’s legal challenge is expected to test the basis of the designation.
- Further clarification may emerge on how OpenAI’s safeguards compare in practice.
- Additional AI firms could face pressure to adopt similar defense terms.
