AI Safety Meets the War Machine
When Anthropic became the first major AI company cleared by the US government for classified use last year, the news did not make a major splash. But this week, a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations.
The so-called Department of War might even designate Anthropic a supply chain risk — a scarlet letter usually reserved for companies that do business with countries under scrutiny by federal agencies. That designation would mean the Pentagon refuses to work with any firm that uses Anthropic's AI in its defense work.
"Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," said chief Pentagon spokesperson Sean Parnell.
The move sends a message to other companies as well: OpenAI, xAI, and Google, which currently hold Department of Defense contracts for unclassified work, are jumping through the requisite hoops to obtain their own high-level clearances.