Microsoft’s decision to file a supporting court brief for Anthropic in its battle against the Pentagon’s supply-chain risk designation signals the technology industry’s collective refusal to let the military dictate the ethical terms on which AI companies operate. The brief, submitted to a federal court in San Francisco, calls for a temporary restraining order against the designation. It was accompanied by a separate brief from Amazon, Google, Apple, and OpenAI, making the filing a comprehensive display of industry solidarity.
Anthropic’s legal challenge began after the company refused to sign a $200 million contract without protections preventing the use of its Claude AI for mass surveillance of US citizens or for autonomous lethal weapons. After those negotiations collapsed, Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and the Pentagon’s technology chief publicly ruled out any prospect of renegotiation. Anthropic responded by filing two simultaneous lawsuits challenging the designation, one in California and one in Washington, DC.
Microsoft’s brief is grounded in its direct use of Anthropic’s AI in systems it builds for the US military and in its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional federal agreements with defense, intelligence, and civilian agencies worth several billion dollars more. Microsoft publicly argued that responsible AI governance and robust national defense are complementary rather than competing priorities, and that advancing both requires partnership between government and industry.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of ideological retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, and said that this judgment, not ideology, was the legitimate technical and ethical basis for its contract demands. Anthropic also noted that the designation had never before been applied to a US company.
Congressional Democrats are separately pressing the Pentagon for information about whether AI was involved in a strike in Iran that reportedly killed more than 175 civilians at an elementary school. Their formal inquiries focus on AI targeting systems, human oversight, and the potential role of specific AI tools in the attack. The convergence of these legislative inquiries with Anthropic’s lawsuits and the industry’s unified legal response is creating a defining moment for the governance of artificial intelligence in American national security.