A legal battle over AI companies' right to set ethical limits on military use of their technology has drawn in Microsoft and four other tech giants, which have filed court briefs supporting Anthropic in its lawsuit against the Pentagon's supply-chain risk designation. Microsoft filed its brief in a San Francisco federal court, calling for a temporary restraining order, while Amazon, Google, Apple, and OpenAI joined a separate supporting filing. The coordinated legal response marks a rare alignment of the technology industry against what the companies characterize as government overreach.
Anthropic's case against the Pentagon was triggered by the company's refusal to allow its Claude AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons under a $200 million contract. Defense Secretary Pete Hegseth labeled the company a supply-chain risk after negotiations broke down, and the government began cancelling Anthropic's federal contracts. Anthropic responded with two simultaneous lawsuits, in California and Washington, D.C., arguing the designation was unconstitutional and unprecedented.
Microsoft's brief is grounded in its direct stake: the company uses Anthropic's technology in military systems and is a partner in the Pentagon's $9 billion cloud computing contract, alongside additional federal agreements spanning defense, intelligence, and civilian agencies. Microsoft publicly argued that the government and the technology sector must cooperate to ensure AI serves national security without crossing ethical lines on surveillance or autonomous military action.
Anthropic's lawsuits characterize the supply-chain risk designation as unconstitutional ideological punishment for the company's publicly stated AI safety positions. The company disclosed that it does not consider Claude currently safe or reliable enough for lethal autonomous decision-making, which it said was the genuine basis for its contract conditions. The Pentagon's technology chief publicly stated that there was no chance of renegotiation.
Congressional Democrats are separately demanding answers from the Pentagon about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school. Their formal letters ask whether AI targeting tools were involved and what level of human review was applied. These parallel legislative and legal challenges are together forcing a fundamental public debate about the governance of artificial intelligence in American military operations.