Leveraging its status as one of the Pentagon’s most trusted and embedded technology partners, Microsoft has filed a court brief in support of Anthropic’s legal battle against the Defense Department, a move that could influence AI policy for decades. The brief, submitted to a federal court in San Francisco, argued that a temporary restraining order was necessary to prevent cascading harm to defense and commercial technology supply chains. A separate supporting brief was filed jointly by Amazon, Google, Apple, and OpenAI.
Anthropic’s legal challenge stems from the Pentagon’s decision to label it a supply-chain risk after the company refused to allow its Claude AI to be used for mass surveillance or autonomous lethal weapons during a $200 million contract negotiation. Defense Secretary Pete Hegseth formalized the designation, leading to the cancellation of Anthropic’s existing government contracts. Anthropic responded by filing lawsuits in California and Washington, D.C., arguing the designation was unconstitutional retaliation for its AI safety positions.
Microsoft’s intervention is informed by its own direct use of Anthropic’s AI in federal systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional multibillion-dollar agreements with government agencies and has a deep commercial interest in ensuring Anthropic remains a viable supplier to the government market. Microsoft’s public statement framed the issue as requiring cooperation between government and industry to achieve both security and responsible AI governance.
Anthropic’s court filings argued that the supply-chain risk label was being misused to punish the company for publicly advocating for AI safety, violating its First Amendment rights. The company also disclosed that it does not currently have confidence in Claude’s safety and reliability in lethal autonomous warfare contexts, which it said justified the restrictions it sought in the contract. The Pentagon’s technology chief publicly ruled out any renegotiation following the formal designation.
Congressional scrutiny is adding pressure from another angle, with House Democrats formally asking the Pentagon whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at an elementary school. Lawmakers want to know whether AI targeting systems were involved and whether human oversight was exercised at critical decision points. These legislative inquiries, combined with Microsoft’s powerful court intervention, are forcing a long-overdue national conversation about the governance of AI in warfare.
Picture Credit: Rawpixel (Public Domain)
Microsoft Uses Its Pentagon Influence to Back Anthropic in a Case That Could Shape AI Policy for Decades