The United States government has unveiled new guidelines aimed at regulating civilian artificial intelligence (AI) contracts. The move comes in the wake of a high-profile conflict between the Department of Defense and AI company Anthropic, according to a report by the Financial Times.
Pentagon labels Anthropic a security risk
The dispute escalated on Thursday when the Pentagon designated Anthropic as a "supply-chain risk", effectively prohibiting government contractors from using the company’s AI technology in military-related work. The designation followed months of disagreement over safeguards implemented by Anthropic, which the Defense Department argued were excessive.
The General Services Administration (GSA), the federal body responsible for procurement, has played a central role in drafting the new regulations. According to the Financial Times, the draft guidelines would require AI companies seeking government contracts to grant the U.S. government an irrevocable license to use their systems for any lawful purpose.
Stricter rules for civilian AI contracts
The guidelines, which would apply specifically to civilian contracts, are part of a wider federal initiative to tighten the procurement and regulation of AI technologies. The Financial Times notes that the measures align closely with separate regulations under consideration for military contracts. Among the new requirements is a mandate that contractors ensure "partisan or ideological judgments" are not intentionally embedded in the data outputs of their AI systems.
The GSA draft also requires companies to disclose whether their AI models have been "modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework", the Financial Times reported.
Anthropic’s federal contract terminated
As part of this broader effort, the GSA recently terminated its OneGov deal with Anthropic, ending the company's availability to the Executive, Legislative, and Judicial branches of government through the GSA's pre-negotiated contracts. Josh Gruenbaum, commissioner of the GSA's Federal Acquisition Service, confirmed the action in an email to Reuters, stating: "It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic."
Gruenbaum added, "As directed by the President, GSA has terminated Anthropic’s OneGov deal – ending their availability to the Executive, Legislative, and Judicial branches through GSA’s pre-negotiated contracts."
The White House has yet to issue a public statement regarding the matter, according to Reuters.
Conclusion
The newly introduced guidelines mark a significant step in the U.S. government’s effort to exert greater control over the use of AI technologies in federal contracts. While the measures aim to ensure both transparency and security, the fallout from the Anthropic dispute signals heightened scrutiny for AI firms looking to work with the government.