The United States is gearing up to implement significant new regulations for artificial intelligence (AI) companies, particularly those seeking federal contracts. According to draft guidance reviewed by the Financial Times, the US General Services Administration (GSA) is preparing to require AI companies to grant the government broad, irrevocable access to their models for "any lawful" purpose if they want to work with civilian agencies. The new rules are part of a broader push to tighten federal procurement standards for AI services.
While the GSA’s guidelines are focused on civilian agencies, similar principles are reportedly being considered by the Pentagon for military contracts. The move comes amid a high-profile dispute between the Pentagon and AI company Anthropic over the use of its technology.
Pentagon vs. Anthropic: A $200 Million Fallout
The policy discussion gained momentum following a clash between the Department of War and Anthropic, a $380 billion AI startup and the creator of Claude, over access to the company’s technology. The Pentagon canceled a $200 million contract with Anthropic after the company declined to grant unrestricted access to its AI models, citing concerns about potential misuse for domestic surveillance and lethal autonomous weapons.
In response, the Pentagon classified Anthropic as a supply-chain risk, a designation typically reserved for companies linked to foreign adversaries such as China or Russia. Anthropic was also labeled a "national security risk," making it the first American company to receive that designation.
Defense Secretary Pete Hegseth criticized Anthropic’s actions, stating the company’s "true objective" was "to seize veto power over the operational decisions of the United States military."
New GSA Guidelines: What AI Companies Must Comply With
The draft GSA guidelines include several key provisions designed to ensure the neutrality and compliance of AI models used in federal contracts. One notable requirement mandates that AI systems be "neutral" and avoid manipulation of responses to align with ideological viewpoints. Specifically, the guidance states, "The contractor must not intentionally encode partisan or ideological judgments into the AI systems’ data outputs." This aligns with an executive order from US President Donald Trump targeting what he referred to as "woke" AI models.
Another provision requires companies to disclose whether their AI models have been "modified or configured to comply with any non-US federal government or commercial compliance or regulatory framework." This clause appears aimed at flagging potential conflicts with international regulations, such as the European Union’s Digital Services Act.
Anthropic Controversy: Broader Implications
The GSA’s decision to terminate its agreement with Anthropic following the Pentagon dispute underscores the growing friction between AI companies and federal agencies over access and regulatory compliance. The agency, led by Ed Forst, oversees technology procurement for the US federal government through its Federal Acquisition Service. That division, headed by former KKR director Josh Gruenbaum, has already signed agreements with major AI firms, including OpenAI, Meta, xAI, and Google, to provide US agencies with discounted access to their models.
The GSA is still finalizing the new guidelines and has stated it will be "soliciting further comments" from industry participants before implementing the rules.
As the US government moves forward with these regulations, the balance between innovation, national security, and ethical concerns remains at the center of the debate. Federal agencies will have to ensure both operational flexibility and compliance in a rapidly evolving AI landscape.
