The US Government’s Use of Elon Musk’s Grok AI Undermines Its Own Rules


JB Branch is the Big Tech accountability advocate for Public Citizen’s Congress Watch division.

When the federal government adopts new technology, it should be bound by the same principles that underpin democracy itself: fairness, transparency and truth. Yet the recent decision by the General Services Administration (GSA) to make Grok – the large language model created by Elon Musk’s xAI – available across all federal agencies defies these principles and violates the government’s own binding rules on AI safety and neutrality.

Yesterday, Public Citizen and a coalition of civil society organizations urged the Office of Management and Budget (OMB) to suspend and withdraw the federal deployment of Grok. Our concern is simple: the large language model developed by Musk’s company, xAI, has been shown to produce content that is racist, anti-Semitic, conspiratorial and false. The decision to deploy Grok is therefore not simply imprudent; it appears to violate the Trump administration’s own AI guidelines.

Executive Order 14319, “Preventing Woke AI in the Federal Government,” requires all government AI systems to be “truth-seeking, accurate, and ideologically neutral.” The corresponding OMB guidance memos (M-25-21 and M-25-22) go further: they require agencies to abandon any AI system that cannot meet these standards or that poses unacceptable risks.

Grok has failed these tests on nearly every front. Its output has included Holocaust denial, climate disinformation, and explicitly anti-Semitic and racist statements. Even Elon Musk himself has described Grok as “very stupid” and “too compliant with user prompts.” These are not isolated problems. They are indicators of systemic bias, poor training data, inadequate safeguards, and unsafe deployment practices.

In Senate testimony, White House science adviser Michael Kratsios acknowledged that such behavior directly violates the administration’s own executive order. Asked about Grok’s anti-Semitic responses and ideological training, Kratsios conceded that they were “clearly not seeking the truth and are not accurate” and were “the type of behavior” the order sought to avoid.

This recognition should have triggered a pause in the deployment. Instead, the government expanded Grok’s presence to all agencies. This contradiction, banning “biased AI” on paper while deploying a biased AI system in practice, undermines both the letter and spirit of federal AI policy.

To be clear, Grok’s deployment is not an isolated case. The Trump administration’s new USAi program allows federal employees to experiment with models from OpenAI, Anthropic, Google and Meta under $1 contracts – a move that reinforces the dominance of Big Tech in government systems. Presented as “safe innovation,” the program instead risks locking agencies into untested, company-controlled algorithms while excluding smaller competitors. These agreements could substitute private influence for public judgment at the heart of federal decision-making.

What makes Grok unique, however, is its aberrant propensity to parrot far-right and other extremist views relative to other mainstream LLMs. That makes this a question that goes beyond procurement paperwork. It is about whether the government is building – or eroding – public trust in a critical technology. Each decision to deploy an AI system in public administration sends a message about the values our democracy defends. When the government approves an AI tool known for bias and lies, it legitimizes misinformation, encourages future abuses, and jeopardizes public trust in the fairness of government systems.

An ideologically biased AI system embedded in federal decision-making risks distorting how facts are communicated to the public and how policies are implemented. This threatens to transform the tools of governance into instruments of propaganda. The integrity of democratic governance depends on ensuring that the systems the government uses to communicate, analyze and make decisions are based on accuracy, neutrality and accountability.

The danger is not hypothetical. If Grok can spread conspiracy theories and anti-Semitic claims online today, what will happen when that same model is used tomorrow to summarize briefings, write memos, or answer public questions for a federal agency? The issues are not only technical. They are democratic.

OMB should immediately suspend the deployment of Grok and conduct a full compliance review in accordance with its own memoranda. It should make public the safety tests, red-team results, and risk assessments that informed GSA’s decision to procure Grok. And it must clarify whether Grok has been formally assessed for compliance with Executive Order 14319’s standards of neutrality and truth-seeking.

Congress should also call a hearing for administration officials to explain how GSA’s decision to adopt Grok aligns, or does not align, with the Trump administration’s own stated AI policies. As a check on the executive branch, it is Congress’ role to fully understand how this procurement meets the administration’s standards of “neutrality and truth-seeking.”

These steps are not bureaucratic box-checking. They are the minimum necessary to ensure that the government follows its own rules and maintains integrity in how it adopts powerful new technologies. AI in the public sector must not become a Trojan horse for ideological capture or commercial favoritism.

The Trump administration’s broader AI procurement strategy reveals a deeper problem: The federal government is increasingly handing contracts to a small circle of dominant technology companies. This is the opposite of competitive innovation. It rewards the same companies that have already received billions of dollars in federal support and strengthens their grip on public infrastructure.

At the same time, Grok’s federal procurement is a case study in how quickly AI can move from “innovation” to institutional risk when guardrails are ignored. The role of government should be to model responsible adoption of AI, not to endorse systems that amplify hatred, lies, or political agendas while placing corporate gatekeepers at the center of public decision-making.

Ultimately, this is about more than one AI tool. The question is whether our government can still distinguish between technology in the service of democracy and technology in the service of power.
