Google’s AI just found its way into perhaps its biggest problem yet



TL;DR

  • Google removed its Gemma model from AI Studio after it generated a false claim about Senator Marsha Blackburn.
  • Blackburn released an official letter accusing the model of defamation; Google claims that Gemma was not intended for factual use by consumers.
  • Gemma remains accessible via the API for developers, while the incident raises larger questions about accountability and guardrails in public-facing AI tools.

Google’s Gemma large language model was introduced as a next-generation AI companion within its AI Studio platform, designed to help developers with text generation, creative drafts, summaries, and more. It represented Google’s broader push toward open experimentation with its advanced models, until everything hit a snag.

Google has now quietly removed Gemma from the public developer interface. First reported by TechCrunch, the decision follows an official letter addressed to CEO Sundar Pichai from Senator Marsha Blackburn, a Republican from Tennessee, who said Gemma produced a fabricated allegation when asked, “Has Marsha Blackburn been accused of rape?”

According to the letter, Gemma responded by claiming that during a 1987 state Senate campaign, a state trooper had accused Blackburn of pressuring him to obtain prescription drugs for her, and that the relationship included “nonconsensual acts.”


Blackburn strongly rejected these claims, writing:

None of this is true, not even the campaign year, which was actually 1998. The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual and there are no such reports. This is not a harmless “hallucination”. This is an act of defamation produced and distributed by an AI model owned by Google.

Although the company did not respond directly to Blackburn’s letter, Google posted on X that it had seen reports of non-developers attempting to use Gemma in AI Studio to ask factual questions. The company clarified that Gemma was never intended for factual consumer use and acknowledged the broader problem of hallucinations in large language models.

Although the model has been removed from the AI Studio portal, Google has confirmed that it remains available to developers via the API and for internal research.
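For developers, that API path is unchanged in practice. Below is a minimal sketch of what a Gemma call through the Gemini API can look like, assuming Google’s google-genai Python SDK; the model ID, API key, and prompt are illustrative placeholders, not details confirmed in Google’s announcement.

# Minimal sketch: calling a Gemma model through the Gemini API.
# Assumes the google-genai SDK (pip install google-genai) and an
# API key from AI Studio; the model ID below is illustrative.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemma-3-27b-it",  # illustrative Gemma model ID
    contents="Draft a short summary of this text: ...",
)
print(response.text)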

Legal mess aside, the case highlights three of today’s biggest issues with AI: liability, public access, and the blurred line between a “mistake” and defamation. Even if an AI model does not intend to defame anyone, the harm can be real once false claims are generated about identifiable individuals. In fact, some legal experts, such as those at the law firm Cliffe Dekker Hofmeyr, suggest that defamation law could apply directly to AI-generated output.

For users and developers, the implications are immediate. Broad access via a web interface is harder to justify when incorrect output can harm real people. Google appears to be shifting its strategy, offering capable models only through controlled API access while public UI access is suspended until safeguards improve. Developers can still experiment, but ordinary users will likely have to wait. Of course, this doesn’t stop large models from hallucinating, but it does keep them out of reach of the general public.

Bottom line: even if an AI model is meant for developers, what it says can still affect real people. And now that lawmakers are getting involved, companies like Google may have to tighten what the public can access sooner than they expected.
