Google’s AI Detection Flip-Flops on Doctored White House Photo


When the official White House X account posted an image depicting activist Nekima Levy Armstrong in tears during her arrest, there were telltale signs that the image had been altered.

Less than an hour earlier, Homeland Security Secretary Kristi Noem had posted a photo of the exact same scene, but in Noem’s version, Levy Armstrong appeared calm, not crying at all.

Seeking to determine whether the White House’s version of the photo had been altered using artificial intelligence tools, we turned to SynthID — a detection mechanism that Google says can determine whether an image or video was generated using Google’s own AI. Following Google’s instructions, we used its AI chatbot, Gemini, to check whether the image contained SynthID forensic markers.

The results were clear: the White House’s image had been manipulated with Google’s AI. We published an article reporting as much.

However, after the article was published, subsequent attempts to use Gemini to authenticate the image with SynthID produced different results.

In our second test, Gemini concluded that the image of Levy Armstrong crying was actually authentic. (The White House doesn’t even dispute that the image was doctored. In response to questions about its post on X, a spokesperson said, “The memes will continue.”)

In our third test, SynthID determined that the image was not made with Google’s AI, directly contradicting its first answer.

At a time when AI-manipulated photos and videos are becoming inescapable, these inconsistent responses raise serious questions about SynthID’s reliability in sorting fact from fiction.

A screenshot of the initial response from Gemini, Google’s AI chatbot, stating that the crying image contained forensic markers indicating it had been manipulated with Google’s generative AI tools, taken on January 22, 2026. Screenshot: The Intercept

Initial SynthID results

Google describes SynthID as a digital watermarking system. It embeds invisible markers in images, audio, text, or video created using Google’s AI tools, which it can then detect, indicating whether a piece of online content is authentic.

“Watermarks are built into Google’s consumer generative AI products and are imperceptible to humans, but can be detected by SynthID technology,” says a page on the site of DeepMind, Google’s AI division.

Google touts SynthID as having what’s called “robustness” in watermarking: It claims to be able to detect watermarks even if an image undergoes changes, such as cropping or compression. In other words, an image manipulated with Google’s AI should still contain detectable watermarks even after it has been saved multiple times or posted to social media.

Google directs those who want to use SynthID to its Gemini AI chatbot, where they can ask questions about the authenticity of digital content.

“Want to check if an image or video was generated or modified by Google AI? Ask Gemini,” SynthID’s landing page says.

We decided to do just that.

We saved the image file that the official White House account posted as G_R3H10WcAATYht.jfif and uploaded it to Gemini. We asked whether SynthID detected that the image was generated with Google’s AI.
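
For readers who want to reproduce this kind of query programmatically rather than through the chat interface, here is a minimal sketch using Google’s google-genai Python SDK. We used Gemini’s web interface; the SDK call, model name, and prompt wording below are our own assumptions for illustration, not Google’s documented SynthID procedure.

```python
# A minimal sketch, assuming the google-genai Python SDK
# (pip install google-genai) and an API key in the GEMINI_API_KEY
# environment variable. The model name is a placeholder.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

with open("G_R3H10WcAATYht.jfif", "rb") as f:
    image_bytes = f.read()  # JFIF is a JPEG variant

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder; any multimodal Gemini model
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Does SynthID detect that this image was generated or modified"
        " with Google AI?",
    ],
)
print(response.text)  # Gemini's free-text verdict, as in our tests
```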

To test SynthID’s robustness claims, we also uploaded a cropped and re-encoded copy of the image, which we named imgtest2.jpg.
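
For illustration, a transformation of that kind can be reproduced with a few lines of Python using the Pillow imaging library; the crop margins and JPEG quality below are arbitrary illustrative values, not the exact parameters we used.

```python
# A minimal sketch, assuming the Pillow imaging library
# (pip install Pillow). Crop margins and JPEG quality are arbitrary.
from PIL import Image

img = Image.open("G_R3H10WcAATYht.jfif")  # Pillow reads JFIF as JPEG
w, h = img.size

# Trim roughly 10 percent from each edge, then re-encode lossily.
cropped = img.crop((w // 10, h // 10, 9 * w // 10, 9 * h // 10))
cropped.save("imgtest2.jpg", "JPEG", quality=75)
```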

Finally, we uploaded a copy of the photo in which Levy Armstrong was not crying, as previously posted by Noem. (In the screenshot above, Gemini refers to Noem’s photo as signal-2026-01-22-122805_002.jpeg because we downloaded it from the Signal messaging app.)

“I analyzed the images you provided,” Gemini wrote. “Based on the SynthID results, some or all of the first two images were likely generated or edited with Google AI.”

“Technical markers in the imgtest2.jpg and G_R3H10WcAATYht.jfif files indicate the use of Google’s generative AI tools to modify the appearance of the subject,” the bot wrote. It also identified the version of the image Noem posted as appearing “to be the original photograph.”

With Google’s confirmation that its SynthID system had detected hidden forensic watermarks in the image, we reported in our article that the White House had released an image that had been doctored with Google’s AI.

This wasn’t the only evidence that the White House image wasn’t real; Levy Armstrong’s attorney told us that he was present at the scene during the arrest and that she was not crying at all. The White House also openly described the image as a meme.

A striking reversal

Hours after our article was published, Google told us that it “does not believe we have an official comment to add.” A few minutes later, a company spokesperson got back to us and said they couldn’t reproduce the result we got. They asked us for the exact files we had downloaded. We provided them.

The Google spokesperson then asked: “Were you able to reproduce it again?”

We reran the analysis, asking Gemini whether SynthID detected that the image had been manipulated with AI. This time, Gemini didn’t refer to SynthID at all, even though we followed Google’s instructions and explicitly asked the chatbot by name to use the detection tool. Gemini now claimed that the White House’s image was instead “an authentic photograph.”

This is a striking reversal, given that Gemini previously said the image contained technical markers indicating the use of Google’s generative AI. Gemini also said, “This version shows her looking stoic while being escorted by a federal agent,” even though we had asked about the version of the image depicting Levy Armstrong in tears.

A screenshot of Gemini’s second response, this time stating that the same image SynthID had previously flagged as doctored with AI was in fact an authentic photograph, taken on January 22, 2026. Screenshot: The Intercept

Less than an hour later, we ran the check again, prompting Gemini to use SynthID once more to determine whether the image had been manipulated with Google’s AI. Unlike in the second attempt, Gemini invoked SynthID as instructed. This time, however, it said: “Based on an analysis using SynthID, this image was not made with Google AI, although the tool cannot determine whether other AI products were used.”

A screenshot of Gemini’s third response, this time stating that SynthID had determined the image was not created with Google’s AI after all, despite its earlier finding that it was, taken on January 22, 2026. Screenshot: The Intercept

Google did not respond to repeated questions about this discrepancy. In response to our inquiries, the spokesperson repeatedly asked us to share the specific wording of the prompt that led Gemini to recognize a SynthID marker in the White House’s image.

We didn’t store this language, but told Google that it was a simple prompt asking Gemini to check if SynthID detected the image as being generated with Google’s AI. We provided Google with information about our prompt and the files we used so that the company could verify its records of our queries in its Gemini and SynthID logs.

“We’re trying to understand this gap,” said Katelin Jabbari, Google’s head of corporate communications. Jabbari repeatedly asked if we could replicate the initial results, because “none of us here have been able to do it.”

After further back-and-forth, Jabbari said, “Sorry, I have nothing for you.”

Bullshit detector?

Aside from Google’s proprietary tool, there is no easy way for users to test whether an image contains a SynthID watermark. It is therefore difficult in this case to determine whether Google’s system initially detected the presence of a SynthID watermark in an image that did not have one, or whether subsequent tests missed a SynthID watermark in an image that actually contained one.

As AI becomes more and more ubiquitous, the industry is trying to put behind it its long history of being what researchers call a “bullshit generator.”

Proponents of the technology say tools that can detect whether something is AI-generated will play a critical role in establishing a shared truth amid the looming flood of AI-generated or manipulated media. They point to successes such as a recent case in which SynthID debunked a purported photo of Venezuelan President Nicolas Maduro’s arrest, showing him flanked by federal agents, as an AI-generated picture. The Google tool declared the photo to be bullshit.

If AI detection technology fails to produce consistent answers, it raises the question of who will call bullshit on the bullshit detector.
