AI Police Reports: Year In Review


In 2024, the EFF wrote our first blog post on what could go wrong when police let AI write their police reports. Since then, the technology has proliferated at a worrying rate. Why? The most popular generative AI tool for writing police reports is Axon’s Draft One, and Axon also happens to be the largest provider of body cameras to police departments in the United States. As we have written, companies are increasingly bundling their products, making it easier for police to purchase more technology than they might need or than the public is comfortable with.

We have good news and bad news.

Here’s the bad news: AI-generated police reports remain unproven, lack transparency, and are downright irresponsible, especially when the criminal justice system, informed by police reports, decides on people’s freedom. The Prosecuting Attorney’s Office in King County, Washington, has banned police from using AI to write police reports. As their memo read: “We do not fear technological advancements – but we do have legitimate concerns about certain products currently on the market… AI continues to develop and we hope that we will reach a point in the near future where these reports can be trusted. At this time, our office has made the decision not to accept any police narratives produced with the help of AI.”

In July of this year, the EFF published a two-part report on how Axon designed Draft One to defy transparency. The police upload their body camera audio into the system, the system generates a report that the officer is supposed to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence indicating which parts of the report were written by AI and which parts were written by the officer. This means that if a police officer is caught lying on the stand – as evidenced by a contradiction between their court testimony and their earlier police report – they could point to the conflicting parts of the report and say, “the AI wrote that.” Draft One is designed to make that claim difficult to refute.
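To see what is lost when the initial draft is erased, consider a minimal sketch: if the AI-generated draft were retained, a plain text diff would show exactly which lines the officer changed. The report texts below are invented for illustration and do not come from any real report or from Axon’s system.

```python
import difflib

# Hypothetical AI-generated first draft (invented for illustration).
ai_draft = [
    "The subject was observed standing near the vehicle.",
    "The subject appeared agitated and refused verbal commands.",
]

# Hypothetical final report after the officer's edits (also invented).
final_report = [
    "The subject was observed standing near the vehicle.",
    "The subject complied with verbal commands after a brief exchange.",
]

# A unified diff makes the provenance of each line auditable:
# lines prefixed with "-" came from the AI draft and were removed,
# lines prefixed with "+" were added or rewritten by the officer.
diff = list(difflib.unified_diff(
    ai_draft, final_report,
    fromfile="ai_draft", tofile="final_report", lineterm="",
))
for line in diff:
    print(line)
```

The point of the sketch is simply that the comparison is trivial when both versions exist; deleting the first draft is what makes the “the AI wrote that” defense unfalsifiable.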

In this video of a roundtable discussion about Draft One, Axon’s senior product manager for generative AI is asked (at 49:47) whether or not it is possible to see, after the fact, which parts of a report were suggested by the AI and which were edited by the officer. His response (emphasis and definition of RMS added):

So we don’t store the original draft and that’s intentional, and it’s really because the last thing we want to do is create more disclosure issues for our customers and our attorneys’ offices. So basically the officer generates this draft, they make their changes, if they submit it into our Axon system of record, then that’s the only place we store it. If they copy and paste it into their third-party RMS [records management system], as soon as they’re done with that and close their browser tab, it’s gone. In fact, it’s never stored in the cloud, so you don’t have to worry about extra copies floating around.

Yeah!

All this obfuscation also makes it incredibly difficult for people outside of police departments to determine whether their city’s officers are using AI to write reports — and even harder to use public records requests to scrutinize those reports. This is why, this year, the EFF also published a comprehensive guide to help the public tailor their records requests as precisely as possible to learn more about AI-generated reports.

Ok, now here’s the good news: People who think AI-written police reports are irresponsible and potentially dangerous to the public are fighting back.

This year, two states passed bills that are an important first step in reining in AI-generated police reports. Utah’s SB 180 requires that police reports created in whole or in part by generative AI include a disclosure that the report contains AI-generated content. It also requires officers to certify that they have verified the accuracy of the report. California’s SB 524 went even further. It requires police to disclose, in the report itself, whether AI was used to prepare it in whole or in part. Additionally, it prohibits vendors from selling or sharing the information a police department has provided to the AI. The bill also requires departments to retain the first draft of the report, so that judges, defense attorneys, or auditors can easily see which parts of the final report were written by the officer and which parts were written by the computer.

In the coming year, many more states are expected to join California and Utah in regulating, or even banning, police from using AI to write their reports.

This article is part of our Year in Review series. Read more articles on the fight for digital rights in 2025.
