Can an AI detect when a human is lying – and should we trust it if it can? Artificial intelligence, or AI, has seen many recent advancements and continues to evolve in scope and capabilities. A new study led by Michigan State University further investigates how well AI can understand humans by using it to detect human deception.
In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with more than 19,000 AI participants to examine how well AI characters could detect deception and truth in human subjects.
"This research aims to understand the extent to which AI can facilitate deception detection and simulate human data in social science research, as well as to caution professionals about using large language models for lie detection," said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.
To evaluate AI against human deception detection, researchers drew on truth-default theory, or TDT. TDT suggests that most people are honest most of the time and that we are inclined to believe that others are telling us the truth. This theory gave the researchers a baseline for comparing how the AI behaves with how people behave in the same types of situations.
“Humans have a natural bias toward the truth — we generally assume others are honest, whether they are or not,” Markowitz said. “This tendency is thought to be evolutionarily useful, because constantly doubting everyone would take a lot of effort, make daily life difficult, and strain relationships.”
To analyze AI character judgment, researchers used the Viewpoints AI research platform to assign audiovisual or audio-only human media to the AI for judgment. The AI judges were asked to determine whether the human subject was lying or telling the truth and to provide a justification. The researchers varied several factors, such as media type (audiovisual or audio only), contextual background (information or circumstances that help explain why something happens), base lie and truth rates (the proportions of honest and deceptive communication), and AI persona (identities created to act and speak like real people), to see how each affected the AI's detection accuracy.
For example, one of the studies found that the AI showed a lie bias: it was far more accurate at identifying lies (85.8%) than truths (19.5%). In short questioning contexts, the AI's deception-detection accuracy was comparable to that of humans. In a non-interrogation context, however (e.g., when evaluating statements about friends), the AI displayed a truth bias and aligned more closely with human performance. Overall, the results revealed that the AI was more lie-biased and far less accurate than humans.
“Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, the AI was found to be context sensitive – but that did not make it more effective at detecting lies,” Markowitz said.
The final results suggest that AI judgments do not match human judgments or accuracy, and that being human may be an important limitation, or boundary condition, for the application of deception detection theories. The study highlights that using AI for lie detection may seem unbiased, but the industry needs to make significant progress before generative AI can be used for deception detection.
“It’s easy to see why people might want to use AI to detect lies – it seems like a high-tech, potentially fair and perhaps unbiased solution. But our research shows we’re not there yet,” Markowitz said. “Researchers and practitioners need to make major improvements before AI can actually handle deception detection.”
Journal reference:
Markowitz, D. M., & Levine, T. R. (2025). The (in)effectiveness of AI characters in deception detection experiments. Journal of Communication. doi.org/10.1093/joc/jqaf034