Identifying AI-generated or AI-modified images will become more and more important as content-creation tools become more accessible. Years ago, it took an expert a great deal of time and effort to create a convincing fake. Now it can be done with a few keystrokes or clicks.
A recent study by Microsoft's AI for Good Lab reports that people's ability to identify AI images is "only slightly better than flipping a coin." Participants in a game that involved identifying AI images had an overall success rate of just 62%.
In August 2024, Microsoft shared a "Real or Not" quiz that challenged people to identify AI-generated or AI-modified images. This game served as the basis for the study.
About 287,000 image assessments were made, with more than 12,500 people participating.
"Generative AI is evolving quickly, and new or updated generators are released frequently, producing ever more realistic output," the study concluded.
"It is fair to assume that our results likely overestimate people's ability today to distinguish real images from AI-generated ones."
The study also found that AI detection tools are more reliable than humans at identifying AI images, though the team stressed that automated tools make mistakes too.
Identification of AI images
The study results suggest that people are better at identifying images of humans than landscapes. The success rate for images of people was around 65%, while nature photographs were identified correctly only 59% of the time.
The researchers suggest that these results could stem from humans' strong ability to recognize faces. Our brains are wired to recognize faces, so the connection seems likely.
A recent study from the University of Surrey discusses how human brains are "drawn to faces and see faces everywhere."
Participants saw a similar success rate when evaluating a mix of real and AI-generated images (62%) and when focusing only on AI-generated images (63%).
Several of the best AI image generators were used to create the images presented to quiz takers. Images created by a generative adversarial network (GAN) had the highest failure rate (55%).
The researchers stressed that the game was not designed to compare the photorealism of images created by different models.
"We should not assume that a model's architecture is responsible for the aesthetics of its output; the training data is," the paper said. "The model architecture only determines how successfully a model can imitate its training set."
The images with the lowest success rates included elements that did not look natural but were authentic. For example, the lighting in an image may seem "off" at first glance, but unusual lighting conditions caused the effect, not AI.
The team behind the study is developing its own AI detector, which reportedly has a success rate of more than 95% on both real and AI-generated images. It will be interesting to see whether AI detectors can keep pace with the AI tools they are designed to detect.
Take the quiz
The Real or Not quiz is still live, which means anyone can take it. I admit that I scored just 47% when trying to identify images that had been modified or created by AI.
In some cases, there were clear signs of AI use, such as leftover artifacts or objects that were cut off or incomplete. But in many cases, I genuinely couldn't tell whether an image was real or fake.
I suppose I shouldn't be embarrassed, since thousands of people played the same game and were only slightly better than a coin flip at identifying AI images.
After taking the quiz, please share your score and experience in the comments below. I'd like to know whether an informed audience is better or worse at identifying AI images.