Can you tell human faces from AI? Most people can’t


You can’t hide your lyin’ AIs.

Not only is artificial intelligence taking over the Internet, but it is also becoming indistinguishable from reality. Scientists have found that people cannot tell the difference between human and AI-generated faces without special training, according to a dystopian study published in the journal Royal Society Open Science.

“Generative adversarial networks (GANs) can create realistic synthetic faces, which can potentially be used for nefarious purposes,” the researchers wrote.

Recently, TikTok users exposed AI-generated fake doctors who were scamming social media users with unfounded medical advice.


In fact, these synthetic faces have become so convincing that people are fooled into thinking the counterfeit faces are more real than the genuine article, Live Science reported.

To prevent people from being tricked, researchers designed a five-minute training program to help users unmask AI impostors, according to lead study author Katie Gray, an associate professor of psychology at the University of Reading in the United Kingdom.

The training helps people detect telltale flaws in AI-generated faces, such as a tooth positioned dead-center in the mouth, an odd hairline, or unnatural-looking skin texture. These fake faces are also often more symmetrical and proportionate than their authentic counterparts.

The participants’ AI detection powers improved significantly after the short training.

The team tested the technique by conducting a series of experiments comparing the performance of a group of typical recognizers and super recognizers – defined as those who excel at facial recognition tasks.

These latter participants, drawn from the volunteer database of the Greenwich Face and Voice Recognition Laboratory, had scored in the top 2% on tests requiring them to recall unfamiliar faces.

In the first test, organizers displayed a face on the screen and gave participants ten seconds to determine whether it was real or fake. Typical recognizers spotted only 30% of the counterfeits, while super recognizers detected just 41%, both worse than if they had simply guessed at random.

The second experiment was almost identical, except it involved a new group of guinea pigs who received the aforementioned five-minute training on how to spot errors in AI-generated faces.

Participants viewed 10 faces and received real-time feedback on the accuracy of their AI detection, followed by a review of common rendering errors.

Sophisticated AI-generated images have allowed bad actors to deceive people online.

When this trained group then took the original test, their accuracy improved, with super recognizers identifying 64% of the fugazi faces, while their typical counterparts recognized 51%.

Trained participants also took longer to examine the faces before giving their response.

“I think it was encouraging to see that our fairly short training procedure significantly improved performance in both groups,” Gray said.

Of course, the study has some caveats, namely that participants were tested immediately after the training, so it is unclear how well the effect would hold up over time.

Nonetheless, it is essential to equip people with tools to distinguish humans from robots, in light of the plethora of AI impersonators flooding social media.

And technology’s chameleonic feats aren’t just visual.

Recently, researchers claimed that the chatbot ChatGPT had passed the Turing test, meaning it is effectively indistinguishable from its flesh-and-blood counterparts.
