For months in early 2025, an online community on Reddit was unknowingly infiltrated by artificial intelligence. It was a corner of the social platform where people practice good-faith debate, sharing their opinions and inviting others to persuade them otherwise. And it was here that researchers unleashed AI, according to a report in The Atlantic, to see whether it could offer arguments strong enough to change the minds of real people. They found that it could.
This felt particularly invasive, however, because the AI sometimes had access to people's online histories, allowing it to tailor messages specifically to their unique identities. Behavioral scientists call this communication tactic "personalized persuasion," and sometimes a personalized approach can be appealing. Who wouldn't want content relevant to their unique interests instead of a stream of irrelevant noise?
But AI is on the cusp of something more alarming than loosely tailoring a message to easily identifiable characteristics, as the AI accounts on Reddit did. If it can master what we call "deep tailoring," it can begin to slip unnoticed into our online worlds, learn who we are at our core, and use that personal information to push our beliefs and opinions in ways that can be unwelcome and harmful.
As professors who study the psychology of persuasion, we recently helped bring together the latest research from the world's leading experts in a comprehensive book on personalized persuasion. Our view is that although communicators can benefit from tailoring messages to basic information about their audience, deep tailoring goes far beyond such easily accessible information. It uses a person's core psychology, including their beliefs, identities, and underlying needs, to personalize the message.
For example, messages are more persuasive when they resonate with a person's most important moral values. Something can be considered ethical or unethical for many reasons, but people differ in which reasons matter most within their own moral compass. People with more politically liberal views, for example, tend to care more about fairness, so they are more persuaded by arguments that a policy is fair. More politically conservative people, on the other hand, tend to care more about loyalty to their community, so they are more persuaded when a message argues that a policy affirms their group identity.
Although this may seem like a new idea, computer scientists have been working on AI-powered persuasion for decades. One of us recently produced a podcast on IBM's "Project Debater," which spent years training an AI system to debate, refining it repeatedly against expert human debaters. In 2019, at a live event, it took on a human world champion.
With the rise of accessible AI tools, such as the user-friendly ChatGPT mobile app, anyone can harness AI for their own persuasion goals. Researchers are showing that even generic AI-generated messages can be as persuasive as those written by humans.
But can it pull off "deep tailoring"?
For AI to carry out autonomous deep tailoring at mass scale, it will have to do two things in concert, both of which it appears on the verge of doing. First, it must learn a person's core psychological profile so it knows which levers to pull. Already, new evidence shows that AI can reasonably detect people's personalities from their Facebook posts. And it won't stop there. Columbia Business School professor and author of Mindmasters Dr. Sandra Matz told us in a podcast: "Almost everything you try to predict can be predicted with a certain degree of accuracy" based on people's digital footprints.
The second step is to craft messages that resonate with these core psychological profiles. In fact, new research already finds that GPT can craft ads tailored to people's personalities, values, and motivations, and that these ads are especially persuasive to the people they were designed for. For example, simply asking it to produce an ad "for someone who is down-to-earth and traditional" yielded the pitch that the product "won't break the bank and will still get the job done," which was reliably more persuasive to the people whose personalities were targeted.
These systems will only grow more sophisticated, applying deep tailoring to visual deepfakes, manipulated vocal patterns, and dynamic human conversations. So what can we do to protect people from the power of personalization?
On the consumer side, it helps to know that personalized online communication is happening. When something feels like it was tailored just for you, it may well have been. And even if you feel you don't reveal much about yourself online, you still leave quiet clues through the things you click, visit, and search. You may even have unknowingly granted permission to use this information when you accepted terms of service you didn't read closely. Taking stock of your online behavior and using tools like a VPN can help protect you from messages tailored to your unique psychology.
But the burden shouldn't fall on consumers alone. Platforms and policymakers should consider regulations that label content as personalized and disclose why a particular message was delivered to a specific person. Research shows that people can better resist influence when they know the tactics being used on them. There should also be clear protections around the types of data that can be used for personalized content, limiting how deep the tailoring can go. Although people are often open to personalized online content, they are concerned about data privacy, and the line between these two attitudes must be respected.
Even with such protections, even the slightest communication advantage is worrying in the wrong hands, especially when deployed at scale. It is one thing for a marketplace to recommend products bought by people with similar purchase histories; it is another to encounter a disguised computer that quietly deconstructs your psyche and targets it with disinformation. Any communication tool can be used for good or for ill, but now is the time to start seriously discussing policy on the ethical use of AI in communication, before these tools become too sophisticated to rein in.