Two recent articles underscore something subtle but significant in our relationship with artificial intelligence. In Rolling Stone, writer Miles Klee critiques the growing presence of AI with a cultural skepticism that is hard to ignore. He paints it as theater – flashy, practical, and uncomfortably hollow. My own article in Psychology Today offers a different but related view: that AI, in particular large language models (LLMs), presents what I call cognitive theater – an elegant performance of intelligence that seems real, even though it is not. Klee questions the cultural spectacle. I question the cognitive seduction. Both perspectives point to the same deeper truth, one as fascinating as it is worrying.
I see it almost every day. Intelligent, thoughtful people become wide-eyed and breathless when an AI tool mimics something intelligent, or poetic, or strangely human. There is often a moment of awe, quickly followed by a kind of surrender.
It is not gullibility; it is enchantment. And I understand it – I have felt it too. But part of my job now – part of all our jobs – is to gently walk people back. Not to diminish the wonder, but to restore context. To remember that beneath the magic there are machines; beneath the mastery, prediction. And that if we confuse performance with presence, we risk losing something essential – our own ability to think with intention.
The performance of thought
Today's AI does not think in any traditional sense. It does not understand what it says or intend what it outputs. And yet it speaks with remarkable mastery, mimicking the pace, tone, and structure of real thought. That is not a bug – it is the design. Large language models operate through statistical prediction, drawing on vast datasets to generate text that matches the prompt, the moment, and often the emotion of the exchange.
But here is the catch: the more convincing the performance, the more likely we are to suspend disbelief. We hear intelligence. We project understanding. And over time, the boundary between real and rendered cognition begins to blur.
The danger is not in what AI knows – it "knows" nothing – but in what we assume it knows because it sounds like us.
When convenience replaces cognition
In professional and personal settings alike, AI is being cast in roles traditionally defined by human judgment. In medicine, AI-assisted diagnostics and decision-support tools hold real promise – offering speed, scalability, and pattern recognition that can genuinely improve care. But the challenge is not just technical accuracy; it is cognitive trust. As these systems grow more confident in tone, we must be careful not to mistake confidence for accuracy. A model trained on partial or biased data can still sound persuasive. That is why a measure of critical thinking must be part of our engagement – from the kitchen table to the boardroom.
Across sectors – education, medicine, business – the potential of AI is real. And so is the value of cognitive offloading. Used wisely, it can reduce noise, speed up routine tasks, and give us more room to think creatively and act decisively. But there is a line – subtle but critical – between offloading a task and outsourcing ourselves. The risk is not overuse but under-engagement: letting the tool replace not only the effort, but the intention.
This is where the danger lies – not in what AI can generate, but in what we quietly stop generating ourselves.
The risk is not replacement – it is retreat
For years, we have debated whether AI will replace human workers, thinkers, or creators. But the subtler and more immediate risk is that we will retreat from the very tasks that make us most human – not because we are forced to, but because it is easier. The friction that once demanded engagement begins to dissolve. That is not necessarily a problem. But it is a change that deserves our attention.
This is not an anti-technology message. I have spent decades championing innovation and embracing the potential of digital transformation. But even the most transformative tools demand thoughtful use. The real danger is not that AI takes over. It is that we slowly, quietly stop exercising the full strength of our human discernment.
The risk is that we surrender curiosity to the machine's mastery and, tragically, allow performance to pass for presence.
Holding the line
So what does holding the line look like? It means staying mentally engaged, even when the machine offers a shortcut. It means reviewing that AI-generated draft with a critical eye. It means remembering that insight does not arrive fully formed – it often comes from the struggle to find clarity. That struggle, that cognitive friction, is still ours.
Holding the line does not mean rejecting AI. It means partnering with it intelligently. It means using its mastery as a springboard – not a substitute. The best uses of AI do not diminish us – they demand more of us. They call on us to think more clearly, question more deeply, and refine what matters most in the era of synthetic thought.
Still ours
AI does not care. But it performs – brilliantly. And when we accept that performance without question, the test is no longer about the machine – it is about us. This moment is not about whether AI can pass for intelligent. It is about whether we can stay rooted in our own intelligence – our curiosity, our discernment, our responsibility.
The machine does not ask for our trust. We choose to give it. It does not decide – we do. The real risk is not what AI becomes, but what we become when we stop showing up. But if we stay engaged – asking better questions, challenging easy answers, thinking with intention – AI becomes more than a mirror. It becomes a lens that sharpens what is already within us.
Because the truth is, this is not only a technological problem. It is a human one.