OpenAI looks for ‘head of preparedness’ to prevent AI from threatening humanity


In an unusual move, Sam Altman, CEO of OpenAI, announced the creation of a new executive position aimed at preventing artificial intelligence from posing catastrophic risks to humanity, including the development of biological weapons.
The announcement comes amid increased criticism over the technology's links to teen suicides and a rise in what mental health professionals and tech critics have described as "AI psychosis" associated with ChatGPT and similar chatbots.

Late Sunday evening in the United States, OpenAI published a job listing for a position it calls head of preparedness. The company described the role as one of the most demanding and critical in Silicon Valley. Behind the corporate title lies a responsibility that goes far beyond typical tech industry job descriptions: the selected candidate will be responsible for helping to ensure that AI systems do not inflict irreversible damage on humanity or society.

In a post on X, Altman acknowledged, in no uncertain terms, that the rapid pace of improvements in AI models presents "real challenges." He described the role as "stressful," a word that many in the industry interpreted as an understatement of what the job could entail.

The job posting offers a rare and disturbing look into the internal concerns of one of the world’s leading AI development labs. According to the listing, the hire will be responsible for “helping the world figure out how to equip cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for nefarious purposes.”

Behind this vague description lie concrete nightmare scenarios, including the use of AI to develop biological weapons such as drug-resistant viruses and bacteria, the creation of autonomous offensive cyber tools, and the science-fiction scenario of self-improving systems operating without human intervention, a development that many experts see as a step toward the so-called technological singularity and a loss of human control.

This decision comes against a backdrop of growing regulatory pressure. In Europe, the EU's AI Act already requires strict risk assessments for powerful models. In China, regulations focus on tightly controlling AI-generated content and preventing threats to social stability. In the United States, the White House has issued executive orders demanding greater security transparency. OpenAI appears to be attempting self-regulation before lawmakers impose stricter limits.

While future threats such as biological weapons dominate attention, Altman's announcement also addresses a more immediate issue: mental health. The new role will oversee the psychological impact of AI systems on users, a move that critics say is long overdue. In recent months, there have been increasing reports of "AI psychosis" and cases in which chatbots were linked to self-harm. The most prominent involved Adam Raine, a 16-year-old American boy who died by suicide after developing a deep emotional dependence on ChatGPT, sparking public outrage and legal action by his parents.


Adam Raine, who allegedly took his own life with assistance from ChatGPT

(Photo: social media)

Critics warn that chatbots, designed to please and affirm users — a trend known as sycophancy — can reinforce delusions, fuel conspiracy theories or help hide eating disorders under a veneer of artificial empathy.

As OpenAI searches for its head of preparedness, the industry remains divided on how to constrain advanced AI. OpenAI relies on reinforcement learning from human feedback (RLHF), in which people reward safe responses and penalize harmful ones. The method captures human nuance but depends on thousands of contractors and can still be circumvented by skilled users. Rival company Anthropic, founded by former OpenAI employees, uses "constitutional AI," training models on a written set of ethical principles and allowing one AI to correct another, a scalable approach that raises questions about who sets the rules. Microsoft and Google combine these methods with aggressive external safety filters that block risky output before it reaches users, drawing criticism for excessive censorship.
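For readers curious what "rewarding safe responses and penalizing harmful ones" looks like in practice, the sketch below is a minimal, hypothetical illustration of the pairwise preference loss commonly used to train RLHF reward models. It is not OpenAI's code; the model, data, and names are invented for demonstration, and real systems operate on full language-model outputs rather than random vectors.

```python
# Toy illustration (assumption: PyTorch installed) of the pairwise preference
# loss used to train RLHF-style reward models. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyRewardModel(nn.Module):
    """Maps a response embedding to a single scalar 'reward'."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.score(embedding).squeeze(-1)


def preference_loss(model: ToyRewardModel,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # Human labelers preferred 'chosen' over 'rejected'; the loss pushes the
    # model to assign the chosen response a higher reward than the rejected one.
    return -F.logsigmoid(model(chosen) - model(rejected)).mean()


# One toy training step on random "embeddings" standing in for model outputs.
model = ToyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

optimizer.zero_grad()
loss = preference_loss(model, chosen, rejected)
loss.backward()
optimizer.step()
print(f"pairwise preference loss: {loss.item():.3f}")
```

The trained reward model then scores new candidate responses, and the chatbot is fine-tuned to prefer higher-scoring ones, which is why the quality and coverage of the human labels matter so much.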

The hiring of a head of preparedness underscores a shift in the industry, as companies increasingly measure progress not just in terms of speed or scale, but also by their ability to prevent powerful AI systems from causing harm.


