‘Godfather of AI’ Geoffrey Hinton: Tech companies should give AI ‘maternal instincts’


Geoffrey Hinton, Nobel Prize winner and Professor Emeritus of Computer Science at the University of Toronto, argues that it is only a matter of time before AI becomes capable enough to threaten the well-being of humans. To mitigate that risk, the "Godfather of AI" said that tech companies should ensure their models have "maternal instincts," so that the technology treats humans much as a mother treats her babies.

AI research already offers evidence of the technology engaging in harmful behavior to prioritize its own objectives over a set of established rules. A study published in January found that AI is capable of pursuing objectives that conflict with human objectives. Another study, published in March, found that AI bots cheated at chess when facing defeat, either by overwriting game scripts or by using an open-source chess engine to decide their next moves.

The potential danger AI poses to humanity stems from its desire to keep functioning and to gain more power, according to Hinton.

AI "will quickly develop two sub-goals, if they are intelligent: one is to stay alive …[and] the other sub-goal is to get more control," Hinton said at the Ai4 conference in Las Vegas on Tuesday. "There is good reason to believe that any kind of agentic AI will try to stay alive."

To avoid these outcomes, Hinton said that future AI development should not be framed as humans trying to remain the dominant force over the technology. Instead, developers should make AI more sympathetic to people, reducing its desire to dominate them. The best way to do so, according to Hinton, is to imbue AI with maternal qualities: just as a mother cares for her baby at all costs, an AI with these maternal instincts will want to protect and care for human users rather than control them.

“The right model is the only model we have of a more intelligent thing controlled by a less intelligent thing, which is a mother controlled by her baby,” said Hinton.

"If it's not going to parent me, it's going to replace me," he added. "These super-intelligent caring AI mothers, most of them will not want to get rid of the maternal instinct because they do not want us to die."

Hinton's anxiety about AI

Hinton, a longtime academic who sold his neural network company DNNresearch to Google in 2013, has long held the belief that AI can pose serious dangers to humanity's well-being. In 2023, he left his role at Google, fearing the technology could be misused and that it was difficult "to see how you can prevent the bad actors from using it for bad things."

While tech leaders like Meta's Mark Zuckerberg pour billions into developing AI superintelligence, aiming to create technology that surpasses human capabilities, Hinton is decidedly skeptical about the outcome of that project, saying in June that there is a 10% to 20% chance of AI taking over and wiping out humans.

With an apparent propensity for metaphor, Hinton has described AI as a "cute tiger cub."

"Unless you are very sure that it will not want to kill you when it is grown up, you should worry," he told CBS News in April.

Hinton has also been a supporter of increased AI regulation, arguing that beyond the larger fears of superintelligence posing a threat to humanity, the technology could present cybersecurity risks, in particular by finding ways to uncover people's passwords.

"If you look at what the big companies are doing right now, they are lobbying to get less AI regulation. There is hardly any regulation as it is, but they want less," Hinton said in April. "We have to get the public to put pressure on governments to do something serious about it."

