In Silicon Valley’s latest vibe shift, leading AI bosses are no longer so eager to talk about AGI


Once upon a time – which is to say, uh, as recently as earlier this year – Silicon Valley could not stop talking about AGI.

OpenAI CEO Sam Altman wrote in January: “We are now confident we know how to build AGI.” That came after he told a Y Combinator podcast at the end of 2024 that AGI could be achieved in 2025, and posted in 2024 that OpenAI had “achieved AGI internally.” OpenAI was so AGI-obsessed that its head of sales nicknamed the team the “AGI Sherpas,” and its former chief scientist, Ilya Sutskever, led fellow researchers in chants of “Feel the AGI!”

OpenAI’s partner and biggest backer, Microsoft, published a paper in 2023 claiming that OpenAI’s GPT-4 model showed “sparks of AGI.” Meanwhile, Elon Musk founded xAI in March 2023 with a mission to build AGI, a development he said could come as soon as 2025 or 2026. Demis Hassabis, the Nobel laureate co-founder of Google DeepMind, has long framed his company’s mission around achieving AGI. Meta CEO Mark Zuckerberg said his company was committed to building “full general intelligence” to power the next generation of its products and services. Dario Amodei, the co-founder and CEO of Anthropic, while saying he dislikes the term AGI, has said “powerful AI” could arrive by 2027 and usher in a new era of health and abundance – if it doesn’t end up killing us all. Eric Schmidt, the former Google CEO turned prominent tech investor, said at a conference in April that we would have AGI “within three to five years.”

Now, AGI talk is out – amounting to a wholesale vibe shift toward pragmatism rather than chasing utopian visions. In a CNBC appearance this summer, for example, Altman called AGI “not a super useful term.” In the New York Times, Schmidt – yes, the same guy who was talking up AGI in April – urged Silicon Valley to stop fixating on superhuman AI, warning that the obsession distracts from building useful technologies. AI pioneer Andrew Ng and White House AI czar David Sacks have both called AGI “overhyped.”

AGI: underdefined and overhyped

What happened? Well, first, a little background. Everyone agrees that AGI stands for “artificial general intelligence.” And that’s about all everyone agrees on. People define the term in subtly but importantly different ways. Among the first to use it was physicist Mark Avrum Gubrud, who wrote in a 1997 research paper: “By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.”

The term was then picked up and popularized by AI researcher Shane Legg, who would go on to co-found Google DeepMind with Hassabis, along with computer scientists Ben Goertzel and Peter Voss, in the early 2000s. They defined AGI, according to Voss, as an AI system that could learn to “reliably perform any cognitive task that a competent human can.” That definition has its problems – for example, who decides what counts as a competent human? And since then, other AI researchers have developed different definitions that cast AGI as an AI as capable as a human expert at all tasks, rather than a merely “competent” one. OpenAI was founded in late 2015 with the explicit mission of developing AGI “for the benefit of all,” and it added its own twist to the definition. The company’s charter describes AGI as a highly autonomous system that can “outperform humans at most economically valuable work.”

But whatever AGI is, the important thing these days, it seems, is to not talk about it. The reason has to do with growing concerns that progress in AI development may not be galloping along as fast as industry insiders touted just a few months ago – and growing signs that all the AGI talk fed inflated expectations the technology itself couldn’t live up to.

Among the biggest factors in AGI’s sudden fall from grace seems to have been the rollout of OpenAI’s GPT-5 model in early August. A little more than two years after Microsoft’s claim that GPT-4 showed “sparks” of AGI, the new model landed with a thud: incremental improvements wrapped in a routing architecture, not the breakthrough many were waiting for. Goertzel, who helped coin the term AGI, reminded the public that while GPT-5 is impressive, it is nowhere near true AGI – which would involve real understanding, continuous learning, or grounded experience.

Altman’s retreat from AGI language is particularly striking given his previous stance. OpenAI was built on AGI hype: AGI is in the company’s founding mission, it helped raise billions in capital, and it underpins the partnership with Microsoft. A clause in their agreement even stipulates that if OpenAI’s nonprofit board declares it has achieved AGI, Microsoft’s access to future technology would be restricted. Microsoft – after investing more than $13 billion – has reportedly pushed back against that clause and even considered walking away from the agreement. Wired has also reported on an internal OpenAI debate over whether publishing a paper on measuring AI progress could complicate the company’s ability to declare it had achieved AGI.

A “very healthy” vibe shift

But whether observers see the vibe shift as a marketing decision or a market response, many – especially on the business side – say it’s a good thing. Shay Boloor, chief market strategist at Futurum Equities, called the move “very healthy,” noting that markets reward execution, not vague promises of superintelligence someday.

Others stress that the real shift is away from a monolithic AGI fantasy and toward domain-specific “superintelligences.” Daniel Saks, CEO of agentic AI company Landbase, argued that “the hype cycle around AGI has always rested on the idea of a single centralized AI that becomes omniscient,” but said that’s not what he’s seeing. “The future lies in decentralized, domain-specific models that achieve superhuman performance in particular fields,” he told Fortune.

Christopher Symons, chief AI scientist at digital health platform Lirio, said the term AGI was never useful: those promoting AGI, he explained, “divert resources from more concrete applications where AI advances can most immediately benefit society.”

Still, the retreat from AGI rhetoric doesn’t mean the mission – or the phrase – has disappeared. Leaders at Anthropic and DeepMind continue to describe themselves as “AGI-pilled,” a bit of insider slang. Even that phrase is contested, though; for some it signals the conviction that AGI is imminent, while others say it’s simply the belief that AI models will keep improving. But there’s no doubt there is now more hedging and downplaying than doubling down.

Some still warn of urgent risks

And for some, that hedging is exactly what makes the risks more urgent. Former OpenAI researcher Steven Adler told Fortune: “We must not lose sight of the fact that some AI companies are explicitly aiming to build systems smarter than any human. AI isn’t there yet, but whatever you call it, this is dangerous and warrants real seriousness.”

Others accuse AI leaders of changing their tune on AGI to muddy the waters and avoid regulation. Max Tegmark, president of the Future of Life Institute, said that Altman calling AGI “not a super useful term” is not scientific humility but a way for the company to dodge regulation while continuing to build ever more powerful models.

“It’s smarter for them to talk about AGI in private with their investors,” he told Fortune, adding that “it’s like a cocaine dealer saying it’s not clear whether cocaine is really a drug” because it’s so complex and hard to pin down.

Call it AGI or call it something else – the hype may fade and the vibe may shift, but with so much money, so many jobs, and so much safety at stake, the real questions about where this race is heading are only just beginning.
