Welcome to Eye on AI! Fortune AI reporter Sharon Goldman here, filling in for Jeremy Kahn, who is on vacation. In this edition… The General Services Administration approves OpenAI, Google, and Anthropic for the federal list of AI vendors… the consequences of the AI spending boom for the U.S. economy… AI sales tool Clay raises $100 million at a $3.1 billion valuation.
Only in the Bay Area would spending a Saturday on AI agents, with nearly 2,000 students, researchers, and tech insiders packed into UC Berkeley, count as a completely normal weekend plan. As I picked up my badge at the day-long Agentic AI Summit and watched the line snake through the student union lobby, it felt less like an academic conference and more like Silicon Valley's version of a hot New York brunch spot.
That was certainly due to the speaker lineup, which was stacked with top AI researchers and scientists, including Jakob Pachocki, chief scientist at OpenAI; Ed Chi, vice president of research at Google DeepMind; Bill Dally, chief scientist at Nvidia; Ion Stoica, co-founder of Databricks and Anyscale and a UC Berkeley professor; and Dawn Song, a pioneering UC Berkeley professor focused on AI security.
The popularity could also have been due to the buzzy subject matter: AI agents, generally defined as AI-powered systems that can carry out tasks, mostly autonomously, using other software tools. Think not just suggesting a vacation itinerary, but also booking the flight and making the hotel reservation.
As my colleague Jeremy Kahn put it in a recent article, "this kind of automation is a long-standing fever dream. For the past decade, companies have adopted 'robotic process automation,' or RPA: software that could automate repetitive tasks, such as cutting and pasting data between database programs. Agentic AI is supposed to be both more flexible and more powerful, adapting to a company's needs."
In a January 2025 blog post, OpenAI CEO Sam Altman said: "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies."
But despite the hype, the overall message of the Agentic AI Summit was cautious and grounded: agents may be the buzziest trend in AI right now, but the technology still has a long way to go, speakers said. AI agents, unfortunately, are not always reliable. They may not remember what came before.
Google DeepMind's Chi, for example, highlighted the gap between what agents can do in curated demos and what is still needed in real-world production environments. Pachocki flagged concerns about the safety, security, and reliability of agentic systems, especially when they are integrated into sensitive applications or operating autonomously.
"I still don't think agents are really living up to their promise," said Sherwin Wu, head of engineering for the OpenAI API. "Some of the more generic use cases have worked, but my day-to-day work doesn't feel all that different with agents."
Although today's agents don't yet live up to the massive hype (consider Salesforce CEO Marc Benioff's recent claim that the shift to a digital workforce means he will be the "last CEO of Salesforce who only managed humans"), speakers at the Agentic AI Summit still had plenty of optimism to share. Databricks' Stoica expressed enthusiasm for infrastructure improvements that make it easier to build agentic systems. Nvidia's Dally suggested that continued hardware progress will enable more powerful and more efficient agent behavior. Several pointed to "narrow wins" in specific domains, such as coding.
Today's AI agents may still have growing pains, but judging from the packed house at UC Berkeley, the industry is keeping its eye on the prize: AI agents that can operate reliably in the real world. The payoff, they believe, will be worth the wait.
With that, here's more AI news.
Sharon Goldman
[email protected]
@Sharongoldman
AI in the news
US agency approves OpenAI, Google, Anthropic for federal AI vendor list. Reuters reported today that the General Services Administration, the central purchasing arm of the U.S. government, added OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude to a list of approved AI vendors in order to accelerate government agencies' adoption of the technology. The tools will be available to agencies through a platform with contract terms already in place. The GSA said the approved AI vendors have committed to complying with federal standards.
The AI spending boom could have real consequences for the U.S. economy. According to the Washington Post, Big Tech's record investment in artificial intelligence (more than $350 billion this year from Google, Meta, Amazon, and Microsoft) is becoming a major economic force, even as the broader U.S. economy shows signs of slowing. While job growth cools, this massive wave of AI spending is fueling data center construction and stimulating demand for chips, servers, and networking equipment, boosting GDP growth by as much as 0.7% in 2025. But economists warn about the risks of relying on AI spending to prop up the economy.
AI sales tool Clay raises $100 million at a $3.1 billion valuation. The New York Times reported that Clay, which helps sales reps and marketers find new leads and turn them into customers, raised $100 million at a $3.1 billion valuation. The round was led by CapitalG, an investment arm of Google's parent company, Alphabet. Other participants included Meritech Capital Partners and Sequoia Capital. The deal comes about six months after the startup raised funding at a $1.25 billion valuation.
Eye on AI research
Google DeepMind's new "world model" creates interactive simulations in real time. Google DeepMind has unveiled Genie 3, a powerful new AI system that can generate rich, interactive virtual worlds from simple text prompts, rendering dynamic, navigable environments in real time at 24 frames per second. But while it's tempting to jump straight to using the model for the ultimate gaming experience, it is actually the latest leap in the company's long-term push toward "world models," AI systems that can learn how the world works and simulate real environments. These are considered essential for training advanced agents and, ultimately, for achieving artificial general intelligence. Unlike previous video generators, Genie 3 lets users move through AI-generated environments that remain visually consistent over several minutes, and even respond to commands such as "make it snow" or "add a character." For now, DeepMind is limiting access to Genie 3 to a small group of researchers and creators while it explores the risks and responsible deployment.
Fortune on AI
North Korean IT worker infiltrations have exploded 220% in the last 12 months, with gen AI weaponized at every stage of the job process —by Amanda Gerut
AI is now conducting job interviews, but candidates say they'd rather risk staying unemployed than talk to another bot —by Emma Burleigh
These charts show how China is pulling ahead of the United States in the race to power AI —by Matt Heimer and Nick Rapp
AI calendar
September 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend.
October 6-10: World AI Week, Amsterdam
October 21-22: TEDAI San Francisco. Apply to attend here.
December 2-7: NeurIPS, San Diego
December 8-9: Fortune Brainstorm AI San Francisco. Apply to attend.
Brain food
Could "depth of thought" be the key to AI reasoning?
A tiny AI model is calling into question what we know about how models learn to reason: researchers at Singapore-based Sapient Intelligence recently published the Hierarchical Reasoning Model (HRM), which is inspired by how the brain thinks, and the results have the AI community buzzing. Although it is 100 times smaller than ChatGPT and trained on only 1,000 examples (with no internet data or step-by-step guidance), HRM solves hard logic problems like Sudoku, maze navigation, and abstract reasoning tasks that stump far larger models. Instead of imitating human language, HRM reasons internally, quietly working through problems in hidden loops, a bit like a person thinking through a puzzle in their head. Its success suggests a radical shift in AI: depth of thought may matter more than scale.