Taking Another Big Leap In AI Progress


“That’s one small step for (a) man … one giant leap for mankind.”

Despite the garbled transcription, it was a giant leap for the world, and one that would not be repeated for another half-century or more. Fast-forward roughly 50 years, and we now wonder not where in the universe we will explore next, but what big step we are going to take, as a global society, in dealing with a technology that seems both alien and formidable.

Sometimes, like Indiana Jones in The Last Crusade, we find ourselves asking not only what the next big leap will be, but whether a leap of faith will carry us to safety or plunge us into danger.

That is a bit what this moment looks like to many people contemplating the uncertainty around artificial intelligence. AI can now write poems, paint pictures and do our accounting. What is the next step?

The proliferation of AI capabilities

One of the main ideas promoted by experts is that the next big step in AI will be, for lack of a better word, “multimodal”.

This means that AI will not be limited to text, as it largely has been to date, answering questions in words on a screen. It will take on depth and dimension, with voice, with robotics. It will generate audio, video, perhaps even physical results, as it drives vehicles, operates equipment, picks fruit.

That is multimodal AI. And it is happening.

“Future AI assistants will not only respond to typed prompts, but will understand a user’s tone of voice, facial expressions, environment and social context,” writes Muhammad Tuhin in ScienceNewsdaily. “The result will be systems that feel more intuitive, adaptable and human in their interactions.”
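To make that concrete, here is a minimal, hypothetical Python sketch of what a multimodal request might look like, where an assistant weighs voice tone, facial expression and environment alongside the typed text. The class and function names are illustrative assumptions, not any specific product’s API.

```python
# A minimal, hypothetical sketch of a multimodal assistant request: the
# assistant considers not just typed text, but audio tone, facial expression
# and situational context. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MultimodalRequest:
    text: str                                 # the typed or transcribed prompt
    voice_tone: Optional[str] = None          # e.g. "calm", "frustrated"
    facial_expression: Optional[str] = None   # e.g. "smiling", "confused"
    environment: Optional[str] = None         # e.g. "noisy kitchen", "office"


def respond(request: MultimodalRequest) -> str:
    """Blend non-text signals into how the assistant frames its answer."""
    style = "neutral"
    if request.voice_tone == "frustrated" or request.facial_expression == "confused":
        style = "reassuring, step-by-step"
    return f"[{style}] Answering: {request.text}"


if __name__ == "__main__":
    req = MultimodalRequest(
        text="Why won't my code compile?",
        voice_tone="frustrated",
        environment="late-night home office",
    )
    print(respond(req))
```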

We must be ready for this kind of evolution.

When AI knows more than all of us

Then there is the point where AI simply becomes smarter than a human, when our own personal agents know more than an entire human community.

Part of this has to do with edge computing, where an LLM can become more powerful and sophisticated while still living on your smartphone. There is the dominant idea among experts and fans of Ray Kurzweil that “everyone will have the most intelligent AI” and that each of us will command a superhuman intelligence, perhaps one linked to others in an ad hoc network. (This Reid Blackman segment is something I found interesting.)

What it looks like

My colleague Ramesh Raskar, along with Stanford professor Tengyu Ma, Anthropic researcher Andi Peng and Carina Hong of Axiom, addressed the likely future of AI and the next big step we are all going to take to get there. (Disclosure: I have consulted for Liquid AI.)

One of the notable ideas came when Raskar invoked Marvin Minsky to suggest that, in the spirit of Minsky’s Society of Mind, the bold new AI will involve distributed intelligence networks, not just a monolithic hive mind. Raskar used the example of a CEO in a company.

“We’re not trying to make the CEO the most intelligent person in the company, like (a) highly centralized intelligence, but we’re saying the CEO is more of an orchestrator … the intelligence is actually all the smart people in that company,” he said, suggesting that thinking of AI as centralized infrastructure may be a mistake. “Let’s look at this possibility that we could actually do all this. You know, we have centralized all the data, centralized all the compute, centralized all the talent: is that the right way to think about (AI) at a global scale, or should it be done differently?”

As an alternative, he described a network approach.

“(One) possibility is that the right way to think about intelligence is that we are actually creating many micro-agents all over the world, and they have access to local tools, local data, local context,” he said. “And as they talk to each other, a kind of global intelligence emerges.”
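As a rough illustration of that idea, here is a small, hypothetical Python sketch: several local agents, each holding only its own local data, and a lightweight orchestrator that gathers and merges their answers rather than knowing everything itself. The names and structure are assumptions for illustration, not a real framework.

```python
# A minimal sketch of the distributed-intelligence idea: many small local
# agents with their own local data, and an orchestrator (the "CEO") that
# simply asks them and combines what comes back. Purely illustrative.
from dataclasses import dataclass, field


@dataclass
class LocalAgent:
    name: str
    local_data: dict = field(default_factory=dict)

    def answer(self, question: str) -> str:
        # In reality this would call a small on-device model over local data.
        fact = self.local_data.get(question, "no local knowledge")
        return f"{self.name}: {fact}"


def orchestrate(question: str, agents: list[LocalAgent]) -> str:
    """The orchestrator need not be the smartest node; it just merges
    what the local agents each know."""
    return " | ".join(agent.answer(question) for agent in agents)


if __name__ == "__main__":
    agents = [
        LocalAgent("phone", {"next meeting": "2pm with the design team"}),
        LocalAgent("laptop", {"next meeting": "draft slides are in ~/talks"}),
    ]
    print(orchestrate("next meeting", agents))
```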

He also raised the premise that corporate strategy may already be trending in this direction.

“If you think about big companies, that’s actually what they do. You know, they don’t acknowledge it. But instead of creating one large model, they say, ‘Oh, maybe it’s a mixture of experts, or maybe it’s not (a) mixture of experts, but maybe it’s reasoning, where I will split up the task.’ So they are already decentralizing their definition of intelligence.”
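For readers unfamiliar with the term, here is a toy sketch of the mixture-of-experts idea: a simple router sends each request to a specialized expert instead of one big model handling everything. Real mixture-of-experts systems use a learned gating network between sub-networks inside a single neural model; the keyword routing below is only a stand-in to convey the pattern.

```python
# A toy sketch of the "mixture of experts" pattern: a router picks which
# specialized expert handles a request. Real MoE layers use a learned gating
# network inside one model; this keyword router is only illustrative.
def math_expert(prompt: str) -> str:
    return "math expert: let's set up the equation..."


def code_expert(prompt: str) -> str:
    return "code expert: here's a function that does that..."


def general_expert(prompt: str) -> str:
    return "generalist: here's a plain-language answer..."


EXPERTS = {
    "math": math_expert,
    "code": code_expert,
}


def route(prompt: str) -> str:
    """Crude keyword gating; a real system learns which expert to pick."""
    lowered = prompt.lower()
    for keyword, expert in EXPERTS.items():
        if keyword in lowered:
            return expert(prompt)
    return general_expert(prompt)


if __name__ == "__main__":
    print(route("Can you write code to sort a list?"))
    print(route("What's the capital of France?"))
```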

Progress in reasoning

Hong, who is building pioneering models at Axiom, explained how capabilities have evolved quickly in math, code and reasoning.

“You can transform every mathematical problem and solution proof into a computer-checkable version, such as computer code, and then get the same incredible success you saw with RL for coding,” she said. “So at Axiom, we are very excited about this as the next frontier of AI, which is verifiable superintelligence. We want to build self-improving systems in the verifiable domain that allow the model to reflect on the errors it got wrong, then reflect on what it completed successfully, and then have several of them interact to keep improving performance. This is what we believe will be AI’s next leap.”
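Here is a minimal sketch of that verifiable-domain loop, under the assumption that a stand-in `propose_solution` plays the role of the model: candidate solutions are checked mechanically against known test cases, and only verified attempts earn a reward the system could learn from.

```python
# A minimal sketch of a verifiable-reward loop: a model proposes candidate
# programs, a mechanical checker verifies them against known test cases, and
# verified attempts earn reward. `propose_solution` is a stand-in for a real
# model; the verification idea, not the learning algorithm, is the point.
from typing import Callable


def propose_solution(attempt: int) -> Callable[[int], int]:
    """Stand-in for a model generating candidate programs for 'square n'."""
    candidates = [
        lambda n: n + n,   # wrong: doubles instead of squares
        lambda n: n * n,   # correct
    ]
    return candidates[attempt % len(candidates)]


def verify(candidate: Callable[[int], int]) -> bool:
    """Mechanical verifier: run the candidate against known test cases."""
    tests = [(2, 4), (3, 9), (10, 100)]
    return all(candidate(x) == y for x, y in tests)


if __name__ == "__main__":
    for attempt in range(2):
        candidate = propose_solution(attempt)
        reward = 1.0 if verify(candidate) else 0.0
        print(f"attempt {attempt}: reward={reward}")
    # A real RL setup would feed these rewards back into the model, letting it
    # reflect on failed attempts and reinforce verified successes.
```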

Not many models

Following through on this theory of decentralization, Raskar pointed out that the market will not need many model makers, just prolific cloning of a given system onto on-board devices.

“They will fit on your phone, they will fit on your laptop, and they will be even better than the models we have today, right?” he said. “All the models will be very tiny. I think the models will be highly commoditized. I think the possibility is that the big model makers will move away from the model-creation business, because, as you know, token costs are dropping by a factor of 100 every year. (An AI will know) my own calendar, my own email, but (be something that) I run locally on my own machine, and then, when my agent or my AI talks to your AI and to others, a new intelligence emerges from that.”

Mass intelligence at work

“I think we are entering an era of mass intelligence,” said Hong, elaborating further. “As the price of reasoning becomes more and more elastic, there will be more and more unexpected use cases and markets that we have not really had the tools to unlock.”

What this points to, she suggested, is a “massive base” for advances worldwide. “(There’s) so much, physics, engineering,” she said. “You might say that computer science, or a lot of the algorithmic problems we solve, are based on mathematics … (it’s) the incredible power of mathematics and the fundamental sciences, harnessed by AI, to scale up and be applied at an incredible, unprecedented speed to all the applied sciences. That is what mass intelligence means.”

Anthropic data request

There was much more in the panel, which is available as a transcript or video, about tokens, the emergence of AGI, and so on. But toward the end, someone asked about Anthropic, and Peng pointed to a reason for recent changes in its data policy, under which the company now requests more user data, after noting that she is in a scientific, not a legal, role at the company.

“I think part of what we want to understand is how to create models that are useful for particular users, and how that might differ between users,” she said. “What is useful for (an MIT professor) is different from (what is useful for) an eighth grader coding for the first time, right? And so part of this kind of user feedback lets us better understand how to support different customers across different use cases. I’m not sure exactly whether we have an explicitly published plan, but it’s something we plan to look at actively.”

These are some of the big leaps these experts see coming soon. Hearing views from academia and industry helps inform us about what is in the pipeline. Stay tuned.
