DeepSeek updated its R1 model a few days ago. It performs better, and it's even cheaper than most other top models.
Did you miss it? I missed it. Or I saw the news briefly, then forgot. Most of the tech industry and investors greeted the launch with a giant shrug.
It's a striking contrast with early 2025, when the original DeepSeek R1 model freaked everyone out. Tech stocks plunged, and the generative-AI spending boom was seriously questioned.
This time, the DeepSeek rollout "came and went without a blip," wrote Ross Sandler, a top tech analyst at Barclays, in a note to investors.
"The stock market is nonetheless paying attention," he added. "This tells us that the investment community's understanding of the AI trade has improved considerably in just five months."
An unscientific DeepSeek survey
On Friday, I polled my colleagues on Business Insider's tech team, just to see whether I'd been spending too much time watching Elon Musk and Donald Trump on social media (rather than doing my real job).
Here are some of their answers:
- One editor said they hadn't noticed the DeepSeek update, but now felt guilty for not spotting it. (Solid thinking. Only the paranoid survive in journalism.)
- Another colleague said they knew about it from their quick headline scans, but hadn't read much into it.
- A tech reporter saw a Reddit thread about it, skimmed it, and didn't think about it again.
- Another reporter said they'd missed it entirely.
- Another editor: "hadn't noticed, TBH!"
So it barely registered. And these are people glued to tech news every second of the day.
Why does no one really care now?
DeepSeek's latest R1 model is probably the third-best in the world right now, so why isn't it making waves like before?
A chart showing the performance of various AI models. Barclays Research
Sandler, the Barclays analyst, noted that the latest DeepSeek offering isn't as cheap as before, relatively speaking. It costs just under $1 per million tokens, whereas earlier this year R1 was roughly 27 times cheaper than OpenAI's o1 model.
A chart showing the price of various AI models, in US dollars per million tokens. Barclays Research
Now, DeepSeek R1 is "only" about 17 times cheaper than the top model, according to Barclays Research and data from Artificial Analysis' Intelligence Index.
A chart showing the cost of various AI models, in dollars per million tokens. Barclays Research
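The relative-cost math above is simple division: divide the top model's price per million tokens by DeepSeek's. A minimal sketch, using placeholder prices rather than either company's actual published rates:

```python
# Toy illustration of the relative-cost comparison. The dollar figures
# below are placeholders chosen to match the article's ratios, not
# DeepSeek's or OpenAI's real price lists.
def times_cheaper(price_top: float, price_cheap: float) -> float:
    """How many times cheaper the cheap model is, per million tokens."""
    return price_top / price_cheap

# Hypothetical prices in dollars per million tokens
ratio_january = times_cheaper(27.0, 1.0)  # ~27x cheaper earlier this year
ratio_now = times_cheaper(17.0, 1.0)      # ~17x cheaper today
```

The point isn't the exact numbers, which shift with every price cut, but that the gap has narrowed, so the discount is less of a story.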
This illustrates a broader, more important point, and something I've been telling you about since last year: most of the top AI models are now roughly similar in performance, because they've mostly been trained on the same internet data.
It's hard to stand out from the crowd on performance alone. When you leap ahead, your inventions and gains are quickly incorporated into everyone else's offerings.
Price matters, yes. But distribution is becoming the key. If your employer has an enterprise ChatGPT account, for instance, you're very likely to use OpenAI models at work. It's just easier. If you have an Android smartphone, you'll probably talk to Google's Gemini chatbot and get answers from the search giant's AI models.
DeepSeek doesn't yet have that kind of broad distribution, at least in the Western world.
Was the AI infrastructure wasted?
Then there's the realization that "reasoning" models, such as DeepSeek's R1 and OpenAI's o3, require a massive amount of computing power to run. That's because of their ability to break requests down into multiple "thinking" steps. Each step is a new kind of prompt that is turned into a large number of new tokens that must be processed.
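Why this inflates compute costs can be sketched with a toy model: each thinking step appends its tokens to the context, and the whole growing context gets processed again at the next step. The step counts and token sizes below are made up for illustration, not measurements of any real model:

```python
# Toy model of multi-step reasoning cost: each "thinking" step emits
# new tokens that are appended to the context, and the full context is
# reprocessed at every step. All numbers here are illustrative.
def tokens_processed(prompt_tokens: int, steps: int, tokens_per_step: int) -> int:
    """Total tokens the model processes across all reasoning steps."""
    total = 0
    context = prompt_tokens
    for _ in range(steps):
        context += tokens_per_step  # each step appends its "thoughts"
        total += context            # the whole context is reprocessed
    return total

single_pass = tokens_processed(1000, 1, 500)   # one answer, no chain
reasoning = tokens_processed(1000, 10, 500)    # ten thinking steps
```

With these made-up numbers, ten thinking steps process roughly 25 times as many tokens as a single pass, which is the rough intuition behind why reasoning models need so much more hardware to serve.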
The DeepSeek freakout in January happened mainly because the tech industry feared the Chinese lab had developed more efficient models that didn't need as much computing infrastructure.
In fact, the Chinese lab may instead have helped popularize these new types of reasoning models, which could require even more GPUs and other computing equipment to run.
Sign up for the weekly BI Tech Memo newsletter here.