Silicon Valley’s billions of dollars spent on AI haven’t actually generated a return yet. Here’s why most companies should embrace ‘small AI’ instead


For all of AI’s promise, most companies using it are not yet delivering real value – to their customers or to themselves. With investors eager to finally see a return on their AI investments, it is time to stop generalizing and start thinking smaller.

Instead of building epic models that aim to do everything, companies looking to capitalize on the AI gold rush should consider pivoting to targeted models designed for specific tasks. By attacking a single problem with a fresh solution, innovators can create powerful new models that require fewer parameters, less data, and less computing power.

With billions upon billions of dollars being spent on AI engineering, chips, training, and data centers, a smaller approach to AI can also help the industry move forward more safely, sustainably, and efficiently. What’s more, this potential can be delivered in a variety of ways – through services built on top of commodity generalist models, retrieval systems, low-rank adaptation, fine-tuning, and more.
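To make one of those techniques concrete, here is a minimal sketch of low-rank adaptation (LoRA) using Hugging Face’s open-source peft and transformers libraries. The base model and hyperparameters below are illustrative assumptions, not recommendations from any company mentioned in this piece.

```python
# A minimal LoRA sketch: freeze a small base model and train only
# tiny low-rank adapter matrices on top of it.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# A small, openly available base model (chosen here purely for illustration).
base = AutoModelForCausalLM.from_pretrained("distilgpt2")

config = LoraConfig(
    r=8,                        # rank of the adapter matrices: the "small" knob
    lora_alpha=16,              # scaling applied to the adapter updates
    target_modules=["c_attn"],  # attention projections in GPT-2-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Bolt the trainable adapters onto the frozen base model.
model = get_peft_model(base, config)
model.print_trainable_parameters()
# Typically reports well under 1% of parameters as trainable, which is
# why fine-tuning a focused model needs far less data and compute.
```

Because only the adapter weights are updated, one base model can host many small task-specific adapters – the “team of expert models” approach described later in this piece.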

What’s so bad about big AI?

Some technology enthusiasts may grit their teeth at the word “small,” but when it comes to AI, small does not mean insignificant, and bigger is not necessarily better. Models like OpenAI’s GPT-4, Google’s Gemini, Mistral AI’s Mistral, Meta’s Llama 3, or Anthropic’s Claude cost a fortune to build, and when we look at how they perform, it is not clear why most companies would want to enter this game in the first place.

Even as the big players monopolize the field, their much-hyped, tightly controlled generalist foundation models seem to perform well enough on certain benchmarks – but whether that performance translates into real value, in the form of productivity gains or the like, remains unclear.

By contrast, targeted AI that addresses specific use cases or pain points is cheaper, faster, and easier to build. Successful AI models depend on high-quality, well-curated, and ethically sourced data, as well as an understanding of how all of that data affects the model’s performance. With this challenge at the heart of why 80% of AI projects fail, training a more targeted model requires fewer parameters and far less data and computing power.

This is not an argument for green AI so much as for bringing a little realism back into the AI hype cycle. Even if the model itself is a large proprietary one, the narrower its focus stays, the smaller and more manageable the set of possible outputs to consider. With shorter token lengths, models optimized for a specific task can run faster and prove highly robust and more efficient, all while using less data.

Delivering small AI doesn’t have to be limiting

With AI in agriculture already valued at more than $1 billion per year, innovators like Bonsai Robotics are unlocking new efficiencies by optimizing technology for specific use cases. Bonsai uses patented AI models, powerful data, and computer vision software to power autonomous systems that harvest in harsh environments. While Bonsai’s algorithms are built on massive, continuously updated data sets, its narrow focus has earned this physical-AI pioneer recognition as AgTech Breakthrough’s Precision Agriculture Solution of the Year.

Even the big technology players are working to focus their AI offerings with smaller, more powerful models.

Microsoft currently uses OpenAI’s GPT technology to power Copilot, a suite of smaller AI tools integrated into its products. These models are more narrowly focused on software, coding, and common patterns, which makes them easier to fine-tune than general-purpose ChatGPT and better at generating personalized content, summarizing files, recognizing patterns, and automating tasks via prompts.

With OpenAI projecting big returns as it releases PhD-level ChatGPT agents, the ideal is that one day we will all have our own agents – or AI assistants – that use our personal data to act on our behalf without prompting. It’s an ambitious future, privacy and security concerns notwithstanding.

Although the leap from where we are now to where we could go seems huge, building it piece by piece is a clearer, lower-risk approach than assuming a massive monolith is the answer.

AI innovators who embrace specificity can build a growing, agile team of expert models that increasingly augment our work, instead of an expensive, mediocre assistant that is bloated with parameters, devours massive data sets, and still gets things wrong.

How small AI will keep the bubble from bursting

By building lighter computing infrastructure that focuses on the right data, companies can maximize AI’s potential for breakthrough results even as they cut the technology’s immense financial and environmental costs.

Amid all the hype swirling around AI and the Big Tech giants’ models battling for headlines, the long arc of innovation has always rested on incremental, practical progress. With data at the heart of the models changing our world, small, targeted AI promises faster, more sustainable, and more cost-effective solutions – and, in turn, offers both investors and users a much-needed return on AI.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
