- Many AI models are reportedly not as effective as they are marketed to be
- 95% of companies surveyed saw little to no impact from their LLMs
- Specialization is the key to successful AI adoption
New research from MIT's NANDA initiative finds that the vast majority of GenAI initiatives aimed at driving rapid revenue growth are "falling flat."
Of the companies sampled, 95% of those deploying generative AI are stuck, with their pilots "delivering no measurable impact" on profit and loss.
It appears to be an all-or-nothing game: the 5% of companies that do benefit from generative AI have excelled. According to the lead author, these are mostly startups led by 19- or 20-year-olds that have "seen revenues jump from zero to $20 million in a year."
The key to success with AI models appears to be specialization: a successful deployment picks one "pain point," executes it well, and partners carefully with vendors whose tools fit the job.
Specialized vendors succeed about 67% of the time, while internally built models succeed only about a third as often. Highly regulated sectors such as financial services see many organizations build their own AI systems, but the research suggests companies are far more likely to fail when they do.
When line managers are empowered to drive adoption, they see more success, because they can choose tools that adapt over time.
Budget allocation also matters: most GenAI budgets are devoted to sales and marketing, but the biggest return on investment has been seen in back-office automation.
This is not the first research to suggest AI models are underdelivering. A significant number of companies have laid off lower-level workers and brought in AI systems, yet more than half of the UK companies that replaced workers with AI regret the decision.
Tangible benefits from these models are increasingly hard to find, the security risks tied to them are a growing concern, and AI workloads are making ESG targets much harder to hit.