- AI agents could be worth $382 million to scaled adopters, but only $76 million to slower adopters
- Most companies agree that scaling AI agents would give them a competitive advantage
- Companies should build ethics and trust into their processes
AI agents could generate up to $450 billion in economic value by 2028 through a combination of revenue gains and cost savings for businesses, but not all businesses are positioned to reap the rewards, new research claims.
The Capgemini report explains how organizations that adopt AI agents at scale could see gains of $382 million over the next three years, compared with just $76 million for low-scale adopters.
Alarmingly, only 2% of organizations globally have fully deployed AI agents at scale, and in the United Kingdom the figure is even lower, at 1%.
To benefit from AI, you must scale it properly
The research revealed that almost half (47%) of UK companies are piloting AI agents or testing new use cases, with around one fifth (19%) having already partially implemented them. Most (93%) agree that scaling AI agents would give them a competitive advantage over the next year, with customer service, IT and sales seen as the areas most likely to deliver the biggest rewards.
However, as many companies move from experimentation to implementation, trust in autonomous AI agents is declining. Only 27% of respondents globally now say they trust agentic AI, down from 43% last year. UK trust is slightly higher than the global average, but its drop from 50% to 32% is in line with global sentiment.
“At the heart of this transformation is the need to strengthen trust in AI by ensuring it is developed responsibly, with ethics and security baked in from the start,” said Franck Greverie, Capgemini's chief technology and product officer.
In addition, the same key challenges continue to prevent companies from scaling AI: four in five organizations lack mature AI infrastructure, and fewer than one in five report a high level of data readiness. As we know, AI is only as good as the data it uses. Other obstacles include weak internal AI literacy and privacy concerns.