- AI teams still favor Nvidia, but rivals like Google, AMD and Intel are gaining share
- The survey finds that budget limits, power demands and cloud dependence shape AI hardware decisions
- GPU shortages push workloads to the cloud while efficiency and testing remain neglected
AI hardware spending is starting to evolve as teams weigh performance, cost and scalability, new research suggests.
Liquid Web's latest AI hardware study surveyed 252 practicing AI professionals and found that, while Nvidia remains comfortably the most-used hardware supplier, its rivals are steadily gaining ground.
Almost a third of respondents said they used alternatives such as Google TPUs, AMD GPUs or Intel chips for at least part of their workloads.
The pitfalls of skipping due diligence
The sample size is admittedly small, so it does not capture the full scale of global adoption, but the results show a clear shift in how teams are starting to think about infrastructure.
A single team can deploy hundreds of GPUs, so even limited adoption of non-Nvidia options can make a big difference to the hardware footprint.
Nvidia is still preferred by more than two-thirds (68%) of the teams surveyed, and many buyers do not rigorously compare alternatives before deciding.
About 28% of respondents admitted to skipping structured evaluations, and in some cases this lack of testing led to incompatible infrastructure and underperforming configurations.
“Our research shows that skipping due diligence leads to delayed or canceled initiatives – a costly mistake in a fast-moving industry,” said Ryan Macdonald, CTO at Liquid Web.
Familiarity and past experience are among the strongest drivers of GPU choice: 43% of participants cited these factors, compared with 35% who evaluated costs and 37% who ran performance tests.
Budget limitations also weigh heavily, with 42% of respondents scaling back projects and 14% canceling them entirely due to shortages or hardware costs.
Hybrid and cloud-based solutions are becoming standard. More than half of respondents said they use both on-premises and cloud systems, and many expect to increase cloud spending over the coming year.
Dedicated GPU hosting is seen by some as a way to avoid the performance losses that come with shared or partitioned hardware.
Energy consumption remains a challenge. While 45% recognized efficiency as important, only 13% actively optimize for it. Many also cited concerns over power, cooling and supply chain disruptions.
While Nvidia continues to dominate the market, it is clear that the competition is closing the gap. Teams note that cost-effectiveness, efficiency and reliability are almost as important as raw performance when building AI infrastructure.