- AMD is aggressively acquiring talent to close the performance gap between its Instinct GPUs and Nvidia’s Blackwell GPUs
- Brium’s compiler expertise could help AMD accelerate inference without heavy hardware dependencies
- The Untether AI team joined AMD, but existing customers are left without product support
AMD’s recent moves in the AI sector have centered on strategic acquisitions aimed at strengthening its position in a market largely dominated by Nvidia.
These include the acquisitions of Brium, Silo AI, Nod.ai and the Untether AI engineering team, each aimed at strengthening AMD’s AI software, inference optimization and chip design capabilities.
The objective is clear: to close the performance and ecosystem gap between AMD’s Instinct GPUs and Nvidia’s Blackwell line.
Calculated acquisitions in a competitive ecosystem
AMD has described the acquisition of Brium as a key step towards improving its AI software capabilities.
“Brium provides advanced software capabilities that strengthen our ability to deliver highly optimized AI solutions across the entire stack,” the company said.
Brium’s strengths lie in compiler technology and end-to-end AI inference optimization, areas that could be crucial to achieving better out-of-the-box performance and making AMD’s software less dependent on specific hardware configurations.
While this makes a solid technical case, it also suggests that AMD is still playing catch-up in the AI software ecosystem rather than leading it.
Brium’s integration will touch several ongoing projects, notably OpenAI Triton and SHARK/IREE, which are seen as key to boosting AMD’s inference and training capabilities (a minimal Triton kernel is sketched below for context).
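For readers unfamiliar with these projects, the sketch below shows the kind of code Triton deals with: a tiny vector-addition kernel written in Python, in the style of Triton’s own tutorials. The compiler lowers this hardware-agnostic description to a specific GPU back end, which is where compiler talent like Brium’s matters. This is a generic illustration, not code from AMD, Brium or the SHARK/IREE projects, and the function names are our own.

```python
# Minimal Triton vector-addition kernel -- illustrative only.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements            # guard against the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)         # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage (requires a GPU with a Triton back end, e.g. CUDA or ROCm):
# x = torch.rand(4096, device="cuda"); y = torch.rand(4096, device="cuda")
# torch.testing.assert_close(add(x, y), x + y)
```

The same high-level kernel can, in principle, be compiled for Nvidia or AMD hardware, which is why improving these compiler paths reduces AMD’s dependence on hand-tuned, hardware-specific libraries.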
The use of precision formats such as MX FP4 and FP6 points to a strategy of squeezing more performance out of existing hardware through compression (a simplified illustration follows below). But the industry has already seen similar moves from Nvidia, which continues to lead in both raw processing power and software maturity.
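As rough intuition for what a block-scaled low-precision format does, here is a simplified Python sketch of MXFP4-style quantization: a block of values shares one power-of-two scale, and each element is rounded to the nearest 4-bit (E2M1) value. This is an illustrative approximation of the OCP Microscaling idea, not AMD’s or Brium’s implementation, and it omits details such as how the shared scale is encoded.

```python
# Simplified MXFP4-style block quantization -- illustrative only.
import numpy as np

# Magnitudes representable by an FP4 (E2M1) element: +/- {0, 0.5, 1, 1.5, 2, 3, 4, 6}.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4_block(block: np.ndarray) -> tuple[float, np.ndarray]:
    """Quantize one block (e.g. 32 values) to a shared power-of-two scale plus FP4 elements."""
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return 1.0, np.zeros_like(block)
    # Shared scale: a power of two placing the block's largest magnitude near the
    # top of the FP4 range (6 = 1.5 * 2**2). Anything still above 6 after scaling
    # is saturated to 6 by the nearest-value rounding below.
    scale = 2.0 ** (np.floor(np.log2(max_abs)) - 2)
    scaled = np.abs(block) / scale
    # Round each scaled magnitude to the nearest representable FP4 value.
    idx = np.abs(scaled[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return float(scale), np.sign(block) * FP4_GRID[idx]

def dequantize(scale: float, fp4_values: np.ndarray) -> np.ndarray:
    return scale * fp4_values

# Usage: quantize one 32-element block and check the reconstruction error.
rng = np.random.default_rng(0)
block = rng.standard_normal(32)
scale, q = quantize_mxfp4_block(block)
print(f"shared scale = {scale}, mean abs error = {np.abs(block - dequantize(scale, q)).mean():.4f}")
```

Storing weights at 4 bits per element (plus a small per-block scale) takes roughly a quarter of the memory of FP16, which is where the claim of extracting more performance from existing hardware comes from.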
Another notable move was AMD’s absorption of the entire engineering team of Untether AI, a Canadian startup known for its energy-efficient inference processors. AMD did not acquire the company, only the talent, leaving Untether’s existing products unsupported.
“AMD has entered into a strategic agreement to acquire a talented team of hardware and software engineers from Untether AI,” the company confirmed, highlighting compiler and kernel development as well as SoC design.
This signals a strong push into inference-specific technologies, which are becoming increasingly critical as training-driven GPU revenue faces a potential decline.
“AMD’s acquisition of the Untether engineering group is proof that GPU providers know that model training is done and that a drop in GPU revenue is approaching,” said Justin Kinsey, president of SBT Industries.
While that may overstate the situation, it reflects a growing sentiment in the industry: energy efficiency and inference performance are the next frontier, not simply building the fastest systems for training large models.
Despite AMD’s optimism and commitment to “an open and scalable AI software platform”, questions remain about its ability to match Nvidia’s tight integration between hardware and CUDA-based software.
Ultimately, while AMD is taking calculated steps to close the gap, Nvidia still holds a considerable lead in hardware efficiency and software ecosystem.
These acquisitions may bring AMD closer, but for now, Nvidia’s Blackwell remains the benchmark for what is widely regarded as the best GPU for AI workloads.