- AMD’s client chip boss says the company is looking at dedicated NPU accelerators
- These would be the equivalent of a discrete GPU, but for AI tasks
- Such cards could ease demand for high-end GPUs, which would no longer be bought for AI work, as they are in some cases
AMD is looking to a future where it could produce not just standalone graphics cards for desktop PCs, but similar boards that would be the equivalent of an AI accelerator – a discrete NPU, in other words.
CRN reports (via WCCFTECH) that AMD’s Rahul Tikoo, who heads up its client processor business, said that Team Red is “talking to customers” about “use cases” and “potential opportunities” for just such a dedicated NPU accelerator.
CRN notes that there are already moves along these lines, such as a Dell Pro Max Plus laptop that’s set to boast a pair of Qualcomm AI 100 PC inference cards. Those are two discrete NPU cards with 16 AI cores and 32GB of memory each, for 32 AI cores and 64GB of RAM in total.
To put this in perspective, current integrated (on-chip) NPUs, such as those in Intel’s Lunar Lake processors or AMD’s Ryzen AI chips, offer around 50 TOPS – fine for Copilot+ PC duties – whereas you’re looking at up to 400 TOPS with the power-user-focused Qualcomm AI 100.
Tikoo observed: “It’s a brand new use case, so we’re watching this space carefully, but we do have solutions – if you want to get into this space, we can.”
The AMD executive wouldn’t be drawn on any indication of a timeframe within which AMD might act on such discrete NPU ambitions, but said that “it’s not hard to imagine that we can get there pretty quickly” given the breadth of Team Red’s technologies.
Analysis: potentially relieving pressure on high-end GPU demand
So, does this mean it won’t be too long before you’re planning a desktop PC purchase and thinking about a discrete NPU to sit alongside a GPU? Well, not really – this still isn’t consumer territory as such; as noted, it’s aimed more at AI power users – but it could have a meaningful impact on everyday PCs, at least for enthusiasts.
These standalone NPU cards will only be needed by people running heavier AI tasks on their PC. They’ll offer advantages for executing large AI models or complex workloads locally rather than in the cloud, with much more responsive performance (dodging the latency that’s inevitably introduced into the mix when the work is piped online, to the cloud).
There are obvious privacy advantages to keeping work on-device rather than sending it to the cloud, and these discrete NPUs will be designed to be more efficient than GPUs at taking on these kinds of workloads – so there will be power savings, too.
And this is where we get to the crux of the matter for consumers, or at least for enthusiast PC gamers planning to buy a pricier graphics card. As we’ve seen in the past, people working with AI sometimes buy high-end GPUs – such as the RTX 5090 or 5080 – for their rigs. When dedicated NPUs arrive from AMD (and others), they’ll offer a better choice than a high-end GPU – which will take some pressure off the market for those graphics cards.
So, especially when a new range of GPUs launches and the inevitable rush to buy kicks in, there will be less overall demand for high-end models – which is good news for supply and pricing, for gamers who want a graphics card to, well, play PC games, and not to run AI workloads.
Roll on the development of these standalone NPUs, then – it should ultimately be a good thing for gamers. Another thought for the more distant future is that these NPUs may eventually be needed for AI routines in games, once AI-powered NPCs become established. We’ve already taken a few steps down that road on the cloud side, though whether that’s a good thing or not is a matter of opinion.