- ASICs are much more efficient than GPUs for inference, much as they are for cryptocurrency mining
- The inference AI chip market is projected to grow from $15.8 billion in 2023 to $90.6 billion by 2030.
- Hyperscalers like Google have already jumped on the bandwagon
Nvidia, already a leader in AI and GPU technologies, is entering the application-specific integrated circuit (ASIC) market to confront growing competition and AI-driven shifts in semiconductor design.
The global rise of generative AI and large language models (LLMs) has significantly increased demand for GPUs, and Nvidia CEO Jensen Huang confirmed in 2024 that the company would recruit 1,000 engineers in Taiwan.
Now, as reported by the Taiwanese newspaper Commercial Times (original report in Chinese), the company has established a new ASIC department and is actively recruiting talent.
The rise of inference chips
Nvidia’s H-series GPUs, optimized for training workloads, have been widely adopted for training AI models. However, the AI semiconductor market is shifting toward inference chips, typically built as ASICs.
This shift is driven by demand for chips optimized for real-world AI applications, such as large language models and generative AI. Unlike general-purpose GPUs, ASICs provide higher efficiency for inference tasks, just as they do for cryptocurrency mining.
The inference AI chip market is expected to grow from a valuation of $15.8 billion in 2023 to $90.6 billion by 2030, according to Verified Market Research.
Major tech players have already adopted custom ASIC designs; Google’s “Trillium” AI chip, made generally available in December 2024, is one example.
The move toward custom AI chips has intensified competition among semiconductor giants. Companies such as Broadcom and Marvell have gained relevance and stock market value as they collaborate with cloud service providers to develop specialized chips for data centers.
To stay ahead of the curve, Nvidia’s new ASIC department is working to leverage local expertise by recruiting from leading Taiwanese companies such as MediaTek.