- The new RNGD server from Furiosaai delivers 4 petaflops of compute in a 3 kW envelope for efficient AI
- Companies will be able to scale AI workloads without costly infrastructure changes
- The RNGD server offers OpenAI API compatibility alongside a growing set of SDK features
South Korean chip startup Furiosaai, which walked away from an $800 million acquisition offer from Meta, continues to roll out new products as demand for efficient AI infrastructure soars.
The startup aims to give companies hardware that can run LLMs without the costly data center upgrades and heavy energy bills often associated with GPUs.
Its latest product, the RNGD server, is an enterprise-ready AI appliance powered by Furiosaai's RNGD (pronounced "renegade") AI inference chips.
Scale more efficiently
Each system delivers 4 petaflops of FP8 compute and 384 GB of HBM3 memory while drawing only 3 kW.
By comparison, an NVIDIA DGX H100 server can draw more than 10 kW. That means a standard 15 kW data center rack can hold five RNGD servers, while the same rack could fit only one DGX H100.
Furiosaai says most data centers are limited to 8 kW per rack or less, so its design addresses a key obstacle for enterprises.
Running advanced AI models in such environments typically requires new cooling and power systems.
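The rack math above is simple power budgeting, sketched below; the power draws are the per-server maximums quoted in the article, not measured figures.

```python
# Back-of-the-envelope rack capacity: how many servers fit in a rack's
# power budget, using the article's quoted per-server power draws.
RACK_BUDGET_KW = 15.0   # standard data center rack cited in the article
RNGD_SERVER_KW = 3.0    # RNGD server draw
DGX_H100_KW = 10.0      # "more than 10 kW" for a DGX H100 server

def servers_per_rack(rack_kw: float, server_kw: float) -> int:
    """Whole number of servers a rack's power budget can hold."""
    return int(rack_kw // server_kw)

print(servers_per_rack(RACK_BUDGET_KW, RNGD_SERVER_KW))  # 5 RNGD servers
print(servers_per_rack(RACK_BUDGET_KW, DGX_H100_KW))     # 1 DGX H100
```

At the 8 kW-per-rack limit Furiosaai cites for most data centers, the same arithmetic gives two RNGD servers per rack and zero DGX H100s.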
The company says that by adopting the RNGD server, companies will be able to scale more efficiently while maintaining compatibility with the OpenAI API.
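OpenAI API compatibility generally means the server accepts the same request shapes as OpenAI's endpoints, so existing clients only need their base URL repointed. A minimal sketch of the standard chat-completions payload follows; the endpoint URL and model name are illustrative assumptions, not published Furiosaai values.

```python
# Sketch: an OpenAI-compatible server accepts the standard
# /v1/chat/completions request body, so the payload is unchanged and
# only the client's base URL points at the local appliance.
import json

RNGD_BASE_URL = "http://rngd-server.local:8000/v1"  # hypothetical endpoint

def chat_completion_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_completion_request("gpt-oss-120b", "Summarize this report.")
print(json.dumps(payload, indent=2))
# POSTing this body to f"{RNGD_BASE_URL}/chat/completions" is all an
# existing OpenAI client integration would need to change.
```

In practice this means off-the-shelf OpenAI SDKs and tooling can target the appliance by overriding their base URL, with no application-level rewrites.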
The startup recently closed a $125 million Series C bridge round and expanded its partnership with LG AI Research.
LG uses RNGD hardware to run its Exaone models, and says it gets more than twice the inference performance per watt compared with GPUs.
Furiosaai also recently collaborated with OpenAI, with the two companies demonstrating a real-time chatbot running the open-weight gpt-oss 120B model on just two of Furiosaai's RNGD accelerators.
The new RNGD server will receive ongoing updates through the Furiosaai SDK, which recently introduced inter-chip parallelism, new compiler optimizations, and expanded quantization formats.
The RNGD server is currently sampling with global customers and is expected to be available to order in early 2026.