- XpertStation WS300 supports models with billions of parameters without relying on cloud infrastructure
- Dual 400 GbE LAN ports enable high-throughput distributed multi-node AI workloads
- Unified HBM3e GPU memory and LPDDR5X CPU memory maximize bandwidth for AI workloads
MSI has officially launched the XpertStation WS300, a desktop AI workstation based on Nvidia’s DGX Station architecture.
This system is designed to handle large, demanding language models, generative AI, and advanced data science workloads.
The platform is powered by the Nvidia GB300 Grace Blackwell Ultra desktop superchip and supports up to 784 GB of large coherent unified memory.
Unified Memory Architecture for High-Bandwidth AI Processing
The XpertStation WS300 combines HBM3e GPU memory with LPDDR5X CPU memory for high-bandwidth data sharing.
This setup enables local processing of models with billions of parameters and supports extensive AI workflows without relying on cloud infrastructure.
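To make the "billions of parameters locally" claim concrete, here is a back-of-the-envelope sketch of how much memory model weights alone require at different precisions. The model size and precisions below are illustrative assumptions, not MSI-published figures; real deployments also need headroom for activations and KV cache.

```python
def weights_size_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate memory (decimal GB) needed just to hold model weights."""
    return params_billions * 10**9 * bytes_per_param / 10**9

# A hypothetical 405B-parameter model at two common precisions:
fp16_gb = weights_size_gb(405, 2)  # 2 bytes per weight
fp8_gb = weights_size_gb(405, 1)   # 1 byte per weight
print(f"FP16: {fp16_gb:.0f} GB, FP8: {fp8_gb:.0f} GB")
# Compare these figures against the workstation's unified memory
# capacity to judge which configurations fit entirely on-device.
```

At lower precisions, even very large models leave room in the unified memory pool for inference-time state, which is what makes cloud-free local serving plausible.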
The workstation includes two 400 GbE LAN ports, which enable multi-node distributed computing with an aggregate bandwidth of up to 800 Gbps.
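The 800 Gbps aggregate figure translates directly into transfer-time estimates for multi-node work. The checkpoint size below is an illustrative assumption, and real-world throughput will be lower than line rate due to protocol overhead:

```python
def transfer_seconds(payload_gb: float, link_gbps: float = 800.0) -> float:
    """Seconds to move payload_gb gigabytes at link_gbps gigabits per second.

    Assumes ideal line rate; actual NIC/protocol overhead reduces this.
    """
    return payload_gb * 8 / link_gbps  # 8 bits per byte

# Syncing a hypothetical 400 GB model checkpoint between two nodes:
print(f"{transfer_seconds(400):.1f} s at line rate")  # 4.0 s
```

On a single 400 GbE link the same transfer would take twice as long, which is the practical argument for aggregating both ports in distributed training.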
MSI says the XpertStation WS300 brings data center-class performance straight to the desktop environment, with its configuration intended to help organizations move from experimentation to production while maintaining consistent compute reliability.
The XpertStation WS300 supports the full AI lifecycle, including large-scale model training, data-intensive analytics, and real-time inference.
By functioning as a centralized AI compute node, the platform enables collaborative tuning and on-demand deployment while letting organizations retain control of their data and intellectual property.
High-speed PCIe Gen5 and Gen6 NVMe storage accelerates dataset ingestion and AI pipelines, ensuring sustained utilization during compute-intensive operations.
Combined with the Nvidia AI software stack, the workstation integrates hardware and software to enable seamless workflow transitions from research to production environments.
MSI has also integrated Nvidia NemoClaw, an open source stack that runs OpenShell in a rules-controlled sandbox.
This allows autonomous AI agents to operate continuously and safely on site, drawing on the workstation's 20 petaFLOPS of computing potential.
The setup supports always-on local AI processes, allowing teams to experiment with advanced AI and robotics applications without moving sensitive workloads to cloud servers.
“MSI has a strategic vision for advancing AI-driven computing,” said Danny Hsu, general manager of MSI enterprise platform solutions.
“Together with Nvidia, we are defining the next era of AI infrastructure, connecting centralized performance and distributed innovation, and enabling organizations to move from experimentation to production with greater speed, scale and confidence.”
The platform offers extensive functionality for advanced AI workflows, but its $84,999.99 price tag raises concerns about cost-effectiveness.
Organizations that do not need maximum memory or continuous operation of a model with billions of parameters may struggle to justify this investment.
The system delivers exceptional local AI performance, bringing demanding compute workloads into the office.
However, the practical value of this workstation is likely limited to enterprises with high-throughput AI workloads and specific infrastructure requirements.