The tech startup offers a new way of tackling massive LLMs, using the fastest type of memory available today


  • The GPU-style PCIe card delivers 9.6 PFLOPS of FP4 compute and carries 2 GB of SRAM
  • SRAM is normally used only in small quantities, as the L1 to L3 caches inside processors
  • The card also relies on LPDDR5 rather than the much more expensive HBM memory

Silicon Valley startup D-Matrix, which is backed by Microsoft, has developed a chiplet-based solution designed for fast, small-batch LLM inference in enterprise environments. Its architecture takes a digital in-memory compute approach, using modified SRAM cells for speed and energy efficiency.
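
The idea behind digital in-memory compute is that weights stay resident in the SRAM arrays and the multiply-accumulate work happens next to the storage cells, so only activations and partial sums move. The sketch below is a purely conceptual NumPy illustration of that data-movement pattern, not D-Matrix's actual circuit design; the `SramComputeTile` class and tiling scheme are invented for illustration.

```python
# Conceptual sketch only (not D-Matrix's actual design): keep weights
# stationary in per-tile "SRAM" and do the MACs locally, so the only
# traffic is activations in and partial results out.
import numpy as np

class SramComputeTile:
    """Toy tile: a weight sub-matrix held in place, with local multiply-accumulate."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # stays resident; never shipped to a distant ALU

    def mac(self, activations: np.ndarray) -> np.ndarray:
        # The multiply-accumulate happens "inside" the tile.
        return self.weights @ activations

def tiled_matvec(weight_matrix: np.ndarray, activations: np.ndarray,
                 tile_rows: int = 4) -> np.ndarray:
    # Split the weight matrix across tiles (as an accelerator would across
    # SRAM banks), then gather each tile's partial result.
    tiles = [SramComputeTile(weight_matrix[r:r + tile_rows])
             for r in range(0, weight_matrix.shape[0], tile_rows)]
    return np.concatenate([t.mac(activations) for t in tiles])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((16, 8))
    x = rng.standard_normal(8)
    # Same math as a plain matvec; what changes is where the data lives.
    assert np.allclose(tiled_matvec(W, x), W @ x)
```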

D-Matrix's current product, Corsair, is described as a "first-of-its-kind AI compute platform" and packs two D-Matrix ASICs onto a full-height, full-length PCIe card, with four chiplets per ASIC. It delivers a total of 9.6 PFLOPS of FP4 compute alongside 2 GB of SRAM-based performance memory. Unlike conventional designs built around expensive HBM, Corsair uses LPDDR5 as capacity memory, with up to 256 GB per card to handle larger models or batched workloads.
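
To get a feel for what 256 GB of capacity memory means at FP4 precision, here is a rough back-of-the-envelope calculation. It assumes weights only at 4 bits per parameter and ignores KV cache and activation overhead; the 8B/70B/405B model sizes are illustrative examples, not figures from the announcement.

```python
# Rough sizing check: weight footprint at FP4 (4 bits per parameter),
# weights only, decimal gigabytes, no KV cache or activation overhead.
def fp4_weight_footprint_gb(num_params: float) -> float:
    bytes_per_param = 4 / 8  # FP4 stores half a byte per weight
    return num_params * bytes_per_param / 1e9

for params in (8e9, 70e9, 405e9):
    print(f"{params / 1e9:.0f}B params -> {fp4_weight_footprint_gb(params):.1f} GB at FP4")
# 8B   ->   4.0 GB
# 70B  ->  35.0 GB
# 405B -> 202.5 GB, still within the 256 GB per-card capacity
```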
