Meta builds 1700W superchip and custom MTIA chips while ditching Nvidia, AMD, Intel and ARM for inference


  • Meta’s 1700W superchip offers 30 PFLOPs of compute and 512 GB of HBM
  • MTIA 450 and 500 prioritize inference over pre-training workloads
  • Future generations of MTIA will support GenAI inference and classification workloads

Meta is advancing its AI infrastructure with a portfolio of custom MTIA chips designed specifically for inference workloads across its applications.

The company is developing a 1700W superchip that delivers 30 PFLOPs of compute and carries 512 GB of HBM, integrated into the same MTIA infrastructure to handle large-scale inference tasks.
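Taking the reported figures at face value, a quick back-of-the-envelope calculation shows the power efficiency these specs imply. Note that the article does not state the numeric precision behind the 30 PFLOPs figure (low-precision formats such as FP8 or INT8 are typical for inference accelerators), so this is only arithmetic on the headline numbers, not a performance claim:

```python
# Back-of-the-envelope efficiency from the reported specs.
# Assumes the 30 PFLOPs and 1700 W figures as stated; precision unspecified.
peak_flops = 30e15   # 30 PFLOPs
power_watts = 1700   # 1700 W superchip

tflops_per_watt = peak_flops / power_watts / 1e12
print(f"{tflops_per_watt:.1f} TFLOPs per watt")  # ~17.6 TFLOPs/W
```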
