SETI, but for LLMs: how a solution barely a few months old could revolutionize the way inference is done


  • Exo supports Llama, Mistral, LLaVA, Qwen and DeepSeek
  • It runs on Linux, macOS, Android and iOS, but not Windows
  • AI models requiring 16 GB of RAM can run across two 8 GB laptops

Running large language models (LLMs) generally requires expensive, high-performance hardware with substantial memory and GPU power. However, the Exo software now offers an alternative by enabling distributed artificial intelligence (AI) inference across a network of devices.

The software lets users combine the computing power of several computers, smartphones and even single-board computers (SBCs) such as the Raspberry Pi to run models that would otherwise be out of reach.
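The idea of splitting one model across devices with less individual memory can be illustrated with a small sketch. This is not Exo's actual code or API; it only shows, under the assumption of a layered transformer model, how contiguous ranges of layers might be assigned to each device in proportion to its available memory (the function and device names here are hypothetical):

```python
# Illustrative sketch only -- NOT exo's real implementation or API.
# Assigns each device a contiguous block of a model's layers,
# proportional to that device's share of the total available memory.

def partition_layers(num_layers, device_memory_gb):
    """Return {device_name: (start_layer, end_layer)} covering all layers."""
    total = sum(device_memory_gb.values())
    assignments, start = {}, 0
    devices = list(device_memory_gb.items())
    for i, (name, mem) in enumerate(devices):
        if i == len(devices) - 1:
            end = num_layers  # last device takes whatever remains
        else:
            end = start + round(num_layers * mem / total)
        assignments[name] = (start, end)
        start = end
    return assignments

# Two 8 GB laptops hosting a 32-layer model: each serves half the layers.
print(partition_layers(32, {"laptop_a": 8, "laptop_b": 8}))
# → {'laptop_a': (0, 16), 'laptop_b': (16, 32)}
```

During inference, each device would then compute only its own slice of layers and forward the intermediate activations to the next device, which is why two 8 GB machines can jointly serve a model that needs 16 GB.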
