MatX
About MatX
MatX manufactures specialized hardware designed for training and inference of large AI models, delivering up to 10× more computing power for workloads with over 7 billion parameters. This enables researchers and startups to efficiently train advanced models, significantly reducing the time and cost associated with developing state-of-the-art AI systems.
```xml
<problem>
Training and deploying large AI models with billions of parameters requires significant computational resources, leading to high costs and long development cycles for researchers and startups. Existing hardware solutions often fail to efficiently handle the unique demands of these large models, resulting in suboptimal performance and increased expenses.
</problem>
<solution>
MatX designs specialized hardware accelerators optimized for training and inference of large AI models, delivering significantly improved performance per dollar compared to general-purpose computing platforms. By focusing on the specific architectural requirements of models with 7 billion or more parameters, MatX achieves substantial gains in computational throughput and energy efficiency. The hardware is designed to scale to massive clusters, enabling researchers and startups to train and deploy state-of-the-art AI systems more quickly and affordably. MatX provides low-level control over the hardware, allowing expert users to fine-tune performance for their specific workloads.
</solution>
<features>
- Optimized for transformer-based models with at least 7 billion activated parameters, including both dense and Mixture of Experts (MoE) architectures.
- High-performance interconnect allows scaling to models with 10T+ parameters.
- Designed for both training and inference workloads.
- Excellent scale-out performance, supporting clusters with hundreds of thousands of chips.
- Delivers competitive latency, e.g. <10 ms/token for 70B-class models.
- Low-level control over the hardware for expert users.
</features>
<target_audience>
The primary target audience includes AI researchers, machine learning engineers, and startups working on large language models and other advanced AI systems.
</target_audience>
```
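As a rough illustration of what the quoted "<10 ms/token for 70B-class models" latency implies, the sketch below converts that figure into per-stream throughput and the compute rate a single generation stream would demand. It assumes a dense 70B-parameter model and the common ~2 FLOPs-per-parameter-per-token rule of thumb for a forward pass; these are back-of-envelope assumptions for illustration, not MatX specifications.

```python
# Back-of-envelope check of the "<10 ms/token for 70B-class models" figure.
# All inputs are illustrative assumptions, not MatX specifications.

PARAMS = 70e9                  # assumed dense 70B-parameter model
LATENCY_S = 0.010              # 10 ms per generated token (the upper bound quoted above)
FLOPS_PER_TOKEN = 2 * PARAMS   # rule of thumb: ~2 FLOPs per parameter per generated token (forward pass)

tokens_per_second = 1.0 / LATENCY_S
required_flops = FLOPS_PER_TOKEN * tokens_per_second

print(f"Per-stream throughput: {tokens_per_second:.0f} tokens/s")
print(f"Compute per stream:    {required_flops / 1e12:.0f} TFLOP/s (dense forward pass only)")
```

Under these assumptions, 10 ms/token corresponds to about 100 tokens/s per stream and roughly 14 TFLOP/s of sustained dense compute for that single stream, before accounting for batching, memory bandwidth, or MoE routing.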
What does MatX do?
MatX builds hardware accelerators purpose-built for training and inference of large AI models (roughly 7 billion parameters and up), delivering up to 10× more computing power than general-purpose platforms so researchers and startups can develop state-of-the-art AI systems faster and at lower cost.
Where is MatX located?
MatX is based in Mountain View, United States.
When was MatX founded?
MatX was founded in 2022.
How much funding has MatX raised?
MatX has raised $119.9 million in funding.
Who founded MatX?
MatX was founded by Reiner Pope.
- Founder: Reiner Pope (CEO)
- Location: Mountain View, United States
- Founded: 2022
- Funding: $119.9M
- Employees: 39
- Major Investors: Spark Capital