The startup claims its chips are particularly effective for AI workloads with at least 7 billion activated parameters, and ideally 20 billion or more. MatX's chips are also said to scale well to large clusters thanks to their advanced interconnect. The company's goal is to make its processors ten times better than NVIDIA's GPUs at training large language models and delivering results. The startup's seed round was led by AI angel investors Nat Friedman, former CEO of GitHub, and Daniel Gross, former head of search and AI at Apple.
Key takeaways:
- MatX, a startup that designs chips for large language models, has raised approximately $80 million in a Series A funding round led by Spark Capital. This comes less than a year after its $25 million seed round.
- The company was co-founded by Mike Gunter and Reiner Pope, both of whom previously worked on Google's Tensor Processing Units (TPUs). They aim to address the shortage of chips designed for AI workloads.
- MatX's chips are said to combine high performance with cost-effectiveness, and scaling to large clusters is highlighted as a particular strength. The company targets a tenfold advantage over NVIDIA's GPUs at training large language models and delivering results.
- Investor interest in chip-design companies has grown amid the AI boom and high demand for NVIDIA's processors. For instance, chip startup Groq saw its valuation nearly triple to $2.8 billion in August.