MatX plans to offer customers "low-level control over the hardware," similar to the capabilities provided by existing AI processors such as Nvidia Corp.'s graphics cards. The company claims its chips will deliver at least 10 times the performance of Nvidia silicon when running large language models, and that AI clusters built on its chips will be able to run models with 10 trillion parameters. MatX expects to complete development of its first product next year.
Key takeaways:
- AI chip startup MatX, led by former Google engineers, has reportedly raised $80 million in a Series B funding round, valuing the company at over $300 million.
- The company is developing chips for both training AI models and performing inference, and says machine learning clusters can be built with hundreds of thousands of its chips.
- MatX's chips are designed to prioritize cost efficiency while maintaining competitive latency: less than one hundredth of a second per token for AI models with 70 billion parameters.
- The company claims its chips will deliver at least 10 times the performance of Nvidia silicon when running large language models, and it expects to complete development of its first product next year.
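For context, the per-token latency figure above maps directly to a generation throughput. A minimal sketch of the conversion, assuming the figure refers to sequential token generation (the article does not specify batching or measurement conditions):

```python
def tokens_per_second(latency_s: float) -> float:
    """Convert per-token latency (seconds) to generation throughput (tokens/second)."""
    return 1.0 / latency_s

# A latency under 0.01 s/token (one hundredth of a second) implies
# a sustained generation rate above 100 tokens per second.
print(tokens_per_second(0.01))  # -> 100.0
```

At one hundredth of a second per token, such a system would generate at least 100 tokens per second for a 70-billion-parameter model.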