Inference.ai claims to offer significantly cheaper GPU compute with better availability than major public cloud providers, thanks to its algorithmic workload-matching technology and partnerships with data center operators. Despite competition from established players such as CoreWeave and Lambda Labs, Inference.ai recently secured $4 million in funding from Cherubic Ventures, Maple VC, and Fusion Fund. The funds will be used to build out Inference's deployment infrastructure, with investors expressing confidence in the team's ability to meet the growing demand for processing capacity in the AI sector.
Key takeaways:
- GPUs, crucial for AI computations, are becoming harder to procure due to increased demand, prompting an investigation by the U.S. Federal Trade Commission into potential anti-competitive practices.
- Inference.ai, co-founded by John Yue and Michael Yu, offers a solution by providing infrastructure-as-a-service cloud GPU compute through partnerships with third-party data centers.
- Inference.ai uses algorithms to match each company's workload to suitable GPU resources, aiming to undercut major public cloud providers on both price and availability.
- The startup recently closed a $4 million funding round from Cherubic Ventures, Maple VC, and Fusion Fund, which will go toward building out Inference's deployment infrastructure.