Feature Story
Baseten grabs $75M to crank up high-performance inference for AI workloads - SiliconANGLE
Feb 20, 2025 · siliconangle.com
Co-founder and CEO Tuhin Srivastava emphasizes the importance of speed, reliability, and cost-efficiency in AI product deployment, citing Baseten's ability to guarantee GPU resources as a key differentiator. Spark Capital's Will Reed highlights the necessity of exceptional inference performance for successful AI projects, noting that choosing the right partner is crucial for product success. Baseten's focus on fast, reliable inference has enabled it to attract a significant customer base and achieve rapid revenue growth.
Key takeaways
- Baseten Labs Inc. has raised $75 million in a Series C funding round, bringing its total funding to $135 million, to enhance high-performance inference for AI workloads.
- The company provides an AI inference platform that allows enterprises to run large language models either in the cloud or on-premises, ensuring fast and reliable access to GPU resources.
- Baseten claims its services can reduce inference costs by around 40% for customers, contributing to a six-fold revenue growth over the past year.
- With more than 100 enterprise customers, including Patreon, Writer, and Descript, Baseten focuses on speed, reliability, and cost-efficiency to help companies bring high-quality AI products to market quickly.