AI chip race: Groq CEO takes on Nvidia, claims most startups will use speedy LPUs by end of 2024

Feb 23, 2024 - venturebeat.com
Silicon Valley-based company Groq, which creates AI chips for large language model (LLM) inference, has been gaining attention in the AI industry. The company's CEO, Jonathan Ross, claims that Groq's language processing units (LPUs) are faster and cheaper than Nvidia's GPUs, particularly for LLM use. Ross also stated that Groq's LPUs are designed to handle sequences of data, such as code and natural language, more efficiently than GPUs or CPUs, and that the company's chat interface keeps queries private as it does not train models and therefore does not need to log any data.

Despite Nvidia's current dominance in the high-end chip market, Ross believes that Groq's offering could be a game-changer in AI inference. He claims that ChatGPT could run more than 13 times faster if powered by Groq chips, and that Groq is likely to be the infrastructure most startups are using by the end of the year. Ross also said that Groq is working with countries to deploy hardware that would expand its capacity, with a goal of 25 million tokens per second of capacity by the end of the year.

Key takeaways:

  • Groq, a Silicon Valley-based company, is creating new AI chips for large language model (LLM) inference and has recently gained significant attention in the AI community.
  • Groq's LPUs (language processing units) are designed to provide faster inference for computationally intensive applications with a sequential component, such as AI language applications, and are claimed to be faster than Nvidia's GPUs.
  • Groq CEO Jonathan Ross claims that the company's offering is a cheaper, super-fast option for LLM use and expects that most startups will be using their infrastructure by the end of the year.
  • Ross also suggests that Groq could potentially partner with OpenAI, as their LPUs could significantly speed up the performance of OpenAI's ChatGPT.
