5 steps to ensure startups successfully deploy LLMs | TechCrunch

Jan 05, 2024 - news.bensbites.co
The article discusses the rise of large language models (LLMs) like ChatGPT, Google's LaMDA, BLOOM, Meta's LLaMA, and Anthropic's Claude, and their potential applications in various sectors such as life sciences, pharmaceuticals, insurance, and finance. However, it also highlights the challenges associated with deploying LLMs, including their tendency to generate incorrect information and the significant operational costs involved.

The article further elaborates on the financial and energy costs of running LLMs. The hardware, such as Nvidia's H100 GPU, is expensive, with an estimated $240 million for the GPUs alone to train an LLM comparable to GPT-3.5. Energy consumption is also significant: training such a model requires about 10 GWh, and running a model like GPT-3.5 consumes about 1 GWh a day. Heavy energy use could also degrade the user experience on portable devices through rapid battery drain.
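
To put these figures in perspective, here is a rough back-of-the-envelope calculation in Python. The $30,000-per-H100 price and the $0.10/kWh electricity rate are assumptions for illustration only; the $240 million GPU budget, the 10 GWh training estimate, and the 1 GWh-per-day serving estimate are the figures cited above.

  # Back-of-the-envelope arithmetic for the cost figures cited in the article.
  # Assumed (not from the article): ~$30,000 per Nvidia H100 and ~$0.10/kWh power.

  H100_UNIT_PRICE_USD = 30_000        # assumed unit price per H100 GPU
  GPU_BUDGET_USD = 240_000_000        # article's estimated GPU spend

  TRAINING_ENERGY_GWH = 10            # article's training energy estimate
  SERVING_ENERGY_GWH_PER_DAY = 1      # article's daily serving estimate
  ELECTRICITY_USD_PER_KWH = 0.10      # assumed industrial electricity rate

  KWH_PER_GWH = 1_000_000

  gpus_implied = GPU_BUDGET_USD / H100_UNIT_PRICE_USD
  training_power_cost = TRAINING_ENERGY_GWH * KWH_PER_GWH * ELECTRICITY_USD_PER_KWH
  yearly_serving_cost = (SERVING_ENERGY_GWH_PER_DAY * 365
                         * KWH_PER_GWH * ELECTRICITY_USD_PER_KWH)

  print(f"GPUs implied by the budget:     {gpus_implied:,.0f}")
  print(f"Training electricity (one run): ${training_power_cost:,.0f}")
  print(f"Serving electricity (per year): ${yearly_serving_cost:,.0f}")

Under these assumed prices, the $240 million budget buys roughly 8,000 H100s, a training run costs about $1 million in electricity, and continuous serving at 1 GWh per day adds roughly $36.5 million per year, which illustrates why the article flags operating expense as a primary deployment risk.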

Key takeaways:

  • Large language models (LLMs) are becoming increasingly popular, and many businesses plan to deploy them within the next year, despite challenges such as their tendency to generate incorrect information.
  • A main challenge of using LLMs is their high operating expense, driven by the intense computational demands of training and running them.
  • The hardware required to run these models, such as Nvidia's H100 GPU, is expensive, with an estimated $240 million in GPUs alone to train an LLM comparable to GPT-3.5.
  • Energy consumption is another significant expense and potential pitfall, especially for the user experience on portable devices, where heavy use could quickly drain the battery.