
Ask HN: Affordable hardware for running local large language models?

May 05, 2024 - news.ycombinator.com
The author asks about affordable hardware for running large language models (LLMs) locally. They reference a previous post about running Stable Diffusion on a Raspberry Pi Zero 2, which was slow but impressive, and ask what affordable options now exist for this kind of workload. They define affordable as no more expensive than a current-generation base-model Mac Mini ($599), and ideally closer to the price of a Raspberry Pi 5 ($79). They note that while some people run models on flagship smartphones, those phones often perform worse than a Mac Mini while costing more.

The author has successfully run Llama 3 via Ollama on both a Mac Mini and a Raspberry Pi 5, but notes that both are much slower than a full workstation with a commodity GPU such as the RTX 4090. They want to hear what other affordable devices people use to run LLMs locally. The author also highlights memory bandwidth as the key factor in performance: the Mac Mini and the Raspberry Pi 5 have the same amount of RAM (8 GB), yet their token throughput differs substantially because their memory bandwidth does.
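
For readers who want to reproduce this kind of comparison, the sketch below shows one way to query a locally running Ollama server and read back its decode speed. It assumes Ollama is installed, the llama3 model has already been pulled, and the server is listening on its default port (11434); the prompt text is just a placeholder.

    import json
    import urllib.request

    # Assumes `ollama serve` is running locally and `ollama pull llama3`
    # has already downloaded the model; 11434 is Ollama's default port.
    payload = json.dumps({
        "model": "llama3",
        "prompt": "Explain memory bandwidth in one sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # Ollama reports eval_count (generated tokens) and eval_duration
    # (nanoseconds), which together give the decode speed.
    tokens_per_sec = result["eval_count"] / (result["eval_duration"] / 1e9)
    print(result["response"])
    print(f"decode speed: {tokens_per_sec:.1f} tokens/s")

Running the same script against each device gives directly comparable tokens-per-second numbers for whatever model and quantization you choose.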

Key takeaways:

  • The author is looking for affordable hardware options for running large language models locally.
  • They define affordable as no more expensive than a current-generation base-model Mac Mini ($599), but ideally around the price of a Raspberry Pi 5 ($79).
  • They have had success running models on both a Mac Mini and a Raspberry Pi 5, but performance varies with memory bandwidth (see the back-of-envelope sketch after this list).
  • They are interested in hearing which other affordable devices people use for running LLMs locally.
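
To make the memory-bandwidth point concrete, here is a rough back-of-envelope sketch. During decoding, each generated token requires streaming roughly all of the model's weights through memory, so peak token throughput is approximately memory bandwidth divided by model size. The bandwidth figures below are approximate published specs, not measurements, and the model size assumes Llama 3 8B quantized to about 4 bits per weight.

    # Back-of-envelope decode-speed estimate:
    #   tokens/s  ≈  memory bandwidth / model size
    # Bandwidth numbers are approximate published specs (assumptions,
    # not measurements); real throughput is lower due to compute and
    # framework overhead, so treat these as upper bounds.

    MODEL_SIZE_GB = 4.7  # Llama 3 8B at ~4-bit quantization (approximate)

    devices = {
        "Raspberry Pi 5 (LPDDR4X)": 17,  # GB/s, approx.
        "Mac Mini, base model": 100,     # GB/s, approx. unified memory
        "RTX 4090 (GDDR6X)": 1008,       # GB/s, approx.
    }

    for name, bandwidth_gb_s in devices.items():
        estimate = bandwidth_gb_s / MODEL_SIZE_GB
        print(f"{name}: ~{estimate:.0f} tokens/s upper bound")

Even as a crude model, this explains the gap the author observed: the Pi 5 and the Mac Mini hold the same 8 GB of RAM, but the Mac Mini's memory is several times faster, and the RTX 4090 is an order of magnitude faster still.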