The author has successfully run Llama 3 via Ollama on both a Mac Mini and a Raspberry Pi 5, but notes that both are far slower than a full workstation with a commodity GPU like the RTX 4090. They point to memory bandwidth as the key determinant of performance: the Mac Mini and the Raspberry Pi 5 have the same amount of RAM (8GB), yet perform very differently. They ask what other affordable devices people use for running LLMs locally.
Key takeaways:
- The author is asking what affordable hardware people use for running large language models locally.
- They define affordable as no more expensive than a current-generation base-model Mac Mini ($599), and ideally closer to the price of a Raspberry Pi 5 ($79).
- They have had success running models on both a Mac Mini and a Raspberry Pi 5, but performance differs markedly because of differences in memory bandwidth (see the sketch after this list).
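A rough way to see why memory bandwidth dominates: during autoregressive decoding, each generated token requires streaming essentially all model weights from memory once, so peak decode speed is bounded by roughly bandwidth divided by model size. Below is a minimal sketch of that estimate, assuming approximate published bandwidth figures and a 4-bit-quantized Llama 3 8B (~4 GB of weights); the numbers are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope decode-speed ceiling: each generated token streams all
# weights from memory once, so tokens/sec <= bandwidth / model_bytes.

# Approximate published peak bandwidths (GB/s) -- assumptions, not measurements.
DEVICES_GBPS = {
    "Raspberry Pi 5 (LPDDR4X)": 17,
    "Mac Mini (base M2, unified memory)": 100,
    "RTX 4090 (GDDR6X)": 1008,
}

MODEL_BYTES = 4e9  # Llama 3 8B at 4-bit quantization: roughly 4 GB of weights

for device, gbps in DEVICES_GBPS.items():
    ceiling = gbps * 1e9 / MODEL_BYTES
    print(f"{device}: ~{ceiling:.0f} tokens/sec theoretical ceiling")
```

Real throughput lands well below these ceilings once compute, caching, and software overhead are accounted for, but the ordering matches the author's observation: same RAM capacity, very different speeds.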