One user points out that a 13B model needs a system with at least 16GB of RAM, and even then performance might be sluggish. They suggest trying a few different 7B models at 4- and 5-bit quantizations on the Mac, which would outperform most other 8GB RAM systems. Another user notes that 32 GiB of DDR4 RAM costs only about $60 at a US retailer, provided a platform that supports it is chosen.
Key takeaways:
- The user is seeking a cost-effective solution to run local LLMs and is considering Raspberry Pi or Orange Pi models.
- For a 13B model, a system with at least 16GB RAM is recommended, and the user might need to consider cloud solutions for larger tasks.
- Running a quantized 7B model using LM Studio or Ollama on the user's M1 Mac is suggested as the best bet.
- There is a suggestion to consider platforms that support 32 GiB of DDR4 RAM, which costs around $60 on a US retail site.
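The RAM recommendations above follow from a simple back-of-the-envelope calculation: weight memory is roughly parameter count times bits per weight, divided by 8, plus some headroom for the KV cache and activations. A minimal sketch (the 20% overhead factor is an assumption for illustration, not a figure from the thread):

```python
def quantized_model_size_gib(n_params_billion: float, bits_per_weight: float,
                             overhead: float = 1.2) -> float:
    """Rough memory footprint in GiB for a quantized LLM's weights.

    params * bits / 8 gives raw weight bytes; the overhead factor
    (assumed ~20% here) covers KV cache and activations.
    """
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# 7B model at 4-bit quantization: ~3.9 GiB, fits comfortably in 8 GB RAM
print(round(quantized_model_size_gib(7, 4), 1))

# 13B model at 5-bit quantization: ~9 GiB, hence the 16 GB recommendation
print(round(quantized_model_size_gib(13, 5), 1))
```

This is why 4- and 5-bit 7B models are a good fit for an 8GB M1 Mac, while 13B models push the requirement to 16GB.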