Two responses suggest using a fine-tuned Mistral-7B-Instruct-v0.2 model and Llama 2, respectively. The first respondent praises the Mistral model's performance on their hardware and recommends Python for personal use because of its ease of use. The second respondent also suggests running Llama 2 with Python and advises investing in a powerful PC for the task.
Key takeaways:
- The user is seeking recommendations for a local LLM that can operate entirely offline, prioritizing privacy and performance.
- The user is interested in both open-source and commercial solutions available in 2024, and is curious about the current state of local LLMs.
- One recommendation is a product built on a Mistral-7B-Instruct-v0.2 model, which works well on both an RTX 3090 and an M1 MacBook Pro. The respondent suggests Rust for building a product, but Python for personal use because of its ease of use.
- Another recommendation is Llama 2, which can be compiled in multiple ways. The respondent suggests sticking with Python unless the user is comfortable with C++, and recommends investing in a powerful PC to handle the workload (see the sketch after this list).
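To make the Python route concrete, here is a minimal sketch of running Mistral-7B-Instruct-v0.2 fully offline with llama-cpp-python (the Python bindings for llama.cpp, which avoid working in C++ directly). The model path and quantization level are assumptions: download a GGUF file once, and no network access is needed afterward.

```python
from llama_cpp import Llama

# Load a locally stored quantized model file; the path is hypothetical.
llm = Llama(
    model_path="./models/mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU (e.g. an RTX 3090); set 0 for CPU-only
)

# Mistral-Instruct models expect the prompt wrapped in [INST] ... [/INST] tags.
output = llm(
    "[INST] Summarize the benefits of running an LLM locally. [/INST]",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

The same script runs unchanged on an M1 MacBook Pro, since llama.cpp can use Metal for GPU offload there; only the quantization level might need adjusting to fit available memory.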