
Ask HN: Recommendations for Local LLMs in 2024: Private and Offline?

Apr 06, 2024 - news.ycombinator.com
The post is an Ask HN question from user j4ckie seeking recommendations for a local large language model (LLM) that can run entirely offline for processing personal documents. The user emphasizes privacy and performance, is open to both open-source and commercial solutions available in 2024, and asks about the current state of local LLMs: are they practical and useful yet?

Two responses suggest a fine-tuned Mistral-7B-Instruct-v0.2 model and Llama 2, respectively. The first respondent praises the Mistral model's performance on their hardware and recommends Python for personal use because of its ease of use. The second also suggests using Python with Llama 2 and advises investing in a powerful PC for the workload.
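As a concrete illustration of the Mistral recommendation, here is a minimal sketch of preparing a prompt for Mistral-7B-Instruct-v0.2, which uses the `[INST] ... [/INST]` instruction format. The helper name and the idea of pasting a personal document into the prompt are illustrative assumptions, not details from the thread.

```python
# Sketch: wrap a question about a local document in Mistral's
# [INST] ... [/INST] instruct format before sending it to a locally
# hosted model. The helper name is hypothetical.
def build_instruct_prompt(question: str, document: str) -> str:
    """Build a Mistral-7B-Instruct-v0.2 style prompt for document Q&A."""
    return (
        "<s>[INST] Answer using only the document below.\n\n"
        f"Document:\n{document}\n\n"
        f"Question: {question} [/INST]"
    )

prompt = build_instruct_prompt("What is the total?", "Invoice total: $42.00")
print(prompt.startswith("<s>[INST]"))  # True
```

The resulting string can be passed to any local runtime that serves the model; the exact loading step depends on which runtime is chosen.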

Key takeaways:

  • The user is seeking recommendations for a local LLM that can operate entirely offline, prioritizing privacy and performance.
  • The user is interested in both open-source and commercial solutions available in 2024, and is curious about the current state of local LLMs.
  • One recommendation is a product using a Mistral-7B-Instruct-v0.2 model, which works well on both an RTX 3090 and an M1 MacBook Pro. The respondent suggests Rust for building a product, but Python for personal use due to its ease of use.
  • Another recommendation is Llama 2, which can be compiled and run in multiple ways. The respondent suggests sticking with Python unless the user is comfortable with C++, and recommends investing in a powerful PC to handle the workload.
