Other users suggest LM Studio, especially for M-series Macs, describing it as 'plug and play' and 'idiot proof'. LM Studio also lets users browse and search for models, and it runs a local server for API access. One user shares a positive experience with Ollama, calling it easy, fun, and fast; another expresses interest in learning more about Llama or Grok as potential options.
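To make the 'server for API needs' point concrete: LM Studio's local server speaks an OpenAI-compatible chat API. The sketch below assumes the default port (1234) and a locally loaded model name; both are assumptions, not confirmed by the thread.

```python
import json
import urllib.request

# Assumed default address of LM Studio's local OpenAI-compatible server.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, user_msg: str) -> dict:
    # OpenAI-style chat payload: a model name plus a list of role/content messages.
    return {"model": model, "messages": [{"role": "user", "content": user_msg}]}

def chat(model: str, user_msg: str) -> str:
    # Send the request to the local server and pull out the assistant's reply.
    data = json.dumps(build_chat_request(model, user_msg)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires LM Studio running with its server started and a model loaded.
    print(chat("local-model", "Say hello in one sentence."))
```

Because the request format matches OpenAI's, existing OpenAI client code can usually be pointed at the local server just by changing the base URL.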
Key takeaways:
- Ollama is recommended for running an LLM locally due to its ease of use and consistent interface for prompting.
- However, Ollama lacks batching support (running multiple generations in parallel), which Loom-like applications need in order to produce several completions at once.
- LM Studio is another option considered 'idiot proof', with an easy interface that lets users browse and search for models.
- There is a growing assumption that users will have a local, fast, decent LLM on their machines, making it easier to build applications.
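The 'consistent interface for prompting' in the first takeaway can be sketched against Ollama's local HTTP API. This is a minimal, hedged example assuming Ollama's default port (11434) and a pulled model such as `llama3`; the batching limitation shows up here as one prompt per request.

```python
import json
import urllib.request

# Ollama's default local generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # stream=False asks for a single JSON response instead of a token stream.
    # Note: one prompt per request -- no batching, per the takeaway above.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running `ollama serve` and a pulled model, e.g. `ollama pull llama3`.
    print(generate("llama3", "Why is the sky blue?"))
```

A Loom-like app would have to issue many such requests (e.g. from a thread pool) rather than one batched call, which is the gap the takeaway points at.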