The article also provides detailed installation instructions for the Bodhi App, either via Homebrew or by downloading the latest release from the GitHub releases page. It offers a quick-start guide, explains the difference between text generation and chat completions, and lists other popular models. Bodhi App supports creating a model config alias from GGUF files hosted on Huggingface, and a Huggingface model can be converted to GGUF format using the `gguf` Python library. The article concludes with a comprehensive guide to the Bodhi CLI, including commands for viewing and editing aliases, running an OpenAI-compatible API server, and more.
Key takeaways:
- Bodhi App runs Open Source Large Language Models (LLMs) locally, providing LLM features without paid remote API calls.
- The app uses llama.cpp to run open-source model files in GGUF format and leverages the huggingface.co ecosystem, saving local storage and bandwidth.
- Bodhi App supports creating a model config alias from GGUF files hosted on Huggingface; a Huggingface model can be converted to GGUF format using the `gguf` Python library.
- Bodhi App can be installed via Homebrew or downloaded from the GitHub releases page, and it provides an OpenAI-compatible API server for querying the chat completions endpoint.
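Since the server is OpenAI-compatible, a request to its chat completions endpoint follows the standard OpenAI request schema. The sketch below only builds such a request body; the host, port, and model alias shown are assumptions and depend entirely on your local Bodhi App configuration.

```python
import json

# Hypothetical values -- the actual host/port and model alias depend on
# your local Bodhi App setup; adjust both before sending a real request.
BASE_URL = "http://localhost:1135/v1/chat/completions"
MODEL_ALIAS = "llama3:instruct"

# An OpenAI-style chat completions request body: a model identifier
# (here, a Bodhi model config alias) plus a list of chat messages.
payload = {
    "model": MODEL_ALIAS,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What day comes after Monday?"},
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

With the local server running, the same JSON body could be POSTed to `BASE_URL` with any HTTP client (e.g. `curl` or the `requests` library), just as one would call the hosted OpenAI API.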