GitHub - glance-io/steer-backend: A repository for refactored steer AI backend

May 07, 2024 - github.com
The article provides a guide on how to set up and run Steer, a lightweight backend service for a grammar assistant app. The service can also be used as a reference for LLM token streaming with the OpenAI SDK and FastAPI. To get started, you need Python and pip installed on your system; the necessary Python packages are listed in the `requirements.txt` file. You also need to copy the `.env.example` and `config.example.yaml` files and fill in the required values.
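The summary does not say how Steer consumes these values at startup, but a common FastAPI pattern is to load the `.env` file into the environment and parse the YAML config when the app boots. The sketch below assumes `python-dotenv` and `PyYAML`; the key and field names are illustrative, not the repo's actual schema.

```python
# Hypothetical settings loading, assuming the filled-in copies of
# .env.example and config.example.yaml are named .env and config.yaml.
import os

import yaml                      # PyYAML
from dotenv import load_dotenv   # python-dotenv

load_dotenv()                    # read .env values into os.environ

with open("config.yaml") as f:   # filled-in copy of config.example.yaml
    config = yaml.safe_load(f)

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]    # assumed variable name
MODEL_NAME = config.get("model", "gpt-4o-mini")  # assumed config field
```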

To install, clone the repository, navigate to the project directory, and install the required packages using pip. The application can be run with `uvicorn main:app --reload` or with Docker, and will be available at `http://localhost:80`. The project is structured into several modules and services; for those interested in LLM integration, the LLM service and the Rewrite service are the most interesting parts. Endpoint documentation is available at `/docs` while the application is running.
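Since the repository is positioned as a reference for LLM token streaming with the OpenAI SDK and FastAPI, here is a minimal sketch of that pattern. The `/rewrite` path, request model, model name, and system prompt are assumptions for illustration, not Steer's actual Rewrite service API.

```python
# Minimal token-streaming sketch with FastAPI and the OpenAI SDK (>= 1.0).
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
from pydantic import BaseModel

app = FastAPI()
client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


class RewriteRequest(BaseModel):
    text: str


@app.post("/rewrite")  # hypothetical endpoint, not Steer's documented route
async def rewrite(req: RewriteRequest):
    async def token_stream():
        stream = await client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[
                {"role": "system", "content": "Fix grammar, keep the meaning."},
                {"role": "user", "content": req.text},
            ],
            stream=True,
        )
        async for chunk in stream:
            delta = chunk.choices[0].delta.content
            if delta:
                yield delta  # forward each token as soon as it arrives

    return StreamingResponse(token_stream(), media_type="text/plain")
```

Streaming the response this way lets the grammar assistant render corrections token by token instead of waiting for the full completion.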

Key takeaways:

  • The project 'Steer' is a lightweight backend service for a grammar assistant app, and can be used as inspiration for LLM token streaming with OpenAI SDK and FastAPI.
  • It requires Python and pip installed on your system, with the required Python packages listed in the 'requirements.txt' file.
  • The application can be run with `uvicorn main:app --reload` or with Docker, and will be available at `http://localhost:80`, exposed via Nginx (a client sketch for exercising the streamed endpoint follows this list).
  • The project is structured into several modules and services, with the most interesting parts for those interested in LLM integration being the LLM service and the Rewrite service.
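Once the server is reachable at `http://localhost:80`, the streamed output can be checked end to end with a small client. The snippet below uses `httpx`; the endpoint and payload mirror the hypothetical sketch above and are assumptions, not Steer's documented API.

```python
# Hypothetical client that prints tokens as they stream from the server.
import httpx

with httpx.stream(
    "POST",
    "http://localhost:80/rewrite",            # assumed endpoint
    json={"text": "Their going to the park tomorow."},
    timeout=None,                              # keep the stream open
) as response:
    for token in response.iter_text():
        print(token, end="", flush=True)       # tokens arrive incrementally
```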