Additionally, users can now specify RAG and finetune data sources in a Helix App's `helix.yaml` to customize an assistant with a RAG data source or a fine-tuned LLM. To do this, run a RAG or finetune session to generate a data source ID, which can be retrieved from the session and placed in a `helix.yaml` file in a GitHub repo. This `rag_source_id` can also be overridden per request as an API parameter. Finetune data sources work the same way: the ID is shown as `finetune_data_entity_id` in the info panel and is specified in `helix.yaml` as `lora_id`.
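As a rough illustration, a `helix.yaml` wiring both IDs into an assistant might look like the sketch below. The surrounding structure (the `assistants` list and `name`/`model` fields) is an assumption for illustration; only the `rag_source_id` and `lora_id` field names come from the description above, and the ID values are placeholders.

```yaml
# Hypothetical helix.yaml sketch -- structure is assumed, not authoritative.
assistants:
  - name: my-assistant
    # ID produced by a RAG session (shown in the session info panel)
    rag_source_id: <your-rag-data-source-id>
    # ID produced by a finetune session
    # (shown as finetune_data_entity_id in the info panel)
    lora_id: <your-finetune-data-entity-id>
```

Either ID can then be swapped out at request time via the corresponding API parameter, so the same app definition can serve different data sources.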
Key takeaways:
- Helix now supports RAG. Users can upload documents and perform RAG over them from the homepage.
- The terms 'inference' and 'finetune' have been replaced with 'chat' and 'learn' for a more user-friendly experience.
- The default Learn mode is now RAG as it is faster than fine-tuning and better at retrieving specific facts.
- Users can now specify RAG and finetune data sources in a Helix App's `helix.yaml` to customize an assistant with a RAG data source or fine-tuned LLM.