GitHub - explodinggradients/ragas: Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines

Mar 21, 2024 - github.com
Ragas is a framework for evaluating the performance of Retrieval Augmented Generation (RAG) pipelines. RAG refers to a class of Large Language Model (LLM) applications that use external data to augment the LLM's context. While there are many tools for building these pipelines, evaluating and quantifying their performance is harder. Ragas provides tools based on the latest research for evaluating LLM-generated text, offering insights into your RAG pipeline, and it can be integrated with your CI/CD to run continuous performance checks.

The framework can be installed from source, and a small example program shows Ragas in action. The community is encouraged to get more involved with Ragas through the project's Discord server. The project also tracks basic usage metrics to understand user needs and improve the service, but no information that can identify a user or their company is collected. Users can disable usage tracking by setting the `RAGAS_DO_NOT_TRACK` flag to true, as in the sketch below.
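As a minimal sketch of opting out from within a script rather than the shell (assuming the flag is read from the environment when the library is imported or run):

```python
import os

# Opt out of Ragas' anonymous usage tracking.
# Assumption: the RAGAS_DO_NOT_TRACK flag is read from the environment
# at import/run time, so it must be set before the library is used.
os.environ["RAGAS_DO_NOT_TRACK"] = "true"

import ragas  # imported after setting the flag on purpose
```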

Key takeaways:

  • Ragas is a framework designed to evaluate the performance of Retrieval Augmented Generation (RAG) pipelines; it provides insights into the pipeline and can be integrated with CI/CD for continuous checks.
  • Installation involves cloning the repository from source and installing it with pip.
  • A quickstart example program demonstrates how Ragas can be used, with metrics such as faithfulness and answer correctness (see the sketch after this list).
  • Ragas also has a community on Discord where users can get more involved and discuss LLMs, retrieval, production issues, and more.
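For reference, here is a minimal sketch of the kind of quickstart the takeaways describe. It assumes an OpenAI API key is available (the default metric implementations call an LLM under the hood), and the toy sample data is illustrative; the column names (`question`, `answer`, `contexts`, `ground_truth`) follow the Ragas quickstart conventions.

```python
import os

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, faithfulness

# Assumption: the default metrics call an LLM, so an API key must be set.
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# A toy sample in the layout the quickstart expects: the question asked,
# the generated answer, the retrieved contexts, and a reference answer.
data = {
    "question": ["When was the first Super Bowl played?"],
    "answer": ["The first Super Bowl was played on January 15, 1967."],
    "contexts": [[
        "The First AFL-NFL World Championship Game was played on "
        "January 15, 1967, at the Los Angeles Memorial Coliseum."
    ]],
    "ground_truth": ["The first Super Bowl was played on January 15, 1967."],
}

dataset = Dataset.from_dict(data)

# Score the sample on faithfulness and answer correctness.
result = evaluate(dataset, metrics=[faithfulness, answer_correctness])
print(result)
```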
