GitHub - DeployQL/retri-evals: Retrieval Evaluation Pipelines

Jan 07, 2024 - github.com
The article discusses retri-eval, a RAG evaluation framework designed for faster iteration. It aims to be flexible enough to sit on top of existing document and query processing, to scale without increasing latency or costs, and to encourage reuse of components. The framework is built with MTEB, BEIR, and Pydantic. The article provides a guide to getting started with retri-eval: installation, defining data types, creating document and query processing pipelines, defining a retriever, and running MTEB tasks.
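The getting-started flow described above (define data types, build a document processing pipeline, define a retriever, then evaluate) can be sketched in plain Python. All names below (`Document`, `Query`, `process_documents`, `ToyRetriever`) are hypothetical illustrations of the shape of those steps, not retri-eval's actual API; the real framework uses Pydantic models for its data types and plugs the retriever into MTEB tasks.

```python
from dataclasses import dataclass

# Hypothetical sketch of the steps described in the article.
# None of these names come from retri-eval itself.

@dataclass
class Document:
    """Typed record for an indexed document (retri-eval uses Pydantic for this)."""
    doc_id: str
    text: str

@dataclass
class Query:
    """Typed record for a search query."""
    text: str

def process_documents(raw_texts):
    """Document processing pipeline: turn raw strings into typed records."""
    return [Document(doc_id=str(i), text=t) for i, t in enumerate(raw_texts)]

class ToyRetriever:
    """Word-overlap retriever standing in for a real embedding-based one."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query, k=2):
        # Rank documents by how many query tokens they share.
        q_tokens = set(query.text.lower().split())
        def overlap(doc):
            return len(set(doc.text.lower().split()) & q_tokens)
        return sorted(self.docs, key=overlap, reverse=True)[:k]

docs = process_documents([
    "retrieval evaluation pipelines for RAG",
    "cooking pasta at home",
    "evaluating retrieval with MTEB tasks",
])
retriever = ToyRetriever(docs)
hits = retriever.retrieve(Query(text="retrieval evaluation"), k=2)
print([h.doc_id for h in hits])  # → ['0', '2']
```

The point of the component split is the one the article makes: the retriever is defined separately from document and query processing, so either side can be swapped or reused without rewriting the evaluation harness.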

The article also shares results from running retri-eval on MTEB tasks and outlines a roadmap for the framework's future development: support for reranking models, hybrid retrieval baselines, automatic dataset generation, parallel execution, and latency and cost benchmarks. The framework is currently integrated into MTEB for retrieval tasks only, with more integrations in progress. The article concludes by inviting readers to reach out for further discussion and acknowledging MTEB's contribution to the project.

Key takeaways:

  • retri-eval is a RAG evaluation framework designed to be flexible and scalable and to encourage reuse of components.
  • It is built with MTEB, BEIR, and Pydantic, and its guide walks through defining data types, creating document and query processing pipelines, and defining a retriever.
  • The roadmap for retri-eval includes adding support for reranking models, hybrid retrieval baselines, automatic dataset generation, parallel execution, and latency and cost benchmarks.
  • retri-eval is currently integrated into MTEB for retrieval tasks only, but there are plans to expand this in the future.
