The article also shares the results of running retri-eval on MTEB tasks and outlines a roadmap for the framework's future development, including support for reranking models and hybrid retrieval baselines, automatic dataset generation, parallel execution, and latency and cost benchmarks. The framework is currently integrated into MTEB for retrieval tasks only, with more integrations in progress. The article concludes by inviting readers to reach out for further discussion and acknowledging MTEB's contribution to the project.
Key takeaways:
- retri-eval is a RAG evaluation framework designed to be flexible and scalable and to encourage component reuse.
- It is built with MTEB, BEIR, and Pydantic; the article provides a detailed guide on defining data types, creating document and query processing pipelines, and defining a retriever (see the sketch after this list).
- The roadmap for retri-eval includes adding support for reranking models, hybrid retrieval baselines, automatic dataset generation, parallel execution, and latency and cost benchmarks.
- retri-eval is currently integrated into MTEB for retrieval tasks only, but there are plans to expand this in the future.
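To make the three pieces the guide covers concrete, here is a minimal sketch of a Pydantic data type, a document processing pipeline, and a retriever. The class and method names (`ProcessedDocument`, `DocumentPipeline`, `Retriever`, and their signatures) are illustrative assumptions for this sketch, not retri-eval's actual API.

```python
# Sketch of the three components the guide walks through. Names and
# interfaces here are assumptions for illustration, not retri-eval's API.
from typing import List
from pydantic import BaseModel


class ProcessedDocument(BaseModel):
    """A chunk of a source document, ready for indexing (assumed shape)."""
    doc_id: str
    text: str


class DocumentPipeline:
    """Turns raw corpus documents into indexable ProcessedDocuments."""

    def process(self, doc_id: str, text: str) -> List[ProcessedDocument]:
        # Naive fixed-size chunking; a real pipeline might clean text,
        # split on sentence boundaries, and attach embeddings here.
        chunk_size = 512
        return [
            ProcessedDocument(doc_id=f"{doc_id}-{i}", text=text[i : i + chunk_size])
            for i in range(0, len(text), chunk_size)
        ]


class Retriever:
    """Indexes processed documents and answers queries (assumed interface)."""

    def __init__(self) -> None:
        self._index: List[ProcessedDocument] = []

    def index(self, docs: List[ProcessedDocument]) -> None:
        self._index.extend(docs)

    def retrieve(self, query: str, k: int = 10) -> List[ProcessedDocument]:
        # Toy lexical-overlap scoring as a stand-in for vector search.
        terms = set(query.lower().split())
        scored = sorted(
            self._index,
            key=lambda d: len(terms & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:k]
```

Separating the pipeline from the retriever is what makes component reuse possible: the same `DocumentPipeline` can feed different retrievers, and the same `Retriever` interface can be benchmarked across MTEB tasks without changing the processing code.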