GitHub - mr-gpt/deepeval: PyTest For LLMs

Aug 17, 2023 - github.com
DeepEval is a Python-based tool for running offline evaluations of large language model (LLM) pipelines, making it easier to launch them into production. It aims to make productionizing and evaluating LLMs as straightforward as Pytest makes testing in conventional software engineering. DeepEval is particularly useful for Python developers building production-grade apps, providing a clean interface for quickly writing tests. It also fills a gap in the market for software-like tooling and abstractions for machine learning engineers, shortening the feedback loop between iterations.
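To make the "Pytest for LLMs" idea concrete, here is a minimal sketch of the pattern the article describes: wrap a pipeline call in an ordinary test function, score the output with a metric, and assert a threshold. This is plain Python, not DeepEval's actual API; `llm_pipeline` and `keyword_recall` are hypothetical stand-ins for a real LLM call and a real evaluation metric.

```python
def llm_pipeline(query: str) -> str:
    # Hypothetical stand-in for a real LLM pipeline call.
    return "DeepEval helps you write offline tests for LLM pipelines."


def keyword_recall(output: str, expected_keywords: list[str]) -> float:
    """Toy metric: fraction of expected keywords found in the output."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in output.lower())
    return hits / len(expected_keywords)


def test_pipeline_mentions_key_concepts():
    # Pytest-style test: run the pipeline, score it, assert a threshold.
    output = llm_pipeline("What does DeepEval do?")
    score = keyword_recall(output, ["offline", "tests", "LLM"])
    assert score >= 0.8, f"metric below threshold: {score:.2f}"
```

Because such tests run offline against fixed inputs and metrics, they can gate a deployment the same way unit tests gate a software release.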

The tool offers features such as individual and bulk test cases, custom metrics, and integration with frameworks like LangChain. It also supports synthetic query generation, allowing developers to quickly evaluate queries related to their prompts. A dashboard feature, which will provide information about the pipeline and the run, is in the works. DeepEval was developed by the Confident AI Team.

Key takeaways:

  • DeepEval is a Pythonic tool designed to run offline evaluations on LLM pipelines, aiming to make productionizing and evaluating LLMs as easy as software engineering.
  • It provides a clean interface for quickly writing tests for LLM applications, and is especially useful for machine learning engineers, whose feedback signal is often limited to an evaluation loss.
  • DeepEval integrates tightly with common frameworks such as LangChain and LlamaIndex, and also supports synthetic query generation for quick evaluation of queries related to prompts.
  • The tool is currently being developed by the Confident AI Team, with future plans including a web UI, support for more metrics, and a dashboard for pipeline and run information.
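The synthetic query generation mentioned above can be pictured with a small sketch. DeepEval itself synthesizes queries with an LLM; this illustrative stand-in (`generate_synthetic_queries` is a hypothetical name, not a DeepEval function) uses fixed templates just to show the workflow of expanding one topic into several evaluation queries.

```python
def generate_synthetic_queries(topic: str) -> list[str]:
    """Toy stand-in: expand a topic into query variants via templates.
    DeepEval's real feature uses an LLM to synthesize such queries."""
    templates = [
        "What is {t}?",
        "How do I use {t}?",
        "Why does {t} matter?",
    ]
    return [tpl.format(t=topic) for tpl in templates]


# Each synthetic query can then be fed through the pipeline under test
# and scored in bulk, rather than hand-writing every test case.
queries = generate_synthetic_queries("retrieval-augmented generation")
```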
