Inspect

May 10, 2024 - ukgovernmentbeis.github.io
The article introduces Inspect, a framework for evaluating large language models developed by the UK AI Safety Institute. Inspect offers built-in components for prompt engineering, tool usage, multi-turn dialog, and model-graded evaluations, and it supports extensions for new elicitation and scoring techniques. The framework can be used with model providers including OpenAI, Anthropic, Google, Mistral, Hugging Face, and Together, as well as models hosted on Azure AI, AWS Bedrock, and Cloudflare.
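As a minimal sketch of how provider selection works (assuming the inspect-ai Python package is installed and the relevant API key is available as an environment variable; the model identifiers below are illustrative placeholders):

```python
from inspect_ai.model import get_model

# The provider is chosen by the prefix of the model name; the matching API key
# (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY, MISTRAL_API_KEY) is read from the environment.
openai_model = get_model("openai/gpt-4")
anthropic_model = get_model("anthropic/claude-3-opus-20240229")
mistral_model = get_model("mistral/mistral-large-latest")
```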

The article provides a step-by-step guide on how to use Inspect, starting with installation and setting up the environment. It then explains the three main components of Inspect evaluations: Datasets, Solvers, and Scorers, and walks through a simple example evaluation. The article concludes by providing resources for learning more about Inspect, including sections on Workflow, Log Viewer, VS Code, Examples, Solvers, Tools, Scorers, Datasets, and Models. It also discusses advanced features and workflows such as Eval Logs, Eval Tuning, and Eval Suites.
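To make the three components concrete, here is a minimal sketch of an evaluation in the style of the quickstart in the Inspect documentation (the tiny arithmetic dataset and the choice of model are illustrative, and exact parameter names may vary between Inspect versions):

```python
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def addition():
    return Task(
        # Dataset: a set of labeled samples (input plus target answer).
        dataset=[Sample(input="What is 2 + 2?", target="4")],
        # Solvers: here a single generate() step that calls the model.
        plan=[generate()],
        # Scorer: compares the model output against the target.
        scorer=match(),
    )

# Run the evaluation against a specific provider/model.
eval(addition(), model="openai/gpt-4")
```

The same task can also be launched from the command line (e.g. inspect eval addition.py --model openai/gpt-4), which is the workflow the article's step-by-step guide describes.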

Key takeaways:

  • Inspect is a framework for large language model evaluations, providing built-in components for prompt engineering, tool usage, multi-turn dialog, and model-graded evaluations.
  • Inspect supports models from providers including OpenAI, Anthropic, Google, Mistral, Hugging Face, and Together, as well as models hosted on Azure AI, AWS Bedrock, and Cloudflare.
  • Inspect evaluations consist of three main components: Datasets, Solvers, and Scorers. Datasets contain a set of labeled samples, Solvers act on the dataset input (e.g. prompting, model generation, and tool use) to produce a final output, and Scorers evaluate that final output.
  • Inspect provides a uniform API for evaluating a variety of large language models and for using models within evaluations (see the sketch after this list). It also offers features for describing, running, and analysing larger sets of evaluation tasks.
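As a sketch of that uniform model API (the model name, prompt, and the ask helper are illustrative; generate() is asynchronous, so it is wrapped with asyncio here):

```python
import asyncio

from inspect_ai.model import get_model

async def ask(prompt: str) -> str:
    # The same generate() call is used regardless of the underlying provider.
    model = get_model("anthropic/claude-3-opus-20240229")
    output = await model.generate(prompt)
    return output.completion

print(asyncio.run(ask("Name the capital of France.")))
```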