
Amazon will offer human benchmarking teams to test AI models

Nov 30, 2023 - theverge.com
Amazon Web Services (AWS) has introduced Model Evaluation on Bedrock, a tool that allows users to evaluate the AI models available in its model repository, Amazon Bedrock. The tool, announced at the AWS re:Invent conference, includes automated and human evaluation components. Developers can test a model's performance on metrics such as robustness, accuracy, or toxicity for tasks like summarization, text classification, question answering, and text generation. Users can also work with an AWS human evaluation team, or their own, to assess qualities that automated systems can't detect, like empathy or friendliness.

The benchmarking service, currently in preview, is designed to help developers select the most suitable model for their projects and to ensure models meet responsible AI standards. While AWS provides test datasets, customers can also bring their own data into the benchmarking platform. While the service is in preview, AWS will charge only for the model inference used during the evaluation. The goal of benchmarking on Bedrock is not to evaluate models broadly, but to help companies measure the impact of a model on their specific projects.

Key takeaways:

  • AWS has announced Model Evaluation on Bedrock, a tool that allows users to evaluate the AI models available in its model repository, Amazon Bedrock. The tool is designed to help developers avoid choosing models that are not accurate enough, or are too large, for their needs.
  • Model Evaluation has two components: automated evaluation and human evaluation. The automated version allows developers to test a model's performance on various metrics. If humans are involved, users can work with an AWS human evaluation team or their own.
  • AWS will not require all customers to benchmark models. However, companies that are still exploring which models to use could benefit from going through the benchmarking process.
  • While the benchmarking service is in preview, AWS will charge only for the model inference used during the evaluation. The goal of benchmarking on Bedrock is not to evaluate models broadly, but to offer companies a way to measure the impact of a model on their specific projects.
