Galileo offers new tools to explain why your AI model is hallucinating

Sep 19, 2023 - venturebeat.com
San Francisco-based AI startup Galileo has launched new monitoring and metrics capabilities to help users better understand and explain the output of large language models (LLMs). The new features, part of an update to the Galileo LLM Studio, let users evaluate the prompts and context of all inputs and observe the outputs in real time. The company says this will provide better insight into why model outputs are generated, along with new metrics and guardrails for optimizing LLMs.

The update also includes new guardrail metrics, which limit what the model can generate in terms of information, tone, and language. Another metric, "groundedness," determines whether a model's output stays within the bounds of the training data it was provided. These features aim to help developers adjust and fine-tune models for the best results, and to ensure that the model never goes off the rails in terms of accuracy, language, or disclosure of confidential information.
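
To make the idea concrete, here is a minimal, purely illustrative sketch of what a groundedness-style check could look like. It uses a simple word-overlap heuristic between a model's response and the source text it was given; this is not Galileo's implementation, and every name in it is hypothetical.

    # Hypothetical sketch of a groundedness-style check (not Galileo's actual metric):
    # score how much of a model's response is supported by the source text it was given.
    import re

    def tokenize(text: str) -> set[str]:
        """Lowercase word tokens, ignoring punctuation."""
        return set(re.findall(r"[a-z0-9']+", text.lower()))

    def groundedness_score(response: str, source: str) -> float:
        """Fraction of response tokens that also appear in the source text.
        A score near 1.0 means nearly every word of the response occurs in the
        source; a low score suggests the response may contain unsupported content."""
        response_tokens = tokenize(response)
        if not response_tokens:
            return 1.0
        source_tokens = tokenize(source)
        return len(response_tokens & source_tokens) / len(response_tokens)

    if __name__ == "__main__":
        source = "Galileo LLM Studio adds guardrail and groundedness metrics for LLM outputs."
        response = "Galileo LLM Studio adds groundedness metrics."
        print(f"groundedness: {groundedness_score(response, source):.2f}")  # close to 1.0

A production metric would use far more robust techniques than word overlap, but the shape of the check is the same: compare what the model said against what it was actually given.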

Key takeaways:

  • San Francisco-based AI startup Galileo has introduced new monitoring and metrics capabilities to help users better understand and explain the output of large language models (LLMs).
  • Galileo LLM Studio now allows users to evaluate the prompts and context of all inputs, observe the outputs in real time, and gain insight into why model outputs are generated.
  • The company has also introduced guardrail metrics, which limit what the model can generate in terms of information, tone, and language, and a groundedness metric to determine if a model’s output is within the bounds of the training data it was provided.
  • These new features aim to help developers adjust and fine-tune models to get the best results, and to ensure that AI models do not generate inaccurate or inappropriate responses.
