The update also includes new guardrail metrics, which constrain what the model can generate in terms of information, tone, and language. Another metric, "groundedness," determines whether a model's output stays within the bounds of the training data it was provided. These features aim to help developers adjust and fine-tune models for the best results, and to keep the model from going off the rails on accuracy, language, or disclosure of confidential information.
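The article does not describe how Galileo computes groundedness, but the idea — scoring how much of a model's output is supported by the data it was given — can be illustrated with a deliberately naive word-overlap check. The function and variable names below are hypothetical and for illustration only; production metrics typically use semantic similarity or entailment models rather than lexical overlap.

```python
# Illustrative sketch only: a naive "groundedness"-style check that scores
# the fraction of an output's content words that also appear in the supplied
# context. This is a hypothetical example, not Galileo's actual metric.

def groundedness_score(output: str, context: str) -> float:
    """Fraction of content words in the output that also appear in the context."""
    stopwords = {"the", "a", "an", "is", "are", "was", "of", "to",
                 "and", "in", "on", "by", "it"}
    out_words = [w for w in output.lower().split() if w not in stopwords]
    ctx_words = set(context.lower().split())
    if not out_words:
        return 1.0  # an empty output makes no unsupported claims
    supported = sum(1 for w in out_words if w in ctx_words)
    return supported / len(out_words)

context = "galileo studio lets users evaluate prompts and observe outputs in real time"
grounded = "galileo studio lets users observe outputs in real time"
ungrounded = "the tool was founded on the moon by pirates"

print(groundedness_score(grounded, context))    # → 1.0 (fully supported)
print(groundedness_score(ungrounded, context))  # → 0.0 (nothing supported)
```

A low score flags output that drifts beyond its source material, which is the failure mode the groundedness metric is meant to surface.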
Key takeaways:
- San Francisco-based AI startup Galileo has introduced new monitoring and metrics capabilities to help users better understand and explain the output of large language models (LLMs).
- Galileo Studio now allows users to evaluate the prompts and context of all inputs, observe outputs in real time, and gain insight into why model outputs are being generated.
- The company has also introduced guardrail metrics, which limit what the model can generate in terms of information, tone, and language, and a groundedness metric to determine if a model’s output is within the bounds of the training data it was provided.
- These new features aim to help developers better adjust and fine-tune models for the best results, and to ensure that AI models do not generate inaccurate or inappropriate responses.