
GitHub - Giskard-AI/giskard: 🐢 The testing framework for ML models, from tabular to LLMs

Nov 09, 2023 - github.com
Giskard is a Python library designed to automatically detect vulnerabilities in AI models, including performance biases, data leakage, spurious correlation, hallucination, toxicity, security issues, and more. It helps data scientists identify model issues early, saving time and effort and producing more reliable, trustworthy models. The library works with any model and environment and integrates with popular ML tools.

The library can be installed from PyPI using pip and supports Python 3.9, 3.10, and 3.11. It allows users to scan AI models, generate test suites based on detected vulnerabilities, and display results directly in a notebook. Giskard also offers premium features through the Giskard hub, which includes advanced test generation, model comparison, a test hub, and business feedback. The hub can be started with a single command, and users can upload their test suites to it for further analysis and collaboration.

Key takeaways:

  • Giskard is a Python library that automatically detects vulnerabilities in AI models, including performance biases, data leakage, spurious correlation, hallucination, toxicity, security issues, and more.
  • It can be installed via PyPI and supports Python 3.9, 3.10, and 3.11. It also provides a Colab notebook for users to try out its features.
  • Giskard offers a premium service called Giskard hub, which provides advanced test generation, model comparison, a test hub for team collaboration, and the ability to share results and collect business feedback.
  • The Giskard community is open to contributions from the Machine Learning community and offers support through its Discord server. It also offers sponsorship opportunities for those who want to support the project.