A new Stanford index finds that, despite the relatively high transparency scores of some AI models, none of their creators disclose information about the models' societal impact, including where to direct privacy, copyright, or bias complaints. According to Rishi Bommasani of Stanford, the goal of the index is to give governments and companies a benchmark, one that could eventually underpin regulations requiring transparency reports from developers of large foundation models.
Key takeaways:
- A new report from Stanford HAI (Human-Centered Artificial Intelligence) finds that no major AI foundation model developer, OpenAI and Meta included, releases enough information about its models' potential societal impact.
- The Foundation Model Transparency Index, released by Stanford HAI, evaluates how much the creators of the 10 most popular AI models disclose about how those models are built and used. Meta's Llama 2 scored highest, followed by BLOOMZ and OpenAI's GPT-4, but none received a particularly high score.
- The index rates each model on 100 indicators covering how it is built, how it works, and how it is used. None of the models' creators disclosed any information about societal impact, including where to direct privacy, copyright, or bias complaints.
- Rishi Bommasani, society lead at the Stanford Center for Research on Foundation Models, says the goal of the index is to provide a benchmark for governments and companies. The group is open to expanding the index's scope but, for now, will stick to the 10 foundation models it has already evaluated.