The researchers argue that as businesses increasingly incorporate foundation models into their operations, understanding the models' limitations and biases has become essential. The report also highlights a decline in transparency over the last three years, driven by competitive pressures and fears of AI misuse. The authors hope the index will encourage companies to improve their transparency and serve as a resource for governments considering potential regulation of the rapidly growing AI field.
Key takeaways:
- Stanford University researchers have released a report, "The Foundation Model Transparency Index," which found that major AI models, including those created by OpenAI, Google, Meta, and others, fall far short on transparency.
- The report graded 10 popular foundation models on 100 indicators, covering areas such as training data, labor practices, and compute usage. The highest score was 54 out of 100, achieved by Meta's Llama 2 language model, while Amazon's Titan model scored the lowest at 12 out of 100.
- Stanford associate professor Dr. Percy Liang noted that transparency in AI models has declined over the past three years even as their capabilities have significantly increased, attributing the trend to competitive pressures and fears of AI misuse.
- The authors of the Transparency Index hope that it will encourage companies to improve their transparency and serve as a resource for governments considering how to regulate the rapidly growing AI field.