Stanford Is Ranking Major A.I. Models on Transparency

Oct 18, 2023 - news.bensbites.co
Stanford researchers have developed a scoring system, the Foundation Model Transparency Index, to rate the transparency of large A.I. language models. The system evaluates each model on 100 criteria, including the sources of its training data, information about the hardware used, the labor involved in training, and other details. The most transparent model was Meta's LLaMa 2, scoring 54 percent, while OpenAI's GPT-4 and Google's PaLM 2 both scored 40 percent.

The researchers argue that transparency is crucial as A.I. models become more powerful and widely used. They reject the reasons A.I. firms commonly give for not disclosing more information, such as the risk of lawsuits, competitive pressure, and safety concerns. The researchers believe that users, researchers, and regulators need to understand how these models work, their limitations, and their potential dangers.

Key takeaways:

  • Stanford researchers have developed a scoring system, the Foundation Model Transparency Index, to rate the transparency of large A.I. language models.
  • The index evaluates each model on 100 criteria, including the sources of its training data, information about the hardware it used, the labor involved in training it, and other details.
  • The most transparent model of the 10 evaluated was Meta's LLaMa 2, with a score of 54 percent. OpenAI's GPT-4 and Google's PaLM 2 both received a score of 40 percent.
  • The researchers argue that transparency in A.I. models is crucial as they grow more powerful and are incorporated into daily life, allowing regulators, researchers, and users to better understand the systems and ask more informed questions.
