
AI Model Weight Providers Should Not Police Uses, No Matter How Awful They Are

Sep 01, 2023 - marble.onl
The article discusses censorship and use restrictions in the AI industry, particularly in model licensing. The author argues against the recent trend of "ethical" and commercially oriented licenses, which impose restrictions on how AI models can be used, citing examples such as the OpenRAIL++ license used by Stable Diffusion XL and the license attached to Facebook's LLaMA model. The author characterizes these restrictions as a form of vigilante censorship, since they impose the values of the companies releasing the models on users, and warns that they could pave the way for further restrictions in the future.

The author advocates for the use of existing laws and regulations to deal with inappropriate uses of AI, rather than allowing companies to impose their own restrictions. They argue that these restrictions are a burden on the AI ecosystem and call for stronger norms and more pressure against restrictive licenses. The author emphasizes that opposing corporate censorship does not mean agreeing with all uses of AI, but rather ensuring that restrictions are imposed democratically and not through vigilantism.

Key takeaways:

  • The author argues against the trend of AI companies imposing their own values and restrictions on the use of their models, likening it to internet censorship.
  • The author criticizes 'ethical' and commercially oriented licenses, arguing they are being used to enforce opinions and norms about use rather than focusing on legality.
  • These restrictions could lead to more censorship and complicate building AI solutions, since developers would need to weigh the morals of each model provider whose weights they rely on.
  • Inappropriate uses of AI should be handled through existing laws and regulations, not through restrictions that companies impose on their own.
