The author advocates for using existing laws and regulations to deal with inappropriate uses of AI, rather than allowing companies to impose their own restrictions. He argues that these restrictions burden the AI ecosystem and calls for stronger norms and greater pushback against restrictive licenses. He emphasizes that opposing corporate censorship does not mean endorsing every use of AI, but rather insisting that restrictions be imposed democratically rather than through vigilantism.
Key takeaways:
- The author argues against the trend of AI companies imposing their own values and restrictions on the use of their models, likening it to internet censorship.
- He criticizes 'ethical' and commercially oriented licenses, which he believes are being used to enforce opinions and norms about acceptable use rather than focusing on legality.
- He warns that these restrictions could lead to more censorship and complicate the process of building AI solutions, since developers would need to account for the moral stance of each model provider.
- He advocates for handling inappropriate uses of AI through existing laws and regulations, rather than through company-imposed restrictions.