The author argues that the role of government is to create trust in society, and that it should therefore regulate the organizations that control and use AI. This means AI transparency laws, safety regulations, and enforcement of AI trustworthiness. The author also proposes public AI models, built by the public for the public, as a counterbalance to corporate-owned AI, and concludes by emphasizing government's role in creating social trust and constraining the behavior of corporations and their AI systems.
Key takeaways:
- The author distinguishes two types of trust, interpersonal trust and social trust, and argues that we often confuse them, especially when it comes to artificial intelligence (AI).
- AI systems controlled by profit-maximizing corporations are likely to exploit our trust and manipulate us, because they are designed to appear as friends rather than as services.
- Government regulation is necessary to ensure the trustworthiness of AI, including transparency laws, safety regulations, and penalties for untrustworthy behavior.
- The author proposes "public AI models": systems built by academia, non-profit groups, or government that individuals can own and run, as a counterbalance to corporate-owned AI.