The author suggests that in 2025, people will demand more control over how AI is used, and one way to achieve this is through red teaming. This practice involves external experts testing a system to find where defenses can go wrong. While currently used by major AI companies, it is not yet widespread for public use. The author's nonprofit, Humane Intelligence, conducts red teaming exercises to test AI for discrimination and bias. The author also introduces the concept of an AI right to repair, where users can run diagnostics on an AI, report anomalies, and see when they are fixed. This could also involve third-party groups creating patches or fixes, or hiring an independent party to evaluate and customize an AI system. The author concludes that 2025 will be the year people demand their rights regarding AI use.
Key takeaways:
- There is a growing trend of people and organizations rejecting the unsolicited imposition of AI in their lives, with several high-profile lawsuits filed against tech companies for alleged copyright infringement and misuse of personal data.
- Public confidence in AI is declining, with a majority of people expressing more concern than excitement about the technology, leading to a demand for more control over how AI is used.
- Red teaming, a practice borrowed from the military and used in cybersecurity, is being increasingly used to test AI systems for compliance, discrimination, and bias, and is expected to become more widespread in 2025.
- The concept of an AI 'right to repair' is emerging, where users could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company, potentially leading to a shift in the power dynamic between AI companies and the public.