Navaroli highlights a pressing issue: AI companies are turning to synthetic data, information generated by AI itself, to train their systems, a practice she believes could lead to a feedback loop of bias and inaccuracy. She calls for a "People Pause" on AI, advocating for collective action to set meaningful boundaries on the use of AI technologies. Navaroli also emphasizes the importance of having diverse voices in the room when decisions about AI are made, and believes that journalism school provides a solid foundation for those who will write the rules for future iterations of AI.
Key takeaways:
- Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, known for her research and advocacy work in technology.
- Navaroli's work focuses on how early AI systems, such as facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms, replicated bias and created unintended consequences that harmed marginalized communities.
- According to Navaroli, one of the most pressing ethical issues facing new AI development is the use of synthetic data as training data, which could create a feedback loop of bias and inaccurate outputs (a toy simulation of this loop follows the list below).
- Navaroli believes the most responsible way to build AI is to have diverse voices in the room making decisions, to create ethical guidelines and regulations, and to establish external oversight in the form of a new agency that regulates American technology companies.
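
To make the feedback-loop concern concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not drawn from Navaroli's own work: a toy "model" (a Gaussian fit) is trained on data, generates synthetic samples, and each new generation trains only on the previous generation's output. Small estimation errors compound, so the learned distribution drifts away from the real one.

```python
import numpy as np

# Hypothetical illustration of training on synthetic data (not from the
# source interview): each "generation" fits a simple model to its
# training set, then produces synthetic samples that become the next
# generation's training set. With no fresh real data, estimation errors
# compound and the learned distribution drifts.

rng = np.random.default_rng(seed=0)

real_data = rng.normal(loc=0.0, scale=1.0, size=200)  # stand-in for real-world data
data = real_data

for generation in range(10):
    mu, sigma = data.mean(), data.std()  # "train" the toy model
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on this model's synthetic output.
    data = rng.normal(loc=mu, scale=sigma, size=200)
```

Each run drifts differently, but the pattern is the same: once real data leaves the pipeline, errors have nothing to correct against, which is the core of the feedback-loop worry.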