Women in AI: Anika Collier Navaroli is working to shift the power imbalance | TechCrunch

Jun 23, 2024 - techcrunch.com
Anika Collier Navaroli, a senior fellow at the Tow Center for Digital Journalism at Columbia University, discusses her journey in the AI field and the challenges she faced as a Black queer woman. Navaroli's work has focused on the intersection of technology, civil rights, and fairness, examining how early AI systems were replicating bias and creating unintended consequences for marginalized communities. She is particularly proud of her work in policy within tech companies to shift the balance of power and correct bias within culture and knowledge-producing algorithmic systems.

Navaroli highlights the pressing issue of AI companies turning to synthetic data, or information generated by AI itself, to train their systems, which she believes could lead to a feedback loop of bias and inaccuracies. She calls for a "People Pause" on AI, advocating for collective action to create meaningful boundaries for the use of AI technologies. Navaroli also emphasizes the importance of having diverse voices in the room when making decisions about AI and believes that journalism school provides a solid foundation for those who will be responsible for writing the rules for future iterations of AI.

Key takeaways:

  • Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, known for her research and advocacy work within technology.
  • Navaroli's work focuses on how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and creating unintended consequences that impacted marginalized communities.
  • One of the most pressing ethical issues facing new AI development, according to Navaroli, is the use of synthetic data as training data, which could potentially lead to a feedback loop of bias and inaccurate outputs.
  • Navaroli believes the best way to build AI responsibly is to have diverse voices in the room making decisions, to create ethical guidelines, and to establish external oversight in the form of a new agency charged with regulating American technology companies.