The women have been advocating for regulation of and transparency in AI, and have founded their own organizations to investigate the technology's impact on marginalized communities. They argue that the existential risks invoked by so-called "AI Doomers" are less pressing than the immediate harms these technologies are already causing. They call for a more diverse range of voices in the development and regulation of AI, and warn against overreliance on technical systems.
Key takeaways:
- AI researcher Timnit Gebru, who previously co-led Google's Ethical AI group, has raised concerns about the lack of diversity in the AI field and the potential for AI systems to reinforce societal prejudices.
- Gebru and her colleagues found that large language models (LLMs), which are trained on text scraped from sites like Wikipedia, Twitter, and Reddit, can reflect back the biases embedded in those skewed sources (see the sketch after this list).
- Other researchers, including Joy Buolamwini, Safiya Noble, Rumman Chowdhury, and Seeta Peña Gangadharan, have also highlighted the potential harms of AI, particularly for marginalized communities.
- These researchers are calling for greater transparency and regulation in the AI field, and are working on initiatives to investigate and mitigate the harms of current AI systems.
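To make the second takeaway concrete, here is a minimal sketch of how training-data bias can be surfaced in a masked language model. The model choice and prompt templates are illustrative assumptions, not the methodology Gebru and her colleagues published; the point is only that a model trained on web text ranks completions according to the patterns in that text.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library is
# installed. The model and prompts are illustrative examples, not the
# procedure from Gebru et al.'s work; this is just a common way to probe
# a masked language model for associations absorbed from its training data.
from transformers import pipeline

# bert-base-uncased was trained on web-derived text, so its fill-mask
# predictions mirror associations present in that corpus.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer"]:
    prompt = f"The {occupation} said that [MASK] was late for the shift."
    print(occupation)
    # top_k=3 returns the three most probable tokens for the [MASK] slot
    for result in fill_mask(prompt, top_k=3):
        print(f"  {result['token_str']!r}  p={result['score']:.3f}")
```

In runs of this kind, pronoun probabilities typically skew toward "she" for "nurse" and "he" for "engineer", which is the sort of reflected bias the takeaway describes; the exact numbers depend on the checkpoint and prompt wording.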