In other AI news, the article mentions predictions for AI in 2024, Microsoft Copilot's new music creation feature, and a lawsuit against Google by news publishers. It also discusses new research, including a Danish study that uses life-event data to predict aspects of a person's life, including mortality; a system from CMU scientists that automates lab work; and Google's AI research, including FunSearch and generative imagery models. The article concludes by cautioning against the propagation of racial biases in AI models used in health and medicine.
Key takeaways:
- The LAION dataset, used to train many AI image generators, was found to contain thousands of images of suspected child sexual abuse. The nonprofit behind it has since taken the dataset offline and pledged to remove the offending material before republishing it.
- There is growing concern about the lack of ethical consideration in AI product development, especially given the proliferation of no-code AI model creation tools.
- Several AI applications have been criticized on these grounds, including Bing Chat, ChatGPT, Bard, and DALL-E, which have been found to give outdated, race-based medical advice or to exhibit Anglocentric bias.
- AI research continues to advance, with developments such as life2vec, a Danish study that uses sequences of life events to predict a person's personality traits and risk of early death, and Coscientist, a Carnegie Mellon system that can autonomously perform lab tasks in certain domains of chemistry.