The article also highlights the common defense that these behaviors are not bugs but simply a reflection of the most common patterns in the training data. The author counters with a hypothetical mechanic's chatbot that converts every appointment request into the most common service type, arguing that such behavior would be recognized as a bug in any other context. The author concludes that biases have always been encoded into automation, and that the growing use of ML and AI will only amplify them.
Key takeaways:
- The article discusses the issue of bias in machine learning (ML) and artificial intelligence (AI), particularly in large language models (LLMs) and generative AI.
- It highlights how these biases can produce outputs that are the opposite of what the user asked for, such as changing the ethnicity of a person in a photo, and how such failures often go unrecognized as bugs.
- The author argues that these biases are not new, having been encoded into automation for as long as it has existed, and that the increased use of ML and AI simply expands their scope and scale.
- It also points out that developers and users alike often defend or dismiss these biases, and that greater awareness and concrete action are needed to address them.