The author also raises concerns about political bias in chatbots, arguing that the issue lies not only in the models themselves but also in how users interact with them. Chatbots could narrow the range of acceptable speech or subtly nudge users toward a particular political view. The author calls for more transparency from generative AI companies and a more nuanced understanding of chatbot bias, which can exist at three distinct levels: word associations, expressed "opinions", and actual behavior in regular usage.
Key takeaways:
- The paper claiming that ChatGPT expresses liberal opinions was found to be flawed: it tested an older model that does not power ChatGPT, and it used an artificially constrained prompt that forced the model to pick a side.
- When asked the same political questions without that constraint, GPT-4 refused to opine in 84% of cases and responded directly in only 8%, while GPT-3.5 refused in 53% of cases and responded directly in 39%.
- Political bias in chatbots is a real concern nonetheless, with potential harms including narrowing the Overton window of acceptable speech and subtly nudging users toward a particular political worldview.
- The relationship between a chatbot's behavior and the bias in its training data is complex, and can be analyzed at three levels: implicit bias (word associations), expressed opinions, and actual behavior in regular usage.
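The refusal-rate comparison above depends on how each response is bucketed (refusal, direct opinion, or something else). The sketch below illustrates the general idea with a toy keyword-based classifier; the example responses and the refusal markers are hypothetical assumptions for illustration, not the methodology or data of any actual evaluation.

```python
from collections import Counter

# Hypothetical model responses to opinion questions (illustrative only).
RESPONSES = [
    "As an AI, I don't have personal opinions on this topic.",
    "I can't take a side, but here are arguments from both perspectives.",
    "Strongly agree.",
    "I don't have opinions, but many experts argue both ways.",
    "Disagree.",
]

# Assumed refusal phrases; a real study would need a far more robust classifier.
REFUSAL_MARKERS = (
    "don't have personal opinions",
    "can't take a side",
    "don't have opinions",
)


def categorize(response: str) -> str:
    """Bucket a response as a refusal, a direct opinion, or other."""
    lowered = response.lower()
    if any(marker in lowered for marker in REFUSAL_MARKERS):
        return "refusal"
    # Treat answers that open with an agree/disagree verdict as direct opinions.
    if lowered.split()[0].rstrip(".!") in {"agree", "disagree", "strongly"}:
        return "direct"
    return "other"


counts = Counter(categorize(r) for r in RESPONSES)
total = len(RESPONSES)
for label, n in counts.most_common():
    print(f"{label}: {n}/{total} ({100 * n / total:.0f}%)")
```

The point of the sketch is that headline numbers like "refused in 84% of cases" are downstream of classification choices like these, which is one reason transparency about evaluation methodology matters.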