The study's findings have raised concerns about the real-world harms these systems could cause and their potential to amplify long-standing forms of medical racism. While some question the study's utility, arguing that medical professionals are unlikely to seek a chatbot's help for such specific questions, the researchers counter that physicians are increasingly experimenting with commercial language models in their work. Both OpenAI and Google have responded to the study, stating that they are working to reduce bias in their models and reminding users that chatbots are not a substitute for medical professionals.
Key takeaways:
- A study led by Stanford School of Medicine researchers found that popular AI chatbots, including ChatGPT and Google’s Bard, are perpetuating racist and debunked medical ideas, potentially worsening health disparities for Black patients.
- The chatbots, trained on text from the internet, responded to researchers' questions with misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations.
- Experts worry these systems could amplify forms of medical racism that have persisted for generations, as more physicians use chatbots for help with daily tasks.
- Both OpenAI and Google responded to the study by stating that they are working to reduce bias in their models and by reminding users that chatbots are not a substitute for medical professionals.