The study also revealed that doctors often stick to their initial diagnosis, even when the chatbot suggests a better one. It highlighted that while doctors are being introduced to AI tools, few know how to use them fully, missing out on the tools' ability to work through complex diagnostic problems and explain the reasoning behind their answers. Dr. Rodman suggests that AI systems should be used as "doctor extenders," offering valuable second opinions on diagnoses.
Key takeaways:
- A study designed by Dr. Adam Rodman tested 50 licensed physicians to see if ChatGPT improved their diagnoses. The results showed that ChatGPT alone outperformed the doctors, scoring an average of 90 percent in diagnosing a medical condition from a case report and explaining its reasoning.
- Doctors who used ChatGPT along with conventional resources did only slightly better than those who did not have access to the bot, scoring an average of 76 percent compared to 74 percent.
- The study also revealed that doctors often stick to their initial diagnosis, even when a chatbot suggests a potentially better one. It found as well that while doctors are being exposed to AI tools, few know how to use them fully.
- Dr. Rodman suggests that AI systems should be used as "doctor extenders," providing valuable second opinions on diagnoses. The study concludes that access to these tools alone will not improve physicians' diagnostic reasoning in practice.