Hollingsworth found that direct interaction with the chatbot can lead to misinformation and confusion, since the AI can make mistakes. When an SME is involved, however, they can correct any inaccuracies while still using the AI to draft responses faster. He concludes that although AI can save time in drafting responses to patient questions, it should not be trusted to interface with patients directly, a principle he suggests should apply to all AI applications in healthcare.
Key takeaways:
- Matt Hollingsworth, CEO and Co-founder of Carta Healthcare, believes that AI should assist people, not replace them, especially in healthcare.
- AI generates real value when it augments a subject matter expert (SME) rather than replaces one, as demonstrated by two contrasting scenarios involving a ChatGPT-powered chatbot.
- Direct patient interaction with a ChatGPT-powered chatbot can lead to misinformation and confusion, as AI can make mistakes and 'hallucinate' responses.
- Keeping a human SME in the loop between patient questions and responses, even with the use of AI, is currently the best practice for deploying AI in healthcare.