The article suggests that healthcare organizations should look beyond these basic principles when developing an AI Governance Program and strategy. It emphasizes the importance of assessing and mitigating risks associated with AI use, such as bias, unauthorized disclosure of personal information, and medical malpractice. The article also highlights the need for healthcare organizations to include AI development and use in their enterprise-wide compliance programs, given the heavily regulated nature of the industry.
Key takeaways:
- The AI Executive Order issued by President Biden has four key areas of impact on the healthcare industry: the creation of an HHS AI Task Force, the incorporation of equity principles, the integration of safety, privacy, and security standards, and the requirement of human oversight in AI technologies.
- Healthcare organizations should look beyond these basic principles when developing an AI Governance Program and strategy, considering guidance and regulations from various entities such as the World Health Organization, the American Medical Association, and the Food & Drug Administration.
- The risks of using AI in the healthcare industry include bias, unauthorized disclosure of personal information or protected health information, unauthorized disclosure of intellectual property, unreliable or inaccurate output, unauthorized practice of medicine, and medical malpractice.
- Developing an AI Governance Program can help mitigate these risks and make it easier to keep pace with rapidly changing regulations and guidance issued by both state and federal regulators.