Despite the potential risks, the author argues that AI is here to stay and that its use must be regulated. The goal is a more efficient process for delivering essential therapies and improving patient outcomes while safeguarding the security and privacy of patient data. The U.S. Food and Drug Administration (FDA) has begun issuing guidance on the use of AI in healthcare and clinical research, though the full potential and associated risks of AI in drug development remain under scrutiny.
Key takeaways:
- Artificial Intelligence (AI) and machine learning (ML) can enhance clinical operations and outcomes in clinical trials (CTs), including site identification, patient recruitment, and monitoring patient protocol compliance.
- AI technology can also improve patient safety by predicting the likelihood of adverse events (AEs) and assisting in their early detection.
- The primary risks of AI in CTs are inadvertent exposure of patient data and bias in data generation; both can be mitigated with measures such as differential privacy and homomorphic encryption.
- Despite its potential benefits, there is a need to regulate the use of AI in clinical research to safeguard the security and privacy of patient data.
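To make the privacy-preserving measures above concrete, here is a minimal sketch of differential privacy applied to a trial statistic. The scenario, the record layout, and the function names (`dp_count`, `laplace_noise`) are illustrative assumptions, not taken from the source: a counting query (e.g. how many patients experienced an AE) has sensitivity 1, so adding Laplace noise with scale 1/ε yields an ε-differentially private release.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    patient changes the count by at most 1), so Laplace noise with
    scale 1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical trial records: (patient_id, had_adverse_event)
records = [(i, i % 7 == 0) for i in range(200)]
noisy_ae_count = dp_count(records, lambda r: r[1], epsilon=0.5)
```

Smaller ε means stronger privacy but noisier answers; homomorphic encryption, by contrast, would let a third party compute such statistics on encrypted records without ever decrypting them.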