The author calls for mandatory gene synthesis screening and AI-specific interventions to mitigate these risks, suggesting that pre-release evaluations of AI models could prevent the release of models with dangerous capabilities. He also proposes differentiated access methods for AI tools, which would require scientists to authenticate themselves online before accessing certain capabilities. The author concludes that while AI poses biosecurity risks, it also presents an opportunity to strengthen biosecurity measures and mitigate a wider array of AI risks.
Key takeaways:
- Artificial intelligence tools such as ChatGPT could be used by ill-intentioned actors to gain knowledge about producing biological weapons, increasing the risk of bioterrorism.
- Large language models (LLMs) like ChatGPT and AI-powered biological design tools may significantly increase the accessibility of biological weapons by providing detailed instructions on how to create and modify biological agents.
- Biological design tools (BDTs) such as protein folding models and protein design tools could allow the design of biological agents with unprecedented properties, potentially turning pandemics into existential threats.
- To mitigate the risks emerging at the intersection of AI and biology, the author suggests implementing universal gene synthesis screening, advancing risk mitigation approaches specific to new AI systems, and considering differentiated access methods for AI-powered lab assistants and biological design tools.