However, OpenAI warned that future AI models could aid "malicious actors" in creating bioweapons. The report responds to concerns raised by experts and industry figures that AI could be misused to facilitate biological terror attacks. OpenAI added that it is continuing research on the issue and called for community deliberation.
Key takeaways:
- OpenAI's new report suggests that its GPT-4 model could provide a mild uplift in the ability to create biological weapons, but warns that future models could be more helpful to malicious actors.
- Experts have previously warned that AI could facilitate biological terror attacks, for example through the use of large language models to help plan them.
- A study by OpenAI's Preparedness team found that while access to GPT-4 did increase the accuracy and detail of answers to questions about bioweapon creation, the increase was not large enough to be statistically significant, and so does not indicate a real increase in risk.
- OpenAI cautioned that, given the current pace of AI innovation, future versions of ChatGPT could provide sizable benefits to malicious actors.