ChatGPT probably won't help create biological weapons, OpenAI says

Feb 01, 2024 - businessinsider.com
OpenAI has released a report stating that its GPT-4 model could mildly enhance the ability to create biological weapons, but that the increase is not significant. The study involved 50 biology experts and 50 students; some had access to GPT-4 while others had only the internet, and all were asked questions related to bioweapon creation. The GPT-4 group showed a slight improvement in accuracy and detail, but the difference was not statistically significant.

However, OpenAI warned that future AI models could potentially aid "malicious actors" in creating bioweapons. The report was a response to concerns raised by experts and industry figures about the potential misuse of AI in facilitating biological terror attacks. OpenAI also mentioned that it is continuing research on this issue and called for community deliberation.

Key takeaways:

  • OpenAI's new report suggests that its GPT-4 model could provide a mild uplift in the ability to create biological weapons, but warns future models could be more helpful for malicious actors.
  • Experts have previously warned that AI could be used to facilitate biological terror attacks, with large language models potentially used to help plan such attacks.
  • A study by OpenAI's preparedness team found that while access to GPT-4 did improve accuracy and detail in answers to questions about bioweapon creation, the improvement was not statistically significant enough to indicate a real increase in risk.
  • OpenAI clarified that future versions of ChatGPT could potentially provide sizable benefits to malicious actors, given the current pace of AI innovation.
