
OpenAI Says ChatGPT Probably Won’t Make a Bioweapon

Feb 01, 2024 - gizmodo.com
OpenAI recently conducted a study on GPT-4's potential to aid in the creation of a bioweapon, concluding that the AI poses only a slight risk. The study was prompted by concerns raised in President Biden's AI Executive Order, which suggested that AI could significantly lower the barriers to creating biological weapons. The study involved 100 participants, including biology experts and students, who were tasked with devising a plan to create a bioweapon using either the internet alone or the internet plus GPT-4. The results showed a slight increase in accuracy and completeness when participants used GPT-4, but OpenAI emphasized that these increases were not statistically significant.

The company also noted that while GPT-4 can provide niche information, access to information alone is not sufficient to create a biological threat, and the study did not test how GPT-4 could assist in the physical construction of a weapon. OpenAI stressed the need for further research and community deliberation on this issue. Despite the slight increase in ease that GPT-4 provided, OpenAI maintains that the risk is minimal.

Key takeaways:

  • OpenAI conducted a study on GPT-4's potential to aid in creating a bioweapon, concluding that it poses only a slight risk.
  • The study was conducted in response to concerns raised in President Biden's AI Executive Order about AI lowering the barrier for creating biological weapons.
  • The study involved 100 participants, including biology experts and students, who were asked to create a plan for a bioweapon using either the internet alone or the internet plus GPT-4. The use of GPT-4 resulted in a slight increase in the accuracy and completeness of the plans.
  • OpenAI emphasizes that while GPT-4 can provide niche information, access to information alone is insufficient to create a biological threat, and more research is needed in this area.