
OpenAI's Superalignment Team Tackles the Challenge of Controlling Superintelligent AI

Dec 15, 2023 - techtimes.com
OpenAI's Superalignment team, led by co-founder Ilya Sutskever, is developing strategies for controlling and governing superintelligent AI systems. The team recently presented its latest work at the NeurIPS conference: a method that uses a weaker AI model to guide a more advanced model toward desired outcomes. Despite skepticism within the AI research community, the team believes this approach is crucial to achieving its alignment goals for superintelligent AI.

In a recent study, OpenAI trained its GPT-2 model to perform various tasks and then used GPT-2's responses as training data for GPT-4. The GPT-4 model trained this way performed 20-70% better than GPT-2, showing that a stronger model can surpass its weaker supervisor, though it still fell short of GPT-4's full potential. Despite the promising results, OpenAI cautions that further research is needed before humans can be deemed suitable supervisors for stronger AI models. The company has also introduced a $10 million grant program for technical research on superintelligent alignment and plans to host an academic conference on superalignment in 2025.
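The weak-to-strong setup described above can be illustrated with a minimal toy sketch. The Python example below is purely hypothetical: it uses small scikit-learn classifiers as stand-ins for the weak supervisor (the GPT-2 role) and the strong student (the GPT-4 role), and it is not OpenAI's actual training code. The "fraction of gap recovered" at the end is one way to quantify how much of the strong model's potential is reached when it learns only from the weak model's labels.

    # Hypothetical sketch of weak-to-strong supervision; not OpenAI's code.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Toy dataset standing in for the tasks used in the study.
    X, y = make_classification(n_samples=5000, n_features=20,
                               n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # "Weak supervisor": a small model trained on ground-truth labels.
    weak = LogisticRegression(max_iter=1000).fit(X_train[:1000], y_train[:1000])

    # The weak model labels the remaining pool; the strong student
    # never sees ground truth during its own training.
    weak_labels = weak.predict(X_train[1000:])

    # "Strong student": a larger model trained only on the weak labels.
    strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500,
                           random_state=0)
    strong.fit(X_train[1000:], weak_labels)

    # "Strong ceiling": the same architecture trained on ground truth,
    # used as a reference for the student's full potential.
    ceiling = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500,
                            random_state=0)
    ceiling.fit(X_train[1000:], y_train[1000:])

    weak_acc = weak.score(X_test, y_test)
    strong_acc = strong.score(X_test, y_test)
    ceiling_acc = ceiling.score(X_test, y_test)

    # How much of the weak-to-ceiling gap the weakly supervised student recovers.
    gap_recovered = (strong_acc - weak_acc) / (ceiling_acc - weak_acc)
    print(f"weak={weak_acc:.3f} strong(weak labels)={strong_acc:.3f} "
          f"ceiling={ceiling_acc:.3f}")
    print(f"fraction of gap recovered: {gap_recovered:.2f}")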

Key takeaways:

  • OpenAI's Superalignment team, led by co-founder Ilya Sutskever, is working on strategies to control and regulate superintelligent AI systems. The team recently presented its latest work at the NeurIPS conference.
  • The team's research involves using a weaker AI model to guide a more advanced model, with the aim of achieving alignment goals for superintelligent AI. A recent study showed a performance improvement of 20-70% when the GPT-4 model was trained using responses generated by the GPT-2 model.
  • OpenAI has introduced a $10 million grant program for technical research on superintelligent alignment, and plans to host an academic conference on the topic in 2025. The grant program includes funding from former Google CEO Eric Schmidt.
  • Pope Francis has warned world leaders about the risks posed by the rapid development of AI, emphasizing the need to direct research towards peace and the common good, and to scrutinize the aims and interests of AI developers.