OpenAI, known for developing and publicly releasing experimental AI projects, recently showcased a new version of ChatGPT powered by a "multimodal" AI model that allows more natural, humanlike interaction. While there is no indication that the recent departures from the company are linked to these developments, the new capabilities raise ethical questions around privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains a separate research group, the Preparedness team, to focus on these issues.
Key takeaways:
- OpenAI's "superalignment team," which was formed to prepare for the advent of supersmart artificial intelligence, has been dissolved following the departure of several researchers and the team's co-leads, Ilya Sutskever and Jan Leike.
- The work of the superalignment team will be absorbed into OpenAI’s other research efforts. The dissolution of the team adds to recent evidence of a shakeout inside the company following last November’s governance crisis.
- OpenAI has showcased a new version of ChatGPT that could change people's relationship with AI in powerful and potentially problematic new ways. The new "multimodal" AI model behind it, GPT-4o, allows ChatGPT to perceive the world and converse in a more natural and humanlike way.
- There is no indication that the recent departures are connected to OpenAI's efforts to develop more humanlike AI or to ship products. OpenAI maintains another research group, the Preparedness team, which focuses on issues such as privacy, emotional manipulation, and cybersecurity risks.