The new organization will be led by Anca Dragan, a former Waymo staff research scientist and a UC Berkeley professor of computer science. Despite concerns about the potential misuse of GenAI tools, particularly in relation to deepfakes and misinformation, Dragan insists that her work at UC Berkeley and at DeepMind is interrelated and complementary. She acknowledges the challenges of AI safety but has committed to investing more resources in the area and to developing a framework for evaluating the safety risks of GenAI models.
Key takeaways:
- Google's AI R&D division, DeepMind, has announced the formation of a new organization, AI Safety and Alignment, to focus on AI safety and prevent misuse of AI tools for disinformation.
- The new organization will include a team focused on safety around artificial general intelligence (AGI), similar to the Superalignment division formed by rival OpenAI.
- Anca Dragan, a former Waymo staff research scientist and a UC Berkeley professor of computer science, will lead the team. She insists that her work at UC Berkeley and at DeepMind is interrelated and complementary.
- Public skepticism of GenAI tools is high, driven by concerns about deepfakes and misinformation; surveys show that a significant share of Americans and enterprise executives worry about the misuse of AI tools.