
Google DeepMind forms a new org focused on AI safety | TechCrunch

Feb 21, 2024 - news.bensbites.co
Google's GenAI model, Gemini, has drawn criticism for its ability to generate deceptive content and disinformation. In response, Google is investing in AI safety: its DeepMind division has announced a new organization, AI Safety and Alignment, which includes a team focused on safety around artificial general intelligence (AGI). The organization will develop safeguards for Google's Gemini models, concentrating on preventing bad medical advice, ensuring child safety, and preventing the amplification of bias and other injustices.

The new organization will be led by Anca Dragan, a former Waymo staff research scientist and a UC Berkeley professor of computer science. Despite public concern about the potential misuse of GenAI tools, particularly for deepfakes and misinformation, Dragan maintains that her work at UC Berkeley and her work at DeepMind are interrelated and complementary. She acknowledges the challenges of AI safety but says DeepMind will invest more resources in the area and develop a framework for evaluating GenAI model safety risks.

Key takeaways:

  • Google's AI R&D division, DeepMind, has announced the formation of a new organization, AI Safety and Alignment, to focus on AI safety and prevent misuse of AI tools for disinformation.
  • The new organization will include a team focused on safety around artificial general intelligence (AGI), similar to the Superalignment division formed by rival OpenAI.
  • Anca Dragan, a former Waymo staff research scientist and a UC Berkeley professor of computer science, will lead the team. She maintains that her work at UC Berkeley and her work at DeepMind are interrelated and complementary.
  • Public skepticism of GenAI tools is high, with concerns about deepfakes and misinformation. Surveys show that a significant percentage of Americans and enterprise executives are concerned about the misuse of AI tools.