
This Week in AI: OpenAI moves away from safety | TechCrunch

May 18, 2024 - techcrunch.com
OpenAI has been in the news recently both for a major product launch and for disbanding a team that was developing controls for "superintelligent" AI systems. The company unveiled GPT-4o, its most capable generative model yet, and shortly afterward disbanded the safety team, prompting the resignation of the team's two co-leads. Reports suggest that OpenAI deprioritized the team's safety research in favor of launching new products, drawing criticism that the company's leadership, particularly CEO Sam Altman, is prioritizing products over safeguards.

In other AI news: OpenAI reached an agreement with Reddit to use the social site's data for AI model training; Google debuted a host of AI products at its annual I/O developer conference; Instagram co-founder Mike Krieger joined Anthropic as the company's first chief product officer; and Anthropic announced it would begin allowing developers to create kid-focused apps and tools built on its AI models. AI startup Runway held its second-ever AI film festival, and Google DeepMind is working on a new "Frontier Safety Framework" to identify and prevent runaway capabilities in AI models.

Key takeaways:

  • OpenAI has been in the news for unveiling its most capable generative model yet, GPT-4o, and for disbanding a team working on developing controls to prevent "superintelligent" AI systems from going rogue.
  • OpenAI's leadership, particularly CEO Sam Altman, has been criticized for prioritizing product launches over safety measures, leading to the resignation of key team members.
  • Google DeepMind is working on a "Frontier Safety Framework" to identify and prevent potentially harmful capabilities in AI models.
  • Researchers at Cambridge University have highlighted the ethical concerns around AI chatbots that are trained on a deceased person's data to provide a superficial representation of that person.
