
Feds appoint “AI doomer” to run AI safety at US institute

Apr 18, 2024 - arstechnica.com
The US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has appointed Paul Christiano, a former OpenAI researcher, as head of AI safety. Christiano is known for his work on reinforcement learning from human feedback (RLHF) and for predicting a 50% chance of AI development ending in "doom". His appointment has sparked controversy, with critics arguing that his "doomer" views could encourage non-scientific thinking and compromise the institute's objectivity and integrity. There have been rumors of NIST staffers opposing his appointment, with some allegedly threatening to resign.

Christiano's role will involve monitoring for current and potential risks, designing and conducting tests of frontier AI models, and implementing risk mitigations. Despite the controversy, some believe Christiano is well suited to the role given his research background and experience in mitigating AI risks. The leadership team will also include Mara Quintero Campbell, Adam Russell, Rob Reich, and Mark Latonero. Critics of the "AI doomer" discourse warn that focusing on hypothetical AI risks may distract from current AI-related issues such as environmental impact, privacy, ethics, and bias.

Key takeaways:

  • Paul Christiano, a former OpenAI researcher known for predicting a 50% chance of AI development ending in "doom", has been appointed head of AI safety at the US AI Safety Institute, part of the National Institute of Standards and Technology (NIST).
  • Christiano's appointment has sparked controversy, with critics fearing that his "AI doomer" views may encourage non-scientific thinking and speculation. There have been rumors of opposition and potential resignations among NIST staff.
  • In his role, Christiano will monitor for current and potential AI risks, design and conduct tests of frontier AI models, and implement risk mitigations to enhance AI safety and security.
  • The leadership team of the safety institute will also include Mara Quintero Campbell, Adam Russell, Rob Reich, and Mark Latonero, all experts in their respective fields.