However, there have been disagreements within OpenAI about its commitment to safety. Ilya Sutskever and Jan Leike, who led the team dedicated to ensuring AI doesn't go rogue, left the company over these disagreements. Despite the internal conflicts and the lack of significant progress on AI safety legislation, Altman remains optimistic about AI's potential to understand and address societal challenges.
Key takeaways:
- OpenAI CEO Sam Altman believes that AI systems can be built to prevent harm to humanity, saying that AI as currently designed is well suited to alignment.
- Altman suggests using AI to poll the public about its values, then using those answers to determine how to align an AI to protect humanity (a minimal sketch of this idea follows the list).
- OpenAI has an internal team dedicated to superalignment, tasked with ensuring that future digital superintelligence doesn’t go rogue and cause untold harm.
- There have been disagreements within OpenAI about its commitment to safety as the company works toward artificial general intelligence, leading to the departure of key team members.
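To make the polling idea concrete, here is a minimal sketch, assuming a hypothetical survey format in which each respondent answers a fixed set of value questions. It simply tallies the majority answer per question, which is one crude way poll results could feed into an alignment specification; the function name, questions, and data are all invented for illustration and do not reflect OpenAI's actual approach.

```python
from collections import Counter

def aggregate_values(responses: dict[str, list[str]]) -> dict[str, str]:
    """For each polled question, return the answer most respondents chose.

    Hypothetical helper: a stand-in for whatever aggregation an AI-run
    public poll on values might actually use.
    """
    return {
        question: Counter(answers).most_common(1)[0][0]
        for question, answers in responses.items()
    }

if __name__ == "__main__":
    # Hypothetical poll: three respondents answer two value questions.
    poll = {
        "Should an AI refuse requests that could cause physical harm?":
            ["yes", "yes", "no"],
        "Should an AI defer to local law when values conflict?":
            ["yes", "no", "yes"],
    }
    for question, consensus in aggregate_values(poll).items():
        print(f"{question} -> consensus: {consensus}")
```

A simple majority vote is the weakest possible aggregation rule; any real attempt to distill public values into alignment targets would have to handle sampling bias, conflicting answers, and minority protections, none of which this toy example addresses.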