The speaker agrees that AI should be regulated but criticizes the direction of current regulatory efforts. In their view, no regulation is not the right answer, yet the current regulatory direction could end up worse than none; thoughtful regulation, by contrast, would be preferable to no regulation at all. They also emphasize the need for transparency from technology companies to prevent future AI disasters, citing past harms such as self-driving car accidents and a stock market crash caused by an automated trading algorithm.
Key takeaways:
- OpenAI CEO Sam Altman and other industry leaders have called for making the mitigation of extinction risk from AI a global priority, and some have called for a six-month moratorium on training powerful AI models.
- There are concerns that large companies may find it convenient not to have to compete with open-source large language models, which could damage the open-source community.
- Professor Ng believes that while AI should be regulated, the current direction of regulation in many countries is not ideal. However, thoughtful regulation would be preferable to no regulation.
- Ng emphasizes transparency from technology companies as a key part of 'good' regulation, to help prevent future AI disasters caused by big tech.