Google Brain co-founder says he tried to get ChatGPT to 'kill us all' but is 'happy to report' that he failed to trigger a doomsday scenario

Dec 20, 2023 - businessinsider.com
Google Brain co-founder and Stanford professor Andrew Ng recently conducted a test to see if he could get the AI model GPT-4 to devise a plan to exterminate humanity. Despite numerous attempts, Ng reported that he was unable to trick the model into doing so; instead, it suggested alternatives such as running a PR campaign to raise awareness of climate change. Ng believes that fears of AI becoming dangerous are unrealistic, arguing that an AI smart enough to wipe out humanity would also be smart enough to know that it shouldn't.

Other tech leaders have also shared their views on the risks and benefits of AI. Elon Musk believes AI poses an existential threat to humanity, while Jeff Bezos thinks the benefits of AI outweigh its dangers. Ng's representatives have not yet responded to a request for comment from Business Insider.

Key takeaways:

  • Google Brain co-founder Andrew Ng tested the safety of AI models by trying to get GPT-4 to trigger a global thermonuclear war, but failed.
  • Ng believes that fears of advanced AI becoming 'misaligned' and deciding to wipe out humanity are not realistic, arguing that AI systems are quite safe and will become even safer with further research.
  • Other tech leaders like Elon Musk and Jeff Bezos have also shared their views on AI, with Musk considering it an existential threat and Bezos believing its benefits outweigh its dangers.
  • Ng's experiment and views were shared in a newsletter and a longer article about the risks and dangers of AI, where he expressed concern that demand for AI safety might cause regulators to impede the technology's development.
