
Meet the Pranksters Behind Goody-2, the World’s ‘Most Responsible’ AI Chatbot

Feb 11, 2024 - wired.com
The article discusses a new chatbot called Goody-2, which takes AI safety to an extreme by refusing every request, citing potential harm or ethical breaches. The chatbot, created by Mike Lacher and Brian Moore of Brain, a Los Angeles-based art studio, is designed to highlight the challenge of balancing safety and usefulness in AI, with Lacher noting the difficulty of defining what responsibility means in this context.

However, the article also points out that serious safety issues with large language models and generative AI systems remain unresolved, citing the recent spread of Taylor Swift deepfakes on X (formerly Twitter) as an example. It also mentions the ongoing debate about perceived political bias in AI systems, with some developers alleging that OpenAI's ChatGPT has a left-leaning bias. The creators of Goody-2 are now exploring an extremely safe AI image generator that prioritizes caution above all else.

Key takeaways:

  • A new chatbot called Goody-2 has been developed that prioritizes AI safety by refusing every request, explaining how it might cause harm or breach ethical boundaries.
  • Goody-2's creators, Mike Lacher and Brian Moore, aim to highlight the challenges of balancing safety and usefulness in AI, questioning who decides what responsibility is and how it works.
  • The chatbot also underscores the ongoing safety issues with large language models and generative AI systems, despite increased corporate emphasis on responsible AI.
  • While Goody-2's refusal to fulfill most requests makes it difficult to assess its power or compare it to other models, its creators maintain that revealing such information would be "unsafe and unethical".
