However, the article also points out that serious safety issues with large language models and generative AI systems remain unresolved, citing the recent wave of explicit Taylor Swift deepfakes on X (formerly Twitter) as an example. It also notes the ongoing debate over perceived political bias in AI systems, with some developers and critics alleging that OpenAI's ChatGPT has a left-leaning bias. Meanwhile, the creators of Goody-2 are exploring ways to build an extremely safe AI image generator, one that prioritizes caution above all else.
Key takeaways:
- A new chatbot called Goody-2 prioritizes AI safety above all else, refusing every request and explaining how fulfilling it might cause harm or breach ethical boundaries.
- Goody-2's creators, Mike Lacher and Brian Moore, aim to highlight the challenge of balancing safety with usefulness in AI, raising the question of who decides what responsibility means and how it should work.
- The chatbot also underscores that safety issues with large language models and generative AI systems persist, despite growing corporate emphasis on responsible AI.
- While Goody-2's refusal to fulfill virtually any request makes it hard to gauge its capabilities or compare it with other models, its creators maintain that revealing such information would be "unsafe and unethical".