
AI Chatbot Joins Mushroom Hunters Group, Immediately Encourages Them to Cook Dangerous Mushroom

Nov 16, 2024 - gizmodo.com
An AI agent known as "FungiFriend" appeared in a Facebook group dedicated to mushroom foraging and gave dangerous advice on how to cook a poisonous mushroom. The incident was reported by 404 Media and highlighted by Rick Claypool, research director for the consumer safety group Public Citizen, who warned about the risks of relying on AI to distinguish edible mushrooms from poisonous ones. The AI, which Facebook itself had prompted members to add to the group chat, incorrectly identified the toxic Sarcosphaera coronaria mushroom as "edible but rare" and suggested cooking methods.

This incident is one of several where AI has given harmful or nonsensical advice, including recommending recipes involving mosquito repellent and chlorine gas, and suggesting eating rocks. Despite these issues, companies continue to integrate AI into customer service applications, prioritizing automation over accuracy. The article suggests that cooking may be a domain that does not require AI integration, as AI platforms often lack understanding of the subject matter.

Key takeaways:

  • An AI agent named FungiFriend gave dangerous advice on a Facebook group about mushroom foraging, suggesting cooking methods for a poisonous mushroom.
  • The incident was reported by 404 Media and brought to attention by Rick Claypool, the research director for the consumer safety group Public Citizen.
  • This is not the first time AI has given harmful advice; previous incidents include an AI suggesting recipes involving mosquito repellent and chlorine gas, and another encouraging users to eat rocks.
  • Despite these incidents, companies continue to integrate AI into customer service applications, prioritizing cost-saving over the potential risk of providing incorrect information to the public.
