The study serves as a cautionary tale for workers using AI chatbots like ChatGPT, which are now applied across industries to tasks such as coding, creating marketing materials, and generating lesson plans. The researchers warn that while AI can be exceptionally good at assisting with certain tasks, humans should verify its outputs rather than accept them uncritically. The study also highlighted the broader problem of AI-generated errors: AI-written news sites and articles often contain factual mistakes, and the problem may worsen through "model collapse", the degradation that can occur when AI models are trained on content that other AI models produced (the toy sketch below illustrates the feedback loop).
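For rough intuition on why that feedback loop degrades models, here is a minimal, hypothetical sketch in standard-library Python. It is not the study's methodology or any real training pipeline: the "model" simply memorizes the token frequencies of its training data, then generates the next generation's training set by sampling from them. Any token that goes unsampled in one generation drops to zero probability and can never return, so diversity only shrinks.

```python
import random
from collections import Counter

random.seed(0)

VOCAB = list("abcdefghij")  # 10 hypothetical token types
N = 20                      # samples per generation; small so drift is visible

# Generation 0: "real" data, drawn uniformly over the vocabulary.
data = random.choices(VOCAB, k=N)

for gen in range(31):
    counts = Counter(data)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: {len(counts)} distinct tokens remain")
    # "Train" the next model: its whole distribution is the empirical
    # frequency of the previous generation's output. Tokens that were
    # never sampled get probability zero and cannot reappear.
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    data = random.choices(tokens, weights=weights, k=N)
```

Real models fail in richer ways, but this absorbing-state dynamic, where rare content in the tails of the distribution disappears first, is one commonly cited mechanism behind model collapse.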
Key takeaways:
- Workers using OpenAI's ChatGPT may perform worse than those who don't use it, because some take the chatbot's outputs at face value without checking them for errors, according to research from Boston Consulting Group.
- Researchers found that while AI can significantly boost productivity and quality of work for tasks within its capabilities, it can hinder performance for more open-ended tasks that require human judgement and access to information beyond the AI's reach.
- AI's outputs aren't perfect and can contain "hallucinations": plausible-sounding but false statements that have led to factual errors in AI-generated content, a problem already seen at some media outlets.
- As AI capabilities continue to expand, professionals need to understand the technology's limitations, and organizations must prepare for a new world of work that combines humans and AI.