The article also mentions a study by AI Forensics and AlgorithmWatch, which found that a third of Microsoft Copilot's answers contained factual errors, and highlights cases where AI systems have made costly mistakes due to inaccuracies or biases. The author concludes that while AI can be a useful tool, it is not yet reliable enough to replace human workers, and businesses that rely on it may face legal and financial risks.
Key takeaways:
- Businesses are being warned about the potential pitfalls of relying too heavily on AI, particularly in customer service roles, as inaccuracies and misrepresentations can lead to legal issues.
- Air Canada was taken to court after its AI chatbot incorrectly promised a customer a bereavement discount, which the company then refused to honor.
- A study by AI Forensics and AlgorithmWatch found that a third of Microsoft Copilot's answers contained factual errors, highlighting the legal exposure companies take on when they use AI as the front line of customer service.
- Bias in AI can also lead to legal issues, as demonstrated by iTutorGroup, which paid $365,000 to settle a lawsuit after its AI-powered recruiting software automatically rejected certain applicants based on age and gender.