AI-generated fake case law has become a growing concern in the legal field: over the past two years, courts across the U.S. have questioned or disciplined lawyers in at least seven cases for submitting fictitious citations. The trend presents a new challenge for litigants and judges as AI tools like ChatGPT become more prevalent. The Walmart case is particularly notable because it involves a major law firm and a large corporate defendant, underscoring the broader litigation risks AI poses in legal proceedings.
Key takeaways:
- Morgan & Morgan warned its lawyers that relying on AI-generated fake case law could lead to termination.
- A federal judge in Wyoming considered sanctioning two lawyers for using fictitious case citations in a lawsuit against Walmart.
- AI-generated false legal information has prompted courts to question or discipline lawyers in at least seven cases over the past two years.
- The issue highlights a new litigation risk as AI tools like ChatGPT become more prevalent in legal settings.