The article also discusses the challenges of applying retrieval-augmented generation (RAG) in AI legal tools, including the inherent difficulty of legal retrieval and the risk of retrieving inapplicable authority. Given the current opacity of these tools and the ethical and professional responsibility requirements lawyers must meet, the article concludes by calling for transparency, public benchmarking, and rigorous evaluation of AI tools in the legal profession.
Key takeaways:
- AI tools are increasingly used in the legal profession, but they have a documented tendency to 'hallucinate', i.e., produce false information. This is a serious concern because fabricated or inaccurate output can lead lawyers to incorrect legal judgments and conclusions.
- Despite providers' claims to the contrary, AI-driven legal research tools such as LexisNexis's Lexis+ AI and Thomson Reuters's Westlaw AI-Assisted Research still produce incorrect information a significant share of the time.
- Retrieval-augmented generation (RAG) is seen as a potential solution to the hallucination problem, but the study found that even RAG systems are not hallucination-free, owing to challenges unique to the legal domain such as the risk of retrieving inapplicable authority (see the sketch after this list).
- The study highlights the need for transparency and rigorous benchmarking of legal AI tools: their current opacity makes it difficult for lawyers both to comply with ethical and professional responsibility requirements and to adopt these tools responsibly.
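To ground the RAG discussion above, here is a minimal Python sketch of the retrieve-then-generate pattern. Everything in it is an illustrative assumption, not the pipeline of any tool discussed in the study: the toy corpus, the keyword-overlap retriever standing in for an embedding model, and the prompt format are all hypothetical, and the final LLM call is elided. The point it makes is that the generator is only as reliable as the authority the retriever surfaces.

```python
# Minimal RAG sketch (hypothetical corpus and retriever; the LLM call is elided).
import math
import re
from collections import Counter

# Toy stand-in for a legal research database (illustrative only).
CORPUS = [
    "Case A (9th Cir. 2001): negligence requires duty, breach, causation, damages.",
    "Case B (5th Cir. 1998): contract damages limited to foreseeable losses.",
    "Statute C: limitations period for tort claims is two years.",
]

def _vector(text: str) -> Counter:
    """Term counts over lowercase word tokens; stands in for an embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    qv = _vector(query)
    return sorted(CORPUS, key=lambda d: _cosine(qv, _vector(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; a real system would pass this to an LLM."""
    passages = retrieve(query)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below; cite by number.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("What are the elements of negligence?"))
```

Even with strict grounding instructions, if retrieve() surfaces inapplicable authority, e.g., a case from the wrong jurisdiction, the model can cite it faithfully and still mislead the user. That retrieval-side failure mode is one reason the study finds RAG alone does not eliminate hallucination in legal tools.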