However, the guidance has been criticized for its brevity: at only six pages, it offers little more than basic details about the technology. Critics argue that, given the high stakes, it may be unwise to let judges use AI extensively with only a few gentle recommendations to guide them. The guidance also warns judges to assume that anything typed into a chatbot interface is effectively public, since AI companies harvest user interaction data.
Key takeaways:
- The UK Judicial Office has issued guidance allowing judges to use AI tools like ChatGPT to write legal rulings and perform other tasks.
- Despite previous problems with AI in legal settings, such as the case of two New York lawyers who were fined for submitting legal documents written by ChatGPT, the UK judiciary appears enthusiastic about the technology.
- The guidance does suggest some limits on AI use, acknowledging that AI responses may be inaccurate, incomplete, misleading, or biased, and recommending that judges verify the accuracy of any AI output before relying on it in a ruling.
- The guidance also warns about privacy concerns, noting that AI companies harvest data from user interactions and that judges should treat typing something into a chatbot interface as equivalent to publishing it publicly.