03.07.2023
Legal Blog

Trust and Deception: The Hidden Risks of Blindly Embracing AI

OpenAI's ChatGPT has taken the world by storm, attracting over 100 million users and a staggering 1.8 billion visits per month. From paraphrasing and grammar-checking to summarizing and content-writing, AI has transformed a range of industries, and the legal field is no exception. In the legal realm, however, where precision is paramount, ChatGPT's ability to fabricate facts raises a red flag. After the infamous litigation fiasco surrounding the case Mata v. Avianca, Inc., it is crystal clear that legal professionals cannot and should not place absolute trust in the tool. 


The Case Unveiled


Roberto Mata alleged that a metal serving cart injured his knee during a 2019 flight, and his case soon took a dramatic turn. Avianca swiftly moved to dismiss the lawsuit, citing the expiration of the statute of limitations. Mata's legal team submitted objections in which they cited a series of court decisions to support their argument. But here's the twist: most of the cases in Mata's reply brief were entirely fictional. It emerged that the research behind the inaccurate brief was conducted not by the lawyer representing the plaintiff, Peter LoDuca, but by his colleague Steven A. Schwartz. A seasoned attorney with more than three decades of experience, Schwartz had turned to ChatGPT as a tool for finding relevant legal precedents. He later expressed deep regret over relying on the chatbot, saying he had been unaware of its potential to generate false information. In this way, a New York lawyer found himself entangled in a court hearing over his firm's use of ChatGPT for legal research.


The Perils of Blind Trust


ChatGPT, while capable of generating text on request, comes with explicit warnings that it may produce inaccurate information. In this case, although the lawyer asked ChatGPT whether the cited cases were genuine, he mistakenly trusted the tool's responses without verifying their authenticity – reckless conduct for a lawyer. The incident highlights the dangers of accepting AI-generated content without corroborating it through traditional legal research methods.


Lessons Learned


The incident once again shows that legal research is a complex and nuanced process that requires a deep understanding of legal precedent and the ability to navigate a variety of legal databases. Therefore, while AI can be a valuable resource, it should be used as a supplement to, rather than a replacement for, verified and authentic sources.


The incident serves as a stark reminder of the dangers of blind reliance on artificial intelligence: the tool's output referenced non-existent legal cases. This is not to say that AI chatbots have no place in legal practice; rather, their use in the legal industry should be approached with common sense. AI can certainly increase efficiency and productivity, but it can also lead to unintended legal consequences. Lawyers and legal professionals should exercise caution and vigilance when using these tools and should always prioritize accuracy and validity when submitting legal documents to the court. While these tools have their uses, they cannot replace the expertise and judgment of a skilled legal professional.