Learn From This Lawyer's AI Mistakes
A ChatGPT Embarrassment
Steven A. Schwartz
We spend a lot of time writing. We read and write daily, in emails, text messages, and Facebook posts. A lot of people do not enjoy writing, including some great writers. Luckily, one of the most popular uses of AI chatbots is helping non-writers write well - but sometimes, it can all go wrong.
How wrong? Well, all you have to do is ask Steven A. Schwartz.
Steven has been a practicing lawyer in the state of New York for the last 30 years. He specializes in personal injury and workers' compensation cases at the state level. That is a pretty narrow focus for a lawyer to take on - but by all accounts, he was great at his job.
In 2020, he took a case that seemed like any other - but then the company his client was suing suddenly went bankrupt. This meant the case moved from state court to federal court in 2022. Steven had rarely appeared in federal court, and he was not admitted to the bar of the Southern District of New York.
So, another attorney from his firm took over as the lead, and Steven continued on the team in a research and writing role.
Steven tried his best to keep up with the filings - but there were issues in this case he was simply unfamiliar with. Remember, he was a personal injury lawyer suddenly researching how bankruptcy affected claims under the Montreal Convention of 1999.
To say this was a “new challenge” might be an understatement.
The issues began when Steven realized that his firm lacked the research tools he needed. Due to a billing error, his firm's research database did not give him easy access to federal cases.
Around that time, an AI chatbot called ChatGPT was going viral.
After hearing about OpenAI's chatbot from his college-aged kids and from online articles, he thought it might be his best bet. He asked ChatGPT to help with his research for the case.
Okay — here’s the problem.
ChatGPT doesn't have reliable access to federal case law. So it did what chatbots often do when they don't know something… it hallucinated.

"Hallucination" is a real term, and it really happens. When these AI bots don't have enough information to connect all the dots, they are known to invent plausible-sounding information instead of admitting they don't know.
But Steven didn't know this, so when the team needed to submit a brief to the judge, he included citations to six other legal cases that ChatGPT had given him. The cases and their presiding judges were referenced thoroughly in the brief. To the shock of everyone, including Steven, the cases he cited never existed; they were hallucinations.
Steven's lawyers argued to the judge that he had no reason to expect the chatbot would make up information. The judge disagreed. He imposed $5,000 in sanctions, and reputations were ruined.
They also had to deal with some old-school public humiliation: Steven and his firm had to reach out to each real judge whom ChatGPT had wrongly cited and explain the incident. Yikes.
Although this might have been the first high-profile case of AI hallucination - it certainly won't be the last. ChatGPT's output also tends to carry its own tells. For example, if you ask it to write your school paper, it will likely include self-referential phrases like "as an AI language model" that give away who really wrote it.
This isn’t stopping anyone, though.
All you need to do is go on Google Scholar, a search engine for scholarly literature, and search for "as an AI language model" - you will see how many genuine academic papers include the phrase.

The authors are not even proofreading their papers to find and delete the evidence that they were not-so-secretly written by ChatGPT!
So remember: these chatbots can be a great shortcut if you don't enjoy researching or writing. But they may unintentionally lead you to incomplete or wholly fabricated information.