A US federal judge has told lawyers that he will not accept AI-generated content in his court unless a human has verified it.
Texas federal judge Brantley Starr said that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence”, or if it was, that it was checked “by a human being”, reports TechCrunch.
“All attorneys appearing before the court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” read the standing order.
According to the judge, these AI platforms are incredibly powerful and have many uses in the law, from divorces and discovery requests to suggesting errors in documents and anticipating questions at oral argument.
“But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations,” the judge’s order further read.
Last week, ChatGPT fooled a lawyer into believing that citations the chatbot had supplied in a case against Colombian airline Avianca were real when they were, in fact, bogus.
Lawyer Steven A. Schwartz, who represented a man suing the airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research.
After the opposing counsel pointed out the non-existent cases, US District Judge Kevin Castel confirmed that six of the submitted cases “appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.
The judge scheduled a hearing to consider sanctions against the plaintiff’s lawyers.
Last month, ChatGPT falsely placed a respected US law professor on a list of legal scholars who had sexually harassed students, generated as part of a research study.
Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was shocked to discover that the chatbot had included him on the list.
“ChatGPT recently issued a false story accusing me of sexually assaulting students,” Turley wrote in a tweet.