
Fake citations, misuse and “not really an apology”: lawyer sanctioned for using AI twice in his defense

A New York lawyer has been disciplined for including fake, AI-generated legal citations in his filings. Caught red-handed, he then did it again while defending himself over the first mistake. Enough to make his case worse.

Some people definitely don’t learn from their mistakes. That is the case of one American lawyer. As 404 Media reported, Michael Fourte was sanctioned by the New York Supreme Court for filing legal documents partially generated by an AI… twice.

It all starts with a fairly ordinary case: a family dispute over an unpaid loan. But Michael Fourte’s defense took a surprising turn. The plaintiff and his lawyers reportedly discovered “inaccurate citations in the defendants’ opposition brief, which appeared to have been ‘hallucinated’ by an artificial intelligence tool,” wrote Joel Cohen, a justice of the New York Supreme Court.

In a ruling this month, Judge Cohen confirmed that the lawyer had included false citations (references to past cases) generated by artificial intelligence in his brief.

Double the errors the second time

Once he was caught, a motion for sanctions was filed against the lawyer. Yet he did not hesitate to repeat the exercise, including in his defense before the court other citations entirely invented by an AI. His response even contained twice as many errors as the initial document: seven of the cases cited simply did not exist, and another three did not support the arguments presented.

He neither denied nor admitted using AI. He only acknowledged that “several passages, intended to paraphrase or summarize legal principles, had been inadvertently placed in quotation marks.”

As might be expected, the repeat offense made his case worse, and the ruling was unsparing. “The lawyer relied on unverified AI – by his own account, through insufficiently supervised colleagues – to defend his use of unverified AI,” the judge lamented. He stressed that the use of artificial intelligence tools does not exempt legal professionals from their duty of rigorous verification.

Several similar cases

“The proliferation of unverified use of AI thus creates the risk that a false citation will end up in a judicial decision, forcing courts to dedicate their limited time and resources to avoiding such an outcome,” he wrote, clearly frustrated at the time wasted.

Ultimately, the attorney was ordered to pay the opposing party’s legal fees, and the judge referred his decision to the New Jersey Office of Attorney Ethics.

Faced with the court’s severity, Michael Fourte ended up admitting his faults. “I really have no excuse,” he said, acknowledging that he had not checked all the citations and had trusted his assistants. “When lawyers do not verify their work, whether generated by AI or not, they harm their clients and harm the Court and the profession,” the judge replied simply. “In short, lawyers’ duty of candor to the Court cannot be delegated to software.”

This is not the first time a lawyer has been caught misusing AI. Last June, Richard Bednar, an American lawyer, was sanctioned by the Utah Court of Appeals for drafting a legal document using ChatGPT. In 2023, attorney Steven Schwartz used ChatGPT in a case and cited rulings that turned out not to exist.

Author: Salome Ferraris
Source: BFM TV
