A lawyer who said it was “mortifying” to learn she had mistakenly submitted AI-generated fake case law to back her client’s petition in a family law hearing is being investigated by the province’s regulatory body for lawyers.
“I can confirm that the Law Society is investigating the conduct of Chong Ke, a practising member of the Law Society, who is alleged to have relied in submissions to the court on non-existent case law identified by ChatGPT,” said spokeswoman Christine Tam in an email.
She also said the Law Society has issued guidance to lawyers on the appropriate use of artificial intelligence technology and “expects lawyers to comply with the standards of conduct expected of a competent lawyer if they do rely on AI in serving their clients.”
Tam also said the Law Society will likely release “more guidance for lawyers on this,” but she said there was no spokesperson with AI expertise available to speak before deadline.
Ke was representing client Wei Chen in an application to B.C. Supreme Court filed in December for parenting time as part of divorce proceedings with his wife, Nina Zhang.
In the application, Ke submitted a summary of two B.C. Supreme Court cases under the heading “legal basis,” according to a recent ruling on the matter.
Both cases purportedly showed the court granting parents in separate family law proceedings the right to travel out of the country with their children, the same permission Chen was seeking.
When Zhang’s lawyer said he couldn’t find the cases, Ke provided a new list of cases, but Zhang’s lawyer demanded copies of the two originals, the ruling said.
Ke responded with a letter acknowledging she had “made a serious mistake” in preparing the application by referring to two cases suggested by ChatGPT, an AI tool, without verifying the source, according to the ruling.
She said she had no intention to mislead the court or opposing counsel, and she apologized, the ruling said. She withdrew the two cases from the application before it went before the court, it said.
“I am deeply embarrassed about this matter,” she also told the court. “I am remorseful about my conduct.”
At a later hearing to determine who was responsible for the costs of the application hearing, Zhang’s lawyer asked the court to assess “special costs” against Ke because he had spent extra time and expense searching for the fake case law.
But the judge declined to assess those special costs, noting that the fake case law never made it before the court because of checks and balances in the process, and because Ke had no intent to deceive.
That was “not intended to minimize what has occurred, which — to be clear — I find to be alarming,” said Justice David Masuhara in his ruling.
He didn’t award Zhang’s lawyer special costs, but he did order Ke to pay him the extra costs of searching in vain for the fake cases, according to the ruling.
“The additional effort and expense is to be borne personally by Ms. Ke,” he said.
The ruling also ordered her to review all her files before the court for materials generated by AI tools, and to advise the court and opposing counsel of any she finds.
“It would be prudent for Ms. Ke to advise the court and the opposing parties when any materials she submits to the court include content generated by AI tools such as ChatGPT,” he said.
Masuhara also cited a January 2024 study that found “legal hallucinations are alarmingly prevalent,” occurring about 70 per cent of the time with ChatGPT 3.5 and almost 90 per cent with Llama 2, and that large language models, or LLMs, cannot predict or even know when they are producing legal hallucinations.
He said the integrity of the justice system is threatened if the technology isn’t used competently.
“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers,” the judge wrote.