
'Incredibly dangerous': Vancouver lawyer sounds alarm on AI use in court

The new technology is unethical and worrisome, according to one local expert.
Lawyers and self-represented people are using AI technology to help form their arguments, a real cause for concern according to Vancouver lawyer Kyla Lee.

Artificial intelligence apps are changing the way people approach numerous industries and tasks.

Software like ChatGPT, a bot by OpenAI, has the capacity to answer questions and craft entire essays based on parameters and instructions fed to it by the user. One journalist used it to write an article about the stock market.

Its potential to aid in previously human-led jobs has some people concerned about the future.

Vancouver lawyer Kyla Lee in particular has been sounding the alarm on social media about the potentially detrimental effects of AI in the courtroom. She says that lawyers are already trying to use AI technology to help them form arguments.

"Right now the arguments the AIs write are not good," Lee tells V.I.A. in an email. "Someone described it on Twitter as though it was written by an articled [law] student with a head injury, but as the machine learning advances the arguments will improve."

AI is also a tool for people who are self-represented, who can use it to craft arguments and defences in their cases without the expertise of a lawyer.

"And that is incredibly dangerous," says Lee. "Not only is the technology not sophisticated enough to do that yet, but it is also often just wrong."

Lee is concerned that AI-formulated defences may not be truthful in court, and that people may not recognize the real consequences perjury can have, even when it is accidental. She describes the current state of the technology as "a barrier to access to justice."

Is AI allowed in B.C. courtrooms?

According to Lee, B.C. courts have a practice direction in place that prohibits broadcasting or recording any court proceeding. Only lawyers are permitted to use their phones to send and receive messages or use the internet during court proceedings.

"The policy is somewhat loosely enforced in the sense that unless you are obvious about it, most judges and court staff do not really notice. It's hard to tell if someone is actually making a recording," she expands. "I was once in Chambers and a person had a full documentary crew filming in there and no one noticed until I pointed it out."

Since COVID, remote court appearances have also become more normalized, which makes the rule harder to enforce, Lee says. "If you can't see or hear what's going on around a person who is on audio or video, then the court may not necessarily know they are using AI."

As AI technology becomes more widely available and widely used, in theory, it would be possible for litigants to wear earpieces connected to AI feeding them arguments. Lee asserts that at some point the court will need to amend the policy to specifically prohibit AI so that it is clear to the public.

"I suspect that no lawyer would stake their entire professional reputation on solely using AI, but I think within a few years it will be more commonplace as an integration into legal practice," she says. However, she adds that she is not worried it will ever be a wholesale substitute for a human lawyer.
