Mitigating Artificial Intelligence Risks Through Workplace Policies

By Kate Terroux
September 23, 2025

The use of Artificial Intelligence (‘AI’) in Canada and around the world is exploding and has been a key focus across political, legal and business arenas. Like all emerging technologies, it is a powerful tool that comes with both significant opportunities and risks, and those risks can be mitigated through carefully drafted workplace policies.

In May, Prime Minister Carney appointed Member of Parliament Evan Solomon to a new Cabinet role as Minister of Artificial Intelligence and Digital Innovation, and in June, the G7 Leaders released a Statement on AI for Prosperity with a view to driving innovation and the adoption of secure, responsible and trustworthy AI. Responsible use of AI within the legal profession is also one of the key focuses of the incoming president of the Canadian Bar Association (CBA), Bianca Kratt.

Canada’s courts have also turned their mind to the use of AI, which is reflected in several recent decisions addressing the misuse of AI in court submissions. 

Lloyd’s Register Canada Ltd v Choi

The most recent of these is the Federal Court decision Lloyd’s Register Canada Ltd. v. Choi, 2025 FC 1233 (CanLII) (‘Lloyd’s’). It is a cautionary tale for self-represented litigants, highlighting the serious consequences of misleading the court with AI-generated materials.

The case involved a motion brought by the applicant, Lloyd’s Register Canada Ltd., to remove a motion filed by the respondent, Munchang Choi, on the grounds that it was scandalous, frivolous, vexatious, and otherwise an abuse of process.

Mr. Choi, who was self-represented, claimed that his use of generative AI tools was limited to drafting and research, and that he had made a mistake when transcribing a citation for a case. This was not the first time, however, that Mr. Choi had relied on AI-generated authorities, and the applicant raised concerns about the credibility of his explanation.

In the preceding Canada Industrial Relations Board (‘CIRB’) decision, Choi v Lloyd’s Register Canada Limited, 2024 CIRB 1146, the CIRB had already addressed Mr. Choi’s use of AI, noting that he had misrepresented over 30 legal authorities and principles in his submissions. The CIRB endorsed the guiding principles for AI outlined by the Federal Court and found that Mr. Choi’s misuse of AI undermined the credibility and reliability of his submissions. Although the CIRB recognized that he was a self-represented party, it found that Mr. Choi was responsible for exercising caution and ensuring the accuracy of the submissions he filed, particularly where they included references to legal authorities.

The Court also found that Mr. Choi had failed to take full responsibility for his actions or to express appropriate contrition to the Court. In doing so, it referred to its Notice of Direction to Parties on the Use of Artificial Intelligence in Court Proceedings, issued in May 2024, which requires parties to inform the Court and other parties if any of their submissions include AI-generated content, and of which Mr. Choi had been made aware in the earlier CIRB decision. It also observed that the “undeclared use of AI in the preparation of documents filed with the Court, particularly when they include the citation of non-existent or ‘hallucinated’ authorities, is a serious matter.”

The Court ordered that the motion record be removed from the court file and awarded the applicant costs. This was necessary to preserve the integrity of the Court’s process and the administration of justice.

The decision in Lloyd’s mentions two well-known earlier decisions on the issue of AI-generated submissions involving lawyers in Ontario and British Columbia (BC): Ko v Li, 2025 ONSC 2965, and Zhang v Chen, 2024 BCSC 285.

Other Leading Canadian Cases on AI Submissions 

In Ko v Li, 2025 ONSC 2965, the applicant’s lawyer, Jisuh Lee, cited several non-existent or fake precedent cases that had been generated through ChatGPT, both in her factum and in her oral arguments in open court.

The outcome of this show cause hearing, which was held to address the issue of contempt of court, reflects the fact that Ms. Lee is a senior lawyer with over 30 years of experience and no disciplinary history who did not intentionally mislead the court. These factors, together with her forthcoming and contrite response to the court (admitting the facts, apologizing, and proposing steps to address the issue), mitigated the potential consequences that could have been imposed in this case.

The significant negative publicity surrounding this case denounced the misuse of AI in Canadian courts and served as a deterrent to the legal profession, reminding lawyers of the serious consequences that can flow from relying on AI-generated submissions without first verifying them.

Justice Myers found that Ms. Lee had violated her duties to the court and emphasized that misrepresentation of the law by a lawyer poses real risks of causing a miscarriage of justice that undermines the dignity of the court and the fairness of the civil justice system. He noted that in making decisions, the court relies on counsel to state the law accurately and fairly, and that “counsel may not mis-state or misrepresent the law to the court whether by way of AI hallucinations or by any other means.”

The judge also addressed Ms. Lee’s failure to comply with Rule 4.06.1(2.1) of the Rules of Civil Procedure, RRO 1990 Reg 194, which requires that a factum include a declaration, signed by the lawyer or their delegate, certifying that they are satisfied with the authenticity of every authority cited in the factum. This provision was enacted in 2024, in response to emerging issues raised by the use of AI in court submissions, to codify the duty of counsel to cite law honestly and without misrepresentation.

He noted that while Ms. Lee had not deliberately misled the court, the “proverbial buck stops with counsel,” and that counsel bear ultimate professional responsibility for the accuracy of their submissions, as well as for supervising their staff where file preparation is delegated. This was, however, an issue to be addressed by the law society.

The judge withdrew the show cause order and deemed it satisfied, finding that there was no public interest served in proceeding with the hearing.

Hussein v. Canada (Immigration, Refugees and Citizenship), 2025 FC 1060 is another Federal Court decision addressing the use of generative AI for submissions, this time in the context of an immigration proceeding.

In this case, the applicant’s materials included several cases that could not be located. The applicant’s counsel admitted to using what was described as an AI legal research platform for Canadian immigration practitioners, without verifying the sources. The reliance on AI was not revealed to the court until after four directions had been issued, and the court found that describing the hallucinated cases as merely “mis-cited” amounted to an attempt to mislead the court and conceal the reliance on AI.

The court expressed concerns that counsel did not understand the seriousness of the issue, noting: 

[39] I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law.

While costs are not ordinarily awarded in immigration proceedings, the court found that special reasons in this case supported an award of costs, and it ordered that consideration be given to whether it would be appropriate to direct the applicants’ counsel to pay any costs personally.

Finally, Zhang v Chen, 2024 BCSC 285, also involved a lawyer’s use of ChatGPT in the preparation of court materials and the inadvertent citation of “fake” cases, often referred to as AI hallucinations.

This was an application for costs following the respondent’s unsuccessful application for parenting time in China for three children residing with their mother, Ms. Zhang, in Canada. The respondent’s lawyer, Ms. Ke, had inadvertently included two fictitious, AI-generated cases in the initial notice of application without verifying them. Ms. Zhang sought costs, including special costs against Ms. Ke, for the time spent addressing the non-existent cases.

The court considered whether Ms. Ke should be personally liable for any costs awarded, noting that it was an extraordinary step to award costs against a lawyer, requiring a finding of a serious abuse of the judicial process by the lawyer, or dishonest or malicious conduct. This misconduct must be deliberate and not a mistake or error in judgment. 

In its decision, the court noted that citing fake cases in court is an abuse of process and can lead to a miscarriage of justice, and that: 

[46] As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.

The court found that Ms. Ke had not included the cases with an intent to deceive, and that the circumstances of the case did not justify awarding special costs. She had also expressed regret for using a generative AI tool that was not fit for legal purposes and had been subjected to significant negative publicity. The court did, however, exercise its discretion to find Ms. Ke personally liable for the costs related to the additional effort and expense resulting from the confusion created by the “fake” cases.

Mitigating AI Risks Through Workplace Policies

Aside from these leading decisions, the issue of AI hallucinated cases has come up in a variety of legal contexts, including in tribunal settings and criminal proceedings. These cases have often involved self-represented litigants with limited resources or legal expertise who may be unaware of the potential for AI to generate non-existent cases. 

Courts and tribunals have generally been lenient with self-represented litigants, particularly where there is no evidence of a deliberate attempt to mislead the court, addressing these issues through directions to prepare new materials and cost awards.

The risks are much higher for counsel, however, who have professional responsibilities in addition to their duties as officers of the court. These risks include significant cost awards, reputational harm, and, in the most serious cases, professional discipline and the possibility of a finding of contempt of court.

This highlights the importance for lawyers and law firms of proactively managing these risks by establishing policies and procedures around the use of AI in legal research and in drafting submissions. Such policies should address issues like the types of AI platform that may be used (proprietary or otherwise), information security, supervision of staff, verification of sources, and notice to courts and tribunals of the use of AI in preparing submissions.

Employers in other sectors will also benefit from proactively drafting policies that address the responsible use of AI by their employees, avoid legal risks such as intellectual property (IP) infringement and privacy and data security issues, and ensure compliance with new legislation and regulations aimed at AI.

In Ontario, for example, Bill 149, Working for Workers Four Act, 2024, introduced a new requirement to disclose the use of AI in the hiring process in publicly advertised job postings, which will come into force on January 1, 2026. For more information on this requirement, please read our recent article by Mira Nemr, Employer Considerations for Job Postings: The Effects of Bills 149 and 190, or contact our Employment Law team.

 
