{"id":7882,"date":"2025-09-23T13:08:15","date_gmt":"2025-09-23T17:08:15","guid":{"rendered":"https:\/\/perlaw.ca\/?p=7882"},"modified":"2025-10-30T15:27:57","modified_gmt":"2025-10-30T19:27:57","slug":"mitigating-artificial-intelligence-risks-through-workplace-policies","status":"publish","type":"post","link":"https:\/\/perlaw.ca\/fr\/2025\/09\/23\/mitigating-artificial-intelligence-risks-through-workplace-policies\/","title":{"rendered":"Mitigating Artificial Intelligence Risks Through Workplace Policies"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">The use of Artificial Intelligence (\u2018AI\u2019) in Canada and around the world is exploding, and has been a key focus across political, legal and business arenas. Like all emerging technologies, it represents a powerful tool that comes with both significant opportunities and risks, which can be mitigated through carefully drafted workplace policies.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In May, Prime Minister Carney appointed Member of Parliament Evan Solomon to a new Cabinet role as Minister of Artificial Intelligence and Digital Innovation, and in June, the G7 Leaders released a Statement on AI for Prosperity with a view to driving innovation and the adoption of secure, responsible and trustworthy AI. Responsible use of AI within the legal profession is also one of the key focuses of the incoming president of the Canadian Bar Association (CBA), Bianca Kratt.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Canada\u2019s courts have also turned their mind to the use of AI, which is reflected in several recent decisions addressing the misuse of AI in court submissions.\u00a0<\/span><\/p>\n<h2><b>Lloyd\u2019s Register Canada Ltd v Choi<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">The most recent of these is Federal Court decision, <\/span><i><span style=\"font-weight: 400;\">Lloyd&rsquo;s Register Canada Ltd. v. 
Choi<\/span><\/i><span style=\"font-weight: 400;\">, 2025 FC 1233 (CanLII) (\u2018Lloyd\u2019s\u2019). It represents a cautionary tale for self-represented litigants, highlighting the serious nature of misleading the court with AI-generated materials.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The case involved a motion brought by the applicant, Lloyd\u2019s Register Canada Ltd., to remove a motion filed by the respondent, Munchang Choi, on the grounds that it was scandalous, frivolous, vexatious, and otherwise an abuse of process.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Mr. Choi, who was self-represented, claimed that his use of generative AI tools was limited to drafting and research, and that he had made a mistake when transcribing a citation for a case. This was not, however, the first time that Mr. Choi had relied on AI-generated authorities, and the applicant raised concerns about the credibility of his explanation.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the preceding Canada Industrial Relations Board (\u2018CIRB\u2019) decision, <\/span><i><span style=\"font-weight: 400;\">Choi v Lloyd\u2019s Register Canada Limited<\/span><\/i><span style=\"font-weight: 400;\">,\u00a02024 CIRB 1146, the CIRB had already addressed Mr. Choi\u2019s use of AI, noting that he had misrepresented over 30 legal authorities and principles in his submissions. The CIRB endorsed the guiding principles for AI outlined by the Federal Court and found that Mr. Choi\u2019s misuse of AI undermined the credibility and reliability of his submissions. Although the CIRB recognized that he was a self-represented party, Mr. Choi was responsible for exercising caution and ensuring the accuracy of the submissions he filed, particularly where they included references to legal authorities.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The Court also found that Mr. 
Choi had failed to take full responsibility for his actions or to express appropriate contrition to the Court. In doing so, it referred to its Notice of Direction to Parties on the Use of Artificial Intelligence in Court Proceedings, issued in May 2024, which requires parties to inform the Court and other parties if any of their submissions include AI-generated content, and of which Mr. Choi had been made aware in the earlier CIRB decision<\/span><span style=\"font-weight: 400;\">. It also observed that the \u201c<\/span><i><span style=\"font-weight: 400;\">undeclared use of AI in the preparation of documents filed with the Court, particularly when they include the citation of non-existent or\u00a0\u201challucinated\u201d\u00a0authorities, is a serious matter<\/span><\/i><span style=\"font-weight: 400;\">.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The Court ordered that the motion record be removed from the court file and awarded the applicant costs. This was necessary to preserve the integrity of the Court\u2019s process and the administration of justice.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The decision in Lloyd\u2019s mentions two well-known earlier decisions on the issue of AI-generated submissions involving lawyers in Ontario and British Columbia (BC), <\/span><i><span style=\"font-weight: 400;\">Ko v Li<\/span><\/i><span style=\"font-weight: 400;\">,\u00a02025 ONSC 2965 and <\/span><i><span style=\"font-weight: 400;\">Zhang v Chen<\/span><\/i><span style=\"font-weight: 400;\">, 2024 BCSC 285.<\/span><\/p>\n<h2><b>Other Leading Canadian Cases on AI Submissions<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">In\u00a0<\/span><i><span style=\"font-weight: 400;\">Ko v Li<\/span><\/i><span style=\"font-weight: 400;\">,\u00a02025 ONSC 2965, the applicant\u2019s lawyer, Jisuh Lee, referenced several non-existent or fake precedent cases generated through ChatGPT, both in her factum and in her oral arguments in open court.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The outcome of this show cause 
hearing, which was held to address the issue of contempt of court, reflects the fact that Ms. Lee is a senior lawyer with over 30 years of experience and no disciplinary history who did not intentionally mislead the court. These factors, as well as her forthcoming and contrite response to the court &#8211; admitting the facts, apologizing, and proposing steps to address the issue &#8211; mitigated the potential consequences that could otherwise have been imposed in this case.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The significant negative publicity surrounding this case denounced the misuse of AI in Canadian courts and served as a deterrent to the legal profession, reminding lawyers of the serious consequences that can flow from relying on AI-generated submissions without first verifying them.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Justice Myers found that Ms. Lee had violated her duties to the court and emphasized that misrepresentation of the law by a lawyer poses real risks of causing a miscarriage of justice that undermines the dignity of the court and the fairness of the civil justice system. He noted that, in making decisions, the court relies on counsel to state the law accurately and fairly, and that \u201cc<\/span><i><span style=\"font-weight: 400;\">ounsel may not mis-state or misrepresent the law to the court whether by way of AI hallucinations or by any other means<\/span><\/i><span style=\"font-weight: 400;\">.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The judge also addressed Ms. Lee\u2019s failure to comply with Rule 4.06.1 (2.1) of the\u00a0<\/span><i><span style=\"font-weight: 400;\">Rules of Civil Procedure<\/span><\/i><span style=\"font-weight: 400;\">, RRO 1990 Reg 194, which requires that a factum include a declaration signed by the lawyer or their delegate certifying that they are satisfied with the authenticity of every authority cited in the factum. 
This provision was enacted in 2024 to codify the duty of counsel to cite law honestly and without misrepresentation, in response to emerging issues raised by the use of AI in court submissions.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">He noted that while Ms. Lee had not deliberately misled the court, the \u201c<\/span><i><span style=\"font-weight: 400;\">proverbial buck stops with counsel<\/span><\/i><span style=\"font-weight: 400;\">,\u201d and that counsel bear ultimate professional responsibility for the accuracy of their submissions, as well as for supervising their staff where file preparation is delegated. This was, however, an issue to be addressed by a law society<\/span><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The judge withdrew the show cause order and deemed it satisfied, finding that there was no public interest served in proceeding with the hearing.<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">Hussein v. Canada (Immigration, Refugees and Citizenship)<\/span><\/i><span style=\"font-weight: 400;\">,\u00a02025 FC 1060 is another Federal Court decision addressing the use of generative AI for submissions in the context of an immigration proceeding.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this case, the applicant\u2019s materials included several cases that could not be located. The applicant\u2019s counsel admitted to using what was described as an AI legal research platform for Canadian immigration practitioners, without verifying the sources. 
Their reliance on AI was not revealed to the court until after four directions had been issued, and the court found that this amounted to an attempt to mislead the court and conceal reliance on AI by describing the hallucinated cases as \u201cmis-cited.\u201d\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The court expressed concerns that counsel did not understand the seriousness of the issue, noting:\u00a0<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">[39]\u00a0I do not accept that this is permissible. The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human. The Court cannot be expected to spend time hunting for cases which do not exist or considering erroneous propositions of law.<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">While costs are not ordinarily awarded in the context of immigration proceedings, special reasons in this case were found to support an award of costs, and the court ordered that consideration be given as to whether it would be appropriate to direct the applicants\u2019 counsel to pay any costs personally.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Finally, <\/span><i><span style=\"font-weight: 400;\">Zhang v Chen<\/span><\/i><span style=\"font-weight: 400;\">,\u00a02024 BCSC 285 also involved a lawyer\u2019s use of ChatGPT in the preparation of court materials and the inadvertent use of \u201cfake\u201d cases, often referred to as AI hallucinations.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This was an application for costs following the respondent\u2019s unsuccessful application for parenting time in China for three children residing with their mother, Ms. Zhang, in Canada. The respondent\u2019s lawyer, Ms. 
Ke, had inadvertently included two fictitious, AI-generated cases in the initial notice of application without verifying them. Ms. Zhang sought costs, including special costs against Ms. Ke for time spent addressing the non-existent cases.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The court considered whether Ms. Ke should be personally liable for any costs awarded,<\/span> <span style=\"font-weight: 400;\">noting that it was an extraordinary step to award costs against a lawyer, requiring a finding of a serious abuse of the judicial process by the lawyer, or dishonest or malicious conduct. This misconduct must be deliberate and not a mistake or error in judgment.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In its decision, the court noted that citing fake cases in court is an abuse of process and can lead to a miscarriage of justice, and that:\u00a0<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">[46]\u00a0\u00a0 \u00a0\u00a0As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.\u00a0 Competence in the selection and use of any technology tools, including those powered by AI, is critical.\u00a0\u00a0The integrity of the justice system requires no less.<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">The court found that Ms. Ke had not included the cases with an intent to deceive, and that the circumstances of the case did not justify awarding special costs. She had also expressed regret for using a generative AI tool that was not fit for legal purposes and had been subjected to significant negative publicity. The court did, however, exercise its discretion to find Ms. 
Ke personally liable for costs related to the additional effort and expense resulting from the confusion created by the \u2018fake cases.\u2019\u00a0<\/span><\/p>\n<h2><b>Mitigating AI Risks Through Workplace Policies<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Aside from these leading decisions, the issue of AI-hallucinated cases has come up in a variety of legal contexts, including in tribunal settings and criminal proceedings. These cases have often involved self-represented litigants with limited resources or legal expertise who may be unaware of the potential for AI to generate non-existent cases.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Courts and tribunals have generally been lenient with self-represented litigants, addressing these issues through directions to prepare new materials and through cost awards, particularly where there is no evidence of a deliberate attempt to mislead the court.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The risks are much higher for counsel, however, who have professional responsibilities in addition to being officers of the court. 
These risks include significant cost awards, reputational harm, and, in the most serious cases, professional discipline and the possibility of a finding of contempt of court.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This highlights the importance for lawyers and law firms of proactively managing these risks by establishing policies and procedures around the use of AI in legal research and the drafting of submissions. Such policies should address issues such as the types of AI platforms that may be used (proprietary or otherwise), information security, the supervision of staff, the verification of sources, and notice to courts and tribunals of the use of AI in preparing submissions.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Employers in other sectors will also benefit from proactively drafting policies to address the responsible use of AI by their employees, to avoid legal risks, including intellectual property (IP) infringement and privacy and data security issues, and to ensure legislative and regulatory compliance with new laws aimed at AI.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In Ontario, for example, Bill 149,\u00a0<\/span><i><span style=\"font-weight: 400;\">Working for Workers Four Act, 2024<\/span><\/i><span style=\"font-weight: 400;\">, introduced a new requirement to disclose the use of AI within the recruitment process for public and online job postings, a requirement that will come into force on January 1, 2026. 
For more information on this requirement, please read our recent article by Mira Nemr, <\/span><b><i>Employer Considerations for Job Postings: The Effects of Bills 149 and 190<\/i><\/b><span style=\"font-weight: 400;\">, or<\/span> <span style=\"font-weight: 400;\">contact our\u00a0Employment Law\u00a0team.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The use of Artificial Intelligence (\u2018AI\u2019) in Canada and around the world is exploding, and has been a key focus across political, legal and business arenas. Like all emerging technologies, it represents a powerful tool that comes with both significant opportunities and risks, which can be mitigated through carefully drafted workplace policies.\u00a0 In May, Prime [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":7883,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"wds_primary_category":0,"wds_primary_expertise_area":0,"footnotes":""},"categories":[82],"tags":[],"class_list":["post-7882","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-publication","expertise_area-employment"],"acf":[],"_links":{"self":[{"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/posts\/7882","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/comments?post=7882"}],"version-history":[{"count":1,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/posts\/7882\/revisions"}],"predecessor-version":[{"id":7885,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/posts\/7882\/revisions\/7885"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/media\/7883"}],"
wp:attachment":[{"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/media?parent=7882"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/categories?post=7882"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/perlaw.ca\/fr\/wp-json\/wp\/v2\/tags?post=7882"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}