Balancing AI and Ethics in Legal and HR Practices: A Pioneering Lesson

Use of AI in the workplace

In the evolving landscape of legal and HR practices, the integration of Artificial Intelligence (AI) has opened new frontiers of efficiency and innovation. However, the rapid adoption of AI technologies, like generative AI chatbots, also presents unique challenges and ethical dilemmas. This article delves into these complexities, using a pioneering case as a cautionary tale to underline the importance of diligence and ethical considerations in the professional use of AI.

A Cautionary Legal Tale

Zhang v. Chen, 2024 BCSC 285 (“Zhang”) involves the misuse of AI by a legal professional. The case arose from an application by Mr. Chen, who resided in China, for more parenting time with his children, who lived with their mother, Ms. Zhang, in Canada. Mr. Chen’s counsel, Ms. Ke, included in her legal filings references to two non-existent cases generated by ChatGPT (see Zhang at paras 2 and 8). This misuse not only wasted resources but also breached ethical standards, highlighting the pitfalls of relying uncritically on AI-generated information.

Ethical and Practical Implications for Legal and HR Professionals

Use of AI in HR and legal practices offers significant advantages, including enhanced efficiency in recruitment, talent pipelining, and the automation of routine tasks. However, these benefits come with the responsibility to ensure that AI use does not compromise ethical standards or lead to misinformation.

Legal Practices

For legal professionals, this case serves as a reminder of the ethical obligations inherent in their work. Reliance on AI tools must be balanced with a commitment to accuracy and integrity. Legal practitioners must exercise due diligence to verify the information generated by AI, ensuring that it meets the rigorous standards required in legal documentation and proceedings. 

The Law Society of British Columbia’s 2023 professional guidance stated that lawyers have an “ethical obligation to ensure the accuracy of materials submitted to court” and advised that courts should be told of any materials generated with technology like ChatGPT. See Zhang at para 34.

The need to verify AI-generated information is underscored by a January 2024 study of large language models (LLMs): Matthew Dahl et al., “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models” (2024) arXiv:2401.01301. The study found that “legal hallucinations” occur at rates of up to 69% with ChatGPT and 88% with Llama 2. The LLMs also fail to “correct a user’s incorrect legal assumptions”, cannot predict when they will hallucinate, and do not always recognize that they are doing so. The study accordingly warns against “unsupervised integration of popular LLMs into legal tasks”. See Zhang at para 38.

The judgment found that Ms. Ke’s use of the fake cases caused delay and confusion, requiring opposing counsel to take steps they would not otherwise have had to take (see Supreme Court Family Rules, R. 16-1(30)(c) and (d)). Ms. Ke was held personally liable for costs. The court also ordered her to review any other materials she had put before the court that may have been AI-generated and to advise the opposing parties immediately. See Zhang at paras 39 to 43.

HR Practices

Similarly, HR professionals leveraging AI in recruitment and other HR processes must be aware of the potential risks, including issues related to privacy, accuracy (as discussed above), and bias. The use of AI in screening candidates, for instance, requires careful consideration to ensure compliance with applicable regulations and ethical standards, for example: 

  • HR processes that rely on extensive personal data can raise employee concerns about privacy breaches or, in the worst case, lead to actual breaches. Establishing strong data protection policies is crucial to address these risks while making the most of AI.
  • The absence of human judgment in AI-driven decisions, such as promotions and career development, presents ethical challenges. HR must balance AI’s efficiency with human insight to ensure decisions are made fairly.
  • Bias is a risk in hiring, as AI systems may unintentionally perpetuate biases embedded in historical data. To address this, HR professionals must actively monitor AI systems to ensure fair hiring practices.

Lessons Learned and Best Practices for HR and Legal Professionals

Understanding AI Limitations: A thorough understanding of the capabilities and limitations of AI technologies is crucial for professionals, enabling responsible and effective use. As noted in Zhang at para 36, there is an “express warning on the ChatGPT website that the output could be inaccurate and that using ChatGPT is not a substitute for professional advice.”

Compliance with Laws and Regulations: In HR practices, particularly, adherence to privacy, human rights, and AI regulations is essential to prevent legal and ethical violations.

Continual Learning and Adaptation: Professionals should stay informed about advancements in AI and evolving legal standards related to its use, adapting practices accordingly.

Connect with Neil Hain Dispute Resolution

As we navigate the complexities of integrating AI into professional practices, expert guidance becomes indispensable. Whether you’re grappling with intricate workplace investigation issues or facing complex HR challenges, Neil Hain Dispute Resolution is here to provide comprehensive experience and expertise. With a deep understanding of the evolving legal and HR landscapes, including the integration of AI technologies, Neil Hain offers thorough, ethical, and effective solutions tailored to your unique needs.