A CAUTIONARY TALE ON ARTIFICIAL INTELLIGENCE
As more High Courts transition to the Court Online system, a system that enables legal practitioners to deliver court documents electronically, the legal profession is undergoing a paradigm shift in its procedures. Unsurprisingly, legal practitioners are also beginning to incorporate new technologies, such as artificial intelligence ("AI"), into their practices.
Although AI may be a quick and convenient tool, legal practitioners should remember that AI can produce inaccurate, incomplete, or entirely fictitious information presented as fact.
AI is not merely another search engine: it also generates its own content, including "cases" and "references". This was seen in recent South African cases such as Mavundla v Member of the Executive Council, Department of Co-Operative Government and Traditional Affairs, KwaZulu Natal 2025 (3) SA 534 (KZP) and Northbound Processing (Pty) Ltd v The South African Diamond and Precious Metals Regulator & Others (2025/072038) [2025] ZAGPJHC 661, where legal practitioners appearing in the High Courts relied on "research" obtained from AI platforms and, as a result, cited non-existent "fake" cases that the AI had generated. This is obviously problematic in our precedent-based legal system, where the outcomes of previous cases are relied upon in deciding current and future matters.
It is well established in our law that legal practitioners have a strict duty not to mislead the court, whether intentionally or through negligence. A breach of this duty occurs when a legal practitioner negligently misleads the court by failing to verify their references or failing to ensure that the authorities relied upon in proceedings are true and accurate, as was confirmed in the English High Court case of Ayinde v The London Borough of Haringey. Legal practitioners must at all times remain fit and proper, an ethical foundation that requires them to maintain the highest standards of moral and professional conduct. Reliance on unverified sources of law undermines this standard and may amount to unprofessional and dishonourable conduct, subject to investigation and disciplinary action under sections 36 and 37 of the Legal Practice Act 28 of 2014.
Practitioners should generally avoid using AI for research purposes. If legal practitioners nonetheless wish to rely on AI systems to generate authorities, they must, at the very least, verify the references these systems provide, especially those relating to primary sources of law such as legislation and law reports. A quick search of any reference in authoritative digital and print law databases, such as Butterworths LexisNexis, JUTA, and SAFLII, would quickly establish whether an AI-generated reference is inaccurate or fictitious or whether it is drawn from a genuine source of law. It would be more appropriate to use AI to consolidate and record one's research findings after conducting thorough research in credible databases than to rely on AI for the research itself. These precautions will avoid situations in which "fake" cases or other AI-generated content find their way into court documents.
Legal practitioners also appear keen to rely on AI as a drafting support tool to increase the efficiency and quality of their drafting. Should they do so, they must be mindful of the provisions of the Protection of Personal Information Act 4 of 2013 ("POPI Act") when inputting their clients' personal information into AI platforms for the purpose of drafting documents. AI platforms can store that information and may make it available to other users of the same platform, over which legal practitioners have little to no control, to their clients' detriment.
We strongly recommend that legal practitioners use AI tools with caution and care to avoid the professional ramifications, and the negative consequences for their clients, that can follow from the irresponsible use of AI platforms.
