The use of Artificial Intelligence (AI) has grown significantly in recent years, with businesses increasingly adopting tools like ChatGPT to optimise customer service, improve marketing strategies, and generate content. This rising dependence on AI has also extended into the legal sector.

The case of Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others (7940/2024P) [2025] ZAKZPHC 2 (8 January 2025) highlighted the dangers of litigants placing blind faith in research generated by AI.

In essence, the matter concerned an application for leave to appeal against a judgment delivered on 16 August 2024. A few days after the order was granted, the Applicant filed a notice of application for leave to appeal setting out various grounds of appeal. Three weeks later, a supplementary notice of application for leave to appeal was filed which, in addition to the grounds of appeal, contained several references to case authorities in support of the submissions made in respect of those grounds.

Concerns were raised when the presiding judge found that seven of the nine cases cited by the Applicant’s legal team in support of the leave to appeal application were non-existent, while the remaining two contained significant citation errors.

The presiding judge gave the Applicant’s legal team several chances to verify and provide the correct citations. It was highly unusual that the Applicant’s legal team was unable to produce the necessary verification, given that genuine authorities can easily be found in official repositories.

The Applicant’s legal team failed to provide a satisfactory explanation for how these “mistakes” occurred, leading the presiding judge to suggest that a generative AI tool may have been used. Although the team (which included an advocate) attributed the references to a candidate legal practitioner, the absence of proper oversight from senior members remains a serious concern.

Adding to the controversy, the candidate legal practitioner denied using AI when questioned, which raised concerns of dishonesty alongside the initial negligence. The firm’s senior principal provided little reassurance, suggesting that the errors were simply due to a lack of technological proficiency.

The consequences, however, were no laughing matter. The presiding judge ultimately dismissed the application for leave to appeal, penalising the attorneys by ordering them to pay certain costs from their own pockets (de bonis propriis) and referring the matter to the Legal Practice Council for possible professional misconduct proceedings.

AI Hallucinations:

The incident highlights an unsettling flaw, sometimes referred to as “AI hallucinations,” where an AI system confidently produces plausible-sounding but ultimately fictional references.

In legal practice, where precision is crucial to ensuring justice, the Mavundla case serves as both a compelling and concerning example of how AI hallucinations can lead to a series of ethical violations, extending well beyond mere citation errors.

The real danger, however, lies in the fact that legal arguments depend on accurate precedents, and AI hallucinations can erode the very foundation of legal reasoning. In common law systems, where precedent is vital, presenting fictional cases is not merely an academic mistake; if not identified and rectified, it could affect future rulings. The ability of AI hallucinations to appear convincing makes them especially dangerous in a profession that relies on the precise transmission of legal principles through case law.

Way Forward:

Vigilance is non-negotiable in legal practice, and meticulous verification is the cornerstone of AI-assisted legal research.

Regardless of how convincingly an AI tool presents a source, legal practitioners must always verify its authenticity and relevance through reputable databases. Instead of relying solely on AI-generated summaries, they should review the original judgments to avoid citing non-existent cases or misrepresenting the law.

At its core, the Mavundla judgment is not a condemnation of using AI to enhance legal practice, nor should it be perceived as such. Instead, it serves as a cautionary tale about the dangers of blindly trusting AI-generated results and neglecting the foundational principles of ethical legal practice.

The full judgment can be downloaded here:

https://www.saflii.org/za/cases/ZAKZPHC/2025/2.pdf


Article drafted by Cammy Marais