30.03.2025

Law firm in deep trouble in South Africa

A Pietermaritzburg law firm recently found itself in hot water after citing non-existent case authorities which were likely generated by Artificial Intelligence (AI). Now, it has to pay costs and answer to the Legal Practice Council.

Commenting on the incident, Knowledge Management specialists at Cliffe Dekker Hofmeyr (CDH), Retha Beerman and Safee-Naaz Siddiqi, explained that, over the last few years, AI has become a crucial tool for many professionals.

This includes legal professionals, many of whom use AI for research.

However, Beerman and Siddiqi explained that over-reliance on AI can be dangerous, especially when lawyers do not take the time to fact-check information.

This has landed several law firms in serious trouble, both locally and internationally.

For example, in 2023, a judge in the United States fined two lawyers and their law firm $5,000 (R90,730) after they submitted fake citations generated by ChatGPT in a court filing.

This issue came up again in the recent High Court case of Mavundla v MEC Department of Co-Operative Government and Traditional Affairs and Others, where the applicant’s legal team sought leave to appeal against a prior High Court ruling.

However, they relied on seven non-existent cases to do so.

Despite being given multiple opportunities to do so, the team failed to verify these references, raising suspicions that a generative AI tool had been used without oversight.

The presiding judge criticised the team’s negligence and lack of accountability, especially as the candidate legal practitioner denied using AI and the firm’s senior principal offered little reassurance.

The judge ultimately dismissed the application, penalising the attorneys by ordering them to pay certain costs from their own pockets and referring the matter to the Legal Practice Council for possible professional misconduct proceedings.

Beerman and Siddiqi explained that this case serves as a grave warning and an urgent summons for the legal profession to employ stringent safeguards against professional negligence in the age of AI.

“Ultimately, legal practice in South Africa is at a crossroads,” they said.

“We can embrace AI’s potential to improve efficiency and access to justice, but only if we remain vigilant, using reliable databases and cultivating a culture where verifying citations is second nature.”

Beerman and Siddiqi explained that the incident highlights an unsettling flaw, sometimes referred to as “AI hallucinations”, where an AI engine confidently produces plausible-sounding but ultimately fictional references.

For example, in one case, an AI tool incorrectly accused an American law professor of sexual harassment and cited a fictional report from The Washington Post.

In another instance, when an AI summarization tool was used to condense a legal document, it invented certain legal terms and omitted important details. As a result, the summary misrepresented the original document.

Beerman and Siddiqi warned that these bogus authorities can appear deceptively legitimate, even to the trained eye, usually complete with case numbers, year citations, and made-up judicial remarks.

“In fast-paced legal practice, practitioners under time pressure may mistakenly accept these results as genuine unless they diligently confirm them against trustworthy sources,” they said.

The real harm arises because legal argument depends on accurate precedent.

When false citations slip through, legal practitioners risk embarrassment, cost orders, and damage to the court’s trust in the counsel’s integrity.

“In South Africa, which is grounded in constitutional values and a strong tradition of precedent, any contamination of the record by fake cases undermines the credibility of the entire legal system,” they warned.

Beerman and Siddiqi explained that South African legal practitioners owe a fundamental duty of candour to the court, as enshrined in the Code of Conduct for Legal Practitioners.

The judge in the Mavundla case underscored that courts rely on counsel to cite real and relevant authorities.

Whether caused by negligence, over-reliance on AI, or supervision lapses, presenting fictitious precedents to a court is the direct opposite of that duty.

“Candidate and junior legal practitioners, in particular, may be tempted to rely on AI for efficiency,” Beerman and Siddiqi said.

“However, this does not absolve them – or their supervising principals – of the ethical obligation to ensure all submissions are accurate.”

Many ethical and hard-working legal professionals may feel uneasy about their ability to navigate a safe path in a world where generative AI poses problems they do not understand and demands skills they do not have.

Beerman and Siddiqi stressed that ignoring this skills gap and failing to gain a deeper comprehension of emerging technologies could be considered an ethical lapse in itself.

Vigilance is non-negotiable in legal practice, and meticulous verification is the cornerstone of AI-assisted legal research.

“No matter how convincingly an AI tool presents a source, legal practitioners must always confirm its authenticity and relevance, reading the original judgments to avoid citing non-existent cases or misrepresenting the law.”

Beerman and Siddiqi added that this judgment should serve as a catalyst for conversations about how best to integrate AI into a legal environment founded on precision.

“While the technology undeniably streamlines research, it must never replace a lawyer’s critical judgement.”

“Indeed, AI is most beneficial when used in concert with human expertise: legal practitioners must do the heavy lifting to confirm, interpret and apply the law.”


This article was first published by Daily Investor and is reproduced with permission.
