ChatGPT warning to South Africans

Although artificial intelligence (AI) can be a useful tool, experts warn that users should be cautious since AI can sometimes fabricate information that could land people in serious trouble.
Since OpenAI released ChatGPT as a free “research preview” in November 2022, it has become an integral part of many people’s daily lives.
Many people use it to assist them with work, school or even entertainment. It has also become commonplace to use ChatGPT as a search engine to help find tailored results.
However, generative AI tools, like ChatGPT, can sometimes give completely inaccurate information, which means that users need to exercise caution when using them.
This issue was recently raised when a Norwegian man launched a defamation case against OpenAI after ChatGPT falsely said he had murdered his children.
The complainant, Arve Hjalmar Holmen, has no public profile in Norway and asked ChatGPT for information about himself with the question, “Who is Arve Hjalmar Holmen?”
“Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event,” ChatGPT replied.
“He was the father of two young boys, aged seven and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
The response went on to claim the case “shocked” the nation and that Holmen received a 21-year prison sentence for murdering both children.
In a complaint to the Norwegian Data Protection Authority, Holmen said that the “completely false” story nonetheless contained elements similar to his own life, such as his hometown, the number of children he has, and the age gap between his sons.
This is not the only recent legal case arising from AI-generated misinformation.
Closer to home, a South African law firm also landed in hot water after citing non-existent cases in court papers, citations that were likely AI-generated.
Users beware

Durban University of Technology lecturer and technology expert Siphumelele Zondi explained on The Money Show with Stephen Grootes that AI can sometimes give users information that doesn’t exist.
This phenomenon, known as “AI hallucination”, is particularly dangerous because it is not easy to recognise.
“You would believe that it exists because of the way it puts it together and how strongly it makes a case for that particular situation or that particular person or that information,” Zondi said.
This problem is caused by the fact that generative AI tools very rarely tell users that they don’t have an answer to a particular question.
“They always have an answer. Whether it’s right or wrong, they’ll give you an answer,” he said.
The answer could be completely made up, or it could mix real details with fabricated information.
Zondi noted that this problem is not limited to ChatGPT. Other AI tools, such as Google’s Gemini and Apple Intelligence, have had the same issue, with some Apple Intelligence features even being pulled back or delayed as a result.
“It seems like it’s something that these creators of these generative AI tools can’t get right, the fact that they can’t just say they don’t know something or not create information or fabricate information.”
“It seems to be all across the board when it comes to these AI tools when we are looking for information. It’s something that they can’t fix just yet and something that they can’t get right.”
Zondi advised that although AI can be a good starting point for completing certain tasks, it is important to use other sources as well.
“Move away and then go to what has been authenticated, to information that has been verified by various sources, because, with hallucinations, AI can give you just about any information that it puts together from the internet.”
This article was first published by Daily Investor and is reproduced with permission.