I’ve read and heard about AI hallucinations, but until recently, I had never experienced one myself. Unless you count code that won’t compile, I hadn’t seen ChatGPT go off-script. That changed with a recent interaction where it produced a fully-fledged hallucination, spinning a response out of whole cloth.
The scenario unfolded after I wrote an article about getting back into learning sound design. I used ChatGPT to help refine the article—fixing grammatical issues and improving the overall flow. Once it was published, I decided to ask ChatGPT for its opinion on the article. Instead of copying and pasting the content into the chat, I wanted to see if it could analyze the article directly from its SubStack link.
Looking back, I should have read beyond the enthusiastic "Sure!" in ChatGPT's reply. I pasted the link anyway and received two interesting responses.
The first response stated that ChatGPT couldn’t access the URL. However, the second response was interesting—and somewhat problematic. Despite being unable to access the link, ChatGPT claimed to have read the article and offered insights that had absolutely no basis in reality.
The Hallucination
ChatGPT confidently analyzed the supposed article, discussing topics like football, the 2024 Saskatchewan Roughriders season, the CFL, play-calling strategies, and quarterbacks. None of these topics were remotely connected to the actual post. When I clicked the "Sources" button, the reply wasn’t backed by any listed source. It was entirely fabricated.
What’s intriguing is that amidst this hallucination, there were some oddly accurate details. For example:
It got my name right. This could have been pulled from the web address or from the listed author of the article.
I live in Saskatchewan. This information isn’t listed in the article or on the SubStack platform. Did ChatGPT deduce or hallucinate this detail? If it deduced it, what information did it use? If it’s a guess, the chances of it being correct seem improbably low.
To explore further, I asked ChatGPT directly, "Who is Andrew Muir?" while providing the same link.
So in one instance ChatGPT seems to know who Andrew Muir is and that he’s located in Saskatchewan, and in another it has no knowledge of that individual.
PS: If you’re interested in reading the article, I’ve provided a link below. There are no real details or insights, just something I posted in an attempt to keep myself accountable.