Chatty Machines and Fake Facts

AI Hallucinations & the Stochastic Parrot Problem

Heta Rahul Patel
3 min read · May 25, 2024

Imagine chatting with a brilliant friend who occasionally talks utter nonsense with total confidence. That’s what it’s like with large language models (LLMs). These AI systems can generate impressive text but often struggle with accuracy.

Let’s dive into two of their quirks: hallucinations and stochastic parrots.

Illustration: James Yang

Hallucinations are like creative daydreams. LLMs, trained on massive amounts of text scraped from the internet, can stitch together words that seem logical but are entirely fabricated. Ask one of these generative AI agents which vowels appear in “aluminium”. It might say ‘A’, ‘I’, ‘U’ (which is correct), but then also add an ‘E’ to the answer (entirely made up). Here’s the catch: LLMs deliver this misinformation with the same air of authority as a real fact. It’s like a friend confidently telling you fish can breathe just fine out of water.

Gen AI Hallucinating an E in Aluminium
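This particular kind of slip is easy to catch with a simple ground-truth check. Here’s a minimal Python sketch: `ask_llm` is a hypothetical stand-in for whatever chat model you query (its canned reply mimics the hallucinated ‘E’), and the rest just compares the claimed vowels against the letters that actually appear in the word.

```python
VOWELS = set("AEIOU")

def ask_llm(prompt: str) -> list[str]:
    """Hypothetical stand-in for a real chat API call.
    Imagine the model confidently replying with a hallucinated 'E'."""
    return ["A", "I", "U", "E"]

def actual_vowels(word: str) -> set[str]:
    """Ground truth: the vowel letters that really appear in the word."""
    return {ch for ch in word.upper() if ch in VOWELS}

def fact_check(word: str) -> None:
    claimed = set(ask_llm(f"Which vowels are in '{word}'?"))
    made_up = claimed - actual_vowels(word)
    if made_up:
        print(f"Hallucinated vowels: {sorted(made_up)}")  # -> ['E']
    else:
        print("Answer checks out.")

fact_check("aluminium")
```

Nothing about the wrapper is clever; the point is that the model’s confident tone tells you nothing about correctness, so an external check has to.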

Stochastic parrots are simpler. They mimic what they’ve seen before, without understanding it. Picture a parrot that flawlessly repeats phrases it has heard but can’t grasp their meaning. An LLM might read recipes for various dishes and then confidently (but incorrectly) explain how to bake a cake by stitching together random steps from different recipes. Ask your chatty AI friend for a cake recipe, and you might get a mix of instructions for baking bread and roasting chicken, an incoherent jumble rather than a real recipe.

Stochastic parrot: agreeing with my prompts, without fact-checking

These issues arise because LLMs are statistical magicians, not mind readers. They predict the next word in a sequence based on probability, not meaning. It’s impressive, like a talented improv artist who can riff on any topic, but it’s imperfect: the text reads as coherent on almost any subject, yet accuracy is another story.
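To make “predicting the next word by probability” concrete, here’s a toy sketch, not how a real LLM works under the hood, just the same principle in miniature. A tiny bigram model counts which word follows which in its “training” text (one baking step, one roasting step), then generates by sampling the next word in proportion to those counts. It happily stitches the two recipes together, exactly the cake-meets-chicken jumble from the parrot example above, because it tracks word statistics, not meaning.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": steps from two unrelated recipes.
corpus = (
    "mix the flour then bake the batter in the oven . "
    "season the chicken then roast the chicken in the oven ."
).split()

# Count which word follows which (a bigram table).
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word: str) -> str:
    """Pick the next word in proportion to how often it followed `word`."""
    followers = bigrams[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# Generate a "recipe": pure statistics, zero understanding.
word, output = "mix", ["mix"]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
# Possible output: "mix the chicken then bake the batter in the oven ."
```

Run it a few times and you’ll get different, equally confident, equally meaningless “recipes”.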

Why care? Because misinformation travels fast. If we rely on LLMs for information, these quirks can mislead us. Imagine trusting your chatty AI friend for directions, only to end up lost because they confidently sent you down a non-existent purple-sunset road. Here’s the good news: recognizing these limitations helps us build better AI and use LLMs responsibly. We can treat them as powerful tools, but ones that need a fact-checker by their side.

BONUS: Here are a few posts I came across on social media in just the past two days.

Saw this on Instagram: AI trained on Reddit is crazy!

I found this LinkedIn post way too funny: AI playing Rock Paper Scissors without understanding how it actually works.

The future of AI is bright, but keeping these chatty machines grounded in reality is crucial. Imagine a future where AI can be a trusted companion, helping us research, write, and explore information. But to get there, we need to address these quirks. By understanding how LLMs work, we can develop safeguards and teach them to be discerning storytellers, not just storytellers touting made-up purple sunsets. After all, a good friend wouldn’t lead you astray, even with the best intentions.

I regularly post easy-to-understand breakdowns like this, along with some fun insights, on LinkedIn. To stay updated, you can follow or connect with me there: HetaPatel287.

#GenAI #Hallucinations #FactCheck


Written by Heta Rahul Patel

Software Engineer at JPMorgan Chase & Co., passionate about demystifying the world of finance and beyond, making complex ideas digestible.
