Lesson 1 of 5 · Premium Track

Spotting
AI misinformation

⏱ 22 min 🕵️ Interactive challenge ✦ Score tracker

In 2023, a photo of an explosion near the Pentagon briefly caused the US stock market to drop. It was AI-generated. A prominent lawyer submitted fake court cases generated by ChatGPT — none of them existed. A deepfake audio clip of a politician saying something they never said spread to millions before it was debunked.

This isn't a future problem. AI-generated misinformation is already everywhere. The question isn't whether you'll encounter it — you already have. The question is whether you'll recognise it.

🎯 What this lesson trains

You'll sort 10 real-world examples — quotes, headlines, statistics, and other content — as either genuine or AI-generated. After each answer, you'll learn exactly what the telltale signs were. By the end, your pattern recognition will be noticeably sharper.


Real or AI-generated?
The misinformation challenge
Read each item carefully. Trust your instincts — then we'll explain what to look for.

The five signs
to check every time

After enough practice, AI-generated content starts to feel different before you can even articulate why. Here are the patterns that explain that feeling.

1 — Suspiciously perfect grammar

Real human writing has irregularities — a missing comma, a casual "gonna", a sentence fragment for emphasis. AI-generated text tends toward grammatically flawless, neutral prose. When something reads unusually clean and formal, question it.

2 — No specific sources

AI content often states things confidently without pointing to verifiable sources. "Studies show..." with no citation. "Experts agree..." with no named experts. Real reporting names its sources. AI-generated content often can't, because the sources don't exist.

3 — Overly balanced or vague

AI models are trained to avoid controversy. This produces content that's suspiciously even-handed — presenting "both sides" of things that aren't actually disputed, or hedging so heavily that the content says almost nothing. Real human experts take positions.

4 — The verification test

The most powerful check: can you verify the specific claim independently? A specific statistic, a named quote, a specific event. If searching for it only returns the original post and copies of it — not independent corroboration — treat it as unverified.

5 — Emotional urgency without substance

AI-generated misinformation often triggers strong emotion (outrage, fear, disgust) while providing little concrete detail. The outrage-to-substance ratio is very high. Real journalism and research tend to be the opposite — more detail, more nuance, less manufactured emotion.

⚡ The 60-second check

Before you share anything that surprises you: (1) Can you find it on a reputable news site? (2) Is the source named and credible? (3) Does it have a specific date and location? (4) Does searching the exact quote return anything real? Four checks, under a minute. You'll catch the vast majority of AI-generated misinformation with just these four questions.

⚠️ The correction problem

Research suggests false stories spread roughly six times faster than true ones — and corrections rarely catch up. The Pentagon explosion story reached millions before the debunking reached hundreds of thousands. This means your job isn't just spotting the fake — it's not sharing it in the first place. One share from you can become a hundred thousand from the people who trust you.

Key takeaway

Slow down before you share. The content designed to make you react fast is usually the content most worth double-checking.
