In 2023, a photo of an explosion near the Pentagon briefly caused the US stock market to drop. The photo was AI-generated. A New York lawyer filed a legal brief citing court cases invented by ChatGPT; none of them existed. A deepfake audio clip of a politician saying something they never said reached millions before it was debunked.
This isn't a future problem. AI-generated misinformation is already everywhere. The question isn't whether you'll encounter it — you already have. The question is whether you'll recognise it.
You'll sort 10 real examples (quotes, headlines, statistics, and other content) as either genuine or AI-generated. After each answer, you'll learn exactly what the telltale signs were. By the end, your pattern recognition will be noticeably sharper.
After enough practice, AI-generated content starts to feel different before you can even articulate why. Here are the patterns that explain that feeling.
Real human writing has irregularities — a missing comma, a casual "gonna", a sentence fragment for emphasis. AI-generated text tends toward grammatically flawless, neutral prose. When something reads unusually clean and formal, question it.
AI content often states things confidently without pointing to verifiable sources. "Studies show..." with no citation. "Experts agree..." with no named experts. Real reporting names its sources. AI-generated content often can't, because the sources don't exist.
AI models are trained to avoid controversy. This produces content that's suspiciously even-handed — presenting "both sides" of things that aren't actually disputed, or hedging so heavily that the content says almost nothing. Real human experts take positions.
The most powerful check: can you verify the claim independently? A precise statistic, a named quote, a dated event. If searching for it returns only the original post and copies of it, with no independent corroboration, treat the claim as unverified.
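If you want the logic of that check spelled out, here is a minimal Python sketch. It assumes you have already searched for the exact quote and collected the result URLs by hand; the function name and the sample URLs are hypothetical, and real corroboration still needs human judgment about whether two sites are genuinely independent.

```python
from urllib.parse import urlparse

def independent_domains(result_urls):
    """Collect the distinct sites among search results for an exact-quote query."""
    domains = set()
    for url in result_urls:
        host = urlparse(url).netloc.lower()
        # Treat www.example.com and example.com as the same site.
        if host.startswith("www."):
            host = host[4:]
        domains.add(host)
    return domains

# Hypothetical results for an exact quote: the original post plus copies of it.
results = [
    "https://example-forum.com/post/123",
    "https://www.example-forum.com/post/123?share=1",
    "https://example-forum.com/post/123/comments",
]

sources = independent_domains(results)
if len(sources) < 2:
    # Everything traces back to one site: no independent corroboration.
    print("Only one source:", sources, "- treat the claim as unverified.")
```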
AI-generated misinformation often triggers strong emotion (outrage, fear, disgust) while providing little concrete detail. The outrage-to-substance ratio is very high. Real journalism and research tend to be the opposite: more detail, more nuance, less manufactured emotion.
Before you share anything that surprises you: (1) Can you find it on a reputable news site? (2) Is the source named and credible? (3) Does it have a specific date and location? (4) Does searching the exact quote return anything real? Four checks, under a minute. You'll catch the vast majority of AI-generated misinformation with just these four questions.
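To make the habit concrete, here is the same checklist as a tiny interactive Python script. It's a sketch, not a real tool: the `pre_share_checklist` name and the y/n prompts are my own framing of the four questions above, and the script automates nothing; it just forces an explicit answer to each check before you hit share.

```python
CHECKS = [
    "Can you find it on a reputable news site?",
    "Is the source named and credible?",
    "Does it have a specific date and location?",
    "Does searching the exact quote return anything real?",
]

def pre_share_checklist():
    """Return True only if all four checks pass."""
    for question in CHECKS:
        answer = input(question + " [y/n] ").strip().lower()
        if answer != "y":
            print("Failed:", question, "- hold off on sharing.")
            return False
    print("All four checks passed.")
    return True

if __name__ == "__main__":
    pre_share_checklist()
```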
Research on Twitter found that false stories spread roughly six times faster than true ones. The Pentagon explosion image reached millions before the debunking reached hundreds of thousands. That means your job isn't just spotting the fake; it's not sharing it in the first place. One share from you can become a hundred thousand from the people who trust you.
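That last figure is just geometric growth. As a purely illustrative sketch, assume each share gets reshared by about ten people; after five rounds, the newest generation of shares alone numbers 10^5 = 100,000:

```python
# Illustrative numbers only: neither the branching factor nor the number
# of rounds comes from real data.
branching = 10  # assumed reshares triggered by each share
rounds = 5      # assumed generations of resharing

# Size of the newest generation after `rounds` rounds of resharing.
reach = branching ** rounds
print(reach)  # 100000: one share becomes a hundred thousand
```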
Slow down before you share. The content designed to make you react fast is usually the content most worth double-checking.