When AI-generated images first started showing up online, there was something a little... off. A sixth finger here, a melted hand there, a window frame doing something no laws of physics could explain. Our eyes could catch it. Our brains would quietly whisper, "That doesn’t look quite right."
But things changed fast. What used to be a game of spot-the-telltale-glitch became more like trying to detect a counterfeit painting with a magnifying glass. The fakes started looking flawless. Or at least flawless enough to fool people scrolling on their phones.
So we built detectors. And then those detectors started getting fooled too. Welcome to the evolving art of spotting AI images.
A short history of fake-spotting
Before AI, fake images relied on good old-fashioned Photoshop or some creative cropping. Spotting them was largely a human task: look for mismatched lighting, distorted shadows, or a pair of legs that just sort of end without explanation. It was the “Where’s Waldo?” era of fake images – the clues were right there if you looked closely.
In those early days, fakery was often crude, and the average internet user could sniff it out. Media literacy campaigns focused on teaching people to look for signs of image manipulation: inconsistent reflections, impossible proportions, or the classic clone-stamp slip-up. People shared tips like "zoom in on the eyes" or "check for mismatched light sources." You didn’t need forensic software – just a skeptical eye and a decent sense of physics.
Then came the first wave of AI-generated imagery. Initially, it was relatively easy to catch: hands looked like sea anemones, text turned into gibberish, and objects had an Escher-like resistance to basic geometry. A careful eye could still call out the fakes. Social media became a sort of collective detective agency, with commenters calling out "those teeth look weird" or "why does that lamp have three cords?"
But as generative tools matured, those visual quirks began to disappear. Hands improved. Lighting made more sense. Backgrounds got cleaner. And the average user lost their edge. The tools had leveled up, and the rules of the game had changed.
When the software joined the fight
As the fakes improved, we turned to software to help. AI image detectors became the new line of defense, scanning pixels and noise patterns in search of signs no human could easily see.
In a March 2025 CBC News video “Do AI Image Detectors Work? We Tested 5,” a team ran three test images through five free online detectors: one real photo, one AI-generated version, and one AI image posted to a social media platform to simulate compression. Only two of the five detectors consistently got it right.
Some tools flagged real photos as fake. Others were fooled by compression. Developers behind the tools pointed to the need for larger, more diverse training datasets and emphasized that detection is still worth pursuing, even if imperfect.
What we learned: these tools are helpful, but not reliable enough to trust without backup.
Watch the CBC video here.
The expert eye behind the microscope
Even the best tools have blind spots. What often makes the biggest difference is human context, domain knowledge, and a trained eye that knows what to look for.
In his TED Talk "How to Spot Fake AI Photos" (April 2025), digital forensics expert Hany Farid offers a more sophisticated lens on the problem. Farid's team helps journalists, governments, and courts analyze images, and his approach involves looking for inconsistencies that AI, being purely statistical, can’t yet fake well.
Some of his go-to techniques:
- Residual noise patterns: Authentic photos have distinct digital noise, while AI images often show a "starburst" pattern when you analyze their noise profiles (a rough sketch of this check follows the list).
- Vanishing points: Real-world geometry means parallel lines converge logically. AI sometimes forgets that part.
- Shadow direction: Natural light sources create shadows that align in predictable ways. Generative models often get these wrong.
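To make the first technique a little more concrete, here is a minimal sketch of what a residual-noise check can look like in practice. This is not Farid's actual tooling, just an illustrative approximation: it assumes Pillow, NumPy, and SciPy are installed, and "photo.jpg" is a placeholder path. The idea is to subtract a denoised copy of the image from the original and inspect what's left in the frequency domain, where many generators leave periodic peaks instead of the flat noise a camera sensor produces.

```python
# Illustrative sketch only, not a production detector or Farid's pipeline.
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def noise_residual_spectrum(path):
    """Estimate an image's noise residual and return its log-scaled 2-D spectrum."""
    # Load as grayscale floats so the subtraction below doesn't clip.
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # Subtract a denoised copy: what remains approximates the noise layer.
    residual = gray - median_filter(gray, size=3)
    # Inspect the residual in the frequency domain. Camera noise tends to look
    # flat here; many AI generators leave periodic peaks (the "starburst").
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(residual)))
    return np.log1p(spectrum)

spectrum = noise_residual_spectrum("photo.jpg")  # placeholder filename
print(spectrum.shape, spectrum.mean())
```

Visualizing that spectrum (for example with matplotlib's `imshow`) is where the pattern, if any, becomes visible – which is also why this kind of check is a job for analysts with tooling, not for someone scrolling a feed.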
Farid's broader message, however, isn’t just about technique. He urges people to rethink how they consume media:
- Don’t rely on social media as a primary source of news.
- Support fact-checkers and journalists who are doing the work.
- Think before sharing.
Watch the TED Talk here.
The tools won't keep up forever
Detection only works if we actually stop and look. But most of the time, we don’t. We scroll, we glance, we move on. And that's exactly when fakes slip by.
Hank Green tackles this topic with equal parts clarity and resignation. The argument: whatever methods we use to detect fakes today will likely become useless tomorrow. Detection tech improves, but so does generative tech – often faster.
The bigger issue? We simply don’t have time to scrutinize every image. No one zooms in on every photo in their feed to count fingers or trace vanishing points.
So what actually prompts us to question what we see? Familiarity. Suspicion. Maybe a weird vibe. As Green puts it, "The question of the modern day is not whether we can figure something out once we choose to scrutinize it. The question is, when and why do we choose to scrutinize things?"
Shifting the mindset: from detection to context
Rather than relying on tools or tricks, the smarter long-term play might be a mindset shift:
- Consider the source: Who posted the image? What platform is it on? Is it being shared by a reputable outlet? Context can offer more clues than pixels ever could.
- Reverse search: Use image search tools to trace where an image came from. If the image shows up on sketchy forums before it hits the news, that might be telling.
- Ask what's missing: Is there context? A caption? Other images from the same event? Real news events often come with a flurry of supporting content. A one-off image with no corroboration should raise an eyebrow.
- Look for confirmation: Can you find another version of the story from a trustworthy source? Reputable journalism isn’t perfect, but it tends to be consistent. A story that only appears in fringe corners of the internet should be double-checked.
Media literacy doesn’t mean distrusting everything. It means practicing a healthy skepticism – the kind that checks twice before clicking "share."
It also means training your instincts. Start asking yourself: Why am I seeing this image? Who wants me to see it? What emotions is it trying to provoke? These questions often reveal just as much as any watermark or metadata ever could.
Lastly, remember that the goal isn't to become a full-time fact-checker. It's to be a slightly more attentive citizen of the internet. A little doubt, applied selectively and thoughtfully, goes a long way.
What this means for the future of storytelling
So where does that leave us? For starters, it means staying curious, not cynical. AI isn’t going away – and that's not necessarily a bad thing. These tools open new doors for creativity and efficiency. They can help us tell bigger stories, faster. But they also raise the bar for what counts as trustworthy, well-made content.
If you’re someone who tells visual stories for a living, this is your moment to level up. It's not just about producing images anymore – it's about building trust, offering context, and being intentional with how your visuals land in the world.
Want to hear more about how creatives and media pros are navigating this new terrain? Check out our on-demand webinar: How AI is Shaping Storytelling and Visual Media. It dives into the real changes happening behind the scenes and what they mean for the future of visual work.
Stay sharp. Stay curious. And maybe count the fingers, just in case.