I’m seeing a dangerous trend right now: using AI fakes to protest AI problems.
It goes like this: someone shares a personal experience of AI bias, sometimes with utterly racist or misogynistic results. Often the post includes an offer of the author's expert help in navigating this new world. And to illustrate the story, they include a fake AI-generated image or two as evidence… but that very evidence reveals the story is false.
What’s the giveaway? Implausible prompt results (e.g. demonstrating old AI biases that have since been fixed). AI-generation quirks in supposedly original images. Described prompt results that can’t be reproduced.
This isn’t just an ethical breach. It’s a strategic blunder.
It allows critics and AI leaders to focus on the lies and faked evidence rather than discussing real AI shortcomings. It sabotages the honest conversation about AI dangers that we need to have.
Let’s not invent stories. The real challenges of AI are big enough.
Unfortunately, this trend will only accelerate. One of the greatest challenges of the AI age is retaining our critical thinking.
How well are you adapting to a world where you must question everything you see?