AI Didn't Just Generate Images. It Contaminated a Vocabulary.


We know AI image generators make mistakes. Extra fingers. Mismatched limbs. Garbled text. Melting backgrounds. Faces that are almost right but not quite. These artifacts are well-documented, widely mocked, and have become the shorthand for spotting AI-generated work.

There's just one problem. Every single one of those "mistakes" has also been an intentional creative choice by human artists — for decades, sometimes centuries, before AI existed.

And now those choices come with a credibility problem they never used to have.


The Alien With Six Fingers

Imagine a concept artist designing an alien species. They make a deliberate decision: five digits on the left hand, six on the right. Asymmetrical, subtly wrong by human standards, immediately communicating that this creature did not evolve the way we did. It's a small detail with real payoff — the kind of thing that rewards attentive viewers and makes a fictional world feel genuinely alien rather than just human-with-a-forehead-ridge.

Pre-AI, this lands as intriguing design. A viewer notices it, maybe does a double take, and then appreciates the thought behind it.

Post-AI, the first read for a significant portion of viewers is: the generator miscounted.

Nothing about the art changed. The intent didn't change. The craft behind the decision didn't change. What changed is that AI burned that particular creative choice into its list of known failure modes — and the audience absorbed that association whether they meant to or not.


This Goes Way Beyond Fingers

The mismatched digits example is clean because it's concrete, but the same dynamic plays out across the entire landscape of intentional creative weirdness:

  • Anatomical distortion — Picasso's fractured faces, Francis Bacon's screaming figures, body horror design. All now share visual space with AI's inability to render consistent anatomy.
  • Dreamlike or incoherent backgrounds — a staple of surrealism, psychedelic art, and impressionism pushed to its edges. Now reads as a diffusion model losing track of context.
  • Garbled or stylized text within an image — outsider art, graffiti abstraction, intentional illegibility as aesthetic. AI has famously struggled to render readable text, so any blurred or abstracted text now triggers suspicion.
  • Uncanny symmetry breaks — used deliberately in horror, in character design, in portraiture to create unease. Overlaps almost perfectly with AI's facial inconsistency artifacts.

AI was trained on the full breadth of human creative output — including all of its intentional strangeness. It absorbed the vocabulary of human weirdness and then reproduced broken versions of it. The broken versions now define what "AI-generated" looks like. And anything in the neighborhood gets caught in the dragnet.

This isn't theoretical. IIT TechNews documented a real artist who deliberately used surrealist techniques — breaking lines, unnaturally bent body parts, distorted background details — to explore mental health experiences. Every one of those choices overlaps with AI's known failure modes. The false accusations were relentless enough that the artist ultimately shifted to more realistic work just to escape the suspicion. A style built with intent, abandoned not because of any creative failure, but because the audience's frame of reference had changed around it.


The Burden of Proof Has Quietly Shifted

Here is what's actually new and worth paying attention to: artists working in any of these modes now carry an explanatory burden that simply did not exist before.

A surrealist painter, a speculative fiction illustrator, a horror concept artist — none of them previously had to front-load their work with "and here's why this was intentional." The work stood on its own. Viewers might not have understood it immediately, but the assumption of human intent was the default.

That default is eroding. "It looks AI" has become a socially acceptable critique that requires no further evidence — and it's unfalsifiable when the aesthetic in question overlaps with AI's failure vocabulary. The artist cannot resolve it by doubling down on the style, because doing so only deepens the suspicion. They cannot escape it by explaining their intent, because the explanation sounds exactly like what someone defending AI-generated work would say.

Artist Ben Moran encountered this trap directly. After posting his work online, a moderator told him that even if he painted it himself, "it's so obviously an AI-prompted design that it doesn't matter" — and that he needed to find a different style. The process was irrelevant. The intent was irrelevant. The aesthetic alone was enough to convict.

The research backs this up. A 2024 University of Chicago study tested whether people — including expert artists — could reliably distinguish human art from AI-generated images. Expert artists produced a false positive rate of around 21%, wrongly flagging roughly one in five human artworks as AI. When shown their errors, many were frustrated, noting in retrospect that they had identified details like a slightly offset window latch and interpreted them as AI inconsistency — when they were simply imprecise human work. The "tells" they were pattern-matching on weren't AI tells at all. They were just human imperfection, reframed by expectation. This is a textbook framing effect, compounded by confirmation bias — once you're told to find something wrong, you find it.


It Gets Concrete in Games — and the Stakes Are Higher

In the art world, a false AI accusation costs you reputation. In games, it can cost you your business.

Steam now requires developers to disclose AI usage in their games — a policy designed to give players transparency. On its face, reasonable. But the policy assumes that the question of whether AI was used is a factual, knowable thing that the developer can simply answer. It doesn't account for a scenario where the community decides the answer for them.

If a game ships with concept art featuring intentionally unconventional anatomy — say, that alien with mismatched fingers — and enough players decide it looks AI-generated, the developer faces review bombing, forum accusations, and platform scrutiny regardless of the truth. They may be pressured to issue a disclosure for something that doesn't require one, or spend significant time and energy defending a creative decision that predates the AI era entirely.

The chilling effect runs in both directions. Some developers may preemptively disclose AI usage they don't actually have, just to get ahead of a narrative they can't control. Others may quietly abandon unconventional art direction altogether — not because of any platform rule, but because the audience's suspicion has made certain creative choices too commercially risky to attempt.

Underneath both runs the same impossible demand: how does a game studio prove that something was NOT AI?

Neither outcome was intended by any AI policy. Both are already plausible consequences of popular opinion filling the gap between "looks AI" and "is AI."


The Uncomfortable Implication

AI didn't just produce images. It redefined what certain images mean to the people looking at them. It took the full inherited vocabulary of human creative weirdness — the distortions, the asymmetries, the dreamlike incoherence that artists spent generations developing — and turned it into a checklist of suspicion.

Artists working in the affected territory now face a quiet pressure to self-censor, pre-justify, or simply retreat to safer aesthetics. Not because anyone told them to, but because the audience's frame of reference shifted under their feet without warning.

That is a real creative cost. And it landed on human artists first.
