BRAIN YAPPING
DR DEAN BURNETT

How Do We Know That AI Images Aren’t ‘Real’?


In this series, bestselling neuroscience author Dr Dean Burnett answers Shambles Patreon subscribers’ questions about their brains. A sort of neuroscientific agony aunt, if you will.

Adrian asks: Hi Dean, I understand the uncanny valley when it comes to things like video games or creepy nightmare-fuel Tom Hanks in The Polar Express, but with AI getting so good now, some of the images I’ve seen are incredibly photorealistic and yet there’s something ever so slightly off. If you asked me what, I couldn’t tell you; it’s like I just ‘know’ it’s not real. And that goes for inanimate objects too (there was a picture of a canoe on Reddit recently, for example, that people were debating whether it was AI or not), not just faces. So I guess my question is, how do we know things aren’t real, and will we always know?

Hi Adrian,

Thanks for a very topical question. I imagine this will be a good one for the search algorithms.

Also, thanks (in a sense) for flagging up The Polar Express. I think about it often, as it’s my go-to example of what happens when enthusiasm about technology gets tripped up by ignorance of neurology.

Publicity still from The Polar Express © 2004 Warner Bros. Ent.

In this case, the movie industry was clearly saying “CGI is improving on a daily basis, let’s make an animated film where the people look as realistic as possible!”, seemingly under the impression that everyone would view this as impressive and endearing, rather than a cavalcade of unsettling glassy-eyed golems that not even Tom Hanks’s likeability could overcome.

A more recent example would be Rogue One’s digitally resurrected Peter Cushing. Very much an example of where the technology meant they could, but nobody stopped to ask whether they should.

The most current instance of this is, as you say, the output of AI image generators.

Let’s be clear up front: if you set aside the whole “they’re stealing art from actual artists” issue (which you ideally wouldn’t, because it’s a very important one), then AI image generators are cool. With a bit of know-how, coding, and the software equivalent of “training” (this isn’t my area, I’m just repeating what I’ve been told), these AIs can generate alarmingly realistic images of unrealistic or impossible scenarios.

Like, I could ask one to show me “Dean Burnett wins Wimbledon”, or “Dean Burnett, world’s strongest man”, or “Dean Burnett literally jumps a shark”. These are things that have never and will never happen, but I’d end up with surprisingly realistic images of them nonetheless.

Dean Burnett has not, in fact, won Wimbledon, and in all likelihood never will

But here’s the thing: I say “surprisingly realistic”, not “100% realistic”. Because… they aren’t. No matter how realistic they are, there’s always something a bit… off. For instance, AI images of people often have a “glassy” quality. A sort of flawlessness that no organic being could ever possess.

Sometimes it’s subtler, more complex. It is, as you note, hard to describe effectively. To me, it’s like, you know in movies when something other-worldly, like a vampire or alien, is mimicking a human? And occasionally the metaphorical mask slips slightly, and you get a ripple or bulge in the facial façade that reveals fangs or green skin or undead tissue? AI humans often look like they’re on the verge of doing that.

Why, though? After all, messy organic humans, armed with only paints and a canvas, can create photorealistic images that don’t upset or unsettle us. Why can’t powerful AI engines do the same?

The uncanny valley will be a factor: the phenomenon whereby things with human-like qualities are perceived more favourably, but things which very closely, yet not perfectly, resemble humans trigger a negative emotional reaction, leading to unease or revulsion. Hence a sock puppet with eyes on it is charming, but a near-human android is alarming.

There’s a lot of brain-based research into this phenomenon, and a lot of theories to explain it. Some argue it’s an evolved response to encountering corpses in the wild, which would signal deadly disease, dangerous bacteria, or a nearby predator. A fresh corpse, with all the physical traits of a normal human but none of the myriad cues produced by a living one, would be perceived as “very human, but not quite”. And an instinctive aversion to such things would be a handy survival trait.

Dean’s gym sessions have not been going this well.

But the uncanny valley can’t be the whole explanation. AI-generated people don’t necessarily cause revulsion or unease, just an aura of “wrongness”. Also, as you say, it applies to non-human things too, like canoes. Why our brains would evolve an instinctive rejection of plastic boats that have existed for six decades is anyone’s guess.


I would argue that the nebulous wrongness of AI images is more fundamental than that.

AI image generators need to be trained, via advanced machine learning techniques. Neural networks (the artificial kind) explore vast datasets of images, so the AI can recognise the patterns and traits that embody particular types of image.

It sounds very impressive. And it is. In software terms.
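If you’re curious what that “training” actually looks like under the bonnet, here’s a rough sketch of the core idea behind diffusion models like the Stable Diffusion 1.5 used for this article’s images: deliberately corrupt an image with noise, then teach a network to predict that noise so it can later be removed. To be clear, this is a toy illustration in PyTorch, not anything from Stable Diffusion itself; the DenoiserNet and the random “images” are stand-ins invented for the example, and the real thing involves enormous networks, text conditioning, and carefully designed noise schedules.

```python
# Toy sketch of how a diffusion-style image generator is "trained":
# corrupt an image with noise, then train a network to predict that
# noise. Illustrative stand-in only; real systems condition on the
# noise level and a text prompt, which is omitted here for brevity.
import torch
import torch.nn as nn

class DenoiserNet(nn.Module):
    """Tiny stand-in for the huge U-Net a real system would use."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = DenoiserNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    images = torch.rand(8, 3, 64, 64)   # pretend batch of real photos
    noise = torch.randn_like(images)    # pure random static
    level = torch.rand(8, 1, 1, 1)      # how corrupted each image is
    noisy = (1 - level) * images + level * noise
    loss = nn.functional.mse_loss(model(noisy), noise)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()                    # network gets slightly better
```

A production model repeats something like this billions of times, over billions of scraped images; that brute repetition is the “recognising the patterns and traits” part.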

But consider this: your typical human brain with a functional visual system has been assessing and recognising the traits and qualities of real-world visual images every waking moment of every day for several decades! That’s essentially uncountable terabytes of visual imagery, being processed and assessed and analysed, every day, for dozens of years! And vision is the priority sense of the human brain. An organ known to be alarmingly adept at recognising patterns.
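To put an (extremely rough) number on that: one oft-cited estimate puts the output of a single human retina at around 10 million bits per second. Every figure in this little calculation is an illustrative assumption rather than a hard fact, but they make the point:

```python
# Back-of-envelope estimate of lifetime visual input. All figures are
# rough assumptions for illustration: ~10 Mbit/s per retina (an
# oft-cited estimate), 16 waking hours a day, 40 years of life.
BITS_PER_SECOND_PER_EYE = 10_000_000
WAKING_SECONDS_PER_DAY = 16 * 60 * 60
DAYS = 365 * 40

total_bits = BITS_PER_SECOND_PER_EYE * 2 * WAKING_SECONDS_PER_DAY * DAYS
print(f"~{total_bits / 8 / 1e12:,.0f} terabytes")  # roughly 2,100 TB
```

Even if those guesses are off by an order of magnitude, no training dataset of scraped images comes close to that volume of lived, continuous, context-rich visual experience.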

Basically, the typical human brain will have learned, in punishing detail, how countless objects should look, in countless real-world situations, down to the most minute detail. And remember, the bulk of things we perceive are things we’re not consciously aware of.

Given all that, how is an AI software programme, created by humans, and with access only to stored digital images, meant to compete? The fact that they get as close as they do is testament to how powerful these AI image generators are.

Nonetheless, there are issues. For one, AI-generated images of people often end up with a disconcerting number of extra fingers. Why? Surely “number of fingers on a hand” is one of the most consistent aspects of human anatomy?

But that’s not how it works. As Trent Burton, tech wizard and editor of this very site, explained to me, you have to think mathematically. Yes, humans invariably have five digits per hand, but given the number of fingers, the joints they have, and the positions they can adopt relative to each other, a human hand has a ludicrous number of possible permutations and orientations. Even the most powerful AIs struggle with the mathematical impact of that. Hence many images have a weird number of fingers where the AI says “meh, looks OK”, and moves on.
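To see Trent’s point in numbers, here’s a crude illustration. Both figures are simplifying assumptions picked for the example (the hand is often credited with around 21 degrees of freedom across the fingers and thumb, and I’ve pretended each joint can sit in one of just 10 positions), but the explosion is the point, not the exact total:

```python
# Crude illustration of why hands are combinatorially nasty. Both
# figures below are simplifying assumptions, not anatomy facts.
JOINT_ANGLES = 21     # rough degrees of freedom in fingers and thumb
POSITIONS_EACH = 10   # pretend each joint has only 10 possible positions

hand_poses = POSITIONS_EACH ** JOINT_ANGLES
print(f"{hand_poses:.1e} distinct hand configurations")  # 1.0e+21
```

A thousand billion billion configurations, from even that cartoonishly simplified model, and a training dataset can only ever contain a vanishing fraction of them.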

Basically, AIs cannot hope to rival the human brain when it comes to visual imagery. Especially when you consider that the rich and detailed visual perception we enjoy is the result of our brain radically polishing up a crude stream of neural impulses. Everything we see goes through multiple layers of processing in our brain. AIs, designed by humans who logically aren’t aware of most of what visual perception involves, simply haven’t reached the point where they can keep up.

This would explain why, going back to the earlier photorealism example, human artists working by hand can create incredibly realistic images that unsettle us less than a powerful AI’s output: an image created by a human artist has, by definition, been constantly assessed and refined by a human brain, so any weird quirks or anomalies that would prove distracting or unsettling are eliminated ahead of time.

Ultimately, AI image generators are trying to fool the human brain’s ability to perceive the real world. Them succeeding may just be a matter of time, or it may be more akin to breaking the lightspeed barrier.

I still feel AIs are firmly within the realms of “Can mimic crude human abilities, but not more”. I reckon Tim Spalding on Twitter summed it up best when he said “AI is best understood as ‘unlimited, free stupid people.’” As in, AIs can fulfil the most basic human roles, but for anything that requires nuance or complexity, they’ll fall short.

Some might disagree with this assessment, and that’s their right. But to them I’d say: have you ever seen a self-service checkout without a human supervisor?

Dean Burnett explores the friction between brain and technology in greater depth in his latest book, Emotional Ignorance.

AI images were generated with a Dreambooth training model in Stable Diffusion 1.5

Dr Dean Burnett is a neuroscientist and bestselling author of such books as The Idiot Brain and The Happy Brain. His former column Brain Flapping for The Guardian (now Brain Yapping here on the CSN) was the most popular blog on their platform, with millions of readers worldwide. He is a former tutor and lecturer for the Cardiff University Centre for Medical Education and is currently an honorary research associate at Cardiff Psychology School and Visiting Industry Fellow at Birmingham City University. He is @garwboy on Twitter.

If you would like to reuse this content please contact us for details

Subscribe to The Cosmic Shambles Network Mailing list here.

The Cosmic Shambles Network is powered by pledges on Patreon. Join our community by subscribing today.