Can you tell if an image is real?
AI models such as DALL·E can produce detailed, realistic faces by learning from millions of images of real people paired with text descriptions. But are these hyper-realistic images truly indistinguishable from real photographs? A research team led by the United Kingdom’s Swansea University set out to determine just how realistic AI-generated faces have become. The findings were published in the journal ‘Cognitive Research: Principles and Implications’.
Seeing isn’t believing
Using the AI models DALL·E and ChatGPT in a series of experiments, the researchers produced extremely realistic images of both imaginary and famous faces, including celebrities. Study participants could not reliably tell these images apart from real photos, even when they were familiar with the person’s appearance. The findings draw attention to a troubling rise in so-called deepfake realism – the measure of how close fake content comes to real footage, or in other words, how well it fools human perception.

“Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people,” commented co-author and Swansea University psychology professor Jeremy Tree in a news release. “The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also underscores the need for reliable detection methods.”

One experiment asked volunteers from Australia, Canada, New Zealand, the United Kingdom and the United States to identify which facial images were real and which were artificially generated. Participants frequently mistook the novel AI-generated faces for real ones. In another experiment, subjects were asked to distinguish genuine photos of Hollywood stars such as Jake Gyllenhaal and Olivia Wilde from their computer-generated counterparts. Here too, participants found it difficult to pick out the genuine images. “This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos,” stated Tree.
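To make concrete what “everyday AI tools” means here, the sketch below shows how a single call to OpenAI’s image API can produce a photorealistic face of a fictional person. The model name, prompt and output handling are illustrative assumptions, not the research team’s actual generation pipeline.

```python
# Minimal sketch: generating a photorealistic face of a fictional person
# via OpenAI's image API. Prompt and parameters are illustrative only;
# this is not the study's actual pipeline.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

result = client.images.generate(
    model="dall-e-3",  # assumed model choice for this illustration
    prompt="A photorealistic head-and-shoulders portrait of a fictional "
           "middle-aged person, natural lighting, neutral background",
    size="1024x1024",
    n=1,
)

# Download the generated image to disk, e.g. for use as a stimulus.
image_url = result.data[0].url
with open("synthetic_face.png", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)
```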
A wake-up call
“Familiarity with a face or having reference images didn’t help much in spotting the fakes, which is why we urgently need to find new ways to detect them. While automated systems may eventually outperform humans at this task, for now it’s up to viewers to judge what’s real.”

Convincingly faking even faces we already know gives a whole new meaning to the word manipulation, with implications for use and abuse that range from politics to business. “Since both familiarity with, and reference images of, a particular identity produced only limited benefits, researchers will need to explore alternative solutions as a matter of urgency,” concluded the study authors. “In time, we might find that automated systems will match or surpass human performance in detecting these deepfakes. However, at least for the foreseeable future, the veracity of content will be left for viewers to determine for themselves and, as such, we should make this search for solutions a priority.”
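As a rough illustration of what such an automated detector might look like, the sketch below fine-tunes a pretrained ResNet-18 as a binary real-versus-synthetic classifier. The dataset layout (a `faces/real` and `faces/fake` folder tree) and all hyperparameters are assumptions made for this example; the study itself tested human observers, not this or any specific model.

```python
# Minimal sketch of an automated real-vs-fake face detector: fine-tuning a
# pretrained ResNet-18 as a binary classifier. Dataset layout and settings
# are illustrative assumptions, not a method from the study.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects an ImageFolder tree: faces/real/*.png and faces/fake/*.png
dataset = datasets.ImageFolder("faces", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs as a toy baseline
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```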