Can you tell who is real and who is fake just from a picture?

Human image synthesis is testing our skills

Today, people can enhance their own photos in real time, right in the palm of their hands. Applications such as Facetune, Instagram, and even Snapchat have fueled the phenomenon of “super-selfies”: selfies that are self-edited, usually to make one appear more attractive. These images, however, are usually easy to identify as altered. Tech companies are now creating photographs of “fake” people that are deceptively difficult to distinguish from real ones. The artificial intelligence (AI) software used to create such faces is freely available and improving rapidly, producing likenesses convincing enough to fool the human eye. To help separate the real from the fake, it is useful to understand the technology that creates these images, the potential uses for them, and how to examine images for signs of fabrication.

The creation of these fake images has only become possible in recent years thanks to a type of artificial intelligence called a Generative Adversarial Network (GAN). The software is trained on massive databases of photographs of real people and learns to replicate their features in new, synthetic faces. The program then has a “back-and-forth” with itself: one part generates photos of people who do not exist, while the other part tries to detect which of those photos are fake. This check-and-balance approach makes the end product more and more indistinguishable from a real photograph.
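For readers curious about what that “back-and-forth” looks like in practice, here is a minimal sketch of a GAN training loop written in Python with PyTorch. The tiny network sizes and the random placeholder “training data” are illustrative assumptions on my part; real face generators are far larger and train on millions of actual photographs.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# All sizes and data here are toy placeholders, not a real face-synthesis model.
import torch
import torch.nn as nn

IMG_SIZE = 28 * 28   # stand-in for a tiny grayscale "face" image
NOISE_DIM = 64       # random input the generator turns into an image

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE), nn.Tanh(),
)

# Discriminator: tries to tell real images from generated ones.
discriminator = nn.Sequential(
    nn.Linear(IMG_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Placeholder "real" data: random tensors standing in for photos of real faces.
real_batch = torch.rand(32, IMG_SIZE) * 2 - 1

for step in range(1000):
    # Train the discriminator: label real images 1, generated images 0.
    noise = torch.randn(32, NOISE_DIM)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator call its fakes "real".
    noise = torch.randn(32, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two parts compete, the generator's images get harder and harder for the discriminator (and eventually for us) to flag as fake.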

Since the inception of this technology, AI start-ups have emerged that sell computer-generated faces that look like the real thing. On the website Generated.Photos, you can buy a photo of a fake person for about $3. The site lets anyone filter the fake photos by age, ethnicity, gender, eye color, hair length, and even emotional expression. Another website, ThisPersonDoesNotExist.com, offers the same technology for free, without the ability to make such detailed adjustments. Companies are already using these generated photos to project diversity in their advertising without needing actual human beings, or to give customer-service chatbots a more human face. One troubling application of this technology is the use of AI-generated photographs to create fake social media profiles.

How good are we at telling the real from the fake? Studies have shown that people identify the real image roughly 60% of the time on the first try, and up to 75% with practice, suggesting that the fakes are not flawless. Take the following images, for example, where only one photograph is real:

Of the four images, only image 3 is a real person. While a brief glance makes the photos difficult to tell apart, close inspection reveals several irregularities. For example, in image 1, the temple of the glasses is visible only on the right side and appears non-existent on the left. Similarly, in image 2, the earrings do not match. These subtleties highlight that fashion accessories still cause problems for the software during image construction. Another interesting finding is that computer-simulated faces (even without accessories) tend to have near-perfect facial symmetry, which is quite rare in the general population.
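That symmetry observation can be turned into a very rough heuristic. The sketch below, which I am offering only as an illustration and not a forensic tool, mirrors a photo left-to-right and measures how closely the halves match; the file name is hypothetical, and the check assumes the face is roughly centered and facing the camera, whereas real forensic tools align facial landmarks first.

```python
# Rough symmetry heuristic: mirror the image and compare it to the original.
# Real faces are usually noticeably asymmetric; a suspiciously low score can be a flag.
import numpy as np
from PIL import Image

def asymmetry_score(path: str) -> float:
    """Mean absolute pixel difference between the image and its mirror (0 = perfectly symmetric)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    mirrored = img[:, ::-1]          # flip left-to-right
    return float(np.mean(np.abs(img - mirrored)))

# Example usage with a hypothetical file name:
# print(asymmetry_score("portrait.jpg"))
```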

The new wave of computer-generated faces is remarkable in its ability to create realistic-looking images. While there are some seemingly harmless applications of this technology, the potential for misuse and deception should not be overlooked. By understanding the software behind these images, their potential uses, and the telltale signs of fabrication, we can hone our skills to better distinguish the real from the fake.

A collage of AI-generated faces offered for sale.
