NIXSolutions: Deepfake Faces Seem More Credible Than the Real Ones

The development of deepfake technology has made it possible to create hyper-realistic photos, videos and audio recordings. In a series of experiments, American scientists have shown that in most cases people are unable to distinguish a real image from a fake one, and that faces created by a neural network actually inspire more trust in us.


Untrained participants can distinguish a real image from a fake one in less than half of cases and, after training, in roughly three cases out of five. These are the results of a study by Sophie Nightingale and Hany Farid, scientists from the University of California, Berkeley. In addition, people in fake images appear about 8% more trustworthy than people in real ones, notes Hightech.

To conduct the study, the authors selected 400 real photographs and 400 images created by a neural network. Each set contained equal numbers of women and men and included people of different races and ages.

Participants were shown 128 randomly selected photographs and asked to identify whether each was real or fake. On average, subjects answered correctly in 48.2% of cases, slightly worse than the 50% expected from random guessing. Errors were more common when classifying photographs of white men than photographs of women or people of other races. According to the authors, this is because more images of white men are available for training the AI, which makes those fakes higher quality.

The second group of study participants received preliminary training in recognizing computer-generated faces. They were shown examples of image-processing artifacts that can give away a fake, and after each answer they were told whether they had classified the image correctly. In this case, accuracy in recognizing fake photos improved, but only to 59%.

The researchers also asked a separate group of 223 participants to rate, on a scale of 1 to 7, how trustworthy the people in the images appeared. The average trust rating for real faces was 4.48, versus 4.82 for fake ones. This difference of about 8% exceeds the margin of error and, according to the researchers, is statistically significant. Interestingly, three of the four most trusted faces were fakes, while all four of the least trusted were real, notes NIXSolutions.
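As a quick back-of-the-envelope check (not part of the study itself), the roughly 8% figure follows from the two reported averages if it is read as a relative difference, which is our assumption here:

```python
# Sanity check of the ~8% trust gap reported in the article.
# The 4.48 and 4.82 averages come from the text; treating the gap as a
# relative difference is an assumption about how the 8% was derived.
real_avg = 4.48   # mean trust rating for real faces (1-7 scale)
fake_avg = 4.82   # mean trust rating for synthesized faces (1-7 scale)

relative_gap = (fake_avg - real_avg) / real_avg
print(f"Relative difference: {relative_gap:.1%}")  # prints 7.6%, i.e. roughly 8%
```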

“We find that artificially generated faces are more trustworthy than real ones,” the authors of the study write. “This is most likely because synthesized faces are closer to average faces, which, as earlier research has shown, are perceived as more trustworthy.”

AI-based synthesis of audio, images and video has made Hollywood-grade special effects widely available. Synthesized speech, fabricated images and swapping one person's face for another's are no longer just entertainment; they also create opportunities for fraud and deception.

“Easy access to such high-quality fake images will lead to various problems, including, for example, more convincing fake social media profiles, fraud and disinformation campaigns, with serious consequences for individuals and society as a whole,” the scientists note.