Researchers report that people cannot reliably distinguish a face synthesized by StyleGAN2 from a real one, and they are calling for safeguards to prevent such “deep fakes” from being passed off as genuine.
AI-synthesized text and audio have already been used for fraud, propaganda, and other deceptive purposes.
Dr Sophie Nightingale of Lancaster University and Professor Hany Farid of the University of California, Berkeley, conducted experiments in which participants were asked to distinguish real faces from StyleGAN2-synthesized ones and to judge how much trust each face evoked.
The results showed that synthetic faces were not only photorealistic and nearly indistinguishable from real faces, but were also rated as more trustworthy.
“Our assessment of the photorealism of AI-synthesized faces shows that synthesis engines have passed through the uncanny valley. They are capable of creating faces that are indistinguishable from real faces and are judged more trustworthy than real ones.”
The researchers warn about the consequences of this inability to recognize AI-generated images.
“Perhaps the most dangerous consequence is that any recording, inconvenient or unwanted, can be questioned in a digital world where any image or video could be faked.”
- In the first experiment, 315 participants each classified 128 faces, drawn from a set of 800, as real or synthetic. Their accuracy was 48%, close to chance performance (50%).
- In a second experiment, 219 participants received training and trial-by-trial feedback, then classified 128 faces drawn from the same set of 800. Despite the training, their accuracy improved only to 59%.
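As a rough sanity check (not part of the study's own analysis), the 48% accuracy from the first experiment can be compared with the 50% chance level using a one-sample z-test. The sketch below pools all trials into a single proportion, which ignores clustering of responses within participants, so it is illustrative only:

```python
import math

def z_test_vs_chance(p_hat: float, n: int, p0: float = 0.5) -> float:
    """One-sample z-test of an observed proportion against chance.

    Illustrative only: treats all trials as independent, which
    ignores the fact that responses cluster within participants.
    """
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    return (p_hat - p0) / se

# Experiment 1: 315 participants x 128 faces each, 48% mean accuracy
z = z_test_vs_chance(0.48, 315 * 128)

# Two-sided p-value via the standard normal CDF, Phi(x) = 0.5*(1+erf(x/sqrt(2)))
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(z, p_value)
```

A proper analysis (as in the paper) would model per-participant accuracy rather than pooling trials; this sketch only shows the mechanics of testing a proportion against chance.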
Researchers set out to determine if trustworthiness perceptions could be used to help identify artificial images.
“Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness. We wondered if synthetic faces activate the same judgements of trustworthiness. If not, then a perception of trustworthiness could help distinguish real from synthetic faces.”
In a third experiment, 223 participants each rated the trustworthiness of 128 faces, drawn from the same set of 800, on a scale from 1 (very untrustworthy) to 7 (very trustworthy).
On average, synthetic faces were rated 7.7% more trustworthy than real faces.
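The 7.7% figure describes a relative difference between mean ratings on the 1–7 scale. With made-up means (illustrative values, not the study's actual numbers) the arithmetic looks like this:

```python
# Hypothetical mean trustworthiness ratings on the 1-7 scale.
# These values are illustrative only; the study does not report
# its means in this article.
mean_real = 4.50
mean_synthetic = 4.85

# Relative difference of synthetic over real, as a percentage
relative_diff = (mean_synthetic - mean_real) / mean_real * 100
print(f"{relative_diff:.1f}%")
```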
“Perhaps most interestingly, we find that AI generated faces are more trustworthy than real faces.”
- Black faces were rated as more trustworthy than South Asian faces, but otherwise there was no difference across races.
- Women’s faces were rated as significantly more trustworthy than men’s.
A smiling face is more likely to be judged trustworthy. However, 65.5% of the real faces and 58.8% of the synthetic faces in the study were smiling, so facial expression alone cannot explain why synthetic faces were rated as more trustworthy.
The researchers suggest that AI-synthesized faces may be judged more trustworthy because they resemble average faces, which in turn tend to be perceived as more trustworthy.
They also proposed guidelines for the creation and distribution of synthesized images in order to protect the public against “deep fakes”.
“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.
“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”