Fake faces created by artificial intelligence (AI) are judged more trustworthy than images of real people, according to a study.
The findings underscore the need for safeguards against deepfakes, which have already been used for revenge porn, fraud and propaganda, the researchers behind the report say.
Deepfake fears
The study – led by Dr Sophie Nightingale from Lancaster University in the UK and Professor Hany Farid from the University of California, Berkeley, US – asked participants to classify a selection of 800 faces as real or fake, and to rate their trustworthiness.
Across three separate experiments, the researchers found that the synthetic faces created by the AI were rated, on average, 7.7% more trustworthy than the real faces – a difference they describe as "statistically significant". According to New Scientist magazine, the three faces rated most trustworthy were fake, while the four rated least trustworthy were real.
AI learns the faces we like
The fake faces were created using Generative Adversarial Networks (GANs), AI programs in which two neural networks – one generating images, the other judging them – learn to create realistic faces through a process of trial and error.
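That adversarial trial-and-error loop can be illustrated with a deliberately tiny sketch. The example below is not the networks used in the study (systems such as those behind these face images are deep convolutional models): it is a one-dimensional toy, assumed here for illustration, in which a linear "generator" learns to fool a logistic "discriminator" into accepting its samples as draws from a made-up "real" Gaussian distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from a 1-D Gaussian, N(4, 1) -- a stand-in for real photos.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: x = a*z + b, fed noise z ~ N(0, 1). Starts far from the target.
a, b = 0.1, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), its estimate of the probability x is real.
w, c = 0.0, 0.0

lr, n = 0.05, 64
for _ in range(2000):
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    x_real = real_batch(n)

    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # i.e. get better at telling real samples from fakes.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log d(fake) (the non-saturating GAN loss),
    # i.e. nudge a and b so the fakes look more "real" to the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    gx = (1 - d_fake) * w          # d/dx of log d(x)
    a += lr * np.mean(gx * z)      # chain rule: dx/da = z
    b += lr * np.mean(gx)          # chain rule: dx/db = 1

# After training, generated samples should cluster near the real mean (4).
print(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

Each round, the discriminator improves at spotting fakes and the generator improves at evading it; at scale, with deep networks and photo data, this same tug-of-war yields faces people cannot distinguish from real ones.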
The study, AI-synthesized faces are indistinguishable from real faces and more trustworthy, is published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).
The researchers urge that safeguards be put in place, which could include embedding "robust watermarks" in images to protect the public against deepfakes.
They also call for "ethical guidelines for researchers, publishers, and media distributors" governing the creation and distribution of computer-generated images.
Responsible use of AI is the “immediate challenge” facing the field of AI governance, according to the World Economic Forum.
In its report, The AI Governance Journey: Development and Opportunities, the Forum says AI has been vital in advancing areas such as innovation, environmental sustainability and the fight against COVID-19. But the technology also "challenges us with new and complex ethical issues" and "outpaces our ability to govern it".
The report examines a range of practices, tools and systems for creating and using AI.
These include labeling and certification systems; external audits of algorithms to reduce risk; regulation of AI applications; and greater collaboration between industry, government, academia and civil society to develop AI governance frameworks.
Republished with permission from the World Economic Forum. Read the original article.