Seeing used to be the most reliable proof of events. However, nowadays, seeing is no longer believing. What we see may not be real. It could be a deepfake.
What is seen used to be accepted as genuine and true. As the old saying goes, “the camera never lies.” Whatever the camera captured was taken to be real. This no longer holds, because the internet is brimming with realistic-looking media content generated by Artificial Intelligence (AI), content capable of deceiving our eyes and ears.
Deepfake videos first became popular on pornographic websites, where celebrity women’s faces were superimposed into porn scenes. As the trend spread, major websites adopted policies prohibiting such derogatory content. Nowadays, however, impersonating renowned personalities such as political leaders, actors, controversial figures, billionaires, and social media influencers is becoming ever more common.
What Is Deepfake?
Deepfake videos are altered videos produced by deep learning AI algorithms, usually of famous people. They are typically made by superimposing a face onto another body, making it appear that the person is doing something they never actually did. They can also be made by taking a recorded speech, synthesizing facial movements to match it, and merging the result with a face, so the person appears to be speaking words they never said.
The term ‘deepfake’ was coined by combining ‘deep learning’ and ‘fake’. Deepfake software applies deep learning AI not only to alter audiovisual content but also to create convincing yet entirely fictional photos and videos from scratch.
Deepfake software generates deepfakes by learning the patterns of speech and movement in audiovisual content and introducing a new element, such as another person’s face or voice. The result is a digital impersonation of a real person, usually targeting political figures and celebrities.
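The face-swapping idea underlying many deepfake tools can be sketched in miniature. In one common architecture, a single shared encoder learns features (expression, pose) common to both faces, while a separate decoder per identity renders that person’s face; swapping means encoding person A’s frame but decoding it with person B’s decoder. The sketch below is purely illustrative, not a working deepfake: the “networks” are stand-in functions, and all names and data are hypothetical.

```python
# Hypothetical sketch of the shared-encoder / per-identity-decoder idea.
# Real systems use deep convolutional networks trained on thousands of
# frames; here simple functions stand in so the routing logic is clear.

def shared_encoder(face_frame):
    # In a real system: a neural network mapping a face image to a
    # latent vector capturing identity-independent features.
    return {"expression": face_frame["expression"], "pose": face_frame["pose"]}

def make_decoder(identity):
    # In a real system: a network trained to render one person's face
    # from any latent vector produced by the shared encoder.
    def decoder(latent):
        return {"identity": identity, **latent}
    return decoder

decoder_b = make_decoder("person_B")

def deepfake_swap(frame_of_a):
    # Encode person A's frame, but decode with person B's decoder:
    # the output shows B's face performing A's expression and pose.
    latent = shared_encoder(frame_of_a)
    return decoder_b(latent)

frame = {"identity": "person_A", "expression": "smiling", "pose": "frontal"}
swapped = deepfake_swap(frame)
# swapped -> {"identity": "person_B", "expression": "smiling", "pose": "frontal"}
```

Because the encoder is shared between both identities during training, the latent features it learns carry expression and pose but not identity, which is what makes the decoder swap possible.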
With the rise of algorithmically created content, deepfake-generating applications are becoming easily available on the internet. Such apps can now produce a deepfake in less than a day, and the quality of the output keeps improving while the time required keeps shrinking. Despite some assertions that deepfakes can be spotted, detecting them reliably has so far proven extremely difficult.
The Evolution of Deepfake
Deepfake feels like a recent invention, but it has come a long way, passing through several evolutionary phases. Several technologies laid the foundation for its birth. Here are the major ones:
- Facial Detection: the pioneering work on facial recognition dates back to 1964, when computers were first taught to recognize human faces. However, the earliest instances of automatic facial recognition only emerged in 1991.
- Computer Vision: emerged in the mid-1960s enabling computers and systems to derive meaningful information from digital images, videos, and other visual inputs.
- Photoshop: was first developed in 1987 by Thomas and John Knoll and officially released by Adobe as Photoshop in 1990. For the past two decades, it has been a popular tool for altering and customizing photos and videos.
- Face Swap: was initially released in 2015 and became popular on several Android and iOS applications, but most popularly on Snapchat.
- Deepfake: first emerged on the internet in 2017 as a generative AI technology and has since been used to impersonate famous people and to create fake audiovisual content.
Serious Concerns About Deepfakes
One concern about deepfakes is the possibility of fraud. The technology can be used by scammers to generate bogus identities on social media, complete with claims of remarkable credentials and professional competence. Researchers at the Stanford Internet Observatory recently reported discovering more than 1,000 LinkedIn accounts with AI-generated deepfake profile photos that didn’t belong to any genuine person, alive or dead. The researchers also noted that none of the photographs appeared to be composites of real people.
There are also ethical concerns that deepfake-generated content could aggravate identity theft, which is expected to increase as deepfake-generating applications become more widely available. Consequently, the faces of popular people are appearing on social media in video content they never made.
Another serious concern, already very real, is the deliberate creation of deepfake videos to ruin the reputations of famous people. As Giorgio Patrini, CEO at Sensity, told Cybernews in 2020: “reputation attacks by defamatory, derogatory, and pornographic fake videos still constitute the majority by 93%. … The West, and in particular the United States, is still the main target when considering attacks on public figures.” Cybernews also reported that only 7 percent of expert-crafted deepfake videos were made for entertainment purposes in 2020.
According to a report titled The State of Deepfakes 2019, published by Deeptrace, over 85,000 deepfake videos had been detected by December 2020, 96 percent of which were reputation attacks on prominent personalities in the form of fake pornographic content. The most common countries of origin of the victims were the US, the UK, South Korea, Canada, and India.
Furthermore, deepfakes can also be used to ‘get away with murder’. Influential people, especially politicians, when confronted with evidence of their own faults, can escape accountability simply by denying the evidence and blaming it on deepfakes.
What Are the Proposed Solutions?
For quite some time, several companies have suggested ways of detecting deepfake photos and videos by assessing the content for telltale flaws and comparing it with reality. However, whenever such flaws became public, deepfake programmers fixed the defects in their programs. This has made deepfakes ever harder to detect while making the programmers’ lives easier.
At this point, several tech companies have promised to release AI systems that detect deepfake content, asserting that the only way to combat an AI is with another AI. Some experts also propose storing authenticated videos on blockchains, where they cannot be altered; deepfake content could then easily be cross-referenced against the authentic originals.
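The cross-referencing idea can be illustrated with a minimal sketch. Assuming the publisher records a cryptographic fingerprint of the authentic video on an immutable ledger at publication time (here an ordinary dictionary stands in for the blockchain, and the file names and bytes are hypothetical), any circulating copy can be checked against the recorded fingerprint:

```python
import hashlib

def fingerprint(video_bytes):
    # SHA-256 digest of the raw file; altering even one byte of the
    # video produces a completely different digest.
    return hashlib.sha256(video_bytes).hexdigest()

# Stand-in for a blockchain: in practice the digest would be written
# into an immutable ledger entry that no one can later modify.
ledger = {"speech_2024.mp4": fingerprint(b"original video bytes")}

def is_authentic(name, candidate_bytes):
    # Cross-reference a circulating copy against the recorded fingerprint.
    return ledger.get(name) == fingerprint(candidate_bytes)

print(is_authentic("speech_2024.mp4", b"original video bytes"))   # True
print(is_authentic("speech_2024.mp4", b"deepfaked video bytes"))  # False
```

Note the limitation: this scheme proves whether a given file matches a registered original, but it cannot flag a deepfake that was never registered in the first place.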
The most straightforward solution is to pass legislation prohibiting the creation and spread of deepfake content. This may not sound feasible given the global nature of the internet and the anonymity of deepfake creators, but it is possible to enlist the collaboration of the social media giants for the greater good of the public.
Photo: MDV Edwards/Shutterstock