
Deepfakes: Future Threats and Challenges

Written by Sytske Post


 

A few months ago, a manipulated video of Ukrainian President Zelensky circulated on social media. In the video, Zelensky tells his soldiers to lay down their arms and surrender the fight against Russia. The message also aired on Ukrainian television after hackers managed to place it on the website of a news organization. Viewers quickly pointed out that Zelensky's accent was off and that, on closer observation, his head and voice did not appear realistic, prompting the video to be debunked and taken down. This type of video manipulation is known as a ‘deepfake’. The editing in this case was not particularly sophisticated, yet the evolution of deepfakes is perceived as one of the most worrying technological trends for the future. As these fabrications become more sophisticated, they could intensify existing conflicts and debates, undermine trust in state-run institutions and legal processes, and be used to manipulate financial markets. It is therefore important to analyze the different scenarios in which this emerging technology could impact our society.


Source: BBC (2022). The deepfake appeared on the hacked website of Ukrainian TV network Ukrayina 24 [Image]. BBC Technology, https://www.bbc.com/news/technology-60780142


What are Deepfakes?


Deepfakes - a term that first emerged in 2017 - are a new type of audiovisual manipulation that allows users to create lifelike simulations of another person's face, voice, or actions, or even to create realistic-looking people who never existed. This technology is categorized as synthetic media: content created or modified with the use of artificial intelligence (AI). It applies the capabilities of deep learning, a branch of machine learning in which a computer analyzes large datasets and learns, through pattern recognition, what a convincing result should look like.


In addition, these technologies often use generative adversarial networks (GANs). A person starts by gathering photographs or source video of the person or object they wish to imitate. The fake is then created by a GAN employing two networks: a generator that produces believable re-creations of the source imagery, and a discriminator that looks for forgeries. The discriminator's detection feedback is passed back to the generator, allowing it to improve and produce an ever more convincing fake, such as the face of the person being impersonated. In both cases, the technologies require datasets to learn from: the larger the dataset, the easier it is to create a sophisticated (audio)visual forgery.
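To make this two-network loop concrete, below is a deliberately minimal GAN training sketch in PyTorch. Everything in it is an illustrative assumption: the tiny fully connected networks, the hyperparameters, and the random vectors standing in for real face images. A real deepfake pipeline would train large convolutional models on hours of footage.

```python
# A minimal sketch of the generator/discriminator loop described above.
# Illustrative only: real face data is replaced by random vectors.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generator: turns random noise into a candidate "image".
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how likely an input is to be real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.randn(BATCH, IMG_DIM)  # placeholder for real face data

for step in range(1000):
    # 1. Train the discriminator to separate real from generated samples.
    fake_batch = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator. Its only feedback is the discriminator's
    #    verdict, which it learns to push toward "real" - this is the
    #    detection information passed back to the forgery network.
    g_score = discriminator(generator(torch.randn(BATCH, NOISE_DIM)))
    g_loss = loss_fn(g_score, torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key design point is the feedback loop: the generator never sees the real data directly, only the discriminator's verdict on its output, and each network's progress forces the other to improve.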


In today's society, such datasets are freely available on the internet. This makes deepfakes easier and less time-consuming to create, increasing the likelihood that they will appear more frequently in the future.


Media manipulation is not a new phenomenon and has been commonly known for a long time. However, the growing use of AI to create deepfakes is cause for concern, as the imagery is increasingly realistic, rapidly produced, and cheap to make. This will make forgeries increasingly difficult to detect in the future.


In 2018, director Jordan Peele and BuzzFeed CEO Jonah Peretti collaborated on a deepfake video to warn the public about disinformation, specifically its effect on people's impressions of political leaders. Peele and Peretti overlaid Peele's voice and mouth onto pre-existing footage of Barack Obama using free tools and the help of editing experts. In the video, the fabricated Obama says: “we are entering an era in which our enemies can make it look like anyone is saying anything, at any point in time. Even if they would never say those things.”


This example, like the Zelensky deepfake, illustrates how this technology can be used for nefarious purposes. It is therefore important to identify the potential threats this new technology can generate and the types of countermeasures that can be implemented.


Threat Assessment of Deepfakes


Deepfakes are powerful tools for disinformation, exploitation, and sabotage, and this emerging technology can therefore create a range of harmful situations.


Sabotage and exploitation

Deepfakes can be used to generate misleading information that fools the public, which can be extremely damaging to a company's or individual's reputation. Consider, for example, a deepfake video showing a firm's CEO, an elected official, or an ordinary citizen saying or doing something inappropriate. Such a deepfake can go viral on social media within minutes. Companies or public figures would have to spend significant resources discovering, removing, and refuting the fraudulent content, as well as covering legal fees and crisis-management costs. Moreover, even if victims can later prove they were the target of a deepfake, the reputational damage has already been done. Such damage can result in revenue loss (e.g. an impact on the stock price), reduced credibility and trust, and skewed election results, but it can also simply ruin friendships or family relations.


Deepfakes can also be used for exploitative purposes, with attackers using the content to extract financial resources or confidential information. The technology can likewise be used for sexual exploitation or to facilitate criminal activity such as online child exploitation. Currently, the vast majority of deepfake applications are sexual in nature, with women being the primary victims. Deepfake sex videos can depict someone being forced into violent, humiliating sexual acts. These videos can be used for financial and sexual exploitation, but they can also serve as weapons designed to frighten and inflict suffering. When victims learn they have been exploited in a deepfake sex video, the psychological consequences can be severe. This demonstrates that not all of these forgeries are created primarily, if at all, for the creator's sexual or financial gain.


(Identity) Fraud

Deepfake technology also provides sophisticated new tools for fraud. This potential became reality in 2019, when deepfake audio was used to imitate the voice of a CEO and facilitate a fraudulent transfer of funds. Because the technology can replicate biometric data, it can also be used to deceive systems that rely on face, voice, vein, or gait identification. The same characteristic can facilitate espionage. The most recent instance occurred on June 25, 2022, when several European mayors were tricked into holding video calls with a deepfake of Kyiv mayor Vitali Klitschko. It remains unclear who was behind the action or what their aims were.


Decision-making processes


Deepfakes have the potential to erode trust in the information and analysis offered by digital security platforms, which can complicate decision-making processes. This is especially dangerous at decisional chokepoints: limited windows of opportunity during which irrevocable decisions must be made, and where the dissemination of incorrect information can have irreversible consequences.

Imagine this scenario: a deepfake video is created depicting a politician using racial slurs. The video is released on social media just ahead of an election, leaving enough time for it to circulate but not enough for the victim to debunk the false information. Voters might take the video into consideration when choosing whom to vote for. Even if the politician eventually manages to debunk the video, the impact could be irreversible, as the polls will already have closed.


In the case of law enforcement, sophisticated (audio)visuals can be used to deceive officers, pushing them toward the wrong course of action (e.g. an unnecessary or inappropriate intervention, or the pursuit of the wrong suspect). For businesses, deepfakes can be used to push companies into, or deter them from, certain investment opportunities or partnerships.


Undermining safety, trust and democratic processes


Deepfakes increase the chance that someone can induce public panic. False emergency alerts in the form of deepfakes could go viral thanks to social media's distribution capabilities, and the hyper-realistic fabricated evidence they contain increases their persuasiveness. Deepfakes can reinforce and exacerbate underlying social divisions, and they can also trick communities into taking certain actions (for example, a fake video of a community leader calling for civil unrest and riots could lead to violence). The technology can be used to create inflammatory content - such as convincing video footage of military forces committing war crimes - with the goal of radicalizing communities, recruiting members, or inciting violence. It can equally be used to falsely depict police officers committing crimes in order to discredit them or even provoke violence against them.


Legal processes


Audio and visual recordings are vital intelligence for police work and are often used as evidence in court, since such recordings are viewed as a truthful account of an event. Although image and audio alteration have been known methods for a long time, those manipulations are typically detectable. Deepfakes make it possible to create extremely sophisticated audio and visual material that is difficult to distinguish from genuine recordings. If all evidence has to be verified for authenticity, the cost and duration of lawsuits could increase dramatically. It could also limit the use of video as proof of human rights violations and crimes, thereby obstructing accountability and justice.


Future outlook and possible countermeasures


The combination of deepfakes and disinformation undermines trust in official facts and authorities. Experts believe this may lead to a situation in which citizens no longer share a common reality, or are confused about whether certain sources are reliable - a state sometimes called the “information apocalypse” or “reality indifference.” In the absence of a shared reality, efforts to solve national and global problems become entangled in meaningless first-order questions, people retreat into their own subjective realities, and simple empirical claims can spark heated contestation. Threat actors will almost certainly use deepfake technology to support various criminal acts and to run disinformation campaigns that manipulate or skew public opinion in the months and years ahead. It is therefore important to assess these risks and find ways to counter the misuse of such technologies.


Certain procedures for assessing the value of information are already in place, since false information has always existed. With the rise of deepfakes, however, these procedures will have to be updated, for example by investing in new technical capabilities and upskilling workforces.


Currently, the vast majority of deepfake content can still be detected manually by looking for inconsistencies. A few examples of these imperfections are:

  • Blurring around the edges of the face

  • Lack of blinking

  • Inconsistencies in the hair, vein patterns, scars, etc.

  • Light reflection in the eyes

However, advances in machine learning and artificial intelligence will continue to improve the software used to make deepfakes, and these imperfections will disappear over time. Until then, some of these cues can even be checked programmatically, as sketched below.
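As an illustration of how one cue from the list above ("lack of blinking") can be operationalized, the following sketch computes the eye aspect ratio (EAR), a standard measure from facial-landmark analysis. The landmark ordering and the 0.2 threshold are illustrative assumptions; a real pipeline would first extract the six eye landmarks per video frame with a face-landmark detector such as dlib or MediaPipe.

```python
# A sketch of the "lack of blinking" cue via the eye aspect ratio (EAR).
# Assumes six landmarks p1..p6 per eye have already been extracted.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered p1..p6 around one eye.

    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|), which drops
    sharply when the eye closes.
    """
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_per_frame: list, threshold: float = 0.2) -> int:
    """Count downward crossings of the EAR threshold across frames."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks
```

A suspiciously low blink count over several minutes of footage would then be one weak signal, to be weighed alongside the other cues listed above rather than used on its own.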


Technological solutions have also been hailed as a way to counter this future threat. Various actors, such as the United States' Defense Advanced Research Projects Agency (DARPA), have already made efforts to create technological detectors. Microsoft has released the Microsoft Video Authenticator, which analyzes a still photo or video to estimate whether it has been artificially manipulated. However, the same technical solutions that identify deepfakes can also lead to increased surveillance and exclusion. It is therefore critical to implement regulatory and human rights frameworks that keep pace with technological advancements.


Additionally, deepfake detection capabilities in turn drive the quality of deepfakes upward: the technology's learning capability can be used to create a deepfake that eventually learns how to deceive the detector. We therefore need a better understanding of existing OSINT practices, combined with the new media-forensic tools being developed. It is also important to think of preventative measures. For society at large, it will be important to understand deepfakes within a broader media-literacy frame, such as the SIFT framework, and to avoid face or voice biometrics for authorization purposes. The risks also differ across sectors, countries, and individuals, so strategic foresight and scenario methods can offer ways of understanding and preparing for the potential impact of these technologies in specific cases.
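To make that detector-versus-generator arms race concrete, here is a minimal sketch, under the same illustrative assumptions as the earlier GAN example, of how an attacker could fine-tune a generator against a frozen, hypothetical detector until its outputs are scored as "real". The detector here is a stand-in, not any deployed product.

```python
# Illustrative detector-evasion sketch: fine-tune a generator against a
# frozen, hypothetical detector. Models, sizes, and data are assumptions
# carried over from the earlier GAN sketch.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# Stand-in for a deployed deepfake detector (outputs P(fake)).
detector = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)
for p in detector.parameters():
    p.requires_grad = False  # the attacker can only query the detector

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(500):
    p_fake = detector(generator(torch.randn(BATCH, NOISE_DIM)))
    # Push the detector's "fake" score toward zero: the generator learns
    # exactly the artifacts the detector keys on, and removes them.
    loss = loss_fn(p_fake, torch.zeros(BATCH, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This is why detection alone is unlikely to be a stable long-term defense: every published detector effectively becomes a training signal for the next generation of forgeries.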

Source: Mike Caulfield (2019), 'SIFT (The Four Moves)', Hapgood.

About the author: Sytske Post

Sytske is a graduate of International Studies and is currently enrolled in the Master's program in Conflict Studies and Human Rights at Utrecht University. This background has given her an interdisciplinary understanding of violent conflict and security. She is particularly interested in the intersection of technology and conflict, ranging from digital disinformation to the shifting nature of warfare powered by artificial intelligence.



