
Assessing the Real Threat Posed by Deepfake Technology


By Alexander Jones, International Banker

As is the annual tradition in the United Kingdom, Christmas Day saw Queen Elizabeth II deliver her 3 PM televised speech to households all around the country. And as is also a tradition, Channel 4 offered its viewers an alternative speech broadcast at the same time as the Queen’s, usually delivered by another renowned personality; previous notable figures include former President of Iran Mahmoud Ahmadinejad, Edward Snowden, Ali G and Marge Simpson. On the most recent occasion, however, Channel 4 aired a speech that seemed to be delivered by Her Majesty but that bizarrely showed her performing a dance routine made popular on the social-media platform TikTok.

It soon became apparent that the British public was watching a digitally manipulated version of the Queen, one that used deepfake technology to alter her appearance and behaviour, with her voice imitated by English actress and comedian Debra Stephenson. According to the channel, the broadcast was intended as a stark warning to viewers of the threat posed by fake news, with its director of programmes, Ian Katz, describing the video as “a powerful reminder that we can no longer trust our own eyes”.

At its core, a deepfake (a portmanteau of “deep learning” and “fake”) is media falsified with artificial intelligence (AI). It typically involves using deep learning, a category of AI concerned with algorithms that improve as they process more data, to falsify videos. Neural networks scan large datasets to learn how to replicate a person’s mannerisms, behaviour, voice and facial expressions, while facial-mapping technology driven by deep-learning algorithms is used to swap one person’s face onto another’s. As such, deepfake technology presents a clear danger of producing content that “can be used to make people believe something is real when it is not”, according to Peter Singer, cybersecurity and defence-focused strategist and senior fellow at the New America think tank.
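To make the mechanics concrete, the sketch below illustrates the shared-encoder, two-decoder autoencoder design popularised by early face-swap tools: one encoder learns features common to both faces, each identity gets its own decoder, and the swap happens at inference time by routing person A’s encoding through person B’s decoder. This is a minimal, illustrative PyTorch sketch; the class name, layer sizes and random stand-in data are assumptions, not any production system.

```python
# Minimal sketch of the classic shared-encoder, two-decoder face-swap
# architecture (illustrative only; real systems are convolutional and
# far larger, and are trained on aligned face crops).
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared encoder learns features common to both identities.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )
        # Each identity gets its own decoder.
        self.decoder_a = self._decoder()
        self.decoder_b = self._decoder()

    @staticmethod
    def _decoder():
        return nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, faces, identity):
        latent = self.encoder(faces)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(latent).view(-1, 3, 64, 64)

model = FaceSwapAutoencoder()
faces_a = torch.rand(8, 3, 64, 64)   # stand-in for face crops of person A
recon = model(faces_a, "a")          # training: reconstruct A with A's decoder
loss = nn.functional.mse_loss(recon, faces_a)
swapped = model(faces_a, "b")        # inference: render person A as person B
```

Real systems add face detection and alignment, much larger convolutional networks and often adversarial losses on top of this basic reconstruction objective.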

According to Areeq Chowdhury, who researched the technology by producing deepfakes of UK Prime Minister Boris Johnson and then-Leader of the Opposition Jeremy Corbyn during the 2019 general-election campaign, Channel 4’s decision to highlight the impact of deepfakes was the right one, but the technology does not currently pose a widespread threat to information sharing. “The risk is that it becomes easier and easier to use deepfakes, and there is the obvious challenge of having fake information out there, but also the threat that they undermine genuine video footage which could be dismissed as a deepfake,” Chowdhury told The Guardian. “My view is that we should generally be concerned about this tech, but that the main problem with deepfakes today is their use in non-consensual deepfake pornography, rather than information.”

Indeed, the Queen’s alternative speech is far from the first widespread application of deepfakes. And while early iterations of the technology made it obvious that the target video had been doctored, its evolution in recent years has made fake content much more difficult to distinguish from the real thing. “Since the inception of deepfakes in 2017, we have witnessed an exponential growth in them similar to that seen in the early days of malware in the 1990s,” noted Estonia-based firm Sentinel, which specialises in helping to keep democracies free from disinformation campaigns. “Since 2019, the number of deepfakes online has grown from 14,678 to 145,227, a staggering growth of ~900 percent YOY.” Forrester Research, meanwhile, estimated in October 2019 that deepfake fraud scams would have cost $250 million by the end of 2020.

Most commonly, deepfake technology has been used in the political arena to falsify claims made by politicians and mislead the public. John Villasenor, a senior fellow of governance studies at the Center for Technology Innovation at the Brookings Institution, told CNBC in 2019 that it can be used to undermine a political candidate’s reputation by making the candidate appear to have said or done things that never actually occurred. “They are a powerful new tool for those who might want to (use) misinformation to influence an election,” he said.

Most recently, supporters of former US President Donald Trump mused over whether a speech in which he conceded the 2020 election to incoming President Joe Biden was, in fact, a deepfake. “I am outraged by the violence, lawlessness, and mayhem,” Trump said in the video. “The demonstrators who infiltrated the Capitol have defiled the seat of American democracy. To those who engaged in the acts of violence and destruction: You do not represent our country. To those who broke the law: You will pay.” With such statements standing in stark contrast to sentiments he had expressed previously, supporters were left wondering whether deepfake technology had been employed. “Anyone else notice this eerie deepfake look to Trump, or is he airbrushed?” one supporter tweeted soon after.

“One side effect of the use of deepfakes for disinformation is the diminished trust of citizens in authority and information media,” according to a recent report from Europol (European Union Agency for Law Enforcement Cooperation) and the United Nations. Flooded with ever-greater volumes of AI-generated spam and fake news that build on bigoted texts, fake videos and a plethora of conspiracy theories, people might come to feel that a considerable amount of information, including videos, simply cannot be trusted, a phenomenon termed the “information apocalypse” or “reality apathy”. And as Google research engineer Nick Dufour acknowledged, deepfakes “have allowed people to claim that video evidence that would otherwise be very compelling is a fake”.

It would seem that preventative action should be taken sooner rather than later, especially given how sophisticated the technology has become. “Wow, this is developing more rapidly than I thought,” acknowledged Hao Li, a deepfake pioneer and an associate professor at the University of Southern California, in September 2019. “We are working together on an approach that assumes that deepfakes will be perfect…. Our guess is that in two to three years, it’s going to be perfect. There will be no way to tell if it’s real or not, so we have to take a different approach.”

Hypothetically, then, deepfakes could end up having hugely damaging consequences. Indeed, Brookings researchers Chris Meserole and Alina Polyakova suggest that the United States and its allies are currently “ill-prepared” for the wave of deepfakes that Russian disinformation campaigns could inflict upon the world. “To cite just one example, fake Russian accounts on social media claiming to be affiliated with the Black Lives Matter movement shared inflammatory content purposely designed to stoke racial tensions,” Robert Chesney and Danielle Citron wrote in Foreign Affairs magazine. “Next time, instead of tweets and Facebook posts, such disinformation could come in the form of a fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.”

Responding to such concerns, the US Senate approved a bill in November 2020 requiring the government to conduct further research into deepfakes. “This bill directs the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research on generative adversarial networks. A generative adversarial network is a software system designed to be trained with authentic inputs (e.g. photographs) to generate similar, but artificial, outputs (e.g. deepfakes),” according to a summary of the bill. “Specifically, the NSF must support research on manipulated or synthesized content and information authenticity, and NIST must support research for the development of measurements and standards necessary to accelerate the development of the technological tools to examine the function and outputs of generative adversarial networks or other technologies that synthesize or manipulate content.”
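For readers unfamiliar with the term, the toy example below sketches the adversarial training loop the bill’s summary describes: a discriminator is trained to distinguish authentic inputs from generated ones, while a generator is trained to fool it. This is a minimal PyTorch sketch with made-up dimensions and random stand-in data; it does not reflect any actual NSF or NIST research programme.

```python
# Minimal GAN training loop: discriminator D learns to separate authentic
# samples from generated ones; generator G learns to produce samples that
# D accepts as real. Toy dimensions and random stand-in data throughout.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, data_dim)  # stand-in for authentic inputs (e.g. photos)
for step in range(100):
    # Discriminator step: label authentic samples 1 and generated samples 0.
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push D to label freshly generated samples as authentic.
    fake = G(torch.randn(32, latent_dim))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```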

According to Accenture, businesses can adopt a three-pillared strategy to guard against deepfakes:

  1. Employee training and awareness to create an additional line of defence. “Training should focus on how the technology is leveraged in malicious attempts and how this can be detected: enabling employees to spot deepfake-based social engineering attempts,” noted Accenture, adding that a methodology similar to the security-awareness programmes used to counter email-based phishing can be applied.
  2. A detection model to identify false media as early as possible and thus minimise the impact on the organisation. “This is especially relevant when countering attempts by malicious actors to influence public opinion through deepfakes,” Accenture observed, having partnered with start-ups to develop models that can detect fake media (a minimal sketch of such a detector follows this list).
  3. A response strategy to ensure the organisation can adequately respond to a deepfake. “Have a plan in place that can be set in motion when a deepfake is detected. It’s important that individual responsibilities and required actions are defined in this plan.”
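As a rough illustration of the second pillar, the sketch below shows the shape of such a detection step: a binary classifier scores each video frame, and a threshold decides whether to trigger the response plan. The network, threshold and frame data here are hypothetical placeholders written in PyTorch, not Accenture’s or its partners’ actual models.

```python
# Hypothetical deepfake-detection step: score a frame with a binary
# classifier and trigger the response plan above a threshold. The untrained
# network below is a stand-in for a model trained on labelled real/fake media.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

def looks_manipulated(frame: torch.Tensor, threshold: float = 0.5) -> bool:
    """Return True if a single RGB frame scores above the fake threshold."""
    with torch.no_grad():
        prob = torch.sigmoid(detector(frame.unsqueeze(0))).item()
    return prob >= threshold

frame = torch.rand(3, 224, 224)  # stand-in for one decoded video frame
if looks_manipulated(frame):
    print("Suspected deepfake: escalate per the response plan (pillar 3).")
```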

On the plus side, at least a few beneficial applications of the technology also exist. The film industry, for instance, can benefit in several ways. “For example, it can help in making digital voices for actors who lost theirs due to disease, or for updating film footage instead of reshooting it,” stated the November 2019 study “The Emergence of Deepfake Technology: A Review” published in the journal Technology Innovation Management Review. “Moviemakers will be able to recreate classic scenes in movies, create new movies starring long-dead actors, make use of special effects and advanced face editing in post-production, and improve amateur videos to professional quality.”

Nonetheless, it is clear that in the era of disinformation in which we now live, deepfakes represent a seriously dangerous weapon. Democracies will either have to learn to live with such lies or act quickly to preserve the truth before it fades irretrievably.

 
