Deepfakes: The National Security Threat in Context

April 2021

Speaker(s): Bateman, J. (Carnegie Endowment for International Peace’s Cyber Policy Initiative); Hwang, T. (Georgetown University’s Center for Security and Emerging Technology (CSET)); Wright, N. (Intelligent Biology, Georgetown University Medical Center, University College London, and New America)

Date: 7 April 2021

Speaker Session Summary

SMA hosted a speaker series session with Mr. Jon Bateman (Carnegie Endowment for International Peace’s Cyber Policy Initiative), Mr. Tim Hwang (Georgetown University’s Center for Security and Emerging Technology (CSET)), and Dr. Nicholas Wright (Intelligent Biology, Georgetown University Medical Center, University College London, and New America) as a part of its SMA IIJO Speaker Series.

Dr. Wright commenced the discussion by broadly defining a deepfake as an image of a person that has been manipulated to resemble someone else. The creation of media to deceive a population is not new; however, advancements in technology—particularly in machine learning and artificial intelligence (AI)—have made it possible to create believable fake images and text in recent years. At its onset, deepfake technology was predominantly used to produce fake adult videos; it can now be used to target political figures and large corporations and to influence individuals’ thoughts and actions. Dr. Wright argued that the disparity between how much is spent on creating deepfakes and how little is spent on defending against them constitutes a significant market failure.

Next, Mr. Hwang pointed out that the threat that deepfakes present to the US’s civilian population is small. He argued that the fear surrounding the societal impacts of deepfakes is often exaggerated due to the threat perception models used to evaluate them. In reality, convincing deepfakes are still expensive to create, which makes them too costly for everyday online trolls to produce. Furthermore, machine learning on the scale needed to create deepfakes takes at least two weeks to conduct, which prevents actors that follow social media trends, such as Russia, from creating impactful and timely deepfakes. The software used to debunk deepfakes likewise relies on machine learning conducted by a specialized technician and takes time, which puts organizations trying to defend against deepfakes at a disadvantage as well.

Mr. Bateman focused on the potential for deepfakes to affect the global financial system, arguing that ultimately, they pose a small threat to it. He did acknowledge that fake text written by AI is a growing threat; however, deepfakes are only one of many tools that may be used to influence or deceive a target. Moreover, the wide array of deepfake formats (fabricated audio, text, and images) makes it difficult for the US and other countries to predict and proactively counter them. Mr. Bateman also argued that deepfakes will likely be used to influence stock markets by targeting companies’ brands. However, he agreed with Mr. Hwang’s assertion that the cost of deepfakes makes it likely that only well-financed actors will create them in the short term. Nonetheless, deepfakes have the potential to aggravate existing societal fissures among the civilian population, according to Mr. Bateman. He concluded by stating that the best way to counter deepfakes and protect the global financial system is an incremental approach, one that includes diverse multi-stakeholder efforts and prioritizes defending against the most catastrophic scenarios.

Speaker Session Recording

Note: We are aware that many government IT providers have blocked access to YouTube from government machines during the pandemic in response to bandwidth limitations. We recommend viewing the recording on YouTube from a non-government computer or listening to the audio file (below), if you are in this position.

Briefing Materials
