Don’t Believe Your Eyes (Or Ears): The Weaponization of Artificial Intelligence, Machine Learning, and Deepfakes

Speaker: Littell, J. (Duke University)

Date: 17 December 2019

Speaker Session Preview

SMA hosted a speaker session presented by CPT Joseph Littell (Duke University) as part of its SMA General Speaker Series. CPT Littell's brief focused on deepfakes, visual and audio manipulation, forgery, and data poisoning. To begin, he stated that machine learning (ML) is a process that estimates the probability of a new event occurring based on past data. ML technologies can be supervised, requiring human labeling of the raw data collected, or unsupervised, requiring no such input. CPT Littell explained that deepfakes (falsified images, audio, or video created using artificial neural networks) are examples of systems that utilize supervised ML.

He discussed several programs that apply deepfake techniques to visual manipulation, including Faceswap (a program that uses autoencoders to remove an individual's face from a video and replace it with another face in the same position and expression), StyleGAN (a program used to create highly realistic portraits of humans in natural poses based on baseline images), YouTube videos, and social media filters. CPT Littell explained that, to his knowledge, visual deepfakes are not being used heavily outside of the open source realm. Moreover, despite the steady increase in the sophistication and accuracy of visual deepfakes, he does not believe that they will ever successfully fool a nation like the US to the extent of causing a national or international crisis.

Next, CPT Littell discussed programs that apply deepfake techniques to audio manipulation, including Adobe Voco (a program that requires only 20 minutes of recorded conversation before it can reproduce an individual's voice from text input) and WaveNet (a similar program created by DeepMind). He then highlighted that, although much of the recent focus on deepfakes has been on images, video, and audio, the creation of highly accurate and realistic administrative documents is also a realistic threat. He explained that while the US has numerous safeguards to prevent forgeries, many countries in which US personnel have operated do not have the same level of sophistication. Data poisoning also presents a potential security concern, as false data can be fed into an ML model for nefarious purposes. To conclude, CPT Littell stressed the need for methodology and best practices regarding ML technologies to be shared across the defense enterprise (i.e., across the DOD, the international community, and private national laboratories).
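To make the supervised/unsupervised distinction concrete, the following is a minimal sketch (not drawn from the talk) using Python and scikit-learn: a logistic regression fit on human-supplied labels versus a k-means clustering that receives no labels at all. The toy data and parameters are illustrative assumptions.

```python
# Minimal illustration of supervised vs. unsupervised learning (toy example).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: two clusters of 2-D points.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

# Supervised learning: labels (0 or 1) are supplied by a human.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[4.0, 4.0]]))

# Unsupervised learning: no labels; the algorithm groups the data itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("unsupervised cluster:", km.predict([[4.0, 4.0]]))
```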
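Faceswap-style programs are commonly described as training one shared encoder with a separate decoder per identity; the swap then amounts to encoding person A's face and decoding it with person B's decoder, reproducing B's face in A's pose and expression. The PyTorch sketch below illustrates only that architecture; the layer sizes, names, and random stand-in tensors are illustrative assumptions, not Faceswap's actual code.

```python
# Sketch of the shared-encoder / two-decoder autoencoder design commonly
# used for face swapping (illustrative; not Faceswap's actual code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a latent vector.
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, latent), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face crop from the latent vector; one per identity.
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent, 3 * 64 * 64), nn.Sigmoid())
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (sketch): each decoder learns to reconstruct its own person's
# faces from the shared latent space.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for aligned crops of person B
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()  # in practice, many such steps with an optimizer

# The swap: encode person A's face, decode with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```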
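As for data poisoning, one common technique (an illustrative assumption here, not the speaker's example) is label flipping: an attacker who can corrupt training labels degrades the resulting model even though the features are untouched. A minimal sketch:

```python
# Label-flipping sketch of data poisoning (illustrative toy example).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# Poison the training set: flip 40% of class-1 labels to class 0,
# which typically shifts the decision boundary and lowers accuracy.
y_bad = y_tr.copy()
ones = np.flatnonzero(y_bad == 1)
idx = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_bad[idx] = 0
poisoned = LogisticRegression().fit(X_tr, y_bad)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```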

Speaker Session Audio Recording

Download CPT Littell’s Biography and Slides

CPT Littell's related article can be found at https://warontherocks.com/2019/10/dont-believe-your-eyes-or-ears-the-weaponization-of-artificial-intelligence-machine-learning-and-deepfakes/
