Modeling Persuasion through Human/Machine Coding

December 2018


Speakers: Ng, V. (University of Texas at Dallas); Rankin, M. (University of Texas at Dallas)

Date: 7 December 2018

Speaker Session Preview

SMA hosted a speaker session presented by Dr. Monica Rankin (University of Texas at Dallas) and Dr. Vincent Ng (University of Texas at Dallas) as part of its SMA General Speaker Series. Dr. Rankin began the presentation by stating the primary goal of the University of Texas at Dallas project: to create a template for understanding effective persuasion and influence campaigns using a combination of human and machine coding. Ideally, this model will allow machines to learn from human inputs and humans to learn from machine outputs. She then defined “persuasion” and emphasized the importance of understanding persuasion in the context of international affairs. Next, Dr. Rankin highlighted some broader implications of the project and explained the levels their model uses to analyze influence messaging: seven common propaganda devices, five specific persuasion themes, and seven content-specific characteristics, applied to the case of Latin America during World War II.

Dr. Ng then discussed the active learning algorithms used to decide whether a human or a machine should code a particular document (a simplified sketch of this routing step appears below). He also spoke about the neural models the project team used to examine both images and text.

Dr. Rankin concluded the presentation by discussing the project’s data set, which contains every issue of En Guardia, a magazine published during World War II with the goal of convincing Latin America to side with the Allies. She identified the propaganda tactics the magazine used, discussed how its strategy changed over time, and highlighted the primary differences between one of its earliest and one of its last editions.
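
The speakers did not walk through their specific algorithm, but the general idea of routing documents between human coders and a machine classifier can be illustrated with a standard uncertainty-sampling loop. The sketch below is illustrative only and assumes a TF-IDF text classifier built with scikit-learn; the function name `route_documents`, the confidence threshold, and the toy labels are hypothetical and are not the project team's actual implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def route_documents(labeled_texts, labels, unlabeled_texts, confidence_threshold=0.8):
    """Split unlabeled documents into a human-coding queue (low model
    confidence) and a set of machine-assigned labels (high confidence)."""
    # Fit a simple bag-of-words representation on the documents humans have coded.
    vectorizer = TfidfVectorizer()
    X_labeled = vectorizer.fit_transform(labeled_texts)
    X_unlabeled = vectorizer.transform(unlabeled_texts)

    # Train a classifier on the human-coded documents.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_labeled, labels)

    # Uncertainty sampling: the model's top predicted probability is its confidence.
    probs = model.predict_proba(X_unlabeled)
    confidence = probs.max(axis=1)

    to_human = [i for i, c in enumerate(confidence) if c < confidence_threshold]
    machine_labels = {
        i: model.classes_[probs[i].argmax()]
        for i, c in enumerate(confidence) if c >= confidence_threshold
    }
    return to_human, machine_labels

# Toy usage: low-confidence documents go back to human coders, whose new labels
# would be added to the training set on the next iteration of the loop.
human_queue, auto_labels = route_documents(
    labeled_texts=["defend the hemisphere", "buy war bonds", "tractor maintenance tips"],
    labels=["propaganda", "propaganda", "neutral"],
    unlabeled_texts=["stand with the Allies", "crop prices this season"],
)
```

In practice such a loop repeats: each batch of human labels retrains the model, which can then take over more of the coding with acceptable confidence.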

Speaker Session Audio File

Download Dr. Rankin and Dr. Ng’s biographies and slides


 
