A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence

March 2020

Speaker: Dr. Michael Horowitz (University of Pennsylvania)

Date: 18 March 2020

Speaker Session Summary

SMA hosted a speaker session presented by Dr. Michael Horowitz (University of Pennsylvania) as a part of its SMA STRATCOM Academic Alliance Speaker Series. Dr. Horowitz began by explaining how emerging technologies shape global politics, using the global spread of the telegraph, the Cuban missile crisis, the Campaign to Stop Killer Robots, and the proliferation of Chinese drones as examples. He then focused on automation, autonomy, and artificial intelligence (AI). Dr. Horowitz defined AI as “the use of computers to simulate human behavior that requires intelligence.” AI is broad, multi-purpose, and dual use, and it has a low barrier to entry. There are various approaches to AI (e.g., symbolic vs. connectionist methods, machine learning, neural networks) and types of AI (e.g., narrow AI, general intelligence, superintelligence). Furthermore, AI is an enabler rather than a weapon: it can direct physical objects, process data, and perform overall information management, but it is neither a gun nor a plane. The implication is that AI can be applied to far broader purposes than traditional military technologies.

Next, Dr. Horowitz explained that actors pursue autonomy and AI for their speed, precision, bandwidth and hacking capabilities, and decision-making assistance. AI also has downsides, however. Autonomous systems can be “brittle” (i.e., limited in the conditions under which they can operate and requiring an immense amount of training). One must also be wary of automation bias, or placing too much trust in machines and algorithms, which can lead to mistakes. Dr. Horowitz added that if a nation is already confident in its second-strike capability, the benefits of automation are not necessarily worth the brittleness risks.

Dr. Horowitz then spoke about early warning systems and command and control. He stated that the US’s existing early warning systems are already highly automated, given their long-range radar and satellite-based alert systems, rapid retargeting capability, and communication rockets used to transmit launch codes. Early warning systems have both benefits (e.g., reliability and an ability to buy time) and downsides (e.g., a lack of human judgment and the potential for false alarms). Dr. Horowitz also discussed uninhabited nuclear platforms, highlighting their endurance and reliability, as well as their inability to maintain positive human control and their vulnerability to accidents, hacking, and spoofing. He also used the US, Russian, and North Korean militaries to illustrate how conventional military uses of AI can affect nuclear stability. Dr. Horowitz concluded by offering three key points: 1) the less secure a nation’s second-strike capability, the more likely it is to consider autonomous systems within its nuclear weapons complex; 2) greater automation in early warning systems carries risk; and 3) there is substantial risk associated with the impact of conventional military uses of autonomy on crisis stability.

Speaker Session Audio Recording

Download Dr. Horowitz’s Biography and Slides
