Author | Editor: Rogers, Z. (Flinders University of South Australia); Canna, S. (NSI, Inc.)
The cluster of technologies associated with artificial intelligence (AI) research and development, and the implications of these technologies across the spectrum of human activity including national security, generate a blur of fast-moving technological, political, strategic, and ethical puzzles. For all their hyper-modern veneer, tracing the trajectories of historical thought associated with these puzzles offers some much-needed clarity. To this end, what follows is an overview of three relevant trajectories. While far from exhaustive, this overview should give practitioners and theorists alike in national security a solid grounding in these thought trajectories, facilitating reasoned decisions about AI research and development.
The overall argument asserts that research and development of AI technologies for national security should be confined to areas where discrete, specified, assigned, and bounded problems and tasks can be scientifically explored and assessed. Battlefield AI for intelligence, surveillance, and reconnaissance (ISR), target identification, sensing, and weapons tracking is such an area. The Joint Artificial Intelligence Center's current four focus areas are appropriate (United States Department of Defense, 2019). Areas of indiscrete, unspecified, unassigned, and unbounded problems and tasks, such as the socio-political realm incorporating population-centric cognitive and information warfare, should be approached with a high degree of caution. Risk-taking in this area carries high uncertainty and high exposure to catastrophic costs to domestic populations. Policy-making in AI for national security should follow a bimodal strategy that allocates appropriate cost/risk ratios to maximize adaptive innovation and minimize hubris.