AI, Automated Systems, and Future Use-of-Force Decision Making: Anticipating Effects

This two-year research project (February 2023 – February 2025) is generously funded by an Australian Department of Defence Strategic Policy Grant.
Chief Investigator: Prof. Toni Erskine, Professor of International Politics & Director of the Coral Bell School of Asia Pacific Affairs, Australian National University (ANU)
Context
The use of artificial intelligence (AI), machine learning, and automated systems has already changed the nature of the battlefield. The further diffusion of AI-enabled systems into states’ resort-to-force decision making is unavoidable for Australia, its allies, and its adversaries. In the United States, for example, machine learning techniques are already used in some intelligence analyses, which, in turn, contribute to decisions about whether and when to use force. While this contribution is currently limited and indirect, trends in other realms suggest that the use of AI-driven systems will increase in this high-stakes area. Separately, there is potential for AI-enabled automated systems to initiate escalatory defensive action in contexts such as the cyber realm. If we begin to consider the possible future effects of using these technologies in resort-to-force decision-making processes now, we can develop policy to guide their development and use, promote necessary education and training, and, ultimately, mitigate risks.
Research Focus
This research project will directly address the Australian Department of Defence’s 2022 ‘Priority Policy Topic’ on ‘emerging and disruptive technologies’. It is also relevant to its Priority Policy Topics on ‘expanding capabilities in cyber’ and ‘challenges to global rules, norms and institutions’.
Specifically, it will analyse emerging and disruptive technologies in the form of AI-enabled systems used both to inform decision making on the use of force and, in some contexts – such as defence against cyberattacks – to make and directly implement decisions on the use of force. In the former case, human decision makers draw on algorithmic recommendations and predictions to reach use-of-force decisions; in the latter case, decisions are reached with or without human oversight. Both entail future-focused but foreseeable developments that challenge existing rules and norms surrounding the use of force – and warrant immediate consideration in defence policy.
Machine learning techniques enhance our decision-making capacities by analysing huge quantities of data quickly, predicting outcomes, calculating opportunities and risks, and uncovering patterns of correlation in datasets that are beyond human cognition. The potential benefits of using AI-enabled systems are clear in scenarios where predictive analyses of key strategic variables – such as anticipated threat, risk of inaction, proportionality of a potential response, and mission cost – are fundamental.
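To make the shape of such decision support concrete, the sketch below shows how a tool might fold predictive estimates of these strategic variables into a single advisory score. It is a minimal illustration only: the variable names, weights, and values are hypothetical placeholders rather than the project’s method, and a real system would derive its estimates from trained models over large datasets.

```python
# A minimal, illustrative sketch of AI-enabled decision support.
# All names, weights, and values are hypothetical placeholders; a real
# system would learn these from data rather than hard-code them.
from dataclasses import dataclass


@dataclass
class StrategicEstimate:
    threat_likelihood: float  # predicted probability of attack (0-1)
    inaction_risk: float      # estimated cost of not responding (0-1)
    proportionality: float    # fit of the proposed response (0-1)
    mission_cost: float       # projected cost of acting (0-1)


def advisory_score(e: StrategicEstimate) -> float:
    """Combine predictive estimates into a single advisory score.

    The fixed weights below stand in for what a trained model would
    supply; they exist only to show the shape of the calculation.
    """
    return (0.4 * e.threat_likelihood
            + 0.3 * e.inaction_risk
            + 0.2 * e.proportionality
            - 0.1 * e.mission_cost)


if __name__ == "__main__":
    estimate = StrategicEstimate(
        threat_likelihood=0.72,
        inaction_risk=0.55,
        proportionality=0.80,
        mission_cost=0.35,
    )
    # The output is advisory only: a human decision maker interprets it.
    print(f"advisory score: {advisory_score(estimate):.2f}")
```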
Yet there are also complications that would accompany reliance on these systems. It is imperative to determine their implications for Australia’s future defence and security environment. This project will focus on the following:
Complication 1:
When programmed to calculate – or automatically implement – a response to a particular set of circumstances, intelligent machines will behave differently from human agents.
This difference complicates understandings of deterrence. Current perceptions of a state’s willingness to use force in response to aggression are based on assumptions of human judgement (and forbearance) rather than automated calculations. The use of automated systems – which would make and implement decisions at speeds impossible for human actors – could result in unintended escalations in the use of force.
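The feedback loop behind this concern can be seen in a deliberately simplified toy model (the response rule, multiplier, and ceiling below are invented for illustration and are not drawn from the project’s research): two automated systems, each programmed to marginally over-match the other’s last move, escalate past any intended limit when no human pause sits in the loop.

```python
# Toy model of machine-speed escalation between two automated systems.
# The response rule and all numbers are hypothetical illustrations.

def automated_response(incoming_intensity: float) -> float:
    """Hypothetical policy: deter by responding slightly above the
    observed provocation. Both sides apply the same rule."""
    return incoming_intensity * 1.2  # 20% over-match, a placeholder value

intensity = 1.0             # an initial, low-level provocation
exchanges = 0
ESCALATION_CEILING = 100.0  # a threshold no human overseer intended to cross

while intensity < ESCALATION_CEILING:
    intensity = automated_response(intensity)  # side A responds
    intensity = automated_response(intensity)  # side B responds in kind
    exchanges += 1

# With no human pause in the loop, the exchange crosses the ceiling after
# only 13 round trips -- at machine speed, effectively instantaneous.
print(f"ceiling crossed after {exchanges} automated exchanges")
```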
Complication 2:
Empirical studies show that individuals and teams relying on AI-driven systems often experience ‘automation bias’ – the tendency to accept computer-generated outputs without question. This tendency makes human decision makers less likely to use their own expertise and judgement to test machine-generated recommendations.
Unintended consequences include acceptance of error, the de-skilling of human actors, and decreased compliance with international rules and norms of restraint in the use of force.
Complication 3:
Machine learning processes are frequently opaque and unpredictable. Those who are guided by them often do not understand how predictions and recommendations are reached, and do not grasp their limitations. The current lack of transparency in much AI-driven decision making – ‘algorithmic opacity’ – has led to negative consequences across a range of contexts.
As governments’ democratic – and international – legitimacy requires compelling and accessible justifications for decisions to use force, algorithmic opacity poses grave concerns.
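A toy example makes the point (everything here is synthetic; the network and its weights are random stand-ins for a trained model): even a very small neural network produces its output through layers of weighted sums, and inspecting those weights yields no human-readable justification for the recommendation.

```python
# Illustrative sketch of algorithmic opacity (all data are synthetic).
# Even a tiny neural network's 'reasoning' is just matrix arithmetic:
# no individual weight maps to an articulable reason.
import numpy as np

rng = np.random.default_rng(seed=42)

def predict(features: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> float:
    """Two-layer network producing a probability-like score."""
    hidden = np.tanh(features @ W1)                  # nonlinear mixing of inputs
    return float(1 / (1 + np.exp(-(hidden @ W2))))  # squash to 0..1

# Hypothetical weights standing in for a trained model.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16,))

# Four synthetic input signals (e.g., indicators feeding an assessment).
signals = np.array([0.9, 0.1, 0.4, 0.7])
print(f"model output: {predict(signals, W1, W2):.2f}")
# Asking *why* the score is what it is means interrogating 80 opaque
# weights (4x16 + 16); none of them reads as a justification.
```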
Complication 4:
Studies in both International Relations (IR) and organisational theory reveal the existing complexities and pathologies of organisational decision making. Introducing AI-driven decision-support and automated systems into these complex structures risks exacerbating these problems.
Without carefully developed guidelines, AI-enabled systems at the national level could distort and disrupt strategic and operational decision-making processes and chains of command.
These complications – and their potential implications for Australia’s defence policy – warrant serious attention. This project will bring together new voices and diverse perspectives – an international group of practitioners and multidisciplinary, world-leading scholars – to contribute to a comprehensive study of the risks and opportunities of introducing AI-enabled systems into state-level decisions to engage in war across these four thematic areas.
This project will initiate a much-needed, research-led discussion on the effects of AI-enabled systems in use-of-force decision making. It also seeks to significantly extend Australia’s public strategic policy debate on the impacts of disruptive and emerging technologies.
Project Activities
This two-year research project will have an important international collaborative dimension, which will include:
• two International Workshops on AI, Automated Systems and Use-of-Force Decision Making, to be co-convened by Professor Steven E. Miller (Belfer Center, Harvard University) and Professor Toni Erskine (ANU) and held at the ANU in Canberra in June/July 2023 and June/July 2024; and
• a Policy Roundtable, also to be held at the ANU in Canberra in June/July 2024.
Leading scholars and practitioners working in international security, strategic and defence studies, and machine intelligence will be invited to participate in these activities and explore the risks and opportunities of introducing AI, machine learning, and automated systems into state-level use-of-force decision making.
In addition to a series of published outputs, the project will include an ‘AI, Decision Making, and the Future of War’ Seminar Series and Public Lecture Series.
Upcoming Seminar Series and Public Lecture Series events will be announced soon.
People
Professor Toni Erskine, Chief Investigator and Workshop Co-Convenor
Toni Erskine is Director of the Coral Bell School of Asia Pacific Affairs and Professor of International Politics at The Australian National University (ANU). She is also Editor of the journal International Theory: A Journal of International Politics, Law, and Philosophy and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University. She currently serves as Academic Lead for the United Nations Economic and Social Commission for Asia and the Pacific/APRU ‘AI for the Social Good’ Research Project and in this capacity works closely with government departments in Thailand and Bangladesh. She is also a Chief Investigator and Founding Member of the ANU ‘Humanising Machine Intelligence’ Grand Challenge Research Project. Her research interests include the moral agency and responsibility of formal organisations in world politics; the ethics of war; the responsibility to protect (R2P); joint purposive action and informal coalitions; and the impact of new technologies on organised violence.
Professor Steven E. Miller, Workshop Co-Convenor
Steven E. Miller is Director of the International Security Program at the Belfer Center for Science and International Affairs at the Kennedy School, Harvard University. He is Editor-in-Chief of the quarterly journal International Security and co-editor of the International Security Program’s book series, Belfer Center Studies in International Security (published by the MIT Press). Previously, he was Senior Research Fellow at the Stockholm International Peace Research Institute (SIPRI) and taught Defense and Arms Control Studies in the Department of Political Science at the Massachusetts Institute of Technology. He is editor or co-editor of more than two dozen books, including, most recently, The Next Great War? The Roots of World War I and the Risk of U.S.-China Conflict. Professor Miller is a Fellow of the American Academy of Arts and Sciences, where he is a member of its Committee on International Security Studies (CISS). He currently leads the Academy’s project on Promoting Dialogue on Arms Control and Disarmament. He is also co-chair of the U.S. Pugwash Committee and a member of the Council of International Pugwash.
Emily Hitchman, Project Research Officer
Emily Hitchman is the Research Officer on the AI, Automated Systems, and Future Use-of-Force Decision Making: Anticipating Effects project. Emily is a PhD scholar at the Strategic and Defence Studies Centre, focusing on the history of the Glomar (‘neither confirm nor deny’) response in the national security context. She is also a 2023 Sir Roland Wilson Scholar, has appeared on the National Security Podcast to discuss her research, and spoke as a panellist at the 2022 Australian Crisis Simulation Summit on the future of intelligence. Emily has worked professionally across national security and criminal justice public policy, including in law enforcement and cyber policy, and holds a Bachelor of Philosophy from The Australian National University.
Dr Bianca Baggiarini, Project Participant
Bianca Baggiarini is a political sociologist and Lecturer in military and war studies at the Strategic and Defence Studies Centre at the Australian National University (ANU). Her research and teaching apply sociological theories and methods to the study of war. Bianca’s current research is on the sociopolitical and ethical impacts of autonomy and AI-enabled technologies in military and security contexts. She is examining the role of trust discourse in shaping debates about ethical military AI (arguing that machine learning algorithms naturally agitate rules- and standards-based orders, thereby challenging the possibility of trust), the changing status of soldiers’ labour in response to increasing autonomy, and the social meaning of technology demonstrations as it relates to communicating the ethical and legal potential of AI-enabled systems. Her forthcoming monograph, Governing Military Sacrifice, is one of the first books to connect the rise of drones and combat unmanning with military and security privatisation, and it includes original interview data from drone advocates and critics alike. Bianca holds a PhD (2018) from York University in Toronto, an MA in sociology from Simon Fraser University, and a BA in political science from Simon Fraser University. From 2019 to 2021, she was a researcher at UNSW at the Australian Defence Force Academy.
Dr Justin K. Canfil, Project Participant
Justin K. Canfil is a postdoctoral fellow at the Belfer Center for Science and International Affairs at the Harvard Kennedy School, a nonresident scholar with Princeton University’s Center on Contemporary China, and an incoming Assistant Professor of International Relations and Emerging Technologies at Carnegie Mellon University. From 2024 to 2025, he will take leave to complete a Stanton Nuclear Security Fellowship at the Council on Foreign Relations. Dr Canfil’s research interests concern the impact of emerging technologies on international law and arms control, both past and present. His research has appeared in outlets such as the Journal of Cybersecurity and the Oxford Handbook of AI Governance. He received a Fulbright Scholarship to conduct doctoral research in China and a PhD in Political Science from Columbia University. He can be reached at www.jcanfil.com or on Twitter @jcanfil.
Dr Francesca Giovannini, Project Participant
Francesca Giovannini is Executive Director of the Project on Managing the Atom at the Harvard Kennedy School’s Belfer Center for Science & International Affairs and Research Director of the Nuclear Deterrence Research Network funded by the MacArthur Foundation. She is also an Adjunct Professor at the Fletcher School of Law and Diplomacy, where she teaches a graduate seminar on the role of nuclear weapons in the 21st century and a core course on Technology, Public Policy, and National Security. She is the lead faculty of the Fletcher School Executive Education course on ‘Negotiating Technology Agreements in Emerging Markets: Developing Strategic Capacities for Accessing Transformative Technologies’. Dr Giovannini served as a Senior Strategy and Policy Officer to the Executive Secretary of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Before her international appointment, she served five years at the American Academy of Arts and Sciences in Boston as the Director of the Research Program on Global Security and International Affairs. With a doctorate from the University of Oxford, Dr Giovannini began her career working for international organisations. She has published widely in Nature, the Bulletin of the Atomic Scientists, Arms Control Today, the National Interest, and The Washington Post, among others.
Professor Sarah Kreps, Project Participant
Sarah Kreps is the John L. Wetherill Professor of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at Cornell University. She is also a Non-Resident Senior Fellow at the Brookings Institution and a life member of the Council on Foreign Relations. Her work lies at the intersection of technology, politics, and national security, and is the subject of five books and a range of articles in academic journals such as the New England Journal of Medicine, Science Advances, Vaccine, Journal of the American Medical Association (JAMA) Network Open, American Political Science Review, and Journal of Cybersecurity; policy journals such as Foreign Affairs; and media outlets such as CNN, the BBC, The New York Times, and The Washington Post. She has a BA from Harvard University, an MSc from Oxford, and a PhD from Georgetown. From 1999 to 2003, she served as an active-duty officer in the United States Air Force.
Dr Sarah Logan, Project Participant
Sarah Logan is a lecturer in the Department of International Relations in the Coral Bell School of Asia Pacific Affairs at The Australian National University. She is a Chief Investigator at the University’s “Humanising Machine Intelligence” Grand Challenge Research Project. Her research interests include the future of open source intelligence; the governance of international data transfers; the development of global privacy norms; and the geopolitics of global technology standards. Her work has been funded by the Annenberg School for Communication, the Australian government, and the United Nations Economic and Social Commission for Asia and the Pacific/Association of Pacific Rim Universities. Her first book, Hold Your Friends Close: Countering Radicalization in Britain and America, was published by Oxford University Press in 2022.
Dr Osonde Osoba, Project Participant
Osonde Osoba, PhD (oh-shOwn-day aw-shAw-bah), is a researcher working at the intersection of artificial intelligence/machine learning (AI/ML) and public policy. Dr Osoba’s research weaves together two strands: the application of AI/ML to problems in public policy and the examination of the implications of reliance on automated decision systems. Recurring themes in his work include algorithmic equity, modelling for decision support, and modelling the behaviours of social agents. Dr Osoba is currently a senior AI engineer working on fairness at LinkedIn, where he helps enable the platform’s responsible and trustworthy use of AI/ML. Prior to LinkedIn, he was a senior information scientist at the RAND Corporation and a professor of public policy at the Pardee RAND Graduate School. His policy research portfolio at RAND focused on AI/ML applied to problems in social and economic well-being and national security. At the Pardee RAND Graduate School, he was the Associate Director of the Tech & Narrative Lab, helping to pioneer a novel program for training the next generation of effective and creative tech policy thought leaders. Dr Osoba earned his BSc in Electrical and Computer Engineering from the University of Rochester and his PhD in Electrical Engineering from the University of Southern California (USC).
Dr Mitja Sienknecht, Project Participant
Mitja Sienknecht is a postdoctoral researcher at the European New School of Digital Studies (European University Viadrina). Previously, she was an interim professor of European and International Politics at the EUV and a postdoctoral researcher at the WZB Berlin Social Science Center and the University of Münster, and she completed research stays at Koç University in Turkey. Her paper on the debordering of intrastate conflicts, based on her PhD, received the best paper award of the International Relations (IR) section of the German Political Science Association. Her research interests include the (digital) transformation of violence and conflicts; border and boundary studies; the responsibility of state and non-state actors in world politics; and inter- and intra-organizational decision making in security contexts. Her work is situated at the intersection of IR, peace and conflict studies, and science and technology studies (STS). In her current research, Mitja analyzes the impact of digitalization on armed conflicts and collaborates on developing and training an AI system to identify argumentative structures in IR theories.
Dr Benjamin Zala, Project Participant
Benjamin Zala is a Fellow in the Department of International Relations in the Coral Bell School of Asia Pacific Affairs at The Australian National University. His work focuses on the politics of the great powers and the management of nuclear weapons. His research has appeared in over a dozen peer-reviewed journals, including Review of International Studies, Journal of Global Security Studies, Third World Quarterly, and the Bulletin of the Atomic Scientists. His book Power in International Society: A Perceptual Approach to Great Power Politics is under contract with Oxford University Press, and his edited volume, National Perspectives on a Multipolar Order, was published by Manchester University Press in 2021. He has been a Stanton Nuclear Security Fellow at the Belfer Center for Science & International Affairs at Harvard University and has previously held positions in the UK at the Oxford Research Group, Chatham House, and the University of Leicester, where he is currently an Honorary Fellow working with the European Research Council-funded Third Nuclear Age project (https://thethirdnuclearage.com/).
Chief Investigator
Professor Toni Erskine
Director, Coral Bell School of Asia Pacific Affairs
Project Research Officer
Emily Hitchman
Project Research Officer, Coral Bell School of Asia Pacific Affairs
Postal address
Coral Bell School of Asia Pacific Affairs
Hedley Bull Building
130 Garran Road
The Australian National University, Acton ACT 2600 Australia