Strife

The Academic Blog of the Department of War Studies, King's College London

Ethics for the AI-Enabled Warfighter – The Human ‘Warrior-in-the-Design’

June 13, 2019 by J. Zhanna Malekos Smith

(U.S. Navy photo by Petty Officer 1st Class Shannon E. Renfroe/Released)

Can a victor truly be crowned in the great power competition for artificial intelligence? According to Russian President Vladimir Putin, “whoever becomes the leader in this sphere will become the ruler of the world.” But the life of a state, much like that of a human being, is always subject to shifts of fortune. To illustrate, consider an ancient fable. At a lavish banquet, King Croesus asked Solon of Athens whether he knew anyone more fortunate than Croesus himself, to which Solon wisely answered: “The future bears down upon each one of us with all the hazards of the unknown, and we can only count a man happy when the gods have granted him good fortune to the end.” Thus, to better prepare the U.S. for sustainable leadership in AI innovation and military ethics, I recommend a set of principles to guide human warfighters in employing lethal autonomous weapon systems, or armed robots.

Sustainable Leadership

By 2035, the Department of Defense expects to have ground forces teaming up with robots. The discussion of how autonomous weapon systems should responsibly be integrated with human military elements, however, is only slowly unfolding. As Congress evaluates what the Defense Department should do, it must also consider how to prepare tomorrow’s warfighters for the ways armed robots will test military ethics.

As a starting point of reference, Isaac Asimov’s Three Laws of Robotics require that: (1) a robot must not harm humans; (2) a robot must follow all instructions given by humans, unless following those instructions would violate the first law; and (3) a robot must protect itself, so long as doing so does not violate the first or second laws. Unfortunately, these laws are silent on the ethical duties of the humans who employ robots. Thus, my research into autonomous weapon systems and ethical theories re-imagines Asimov’s Laws and offers a new code of conduct for servicemembers.

What is a Code of Conduct?

Fundamentally, a code of conduct is a set of beliefs about how to behave. Each service branch teaches its members to follow a code of conduct, such as the Soldier’s Creed and Warrior Ethos, the Airman’s Creed, and the Sailor’s Creed. Reflected across these distinct codes, however, is a shared commitment to values such as duty, honor, and integrity.

Drawing inspiration from these concepts and several robotics strategy assessments by the Marine Corps and Army, I offer a guiding vision — a human Warrior-in-the-Design Code of Conduct.

The Warrior-in-the-Design concept embodies both the Defense Directive that autonomous systems be designed to support the human judgment of commanders and operators in employing lethal force, and Human Rights Watch’s definition of human-out-of-the-loop weapons (i.e., robots that can select targets and apply force without human input or interaction).

The Warrior-in-the-Design Code of Conduct for Servicemembers:

  • “I am the Warrior-in-the-Design;
  • Every decision to employ force begins with human judgment;
  • I verify the autonomous weapon system’s target selection before authorizing engagement, escalating to fully autonomous capabilities only when necessary, as a final resort;
  • I will never forget my duty to responsibly operate these systems for the safety of my comrades and to uphold the law of war;
  • For I am the Warrior-in-the-Design.”

These principles encourage integrating AI and armed robots in ways that enhance, rather than supplant, human capability and the warrior psyche in combat. They also reinforce that humans are the central figures in overseeing, managing, and employing autonomous weapons.

International Developments

Granted, each country’s approach to developing autonomous weapons will vary. For instance, Russia’s military expects “large unmanned ground vehicles [to do] the actual fighting … alongside or ahead of the human fighting force.” Under its New Generation Plan, China aspires to lead the world in AI development by 2030, including enhanced man-machine coordination and unmanned systems such as service robots.

So far, the U.S. has focused on unmanned ground systems that support intelligence, surveillance, and reconnaissance operations. The Pentagon’s Joint Artificial Intelligence Center is currently testing how AI can support the military in firefighting and predictive maintenance. Additionally, President Trump’s Executive Order on Artificial Intelligence encourages government agencies to prioritize AI research and development. Adopting the Warrior-in-the-Design Code of Conduct would be a helpful first step in supporting this initiative.

How?

It would signal to private industry and international peers that the U.S. is committed to the responsible development of these technologies and to upholding international law. Some critics object to the idea of ‘killer robots’ because such systems would lack human ethical decision-making capabilities and might violate moral and legal principles. The Defense Department’s response is twofold. First, the technology is nowhere near advanced enough to field fully autonomous weapons, the kind that could, hypothetically at least, examine potential targets, evaluate how threatening they are, and fire accordingly. Second, such capabilities could help save the lives of military personnel and civilians by automating tasks that are “dull, dirty or dangerous” for humans.

Perhaps this creed could help bridge the communication divide between groups that worry such weapons violate human dignity and servicemembers who critically need automated assistance on the battlefield. The future of AI bears down upon each of us; let reason and ethics guide us there.

This article was originally published in The Hill.


Jessica ‘Zhanna’ Malekos Smith, the Reuben Everett Cyber Scholar at Duke University Law School, served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Before that, she was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London.
