by J. Zhanna Malekos Smith
14 June 2019
Can a victor truly be crowned in the great power competition for artificial intelligence? According to Russian President Vladimir Putin, “whoever becomes the leader in this sphere will become the ruler of the world.” But the life of a state, much like that of a human being, is always subject to shifts of fortune. To illustrate, let’s consider this fabled ancient tale. At a lavish banquet King Croesus asked Solon of Athens if he knew anyone more fortunate than Croesus; to which Solon wisely answered: “The future bears down upon each one of us with all the hazards of the unknown, and we can only count a man happy when the gods have granted him good fortune to the end.” Thus, to better prepare the U.S. for sustainable leadership in AI innovation and military ethics, I recommend a set of principles to guide human warfighters in employing lethal autonomous weapon systems — armed robots.
By 2035, the Defense Department expects to have ground forces teaming with robots. The discussion of how autonomous weapon systems should responsibly be integrated with human military elements, however, is only slowly unfolding. As Congress begins evaluating what the department should do, it must also consider preparing tomorrow’s warfighters for how armed robots will test military ethics.
As a starting point of reference, Isaac Asimov’s Three Laws of Robotics require: (1) a robot must not harm humans; (2) a robot must follow all instructions given by humans, except where following those instructions would violate the first law; and (3) a robot must protect itself, so long as its actions do not violate the first or second laws. Unfortunately, these laws are silent on how human ethics apply. Thus, my research into autonomous weapon systems and ethical theories re-imagines Asimov’s Laws and offers a new code of conduct for servicemembers.
What is a Code of Conduct?
Fundamentally, it is a set of beliefs on how to behave. Each service branch teaches members to follow a code of conduct like the Soldier’s Creed and Warrior Ethos, the Airman’s Creed, and the Sailor’s Creed. Reflected across these distinct codes, however, is a shared commitment to a value-system of duty, honor, and integrity, among others.
The Warrior-in-the-Design concept embodies both the Defense Directive that autonomous systems be designed to support the human judgment of commanders and operators in employing lethal force, and Human Rights Watch’s definition of human-out-of-the-loop weapons (i.e., robots that can select targets and apply force without human input or interaction).
The Warrior-in-the-Design Code of Conduct for Servicemembers:
- “I am the Warrior-in-the-Design;
- Every decision to employ force begins with human judgment;
- I verify the autonomous weapon system’s target selection before authorizing engagement, escalating to fully autonomous capabilities only when necessary, as a final resort;
- I will never forget my duty to responsibly operate these systems for the safety of my comrades and to uphold the law of war;
- For I am the Warrior-in-the-Design.”
These principles encourage integrating AI and armed robots in ways that enhance — rather than supplant — human capability and the warrior psyche in combat. Furthermore, they reinforce that humans are the central figures in overseeing, managing, and employing autonomous weapons.
Granted, each country’s approach to developing autonomous weapons will vary. For instance, Russia’s military expects “large unmanned ground vehicles [to do] the actual fighting … alongside or ahead of the human fighting force.” And under its New Generation Plan, China aspires to lead the world in AI development by 2030 — including enhanced man-machine coordination and unmanned systems such as service robots.
So far, the U.S. has focused on unmanned ground systems to support intelligence, surveillance, and reconnaissance operations. The Pentagon’s Joint Artificial Intelligence Center is currently testing how AI can support the military in firefighting and predictive maintenance tasks. Additionally, President Trump’s Executive Order on Artificial Intelligence encourages government agencies to prioritize AI research and development. Adopting the Warrior-in-the-Design Code of Conduct would be a helpful first step in supporting this initiative.
It would signal to private industry and international peers that the U.S. is committed to the responsible development of these technologies and to upholding international law. Some critics object to the idea of ‘killer robots’ because such weapons would lack human ethical decision-making capabilities and may violate moral and legal principles. The Defense Department’s response is twofold: First, the technology is nowhere near the advancement needed to operate fully autonomous weapons — the ones that could, hypothetically at least, examine potential targets, evaluate how threatening they are, and fire accordingly. Second, such technological capabilities could help save the lives of military personnel and civilians by automating tasks that are “dull, dirty or dangerous” for humans.
Perhaps this creed concept could help bridge the communication divide between groups that worry such weapons violate human dignity, and servicemembers who critically need automated assistance on the battlefield. The future of AI bears down upon each of us — let reason and ethics guide us there.
This article was originally published in The Hill.
Jessica ‘Zhanna’ Malekos Smith, the Reuben Everett Cyber Scholar at Duke University Law School, served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Before that, she was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London.