Strife

The Academic Blog of the Department of War Studies, King's College London

AI

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote-controlled Mark 8 Wheelbarrow counter-IED robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and it has its Joint Artificial Intelligence Center, which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well, and so do a few other states within Europe, France for example. But for the majority of states, say across NATO, AI in conflict is not currently top of the agenda: it is discussed at the strategic level, and people know that it will have an impact in 20 to 30 years’ time. So we are seeing that strategic discussion, but the costs are so high that for most states it is a matter of buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies, or with authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China ranks either top or joint top with the US at the moment when it comes to AI – I think there is a separate computing race. And when you look at conversations in the US and UK, for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain? 

AE: There are two aspects to this: one is the technical security angle and the second is the strategic security angle. In terms of cyber security, first, you have the threat that your AI system itself may not be acting as intended. Especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed a deep learning model or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong, and we have seen examples of that in the civic space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather whether the system is picking up the right thing. Obviously, within a conflict environment you do not want to detect a threat where there is not one, or miss one that is there.
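
To make the explainability problem concrete, here is a minimal sketch using entirely synthetic data and a small scikit-learn neural network (both are illustrative assumptions, not anything from the interview): the fitted weights are just arrays of numbers, and the closest thing to an explanation is an after-the-fact probe such as permutation importance.

```python
# Minimal sketch of the "black box" problem (synthetic data, toy model).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # 20 made-up telemetry features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)   # synthetic "threat" label

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, y)

# The fitted weights are just arrays of numbers; they do not explain *why*
# any individual event was flagged.
print([w.shape for w in model.coefs_])

# Post-hoc probing (here, permutation importance) is often the closest thing
# to an explanation that is available.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean.round(3))
```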

Second, there is the threat that your algorithm or data may be compromised without your knowledge. This could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks the attacker has been there all along and, therefore, that this is not abnormal activity and no flags are raised. So threat modelling that does not consider the creativity of attackers, or the insufficiency of the algorithm itself, could lead to something being deployed that is not fit for purpose.
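
A rough sketch of that poisoning scenario, again with synthetic data and an off-the-shelf anomaly detector (scikit-learn's IsolationForest, chosen purely for illustration): because the attacker's traffic is already present in the training window, the detector tends to treat it as normal and raises far fewer flags.

```python
# Sketch of training-data poisoning against an anomaly detector (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))    # benign telemetry
attacker_traffic = rng.normal(loc=4.0, scale=0.5, size=(1000, 8))  # clearly different

# Clean baseline: trained only on legitimate traffic, the attacker stands out.
clean_model = IsolationForest(random_state=0).fit(normal_traffic)
print("clean model flags:", (clean_model.predict(attacker_traffic) == -1).mean())

# Poisoned training set: the attacker was "there all along", so their activity
# is learned as part of normal behaviour and is far less likely to be flagged.
poisoned_model = IsolationForest(random_state=0).fit(
    np.vstack([normal_traffic, attacker_traffic]))
print("poisoned model flags:", (poisoned_model.predict(attacker_traffic) == -1).mean())
```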

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
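
And a toy version of the gaming problem, with made-up 'infrastructure' features and a simple linear classifier standing in for the targeting filter (all assumptions for illustration): the adversary nudges the observable features in the direction the model is most sensitive to until it labels the site a hospital, even though nothing about the site has really changed.

```python
# Toy evasion attack against a "do not target hospitals" filter (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 10))                  # made-up infrastructure features
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # 1 = "hospital", 0 = "other"
clf = LogisticRegression().fit(X, y)

site = X[y == 0][0].copy()                       # a site that is genuinely not a hospital
print("before:", clf.predict([site])[0])

# Naive evasion: step each feature in the direction the model is most
# sensitive to (the sign of its coefficients) until the label flips.
step = 0.05 * np.sign(clf.coef_[0])
for _ in range(200):
    if clf.predict([site])[0] == 1:
        break
    site += step
print("after:", clf.predict([site])[0])
```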

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system, and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are far fewer images of tanks in the Baltic region or of particular kinds of weapon. The data is much more limited in secretive military contexts and it is potentially not being shared between nations to the extent that might be desirable when it comes to building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Turning to broader conversations, norms and regulation: I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles not only in the kinetic space; there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see that, while the UK and US have the power and resources to invest on their own, across NATO groups of countries are coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies – not the most secretive capabilities or the raw data but, for example, the principles by which it abides or certain open-source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.

Filed Under: Blog Article, Feature, Series Tagged With: AI, amy ertan, Artificial Intelligence, cyber, Cybersecurity, cyberwarfare, ed stacey, Military Cyber, offensive cyberwarfare, offensive cyberwarfare series

The First Tech War? Why the Korea-Japan Tensions are about US-China Competition on AI

March 27, 2020 by Yeseul Woo


Stand-off? South Korea’s Moon Jae-in and Japan’s Shinzo Abe (Image credit: Kim Kyung-hoon EPA-EFE)


The deteriorating relations between the United States, South Korea, and Japan have shaken the security system in Northeast Asia, which hinges on the alliances between the three countries. Observers typically attribute the slump in the relationship between South Korea and Japan to the latter’s removal of South Korea’s favoured “whitelisted” trade partner status, the imposition of export controls on its electronics sector, and South Korea’s August 2019 announcement that Seoul did not wish to renew the General Security of Military Information Agreement (GSOMIA). This is a naïve observation, missing the critical dynamic that is inextricably linked to the South Korea-Japan row: great-power competition between the United States and China in artificial intelligence (AI) technology.

On 30 October and 29 November 2018, South Korea’s Supreme Court ordered Nippon Steel & Sumitomo Metal and Mitsubishi Heavy Industries, respectively, to compensate South Koreans forced to work in their factories during the Japanese occupation period. The court ruled that if the Japanese companies refused to comply, the victims of forced labour could seek local court orders to seize their Korea-based assets.

Then, over the course of July 2019, Japan imposed export controls on three core materials required by South Korean tech companies to manufacture dynamic random-access memory semiconductors (DRAMS), essential components of 5G networks and AI. The export curbs require Japanese firms to seek licenses to export these materials to South Korea. Because Japan is the main producer of the core materials, the new export procedures disrupted supply chains and, with them, South Korea’s ability to manufacture DRAMS. On 2 August 2019, Japan removed South Korea from its whitelist of favoured trade partners, thereby prolonging and formalising the export curbs on these materials.

Although Japan claimed that the export regulations were designed to streamline export procedures in light of national security concerns, observers believed the new measures came in response to the South Korean Supreme Court rulings on South Korean forced labour in Japanese companies during the occupation period and to the ongoing disagreements between Japan and South Korea over the compensation of comfort women. South Korea’s response came later in August 2019, when Seoul announced its intention to terminate GSOMIA, reasoning that Japan’s export restrictions had caused a ‘grave’ change in security cooperation. Although South Korea and Japan have since agreed not to let GSOMIA lapse, the issue of whitelist exclusion has not been resolved. The trade row between the two countries is set to worsen when South Korea acts on its Supreme Court rulings by beginning to seize the Korea-based assets of Japanese companies.

But Japan’s export controls resemble the US-China trade war. Semiconductors are vital components of AI and 5G technology, which are used in surveillance technology and missile defence. They are imperative for national security: AI, for instance, is used to predict missile flight paths. The crucial link is this: two Korean companies, Samsung and SK Hynix, are the world’s largest and second-largest manufacturers of DRAMS respectively, accounting for 72.7% of the global DRAMS market in the fourth quarter of 2019. South Korean companies also account for a large proportion of the DRAMS supply of Huawei, China’s main producer of 5G and AI technology. Samsung’s recent launch of the Data and Information (DIT) Center, an effort to produce AI semiconductors, suggests that the company has outpaced its competitors.

South Korea’s DRAMS exports to Huawei might be a national security concern for the United States and Japan. By disrupting South Korea’s supply of the materials needed to manufacture DRAMS, Japan might potentially slow down China’s AI progress. Japan’s export restrictions undoubtedly align with US intentions. The Wall Street Journal reported on 17 February 2020 that the US Department of Commerce plans to restrict Chinese access to chip technology by seeking legislation to ‘require chip factories world-wide to get licenses if they plan to produce chips for Huawei.’ Furthermore, the US Department of Commerce plans on tightening export controls on chips to Huawei; license-free sales are only to be permitted where chips are less than ten per cent American-made. The threshold stands at twenty-five per cent at the time of writing. The United States has also pressured allies like Canada and European countries to contain Chinese semiconductor technology, causing a row between President Trump and Prime Minister Johnson after the UK allowed Huawei a limited role in the development of Britain’s 5G network.

In another twist, however, Japan’s decision to limit South Korea’s access to materials needed for its DRAMS production backfired. The export restrictions were a protectionist move – Japan was arguably hoping that its own companies would thrive once again to become the market leaders, which they were until Samsung and SK Hynix gained a competitive edge. But DuPont, a US chemical materials company, subsequently decided to establish a US$28 million production facility in Korea for materials used in extreme ultraviolet (EUV) lithography, which will ensure Korea’s supply of the key materials needed for the production of semiconductors. Therefore, if Japan is serious about its ambition to gain market share in the semiconductor industry, it should carefully consider its next steps.

In other words, what we may be witnessing with the row between South Korea and Japan is not so much a dispute over the compensation of South Korean forced wartime labourers or comfort women during the Japanese occupation period but the onset of the world’s first tech war: competition between the United States and China over supremacy in AI. South Korea has long aligned with the United States in geostrategic terms, but China’s overtaking of the United States as South Korea’s most important trade partner has placed Seoul in an awkward position, as the imposition of Japanese export controls—designed to hit one of South Korea’s major industries—has demonstrated.



Yeseul Woo is a PhD candidate at the Department of War Studies at King’s College London and a Developing Scholar at the Hudson Institute, Washington, D.C. She has previously served as a journalist for South Korean and U.S. media outlets and as a fellow at the East-West Center, the Pacific Forum and the Harry S. Truman Institute.

Filed Under: Blog Article, Feature Tagged With: Abe, AI, Comfort Women, Japan, Moon, Shinzo, South Korea, tech war, US-China, Yeseul Woo

Ethics for the AI-Enabled Warfighter – The Human ‘Warrior-in-the-Design’

June 13, 2019 by J. Zhanna Malekos Smith



(U.S. Navy photo by Petty Officer 1st Class Shannon E. Renfroe/Released)

Can a victor truly be crowned in the great power competition for artificial intelligence? According to Russian President Vladimir Putin, “whoever becomes the leader in this sphere will become the ruler of the world.” But the life of a state, much like that of a human being, is always subject to shifts of fortune. To illustrate, let’s consider this fabled ancient tale. At a lavish banquet King Croesus asked Solon of Athens if he knew anyone more fortunate than Croesus; to which Solon wisely answered: “The future bears down upon each one of us with all the hazards of the unknown, and we can only count a man happy when the gods have granted him good fortune to the end.” Thus, to better prepare the U.S. for sustainable leadership in AI innovation and military ethics, I recommend a set of principles to guide human warfighters in employing lethal autonomous weapon systems — armed robots.

Sustainable Leadership

By 2035, the Defense Department expects to have ground forces teaming up with robots. The discussion on how autonomous weapon systems should responsibly be integrated with human military elements, however, is slowly unfolding. As Congress begins evaluating what the Defense Department should do, it must also consider preparing tomorrow’s warfighters for how armed robots will test military ethics.

As a beginning point of reference, Isaac Asimov’s Three Laws of Robotics require: (1) a robot must not harm humans; (2) a robot must follow all instructions by humans, except if following those instructions would violate the first law; and (3) a robot must protect itself, so long as its actions do not violate the first or second laws. Unfortunately, these laws are silent on how human ethics apply here. Thus, my research into autonomous weapon systems and ethical theories re-imagines Asimov’s Laws and offers a new code of conduct for servicemembers.

What is a Code of Conduct?

Fundamentally, it is a set of beliefs on how to behave. Each service branch teaches members to follow a code of conduct like the Soldier’s Creed and Warrior Ethos, the Airman’s Creed, and the Sailor’s Creed. Reflected across these distinct codes, however, is a shared commitment to a value-system of duty, honor, and integrity, among others.

Drawing inspiration from these concepts and several robotics strategy assessments by the Marine Corps and Army, I offer a guiding vision — a human Warrior-in-the-Design Code of Conduct.

The Warrior-in-the-Design concept embodies both the Defense Directive that autonomous systems be designed to support the human judgment of commanders and operators in employing lethal force, and Human Rights Watch’s definition of human-out-of-the-loop weapons (i.e., robots that can select targets and apply force without human input or interaction).

The Warrior-in-the-Design Code of Conduct for Servicemembers:

  • “I am the Warrior-in-the-Design;
  • Every decision to employ force begins with human judgment;
  • I verify the autonomous weapon system’s target selection before authorizing engagement, escalating to fully autonomous capabilities when necessary, as a final resort;
  • I will never forget my duty to responsibly operate these systems for the safety of my comrades and to uphold the law of war;
  • For I am the Warrior-in-the-Design.”

These principles encourage integrating AI and armed robots in ways that enhance — rather than supplant — human capability and the warrior psyche in combat. Furthermore, they reinforce that humans are the central figures in overseeing, managing, and employing autonomous weapons.

International Developments

Granted, each country’s approach to developing autonomous weapons will vary. For instance, Russia’s military expects “large unmanned ground vehicles [to do] the actual fighting … alongside or ahead of the human fighting force.” Under its New Generation Plan, China aspires to lead the world in AI development by 2030 – including enhanced man-machine coordination and unmanned systems like service robots.

So far, the U.S. has focused on unmanned ground systems to support intelligence, surveillance and reconnaissance operations. The Pentagon’s Joint Artificial Intelligence Center is currently testing how AI can support the military in fighting fires and in predictive maintenance tasks. Additionally, President Trump’s Executive Order on Artificial Intelligence encourages government agencies to prioritize AI research and development. Adopting the Warrior-in-the-Design Code of Conduct is a helpful first step towards supporting this initiative.

How?

It would signal to private industry and international peers that the U.S. is committed to the responsible development of these technologies and to upholding international law. Some critics object to the idea of ‘killer robots’ because they would lack human ethical decision-making capabilities and may violate moral and legal principles. The Defense Department’s response is two-fold: First, the technology is nowhere near the advancement needed to operate fully autonomous weapons, the ones that could — hypothetically, at least — examine potential targets, evaluate how threatening they are, and fire accordingly. Second, such technological capabilities could help save the lives of military personnel and civilians, by automating tasks that are “dull, dirty or dangerous” for humans.

Perhaps this creed concept could help bridge the communication divide between groups that worry such weapons violate human dignity, and servicemembers who critically need automated assistance on the battlefield. The future of AI bears down upon each of us — let reason and ethics guide us there.

This article was originally published in The Hill.


Jessica ‘Zhanna’ Malekos Smith, the Reuben Everett Cyber Scholar at Duke University Law School, served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Before that, she was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London.

Filed Under: Blog Article Tagged With: AI, cyber, cyber warfare, digital, Warfare, warrior
