
Strife

The Academic Blog of the Department of War Studies, King's College London


cyber warfare

Offensive Cyber Series: Dr Tim Stevens on Offensive Cyber in the 2020s, Part II

June 4, 2021 by Ed Stacey and Dr Tim Stevens

Photo Credit: UK Ministry of Defence, Crown Copyright.

This is part II of Ed Stacey’s interview with Dr Tim Stevens on offensive cyber in the 2020s for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about the relationship between offensive cyber and international law and ethics, how far have debates gone around when and how it is right to use these capabilities and how confident are we in their conclusions?

TS: Depending on who you ask, this issue is either settled or it is not. Now the point about the discussion around these capabilities is that, actually, when we think about international law and ethics, whether from a liberal democratic standpoint or otherwise, the conversation is not about the capabilities themselves, generally speaking – it is not about cyber weapons as such – but tends to be more about the targets of those capabilities and the effects.

In 2015, the United Nations (UN) Group of Governmental Experts (GGE) on information security, which was led by the permanent five – the UK, Russia, France, China and the US – but also involved twenty or so other countries, agreed that international law applies to this domain in its entirety. That includes the UN Charter, they found a couple of years later. There is also a big NATO process which says that international humanitarian law (IHL), which governs the use of force in war, also applies to this environment. And what comes out of that is an understanding of several things.

Firstly, that the use of any capabilities that you might describe as offensive – or indeed defensive, hypothetically – has to abide by the laws of war. So they have to be necessary and proportionate, and they have to observe distinction, in the sense that they cannot target civilians under normal circumstances. The 2015 GGE said that you could not target civilian infrastructure through cyber means and so on.

But the problem is that, as we look at the world around us, for all of those international legal constraints and associated ethical arguments about not targeting civilians, for example, what we see is the significant use by states and other actors of exactly these types of capabilities, targeting exactly these types of targets. We have seen civilian infrastructure being targeted by the Russians, for example in Kiev on a couple of occasions in winter, where they have essentially turned the electricity off. That is exactly the opposite of what they signed up to: they signed up to say that that was not legal under international law, yet they do it anyway.

So the question really is not whether international law applies. It is partly an issue of the details of how it applies and then, if someone is in breach of that, what you then do, which throws you back into diplomacy and geopolitics. So already you have gone beyond the conversation about small bits of malicious software being used as offensive cyber capabilities and elevated it to the level of global diplomacy and geopolitics. And essentially, there is a split in the world between liberal democracies, who at least adhere for the most part to international law, and a small set of other countries who very clearly do not.

ES: Given that context, what are the prospects for regulating offensive cyber activity? Is there the potential for formal treaties and agreements or are we talking more about the gradual development of norms of responsible state behaviour?

TS: This is the live question. Although we have an emerging understanding of the potential tools with which we might regulate these capabilities – including IHL and norms of responsible state behaviour – we have not got to the point of saying, for example, that we are going to have a global treaty. But there are multi-stakeholder efforts to do something that looks a little like a global agreement on, for example, the use of capabilities for targeting civilian infrastructure. There is something called the Cybersecurity Tech Accord, another is the Paris Call for Trust and Security in Cyberspace, and there are half a dozen others that, even if not explicitly focussed on offensive cyber, treat it as part of a suite of behaviours around which they wish to develop norms and potentially even regulation.

But it is incredibly difficult. The capabilities themselves are made of code: they are 1s and 0s, they zip around global networks, they are very difficult to interdict, they multiply, they distribute and they can attack a thousand different systems at once if they are done in a very distributed fashion. How do you tell where they come from? They do not come with a return address as the cliché goes. How do you tell who is responsible? Because no-one is going to own up to them. How do you tell if they are being developed? Well you cannot because they are done in secret. You can have a military parade in the streets of Washington DC, Pyongyang or Moscow, but you cannot do the same with cyber capabilities.

So it is very difficult to monitor both their use and their retention and development. And if nobody does own up to them, which is commonly the case, how do you punish anyone for breaching emerging norms or established international law? It is incredibly difficult. So the prospect for formal regulation anytime soon is remote.

ES: So far we have talked about some quite complex issues. Given the risks involved in developing and deploying these types of capabilities, what do you think needs to happen to improve public understanding of offensive cyber to the point that we can have a proper discussion about those risks?

TS: Public understanding of offensive cyber is not good and that is not the fault of the public. There are great journalists out there who take care in communicating these issues, and then there are others who have just been put on a story by their sub-editor and expected to come up to speed in the next half hour to put some copy out. It is really difficult to generate nuanced public understanding of things when the media environment is what it is.

Now I am not blaming the media here; I am just saying that that is one of the factors that plays into it. Because we have a role as academics as well and, ultimately, a lot of this falls to governments to communicate, which has conventionally not been great. Partly this is because a lot of the use and development of these capabilities comes from behind the classification barriers of national security, defence and intelligence. We have heard bits about their use in the battlespace against Islamic State in Iraq and Syria that has leaked out in interviews with senior decision-makers in the US and the UK, but generally not a lot else.

What we tend to get is policy statements saying: we have a sovereign offensive cyber capability and we are going to use it at a time and place of our choosing against this set of adversaries, which are always hostile states, terrorist groups, serious organised criminals and so on. But it does not encourage much public debate if everything that comes out in policy then gets called a cyber war capability because actions to stop child sexual exploitation by serious organised crime groups are not a war-like activity – they fall in a different space and yet they are covered by this cyber war moniker.

Now there is an emerging debate around offensive cyber. Germany, which is constitutionally quite constrained when it comes to offensive capabilities, has had a conversation about it. There is a discussion in the Netherlands, also in the US about their new cyber posture – which is much more forward leaning than previous ones – and we are beginning to have a conversation in the UK as well. But a lot of that has fallen to academics to do and, I guess, I am part of that group who are looking at this issue and trying to generate more of a public conversation.

But it is difficult and the response you will sometimes get from government is: we do not need to have a conversation because we have already declared that everything we do is in accordance with our obligations under international law – we will do this against a set of adversaries that are clearly causing the nation harm and so on. That is fine. We are not doubting that that is their statement; we would just like to know a little bit more about the circumstances in which you would use these capabilities.

What, for example, is the new National Cyber Force going to do? How is it going to be structured? What are the lines of responsibility? Because one of the weird things about joint military-intelligence offensive cyber operations is that, in a country like the UK, you have the defence secretary signing off on one side and the foreign secretary signing off on the other because you are involving both the military and GCHQ, which have different lines of authority. So where does responsibility lie? Accountability? What happens if something goes wrong? What is your exact interpretation of international law? To be fair to the UK, they have set that interpretation out very clearly.

But there is more than just an academic interest here. If this is the future of conflict in some fashion and it has societal effects, then we need to have a conversation about whether these are the capabilities that we want to possess and deploy. Not least if the possession and deployment of those capabilities generates norms of state behaviour that include the use of cyber conflict. Is that something that we want to do in societies of the 21st century that are hugely dependent upon computer networks and deeply interconnected with other countries?

Those are the types of questions that we need to raise and we also need to raise the quality of public understanding. That is partly the job of academia and partly the job of media, but certainly the job of government.


The next interview in Strife’s Offensive Cyber Series is with Dr Daniel Moore on cyber operations. It will be released in two parts on Thursday 10th and Friday 11th June 2021.

Filed Under: Blog Article, Feature Tagged With: cyber, cyber warfare, cyberwarfare, dr tim stevens, ed stacey, offensive cyberwarfare, offensive cyberwarfare series, tim stevens

The Future of Cyber Warfare – An Interview with Greg Austin

April 26, 2020 by Ed Stacey

Lt. Col. Tim Sands (from left), Capt. Jon Smith and Lt. Col. John Arnold monitor a simulated test April 16 in the Central Control Facility at Eglin Air Force Base, Fla. They use the Central Control Facility to oversee electronic warfare mission data flight testing. Portions of their missions may expand under the new Air Force Cyber Command. (Image credit: U.S. Air Force/Capt. Carrie Kessler)

On 29 January 2020, the International Institute for Strategic Studies (IISS) hosted an event on its upcoming Measuring Coercive Cyber Power Project (available to watch here). Ed Stacey sat down with Greg Austin, Senior Fellow for the Cyber, Space, and Future Conflict Programme at the IISS, the day after the event, for a discussion on this new project, cyber power and offensive cyber operations.

For more information on the IISS and the latest analysis of international security, strategy and defence issues, visit them here or follow them on Facebook, Twitter (@IISS_org) and Instagram (@iissorg).

ES: What is the Measuring Coercive Cyber Power project?

GA: This is a project that began at the IISS before I joined and has been run by a couple of very experienced professionals. Its purpose is to understand the basic fundamentals of cyber power. In other words: what are its economic, scientific, technological, and organisational underpinnings?

ES: What are your main findings?

GA: The main findings are a little obvious in one sense, but also a bit surprising. We have done thirteen country studies which include a review of the United States (US), China, Russia, Iran and North Korea – fairly obvious countries, perhaps – and then other states like Indonesia, India, Malaysia, Japan, and Canada. What we have found is that the US’ cyber power is miles ahead of any other country in the world; that the economic, scientific, and indeed, social underpinnings of cyber power are more powerful in the case of the US than any other country. That lead really revolves around the Information Communications Technology (ICT) industry – the fact that technologies like the Internet were devised in the US and that a very unusual relationship exists between its defence sector, industry, and universities. This relationship really does not exist anywhere else in the world. And the US has also been in the ICT industry longer than any other country – at the higher levels at least.

Whilst China is often regarded as a peer competitor of the US, it was in a political mess from 1966 to 1976 (the Cultural Revolution), which has hindered its development of cyber power. During this period, they closed down their universities and persecuted scientists and researchers, calling them ‘stinking weeds’ – and that was only about 45 years ago. It is very hard for China, which was already a poor and developing country back in 1966, to overcome this negative legacy – one of ten years of persecution of its scientists and researchers, and ten years of closed universities.

ES: Were there any results which you found surprising?

GA: What I did not fully appreciate, because there is so much public media coverage of countries like Iran and North Korea, is that these countries really are bit-players. By this I mean that, whilst they can certainly cause a lot of damage in cyberspace, they only do so every couple of years. And so, what we have seen over about the last ten years is that whilst countries like Iran and North Korea get a lot of headlines with their cyber attacks, they carry out these attacks very infrequently. Yes, they cause great damage and great disruption, but they do not seem to have a strong pattern to their non-espionage cyber activity.

ES: If you completed this study in another ten or twenty years time, what changes would you expect to see in your results?

GA: I think the most likely change is for the US and its allies to increase their lead over these disrupter countries, like North Korea, Iran and Russia; and for China to still be sort of struggling somewhere in-between. I think there will be political reversals in China which will undermine the strong push that we are currently seeing towards ICT improvements and the thrust towards China’s ambition to become a dominant player in cyberspace.

ES: What do you mean by cyber attacks or offensive cyber capabilities? Does this include, for example, information warfare?

GA: It certainly includes information warfare. Offensive cyber capabilities have a dualistic character. On the one hand, we can think of them as cyber attacks on cyber systems. On the other, certainly in American, Russian, and Chinese military thinking, information warfare effects (psychological effects as commonly understood) can also be delivered through cyberspace, in ways that we could not imagine thirty to forty years ago. And so, we are in a situation today where, as we struggle with the security of Information Technology (IT) systems, and ways of attacking and defending these systems, it is now also the case that politics is being played out in cyberspace.

ES: Is it problematic to call these capabilities weapons – to attach that label to them and the connotations that come with it?

GA: In an academic sense, it probably is. But I think the common person would understand a weapon as something that you can use to damage other people or things with. A hammer is a tool – it can be used for making things. But a hammer can also kill people. And code is the same: it can be a tool for making things and it can be a weapon that kills people. You can use code to turn off electric power stations and create negative health outcomes in hospitals. You can create negative health outcomes in hospitals by interfering with their basic computerised information. So, we are in an environment where IT, software and all the things around them can be seen as both tools for good and weapons for bad. I appreciate fully that arguments exist about the nature of violence and war; but I think, at the end of the day, the average person in the street – and certainly the average politician – would understand that the malicious things that are happening in cyberspace are weapons.

ES: Taking forward your example of a hammer: if I took a hammer and threatened you with it – that would deter you. Would this be the same in cyberspace, i.e. if a state has a ‘cyber-hammer’, does that deter other states? 

GA: The interesting thing about the hammer example is that you could hold up the hammer and appear to be threatening me, but there would have to be a lot of circumstances in place before I would see that as any sort of threat and actually take it seriously. For instance, I would have to understand what your record is of actually carrying out those sorts of threats, and I would have to make a calculation about how afraid you would be of my retaliation. What we are seeing in cyberspace – for example, with the Americans’ ‘Cyber Deterrence Initiative’ (which is, in a sense, not only raising the hammer but actually attacking countries like Russia and China to try and undermine their offensive malicious cyber activity) – is that it is difficult to tell whether such initiatives are actually working as deterrence policies.

That initiative involves what the Americans call ‘defending forward’ – attacking into the Russian and Chinese systems. It has been going on for over a year (around eighteen months) but we do not have enough information in the public domain – we do not have enough evidence – to determine whether the Russians or the Chinese are actually being deterred. So, it is a good place to start the argument: ‘If I raise a hammer, do I deter you?’. But we have to study what happens next.

ES: What potential is there for cyber to disrupt established practices of deterrence? I have in mind, particularly, nuclear deterrence, which was discussed during the event – the idea of a nuclear missile being in flight and then hacked and potentially redirected.

GA: I have actually written an article on this subject with a Russian scholar where we tried to understand how Russian military leaders actually think about this. There is very little evidence in the public domain, but we found enough to believe that some Russian military leaders think that cyber capability shifts the balance between offence and defence, and encourages states with nuclear weapons to strike pre-emptively before losing their command and control, or guidance systems. Now, the evidence is far from comprehensive – these are, in a sense, fragmentary thoughts. But from the US government’s point of view, we have to believe that they are using every technological lever they have to devise attack packages that could cripple the Russian government’s command and control of their nuclear weapons.

ES: Given the uncertain nature of cyberspace – as a highly complex, interconnected and evolving domain – is it possible to wield offensive cyber capabilities strategically?

GA: I certainly believe that it is possible to do that – but that is one of the big debates in the academic community. After last night’s seminar, for example, we received an email from a retired senior military officer in the UK who made the proposition that it is not possible to use cyberweapons strategically – that they are really just some sort of tactical, disruptive asset. But, in fact, the US government and the Chinese government are on the record as saying, planning and doing things which demonstrate their belief that cyber military capability is a game-changer. And that is very well captured in the Chinese statement in 2015 that outer space and cyberspace are the commanding heights of all international security competition. That was a statement in their official 2015 military strategy, and it was not on page 33 in a footnote – it was right at the beginning.

ES: How likely, if at all, is cyber war?

GA: According to the US government, cyber war is already happening. They believe that Russia and China have already launched open conflict with the US in cyberspace. Mind you, China and Russia believe exactly the same thing about the US. Whether or not we call that war, or some other form of conflict, is a point of debate. To go back to Thomas Rid’s book: even though Thomas’ arguments were valid as he constructed them, there is a whole realm of strategic thought and activity which he did not fully take account of, and that we are now seeing much more in the open. States believe that they can use these tools, as weapons, in a way that does not provoke an armed response. But, as we see in the American case, this is provoking some sort of retaliation in cyberspace through cyber attacks. As we experience year after year of this sort of interaction – of heightening tension and conflict in cyberspace – I think we are going to reach the point where one or other of these great powers decides that enough is enough in cyberspace and starts to take some non-cyber retaliatory measures. And you could argue that we are already seeing that in the case of the US’ policy on the ‘tech war’ with China.

ES: You spoke yesterday about cyber operations in the 1998-99 Kosovo conflict as being the first act of cyber war, which is interesting because Stuxnet is frequently cited as the first. Is there a certain threshold in cyberspace that you could identify, perhaps in terms of effect, where a cyber operation becomes an act of cyberwar?

GA: There are a number of international lawyers, more than a handful, who believe that the US’ use of Stuxnet against Iran was a breach of international law. It was an act by one state against another causing damage in the second state. If you are not causing physical damage, then most states do not appear to regard that as aggression – it is something else. But where the US actually causes physical damage – sabotage of what was ostensibly a civil undertaking: the enrichment of nuclear fuel – that in international law is plainly and simply a breach. Yet to find a level of escalation above that which would provoke an armed response is another question.

When security agents of the French government sank Greenpeace’s ship, Rainbow Warrior, in a New Zealand harbour in the 1980s, the French were held responsible for that in an international arbitration and paid damages – it was a breach of international law. That is really the same sort of act that the US perpetrated against Iran – creating physical damage, sabotage, and in the New Zealand case, they killed a couple of people – a similar sort of international tort. But we have not got to the point where any state has committed a cyber attack on the level that the receiving state has judged it to be a justification for an armed military response.

ES: Do you think that current international law is fit for purpose with regards to cyber conflict?

GA: Yes, I think it is – and I think the Tallinn Manual 1.0 proved that fairly conclusively. A whole range of international discussions suggest it is fit for purpose. But international law is not a perfect institution. And as in the Law of the Sea where there is lots of room for interpretation, and as in International Humanitarian Law where there is lots of room for interpretation, there is equally lots of room for interpretation in law applicable to hostile activities in cyberspace.

ES: Is there room to develop norms or specific agreements on activities in cyberspace?

GA: I think the conversation about norms has been productive and useful; but states signing up to new black letter international legal norms seems highly unlikely. There are several meanings of the word norm. One is that a norm, in a sense, sets a moral tenor for conduct. Another meaning of the word, of course, is a norm as enshrined in black letter law. I think that the future of the normative conversation in cyberspace will be about setting the moral tenor of action, rather than coming up with new black letter law.

ES: Marcus [Willet] spoke yesterday about the potential for distinguishing between discriminate [e.g. Stuxnet] and indiscriminate capabilities [e.g. WannaCry], which I think would be a good place to start.

GA: Yes, I think that is an excellent point.

ES: Is the US pre-eminent in cyberspace and, if so, do you think this will last?

GA: One of the reasons why it should last is that the US currently sits at the top of the most powerful intelligence alliance human history has ever seen – and that does not look like weakening anytime soon. Moreover, major adversaries, Russia and China, do not appear to be interested in crafting an intelligence alliance – in fact, the Russian government is very explicit that it does not see its military relations with China as an alliance. So, I think that as long as the US can maintain that very powerful intelligence alliance – and all of the signs are that it will – then Russia and China do not have a hope.

Just to clarify why that is important: the foundation of all effective operations in cyberspace is high-quality intelligence about the enemy’s information systems, their vulnerabilities, and how those vulnerabilities exist at any specific point in time. It is no good collecting intelligence about, say, the Iranian nuclear centrifuges on one day in 2006 and then arriving back in 2009 with the attack package because they might have changed the software configurations. You have got to keep assessing and reassessing, almost on a daily basis: ‘How is the offensive environment looking?’, so that any attack package that you do develop can be used at a later date. That requires a huge intelligence effort, and it is that intelligence effort that the US and its allies can deliver far better than any single country in the world – even one that looks as powerful as China.

ES: Is it the case at the moment, particularly in the context of the tech war, that cyberspace is just a two-horse race between the US and China?

GA: I think that China sees it as a two-horse race and many people in the US see it as a two-horse race – but it is really not. Modern technology and ICT represent globalised knowledge. And what we see with the US and its allies is that they are far better at exploiting that globally available knowledge. Almost everything around modern ICT science is equally available to China, Russia, Iran, North Korea, and the US. The difference is that the US has sixty to seventy years of excellent performance in exploiting that knowledge and putting it into practice. What we have seen is that countries like South Korea, Malaysia, and Taiwan can come along and pick off pieces of that ICT pie and become world-class in that space. So, that is the phenomenon we are seeing: this sort of multi-horse race; or many horses in the race, all excelling in different parts of it.

We put together some information based on the 2019 Fortune Global 500 companies which shows that, out of the Fortune Global 500, the US has fourteen companies in the tech and telecoms sectors whereas China only has eight. And what is interesting about the other 28 companies in those sectors is that all but two of them belong to very close US allies – so, European, Japanese, South Korean or Taiwanese. Also really interesting about that data is that while mainland China has eight companies in the tech or telecoms sectors, Taiwan has seven… little Taiwan has seven! How many millions of people are there in little Taiwan versus big China with its massive financial resources? And it has only got eight. So, if it is a two-horse race then Taiwan could be considered to be in the race as well.

ES: How does the UK compare to other states at the top of the table?

GA: The UK is one of the top-ten countries in the world in, what you might call, the national security aspects of cyberspace. And it may well be in the top-ten countries in the world in other aspects of ICT development. But, rather interestingly, there are only two tech and telecoms companies in the Fortune Global 500 which are UK companies – I think one was BT and the other Vodafone – when, as I mentioned, you have got little Taiwan with seven. So, the UK is not that well positioned in some of the commercial aspects. That being said, we have got to be careful because the Fortune Global 500 reflects revenue from selling things and services. And, really, what is happening with companies in Taiwan is that they are selling many more expensive things than Britain.

While Britain sells a lot of good ICT services, they are just not sold on the scale that countries like South Korea and entities like Taiwan are. And then there is the question of, well, even if the UK’s not earning as much money from what it is doing in cyberspace, maybe what the UK’s doing is of much higher value. UK interventions are happening at a strategic level and it is no coincidence that companies like BAE Systems, BT and Vodafone are global brands that have a role in the economic, strategic, and scientific development of a very large number of countries around the world. So, Britain is a presence that cannot easily be summed up in gross statistics such as the Fortune Global 500.

ES: And finally, what role do you expect cyber to play in the UK’s upcoming Strategic Defence and Security Review?

GA: As I suggested last night, a lot really depends on leadership choices. You can have the objective reality of the technology but there is no revolution in military affairs unless you have got a military leader who recognises the military potential and exploits it. And it is a bit the same with economic policy. Australia provides an interesting case in point. Malcolm Turnbull, who was very briefly the Prime Minister of Australia, represented a level of technological awareness that no preceding prime minister, nor his successor, has shown in any way, shape, or form. Malcolm Turnbull was probably the only member of his government, at cabinet level, who had any appreciation of technology. So, unless you have got that sort of leadership then it is going to be very tough.

Additionally, I am afraid to say that the Brexit decision was a repudiation not only of the concept of the EU but of the value of globally integrated science and technology. Just ask the people in the universities what they think of it, and the research community. People who backed the Brexit decision really represent the same sort of mentality as ministers in the Australian government who do not have a full appreciation of what is involved in modern science and technology – how it is an integrated, globalised activity. When you put up your national boundaries, you are really not equipping yourself or positioning yourself well for the future. Now, that does not mean that the British defence establishment cannot do that because the British defence establishment has a very different position as a part of the Five Eyes community. And that scientific and technical community – represented by the close military alliance – may deliver outcomes for Britain, and imperatives in a strategic and defence review, that go counter to the Brexit mentality. But I really think that the people who currently dominate the UK government are not the right people to lead Britain into a brighter technological future and are not the people to lead the British national security establishment to a brighter technological future – I am afraid to say.


Ed Stacey is a BA International Relations student at King’s College London and a Student Ambassador for the International Institute for Strategic Studies (IISS). The #IISStudent Ambassador programme connects students interested in global security, political risk and military conflict with the Institute’s work and researchers.

Greg Austin is a Senior Fellow for the Cyber, Space and Future Conflict Programme at the IISS. Prior to joining the IISS, Greg worked at the University of New South Wales Canberra, as Professor and Deputy Director of its multi-disciplinary centre for cyber security research. He was a Senior Visiting Fellow in the Department of War Studies at King’s College London from 2012 to 2014.

Filed Under: Blog Article, Feature, Interview Tagged With: cyber warfare, ed stacey, Greg Austin, iiss

Ethics for the AI-Enabled Warfighter – The Human ‘Warrior-in-the-Design’

June 13, 2019 by J. Zhanna Malekos Smith

(U.S. Navy photo by Petty Officer 1st Class Shannon E. Renfroe/Released)

Can a victor truly be crowned in the great power competition for artificial intelligence? According to Russian President Vladimir Putin, “whoever becomes the leader in this sphere will become the ruler of the world.” But the life of a state, much like that of a human being, is always subject to shifts of fortune. To illustrate, let’s consider this fabled ancient tale. At a lavish banquet King Croesus asked Solon of Athens if he knew anyone more fortunate than Croesus; to which Solon wisely answered: “The future bears down upon each one of us with all the hazards of the unknown, and we can only count a man happy when the gods have granted him good fortune to the end.” Thus, to better prepare the U.S. for sustainable leadership in AI innovation and military ethics, I recommend a set of principles to guide human warfighters in employing lethal autonomous weapon systems — armed robots.

Sustainable Leadership

By 2035, the U.S. Department of Defense expects to have ground forces teaming up with robots. The discussion on how autonomous weapon systems should responsibly be integrated with human military elements, however, is slowly unfolding. As Congress begins evaluating what the Defense Department should do, it must also consider preparing tomorrow’s warfighters for how armed robots will test military ethics.

As a beginning point of reference, Isaac Asimov’s Three Laws of Robotics require: (1) a robot must not harm humans; (2) a robot must follow all instructions by humans, except if following those instructions would violate the first law; and (3) a robot must protect itself, so long as its actions do not violate the first or second laws. Unfortunately, these laws are silent on how human ethics apply here. Thus, my research into autonomous weapon systems and ethical theories re-imagines Asimov’s Laws and offers a new code of conduct for servicemembers.

What is a Code of Conduct?

Fundamentally, it is a set of beliefs on how to behave. Each service branch teaches members to follow a code of conduct like the Soldier’s Creed and Warrior Ethos, the Airman’s Creed, and the Sailor’s Creed. Reflected across these distinct codes, however, is a shared commitment to a value-system of duty, honor, and integrity, among others.

Drawing inspiration from these concepts and several robotics strategy assessments by the Marine Corps and Army, I offer a guiding vision — a human Warrior-in-the-Design Code of Conduct.

The Warrior-in-the-Design concept embodies both the Defense Directive that autonomous systems be designed to support the human judgment of commanders and operators in employing lethal force, and Human Rights Watch’s definition of human-out-of-the-loop weapons (i.e., robots that can select targets and apply force without human input or interaction).

The Warrior-in-the-Design Code of Conduct for Servicemembers:

  • “I am the Warrior-in-the-Design;
  • Every decision to employ force begins with human judgment;
  • I verify the autonomous weapon system’s target selection before authorizing engagement, escalating to fully autonomous capabilities only when necessary, as a final resort;
  • I will never forget my duty to responsibly operate these systems for the safety of my comrades and to uphold the law of war;
  • For I am the Warrior-in-the-Design.”

These principles encourage integrating AI and armed robots in ways that enhance — rather than supplant — human capability and the warrior psyche in combat. Furthermore, they reinforce that humans are the central figures in overseeing, managing, and employing autonomous weapons.

International Developments

Granted, each country’s approach to developing autonomous weapons will vary. For instance, Russia’s military expects “large unmanned ground vehicles [to do] the actual fighting … alongside or ahead of the human fighting force.” China’s New Generation Plan, meanwhile, sets out its aspiration to lead the world in AI development by 2030 – including enhanced man-machine coordination and unmanned systems like service robots.

So far, the U.S. has focused on unmanned ground systems to support intelligence, surveillance and reconnaissance operations. The Pentagon’s Joint Artificial Intelligence Center is currently testing how AI can support the military in fighting fires and predictive maintenance tasks. Additionally, President Trump’s Executive Order on Artificial Intelligence encourages government agencies to prioritize AI research and development. Adopting the Warrior-in-the-Design Code of Conduct is a helpful first-step to supporting this initiative.

How?

It would signal to private industry and international peers that the U.S. is committed to the responsible development of these technologies and to upholding international law. Some critics object to the idea of ‘killer robots’ because they would lack human ethical decision-making capabilities and may violate moral and legal principles. The Defense Department’s response is two-fold: First, the technology is nowhere near the advancement needed to operate fully autonomous weapons, the ones that could — hypothetically, at least — examine potential targets, evaluate how threatening they are, and fire accordingly. Second, such technological capabilities could help save the lives of military personnel and civilians, by automating tasks that are “dull, dirty or dangerous” for humans.

Perhaps this creed concept could help bridge the communication divide between groups that worry such weapons violate human dignity, and servicemembers who critically need automated assistance on the battlefield. The future of AI bears down upon each of us — let reason and ethics guide us there.

This article was originally published in The Hill.


Jessica ‘Zhanna’ Malekos Smith, the Reuben Everett Cyber Scholar at Duke University Law School, served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Before that, she was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London.

Filed Under: Blog Article Tagged With: AI, cyber, cyber warfare, digital, Warfare, warrior

Strife Series on Cyberwarfare and State Perspectives, Part III – The argument for a more critical analysis on the United States

July 23, 2018 by Shivali Bhatt

Military Operation in Action, Soldiers Using Military Grade Laptop Targeting Enemy with Satellite (Credit Image: Gorodenkoff / Stock Image)

A critical line of argument regarding cyber warfare today is that it has supposedly brought about contextual changes that challenge the balance of power in the international system. The broad consensus is that large, powerful states, like the United States, are losing leverage against those traditionally deemed small and weak. According to an article published earlier this year by the World Economic Forum Global Platform, the rising domain of cyber warfare can be seen as having something of a levelling effect in the world today: any state or non-state entity with access to the Internet and digital technology can develop powerful cyber weapons. At the same time, some news sources have claimed that the much-anticipated cyberwar is already underway, and that the United States is not ready or will most likely lose. The simplistic nature of such discourse does not allow for a more critical understanding of the factors that influence the nature and reality of cyber warfare. This article critiques these narratives by analysing the factors that influence the strategic efficacy of cyberwarfare, contextualising them with reference to the current state of cyberwarfare in the United States.

The United States is the most powerful state in the world, particularly regarding its military and intelligence capacity. President Trump elevated the original Cyber Command to a Unified Combatant Command earlier this year.


The importance of intelligence and collaboration

While it takes a great deal of skill and effort to develop a powerful cyber weapon, the most complicated part of the process is application, or deployment. It is this stage – one that relies on intelligence agencies and international alliances – that determines the extent to which a cyber operation will yield strategic leverage for a state. In other words, cyber weapons are generally part of an extensive collection of capabilities.

Theoretically, the state with the best-resourced and most well-connected intelligence community – provided it is an active political player in global affairs – is likely to reap the greatest strategic benefits from the domain of cyberwarfare. The more in-depth and holistic the collection and analysis of intelligence data, the smarter the offensive cyber strategy. In this context, the United States has notable leverage. The U.S. spends approximately $1 trillion on establishments and organisations that serve a national security purpose, and its intelligence community spans seventeen federal agencies. Moreover, these bureaus have tightly woven relationships with a large number of agencies operating in other states, with bases and ground-level operatives in over forty countries, including Israel and the United Kingdom. As NATO’s Locked Shields exercise demonstrates, cyberwarfare is a multi-dimensional domain shaped by the nature of cooperation and collaboration between states. The Stuxnet virus, for instance, was planted with the assistance of the CIA’s regional partners in Israel – assets that were crucial to such a clandestine and sensitive operation. These practical requirements of implementing cyberwarfare strategies explain why the U.S. remains, and will likely remain, a dominant player in the field.


The broader political context

Given that cyberwarfare is an aspect of broader political strategy, states that are regularly engaged in international affairs are more likely to determine the context for cyber-attacks. The United States is considered extremely influential, while North Korea – regardless of how large, fast-growing or highly skilled its ‘cyber army’ appears – remains a back-seat driver. Narratives that present North Korea as a case study exemplifying the ‘levelling effect’ in the world today often rest on highly fragmented arguments taken out of context.

It is useful to consider how economics and politics are woven together into the strategic context of cyber warfare, given that a prime part of developing cyber warfare strategy involves gathering in-depth knowledge on a person or situation. Just as former President Obama’s administration exploited the weaknesses of Russia’s economy by imposing heavy sanctions against Moscow in 2014, Washington could gain a notable edge by targeting Putin’s private affairs offshore, with the consequences determined by the extent to which those affairs shape Russia’s domestic political context. According to a National Bureau of Economic Research paper, Russian offshore holdings amount to approximately $800 billion to $1.3 trillion, most of which belongs to President Putin and his associates. This wealth has been a contributing factor to his political power and ability to maintain authority in Russia, enabling him to govern and preside over state institutions and the secret police. Targeting his foreign assets would therefore be a strategic application of U.S. cyber power.


Underlying factors

In this discussion, it is useful to recognise the longer-term damage that traditional military weapons can inflict on both intellectual and physical infrastructures, and that attacks delivered through cyberspace have not yet demonstrated such ability. At the same time, the Stuxnet weapon and newer versions inspired by its technological layering, such as the relatively recent Triton bug, can act as catalysts to broader military strategy. However, the accurate deployment of such a weapon not only requires a significant amount of skill and resources – both of which are usually available only to higher-earning economies – but can also go wrong. In the case of Stuxnet, several sources confirmed that the Americans and Israelis ‘lost control’ of the operation.

It goes without saying that the United States is a powerful influencer in the world today, especially in a context of increasing globalisation and digital technology. A great many concepts, processes and deeply embedded cultural factors would also need to come under threat for the ‘levelling’ argument to hold any traction in the longer term.


Conclusion

Today, it is popular to present cyberwarfare as a rising domain that challenges all pre-existing tenets of global politics, the narrative being that weaker states such as North Korea are on the rise and powerful ones such as the United States should watch their backs. However, the authors of such arguments tend to disregard the deeper aspects of warfare analysis, such as the power of alliances, the broader context, and particularly the underlying societal and cultural factors that existed before the advent of the digital age. While cyber warfare has proven to be a powerful mechanism, its capacity to threaten powerful actors like the United States needs to be assessed through a more critical lens. Doing so will also help better conceptualise its strategic worth in comparison to more conventional methods of warfare.



Shivali is currently pursuing her MA in Intelligence and International Security at the Department of War Studies, King’s College London. She is also a Series Editor at Strife, as well as a Creative Writer at cybersecurity startup PixelPin, where she contributes articles on ‘Thought Leadership’, encouraging readers to approach security issues through innovative means. Prior to that, she spent some time in Hong Kong under the InvestHK and EntrepreneurHK organisations, engaging with the cybersecurity and tech scene there. Her core research interests include modern warfare and contemporary challenges, cybersecurity, and strategic policy analysis. You can follow her at @shivalixb


Image Source: https://www.istockphoto.com/gb/photo/military-operation-in-action-soldiers-using-military-grade-laptop-targeting-enemy-gm879913090-245205517

Filed Under: Blog Article Tagged With: Cyber Security, cyber warfare, intelligence, Strife series, tactical, USA
