Strife

The Academic Blog of the Department of War Studies, King's College London


Ed Stacey

Offensive Cyber Series: Dr Jacquelyn Schneider on Cyber Strategy, Part II

June 25, 2021 by Ed Stacey and Dr Jacquelyn Schneider

Photo Credit: The United States Military Academy at West Point, licensed via Creative Commons.

This is part II of Ed Stacey’s interview with Dr Jacquelyn Schneider on cyber strategy for Strife’s Offensive Cyber Series. You can find part I here.


ES: You mentioned earlier your writing about this idea of a no first use policy with regards to strategic cyber attacks. I was wondering if you could speak a little to how that might help to limit escalation and maintain stability in cyberspace?

JS: One of the biggest hypocrisies, or logical inconsistencies, resident in US cyber strategy is this ambiguity about what they are willing to do offensively, even as they also say: do not dare attack our critical infrastructure or hurt our civilians. And so, if the US does not say that it is not going to attack critical infrastructure, what incentive do other states have not to attack critical infrastructure, and how do they know that the US is not going to lump these attacks into defend forward?

So I use the term no first use, which I stole from the nuclear world and which carries its own connotations, but really what I am advocating for is declaratory restraint at the strategic level. States like the US, the UK and others can say: we do not think it is appropriate for states to attack critical infrastructure and create strategic effects against civilian populations. We have seen this type of attack in war and we know that it is ethically fraught and not usually very useful, so we are not going to do it. Now that does not mean that we are going to accept it when other people do it to us, but we just want them to know that this is off the table for us.

Now if we are in a full-blown conflict and our opponent is intermingling their civilian infrastructure with their conventional or nuclear arsenal, then we might attack that. But as a rule if they are not entangling these things and we are not in a violent conflict, then we are going to say that those are off the table.

People worry that adopting this policy would handcuff the US, for example, or whichever state adopts it. But strategic cyber attacks are a really high threshold – these are attacks on critical infrastructure or nuclear infrastructure that cause significant violence to civilian populations. That is a pretty high bar. I am not talking about military infrastructures; it is a relatively defined group of targets that we are saying we are not going to attack.

But I think, in general, states like the US, the UK, France, Germany, Japan – typical allies of the US – are not the type of states that are going to attack critical infrastructure anyway. There is a sense that this is something that is not above board, that it is not a viable thing – not something that a liberal democratic state should do, especially prior to a conflict. I do not think that these states are going to do it anyway, so why not get credit for it? If you are already restraining yourself, why not get credit for it?

The other thing is that these types of attacks are actually relatively difficult to conduct and it is hard to see how strategically useful they are. This goes back to the idea that attacking civilian populations is going to decrease their desire to continue conflict. And the empirical evidence on this is mixed because sometimes you push too far, you escalate and get rally-round-the-flag effects. So strategically this is not of great use to states like the US anyway.

That policy then allows the US to be more assertive and risk acceptant with lower level cyber attacks, where you are attacking other states’ offensive cyber infrastructure, with less worry about things escalating to a more violent conflict or a strategic cyber attack.

ES: This next question is from Amy Ertan, who I spoke to for part three of this series. She was wondering how best to educate decision-makers about the strategic implications of offensive cyber capabilities. And just to add to that if I can, there has been a lot of pushback in the literature against comparisons between cyber and nuclear, but are there ways in which we can borrow ideas and concepts from the nuclear world – such as no first use – for educational purposes?

JS: In general, when issues emerge we have a tendency to analogise. Cyberspace has been rife with this: cyber is a bomb, cyber is an aeroplane, cyber is a nuclear weapon, cyber is – just recently in the Wall Street Journal – letters of marque, referencing naval operations historically. So there has been a problem with cyber operations in analogising too much to other points in history.

We actually have a lot of data now about how states interact in cyberspace. We have more big-data analysis of things that have already occurred, so the work of people like Brandon Valeriano, Ryan Maness and Ben Jensen. Then we have people who are using other data-generating mechanisms to create scenarios that have never existed, to see how people react. I do some of that work, but Nadiya Kostyuk and Sarah Kreps also have some fantastic work here using experimental methods.

So we have information to tell us when cyberspace is different to other domains. That evidence suggests that cyberspace is very different to the nuclear domain, but that does not mean that some of the concepts that we have applied to nuclear politics are not concepts that we can evaluate when it comes to cyberspace. For example, deterrence is not a nuclear concept – deterrence is a concept of how states have interacted going back thousands and thousands of years.

I stole no first use from the nuclear realm but that was actually to my detriment. I did that to kind of create a polemic but if I could go back I would not have said no first use, I would have said declaratory strategic restraint. Because it invited a lot of conversations like: well, no first use did not work for nuclear. But cyber is not nuclear and I had to spend a lot of time in the article talking about why cyber is not nuclear. So maybe that was not a useful analogy for me to try and hook people in.

I think the nuclear analogy was used a lot in the US because cyber fell under Strategic Command and that was the natural analogy that institutionally existed. But as I talked about a little bit earlier, talking about cyber – especially offensive cyber – as strictly strategic really did not lead to an understanding of what the real impacts of cyber operations are.

If I am sitting down and talking to decision-makers about cyber operations and trying to educate them, I am trying to teach them about the nuances of it. Firstly, strategic cyber operations are really hard to do – they just are. Offensive cyber is much harder than it seems. If the US was a criminal ransomware actor, they would have it in the bag. But those are not our incentives and it is actually really difficult to do the kind of operations that fit the US’ strategic priorities. So you have to teach decision-makers not only about the dangers of cyber operations, but also the difficulties and the nuances.

I like to tell decision-makers: look, most of our evidence suggests that cyber operations do not lead to escalatory behaviours. In fact, what we find is that cyber operations very rarely change people’s behaviours – that is the puzzle. So what does that mean for you? That means that, yes, you can conduct offensive cyber operations and be less worried about escalation than you were previously. But that also means that you cannot say that you are going to use offensive cyber operations to coerce and deter and signal and all of these other things. You have got to choose one or the other – you cannot have it both ways.

We are onto a new generation, though, of cyber decision-makers who have a much more mature understanding of what works and what does not work in cyberspace. We see less of cyber as a magic pixie wand and less of cyber Armageddon, outside the public discourse at least. And I am not sure how you nuance the public discourse; there are a lot of incentives to overinflate the threat and the capabilities or the capacity of the US to do big things.

So in terms of educating, we need to get rid of analogies. We need to show people: this is what the data says. We need to invest in data-generating mechanisms that help us to understand the puzzles of cyberspace. I am not strictly an empiricist but I think that in cyberspace we can actually use data, as opposed to nuclear weapons, which we have not used very often, thank god, and for which we therefore have very little data. We can actually generate good data and that can help us to understand when and why cyber operations might be more or less effective, escalatory or destabilising.

ES: And finally, does the increasing frequency and severity of cyber incidents in the US suggest that its more offensive cyber strategy is failing? Broadly, what lessons can we learn about the role of cyber operations from the US’ experimentation since 2018?

JS: You have to remember that these offensive cyber operations are actually pretty scoped. So when we see, for example, this increase in ransomware attacks from criminal organisations, nothing about US offensive cyber is geared towards criminal organisations and ransomware, at least in the current strategy. So those incidents are not an indicator of whether offence is working or not; they are more an indicator that other elements of the strategy are off kilter – that we are not investing enough in information sharing, criminal prosecution or diplomatic measures that we can use to convince states to prosecute these criminals, who are basically operating with zero fear of retribution in cyberspace.

I think the real question with things like defend forward is: are the Chinese, Russians, North Koreans, Iranians – so state actors – are they less able to use offensive cyber operations? Is it more expensive for them? Do they have to spend more time on defence? These are really hard things to measure and all of the strategies so far have punted on measurement. That is something I hope that the next strategy tackles because the problem with where the US is going when it comes to offensive cyber is that it is being organised in things called task forces. And when the US stands up a task force, there is never a clear plan about how you stand it down – it is like a perpetual cycle. So this question is really important: how do we figure out what is effective and what is not?

When you are thinking about SolarWinds and other espionage attacks, you do need to evaluate whether defend forward is doing anything against these activities to decrease the ability of those actors to even get in. That said, I think SolarWinds probably predates a lot of defend forward – that was kind of a long-standing issue. So we will see. The evidence is not there yet, but the US should try and think about how it would measure that to find out.


This is the final interview in Strife’s Offensive Cyber Series. You can find parts one, two and three with Dr Tim Stevens (Part I, Part II), Dr Daniel Moore (Part I, Part II) and Amy Ertan (Part I, Part II) here.


Offensive Cyber Series: Dr Jacquelyn Schneider on Cyber Strategy, Part I

June 24, 2021 by Ed Stacey and Dr Jacquelyn Schneider

Photo Credit: US Secretary of Defense, licensed under Creative Commons

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Dr Jacquelyn Schneider to discuss the role of offensive cyber operations in cyber strategy. For the final interview of Strife’s Offensive Cyber Series, Dr Schneider outlines the origins of defend forward and persistent engagement and discusses the relationship between offence and defence in cyber strategy, the potential for a no first use policy with regards to strategic cyber attacks and possible future trajectories in the US’ approach to cyber operations.

Ed Stacey: Jackie, if we could start back in 2018. How did the arrival of “defend forward” and “persistent engagement” alter the role and significance of offensive cyber operations within US cyber strategy?

Jacquelyn Schneider: I think the move towards persistent engagement and defend forward was a confluence of both US domestic factors, including organisational and bureaucratic politics, and the international situation.

From 2014 onwards, you see a pretty significant uptick in both the severity and amount of cyber activity, culminating in the 2016 elections with the Russian hack and release of Democratic National Committee information and cyber-enabled information operations. So there is this big change happening where the US is accepting and realising how important cyber operations are going to be both for domestic stability and international security. At the same time, you have these strange institutional politics going on within the Department of Defense (DoD) and particularly Cyber Command.

For those who are not followers of DoD internal politics, Cyber Command did not start off as its own command. It starts out as this task force and then as time goes by it becomes a sub-unified command, so it falls under Strategic Command. Now this is really important to the story of defend forward because Strategic Command is focussed on deterrence – this is the nuclear weapons command. And in their narrative about deterrence, they phrase offensive cyber as being strategic and special which translates to the Obama administration as: that sounds like it is potentially dangerous and escalatory, we should not do that very often.

So Cyber Command has this problem with narratives as they are sitting under Strategic Command and they are a little bit frustrated. I mean, imagine, here you have this huge command that is doing all of this important stuff but they are still a sub-unified command. They have to get Strategic Command to sign off on almost everything because it has all the authorities for buying stuff, for manning, for almost any relevant doctrine or strategy – any important piece of information that comes out of Cyber Command has to go through and be approved by Strategic Command. This is happening right up until the election in 2016.

Now Admiral Rogers is running Cyber Command at the time and he has this group called the Commander’s Action Group, where you have a few scholars sitting – so Emily Goldman, Michael Warner and then a series of rotating fellows who end up having a really large role in this move towards persistent engagement, like Richard Harknett. These are the historical figures whose names are never attached to these documents but were really important in driving them.

Now these three individuals sitting in this Commander’s Action Group are frustrated with deterrence and think there has to be an alternative. This is when you see Richard Harknett start publishing pieces saying: we have to get rid of deterrence and deterrence is not a useful concept in cyberspace. And he starts talking about this idea of persistent engagement, which shows up in the Cyber Command strategic vision that comes out before any of the other strategy and around the same time that Cyber Command is pushing to move from a sub-unified command to a unified command.

So this move from deterrence to persistent engagement was just as much a response to the amount of cyber attacks that were happening in the international sphere as it was to organisational frustration within Cyber Command at how little they had been able to do, or how little they perceived they had been able to do, under Strategic Command.

You will remember that Trump then wins the election and takes over, Rogers is replaced by Nakasone, they also fend off this big powerplay in domestic politics by the Director of National Intelligence to take the dual hat from Cyber Command and the National Security Agency, and Cyber Command elevates to a unified command. Now this is a really big moment in the institutional history of Cyber Command and you have this group of scholars who have been working really hard on creating the intellectual foundations for a strategy.

But persistent engagement in the DoD’s terms was not actually allowed to be a strategy. So Cyber Command has this idea where they want to be more active and forward leaning, but they are not allowed to call it a strategy. Shortly after, in 2018, the DoD comes out with their strategy – and this is being routed at the same time that persistent engagement is coming up, so these two are slightly in competition with each other – and that strategy introduces the concept of defend forward. So defend forward gets published at a different level than Cyber Command, by the Office of the Secretary of Defense, and will for the next four years be consistently confused with persistent engagement.

Defend forward is this idea that you are re-evaluating the risk matrix and combating adversary cyber operations, not after but before they take place. The language surrounding this is really vague but I interpret defend forward as being: we are going to use offensive operations to attack the adversary’s offensive operations. That is how I interpret it, but the language is super vague. Since 2016, we have had a lot of experimentation and if you follow how Nakasone talks about this and how the White House talks about this, there is a bit of confusion – they are still figuring out what this really means.

With four years of the Trump administration you see a lot of, in some ways, almost benign neglect of Cyber Command, yet at the same time they gave them new authorities to do offensive operations and they became a unified command. So you see a lot of operational experimentation that starts occurring with Cyber Command and, at the same time, the Cybersecurity and Infrastructure Security Agency, under Chris Krebs, is also experimenting. You start seeing: how far are we going to go offensively? What are we going to tell people? What are we not going to tell people? What is off limits? What is not off limits? You see a bunch of leaks to David Sanger that seem to suggest that defend forward is actually attacking critical infrastructure, which they then walk back a bit in public comment.

So at this point moving into the Biden administration I think what we have seen, at least as it is publicly discussed, is defend forward being operationalised as: we are going to help our allies by sending cyber protection teams and cyber network defenders into their countries to help them defend forward on their networks, what they call the hunt forward mission. Defend forward is going to be about using both offensive cyber and information operations to actively dissuade and degrade places like the Russian Internet Research Agency (IRA) from conducting information operations. And we have seen a lot less discussion in the last few years about defend forward being, for example, offensive attacks against critical infrastructure – that seems to have been completely pared back.

As we move into the Biden administration, I think we are going to see a bit more specificity about what the US thinks are appropriate offensive operations and what are not. At the same time, Cyber Command is a lot more confident in who it is now because it has been a unified command for four years. I think what you see is Cyber Command starting to look a lot more like Special Operations Command and a lot less like Strategic Command; you see them defining and creating their own identity.

That was a really long explanation for the evolution of these ideas, which are constantly conflated. But if people take nothing else: persistent engagement, think of that as like Cyber Command’s motto – we are going to lean forward, we are not going to wait to respond, we are going to be a doing command. And then defend forward is how the DoD thinks about offensive operations below the threshold of violent conflict. Theoretically, you should have a national strategy that pulls this all together. The current national strategy does not really talk to these two but in the future, hopefully under the Biden administration, the national cyber strategy will be leading and pulling all of these elements together.

ES: You touched there on experimentation and the need for a cohesive strategy. Do you think the US currently strikes the right balance between offence and defence, aggression and restraint or however you would like to frame that strategic choice?

JS: I think the US in the last few years has leaned heavily on strategic ambiguity when it comes to offence and this has perhaps unduly suggested that it is being more offensive than it really is. I mean, you are sitting in the UK. The UK is sometimes more risk acceptant in cyberspace than the US, partly because of its bureaucratic politics. A lot of the UK’s cyber capabilities are resident in its intelligence arm instead of being strictly militarised, which means that sometimes they are far more willing to do operations than the US, which would wonder whether they fit into some sort of military lens.

So the US actually does less offence than you might expect. But because of this strategic ambiguity and how they talk about offence, the way they couch it in these odd terms like defend forward – I mean, we all know this is just offence that they are calling defence – it just looks a bit hypocritical. I think the US could own both the offensive measures that it is taking and what it is not doing too.

The reason why you do not see defence come up a lot in these conversations is because the US struggles with how it discusses defence in a strategy. If you look at the US’ broader military strategies, in general there is a proclivity towards offence within them. I do not know if that is the “American way” or just a general desire by militaries to have more control, and there is great work by Barry Posen on this about the role of offensive doctrines. But the US is actually very concerned about defence; they just struggle with the vocabulary of how to talk about it in a strategy and how to outlay those priorities.

I think what we are going to see with the Biden administration is a more sophisticated and mature discussion of what defence is. The word resiliency is going to come up a lot more and hopefully that means they are also going to operationalise resiliency – so what does that mean in terms of investments in technology, infrastructure, people and training? And I think we are going to see a lot more of that.

In general, that discussion has not been very mature, even while sometimes I think – I hope – that the DoD is becoming more sophisticated in how it thinks about investing in those technologies. They are just struggling with: how do you operationalise that and how do you talk about it in a strategy? So I think the US does less offence than it talks about and that offence is not as big a part of the strategy as you would expect, at least not the day-to-day. Hopefully the next strategy is more explicit about this.

I am also hoping that the next strategy lays out what the US thinks are appropriate offensive measures within status quo conflict and what are not. I think there is a lot of room – I have talked and written about this – for this idea of declaratory restraint at the highest levels and that the US can gain a lot from being more declaratory about what it is not willing to do, and what it says is not appropriate for most actors to do, in cyberspace.

ES: Looking across the Atlantic, the UK has recently been accused of “cyber-rattling” in its foreign policy review, spotlighting its new offensive cyber force at the expense of things like cyber resilience. Are more aggressive, forward-leaning approaches to cyber operations compatible with the strategic goal of liberal democracies to maintain an open, reliable and secure cyberspace?

JS: There is concern that the more geared towards offensive operations that states become the more there will be a general rise in cyber activity – it becomes like, you know, the US Wild West where everyone is just shooting everyone and we do not develop norms of what is appropriate and what is not appropriate in cyberspace.

I think you can be more offensive without it being the Wild West. Because how did the Wild West turn into what California is now, which is actually super regulated? You introduce and you experiment with what is appropriate and what is not appropriate. What are laws? What are ways that we can bind each other’s behaviour? What are punishment mechanisms?

We find that actors sometimes think that cyberspace is the Wild West and they veer too far. With this Colonial Pipeline hack, the criminals put out a statement saying: well, you know, we never meant to sow mayhem… Well, okay. So they pushed too far. Unfortunately for them, they have now highlighted the role that ransomware plays in US critical infrastructure and all of these ransomware attacks, which may previously have not made the news, are making the news. And so now the public says: goodness, this ransomware thing is happening and it seems to matter – it is going to affect me getting my gas, it is going to affect me buying my hotdogs, it is going to affect the hospitals I go to. Then you find, if that is the case, that maybe the Department of Justice is going to get more money, resources or authorities to go after these criminal actors.

So this kind of tit-for-tat is going to happen as states interact and these thresholds are really being defined as they are acted out. But for a state like the US which has made some level of offensive operations part of its strategy, in order to be able to use those without turning cyberspace into the Wild West or escalating things, it needs to do three things.

Firstly, it needs to define what are appropriate actions and what are not appropriate actions. For example, the US is not going to target Russia’s pipeline. It would be helpful to say things like that: we are not going to target critical infrastructure. So they know: okay, we are going to conduct offensive operations but they are going to be aimed at the Russian IRA, at the SVR, at the Chinese People’s Liberation Army – we are not going to be focussing on critical infrastructure. So I think that helps, number one.

The second thing is the more that states are able to show that these attacks are costly, the less often they are going to happen. So in the past that has been phrased as deterrence by denial but really it is just making defence and resiliency better. Companies are less likely to pay ransomware attackers when their networks and data are resilient, so have backups and make sure that you can recover very quickly. Now that is expensive, but states and companies can invest in resiliency to make offensive operations less likely to occur.

Thirdly, having a credible strategic deterrent for when states overreach is really important. So, for example, if Russia or China were to target US critical infrastructure and cause civilian deaths, the US needs to be willing to punish them with conventional kinetic means. And that is, I think, really hard to do.

But having those three things is important to be able to say: yes, we are integrating offensive operations but we are going to do it in a responsible way. So I am more optimistic that states can integrate offensive cyber operations without it escalating into this everybody shooting at everybody Wild West scenario in cyberspace.


Part II of this interview will be published tomorrow on Friday 25th June 2021.


Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote controlled Mark 8 Wheel Barrow Counter IED Robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have impact in 20 to 30 years’ time. So we are seeing that strategic discussion but it costs so much that it is just a matter of states buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies or authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain?

AE: There are two aspects to this: one is the technical security angle and then the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Now especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed deep learning or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong and we have seen examples of that in the civic space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather whether the system is picking up the right thing. Obviously, within a conflict environment you do not want to detect a threat where there is not one or miss something.

Second, there is the threat that your algorithm or data may be compromised and you would not know. So this could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that it is not abnormal activity and no flags are raised. So threat modelling that does not consider the creativity of attackers, or the insufficiency of the algorithm, could lead to something being deployed that is not fit for purpose.
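
To make that tampering risk concrete, here is a minimal, purely illustrative sketch. The detector is scikit-learn’s off-the-shelf IsolationForest standing in for whatever a real system might use, and the per-connection features and numbers are all invented; the point is only that a model whose training window already contains attacker traffic learns to score that traffic as normal.

```python
# Toy illustration of the data-tampering risk described above. All features
# and values are invented; IsolationForest stands in for any real detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-connection features: [kilobytes sent, login attempts]
normal = rng.normal(loc=[50, 1], scale=[10, 0.5], size=(1000, 2))
attacker = rng.normal(loc=[500, 8], scale=[20, 1], size=(50, 2))  # exfiltration-like

clean = IsolationForest(contamination=0.01, random_state=0).fit(normal)
poisoned = IsolationForest(contamination=0.01, random_state=0).fit(
    np.vstack([normal, attacker])  # training data silently includes the intrusion
)

# Fraction of attacker connections each model flags as anomalous (-1):
print((clean.predict(attacker) == -1).mean())     # close to 1.0: flagged
print((poisoned.predict(attacker) == -1).mean())  # far lower: absorbed into the baseline
```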

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
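
The “make it look like a hospital” gaming can be sketched the same way: if a do-not-target rule rests on a classifier keyed to learnable surface features, an adversary can simply copy those features. Everything below – the classifier, the features, the numbers – is invented for illustration.

```python
# Toy illustration of gaming a "do not target hospitals" classifier by
# mimicking the surface features it was trained on. Entirely invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical features: [medical keywords on the site, medical-device ports open]
hospitals = np.column_stack([rng.normal(40, 5, 200), rng.integers(1, 4, 200)])
other = np.column_stack([rng.normal(2, 1, 200), rng.integers(0, 1, 200)])
X = np.vstack([hospitals, other])
y = np.array([1] * 200 + [0] * 200)  # 1 = hospital, treated as off-limits

clf = LogisticRegression(max_iter=1000).fit(X, y)

power_plant = [[1.0, 0.0]]  # genuinely not a hospital
disguised = [[45.0, 3.0]]   # the same host after dressing itself in hospital features
print(clf.predict(power_plant))  # [0]: targetable
print(clf.predict(disguised))    # [1]: now classified as a hospital, so spared
```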

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are way fewer images of tanks in the Baltic region or particular kinds of weapon. The data is much more limited in secretive military contexts and it potentially is not being shared between nations to the extent that might be desirable when it comes to building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Turning to broader conversations, norms and regulations: I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space but there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see, for example, that while the UK and US have the power and resources to invest themselves, across NATO groups of countries are coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure as well and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies – not the most secretive capabilities nor the raw data but, for example, the principles it abides by or certain open-source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures too. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.


Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part I

June 17, 2021 by Ed Stacey and Amy Ertan

Photo Credit: Mike MacKenzie, licensed via Creative Commons.

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Amy Ertan to discuss offensive cyber in the context of artificial intelligence (AI) and military innovation. For part three of Strife’s Offensive Cyber Series, Ms Ertan discusses the current role of AI in offensive cyber and potential future trajectories, including effects on the offence-defence balance and arms racing, as well as her PhD research, which explores the unforeseen and unintended security consequences of developing and implementing military AI.

Ed Stacey: Amy, could you start by briefly defining AI in the context of offensive cyber. Are we really just talking about machine learning, for example?

Amy Ertan: Artificial intelligence is not just machine learning algorithms – it is a huge range of technologies. There is a whole history of AI that goes back to the mid-1970s and late-80s: rule-based AI and knowledge-based AI, which is, as it sounds, learning based on rules and logic. Then in the last decade or so we have seen a huge uptick in machine learning-based algorithms and their various sub-branches, including deep learning and neural networks, which are incredibly complex algorithms that we cannot actually understand as humans. So, in summary, AI is a big umbrella term for different kinds of learning technologies.

At the same time, there is some snake oil in the market and a lot of what people call AI can just be probabilistic statistics. Being generous, some of the start-ups that you see are doing if-then algorithms that we could probably do on Excel. That does not, of course, account for the tech giant stuff. But when we talk about AI, we have everything from the super basic things that are not really AI to the incredibly well-financed, billion dollar projects that we see at Amazon, Microsoft and so on.

Machine learning is where a lot of today’s cutting edge research is. So the idea that you can feed data, potentially untagged data – unsupervised learning – into an algorithm, let the algorithm work through that and then make predictions based on that data. So, for example, you feed in three million pictures of cats and if the algorithm works as intended, it will then recognise what is and is not a cat.
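
For readers who have not seen that idea in code, here is a minimal unsupervised-learning sketch using scikit-learn’s KMeans on invented two-dimensional points: no labels are ever supplied, and the fitted model still sorts new points into the groups it discovered.

```python
# Minimal unsupervised-learning sketch: the algorithm is never told which
# point belongs to which group; it infers the clusters from the data alone.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two invented, unlabelled clusters of 2-D feature vectors
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.predict([[0.1, -0.2], [6.3, 5.9]]))  # the two points land in different clusters
```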

In terms of how that fits into offensive cyber, AI is another tool in the toolkit. A learning algorithm, depending on how it is designed and used, will be just like any other cyber tool that you might have, only with learning technology within it. I would make the point that it is not something that we see being utilised today in terms of pure cyber attacks because it is not mature enough to be creative. The machine learning AI that we have right now is very good at narrow tasks, but you cannot just launch it and there is no “AI cyber attack” at the moment.

ES: How might AI enhance or facilitate offensive cyber operations?

AE: As I said, AI is not being used extensively today in offensive cyber operations. The technology is too immature, although we do see AI doing interesting things when it has a narrow scope – like voice or image recognition, text generation or predictive analytics on a particular kind of data set. But looking forward, there are very feasible and clear ways in which AI-enabled technologies might enhance or facilitate cyber operations, both on the offensive and defensive side.

In general, you can talk about the way that AI-enabled tools can speed up or scale up an activity. One example of how AI might enhance offensive cyber operations is through surveillance and reconnaissance. We see already, for example, AI-enabled tools being used in intelligence processing for imagery, like drone footage, saving a huge amount of time and vastly expanding the capacity of that intelligence processing. You could predict that being used to survey a cyber network.

Using AI to automate reconnaissance, to do that research – the very first stage of a cyber attack – is not a capability that you have now. But it would certainly enhance a cyber operation in terms of working out the best target at an organisation – where the weak link was, the best way in. So there is a lot that could be done.

ES: Are we talking then about simply an evolution of currently automated functions or does AI have the potential to revolutionise offensive cyber?

AE: In terms of whether AI will be just a new step or a revolution, generally my research has shown that it will be pretty revolutionary. AI-enabled technology has the power to revolutionise conflict and cyber conflict, and to a large extent that is through an evolution of automated functions and autonomous capabilities. I think the extent to which it is a full-blown revolution will depend on how actors use it.

Within cyberspace, you have this aspect that there might be AI versus AI cyber conflict in the future. Where your offensive cyber tool – your intrusion, your exploit tool – goes head-to-head with your target’s AI-enabled cyber defence tools, which might be intrusion prevention or spam filtering tools that are already AI-enabled. It really depends on how capabilities are used. You will have human creativity but then an AI algorithm makes decisions in ways that humans do not, so that will change some aspects of how offensive cyber activity takes place.

There is debate as to whether this is a cyber attack or information warfare, but I think deep fakes would be an example of a technology or tool that is already being used, falsifying information, that has revolutionised information warfare because of the scale and the nature of the internet today. So how far AI revolutionises offensive cyber will depend not only on its use but also a complex set of interconnections between AI, big data, online connectedness and digital reliance that will come together to change the way that conflict takes place online.

That is a complicated, long answer to say: it depends, but AI definitely does have the potential to revolutionise offensive cyber.

ES: No, thank you – I appreciate that revolutionary is a bit of a loaded term.

AE: Yes, there is a lot of hyperbole when you talk about AI in warfare. But through my doctoral research, every industry practitioner and policy-maker that I have spoken to has agreed that it is a game-changer. Whether or not you agree with the hype, it changes the rules of the game because the speed completely changes and the nature of an attack may completely change. So you definitely cannot say that the power of big data and the power of AI will not change things.

ES: This next question is from Dr Daniel Moore, who I spoke to last week for part two of this series. He was wondering if you think that AI will significantly alter the balance between offence and defence in cyberspace?

AE: I am going to disappoint Danny and say: we do not know yet. We do already see, of course, this interesting balance that states are choosing when they pick their own defence versus offence postures. And I think it is really important to note here that AI is just one tool in the arsenal for a team that is tasked with offensive cyber capabilities. At this point, I do not predict it making a huge difference.

At least when we talk about state-coordinated offensive cyber – sophisticated attacks, taking down adversaries or targeting critical national infrastructure, for example – these require such sophisticated, niche tools that the automation capabilities provided by AI are unlikely to offer any cutting-edge advantage there. So that depends. AI cyber defence tools, on the other hand, streamline a huge amount of activity: picking out abnormal activity in your network or your logs eliminates a huge amount of manual analysis that cyber defence analysts might otherwise have to do and gives them more time for meaningful analysis.
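
As a purely illustrative aside on that triage gain, the snippet below ranks log lines by rarity so an analyst reads the strangest events first. The scoring is a deliberately crude heuristic standing in for a real AI-enabled defence tool, and the log entries are invented.

```python
# Crude stand-in for AI-assisted log triage: surface the rarest events so an
# analyst reviews a handful of lines instead of the whole log. Entries invented.
from collections import Counter

logs = [
    "login ok user=alice src=10.0.0.5",
    "login ok user=bob src=10.0.0.7",
    "backup job completed host=fileserver-2",
] * 250 + ["powershell encoded-command launched from workstation-14"]

counts = Counter(logs)

def rarity(line: str) -> float:
    return 1.0 / counts[line]  # rarer events score higher

for line in sorted(set(logs), key=rarity, reverse=True)[:3]:
    print(f"{rarity(line):.4f}  {line}")
# The one-off PowerShell event tops the list; routine entries sink to the bottom.
```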

AI speeds up and streamlines activity on both the offensive and defensive side, so I think it simply fits into the wider policy discussions for a state. It is one aspect but not the determining aspect, at the moment anyway or in the near future.

ES: And I guess the blurring of the lines between offence and defence in some cyber postures complicates the issue a little?

AE: Yes, especially when you look at the US and the way they define persistent engagement and defending forward. It is interesting as to where different states will draw their own lines on reaching outside their networks to take down the infrastructure of someone they know is attacking them – offensive activity for defensive purposes. So I think the policy question is much bigger than AI.

ES: Thinking more geopolitically, the UK’s Integrated Review was heavy on science and new technologies and other countries are putting a lot of resources into AI as well. There seems to be some element of a security dilemma here, but would you go so far as to say that we are seeing the start of a nascent AI arms race – what is your view of that framing?

AE: I think to an extent, yes, we do see aspects of a nascent AI arms race. But it is across all sectors, which comes back to AI as a dual-use technology. The Microsoft AI capability that we use now to chat with friends is also being used by NATO command structures and other military structures in command and control infrastructure, albeit in a slightly different form.

Because cutting-edge AI is being developed by private companies, which have the access and resources to do this, it is not like there is this huge arsenal of inherently weaponised AI tools. On the flip side, AI as a dual-use technology means that everything can be weaponised or gamed with enough capability. So it is a very messy landscape.

There have been large debates around autonomous systems in conflict generally, like drones, and I think there is an extent to which we can apply this to cyberspace too. While there is this security dilemma aspect, it is not in any states’ interests to escalate into full-blown warfare that cannot be deescalated and that threatens their citizens, so tools and capabilities should be used carefully.

Now there is a limit to how much you can apply this to cyberspace because of its invisible nature, the lack of transparency and a completely different deterrence structure. But there is an argument that states will show restraint in weaponizing AI where it is not in their interest. You see this conversation taking place, for example, around lethal autonomous weapons at the United Nations Group of Governmental Experts, where it is generally considered that taking the human out of the loop is highly undesirable. But it is complicated and early days.

Looking at the UK, my research has shown that there is pressure to develop AI capabilities in this space and there are perceptions of an AI arms race across the private sector, which is who I spoke to. And there is this awareness that AI investment must happen, in a large part because of the anticipated behaviour of adversary states – the idea that other states do not have the same ethical or legal constraints when it comes to offensive cyber or the use of military AI, which is what my PhD thesis focuses on. The only preventative answer to stop this security dilemma building up into an AI arms race seems to be some kind of consensus mechanism, whereby like-minded states agree not to weaponize AI in this way. That is why my research has taken me to NATO, to look in the military context at what kinds of norms can be developed and whether there is a role for international agreement in this way.

If I had to summarise that argument into one or two sentences: there are trends suggesting that there is an AI arms race which is bigger than conflict, bigger than the military and bigger than cyber. So you have to rely on the security interests of the states themselves not to escalate and to potentially form alliance agreements to prevent escalation.


Part II of this interview will be published tomorrow on Friday 18th June 2021.


Offensive Cyber Series: Dr Daniel Moore on Cyber Operations, Part II

June 11, 2021 by Dr Daniel Moore and Ed Stacey

Photo Credit: Ecole polytechnique / Paris / France, licensed with CC BY-SA 2.0.

This is part II of Ed Stacey’s interview with Dr Daniel Moore on cyber operations for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about alliances more broadly, what sort of opportunities and challenges do allies face when conducting joint operations in cyberspace?

DM: Allied operations on networks – I am not a fan of the term cyberspace – are contentious as well. They are a good measure more sensitive than any conventional equivalent that you can think of. It is not like having a joint military operation: it means putting your sensitive infrastructure and capabilities on the line alongside an ally. That is not to say it does not happen and there have been documented cases which were purportedly joint operations by multiple countries. So I think it will happen, but there are complexities involved. I know that NATO has already declared that they are together, as an alliance, bringing forward cyber capabilities that they will use jointly. I welcome that declaration, even if I am sceptical as to what it actually means.

I would tend to believe that, considering how porous NATO is as an entity and how there are varying levels of trust within NATO, truly sensitive capabilities will be kept off the table by individual member states in favour of their own arsenals and sets of strategic capabilities. This is not to say it is not possible, but it is unlikely that at a NATO level you will see joint operations that are truly strategic in nature. What you might see is allied members that are operating together. I do not think that, for example, a joint UK-US operation against a target is out of the question, especially if one brings a certain set of capabilities to the table and one brings others – somebody gives the tools, this unit has the relevant exploits, this intelligence organisation had already developed access to that adversary and so on. Melding that together has a lot of advantages, but it requires a level of operational intimacy that is higher than what you would be able to achieve at the NATO alliance level.

ES: Moving beyond the state, what role does the private sector play in the operational side of offensive cyber? Do we have the equivalent of private military contractors in cyberspace, for example?

DM: There is a massive role for the private sector across the entire operational chain within offensive cyber operations. I would say a few things on this. Yes, they cover the entire chain of operations and that includes vulnerability research, exploit development, malicious tool development and then even specific outfits that carry out the entire operational lifecycle, so actually conduct the intrusion itself for whatever purposes. In some cases, it is part of a defence-industrial complex like in the US, for example, where you have some of the giant players in defence developing offensive capabilities, both on the event- and presence-based side of things. And ostensibly you would have some of those folks contributing contractors and operators to actually facilitate operations.

But in other countries that have a more freeform or less mature public sector model for facilitating offensive cyber operations, the reliance on third party private organisations is immense. If you look, for example, at some of the US indictments against Iranian entities, you will see that they charged quite a few Iranian private companies for engaging in offensive cyber operations. The same happens in China as well, where you see private sector entities engaging in operations driven by public sector objectives. In some cases, they are entirely subsumed by a government entity, whereas in others they are just doing work on their behalf. In some cases, you actually see them use the same infrastructure in one beat for national security objectives, then the workday ends and they pivot and start doing ransomware to get some more cash in the evenings – using the same tools or infrastructure, or something slightly different. So, yes, the private sector plays an immense role throughout this entire ecosystem, mostly because the cost of entry is low and the opportunities are vast.

ES: Just to finish, you have a book coming out soon on offensive cyber. Can you tell us anything about what to expect and does it have a title or release date yet?

DM: The book is planned for release in October. It will be titled Offensive Cyber Operations: Understanding Intangible Warfare, and it is basically a heavily processed version of my PhD thesis that has been adapted, firstly, with some additional content to reflect more case studies, but also to appeal to anybody who is interested in the topic without necessarily having a background in cyber or military strategy and doctrine. So it is trying to bridge the gap and make the book accessible, exactly to dispel some of the ambiguities around the utility of cyber operations. Questions like: how are they currently being used? What can they be used for? What does the “cyber war” narrative mean? When does an offensive cyber operation actually qualify as an act of cyber warfare? And, most importantly, what are the key differences between how different countries approach offensive cyber operations? Things like organisational culture, different levels of maturity, strategic doctrine and even just circumstance really shape how countries approach the space.

So I tackle four case studies – Russia, the US, China and Iran – and each one of those countries has unique advantages and disadvantages, they bring something else to the table and have an entirely different set of circumstances for how they engage. For example, the Iranians are incredibly aggressive and loud in their offensive cyber operations. But the other side to this is that they lack discipline, their tools tend to be of a lower quality and while they are able to achieve tactical impact, it does not always translate to long-term success.

The US is very methodical in its approach – you can see, taste and smell the bureaucracy in every major operation that it does. But that bureaucratic entanglement and the constant tension between the National Security Agency, Cyber Command and other involved military entities results in a more ponderous approach to cyber operations, although those organisations obviously bring a tonne of access and capability.

With the Russians, you can clearly see how they do not address cyber operations as a distinct field. Instead, they look at the information spectrum more holistically, which is of pivotal importance to them – so shaping what is “the truth” and creating the narrative for longer-term strategic success is more important than the specifics. That being said, they are also one of the most prolific offensive actors that we have seen, including multiple attacks against global critical infrastructure and various aggressive worms that exacted a heavy toll from targets. So for Russia, if you start looking at their military doctrine, you can see just how much they borrow, not only from their past in electronic warfare but also their extensive past in information operations, and how those blend together to create a broader spectrum of information capabilities in which offensive cyber operations are just one component.

And finally, the Chinese are prolific actors in cyber espionage – provably so. They have significant technical capabilities, perhaps somewhat shy of their American counterparts but they are high up there. They took interesting steps to solidify their cyber capabilities under a military mandate when they established the Strategic Support Force, which again – like the NCF – tried to resolve organisational tensions by coalescing those capabilities. But they are largely unproven in the offensive space. They do have an interesting scenario on their plate to which cyber could and may play a role, which is any attempt at reclaiming Taiwan – something I look at extensively in the book and how that shapes their offensive posture.

So the book is a combination of a broader analysis of the significance of cyber operations and then how they are concretely applied by different nations for different purposes.


The next interview in Strife’s Offensive Cyber Series is with Amy Ertan on AI and military innovation. It will be released in two parts on Thursday 17th and Friday 18th June 2021.

