
Strife

The Academic Blog of the Department of War Studies, King's College London


Series

Offensive Cyber Series: Dr Jacquelyn Schneider on Cyber Strategy, Part I

June 24, 2021 by Ed Stacey and Dr Jacquelyn Schneider

Photo Credit: US Secretary of Defense, licensed under Creative Commons

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Dr Jacquelyn Schneider to discuss the role of offensive cyber operations in cyber strategy. For the final part of Strife’s Offensive Cyber Series, Dr Schneider outlines the origins of defend forward and persistent engagement, and discusses the relationship between offence and defence in cyber strategy, the potential for a no-first-use policy for strategic cyber attacks and possible future trajectories in the US’ approach to cyber operations.

Ed Stacey: Jackie, if we could start back in 2018. How did the arrival of “defend forward” and “persistent engagement” alter the role and significance of offensive cyber operations within US cyber strategy?

Jacquelyn Schneider: I think the move towards persistent engagement and defend forward was a confluence of both US domestic factors, including organisational and bureaucratic politics, and the international situation.

From 2014 onwards, you see a pretty significant uptick in both the severity and amount of cyber activity, culminating in the 2016 elections with the Russian hack and release of Democratic National Committee information and cyber-enabled information operations. So there is this big change happening where the US is accepting and realising how important cyber operations are going to be both for domestic stability and international security. At the same time, you have these strange institutional politics going on within the Department of Defense (DoD) and particularly Cyber Command.

For those who are not followers of DoD internal politics, Cyber Command did not start off as its own command. It starts out as this task force and then as time goes by it becomes a sub-unified command, so it falls under Strategic Command. Now this is really important to the story of defend forward because Strategic Command is focussed on deterrence – this is the nuclear weapons command. And in their narrative about deterrence, they phrase offensive cyber as being strategic and special, which translates to the Obama administration as: that sounds like it is potentially dangerous and escalatory, we should not do that very often.

So Cyber Command has this problem with narratives as they are sitting under Strategic Command and they are a little bit frustrated. I mean, imagine, here you have this huge command that is doing all of this important stuff but they are still a sub-unified command. They have to get Strategic Command to sign off on almost everything because it has all the authorities for buying stuff, for manning, for almost any relevant doctrine or strategy – any important piece of information that comes out of Cyber Command has to go through and be approved by Strategic Command. This is happening right up until the election in 2016.

Now Admiral Rogers is running Cyber Command at the time and he has this group called the Commander’s Action Group, where you have a few scholars sitting – so Emily Goldman, Michael Warner and then a series of rotating fellows who end up having a really large role in this move towards persistent engagement, like Richard Harknett. These are the historical figures whose names are never attached to these documents but were really important in driving them.

Now these three individuals sitting in this Commander’s Action Group are frustrated with deterrence and think there has to be an alternative. This is when you see Richard Harknett start publishing pieces saying: we have to get rid of deterrence and deterrence is not a useful concept in cyberspace. And he starts talking about this idea of persistent engagement, which shows up in the Cyber Command strategic vision that comes out before any of the other strategy and around the same time that Cyber Command is pushing to move from a sub-unified command to a unified command.

So this move from deterrence to persistent engagement was just as much a response to the amount of cyber attacks that were happening in the international sphere as it was to organisational frustration within Cyber Command at how little they had been able to do, or how little they perceived they had been able to do, under Strategic Command.

You will remember that Trump then wins the election and takes over, Rogers is replaced by Nakasone, they also fend off this big power play in domestic politics by the Director of National Intelligence to take the dual hat from Cyber Command and the National Security Agency, and Cyber Command elevates to a unified command. Now this is a really big moment in the institutional history of Cyber Command and you have this group of scholars who have been working really hard on creating the intellectual foundations for a strategy.

But persistent engagement in the DoD’s terms was not actually allowed to be a strategy. So Cyber Command has this idea where they want to be more active and forward leaning, but they are not allowed to call it a strategy. Shortly after, in 2018, the DoD comes out with their strategy – and this is being routed at the same time that persistent engagement is coming up, so these two are slightly in competition with each other – and that strategy introduces the concept of defend forward. So defend forward gets published at a different level than Cyber Command, by the Office of the Secretary of Defense, and will for the next four years be consistently confused with persistent engagement.

Defend forward is this idea that you are re-evaluating the risk matrix and combating adversary cyber operations, not after but before they take place. The language surrounding this is really vague but I interpret defend forward as being: we are going to use offensive operations to attack the adversary’s offensive operations. That is how I interpret it, but the language is super vague. Since 2016, we have had a lot of experimentation and if you follow how Nakasone talks about this and how the White House talks about this, there is a bit of confusion – they are still figuring out what this really means.

With four years of the Trump administration you see a lot of, in some ways, almost benign neglect of Cyber Command, yet at the same time they gave them new authorities to do offensive operations and they became a unified command. So you see a lot of operational experimentation that starts occurring with Cyber Command and, at the same time, the Cybersecurity and Infrastructure Security Agency, under Chris Krebs, is also experimenting. You start seeing, how far are we going to be offensive? What are we going to tell people? What are we not going to tell people? What is off limits? What is not off limits? You see a bunch of leaks to David Sanger that seem to suggest that defend forward is actually attacking critical infrastructure, which they then walk back a bit in public comment.

So at this point moving into the Biden administration I think what we have seen, at least as it is publicly discussed, is defend forward being operationalised as: we are going to help our allies by sending cyber protection teams and cyber network defenders into their countries to help them defend forward on their networks, what they call the hunt forward mission. Defend forward is going to be about using both offensive cyber and information operations to actively dissuade and degrade places like the Russian Internet Research Agency (IRA) from conducting information operations. And we have seen a lot less discussion in the last few years about defend forward being, for example, offensive attacks against critical infrastructure – that seems to have completely pared down.

As we move into the Biden administration, I think we are going to see a bit more specificity about what the US thinks are appropriate offensive operations and what are not. At the same time, Cyber Command is a lot more confident in who it is now because it has been a unified command for four years. I think what you see is Cyber Command starting to look a lot more like Special Operations Command and a lot less like Strategic Command, you see them defining and creating their own identity.

That was a really long explanation for the evolution of these ideas, which are constantly conflated. But if people take nothing else: persistent engagement, think of that as like Cyber Command’s motto – we are going to lean forward, we are not going to wait to respond, we are going to be a doing command. And then defend forward is how the DoD thinks about offensive operations below the threshold of violent conflict. Theoretically, you should have a national strategy that pulls this all together. The current national strategy does not really talk to these two but in the future, hopefully under the Biden administration, the national cyber strategy will be leading and pulling all of these elements together.

ES: You touched there on experimentation and the need for a cohesive strategy. Do you think the US currently strikes the right balance between offence and defence, aggression and restraint or however you would like to frame that strategic choice?

JS: I think the US in the last few years has leaned heavily on strategic ambiguity when it comes to offence and this has perhaps unduly suggested that it is being more offensive than it really is. I mean, you are sitting in the UK. The UK is sometimes more risk acceptant in cyberspace than the US, partly because of its bureaucratic politics. A lot of the UK’s cyber capabilities are resident in its intelligence arm instead of being strictly militarised, which means that sometimes they are far more willing to do operations than the US, which would wonder whether they fit into some sort of military lens.

So the US actually does less offence than you might expect. But because of this strategic ambiguity and how they talk about offence, the way they couch it in these odd terms like defend forward – I mean, we all know this is just offence that they are calling defence – it just looks a bit hypocritical. I think the US can own the offensive measures that they are doing and what they are not doing too.

The reason why you do not see defence come up a lot in these conversations is because the US struggles with how it discusses defence in a strategy. If you look at the US’ broader military strategies, in general there is a proclivity towards offence within them. I do not know if that is the “American way” or just a general desire by militaries to have more control, and there is great work by Barry Posen on this about the role of offensive doctrines. But the US is actually very concerned about defence; they just struggle with the vocabulary of how to talk about it in a strategy and how to outlay those priorities.

I think what we are going to see with the Biden administration is a more sophisticated and mature discussion of what defence is. The word resiliency is going to come up a lot more and hopefully that means they are also going to operationalise resiliency – so what does that mean in terms of investments in technology, infrastructure, people and training? And I think we are going to see a lot more of that.

In general, that discussion has not been very mature, even if sometimes I think – I hope – that the DoD is becoming more sophisticated in how it thinks about investing in those technologies. They are just struggling with: how do you operationalise that and how do you talk about that in a strategy? So I think the US does less offence than it talks about and that offence is not as big a part of the strategy as you would expect, at least not the day-to-day. Hopefully the next strategy is more explicit about this.

I am also hoping that the next strategy lays out what the US thinks are appropriate offensive measures within status quo conflict and what are not. I think there is a lot of room – I have talked and written about this – for this idea of declaratory restraint at the highest levels and that the US can gain a lot from being more declaratory about what it is not willing to do, and what it says is not appropriate for most actors to do, in cyberspace.

ES: Looking across the Atlantic, the UK has recently been accused of “cyber-rattling” in its foreign policy review, spotlighting its new offensive cyber force at the expense of things like cyber resilience. Are more aggressive, forward-leaning approaches to cyber operations compatible with the strategic goal of liberal democracies to maintain an open, reliable and secure cyberspace?

JS: There is concern that the more geared towards offensive operations that states become the more there will be a general rise in cyber activity – it becomes like, you know, the US Wild West where everyone is just shooting everyone and we do not develop norms of what is appropriate and what is not appropriate in cyberspace.

I think you can be more offensive without it being the Wild West. Because how did the Wild West turn into what California is now, which is actually super regulated? You introduce and you experiment with what is appropriate and what is not appropriate. What are laws? What are ways that we can bind each other’s behaviour? What are punishment mechanisms?

We find that actors sometimes think that cyberspace is the Wild West and they veer too far. With this Colonial Pipeline hack, the criminals put out a statement saying: well, you know, we never meant to sow mayhem… Well, okay. So they pushed too far. Unfortunately for them, they have now highlighted the role that ransomware plays in US critical infrastructure and all of these ransomware attacks, which may previously have not made the news, are making the news. And so now the public says: goodness, this ransomware thing is happening and it seems to matter – it is going to affect me getting my gas, it is going to affect me buying my hotdogs, it is going to affect the hospitals I go to. Then you find, if that is the case, that maybe the Department of Justice is going to get more money, resources or authorities to go after these criminal actors.

So this kind of tit-for-tat is going to happen as states interact and these thresholds are really being defined as they are acted out. But for a state like the US which has made some level of offensive operations part of its strategy, in order to be able to use those without turning cyberspace into the Wild West or escalating things, it needs to do three things.

Firstly, it needs to define what are appropriate actions and what are not appropriate actions. For example, the US is not going to target Russia’s pipeline. It would be helpful to say things like that: we are not going to target critical infrastructure. So they know: okay, we are going to conduct offensive operations but they are going to be at the Russian IRA, they are going to be at the SVR, they are going to be at the Chinese People’s Liberation Army – we are not going to be focussing on critical infrastructure. So I think that helps, number one.

The second thing is the more that states are able to show that these attacks are costly, the less often they are going to happen. So in the past that has been phrased as deterrence by denial but really it is just making defence and resiliency better. Companies are less likely to pay ransomware attackers when their networks and data are resilient, so have backups and make sure that you can recover very quickly. Now that is expensive, but states and companies can invest in resiliency to make offensive operations less likely to occur.

Thirdly, having a credible strategic deterrent when states overreach is really important. So, for example, if Russia or China were to target US critical infrastructure and cause civilian deaths, the US needs to be willing to punish them with conventional kinetic means. And that is, I think, really hard to do.

But having those three things is important to be able to say: yes, we are integrating offensive operations but we are going to do it in a responsible way. So I am more optimistic that states can integrate offensive cyber operations without it escalating into this everybody shooting at everybody Wild West scenario in cyberspace.


Part II of this interview will be published tomorrow on Friday 25th June 2021.


Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote controlled Mark 8 Wheel Barrow Counter IED Robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have impact in 20 to 30 years’ time. So we are seeing that strategic discussion but it costs so much that it is just a matter of states buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies or authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain?

AE: There are two aspects to this: one is the technical security angle and then the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Now especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed deep learning or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong and we have seen examples of that in the civic space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather is it picking up the right thing? Obviously, within a conflict environment you do not want to detect a threat where there is not one or miss something.
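
The scale problem behind that “black box” is easy to demonstrate. Here is a minimal Python sketch (assuming scikit-learn is installed; the data is random and purely illustrative) showing how quickly even a small network outgrows human inspection:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))   # 200 samples, 100 input features
y = rng.integers(0, 2, size=200)  # random binary labels, demo only

# Three modest hidden layers -- tiny by deep-learning standards.
clf = MLPClassifier(hidden_layer_sizes=(256, 256, 128)).fit(X, y)

n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print(f"trainable parameters: {n_params}")  # roughly 125,000

None of those weights corresponds to a human-readable rule, which is why a wrong decision can be hard to trace back to a cause.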

Second, there is the threat that your algorithm or data may be compromised and you would not know. So this could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that it is not abnormal activity and no flags are raised. So the way in which threat modelling does not consider the creativity of attackers, or insufficiency of the algorithm, could lead to something being deployed that is not fit for purpose.
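
As a rough illustration of that tampering risk, consider a toy anomaly detector in Python (scikit-learn assumed; the traffic figures are synthetic and the two features are hypothetical):

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Baseline traffic features: (bytes transferred, connections per minute).
normal = rng.normal(loc=[500, 10], scale=[50, 2], size=(1000, 2))
attacker = rng.normal(loc=[5000, 80], scale=[100, 5], size=(20, 2))

clean_model = IsolationForest(random_state=0).fit(normal)
print(clean_model.predict(attacker))    # mostly -1: flagged as anomalous

# Poisoned baseline: the attacker has mixed their own traffic into the
# training window, so the detector learns it as "normal".
poison = rng.normal(loc=[5000, 80], scale=[100, 5], size=(200, 2))
poisoned_model = IsolationForest(random_state=0).fit(np.vstack([normal, poison]))
print(poisoned_model.predict(attacker)) # mostly +1: no flags raised

The detector still runs, still reports and still looks healthy – which is exactly why this failure mode is hard to spot.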

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
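
That gaming attack can be sketched in a few lines. In this toy Python example (scikit-learn assumed; both features are invented), a classifier meant to spare hospitals is fooled by nudging a site’s observable features toward the hospital cluster:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy features: (share of medical keywords, share of health-system traffic).
hospitals = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(200, 2))
others = rng.normal(loc=[0.2, 0.1], scale=0.1, size=(200, 2))
X = np.vstack([hospitals, others])
y = np.array([1] * 200 + [0] * 200)  # 1 = hospital, do not target

clf = LogisticRegression().fit(X, y)

site = np.array([[0.2, 0.1]])  # clearly not a hospital
print(clf.predict(site))       # [0]

# Adversarial move: step along the model's weight vector -- the cheapest
# direction to shift the decision score -- until the label flips.
step = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
print(clf.predict(site + 0.8 * step))  # [1]: now "looks like" a hospital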

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are way fewer images of tanks in the Baltic region or particular kinds of weapon. The data is much more limited in secretive military contexts and it potentially is not being shared between nations to the extent that might be desirable when it comes to building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Talking about broader conversations, norms and regulations: I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space but there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see, for example, while the UK and US have the power and resources to invest themselves, across NATO groups of countries are coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure as well and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies, not the most secretive nor the raw data but, for example, the principles it abides by or certain open source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures too. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.


Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part I

June 17, 2021 by Ed Stacey and Amy Ertan

Photo Credit: Mike MacKenzie, licensed via Creative Commons.

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Amy Ertan to discuss offensive cyber in the context of artificial intelligence (AI) and military innovation. For part three of Strife’s Offensive Cyber Series, Ms Ertan discusses the current role of AI in offensive cyber and potential future trajectories, including effects on the offence-defence balance and arms racing, as well as her PhD research, which explores the unforeseen and unintended security consequences of developing and implementing military AI.

Ed Stacey: Amy, could you start by briefly defining AI in the context of offensive cyber. Are we really just talking about machine learning, for example?

Amy Ertan: Artificial intelligence is not just machine learning algorithms – it is a huge range of technologies. There is a whole history of AI that goes back to before the mid-1970s and late-80s: rule-based AI and knowledge-based AI, which is, as it sounds, learning based on rules and logic. Then in the last decade or so we have seen a huge uptick in machine learning-based algorithms and their various sub-branches, including deep learning and neural networks, which are incredibly complex algorithms that we cannot actually understand as humans. So, in summary, AI is a big umbrella term for different kinds of learning technologies.
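
That distinction can be shown in a few lines of Python (an illustrative sketch; the rule, data and threshold are all invented):

from sklearn.tree import DecisionTreeClassifier

def rule_based_is_malicious(failed_logins: int) -> bool:
    # Knowledge-based AI: the rule is written down by a human analyst.
    return failed_logins > 10

# Machine learning: the same kind of boundary is induced from examples.
X = [[1], [3], [5], [12], [20], [40]]  # failed logins per hour
y = [0, 0, 0, 1, 1, 1]                 # 1 = labelled malicious
learned = DecisionTreeClassifier().fit(X, y)

print(rule_based_is_malicious(12))  # True, because an expert said so
print(learned.predict([[12]]))      # [1], because the data said so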

At the same time, there is some snake oil in the market and a lot of what people call AI can just be probabilistic statistics. Being generous, some of the start-ups that you see are doing if-then algorithms that we could probably do on Excel. That does not, of course, account for the tech giant stuff. But when we talk about AI, we have everything from the super basic things that are not really AI to the incredibly well-financed, billion dollar projects that we see at Amazon, Microsoft and so on.

Machine learning is where a lot of today’s cutting edge research is. So the idea that you can feed data, potentially untagged data – unsupervised learning – into an algorithm, let the algorithm work through that and then make predictions based on that data. So, for example, you feed in three million pictures of cats and if the algorithm works as intended, it will then recognise what is and is not a cat.
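
In code, the two settings look like this (a Python sketch assuming scikit-learn, with random feature vectors standing in for the millions of cat pictures):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
cats = rng.normal(loc=1.0, size=(500, 64))   # stand-in image features
dogs = rng.normal(loc=-1.0, size=(500, 64))
X = np.vstack([cats, dogs])

# Unsupervised: no labels at all -- the algorithm just finds structure.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Supervised: labelled examples in, a cat/not-cat predictor out.
y = np.array([1] * 500 + [0] * 500)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(rng.normal(loc=1.0, size=(1, 64))))  # [1]: "cat"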

In terms of how that fits into offensive cyber, AI is another tool in the toolkit. A learning algorithm, depending on how it is designed and used, will be just like any other cyber tool that you might have, only with learning technology within it. I would make the point that it is not something that we see being utilised today in terms of pure cyber attacks because it is not mature enough to be creative. The machine learning AI that we have right now is very good at narrow tasks, but you cannot just launch it and there is no “AI cyber attack” at the moment.

ES: How might AI enhance or facilitate offensive cyber operations?

AE: As I said, AI is not being used extensively today in offensive cyber operations. The technology is too immature, although we do see AI doing interesting things when it has a narrow scope – like voice or image recognition, text generation or predictive analytics on a particular kind of data set. But looking forward, there are very feasible and clear ways in which AI-enabled technologies might enhance or facilitate cyber operations, both on the offensive and defensive side.

In general, you can talk about the way that AI-enabled tools can speed up or scale up an activity. One example of how AI might enhance offensive cyber operations is through surveillance and reconnaissance. We see already, for example, AI-enabled tools being used in intelligence processing for imagery, like drone footage, saving a huge amount of time and vastly expanding the capacity of that intelligence processing. You could predict that being used to survey a cyber network.

Using AI to automate reconnaissance, to do that research – the very first stage of a cyber attack – is not a capability that you have now. But it would certainly enhance a cyber operation in terms of working out the best target at an organisation – where the weak link was, the best way in. So there is a lot that could be done.

ES: Are we talking then about simply an evolution of currently automated functions or does AI have the potential to revolutionise offensive cyber?

AE: In terms of whether AI will be just a new step or a revolution, generally my research has shown that it will be pretty revolutionary. AI-enabled technology has the power to revolutionise conflict and cyber conflict, and to a large extent that is through an evolution of automated functions and autonomous capabilities. I think the extent to which it is a full-blown revolution will depend on how actors use it.

Within cyberspace, you have this aspect that there might be AI versus AI cyber conflict in the future. Where your offensive cyber tool – your intrusion, your exploit tool – goes head-to-head with your target’s AI-enabled cyber defence tools, which might be intrusion prevention or spam filtering tools that are already AI-enabled. It really depends on how capabilities are used. You will have human creativity but then an AI algorithm makes decisions in ways that humans do not, so that will change some aspects of how offensive cyber activity takes place.

There is debate as to whether this is a cyber attack or information warfare, but I think deep fakes would be an example of a technology or tool that is already being used, falsifying information, that has revolutionised information warfare because of the scale and the nature of the internet today. So how far AI revolutionises offensive cyber will depend not only on its use but also a complex set of interconnections between AI, big data, online connectedness and digital reliance that will come together to change the way that conflict takes place online.

That is a complicated, long answer to say: it depends, but AI definitely does have the potential to revolutionise offensive cyber.

ES: No, thank you – I appreciate that revolutionary is a bit of a loaded term.

AE: Yes, there is a lot of hyperbole when you talk about AI in warfare. But through my doctoral research, every industry practitioner and policy-maker that I have spoken to has agreed that it is a game-changer. Whether or not you agree with the hype, it changes the rules of the game because the speed completely changes and the nature of an attack may completely change. So you definitely cannot say that the power of big data and the power of AI will not change things.

ES: This next question is from Dr Daniel Moore, who I spoke to last week for part two of this series. He was wondering if you think that AI will significantly alter the balance between offence and defence in cyberspace?

AE: I am going to disappoint Danny and say: we do not know yet. We do already see, of course, this interesting balance that states are choosing when they pick their own defence versus offence postures. And I think it is really important to note here that AI is just one tool in the arsenal for a team that is tasked with offensive cyber capabilities. At this point, I do not predict it making a huge difference.

At least when we talk about state-coordinated offensive cyber – sophisticated attacks, taking down adversaries or against critical national infrastructure, for example – they require such sophisticated, niche tools that the automation capabilities provided by AI are unlikely to offer any cutting-edge advantage there. So that depends. AI cyber defence tools streamline a huge amount of activity, whether that is picking out abnormal activities in your network or your logs, which eliminates a huge amount of manual analysis that cyber defence analysts might have to do and gives them more time for meaningful analysis.
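
The streamlining Ertan describes can be as simple as surfacing statistical outliers so that only a handful of events reach a human. A minimal pure-Python sketch (the log counts are invented):

import statistics

hourly_logins = [101, 98, 97, 105, 99, 102, 96, 100, 103, 97, 415, 98]

mean = statistics.mean(hourly_logins)
stdev = statistics.pstdev(hourly_logins)

# Flag only the hours that sit far outside the baseline.
flagged = [(hour, count) for hour, count in enumerate(hourly_logins)
           if abs(count - mean) > 3 * stdev]
print(flagged)  # [(10, 415)] -- one spike for the analyst to examine

Production tools are vastly more sophisticated, but the pay-off is the same: thousands of routine records reduced to a short queue of candidates for meaningful analysis.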

AI speeds up and streamlines activity on both the offensive and defensive side, so I think it simply fits into the wider policy discussions for a state. It is one aspect but not the determining aspect, at the moment anyway or in the near future.

ES: And I guess the blurring of the lines between offence and defence in some cyber postures complicates the issue a little?

AE: Yes, especially when you look at the US and the way they define persistent engagement and defending forward. It is interesting as to where different states will draw their own lines on reaching outside their networks to take down the infrastructure of someone they know is attacking them – offensive activity for defensive purposes. So I think the policy question is much bigger than AI.

ES: Thinking more geopolitically, the UK’s Integrated Review was heavy on science and new technologies and other countries are putting a lot of resources into AI as well. There seems to be some element of a security dilemma here, but would you go so far as to say that we are seeing the start of a nascent AI arms race – what is your view of that framing?

AE: I think to an extent, yes, we do see aspects of a nascent AI arms race. But it is across all sectors, which comes back to AI as a dual-use technology. The Microsoft AI capability that we use now to chat with friends is also being used by NATO command structures and other military structures in command and control infrastructure, albeit in a slightly different form.

Because cutting-edge AI is being developed by private companies, which have the access and resources to do this, it is not like there is this huge arsenal of inherently weaponised AI tools. On the flip side, AI as a dual-use technology means that everything can be weaponised or gamed with enough capability. So it is a very messy landscape.

There have been large debates around autonomous systems in conflict generally, like drones, and I think there is an extent to which we can apply this to cyberspace too. While there is this security dilemma aspect, it is not in any states’ interests to escalate into full-blown warfare that cannot be deescalated and that threatens their citizens, so tools and capabilities should be used carefully.

Now there is a limit to how much you can apply this to cyberspace because of its invisible nature, the lack of transparency and a completely different deterrence structure. But there is an argument that states will show restraint in weaponizing AI where it is not in their interest. You see this conversation taking place, for example, around lethal autonomous weapons at the United Nations Group of Governmental Experts, where it is generally considered that taking the human out of the loop is highly undesirable. But it is complicated and early days.

Looking at the UK, my research has shown that there is pressure to develop AI capabilities in this space and there are perceptions of an AI arms race across the private sector, which is who I spoke to. And there is this awareness that AI investment must happen, in a large part because of anticipated behaviour of adversary states – the idea that other states do not have the same ethical or legal constraints when it comes to offensive cyber or the use of military AI, which is what my PhD thesis focuses on. The only preventative answer to stop this security dilemma building up into an AI arms race seems to be some kind of consensus mechanism, whereby like-minded states agree not to weaponize AI in this way. That is why my research has taken me to NATO, to look in the military context at what kinds of norms can be developed and whether there is a role for international agreement in this way.

If I had to summarise that argument into one or two sentences: there are trends suggesting that there is an AI arms race which is bigger than conflict, bigger than the military and bigger than cyber. So you have to rely on the security interests of the states themselves not to escalate and to potentially form alliance agreements to prevent escalation.


Part II of this interview will be published tomorrow on Friday 18th June 2021.


Offensive Cyber Series: Dr Daniel Moore on Cyber Operations, Part II

June 11, 2021 by Dr Daniel Moore and Ed Stacey

Photo Credit: Ecole polytechnique / Paris / France, licensed with CC BY-SA 2.0.

This is part II of Ed Stacey’s interview with Dr Daniel Moore on cyber operations for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about alliances more broadly, what sort of opportunities and challenges do allies face when conducting joint operations in cyberspace?

DM: Allied operations on networks – I am not a fan of cyberspace – are contentious as well. They are a good measure more sensitive than any conventional equivalent that you can think of. It is not like having a joint military operation: it means putting your sensitive infrastructure and capabilities on the line alongside an ally. That is not to say it does not happen and there have been documented cases which were purportedly joint operations by multiple countries. So I think it will happen, but there are complexities involved. I know that NATO has already declared that they are together, as an alliance, bringing forward cyber capabilities that they will use jointly. I welcome that declaration, even if I am sceptical as to what it actually means.

I would tend to believe that, considering how porous NATO is as an entity and how there are varying levels of trust within NATO, truly sensitive capabilities will be kept off the table by individual member states in favour of their own arsenals and sets of strategic capabilities. This is not to say it is not possible, but it is unlikely that at a NATO level you will see joint operations that are truly strategic in nature. What you might see is allied members that are operating together. I do not think that, for example, a joint UK-US operation against a target is out of the question, especially if one brings a certain set of capabilities to the table and one brings others – somebody gives the tools, this unit has the relevant exploits, this intelligence organisation had already developed access to that adversary and so on. Melding that together has a lot of advantages, but it requires a level of operational intimacy that is higher than what you would be able to achieve at the NATO alliance level.

ES: Moving beyond the state, what role does the private sector play in the operational side of offensive cyber? Do we have the equivalent of private military contractors in cyberspace, for example?

DM: There is a massive role for the private sector across the entire operational chain within offensive cyber operations. I would say a few things on this. Yes, they cover the entire chain of operations and that includes vulnerability research, exploit development, malicious tool development and then even specific outfits that carry out the entire operational lifecycle, so actually conduct the intrusion itself for whatever purposes. In some cases, it is part of a defence-industrial complex like in the US, for example, where you have some of the giant players in defence developing offensive capabilities, both on the event- and presence-based side of things. And ostensibly you would have some of those folks contributing contractors and operators to actually facilitate operations.

But in other countries that have a more freeform or less mature public sector model for facilitating offensive cyber operations, the reliance on third party private organisations is immense. If you look, for example, at some of the US indictments against Iranian entities, you will see that they charged quite a few Iranian private companies for engaging in offensive cyber operations. The same happens in China as well, where you see private sector entities engaging in operations driven by public sector objectives. In some cases, they are entirely subsumed by a government entity, whereas in others they are just doing work on their behalf. In some cases, you actually see them use the same infrastructure in one beat for national security objectives, then the workday ends and they pivot and start doing ransomware to get some more cash in the evenings – using the same tools or infrastructure, or something slightly different. So, yes, the private sector plays an immense role throughout this entire ecosystem, mostly because the cost of entry is low and the opportunities are vast.

ES: Just to finish, you have a book coming out soon on offensive cyber. Can you tell us anything about what to expect and does it have a title or release date yet?

DM: The book is planned for release in October. It will be titled Offensive Cyber Operations: Understanding Intangible Warfare, and it is basically a heavily processed version of my PhD thesis that has been adapted, firstly, with some additional content to reflect more case studies, but also to appeal to anybody who is interested in the topic without necessarily having a background in cyber or military strategy and doctrine. So it is trying to bridge the gap and make the book accessible, exactly to dispel some of the ambiguities around the utility of cyber operations. Questions like, how are they currently being used? What can they be used for? What does the “cyber war” narrative mean? When does an offensive cyber operation actually qualify as an act of cyber warfare? And, most importantly, what are the key differences between how different countries approach offensive cyber operations? Things like organisational culture, different levels of maturity, strategic doctrine and even just circumstance really shape how countries approach the space.

So I tackle four case studies – Russia, the US, China and Iran – and each one of those countries has unique advantages and disadvantages, they bring something else to the table and have an entirely different set of circumstances for how they engage. For example, the Iranians are incredibly aggressive and loud in their offensive cyber operations. But the other side to this is that they lack discipline, their tools tend to be of a lower quality and while they are able to achieve tactical impact, it does not always translate to long-term success.

The US is very methodical in its approach – you can see, taste and smell the bureaucracy in every major operation that it does. But that bureaucratic entanglement and the constant tension between the National Security Agency, Cyber Command and other involved military entities results in a more ponderous approach to cyber operations, although those organisations obviously bring a tonne of access and capability.

With the Russians, you can clearly see how they do not address cyber operations as a distinct field. Instead, they look at the information spectrum more holistically, which is of pivotal importance to them – so shaping what is “the truth” and creating the narrative for longer-term strategic success is more important than the specifics. That being said, they are also one of the most prolific offensive actors that we have seen, including multiple attacks against global critical infrastructure and various aggressive worms that exacted a heavy toll from targets. So for Russia, if you start looking at their military doctrine, you can see just how much they borrow, not only from their past in electronic warfare but also their extensive past in information operations, and how those blend together to create a broader spectrum of information capabilities in which offensive cyber operations are just one component.

And finally, the Chinese are prolific actors in cyber espionage – provably so. They have significant technical capabilities, perhaps somewhat shy of their American counterparts but they are high up there. They took interesting steps to solidify their cyber capabilities under a military mandate when they established the Strategic Support Force, which again – like the NCF – tried to resolve organisational tensions by coalescing those capabilities. But they are largely unproven in the offensive space. They do have an interesting scenario on their plate in which cyber could and may play a role, which is any attempt at reclaiming Taiwan – something I look at extensively in the book, including how that shapes their offensive posture.

So the book is a combination of a broader analysis of the significance of cyber operations and then how they are concretely applied by different nations for different purposes.


The next interview in Strife’s Offensive Cyber Series is with Amy Ertan on AI and military innovation. It will be released in two parts on Thursday 17th and Friday 18th June 2021.


Offensive Cyber Series: Dr Daniel Moore on Cyber Operations, Part I

June 10, 2021 by Dr Daniel Moore and Ed Stacey

Photo Credit: dustball, licensed with CC BY-NC 2.0

On Wednesday 10th March, Strife Interviewer Ed Stacey sat down with Dr Daniel Moore to discuss the operational side of offensive cyber. For part two of Strife’s Offensive Cyber Series, Dr Moore expands on his thinking about presence-based and event-based offensive cyber operations and discusses related topics such as the emergence of new organisational cyber structures, allied operations on networks and his upcoming book Offensive Cyber Operations: Understanding Intangible Warfare, slated for release in October 2021.

Ed Stacey: Danny, you have written in the past about distinguishing between presence-based and event-based offensive cyber operations. What are the key differences between the two?

Danny Moore: I came up with the distinction between presence-based and event-based operations as a commentary on the lack of distinction in most of the publicly accessible cyber doctrine documentation. Mostly what we see are offensive cyber operations treated as a uniform spectrum of possibilities that have the same considerations, the same set of staff associated with them and the same set of circumstances under which you would want to use them. But that is not the case.

A lot of the literature you see focusses on the technical deployment of offensive cyber operations – the malicious software involved in the process, the intended effect, what it means to pivot within a network – but that really only encompasses a fraction of the activity itself when we are talking about military-scale or even intelligence agency-scale operations, at least where it counts. So I came up with this distinction to differentiate between what I think are two supercategories of operation that are so different in the circumstance, and so unique in how they would be utilised, that they are worth examining separately because they have distinct sets of advantages and disadvantages.

Presence-based operations are like the classic intelligence operation that has an offensive finisher. So you have everything that you normally would with an intelligence operation, including compromising the adversary’s network, establishing a foothold, pivoting within and gathering relevant information. But then there are additional offensive layers too, such as looking for the appropriate targets within the network that would yield the intended impact and weaponizing your access in a way that would facilitate achieving the objective. For example, would you need dedicated tooling in order to have an effect on the target? Or say you are looking to have a real-world, physical impact or even adversely degrade specific types of software and hardware, which would require significant capabilities. But crucially, the operation is managed over the period of at least many weeks, if not months and sometimes even years. And it can be a strategic set of capabilities that you would use possibly even just once, when needed, because once exposed it is likely to be counteracted, at least in the medium-term.

Event-based operations are completely different in that sense. They are the most robust equivalent that you could have to a proper weapon, in the military sense of the word. It is intended to be something that you can bundle, package up and deploy in multiple circumstances. Imagine – and I think this is the most helpful analogy – it is almost an evolution of electronic warfare, something that you can deploy on a ship or with a squad or even within an existing air defence grid. What it does is, instead of just communicating in electromagnetic signals, it also attempts to facilitate a software attack on the other side. And that sequence involves a completely different set of circumstances. You do not need to have an extended period of intelligence penetration of the network that you are targeting – that contact is likely to be minimal. Instead, what you have is an extensive research and development process where you collect the right technical intelligence in order to understand the target, craft the actual tool and then make it much more robust so that it can be used multiple times against the same or equivalent targets and not be as brittle to detection, so stealth is not really a component.

So that distinction is just a high-level way of saying that the circumstances are different, the types of manpower associated are different, but also that there are unique advantages and disadvantages when using each.

ES: What sort of benefits do states and their militaries and intelligence agencies gain by making this distinction?

DM: If you acknowledge these differences at a strategic and doctrinal level, it facilitates much better planning and integration of cyber capabilities into military operations. As you know, there is a constant tension between intelligence agencies and their equivalents in the conventional military around how offensive cyber capabilities are used. The question here is: how close is the relationship between the intelligence agency – which is the natural owner of offensive cyber capabilities, for historical reasons and usually a strong link to signals intelligence – and the military, which wants to incorporate these capabilities and to have a level of predictability, repeatability and dependability from these activities for planning purposes? That tension is always there and it is not going away entirely, but how this distinction helps is to group capabilities in a way that facilitates better planning.

If you have a supercategory of operation that relies heavily on intelligence-led penetration, pivoting and analysis, for example, that comfortably lives with the extreme assistance of an intelligence agency, if not actual ownership – and that will vary between countries. Whereas the more packageable type of capability is easier to hand-off to a military commander or even specific units operating in the field. It is something that you can sign off and say: this will not compromise my capabilities in a significant way if it is used in the field incorrectly, or even correctly, and gets exposed in some way, shape or form. So it is about different levels of sensitivities, it is about facilitating planning and I think it takes the conversation around what offensive cyber operations actually look like to a more realistic place that supports the conversation, rather than limits it.

ES: Focussing on the organisational tensions that you mentioned, new structures like the UK’s National Cyber Force (NCF) are emerging around the world. What are the operational implications of these efforts?

DM: The short answer is that the NCF is an acknowledgement of a process that has been happening for many years. That is, the acknowledgement that you need to build a bridge between the intelligence agency, which is the natural owner of these capabilities, and the military, that wants to use them in a predictable and effective way. So you are seeing outfits like this come up in multiple countries. It allows for more transparent planning and for better doctrinal literature around how cyber capabilities integrate into military planning. That is not to say it will fix everything, but it decouples the almost symbiotic relationship between intelligence agencies and offensive cyber operations.

Intelligence agencies will always play a significant part because, as I said and have written about as well, they have an important role to play in these types of operations. But we have matured enough in our understanding to be able to have a distinct, separate conversation about them that includes other elements in military planning that do not just draw from intelligence agencies. So the NCF and other equivalent entities are an acknowledgement of the distinctness of the field.

ES: This next question is from Dr Tim Stevens, who I spoke to last week for part one of this series. Will NATO allies follow the US’ lead and adopt a posture of persistent engagement in cyberspace? And just to add to that, if they did, what sort of operational challenges and opportunities would they face in doing so?

DM: The conversation around the US’ persistent engagement and defend forward mentality for cyber operations is one that is ambivalent and a little contentious, even within the US itself – whether or not it is working, whether or not it is the best approach and, even, what it is actually trying to achieve. If you read the literature on this, you will find many different interpretations for what it is actually meant to do. So will NATO or specific member states choose to adopt elements of this? Possibly. But it is unlikely to manifest in the same way.

The perception from the US that they are in constant competition with their adversaries in and against networks is accurate. We have increased friction as a result of how the internet is structured and how sensitive networks are structured. You consistently have to fend off adversaries and seek to engage them, ideally outside your own networks – a good concept to have and a good operational model to keep in mind. And I think it is a great way to educate military leaders and planners around the unique circumstances of operating against networks. That said, I do not know if NATO is going to adopt wholesale persistent engagement and defend forward or rather just incorporate elements of that constant friction into their own models, which I think is a necessary by-product of engaging networks.

Some of the countries within NATO are more prolific than others when it comes to such activities – the UK, for example, or even France. Obviously, countries run offensive cyber operations of their own: they consistently need to fend off adversaries from their critical infrastructure and they prefer not to do this by directly mitigating incidents within their own network. So the step of persistent engagement and defend forward does make sense, but I do not know if that is an adoption of the same doctrine or just some of the principles that it looks to embody.


Part II of this interview will be published tomorrow on Friday 11th June 2021.


