
Strife

The Academic Blog of the Department of War Studies, King's College London


cyberwarfare

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote controlled Mark 8 Wheel Barrow Counter IED Robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have impact in 20 to 30 years’ time. So we are seeing that strategic discussion, but it costs so much that it is just a matter of states buying solutions from the private sector – so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies or authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain? 

AE: There are two aspects to this: one is the technical security angle and then the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Now especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed deep learning or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong and we have seen examples of that in the civic space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather is it picking up the right thing? Obviously, within a conflict environment you do not want to detect a threat where there is not one or miss something.

Second, there is the threat that your algorithm or data may be compromised and you would not know. So this could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that it is not abnormal activity and no flags are raised. So a failure of threat modelling to consider the creativity of attackers, or the insufficiency of the algorithm, could lead to something being deployed that is not fit for purpose.
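
To make that concrete, here is a minimal sketch of data poisoning against a toy anomaly detector, using scikit-learn’s IsolationForest as a stand-in for the network-defence algorithm Ertan describes (the traffic features and numbers are invented for illustration):

```python
# Illustrative only: a toy network anomaly detector, and how poisoned
# training data can teach it to treat attacker activity as baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented features per minute of traffic: (bytes out, distinct destinations).
benign = rng.normal(loc=[500.0, 5.0], scale=[50.0, 1.0], size=(1000, 2))
attack = rng.normal(loc=[5000.0, 40.0], scale=[200.0, 3.0], size=(20, 2))

# Trained on clean data, the detector flags the exfiltration-like traffic.
clean = IsolationForest(random_state=0).fit(benign)
print("clean training, attack flagged:", (clean.predict(attack) == -1).mean())

# If the attacker seeded the training window with similar traffic, the same
# activity scores as 'normal' and raises no flags.
seeded = rng.normal(loc=[5000.0, 40.0], scale=[200.0, 3.0], size=(200, 2))
poisoned = IsolationForest(random_state=0).fit(np.vstack([benign, seeded]))
print("poisoned training, attack flagged:", (poisoned.predict(attack) == -1).mean())
```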

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
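
The hospital example is essentially an evasion attack: nudging the features a model sees until its decision flips. A toy sketch, assuming an invented “exempt target” classifier rather than any real system:

```python
# Illustrative evasion attack on an invented 'exempt target' classifier.
# Features: [has_medical_content, has_patient_portal] -- both invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [0, 0], [1, 1]])
y = np.array([1, 1, 1, 0, 0, 1])  # 1 = 'hospital, do not target', 0 = otherwise

clf = LogisticRegression().fit(X, y)

ordinary_site = np.array([[0, 0]])
print(clf.predict(ordinary_site))   # [0] -- not exempt

# Gaming the rule: dress the same site up with hospital-like surface features.
disguised_site = np.array([[1, 1]])
print(clf.predict(disguised_site))  # [1] -- now 'exempt', though nothing real changed
```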

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are way fewer images of tanks in the Baltic region or particular kinds of weapon. The data is much more limited in secretive military contexts and it potentially is not being shared between nations to the extent that might be desirable when it comes to building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Turning to broader conversations around norms and regulation: I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space; there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see, for example – while the UK and US have the power and resources to invest themselves – groups of countries across NATO coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure as well and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies – not the most secretive ones nor the raw data but, for example, the principles it abides by or certain open source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.


Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part I

June 17, 2021 by Ed Stacey and Amy Ertan

Photo Credit: Mike MacKenzie, licensed via Creative Commons.

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Amy Ertan to discuss offensive cyber in the context of artificial intelligence (AI) and military innovation. For part three of Strife’s Offensive Cyber Series, Ms Ertan discusses the current role of AI in offensive cyber and potential future trajectories, including effects on the offence-defence balance and arms racing, as well as her PhD research, which explores the unforeseen and unintended security consequences of developing and implementing military AI.

Ed Stacey: Amy, could you start by briefly defining AI in the context of offensive cyber. Are we really just talking about machine learning, for example?

Amy Ertan: Artificial intelligence is not just machine learning algorithms – it is a huge range of technologies. There is a whole history of AI that goes back to before the mid-1970s and late-80s: rule-based AI and knowledge-based AI which is, as it sounds, learning based on rules and logic. Then in the last decade or so we have seen a huge uptick in machine learning-based algorithms and their various sub-branches, including deep learning and neural networks, which are incredibly complex algorithms that we cannot actually understand as humans. So, in summary, AI is a big umbrella term for different kinds of learning technologies.

At the same time, there is some snake oil in the market and a lot of what people call AI can just be probabilistic statistics. Being generous, some of the start-ups that you see are doing if-then algorithms that we could probably do on Excel. That does not, of course, account for the tech giant stuff. But when we talk about AI, we have everything from the super basic things that are not really AI to the incredibly well-financed, billion-dollar projects that we see at Amazon, Microsoft and so on.
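
For a sense of what that end of the spectrum looks like, the sketch below shows the kind of if-then rule that sometimes gets marketed as AI (the thresholds and scenario are invented):

```python
# An invented example of the 'if-then dressed up as AI' end of the market:
# deterministic rules, no learning, spreadsheet-grade logic.
def flag_login(failed_attempts: int, new_device: bool) -> str:
    if failed_attempts > 3 and new_device:
        return "suspicious"
    return "ok"

print(flag_login(5, True))   # suspicious
print(flag_login(1, False))  # ok
```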

Machine learning is where a lot of today’s cutting-edge research is. The idea is that you can feed data, potentially untagged data – unsupervised learning – into an algorithm, let the algorithm work through it and then make predictions based on that data. So, for example, you feed in three million pictures of cats and, if the algorithm works as intended, it will then recognise what is and is not a cat.
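
A compressed, hypothetical version of that supervised loop, with scikit-learn’s bundled digits dataset standing in for the three million cat photos:

```python
# Toy supervised learning: scikit-learn's digits dataset stands in for the
# labelled cat photos; a small neural network learns to recognise unseen examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# If training worked as intended, the model generalises to images it never saw.
print("accuracy on unseen images:", round(model.score(X_test, y_test), 3))
```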

In terms of how that fits into offensive cyber, AI is another tool in the toolkit. A learning algorithm, depending on how it is designed and used, will be just like any other cyber tool that you might have, only with learning technology within it. I would make the point that it is not something that we see being utilised today in terms of pure cyber attacks because it is not mature enough to be creative. The machine learning AI that we have right now is very good at narrow tasks, but you cannot just launch it and there is no “AI cyber attack” at the moment.

ES: How might AI enhance or facilitate offensive cyber operations?

AE: As I said, AI is not being used extensively today in offensive cyber operations. The technology is too immature, although we do see AI doing interesting things when it has a narrow scope – like voice or image recognition, text generation or predictive analytics on a particular kind of data set. But looking forward, there are very feasible and clear ways in which AI-enabled technologies might enhance or facilitate cyber operations, both on the offensive and defensive side.

In general, you can talk about the way that AI-enabled tools can speed up or scale up an activity. One example of how AI might enhance offensive cyber operations is through surveillance and reconnaissance. We see already, for example, AI-enabled tools being used in intelligence processing for imagery, like drone footage, saving a huge amount of time and vastly expanding the capacity of that intelligence processing. You could predict that being used to survey a cyber network.

Using AI to automate reconnaissance, to do that research – the very first stage of a cyber attack – is not a capability that you have now. But it would certainly enhance a cyber operation in terms of working out the best target at an organisation – where the weak link was, the best way in. So there is a lot that could be done.

ES: Are we talking then about simply an evolution of currently automated functions or does AI have the potential to revolutionise offensive cyber?

AE: In terms of whether AI will be just a new step or a revolution, generally my research has shown that it will be pretty revolutionary. AI-enabled technology has the power to revolutionise conflict and cyber conflict, and to a large extent that is through an evolution of automated functions and autonomous capabilities. I think the extent to which it is a full-blown revolution will depend on how actors use it.

Within cyberspace, you have this aspect that there might be AI versus AI cyber conflict in the future, where your offensive cyber tool – your intrusion, your exploit tool – goes head-to-head with your target’s AI-enabled cyber defence tools, which might be intrusion prevention or spam filtering tools that are already AI-enabled. It really depends on how capabilities are used. You will have human creativity, but then an AI algorithm makes decisions in ways that humans do not, so that will change some aspects of how offensive cyber activity takes place.

There is debate as to whether this is a cyber attack or information warfare, but I think deep fakes are an example of a technology that is already being used to falsify information and that has revolutionised information warfare because of the scale and nature of the internet today. So how far AI revolutionises offensive cyber will depend not only on its use but also on a complex set of interconnections between AI, big data, online connectedness and digital reliance that will come together to change the way that conflict takes place online.

That is a complicated, long answer to say: it depends, but AI definitely does have the potential to revolutionise offensive cyber.

ES: No, thank you – I appreciate that revolutionary is a bit of a loaded term.

AE: Yes, there is a lot of hyperbole when you talk about AI in warfare. But through my doctoral research, every industry practitioner and policy-maker that I have spoken to has agreed that it is a game-changer. Whether or not you agree with the hype, it changes the rules of the game because the speed completely changes and the nature of an attack may completely change. So you definitely cannot say that the power of big data and the power of AI will not change things.

ES: This next question is from Dr Daniel Moore, who I spoke to last week for part two of this series. He was wondering if you think that AI will significantly alter the balance between offence and defence in cyberspace?

AE: I am going to disappoint Danny and say: we do not know yet. We do already see, of course, this interesting balance that states are choosing when they pick their own defence versus offence postures. And I think it is really important to note here that AI is just one tool in the arsenal for a team that is tasked with offensive cyber capabilities. At this point, I do not predict it making a huge difference.

At least when we talk about state-coordinated offensive cyber – sophisticated attacks to take down adversaries or against critical national infrastructure, for example – these require such sophisticated, niche tools that the automation capabilities provided by AI are unlikely to offer any cutting-edge advantage there. On the defensive side, AI tools streamline a huge amount of activity, whether that is picking out abnormal activity in your network or your logs, which eliminates a huge amount of manual analysis that cyber defence analysts might otherwise have to do and gives them more time for meaningful analysis.
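
As a toy illustration of that streamlining, the sketch below ranks log events by the rarity of their tokens so an analyst reviews the oddest entries first; the log lines and scoring scheme are invented:

```python
# Invented triage scheme: surface the rarest log events for human review
# instead of having analysts read every line.
from collections import Counter

logs = [
    "login ok user=alice src=10.0.0.5",
    "login ok user=alice src=10.0.0.5",
    "login ok user=bob src=10.0.0.6",
    "login ok user=bob src=10.0.0.6",
    "login ok user=carol src=10.0.0.7",
    "login ok user=carol src=10.0.0.7",
    "login fail user=admin src=203.0.113.9",
    "config change user=admin src=203.0.113.9",
]

# Score each event by the mean rarity of its tokens across the whole log.
counts = Counter(tok for line in logs for tok in line.split())

def rarity(line: str) -> float:
    tokens = line.split()
    return sum(1.0 / counts[t] for t in tokens) / len(tokens)

# Analysts review the most unusual events first -- here, the admin activity.
for line in sorted(set(logs), key=rarity, reverse=True)[:2]:
    print(round(rarity(line), 2), line)
```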

AI speeds up and streamlines activity on both the offensive and defensive side, so I think it simply fits into the wider policy discussions for a state. It is one aspect but not the determining aspect, at the moment anyway or in the near future.

ES: And I guess the blurring of the lines between offence and defence in some cyber postures complicates the issue a little?

AE: Yes, especially when you look at the US and the way they define persistent engagement and defending forward. It is interesting as to where different states will draw their own lines on reaching outside their networks to take down the infrastructure of someone they know is attacking them – offensive activity for defensive purposes. So I think the policy question is much bigger than AI.

ES: Thinking more geopolitically, the UK’s Integrated Review was heavy on science and new technologies and other countries are putting a lot of resources into AI as well. There seems to be some element of a security dilemma here, but would you go so far as to say that we are seeing the start of a nascent AI arms race – what is your view of that framing?

AE: I think to an extent, yes, we do see aspects of a nascent AI arms race. But it is across all sectors, which comes back to AI as a dual-use technology. The Microsoft AI capability that we use now to chat with friends is also being used by NATO command structures and other military structures in command and control infrastructure, albeit in a slightly different form.

Because cutting-edge AI is being developed by private companies, which have the access and resources to do this, it is not like there is this huge arsenal of inherently weaponised AI tools. On the flip side, AI as a dual-use technology means that everything can be weaponised or gamed with enough capability. So it is a very messy landscape.

There have been large debates around autonomous systems in conflict generally, like drones, and I think there is an extent to which we can apply this to cyberspace too. While there is this security dilemma aspect, it is not in any states’ interests to escalate into full-blown warfare that cannot be deescalated and that threatens their citizens, so tools and capabilities should be used carefully.

Now there is a limit to how much you can apply this to cyberspace because of its invisible nature, the lack of transparency and a completely different deterrence structure. But there is an argument that states will show restraint in weaponizing AI where it is not in their interest. You see this conversation taking place, for example, around lethal autonomous weapons at the United Nations Group of Governmental Experts, where it is generally considered that taking the human out of the loop is highly undesirable. But it is complicated and early days.

Looking at the UK, my research has shown that there is pressure to develop AI capabilities in this space and there are perceptions of an AI arms race across the private sector, which is who I spoke to. And there is this awareness that AI investment must happen, in large part because of the anticipated behaviour of adversary states – the idea that other states do not have the same ethical or legal constraints when it comes to offensive cyber or the use of military AI, which is what my PhD thesis focuses on. The only preventative answer to stop this security dilemma building up into an AI arms race seems to be some kind of consensus mechanism, whereby like-minded states agree not to weaponize AI in this way. That is why my research has taken me to NATO, to look in the military context at what kinds of norms can be developed and whether there is a role for international agreement in this way.

If I had to summarise that argument into one or two sentences: there are trends suggesting that there is an AI arms race which is bigger than conflict, bigger than the military and bigger than cyber. So you have to rely on the security interests of the states themselves not to escalate and to potentially form alliance agreements to prevent escalation.


Part II of this interview will be published tomorrow on Friday 18th June 2021.


Offensive Cyber Series: Dr Tim Stevens on Offensive Cyber in the 2020s, Part II

June 4, 2021 by Ed Stacey and Dr Tim Stevens

Photo Credit: UK Ministry of Defence, Crown Copyright.

This is part II of Ed Stacey’s interview with Dr Tim Stevens on offensive cyber in the 2020s for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about the relationship between offensive cyber and international law and ethics, how far have debates gone around when and how it is right to use these capabilities and how confident are we in their conclusions?

TS: Depending on who you ask, this issue is either settled or it is not. Now the point about the discussion around these capabilities is that, actually, when we think about international law and ethics, whether from a liberal democratic standpoint or otherwise, the conversation is not about the capabilities themselves, generally speaking – it is not about cyber weapons as such – but tends to be more about the targets of those capabilities and the effects.

In 2015, the United Nations (UN) Group of Governmental Experts (GGE) on information security, which was led by the permanent five – the UK, Russia, France, China and the US – but also involved twenty or so other countries, agreed that international law applies to this domain in its entirety. That includes the UN Charter, they found a couple of years later. There is also a big NATO process which says that international humanitarian law (IHL), which governs the use of force in war, also applies to this environment. And what comes out of that is an understanding of several things.

Firstly, that the use of any capabilities that you might describe as offensive – or indeed defensive, hypothetically – has to abide by the laws of war. So they have to be necessary, proportionate and to observe distinction, in the sense that they cannot target civilians under normal circumstances. The 2015 GGE said that you could not target civilian infrastructure through cyber means and so on.

But the problem is that, as we look at the world around us, for all of those international legal constraints and associated ethical arguments about not targeting civilians, for example, what we see is the significant use by states and other actors of exactly these types of capabilities, targeting exactly these types of targets. We have seen civilian infrastructure being targeted by the Russians, for example in Kiev on a couple of occasions in winter, where they have essentially turned the electricity off. That is exactly the opposite of what they signed up to: they signed up to say that that was not legal under international law, yet they do it anyway.

So the question really is not whether international law applies. It is slightly an issue about the details of how it applies and then, if someone is in breach of that, what do you then do – which throws you back into diplomacy and geopolitics. So already you have gone beyond the conversation about small bits of malicious software being used as offensive cyber capabilities and elevated it to the level of global diplomacy and geopolitics. And essentially, there is a split in the world between liberal democracies, who at least adhere for the most part to international law, and a small set of other countries who very clearly do not.

ES: Given that context, what are the prospects for regulating offensive cyber activity? Is there the potential for formal treaties and agreements or are we talking more about the gradual development of norms of responsible state behaviour?

TS: This is the live question. Although we have an emerging understanding of the potential tools with which we might regulate these capabilities – including IHL and norms of responsible state behaviour – we have not got to the point of saying, for example, that we are going to have a global treaty. But there are multi-stakeholder efforts to do something that looks a little like a global agreement on, for example, the use of capabilities for targeting civilian infrastructure. There is something called the Cybersecurity Tech Accord, another is the Paris Call for Trust and Security in Cyberspace, and there are half a dozen others which, even if not explicitly focused on offensive cyber, treat it as part of a suite of behaviours that they wish to develop norms around – and potentially even regulation.

But it is incredibly difficult. The capabilities themselves are made of code: they are 1s and 0s, they zip around global networks, they are very difficult to interdict, they multiply, they distribute and they can attack a thousand different systems at once if they are done in a very distributed fashion. How do you tell where they come from? They do not come with a return address as the cliché goes. How do you tell who is responsible? Because no-one is going to own up to them. How do you tell if they are being developed? Well you cannot because they are done in secret. You can have a military parade in the streets of Washington DC, Pyongyang or Moscow, but you cannot do the same with cyber capabilities.

So it is very difficult to monitor both their use and their retention and development. And if nobody does own up to them, which is commonly the case, how do you punish anyone for breaching emerging norms or established international law? It is incredibly difficult. So the prospect for formal regulation anytime soon is remote.

ES: So far we have talked about some quite complex issues. Given the risks involved in developing and deploying these types of capabilities, what do you think needs to happen to improve public understanding of offensive cyber to the point that we can have a proper discussion about those risks?

TS: Public understanding of offensive cyber is not good and that is not the fault of the public. There are great journalists out there who take care in communicating these issues, and then there are others who have just been put on a story by their sub-editor and expected to come up to speed in the next half hour to put some copy out. It is really difficult to generate nuanced public understanding of things when the media environment is what it is.

Now I am not blaming the media here; I am just saying that that is one of the factors that plays into it. Because we have a role as academics as well and, ultimately, a lot of this falls to governments to communicate, which has conventionally not been great. Partly this is because a lot of the use and development of these capabilities comes from behind the classification barriers of national security, defence and intelligence. We have heard bits about their use in the battlespace against Islamic State in Iraq and Syria that has leaked out in interviews with senior decision-makers in the US and the UK, but generally not a lot else.

What we tend to get is policy statements saying: we have a sovereign offensive cyber capability and we are going to use it at a time and place of our choosing against this set of adversaries, which are always hostile states, terrorist groups, serious organised criminals and so on. But it does not encourage much public debate if everything that comes out in policy then gets called a cyber war capability because actions to stop child sexual exploitation by serious organised crime groups are not a war-like activity – they fall in a different space and yet they are covered by this cyber war moniker.

Now there is an emerging debate around offensive cyber. Germany, which is constitutionally quite constrained when it comes to offensive capabilities, has had a conversation about it. There is a discussion in the Netherlands, also in the US about their new cyber posture – which is much more forward-leaning than previous ones – and we are beginning to have a conversation in the UK as well. But a lot of that has fallen to academics to do and, I guess, I am part of that group who are looking at this issue and trying to generate more of a public conversation.

But it is difficult and the response you will sometimes get from government is: we do not need to have a conversation because we have already declared that everything we do is in accordance with our obligations under international law – we will do this against a set of adversaries that are clearly causing the nation harm and so on. That is fine. We are not doubting that that is their statement; we would just like to know a little bit more about the circumstances in which you would use these capabilities.

What, for example, is the new National Cyber Force going to do? How is it going to be structured? What are the lines of responsibility? Because one of the weird things about joint military-intelligence offensive cyber operations is that, in a country like the UK, you have the defence secretary signing off on one side and the foreign secretary signing off on the other because you are involving both the military and GCHQ, which have different lines of authority. So where does responsibility lie? Accountability? What happens if something goes wrong? What is your exact interpretation of international law? To be fair to the UK, they have set that interpretation out very clearly.

But there is more than just an academic interest here. If this is the future of conflict in some fashion and it has societal effects, then we need to have a conversation about whether these are the capabilities that we want to possess and deploy. Not least if the possession and deployment of those capabilities generates norms of state behaviour that include the use of cyber conflict. Is that something that we want to do in societies of the 21st century that are hugely dependent upon computer networks and deeply interconnected with other countries?

Those are the types of questions that we need to raise and we also need to raise the quality of public understanding. That is partly the job of academia and partly the job of media, but certainly the job of government.


The next interview in Strife’s Offensive Cyber Series is with Dr Daniel Moore on cyber operations. It will be released in two parts on Thursday 10th and Friday 11th June 2021.


Enhancing Cyber Wargames: The Crucial Role of Informed Games Design

January 11, 2021 by Amy Ertan and Peadar Callaghan


“Risk – Onyx Edition (Ghosts of board games past)” by derekGavey.
Licensed under Creative Commons

 

‘A game capable of simulating every aspect of war would become war.’

Martin van Creveld, Wargames: From Gladiators to Gigabytes, 2013.

 

The December 2019 launch of the first Defence Wargaming Centre at the Ministry of Defence’s (MoD) Defence Science and Technology Laboratory is an opportunity for future wargame design. While current games do enable some knowledge transfer, the tried-and-tested techniques employed by the serious games community would enhance these exercises with more effective strategising and training mechanisms. This article highlights how the characteristics of cyberspace require a distinct approach to wargames, and provides recommendations for improved development and practice of cyber wargames by drawing on established games design principles.

The use of games in educational settings has been recognised since the 4th century BC. Wargames, however, are a more recent invention, first emerging in modern times via the Prussian Army. Kriegsspiel, as it was called, was used to teach tactics to officers as part of the Prussian military reforms in the wake of devastating defeats at the hands of Napoleon. Ever since, wargames have been a feature of military training. The UK MoD’s Red Teaming Guide defines a wargame as ‘a scenario-based warfare model in which the outcome and sequence of events affect, and are affected by, the decisions made by the players’. These games, as noted by the MoD’s Wargaming Handbook, can be used to simulate conflicts in a low-risk, table-top style setting across all levels of war and ‘across all domains and environments’. Wargames have repeatedly proved themselves a reliable method of communicating and practising military strategy that can be applied to explore all varieties of warfare.

As cyber becomes an increasingly important warfighting domain, both by itself and in combination with other domains, cyber wargames have begun to be played with the same frequency and importance as those in the traditional domains. Since 2016, the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) has annually coordinated Crossed Swords, focusing on technical training, while NATO’s annual Cyber Coalition focuses on goals including information-sharing and collaboration, and the Atlantic Council’s Cyber 9/12 focuses on strategic policy-making. Military examples include the U.S. Naval War College’s Defending Forward wargames where, in their simplest form, cyber defenders (‘blue teams’) defend against cyber adversaries (‘red teams’). While these games are a great step forward in understanding, analysing, and preparing for the problems of cyberwarfare, they tend to draw on existing conceptions of traditional serious games. This represents a missed opportunity: the cyber domain differs from traditional conflict in ways that warrant a fresh look at the design of wargames.

By design, wargames create an abstracted model of reality containing primary assumptions and simplifications that allow the model to be actionable. Underlying assumptions include: that the enemy is known, rational and ruthless; that the conflict being modelled is zero-sum in nature; that the games are effective tools even without specifically conceptualising how knowledge transfer takes place; and that the scope of the game should mirror reality as closely as possible. While these assumptions are appropriate for—or at least not detrimental to—traditional models of kinetic warfare, they are problematic for cyber wargame design. The challenges with each underlying assumption are described in turn.

The Known, Ruthless, and Rational Enemy

As Larry Greenemeier noted a decade ago, in cyberspace the fog of war is exacerbated. While traditional warfare often limits available knowledge of an adversary’s location, in the cyber domain the reality is that defenders may not know who the enemy is, nor their goals. When the enemy is unknown, they can appear to act in an irrational way, at least from the perspective of the defender. This is due to the inherent asymmetry of the attacker: through reconnaissance, the attacker will more than likely hold more information about intended targets than the defenders. Each of these issues, individually and collectively, is typically under-emphasised in most rigid wargames.

A Zero-Sum Nature of Conflict

Rigid wargames use a unity of opposites in their design: the goals of one side are diametrically opposed to those of the other. This creates a zero-sum game in which the goal of both the red and blue teams is the destruction of the other side. However, cyber conflict has features of non-zero-sum games, such as the victory of one side not always coming with an associated loss to the other. Additionally, there is an asymmetry introduced that should be addressed at the game design stage.
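
The distinction is easy to state formally: a two-player game is zero-sum when the payoffs in every cell sum to zero. A minimal check, with invented payoff numbers:

```python
# Two-player payoff matrices: each cell gives (red, blue) outcomes.
# Payoff numbers are invented for illustration.

def is_zero_sum(game) -> bool:
    return all(r + b == 0 for row in game for (r, b) in row)

# Classic rigid wargame: whatever red gains, blue loses.
rigid = [[(1, -1), (-1, 1)],
         [(-1, 1), (1, -1)]]

# Cyber-flavoured game: red's quiet win need not cost blue anything today,
# so gains and losses no longer mirror each other.
cyber = [[(2, 0), (-1, 1)],
         [(0, 0), (1, -2)]]

print(is_zero_sum(rigid))  # True
print(is_zero_sum(cyber))  # False
```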

Knowledge Transfer: What is Actually Being Taught?

Another assumption made in the deployment of wargames is that they teach. However, what is actually being taught is not as closely examined. In general, serious games can be categorised into two broad types: low road (or reflexive transfer) games and high road (or mindful transfer) games. Low road transfer games are concerned with the direct training of a stimulus and a response in a controlled environment that is as similar as possible to the context the player faces in real life – a flight simulator, for example. The second type, high road games, are designed to encourage players to mindfully make connections between the context of play and the real world. Reflexive games are more likely to emphasise speed, whereas mindful transfer games are more likely to emphasise communication between players. Games must be designed using the knowledge transfer type most appropriate to the intended learning outcomes of the game.

Overenthusiastic Scoping

Cyber operations do not exist in isolation from traditional models of warfare. The integration of cyber operations with kinetic warfare, however, dramatically increases the complexity. Even attempting to capture the whole cyber landscape in a single game runs the real risk of detail overload, decision paralysis, and distracting the player from the game’s intended learning objectives. The longer it takes to learn to play, the less time the player has available to learn from the play. In reality, one cannot accurately simulate the real-world threat landscape without sacrificing effective learning (unless the learning point is simply to illustrate how complex the cyber threat landscape might be). For example, if the cyber wargame is focusing on the protection of critical national infrastructure, then side-tasks focusing on several other industries are likely to confuse, rather than assist, participants in achieving the desired learning goals.

Recommendations

How should we best approach the challenge of effective cyber wargame design?

We propose that designed cyber wargames must be in line with the following four principles:

  • Include ‘partial knowledge’ states. If the cyber wargame player has full knowledge of the game state, the game becomes nothing more than an algorithmic recall activity in which a player can predict which actions are likely to result in successful outcomes. Certain ludic uncertainties can be included to induce ‘partial knowledge’, simulating the fog of war as required for each game (a toy sketch of such a state follows this list).
  • Include ‘asymmetric positions’ for the players. The character of cyberwar is better modelled through asymmetric relationships between players. Cyber wargame designers need to consider the benefits of having this asymmetry inside the game.
  • Confirm learning objectives and knowledge transfer type before commencing design. Both low road and high road transfer games are valuable, but they serve different functions in the learning environment. A conscious choice of whether the game is attempting to promote low road or high road transfer should be made before game design commences to ensure the appropriateness of the game.
  • Clearly scope the game to explore specific challenges. A well-scoped, smaller game increases players’ willingness to replay it multiple times, allowing them to experiment with different strategies.
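
As referenced in the first recommendation, here is a toy sketch of a ‘partial knowledge’ state: an umpire holds the true game state and each side plays against the slice its sensors reveal, which also captures the attacker’s asymmetric advantage (all names and values are invented):

```python
# Toy 'partial knowledge' state: the umpire holds the full truth; each side
# sees only the facts its sensors reveal. All names and values are invented.
TRUE_STATE = {
    "blue_webserver_compromised": {"value": True,           "known_to": {"red"}},
    "blue_mailserver_patched":    {"value": False,          "known_to": {"red", "blue"}},
    "red_c2_address":             {"value": "198.51.100.7", "known_to": {"red"}},
}

def player_view(state, player):
    """Filter the umpire's state down to what one side can observe."""
    return {fact: info["value"]
            for fact, info in state.items() if player in info["known_to"]}

# Blue does not know its webserver is owned; red's reconnaissance gives it
# the fuller picture -- the asymmetry the second recommendation calls for.
print("blue sees:", player_view(TRUE_STATE, "blue"))
print("red sees: ", player_view(TRUE_STATE, "red"))
```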

Conclusion

As both cybersecurity and wargames increase in importance and visibility, so does research on the use of cyber wargaming as a pedagogical tool for practitioners, policymakers, and the military. Existing principles within the games design profession around clear scoping of goals, game narratives, and appropriate player capabilities may all be applied to enhance existing cyber wargame design. The inclusion of partial knowledge states and asymmetric player capabilities both reflect crucial aspects of the cyber domain, while explicit attention to a game’s desired learning objectives and scope ensures that the resulting designs are as effective as possible. In a world in which cyberspace is only expected to become a more common feature of modern conflict, it is strongly advised that the MoD’s Defence Wargaming Centre leverages these tools and training opportunities. In the asymmetric and unpredictable field of cyber warfare, we need all the advantages we can get.

 

Amy Ertan is a cybersecurity researcher and information security doctoral candidate at Royal Holloway, University of London, and predoctoral cybersecurity fellow at the Belfer Center, Harvard Kennedy School. She is an exercise designer for cyber incident management scenarios for The CyberFish Company. As a Visiting Researcher at the NATO Cooperative Cyber Defence Centre of Excellence, Amy has contributed to strategic scenario design for the cyber defence exercise, Locked Shields 2021. You can follow her on Twitter: @AmyErtan, or via her personal webpage: https://www.amyertan.com

Peadar Callaghan is a wargames designer and lectures in learning game design and gamification at the University of Tallinn, Estonia. His company, Integrated Game Solutions, provides consultancy and design services for serious games and simulations, with a focus on providing engaging training outcomes. You can find him at http://peadarcallaghan.com/


Future Warfighting in the 2030s: An Interview with Franz-Stefan Gady

September 9, 2020 by Ed Stacey


British Royal Marines 45 Commando testing the Black Hornet 2 Unmanned Air System at the Army Warfighting Experiment 2017 (Image credit: Crown Copyright)

On 15 July 2020, Ed Stacey sat down with Franz-Stefan Gady to discuss the International Institute for Strategic Studies’ (IISS) upcoming future warfighting project. After introducing this new piece of work, Franz-Stefan offers some thoughts on the changing nature of warfare, the roles that emerging technologies and the nascent domains of space and cyber might play in future conflicts, and the need to move away from purely technological discussions about future warfighting.

For more information on the IISS and the latest analysis of international security, strategy, and defence issues, visit them here or follow them on Facebook, Twitter (@IISS_org), and Instagram (@iissorg).

 

ES: What is the IISS future warfighting project?

FG: The future warfighting project has just recently kicked off and looks at how great and medium-sized powers would fight high-intensity wars amongst peer and near-peer adversaries in the 2030s. So, what sort of capabilities will militaries need to develop over the next couple of decades in order to deal with specific operational problems in future warfighting scenarios? And how will these powers integrate emerging cyber and space strategies into existing, more classically conceived, options for kinetic and cognitive warfare?

The project explores future warfighting through three dimensions: space and cyber, kinetic, and cognitive. Space and cyber refer to the application of primarily offensive cyber capabilities, supported by space assets, in cyberspace (including electronic warfare operations). Kinetic pertains to the use of conventional and nuclear weapons systems in the ‘traditional’ domains of air, land and sea. The cognitive dimension, meanwhile, includes an examination not only of the use of information warfare but also of the integration of artificial intelligence (AI) and machine learning into military hardware to gain information dominance at the strategic level and to influence decision-making at both the civilian and military levels.

It is a fairly broad topic, and notably, we take technology as a starting point. By this, I am referring to the fact that a lot of future warfare discussions focus mostly on technological capabilities and their impact on warfighting. Yet I believe that such capabilities in themselves are fairly agnostic when it comes to triggering change. You can only really trigger change when you merge technological capabilities with new tactics, the right operational concepts and the right organisational structure.

So, the project takes technology as the starting point of a much deeper analysis of these new ideas. In doing so, we are trying to fill a gap that not many other institutions talking about future warfighting are looking at.

ES: What is your methodology for the project?

FG: As I mentioned, we are principally looking at future warfighting through three dimensions: space and cyber, kinetic and cognitive. We use these three dimensions to conduct comparative case studies on how various countries are thinking about future warfighting; and to divide up the literature, all the documents and interviews, and the military capabilities.

The first part of the project looks mostly at how China, Russia and the US would fight a high-intensity war after a breakdown of conventional deterrence. So not really grey-zone scenarios or hybrid warfare (though these are relevant) but rather high-intensity combat between great powers, which we have not really seen for many decades.

ES: What are your main findings so far?

FG: It is very early on, and I am hesitant to draw firm conclusions. But one of my hypotheses is that these three dimensions will increasingly merge into one over the next decade, and simultaneously, we will see a rebalance of conventional kinetic operations vis-à-vis cyber, space and information operations in any high-intensity great power war scenario. At the operational level, this is a result of the presumptive Chinese emphasis on system destruction warfare, the US attempt to move towards decision-centric manoeuvre warfare and the Russian push towards new-generation warfare.

All three forms of warfare attempt to move away from an attrition-centric approach, which emphasises the kinetic annihilation of an adversary’s forces, in favour of an evolving model of dislocation and disruption, which entails undermining an adversary’s battle network in all three dimensions. In this new form of network-centric warfare, you do not try to destroy your enemy and its main force; instead, you try to disable its networks and compromise its ability to fight.

A second hypothesis is that all three great powers will be increasingly capable of fielding precision-strike capabilities in all three dimensions in the 2030s. This will culminate in the establishment of a multi-dimension precision-strike regime, defined by the ability of a great power to conduct precision-strikes in the kinetic, cyber, space and cognitive dimensions against platforms, networks and humans at all ranges and in all warfighting domains.

And these two hypotheses draw attention to a third, which is that armed forces have a cultural problem in being overly focused on kinetic capabilities. My question would be, is this going to be a disadvantage for militaries in the future, as we move from a platform-centric approach to a more network-centric approach? (By platforms I mean tanks, ships, missiles and so on, or how we usually assess the military capabilities of a country – and I think these sorts of assessments are going to become less relevant in the future.)

There is a lot of resistance to this shift. For example, I have just spent some time looking at what is happening in the US, and the US Congress, interest groups and people within the Department of Defense are hesitant to give up certain capabilities that might no longer work in future warfighting scenarios, so-called legacy platforms. It is a huge problem. How exactly can you phase out legacy platforms and what are you going to replace those platforms with?

For instance, are we really going to have manned aircraft in 20 to 30 years from now? The answer is yes, but maybe we need to have a new role for manned aircraft. And maybe we are going to have more autonomous systems operating in the battlespace. What is the role of these new armed platforms? Are they going to be flying command and control centres, controlling autonomous swarms in the air or on the ground or in the oceans?

This question of integration is going to be crucial. You are still going to have legacy platforms 20 years down the road: you are still going to have the F-35 and maybe the F-15; you are still going to have most of the ships that you see in navies today – the aircraft carriers, the destroyers and manned submarines. But how do you integrate these capabilities with new platforms that are being developed? And by integrating, I mean how do you come up with a good operational concept to conduct a successful campaign in the future against a potential peer or near-peer adversary?

You cannot really talk about future warfighting unless you start off with a problem statement. Essentially, what is the operational environment you are envisioning in the future? And from there you try to come up with the kind of force structure you need, the kind of operational concepts you need and then also doctrine (how you train your force to fight in these future conflicts). And, of course, you need the resources and the strategy that comes along with all of this. So, it is a long, long process – and that is what we are trying to shed some light on.


ES: Which domain, if any, will be the most important in future warfighting? And does any domain have revolutionary potential?

FG: As I said, I think a key question behind modernisation efforts in China, the US and Russia (and we will also look at medium-sized powers, such as the UK, Germany and Japan – states that have strong military capabilities and relatively high defence budgets) is how they integrate these different capabilities. Ultimately, there are going to be trade-offs. And countries like China, Russia and the US – mostly China and the US, but it is partially true for Russia – can handle these trade-offs better than smaller powers because they have the resources to invest in both legacy platforms and new capabilities and create a better force structure. Most other militaries will not have the money to do both, so they have to be very careful about where and what they spend their money on.

This makes your question a pertinent one, in the sense that states do need to prioritise funding when it comes to these capabilities. You can have all the operational concepts in the world and the doctrine, but if you do not have the capability then it just does not work – it is impossible to become an effective warfighting force.

So, when we talk about a new age of network-centric warfare, we are really talking about the creation of what you would call a military Internet of Things (IoT). That is a virtual and kinetic kill chain that creates networks linking the sensor to the shooter in a triangular relationship, or a ‘system of systems’. The sensor identifies the target and then, through a network, relays that information to the shooter, whether a manned aircraft, a missile or an offensive cyber capability. And the idea behind this is that a military commander would be able to identify a target on a sensor much faster and then, through the military IoT, direct fire – whether virtual or kinetic strikes – to degrade or destroy the target.
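
A skeletal sketch of that sensor-to-shooter relay; the class names, message shape and engagement threshold are invented for illustration, and a real battle network is of course vastly more complex:

```python
# Skeleton of the sensor -> network -> shooter chain described above.
# Class names, fields and the engagement threshold are all invented.
from dataclasses import dataclass

@dataclass
class Track:
    target_id: str
    location: tuple
    confidence: float

class Sensor:
    def detect(self) -> Track:
        # A real sensor would fuse radar/EO/SIGINT returns; we hard-code one track.
        return Track(target_id="T-001", location=(54.32, 18.64), confidence=0.92)

class Shooter:
    def engage(self, track: Track) -> None:
        # Stands in for a manned aircraft, a missile or an offensive cyber capability.
        print(f"engaging {track.target_id} at {track.location}")

class Network:
    """The 'system of systems' relaying tracks from sensors to shooters."""
    def __init__(self, shooters):
        self.shooters = shooters

    def relay(self, track: Track) -> None:
        if track.confidence > 0.9:  # invented engagement threshold
            for shooter in self.shooters:
                shooter.engage(track)

Network([Shooter()]).relay(Sensor().detect())
```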

Obviously, this opens up new attack vectors in cyberspace. And so, you cannot really implement any of these concepts properly unless you have extremely strong cyber defences, and cyber defence almost always entails offensive cyber capabilities.

I think an important technological capability to develop and hone in the future will be AI-enabled cyber defensive and offensive capabilities. When we think about the first officially AI-enabled weapons platform, it is probably going to be an offensive cyber weapon because they are easier to deploy than, let us say, a lethal autonomous weapon system like an autonomous tank or missile. This is because of all the risks that are still involved and the fundamental lack of trust in these platforms unless you test them at great length.

So, to a certain degree, the foundational element of network-centric warfare will be strong cyber defences and, ultimately, AI-enabled cyber defence capabilities. This will entail advances in AI and cyber defence. But if you do not have these, your network is going to be immensely vulnerable and attacks from the electromagnetic spectrum could turn the lights off, so to speak, of any of your networks. At the same time, however, you cannot of course neglect any other capabilities or domains.

In terms of revolutionary new capabilities that are going to fundamentally change the future of warfare, I do not think you will find these in hypersonics, for instance, because they just improve existing capabilities – they will be evolutionary. But when it comes to AI-enabled cyber capabilities, I think these have revolutionary potential.

I have to caveat this, though, by noting that it is very difficult to assess these capabilities because we have not seen high-intensity combat between great powers in which they have been deployed. And this is true of even strategic offensive cyber weapons, let alone AI-enabled cyber weapons. One scholar once called it the ‘fog of peace’, and we really do operate in a fog of peace when it comes to deliberations about future warfighting.

In terms of historical context, we are very much where we were in the 1920s and 30s when it comes to airpower. In the First World War you had airpower capabilities, but by no means did airpower reach its full potential. It took the Second World War and the aerial campaigns of the Allies and the Axis powers to see whether some of those propositions from the 1920s and 30s turned out to be true.

A lot of people said that airpower was going to be the only necessary military capability in future wars; that you could essentially win any future conflict with bombers and fighter aircraft, and that you would not need land or sea forces anymore – that airpower made everything obsolete. That turned out to be untrue. Then there were others who said: ‘Oh, well, airpower is completely useless; you do not really need strategic bombing capabilities; we only use aircraft for tactical purposes, like reconnaissance and tactical strikes.’ That also turned out to be untrue. At the end of the day, airpower had a big impact but it was, I think, by no means the decisive factor in the Allies’ victory.

So, we are in a similar situation in the sense that there will be extreme positions when it comes to network-centric warfare and all of these new capabilities, particularly cyber capabilities. At the end of the day, the truth will also probably be somewhere in between the two extremes: one side saying that cyber is not going to be that important, that it is really just an auxiliary to other capabilities, and the other saying that cyber is going to be a revolutionary capability.

The difference, though – and I think why cyber has the potential to be more important than airpower, for example, or even nuclear weapons – is that cyber permeates all other dimensions and warfighting domains. It really is a foundational element of warfighting. Without strong cyber capabilities today, you cannot conduct conventional military operations, because every system in a tank, an aircraft or a ship – all command and control systems – is immensely vulnerable to cyber-attack. Any idea of new warfighting has to take that into account.

So, I would say cyber and AI-enabled cyber capabilities probably have the biggest potential to revolutionise future warfighting. Of course, there are other capabilities of note as well. But cyber is slightly underestimated by a lot of military planners and defence departments, despite probably having the biggest potential.

ES: You note the revolutionary potential of AI-enabled cyber capabilities. More generally, do you think AI as a technology is a game-changer or is it overhyped? And is an AI arms race inevitable or perhaps already even happening?

FG: I generally do not like the idea that there is an arms race in AI happening. Firstly, AI is not a weapon system or a military capability per se: it is a general-purpose technology, as some scholars have pointed out. And so, we should not consider AI in isolation but instead how this technology might be combined with weapon systems.

In the short term, I do not foresee revolutionary changes when it comes to AI-enabled capabilities. But in the long-term, it is definitely possible.

In the short term, what we are going to see is an accelerated pace of military operations as AI first arrives in non-lethal roles. This is already underway in intelligence collection and analysis, command-and-control support, decision-making aids, and Intelligence, Surveillance and Reconnaissance (ISR) capabilities – where I see huge potential for AI, such as in AI-enabled satellites.

It is a hugely important field, and AI does have the potential – just like the combustion engine 100 years ago – to revolutionise warfare. But I would not look at it in isolation, and that is an important point to note about discussions around future military technological capabilities in general. What people usually get wrong is not so much predicting a particular technology but rather how that technology will combine with the wider defence architecture to field an effective weapon system.

If you think about the Second World War, for example, you had radio communications, the combustion engine, advances in armour protection, as well as in the ballistics and mechanics of high-velocity guns. But it was only through merging all of these new capabilities that we created the tank – in other words, only in combination did they create a ‘revolutionary’ weapons platform. Yet even that alone did not do that much. All of the Western militaries had tanks, and fairly advanced tanks that were relatively equal in terms of technical capabilities. The true change came when the Germans devised a revolutionary operational concept that was later adapted into doctrine (operational concepts being the precursors to doctrine). By combining all of these technological developments with a revolutionary approach to warfighting, the Germans achieved not a revolution in military affairs but a decisive victory in the battlespace.

So, I guess the major point here is that technology alone is not going to determine the character of future wars. It is really as much, if not more, about how you change your organisational structures, adopt doctrine and so on. And within this, there is the key question of how you integrate all of these new platforms and approaches into an overall force structure that gives you the most capabilities to meet future operational problems.

To return to your question, I guess we have to ask: firstly, what are the most important technologies that AI could be combined with? Secondly, what would be the best operational concepts and doctrine to exploit the full potential of these newly combined technological capabilities? And thirdly, what sort of organisational structure and force posture does your military need to execute missions that exploit the full potential of these new capabilities?

ES: Thinking about other technologies, how significant is it that China is overtaking, or is perceived to be overtaking, the US in various areas of quantum technology research? And in the context of the tech war, is it significant that China is gaining ground in this space?

FG: Yes, I think so. Quantum technology is an interesting one because we are still probably many years away from fielding military capabilities when it comes to quantum radars or quantum sonar. There is also a debate over whether it will have any impact at all in the military domain. And so, I am hesitant to make any predictions about quantum technological capabilities and their impact.

To answer your point about competition between the US and China, as I mentioned earlier with regards to AI, I just do not see all of these tech races as really being tech races. Firstly, there is a lot of cooperation between the US and China in many fields, and there is more collaboration than you would think in developing these emerging technologies. Secondly, you have to question whether these technologies will actually have a significant impact on the modern battlespace and to what degree they will revolutionise future warfighting.

Just to illustrate why I always try to move away from strictly technological discussions when we talk about future conflict: my approach to military power is based on what the defence analyst Stephen Biddle called the ‘modern system’ of force employment. That is, military power rests, on the one hand, on combined arms operations that increase the effects of precision-guided munitions (what I call the multi-dimension precision-strike regime), while on the other, cover, concealment and the dispersion of your forces – for example through stealth technology or the suppression of ISR capabilities – offer protection from an adversary’s precision-strike capability.

The aim of combined arms operations is to integrate different services, capabilities and platforms to achieve a decisive effect in the battlespace. In the modern battlespace, the emergence of precision-guided munitions requires militaries to conceal their forces because, in order to conduct these operations successfully, usually you need to be able to mass your forces to achieve a breakthrough on the frontline.

And combined arms operations are really difficult to pull off. Combining and coordinating capabilities and strikes from land forces, naval forces and air forces to achieve some sort of effect and a breakthrough in the battlespace is immensely difficult – and something only a few militaries have been capable of achieving.

To take the example of the first Gulf War, there was a decisive victory by the US and her allies, but this was not just down to superior technologies: it was technology integrated with combined arms operations, and the ability to hide forces and conceal movements, that achieved this one-sided victory. The other side, Saddam Hussein, also had a powerful military and fairly advanced technological capabilities (although not as powerful or advanced as the US). But it was impossible for him to achieve meaningful effects in the battlespace because he failed, on the one hand, to hide his forces from precision strikes and, on the other, to conduct combined arms operations to counter the US and her allies.

Saddam probably could have conducted some form of combined arms operations against elements of US ground forces, by using artillery strikes in combination with tanks, infantry and air strikes. But he failed to coordinate his attacks and successfully manoeuvre his forces. These are the key tenets of any modern military, and the ability to execute these operations is currently the decisive factor in warfighting.

Revolutionary change in the future battlespace is most likely to happen if these warfighting methods are made ineffective by new technological capabilities – ones that make cover, concealment and dispersion through camouflage or stealth technology, as well as combined arms operations in general, obsolete; capabilities that, in essence, can detect every move on the battlefield and provide complete situational awareness. To date, no such technological capability exists – but I accept that may change in the coming decades.

When it comes to stuff like AI-enabled ISR capabilities or quantum radar and quantum sonar, if these capabilities can facilitate that sort of situational awareness then you might have a revolutionary technology on your hands. Until such a technology exists, however, combined arms operations, or multi-domain operations, will remain the most important factor when it comes to military power.

ES: And finally, how important is space going to be as a future warfighting domain?

FG: New space-based capabilities are crucial to how all major military powers exercise command and control over their forces. They also directly link to the ability to conduct offensive cyber operations and campaigns, which will be a – if not the – crucial component of any future military campaign. Space and cyber permeate all other warfighting domains and so are the new centre of gravity for high-intensity military operations.

Whoever dominates space will have massive advantages in the cyber domain, and without space capabilities you are essentially not capable of conducting modern military operations. So, all major military powers have been working to deny potential adversaries the use of these capabilities, which until recently basically meant GPS-type satellites.

In the future, however, you are no longer going to depend on just a handful of GPS satellites for ISR capabilities, targeting, early-warning detection systems and so on. A lot of current discussion in the US is about making ISR architecture less reliant on space capabilities and about diversifying and building a more resilient space architecture. As a result, we will see a proliferation of smaller, cheaper, low Earth orbit (LEO) satellites in order to create more redundancy in capabilities and to increase the resilience of space architecture and battle networks.
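The redundancy argument lends itself to a simple back-of-the-envelope calculation. The sketch below – with purely illustrative numbers – compares the probability that a constellation keeps enough working satellites when an adversary can independently disable each one: a handful of exquisite satellites that must all survive, versus a large network of cheaper, individually more vulnerable ones:

```python
# Back-of-the-envelope redundancy comparison: a few exquisite satellites
# versus many cheap LEO satellites. Illustrative numbers only.
from math import comb


def p_at_least(n: int, k: int, p_loss: float) -> float:
    """Probability that at least k of n satellites survive, if each is
    independently disabled with probability p_loss (binomial model)."""
    p_survive = 1 - p_loss
    return sum(comb(n, i) * p_survive**i * p_loss**(n - i)
               for i in range(k, n + 1))


# Legacy architecture: 4 large satellites, all 4 needed for coverage.
# Cheap LEO satellites are assumed individually easier to disable
# (higher p_loss), but only a fraction of them need to survive.
print(f"4-of-4 exquisite:  {p_at_least(4, 4, p_loss=0.10):.3f}")
print(f"300-of-500 cheap:  {p_at_least(500, 300, p_loss=0.30):.3f}")
```

Even though each cheap satellite is assumed here to be three times easier to disable, the large constellation is overwhelmingly more likely to retain the coverage it needs – which is the logic behind proliferated LEO architectures.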

Networks of hundreds to thousands of smaller, more expendable LEO satellites are much harder to disrupt than a few larger GPS-type satellites. LEO satellites, such as the ones to be developed by OneWeb in the UK, can increase situational awareness in the battlespace – for example by transmitting high-resolution, real-time video directly into the cockpit of military aircraft such as the F-35 – and decrease the reliance on GPS for these tasks. They could also be used to monitor the activities of adversaries and to host developments in areas such as optical clocks, which are necessary for accurate positioning and enable high-precision, reliable navigation (and, as a result, precision strikes) without the limitations of GPS systems.

China and Russia have recognised that the US is particularly vulnerable when it comes to space. They have tested anti-satellite weapons and have been developing cyber capabilities to degrade, disrupt and manipulate satellites. Having said that, the US will continue to dominate the space domain for the foreseeable future.

So, it is going to be a hugely important warfighting domain because it links to other key capabilities: it is very difficult to pull off precision-strikes without space assets, and it is very difficult to conduct offensive cyber operations without space-based capabilities. And it is already a very important domain, which is why countries are working to build more resilient space architectures and, at the same time, looking at alternatives to existing platforms.

It is extremely difficult, though, to achieve uncontested superiority in space because of the nature of the domain – which makes assets immensely vulnerable to all sorts of military operations, whether kinetic strikes, such as anti-satellite weapons, cyber-attacks or electronic warfare. And there is a ‘nuclear’ option in space too: a series of kinetic strikes against satellites causing massive debris, which would knock out a large percentage of the existing space-satellite architecture.


Ed Stacey is an MA Intelligence and International Security student at King’s College London and a Student Ambassador for the International Institute for Strategic Studies (IISS). The #IISStudent Ambassador programme connects students interested in global security, political risk and military conflict with the Institute’s work and researchers.

Franz-Stefan Gady is a Research Fellow at the IISS focused on future conflict and the future of war. Prior to joining the IISS, he held various positions at the EastWest Institute, the Project on National Security Reform and the National Defense University, conducting field research in Afghanistan and Iraq, and also reported from a wide range of countries and conflict zones as a journalist.

Filed Under: Feature, Interview Tagged With: cyberwarfare, ed stacey, Franz-Stefan Gady, future warfighting, iiss, space warfare
