
Strife

The Academic Blog of the Department of War Studies, King's College London


cyber

Economically isolated, North Korea now turns to Cyberspace

July 13, 2021 by Carlotta Rinaudo

North Korean leader Kim Jong-un surrounded by military personnel. Photo Source: Flickr, licensed under Creative Commons.

For years, the international community has slapped North Korea with painful economic sanctions aimed at constraining its nuclear ambitions. The trade in arms and military equipment has been prohibited, exports of coal and minerals have been banned, and the assets of North Korean officials have been frozen. To make matters worse, the ongoing Covid-19 pandemic has hit Pyongyang harder than any previous sanction. Since Pyongyang closed its border with China, trade with Beijing has fallen by 95%, leading to a scarcity of food and basic necessities such as soybean oil, sugar, and flour. Trains and flights in and out of the country have been suspended since March 2020, freezing tourism and labor exports, two major sources of foreign currency. It would therefore be easy to conclude that North Korea has recently been living in total economic isolation. That is, were it not for cyberspace.

In the physical world, a country like North Korea can be forced into isolation. Yet, in cyberspace, Pyongyang is everybody’s neighbor. Often described as the fifth domain of warfare, cyberspace has a low cost of entry while offering a high degree of anonymity. Pyongyang has seemingly exploited this domain to circumvent economic sanctions, raising millions of dollars through ransomware attacks. North Korean hackers have in fact been accused of breaking into international financial institutions to steal foreign currency, which is in turn used to finance Pyongyang’s nuclear program. For this reason, they have recently been branded “the world’s leading bank robbers”. North Korean hackers may also have been the architects of the cyber-attack against Sony Pictures Entertainment back in 2014. The entertainment company was about to release “The Interview”, a comedy that portrayed two journalists assassinating Kim Jong-un in Pyongyang. North Korea’s requests to halt production of the movie had largely been ignored. Then, in November, Sony’s employees arrived at their offices to find images of red skeletons on their computers. “We’ve obtained all your internal data, including your secrets and top secrets”, said a message on the screens, “if you don’t obey us, we’ll release the data shown below to the world.” This makes North Korea a rare cyber-creature: a country using cyberattacks not only for espionage, but also to fund its own operations, and, even more strangely, to punish comedic depictions of its leader.

In 2017, the Trump administration accused North Korea of being responsible for the WannaCry malicious software, which blocked computers in more than 150 countries. In response, Pyongyang denied any responsibility and declared “we have nothing to do with cyberattacks.” Following the malware intrusion, victims were asked for a ransom payment in exchange for unlocking their systems and data. In two hospitals in Jakarta, the malware blocked patient files, including medication records. In the UK, hospitals had to cancel thousands of medical appointments after losing access to computers. In China, some gas stations had to ask their customers to pay by cash only, after their digital payment system stopped working. In France, the carmaker Renault had to suspend its production in order to stop the spread of the worm. In different ways, the WannaCry computer worm caused unexpected levels of disruption all around the world.

Bitcoin as a new source of income for the Kim regime. Photo Credit: Flickr, licensed under Creative Commons.

Constrained by a set of international sanctions and by the destructive force of the ongoing pandemic, Pyongyang is now searching for new means to ensure its survival in a hostile environment. And cyberspace offers plenty of opportunities. Following the public’s growing interest in digital currencies, North Korean hackers have now turned their attention to the world of cryptocurrencies. Allegedly, they have built at least nine cryptocurrency apps, such as Ants2Whale, CoinGo, and iCryptoFX, for trading cryptocurrencies and creating digital wallets, each designed with a back door that can give North Korean hackers access to computer systems. In August 2020, one of these apps was used to break into a financial institution in New York and steal $11.8 million in cryptocurrency. In addition, exchanges that trade Bitcoin and other cryptocurrencies have fallen victim to North Korean cyberattacks, as these exchanges offer easy access to storage facilities known as “hot wallets”: hot, because they are connected to the Internet, as opposed to the offline storage method known as “cold wallets”. In total, according to a UN report, North Korea might have stolen more than $300 million in cryptocurrencies over recent months, partly in order to support its nuclear program.
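The hot/cold distinction comes down to where the signing key lives. The sketch below is a deliberately simplified illustration (the class names and key handling are invented for this example, not how any real exchange works): a hot wallet’s private key sits in an internet-connected process, so an intruder who compromises that process can authorise transfers, whereas a cold wallet keeps the key on an offline device and the networked host can only hand transactions over for external signing.

```python
import hashlib
import hmac

class HotWallet:
    """Online wallet: the signing key is held in a networked process."""
    def __init__(self, secret_key: bytes):
        self.secret_key = secret_key  # reachable by anyone who owns this process

    def sign(self, tx: bytes) -> bytes:
        # Stand-in for a real transaction signature (HMAC for illustration)
        return hmac.new(self.secret_key, tx, hashlib.sha256).digest()

class ColdWallet:
    """Offline wallet: the networked host never holds the key at all."""
    def sign(self, tx: bytes) -> bytes:
        raise RuntimeError("no key online: export tx to the air-gapped signer")

hot = HotWallet(b"exchange-hot-key")
signature = hot.sign(b"transfer 10 BTC")  # an intruder could do exactly this
```

The asymmetry is the point: compromising the exchange’s servers is enough to drain a hot wallet, but yields nothing against a cold one, which is why attackers concentrate on the former.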

In the past, most of North Korea’s criminal operations involved smuggling cigarettes, counterfeiting money, trading endangered species, and trafficking illegal drugs such as methamphetamine. Today, cyberspace allows conventionally weaker actors to challenge their stronger competitors more easily. North Korea can thus pursue an asymmetric strategy to put pressure on the international community: through cyberattacks, Pyongyang is not only countering its economic isolation, but also funding its nuclear program.

It is hard for the international community to find an effective response: retaliation seems highly ineffective, because North Korea has a primitive infrastructure that is less vulnerable to cyberattacks. Imposing further sanctions also appears a non-viable option: many sanctions have already been imposed, and North Korea is becoming increasingly adept at finding workarounds to its economic isolation.

For decades, North Korea has searched for solutions to the same old questions: how to mitigate and instrumentalize its weaknesses to stay relevant in a hostile international system. Now, it seems that cyberspace offers the answers.

Filed Under: Blog Article, Feature Tagged With: Carlotta Rinaudo, cyber, Cybersecurity, Cyberspace, North Korea

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote controlled Mark 8 Wheel Barrow Counter IED Robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have impact in 20 to 30 years’ time. So we are seeing that strategic discussion, but it costs so much that it is just a matter of states buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies or authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain? 

AE: There are two aspects to this: one is the technical security angle and then the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Now especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed deep learning or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong, and we have seen examples of that in the civilian space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather whether it is picking up the right thing. Obviously, within a conflict environment you do not want to detect a threat where there is not one, or to miss one that is there.

Second, there is the threat that your algorithm or data may be compromised and you would not know. So this could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that it is not abnormal activity and no flags are raised. So the way in which threat modelling does not consider the creativity of attackers, or insufficiency of the algorithm, could lead to something being deployed that is not fit for purpose.
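The poisoning scenario Ertan describes can be made concrete with a toy detector (a hypothetical sketch with invented numbers, nothing like a real military system): a baseline of “normal” network volume is learned from training data, and an attacker who can contaminate that training window makes their own high-volume activity part of the learned normal, so it raises no flags.

```python
import statistics

def train_baseline(samples):
    """Learn a naive mean/standard-deviation baseline from 'normal' traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the learned mean."""
    mean, std = baseline
    return abs(value - mean) > k * std

# Clean training window: typical traffic volumes (MB/min)
clean = [100, 102, 98, 101, 99, 103, 97, 100]
print(is_anomalous(500, train_baseline(clean)))     # True: spike is flagged

# Poisoned window: the attacker slipped their own high-volume activity
# into the data the detector learned from, inflating mean and spread
poisoned = clean + [480, 500, 520, 490]
print(is_anomalous(500, train_baseline(poisoned)))  # False: now looks "normal"
```

The same 500 MB/min burst is flagged or missed depending entirely on data the defender may never have audited, which is why compromised training data is so hard to notice after deployment.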

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
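The “make everything look like a hospital” gaming can likewise be sketched with a hypothetical rule-based filter (the keywords, banners, and addresses below are all invented for illustration): if an automated tool is instructed to spare anything exhibiting hospital-like signals, an adversary simply dresses other infrastructure in those signals and drops out of the target set.

```python
# Hypothetical constraint: an automated tool must spare hosts that
# "look like" hospitals, judged naively from self-described banner text.
HOSPITAL_KEYWORDS = {"hospital", "clinic", "medical", "patient"}

def looks_like_hospital(banner: str) -> bool:
    """Naive keyword check on a server's advertised description."""
    return bool(set(banner.lower().split()) & HOSPITAL_KEYWORDS)

def select_targets(hosts):
    """Return the addresses the tool would engage (everything non-hospital)."""
    return [addr for addr, banner in hosts if not looks_like_hospital(banner)]

hosts = [
    ("10.0.0.1", "regional hospital records system"),   # genuinely protected
    ("10.0.0.2", "power grid scada controller"),        # engaged
    ("10.0.0.3", "st mary clinic patient portal"),      # adversarial mimicry:
]                                                       # a relabelled server
print(select_targets(hosts))  # ['10.0.0.2'] - the disguise evades targeting
```

Any self-reported feature an algorithm keys on can be spoofed this cheaply, which is one reason encoding ethical constraints directly into autonomous tooling is harder than stating them.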

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are far fewer images of tanks in the Baltic region or of particular kinds of weapon. The data is much more limited in secretive military contexts and it is potentially not being shared between nations to the extent that might be desirable for building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Talking about broader conversations, norms and regulations. I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space but there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see, for example, that while the UK and US have the power and resources to invest themselves, across NATO groups of countries are coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure as well and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies: not the most secretive ones, nor the raw data, but, for example, the principles it abides by or certain open source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures too. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.

Filed Under: Blog Article, Feature, Series Tagged With: AI, amy ertan, Artificial Intelligence, cyber, Cybersecurity, cyberwarfare, ed stacey, Military Cyber, offensive cyberwarfare, offensive cyberwarfare series

Offensive Cyber Series: Dr Daniel Moore on Cyber Operations, Part II

June 11, 2021 by Dr Daniel Moore and Ed Stacey

Photo Credit: Ecole polytechnique / Paris / France, licensed with CC BY-SA 2.0.

This is part II of Ed Stacey’s interview with Dr Daniel Moore on cyber operations for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about alliances more broadly, what sort of opportunities and challenges do allies face when conducting joint operations in cyberspace?

DM: Allied operations on networks – I am not a fan of the term “cyberspace” – are contentious as well. They are a good measure more sensitive than any conventional equivalent that you can think of. It is not like having a joint military operation: it means putting your sensitive infrastructure and capabilities on the line alongside an ally. That is not to say it does not happen, and there have been documented cases which were purportedly joint operations by multiple countries. So I think it will happen, but there are complexities involved. I know that NATO has already declared that they are together, as an alliance, bringing forward cyber capabilities that they will use jointly. I welcome that declaration, even if I am sceptical as to what it actually means.

I would tend to believe that, considering how porous NATO is as an entity and how there are varying levels of trust within NATO, truly sensitive capabilities will be kept off the table by individual member states in favour of their own arsenals and sets of strategic capabilities. This is not to say it is not possible, but it is unlikely that at a NATO level you will see joint operations that are truly strategic in nature. What you might see is allied members that are operating together. I do not think that, for example, a joint UK-US operation against a target is out of the question, especially if one brings a certain set of capabilities to the table and one brings others – somebody gives the tools, this unit has the relevant exploits, this intelligence organisation had already developed access to that adversary and so on. Melding that together has a lot of advantages, but it requires a level of operational intimacy that is higher than what you would be able to achieve at the NATO alliance level.

ES: Moving beyond the state, what role does the private sector play in the operational side of offensive cyber? Do we have the equivalent of private military contractors in cyberspace, for example?

DM: There is a massive role for the private sector across the entire operational chain within offensive cyber operations. I would say a few things on this. Yes, they cover the entire chain of operations and that includes vulnerability research, exploit development, malicious tool development and then even specific outfits that carry out the entire operational lifecycle, so actually conduct the intrusion itself for whatever purposes. In some cases, it is part of an industrial-defence complex like in the US, for example, where you have some of the giant players in defence developing offensive capabilities, both on the event- and presence-based side of things. And ostensibly you would have some of those folks contributing contractors and operators to actually facilitate operations.

But in other countries that have a more freeform or less mature public sector model for facilitating offensive cyber operations, the reliance on third party private organisations is immense. If you look, for example, at some of the US indictments against Iranian entities, you will see that they charged quite a few Iranian private companies for engaging in offensive cyber operations. The same happens in China as well, where you see private sector entities engaging in operations driven by public sector objectives. In some cases, they are entirely subsumed by a government entity, whereas in others they are just doing work on their behalf. In some cases, you actually see them use the same infrastructure in one beat for national security objectives, then the workday ends and they pivot and start doing ransomware to get some more cash in the evenings – using the same tools or infrastructure, or something slightly different. So, yes, the private sector plays an immense role throughout this entire ecosystem, mostly because the cost of entry is low and the opportunities are vast.

ES: Just to finish, you have a book coming out soon on offensive cyber. Can you tell us anything about what to expect and does it have a title or release date yet?

DM: The book is planned for release in October. It will be titled Offensive Cyber Operations: Understanding Intangible Warfare, and it is basically a heavily processed version of my PhD thesis that has been adapted, firstly, with some additional content to reflect more case studies, but also to appeal to anybody who is interested in the topic without necessarily having a background in cyber or military strategy and doctrine. So it is trying to bridge the gap and make the book accessible, exactly to dispel some of the ambiguities around the utility of cyber operations. Questions like: how are they currently being used? What can they be used for? What does the “cyber war” narrative mean? When does an offensive cyber operation actually qualify as an act of cyber warfare? And, most importantly, what are the key differences between how different countries approach offensive cyber operations? Things like organisational culture, different levels of maturity, strategic doctrine and even just circumstance really shape how countries approach the space.

So I tackle four case studies – Russia, the US, China and Iran – and each one of those countries has unique advantages and disadvantages, they bring something else to the table and have an entirely different set of circumstances for how they engage. For example, the Iranians are incredibly aggressive and loud in their offensive cyber operations. But the other side to this is that they lack discipline, their tools tend to be of a lower quality and while they are able to achieve tactical impact, it does not always translate to long-term success.

The US is very methodical in its approach – you can see, taste and smell the bureaucracy in every major operation that it does. But that bureaucratic entanglement and the constant tension between the National Security Agency, Cyber Command and other involved military entities results in a more ponderous approach to cyber operations, although those organisations obviously bring a tonne of access and capability.

With the Russians, you can clearly see how they do not address cyber operations as a distinct field. Instead, they look at the information spectrum more holistically, which is of pivotal importance to them – so shaping what is “the truth” and creating the narrative for longer-term strategic success is more important than the specifics. That being said, they are also one of the most prolific offensive actors that we have seen, including multiple attacks against global critical infrastructure and various aggressive worms that exacted a heavy toll from targets. So for Russia, if you start looking at their military doctrine, you can see just how much they borrow, not only from their past in electronic warfare but also their extensive past in information operations, and how those blend together to create a broader spectrum of information capabilities in which offensive cyber operations are just one component.

And finally, the Chinese are prolific actors in cyber espionage – provably so. They have significant technical capabilities, perhaps somewhat shy of their American counterparts, but they are high up there. They took interesting steps to solidify their cyber capabilities under a military mandate when they established the Strategic Support Force, which again – like the NCF – tried to resolve organisational tensions by coalescing those capabilities. But they are largely unproven in the offensive space. They do have an interesting scenario on their plate in which cyber could play a role, which is any attempt at reclaiming Taiwan. I look extensively in the book at that scenario and at how it shapes their offensive posture.

So the book is a combination of a broader analysis of the significance of cyber operations and then how they are concretely applied by different nations for different purposes.


The next interview in Strife’s Offensive Cyber Series is with Amy Ertan on AI and military innovation. It will be released in two parts on Thursday 17th and Friday 18th June 2021.

Filed Under: Blog Article, Feature, Series Tagged With: cyber, Cyber Operations, Cyber Security, daniel moore, Dr Daniel Moore, ed stacey, offensive cyberwarfare, offensive cyberwarfare series, Series, Strife series

Offensive Cyber Series: Dr Tim Stevens on Offensive Cyber in the 2020s, Part II

June 4, 2021 by Ed Stacey and Dr Tim Stevens

Photo Credit: UK Ministry of Defence, Crown Copyright.

This is part II of Ed Stacey’s interview with Dr Tim Stevens on offensive cyber in the 2020s for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about the relationship between offensive cyber and international law and ethics, how far have debates gone around when and how it is right to use these capabilities and how confident are we in their conclusions?

TS: Depending on who you ask, this issue is either settled or it is not. Now the point about the discussion around these capabilities is that, actually, when we think about international law and ethics, whether from a liberal democratic standpoint or otherwise, the conversation is not about the capabilities themselves, generally speaking – it is not about cyber weapons as such – but tends to be more about the targets of those capabilities and the effects.

In 2015, the United Nations (UN) Group of Governmental Experts (GGE) on information security, which was led by the permanent five – the UK, Russia, France, China and the US – but also involved twenty or so other countries, agreed that international law applies to this domain in its entirety. That includes the UN Charter, they found a couple of years later. There is also a big NATO process which says that international humanitarian law (IHL), which governs the use of force in war, also applies to this environment. And what comes out of that is an understanding of several things.

Firstly, that the use of any capabilities that you might describe as offensive – or indeed defensive, hypothetically – has to abide by the laws of war. So they have to be necessary and proportionate, and they have to observe distinction, in the sense that they cannot target civilians under normal circumstances. The 2015 GGE said that you could not target civilian infrastructure through cyber means and so on.

But the problem is that, as we look at the world around us, for all of those international legal constraints and associated ethical arguments about not targeting civilians, for example, what we see is the significant use by states and other actors of exactly these types of capabilities, targeting exactly these types of targets. We have seen civilian infrastructure being targeted by the Russians, for example in Kiev on a couple of occasions in winter, where they have essentially turned the electricity off. That is exactly the opposite of what they signed up to: they signed up to say that that was not legal under international law, yet they do it anyway.

So the question really is not whether international law applies. It is partly an issue of the details of how it applies, and then, if someone is in breach of it, of what you do about that, which throws you back into diplomacy and geopolitics. So already you have gone beyond the conversation about small bits of malicious software being used as offensive cyber capabilities and elevated it to the level of global diplomacy and geopolitics. And essentially, there is a split in the world between liberal democracies, who at least adhere for the most part to international law, and a small set of other countries who very clearly do not.

ES: Given that context, what are the prospects for regulating offensive cyber activity? Is there the potential for formal treaties and agreements or are we talking more about the gradual development of norms of responsible state behaviour?

TS: This is the live question. Although we have an emerging understanding of the potential tools with which we might regulate these capabilities – including IHL and norms of responsible state behaviour – we have not got to the point of saying, for example, that we are going to have a global treaty. But there are multi-stakeholder efforts to do something that looks a little like global agreement on, for example, the use of capabilities for targeting civilian infrastructure. There is something called the Cybersecurity Tech Accord, another is the Paris Call for Trust and Security in Cyberspace, and there are half a dozen others for which offensive cyber, even if not an explicit focus, is part of a suite of behaviours they wish to develop norms, and potentially even regulation, around.

But it is incredibly difficult. The capabilities themselves are made of code: they are 1s and 0s, they zip around global networks, they are very difficult to interdict, they multiply, they distribute and they can attack a thousand different systems at once if they are done in a very distributed fashion. How do you tell where they come from? They do not come with a return address as the cliché goes. How do you tell who is responsible? Because no-one is going to own up to them. How do you tell if they are being developed? Well you cannot because they are done in secret. You can have a military parade in the streets of Washington DC, Pyongyang or Moscow, but you cannot do the same with cyber capabilities.

So it is very difficult to monitor both their use and their retention and development. And if nobody does own up to them, which is commonly the case, how do you punish anyone for breaching emerging norms or established international law? It is incredibly difficult. So the prospect for formal regulation anytime soon is remote.

ES: So far we have talked about some quite complex issues. Given the risks involved in developing and deploying these types of capabilities, what do you think needs to happen to improve public understanding of offensive cyber to the point that we can have a proper discussion about those risks?

TS: Public understanding of offensive cyber is not good and that is not the fault of the public. There are great journalists out there who take care in communicating these issues, and then there are others who have just been put on a story by their sub-editor and expected to come up to speed in the next half hour to put some copy out. It is really difficult to generate nuanced public understanding of things when the media environment is what it is.

Now I am not blaming the media here; I am just saying that that is one of the factors that plays into it. Because we have a role as academics as well and, ultimately, a lot of this falls to governments to communicate, which has conventionally not been great. Partly this is because a lot of the use and development of these capabilities comes from behind the classification barriers of national security, defence and intelligence. We have heard bits about their use in the battlespace against Islamic State in Iraq and Syria that has leaked out in interviews with senior decision-makers in the US and the UK, but generally not a lot else.

What we tend to get is policy statements saying: we have a sovereign offensive cyber capability and we are going to use it at a time and place of our choosing against this set of adversaries, which are always hostile states, terrorist groups, serious organised criminals and so on. But it does not encourage much public debate if everything that comes out in policy then gets called a cyber war capability because actions to stop child sexual exploitation by serious organised crime groups are not a war-like activity – they fall in a different space and yet they are covered by this cyber war moniker.

Now there is an emerging debate around offensive cyber. Germany, which is constitutionally quite constrained when it comes to offensive capabilities, has had a conversation about it. There is a discussion in the Netherlands, also in the US about their new cyber posture – which is much more forward leaning than previous ones – and we are beginning to have a conversation in the UK as well. But a lot of that has fallen to academics to do and, I guess, I am part of that group who are looking at this issue and trying to generate more of a public conversation.

But it is difficult and the response you will sometimes get from government is: we do not need to have a conversation because we have already declared that everything we do is in accordance with our obligations under international law – we will do this against a set of adversaries that are clearly causing the nation harm and so on. That is fine. We are not doubting that that is their statement; we would just like to know a little bit more about the circumstances in which you would use these capabilities.

What, for example, is the new National Cyber Force going to do? How is it going to be structured? What are the lines of responsibility? Because one of the weird things about joint military-intelligence offensive cyber operations is that, in a country like the UK, you have the defence secretary signing off on one side and the foreign secretary signing off on the other because you are involving both the military and GCHQ, which have different lines of authority. So where does responsibility lie? Accountability? What happens if something goes wrong? What is your exact interpretation of international law? To be fair to the UK, they have set that interpretation out very clearly.

But there is more than just an academic interest here. If this is the future of conflict in some fashion and it has societal effects, then we need to have a conversation about whether these are the capabilities that we want to possess and deploy. Not least if the possession and deployment of those capabilities generates norms of state behaviour that include the use of cyber conflict. Is that something that we want to do in societies of the 21st century that are hugely dependent upon computer networks and deeply interconnected with other countries?

Those are the types of questions that we need to raise and we also need to raise the quality of public understanding. That is partly the job of academia and partly the job of media, but certainly the job of government.


The next interview in Strife’s Offensive Cyber Series is with Dr Daniel Moore on cyber operations. It will be released in two parts on Thursday 10th and Friday 11th June 2021.


Offensive Cyber Series: Dr Tim Stevens on Offensive Cyber in the 2020s, Part I

June 3, 2021 by Ed Stacey and Dr Tim Stevens

Photo Credit: AirmanMagazine, licensed under CC BY-NC 2.0

On Wednesday 3rd March, Strife Interviewer Ed Stacey sat down with Dr Tim Stevens to discuss the state of play in offensive cyber in the 2020s. As part one of Strife’s Offensive Cyber Series, Dr Stevens introduces the topic and offers his thoughts on a range of topical debates, from the utility of offensive cyber capabilities to questions around international law and ethics and the UK’s recently avowed National Cyber Force.

Ed Stacey: Tim, as you know, this interview series is all about offensive cyber. This is quite a slippery term, so could you perhaps kick us off with a working definition?

Tim Stevens: You will be unsurprised to hear that there is no working definition, or at least no consensus on a definition, of what offensive cyber is. Obviously, it is a term that attempts to draw some kind of analogy with other capabilities that can be used for offensive purposes – one of which is obviously weapons, another would be munitions. But actually, offensive cyber is a lot more difficult to pin down because it is not kinetic in any conventional sense: it is not something that you can throw, shoot or drop on someone to cause damage.

But what offensive cyber tries to get at is the idea that through computer code, so little packets of software that can be sent through computer networks, you are going to attempt to deny, degrade, disrupt or even destroy something that your enemy holds to be of value. This principally could be data itself or it could be the computer systems and computer networks that data is held on.

Now offensive cyber is also being used not just in a military context but an intelligence context too, so it has some relationships with espionage or at least the covert activities of intelligence agencies. It could conceivably be used not in the military ‘break things’ sense but in the more inflected activities of intelligence, like subversion or sabotage, which occupy a slightly weird space and do not look like acts of war, for example.

ES: Terms such as cyber war, cyber attack and cyber weapons are used quite loosely in public discourse. Do you think we need to be more precise with our language when we are talking about offensive cyber?

TS: I think it would help if we had in common discourse some understanding that perhaps we are overhyping some of the phenomena that we are describing, and using heavily militarised language like cyber war really does not help. Cyber attacks are usually nothing of the sort and cyber weapons usually cannot be classed as weapons, for example.

To take the cyber war example. When we think about cyber war, these days it usually means some kind of state of hostilities operating between two states, in which they are battering each other with cyber weapons of some description or another. Now apart from the fact that we have not seen this, it is also unlikely that we will see it. I think if two states are to be in a declared or actual state of cyber hostilities, there will be other issues – other types of operations in other domains – that are going to be just as relevant. So this idea of a standalone cyber war is not helpful.

Cyber warfare, on the other hand, is helpful because that is what militaries and intelligence agencies arguably are involved in at present – they are fighting, conflicting and contesting cyberspace as an operational domain. And they are doing that through offensive cyber, in part, but also through other activities that they can bring to bear on that domain. So cyber warfare has some utility; it is a form of warfighting or conflict through cyber means.

Cyber attacks, well that is just used to denote anything that you do not like. Whether it is an attack in any kind of conventional or attenuated sense is really irrelevant. If your adversary – whether they are a criminal, terrorist, state or proxy – has done something to your networks that you do not like, you call it a cyber attack, even though it might be nothing of the sort. It might be one of billions of automated pings or bots that confront your networks every day as a matter of course. Or it could be a cunning, socially-engineered and sophisticated cyber operation against something that you hold of value. The two are clearly not the same, but they are all being called cyber attacks in popular discourse, and the media are just as guilty of this as politicians and occasionally academics and civil society too. So I do think it is important to make these distinctions.

The issue with cyber weapons is whether these types of capabilities can actually be described as weapons, and again there is no consensus. Conventionally, weapons have to have the capacity to hurt by virtue of, say, ballistics. If you think about discussions around chemical and biological weapons, people are sometimes uncomfortable calling them weapons in any conventional sense too. And the thing about cyber weapons is that, as of yet, no direct physical harm has been caused by any of those capabilities. Instead, what happens is that there is attenuated secondary harm: when, for example, you change the 1s and 0s in an incubator in an intensive care unit and as a result of that someone dies, the capability does not directly harm that person. So that is the kind of debate that is being had about whether these capabilities are weapons or not.

ES: Thinking about the utility of offensive cyber, why are states developing these types of capabilities and what do they offer that other capabilities do not?

TS: To think about the broader utility or framing of these capabilities is, I think, to return to the [revolution in military affairs] of the late 1980s and early 1990s, which continued in subsequent decades in western military affairs: the suggestion that we are shifting towards informationalised, precision-strike, stand-off warfare that prioritises our own force protection and the ability to cause effects hundreds, if not thousands, of miles away.

Clearly, if you are sitting at a computer in one part of the world and you wish to attack another computer on the other side of the world, it is much easier to do that through computer networks than it is through conventional means: the mode of operation, the platform and the technology are much easier to get hold of. And if you can create the same effects remotely as if you were standing a hundred yards or half a mile away, then why would you not? You do not have to put your troops, or indeed your intelligence agents, in harm’s way. If you do not have to put a human asset into a foreign country to achieve an effect, why would you? These are the kinds of attractions that states are finding in these sorts of capabilities.

Another one, of course, is that it is relatively cheap. It is much easier to hire people to develop these kinds of capabilities than it is to develop a new weapon system. Essentially, if the weapon system you need is, if not quite an off-the-shelf computer system, then something existing that can be adapted, it is much cheaper than trying to develop a new line of fighter jet, precision-guided munition, helicopter or battleship of any description. So there is an attraction there.

Another thing is this idea of effects. As I mentioned previously, if you can create some kind of effect that generates advantage over your adversary – mainly operational or strategic, but also tactical – through the use of computer networks, that has to be attractive: it is cheaper, it does not put your troops in harm’s way and, importantly, it does not immediately escalate to something that looks like a conventional shooting war. Because if people are not being directly harmed, but yet you are causing your adversary to change their mind or behaviour in some fashion, that is incredibly seductive for a commander or state that is looking to improve, enhance or extend their operational and strategic toolbox. So that is the general idea behind why these capabilities are attractive.

ES: Looking at the other side of things, what are the limits of offensive cyber?

TS: That is a good question and an open one too. These kinds of capabilities may be attractive to countries and their militaries and intelligence agencies, but the jury is out on how effective they actually are. Because it turns out, for various reasons, that it is actually quite difficult to get your adversary to do what you want through cyber means. Partly this is because these capabilities are not as easy to control as we might think, and partly it is because, as I mentioned earlier, causing kinetic effects to actually change someone’s mind in a visceral sense is very difficult.

It is also difficult because you cannot keep doing it with the same capabilities. Once you have developed an advanced offensive cyber capability, essentially you can only use it once, because then your enemy will see the code, understand the vulnerability that has been exploited, patch their systems, and then that vulnerability disappears. So you cannot keep holding your enemy’s assets at risk, which means that even if something happens once – and given that no computer system is demonstrably secure, it is going to happen at some point – you know that it is a one-off attack. Because you know, or at least you hope, that your adversary has not got the capability to keep punishing you in that way. So that means you can roll with the punches if you get attacked or exploited, because you are not expecting a follow-up that is really going to double down and force you to change your mind or your behaviour.

So for all the attraction of these capabilities, there are limits. Now that is not to say that there are limits to the imagination of people who wish to develop and deploy these things, and I am not saying for a second that, with this realisation that there are limits to their utility, states are going to stop developing them, because they are not. In fact, what I think is going to happen is what you are seeing at the moment, which is that states and other actors are going to continue to experiment with them until they find some way of generating the higher-level effects that they wish.

To bring that round to a conclusion: tactically, they can be very useful; operationally, they can generate some really interesting effects; strategically, it looks very difficult to generate the effects that you want.

Part II of this interview will be published tomorrow on Friday 4th June 2021.

