
Strife

The Academic Blog of the Department of War Studies, King's College London


Cybersecurity

Economically isolated, North Korea now turns to Cyberspace

July 13, 2021 by Carlotta Rinaudo

North Korean leader Kim Jong-un surrounded by military personnel. Photo Source: Flickr, licensed under Creative Commons.

For years, the international community has slapped North Korea with painful economic sanctions aimed at constraining its nuclear ambitions. The trade of arms and military equipment has been prohibited, exports of coal and minerals have been banned, and the assets of North Korean officials have been frozen. To make matters worse, the ongoing Covid-19 pandemic has hit Pyongyang harder than any previous sanction. After the closure of its border with China, trade with Beijing fell by 95%, leading to a scarcity of food and basic necessities such as soybean oil, sugar, and flour. Trains and flights in and out of the country have been halted since March 2020, freezing tourism and labor exports, two major sources of foreign currency. It would therefore be easy to conclude that North Korea has recently been living in total economic isolation. And it would, were it not for cyberspace.

In the physical world, a country like North Korea can be forced into isolation. Yet in cyberspace, Pyongyang is everybody’s neighbor. Often described as the fifth domain of warfare, cyberspace has a low cost of entry while offering a high degree of anonymity. Pyongyang has seemingly exploited this domain to circumvent economic sanctions, raising millions of dollars through ransomware attacks. North Korean hackers have been accused of hacking international financial institutions to steal foreign currency, which is in turn used to finance Pyongyang’s nuclear program. For this reason, they have recently been branded “the world’s leading bank robbers”. North Korean hackers might also have been the architects behind the 2014 cyber-attack on Sony Pictures Entertainment. The entertainment company was about to release “The Interview”, a comedy that portrayed two journalists assassinating Kim Jong-un in Pyongyang. North Korea’s requests to cease production of the movie had largely been ignored. Then, in November, Sony’s employees entered their offices and found images of red skeletons on their computers. “We’ve obtained all your internal data, including your secrets and top secrets”, read a message on the screens, “if you don’t obey us, we’ll release the data shown below to the world.” This makes North Korea a rare cyber-creature: a country using cyberattacks not only for espionage, but also to fund its own operations and, even more strangely, to punish comedic depictions of its leader.

In 2017, the Trump administration accused North Korea of being responsible for the WannaCry malicious software, which blocked computers in more than 150 countries. In response, Pyongyang denied any responsibility and declared “we have nothing to do with cyberattacks.” Following the malware intrusion, victims were asked for a ransom payment in exchange for unlocking their systems and data. In two hospitals in Jakarta, the malware blocked patient files, including medication records. In the UK, hospitals had to cancel thousands of medical appointments after losing access to computers. In China, some gas stations had to ask their customers to pay by cash only, after their digital payment system stopped working. In France, the carmaker Renault had to suspend its production in order to stop the spread of the worm. In different ways, the WannaCry computer worm caused unexpected levels of disruption all around the world.

Bitcoin as a new source of income for the Kim regime. Photo Credit: Flickr, licensed under Creative Commons.

Constrained by a set of international sanctions and by the destructive force of the ongoing pandemic, Pyongyang is now searching for new means to ensure its survival in a hostile environment. And cyberspace offers plenty of opportunities. Following the public’s growing interest in digital currencies, North Korean hackers have now turned their attention to the world of cryptocurrencies. They have allegedly built at least nine cryptocurrency apps to trade cryptocurrencies and create digital wallets, such as Ants2Whale, CoinGo, and iCryptoFX, each designed with a back door that can give North Korean hackers access to computer systems. In August 2020, one of these apps was used to break into a financial institution in New York and steal $11.8 million in cryptocurrency. In addition, exchanges that trade Bitcoin and other cryptocurrencies have fallen victim to North Korean cyberattacks, as these exchanges offer easy access to storage facilities known as “hot wallets”: hot, because they are connected to the Internet, as opposed to offline “cold wallets”. In total, according to a UN report, North Korea might have stolen more than $300 million in cryptocurrencies over recent months, partly to support its nuclear program.

In the past, most of North Korea’s criminal operations involved the smuggling of cigarettes, counterfeit money, trading of endangered species, and illegal drugs such as methamphetamine. Today, cyberspace allows conventionally weaker actors to challenge their stronger competitors more easily. North Korea can thus pursue an asymmetric strategy to put pressure on the international community: through cyberattacks, Pyongyang is not only countering its economic isolation, but it is also funding its nuclear program.

It is hard for the international community to find an effective response: retaliation seems highly ineffective, because North Korea has a primitive infrastructure that is less vulnerable to cyberattacks. Imposing further sanctions also appears a non-viable option: many sanctions have already been imposed, and North Korea is becoming increasingly adept at finding workarounds to its economic isolation.

For decades, North Korea has searched for solutions to the same old questions: how to mitigate and instrumentalize its weaknesses to stay relevant in a hostile international system. Now, it seems that cyberspace offers the answers.

Filed Under: Blog Article, Feature Tagged With: Carlotta Rinaudo, cyber, Cybersecurity, Cyberspace, North Korea

Chinese cyber coercion in the Asia-Pacific? Recent cyber operations in South Korea, Hong Kong, and India.

June 21, 2021 by Orlanda Gill

Photo by Markus Spiske on Unsplash

Writings on Chinese cyber operations tend to focus on cyber espionage and the stealing of state secrets for China’s military modernisation. By comparison, cyber coercion in general, and Chinese cyber coercion in particular, is infrequently discussed. This has to do with the ambiguity surrounding the definition of cyber coercion and the challenges of attribution.

Chinese cyber coercion is understood as a subset of what is known as weishe. Weishe is, in direct English translation, understood as “deterrence”, but is conceptually understood as a combination of compellence and deterrence. In theory, cyber coercion thus operates by compelling actors through cyber operations to produce an effect called deterrence wherein actors are deterred from decisions that are harmful to China’s interest. This role of compellence in cyber deterrence is made clearer when contrasted to the cyber deterrence strategies discussed so far in the United States and the United Kingdom. The use of cyber deterrence in the respective countries appears mostly in reference to a retaliation to a cyber-attack or in building domestic resilience to make an attack costly. In contrast, in the Chinese context, compellence and deterrence are one, the role of compellence is encouraged, and a cyber-attack does not seem to be a prerequisite to the use of cyber deterrence.

The theoretical understanding of weishe, however, is imperfect in practice. Whether deterrence is truly a component of weishe is subject to disagreement and is debated amongst Chinese analysts. If this is the case, then what lens should be used to analyse potential Chinese cyber coercion?

Observed practice of cyber coercion may be a more helpful lens than its theoretical counterpart. Observed practice can include the combination of vague threats, an implied actor, and an implicit desired behaviour. In a greater layer of complexity, consistency across the elements’ contents is not necessary. For instance, cyber coercion may include explicit threats, an implied actor, and an explicit desired behaviour. Therefore, observed practice captures a more detailed variation of what is understood as cyber coercion—something which is illustrated in the following three cases.

The cyberattacks against South Korea in 2017 illustrate one of the clearer cases of Chinese cyber coercion, specifically cyber-enabled economic coercion. They also demonstrate the use of cyber deterrence to deter a country from a political decision judged harmful to China’s security. On February 7, 2016, officials from the United States and South Korea announced discussions on deploying the Terminal High Altitude Area Defense (THAAD) missile defence system. Beijing, however, disapproved of the system’s X-band AN/TPY-2 radar, which would allow a detection range of approximately 3,000 miles. This would mean potential US military monitoring of activity in China and the undermining of China’s nuclear deterrent.

Following the announcement of THAAD, there were reported increases in cyber intrusions. In the first half of 2017, there were over 6,000 cyber intrusions from China against the South Korean Foreign Ministry’s servers, up from 4,600 in 2016. Furthermore, Lotte Group, a South Korean-Japanese conglomerate, was also attacked. Chinese internet protocol addresses took parts of Lotte Group’s storefront offline for several days, and Chinese e-commerce sites stopped co-operation with Lotte. This has been connected to Lotte Group permitting the South Korean government to use its golf course to deploy THAAD. South Korea did end up agreeing to limitations on THAAD, but it is difficult to say whether this was uniquely due to the cyber impacts, given the presence of other coercive levers. For instance, the Chinese government shut nearly all of Lotte’s physical stores in China. Cyber coercion, however, does signal great displeasure, and the intentions can be perceived as the use of compellence to deter further plans regarding THAAD.

Whilst the THAAD case outlines more clearly what happened and who the suspect is, other potential cases do not. Cyber operations in Hong Kong and India demonstrate cases of an explicit threat, an implied desire, and an implied actor.

Over the course of Xi Jinping’s rule, a tighter grip has been imposed on Hong Kong and protests have become more dangerous to participate in. Joshua Wong and Agnes Chow, who were the faces of Hong Kong’s protests against the Chinese Communist Party’s grip, are now imprisoned. The 2019-2020 Anti-Extradition Law Amendment Bill Movement, a series of protests against the Extradition Bill, coincided with the emergence of HKLeaks, a doxing website which appeared in late August 2019. The website doxes anti-government protestors, revealing personally identifiable information (PII) such as headshots, social media handles, phone numbers, and alleged misdeeds. The threat is explicit in that it violates an individual’s privacy and makes the struggle for a freer Hong Kong even more costly. There have even been instances of malicious targeting: in one case, a doxed female reporter from Apple Daily, a Hong Kong tabloid known for criticising the Chinese Communist Party, started receiving threatening calls.

The argument that China is behind this is difficult to build, although there are subtle implications behind HKLeaks that tie it to state-sponsored actors and potentially to the Chinese state. Aside from China’s interest in Hong Kong, looking at who or what HKLeaks is connected to is informative. HKLeaks has been linked to social media accounts similar to those taken down for being fake accounts linked to state-backed actors, which were also used in disinformation campaigns against Taiwan. Some indication of which state may be involved comes from anecdotal evidence. One alleged victim of HKLeaks says that, when returning from a business trip to mainland China, he gave Chinese police at the Hong Kong-China border a “fake address I’ve never given to anyone”. His address afterwards appeared on HKLeaks. Whilst the link between cause and effect is unclear, these disparate points of evidence could arguably form a weakly implied Chinese state as actor.

HKLeaks is also viewed positively and engaged with by the Chinese state. For instance, the official Weibo account of China’s state-owned TV network “published a video showcasing the HKLeaks website, and urged followers to ‘act together’ and ‘tear off the masks of the rioters’”. This post was then shared by “the Weibo accounts of local Chinese police, local media outlets, branches of Chinese Communist Youth League, and others.” Again, the actor cannot be established, but there is certainly a perception of an implied actor, an implied (or explicit, depending on one’s perception) desire to stop the protests, and the threat of the violation of privacy and potential harm to the individual. This arguably constitutes cyber coercion, rendered perhaps more threatening by the ambiguity over how members are being doxed and by not knowing the exact actor.

The case of the Mumbai power outage in October 2020 is a similar case of implied Chinese involvement. Here, however, connections to the Chinese state are slightly clearer and less speculative. Speculation about China’s involvement is found across Foreign Affairs, the NY Times, and The Diplomat, and domestically amongst Indian officials. The main source of information, however, is a report by Recorded Future, a private cybersecurity company. On February 28, 2021, Recorded Future published a report demonstrating a connection between Red Echo, a Chinese state-sponsored group, and the installation of malware in civilian infrastructure such as “electric power organisations, seaports, and railways.” This cyber intrusion is thought to connect with the border conflict occurring at the time and has led to speculation about a connection to the Mumbai power outage. According to retired cyber expert Lt. Gen. D.S. Hooda, the power outage acted as a signal from China to indicate “that we can and we have the capability to do this in times of a crisis.” Such a signal draws parallels to the cyber intrusions concerning South Korea and THAAD.

All three cases demonstrate the inherent limitations in analysing cyber coercion (as deterrence through compellence). Even if China is implied by political context and by the malware, a case clearly identifying the Chinese state’s direct involvement is difficult to build without clear attribution. Nevertheless, if China is definitively involved, the utility of being an implied actor may help with information operations elsewhere, wherein appearing benign is used to gather support for the country. The cases of Mumbai and South Korea also raise interesting questions about compellence and deterrence, with China potentially being seen to blur the two. Cyber coercion overall remains somewhat enigmatic. The ambiguity is likely advantageous for the actor(s) behind the acts of cyber coercion: it helps reduce the chances of liability, permitting a more peaceful (less conflict-inducing) approach to manipulating and shaping another state to one’s desires.

Filed Under: Blog Article, Feature, Women in Writing Tagged With: China, Cybersecurity, Deterrence, orlanda gill, women in writing

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote controlled Mark 8 Wheel Barrow Counter IED Robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have impact in 20 to 30 years’ time. So we are seeing that strategic discussion, but it costs so much that it is just a matter of states buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies or authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain?

AE: There are two aspects to this: one is the technical security angle and then the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Now especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed deep learning or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong and we have seen examples of that in the civic space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather is it picking up the right thing? Obviously, within a conflict environment you do not want to detect a threat where there is not one or miss something.

Second, there is the threat that your algorithm or data may be compromised and you would not know. So this could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that it is not abnormal activity and no flags are raised. So threat modelling that fails to consider the creativity of attackers, or the insufficiency of the algorithm, could lead to something being deployed that is not fit for purpose.
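The poisoning scenario Ertan describes can be made concrete with a deliberately naive sketch. None of this reflects a real detection system: the traffic figures and the three-sigma rule are invented for illustration. The point is simply that a detector which learns "normal" from its training data will stop flagging an attack if the attacker can seed that data with their own activity.

```python
import statistics

def build_detector(baseline):
    """Fit a naive anomaly detector: flag any traffic volume more than
    three standard deviations above the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    threshold = mean + 3 * stdev
    return lambda value: value > threshold

# Clean training data: normal outbound traffic (MB per hour).
clean_baseline = [10, 12, 11, 9, 10, 13, 11, 12]
detector = build_detector(clean_baseline)
attack_volume = 80  # a large exfiltration burst
print(detector(attack_volume))  # True: flagged as abnormal

# Poisoned training data: the attacker has quietly inserted
# high-volume records, so the learned notion of "normal" now
# includes exfiltration-sized transfers.
poisoned_baseline = clean_baseline + [70, 75, 85, 90]
poisoned_detector = build_detector(poisoned_baseline)
print(poisoned_detector(attack_volume))  # False: no flag raised
```

The same attack volume is flagged or missed depending entirely on what the detector was trained on, which is why tampering with training data can be as damaging as tampering with the algorithm itself.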

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
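The hospital example can likewise be sketched in a toy form. The keyword filter and the site metadata below are invented purely for illustration; a real targeting classifier would be far more sophisticated, but the gaming dynamic is the same: any observable rule the algorithm relies on can be mimicked by the sites it is meant to target.

```python
# Categories the hypothetical algorithm must never target.
PROTECTED_KEYWORDS = {"hospital", "clinic", "patient", "emergency"}

def is_protected(site_metadata: str) -> bool:
    """Naive targeting filter: treat a site as off-limits if its
    metadata mentions any medical keyword."""
    words = set(site_metadata.lower().split())
    return bool(words & PROTECTED_KEYWORDS)

# The filter behaves as intended on honest metadata...
print(is_protected("st mary hospital admissions portal"))  # True
print(is_protected("regional power grid control panel"))   # False

# ...but an operator of any infrastructure can game it by
# dressing their site up with protected keywords.
disguised = "regional power grid control panel hospital clinic"
print(is_protected(disguised))  # True: wrongly treated as off-limits
```

Once the rule is known (or inferred), every site can make itself "look like a hospital" to the algorithm, which is exactly the subversion adversarial AI exploits.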

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are way fewer images of tanks in the Baltic region or particular kinds of weapon. The data is much more limited in secretive military contexts and it potentially is not being shared between nations to the extent that might be desirable when it comes to building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Talking about broader conversations, norms and regulations. I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space but there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see, for example, that while the UK and US have the power and resources to invest themselves, across NATO groups of countries are coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure as well and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies – not the most secretive, nor the raw data, but, for example, the principles it abides by or certain open source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures too. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.

Filed Under: Blog Article, Feature, Series Tagged With: AI, amy ertan, Artificial Intelligence, cyber, Cybersecurity, cyberwarfare, ed stacey, Military Cyber, offensive cyberwarfare, offensive cyberwarfare series

Cybersecurity from Trump to Biden: a triumph for cybersecurity?

April 7, 2021 by Harriet Turner

USAF, 2017 - A cyber operator.

After what felt like a lengthy and tortuous transition period, Joe Biden became the United States’ 46th president on the 20th January 2021. Whilst some feel relieved and others disgruntled by the result, one thing is abundantly clear: the renewed and elevated focus on cybersecurity under a Biden administration is certainly promising.

It is not news that the United States is becoming increasingly vulnerable to cyber threats; the recent SolarWinds and Microsoft attacks aptly epitomise the extent of the vulnerability not only of the US, but of the entire world. Our ever-increasing dependency on technology also suggests that the impact of the threat will grow accordingly, as demonstrated by our reliance on technology during the COVID-19 pandemic. Indeed, the survival and maintenance of our livelihoods, relationships and education, to name a few, currently depend on our access to technology. As both Artificial Intelligence (AI) and the Internet of Things (IoT) come to greater fruition and almost every part of our lives is inextricably connected to technology – from the appliances in our homes to our modes of transport – US citizens will no doubt become dangerously susceptible to disruptive hacks. Looking forward, an elevated cybersecurity focus is absolutely necessary to appropriately protect US intellectual property, prevent psychological and physical damage to its people and their property, and preserve the US’ status as a major player on the world stage.

Four Years of Cybersecurity Under Trump

As one of the most serious threats facing the US, cybersecurity should be dealt with earnestly and should never have taken the backseat that it has in recent years. Despite this, skepticism surrounding cybersecurity in the US since the beginning of Donald Trump’s presidency was rife, and rightly so. Trump consistently failed to acknowledge or confront the Kremlin’s interference in the 2016 election – to the extent that this failure was considered a hallmark of his presidency – posing an enormous threat to the fabric of US democracy. The Mueller report conclusively found that Russia executed ‘a social media campaign that favoured presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton’ and attempted to sow mass discord across the US.

To put it into perspective, when two Russian hacking groups, Cozy Bear and Fancy Bear, hacked the Democratic National Committee (DNC), Trump was quick to state, prior to any real investigation or analysis, that “it was the DNC that did the ‘hacking’ as a way to distract from the many issues facing their deeply flawed candidate and failed party leader”. This was soon found to be untrue. Worse yet, Trump had actively urged Russia to leak Hillary Clinton’s ‘missing emails’. Seemingly, Trump was more concerned with his relationship with Vladimir Putin than with the security of the country he was presiding over. Or was he concerned that confronting and acknowledging this interference would highlight that his electoral victory was not so victorious after all?

More recently, Trump flippantly blamed China for the SolarWinds attack, despite evidence pointing to Russia, and tweeted: “Russia, Russia, Russia is the priority chant for when anything happens because Lamestream is, for mostly financial reasons, petrified of discussing the possibility that it may be China (it may!).” Again, this is completely misaligned with what his own Secretary of State and intelligence community had said, which demonstrates a complete lack of coherence in the administration where cybersecurity is concerned. Sadly, in many ways, cybersecurity clearly suffered grave negligence under the Trump administration.

The evolution of “defend forward” under Biden?

Nonetheless, Trump’s cybersecurity strategy is likely to leave somewhat of a legacy, and its more active and bold tone is likely to evolve under the Biden administration. This can be inferred from the Biden-Harris statement released in the wake of the SolarWinds attack, in which they echoed much of defend forward and stated that “a good defense isn’t enough”. Specifically, one concept from the Trump administration’s cyber vision that is likely to mature is persistent engagement - that is, the idea that by continuously contesting an adversary and ‘forcing them to expend more resources on defence and rebuild capabilities’, the adversary becomes less effective and the offender achieves superiority. Persistent engagement could help in the construction of norms of acceptable and non-acceptable behaviour in cyberspace through a process of tacit bargaining, because states can gauge an understanding of adversaries’ so-called red lines. Therefore, it could prove to be a useful method for creating deterrence structures within cyberspace going forward. Although, as the Biden administration seeks to strengthen its offensive capabilities, it should consider that prepositioning and reconnoitring in an adversary’s network could also have undesirable escalatory effects. This raises the important question of how the US would de-escalate if escalation occurred.

Biden Takes the Baton: A Hopeful Future for Cybersecurity?

Even in the early stages of his presidency, Biden has demonstrated a much more earnest attitude towards cybersecurity. This is clear in his assembly of a strong cybersecurity team, endorsed by many public and private sector figures and described by Tom Burt, a vice president at Microsoft, as "world-class". Biden has also demonstrated a willingness to confront adversaries rather than "sit idly by in the face of cyber assaults", which stands in direct contrast to Trump’s approach to confrontation (or lack thereof). Promisingly, Biden’s National Security Advisor, Jake Sullivan, has also made it clear that the administration is willing to use a combination of seen and unseen tools to "ensure that Russia understands where the United States draws the line on this kind of activity."

Another reassuring factor is Biden’s desire to work with other countries and nurture stronger bilateral and multilateral partnerships after a period of neglect. Because cybersecurity ultimately knows no borders, it is inherently a team sport, and this is a promising prospect. As Charlie Croom once said, "we all have knowledge and experiences that when shared make us better than we individually could be", and this is especially applicable where states and cybersecurity are concerned. In particular, cyber diplomacy will be a crucial part of fostering a sense of team spirit among states by guaranteeing constant dialogue and, in turn, preventing unnecessary escalation or wrongful attribution. Under Trump, however, US cyber diplomacy efforts were undermined by Rex Tillerson’s decision to abolish the Office of the Coordinator for Cyber Issues. Fortunately, Biden is likely to enlist Jen Easterly as National Cyber Director within the Executive Office of the President, raising the profile of cybersecurity as a clear priority. The hope is that Easterly will then be able to coordinate the government’s cyber capabilities and bolster US cyber diplomacy.

One aspect that could have made Biden’s cybersecurity approach more encouraging is the appointment of more private sector experts. Those set to hold leadership positions come largely from the public sector, which is wildly disproportionate given that the vast majority of US internet infrastructure is owned by the private sector. A fusion of public and private sector expertise would better reflect this dynamic and provide a richer pool of knowledge. It would also help create a more effective channel of communication between the two, through which threat information could be shared more easily and effectively. Importantly, appointing more individuals from the private sector would likely provide an opportunity to bring greater clarity to the public-private partnership in the US, as ‘there are no clear statements outlining legal authority, responsibility and rights across the diverse set of relationships that the government maintain with the private sector’. Ultimately, this would give both sectors the direction and confidence to make definitive decisions within their remits of responsibility.

Conclusion

Overall, if there is anything we can conclude from this, it is that losing Trump will hopefully be a triumph for cybersecurity in the US. A revived focus on cybersecurity and the employment of offensive and defensive measures by a world-class team of experts mean that projections for the future of US cybersecurity are largely optimistic. However, in the absence of private sector appointees, it is hoped that the Biden administration will make serious efforts to nurture a stronger public-private partnership in other ways. Ultimately, the administration’s responses to the SolarWinds and Microsoft attacks should paint a much clearer picture of what US cybersecurity will look like in the years ahead.

 

Harriet is an MA National Security Studies student at King’s College London and a recent Politics and International Relations graduate. Her final year dissertation explored the UK’s decision to renew Trident and was titled ‘Chasing Status: was status the dominant driver of the UK’s decision to renew its Trident nuclear deterrent in 2016?’ Her broader writing interests include cybersecurity strategy and policy, radicalisation, counter-terrorism, status and emotions in an International Relations context and non-proliferation.

Filed Under: Feature Tagged With: Biden, Cybersecurity, Trump, us, us politics

Enhancing Cyber Wargames: The Crucial Role of Informed Games Design

January 11, 2021 by Amy Ertan and Peadar Callaghan

by Amy Ertan and Peadar Callaghan

“Risk - Onyx Edition (Ghosts of board games past)” by derekGavey.
Licensed under Creative Commons

 

‘A game capable of simulating every aspect of war would become war.’

Martin van Creveld, Wargames: From Gladiators to Gigabytes, 2013.

 

The launch of the MoD’s Defence Science and Technology Laboratory’s first Defence Wargaming Centre in December 2019 is an opportunity to rethink wargame design. While current games do enable some knowledge transfer, the tried-and-tested techniques employed by the serious games community could enhance these exercises with more effective strategising and training mechanisms. This article highlights how the characteristics of cyberspace require a distinct approach to wargames, and draws on established games design principles to offer recommendations for improved development and practice of cyber wargames.

The use of games in educational settings has been recognised since the fourth century BC. Wargames, however, are a more recent invention, first emerging in their modern form in the Prussian Army. Kriegsspiel, as it was called, was used to teach tactics to officers as part of the Prussian military reforms in the wake of devastating defeats at the hands of Napoleon. Ever since, wargames have been a feature of military training. The UK Ministry of Defence’s (MoD) Red Teaming Guide defines a wargame as ‘a scenario-based warfare model in which the outcome and sequence of events affect, and are affected by, the decisions made by the players’. As the MoD’s Wargaming Handbook notes, such games can be used to simulate conflicts in a low-risk, table-top setting across all levels of war and ‘across all domains and environments’. Wargames have repeatedly proved a reliable method for communicating and practising military strategy across all varieties of warfare.

As cyber becomes an increasingly important warfighting domain, both on its own and in combination with other domains, cyber wargames have begun to be played with the same frequency and seriousness as those for the traditional domains. Since 2016, the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) has annually coordinated Crossed Swords, which focuses on technical training; NATO’s annual Cyber Coalition focuses on goals including information-sharing and collaboration; and the Atlantic Council’s Cyber 9/12 focuses on strategic policy-making. Military examples include the U.S. Naval War College’s Defending Forward wargames, in which, at their simplest, cyber defenders (‘blue teams’) defend against cyber adversaries (‘red teams’). While these games are a great step forward in understanding, analysing, and preparing for the problems of cyberwarfare, they tend to draw on existing conceptions of traditional serious games. This represents a missed opportunity: the cyber domain differs from traditional conflict in ways that warrant a fresh look at wargame design.

By design, wargames create an abstracted model of reality containing primary assumptions and simplifications that allow the model to be actionable. Underlying assumptions include: that the enemy is known, rational and ruthless; that the conflict being modelled is zero-sum in nature; that the games are effective tools even without specifically conceptualising how knowledge transfer takes place; and that the scope of the game should mirror reality as closely as possible. While these assumptions are appropriate for—or at least not detrimental to—traditional models of kinetic warfare, they are problematic for cyber wargame design. The challenges with each underlying assumption are described in turn.

The Known, Ruthless, and Rational Enemy

As Larry Greenemeier noted a decade ago, the fog of war is exacerbated in cyberspace. While traditional warfare often limits available knowledge of an adversary’s location, in the cyber domain defenders may not know who the enemy is, nor what their goals are. When the enemy is unknown, they can appear to act irrationally, at least from the perspective of the defender. This is compounded by the inherent asymmetry favouring the attacker: through reconnaissance, the attacker will more than likely hold more information about the intended targets than the defenders. Each of these issues, individually and collectively, is typically under-emphasised in most rigid wargames.

A Zero-Sum Nature of Conflict

Rigid wargames use a unity of opposites in their design: the goals of one side are diametrically opposed to those of the other. This creates a zero-sum game in which the goal of both the red and blue teams is the destruction of the other side. However, cyber conflict has features of non-zero-sum games; the victory of one side does not always come with an associated loss to the other. This introduces an asymmetry that should be addressed at the game design stage.
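The distinction can be illustrated with a toy payoff matrix. The scenario names and payoff numbers below are purely illustrative assumptions, not drawn from any real exercise; the point is only that in a zero-sum framing every attacker gain is an equal defender loss, while a non-zero-sum framing allows outcomes where both sides gain or lose unequally (as in cyber espionage):

```python
# Toy payoff matrices for a one-shot cyber engagement.
# Outcomes map (attacker_action, defender_action) -> (attacker_payoff, defender_payoff).
# All actions and numbers are illustrative assumptions.

# Zero-sum framing: payoffs in every outcome sum to zero.
zero_sum = {
    ("exploit", "patch"):  (-1, 1),
    ("exploit", "ignore"): (3, -3),
    ("recon",   "patch"):  (0, 0),
    ("recon",   "ignore"): (1, -1),
}

# Non-zero-sum framing: e.g. successful espionage gains the attacker
# intelligence without an equal, direct loss to the defender.
non_zero_sum = {
    ("exploit", "patch"):  (-1, 0.5),
    ("exploit", "ignore"): (2, -0.5),
    ("recon",   "patch"):  (0, 0.5),
    ("recon",   "ignore"): (2, 0),
}

def is_zero_sum(game):
    """A game is zero-sum if the payoffs in every outcome sum to zero."""
    return all(a + d == 0 for a, d in game.values())

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False
```

A designer who models victory conditions this way can check early whether the game mechanics actually permit non-zero-sum outcomes, rather than assuming a unity of opposites by default.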

Knowledge Transfer: What is Actually Being Taught?

Another assumption made in the deployment of wargames is that they teach. What is being taught, however, is rarely examined as closely. In general, serious games fall into two broad types: low road (or reflexive transfer) games and high road (or mindful transfer) games. Low road transfer games are concerned with directly training a stimulus and a response in a controlled environment that is as similar as possible to the context the player faces in real life; a flight simulator is a classic example. High road games, by contrast, are designed to encourage players to mindfully make connections between the context of play and the real world. Reflexive games are more likely to emphasise speed, whereas mindful transfer games are more likely to emphasise communication between players. Games must be designed using the knowledge transfer type most appropriate to their intended learning outcomes.

Overenthusiastic Scoping

Cyber operations do not exist in isolation from traditional models of warfare, but integrating cyber operations with kinetic warfare dramatically increases a game’s complexity. Attempting to capture the whole cyber landscape in a single game runs a real risk of detail overload, decision paralysis, and distracting the player from the game’s intended learning objectives. The longer it takes to learn to play, the less time the player has to learn from the play. In reality, one cannot accurately simulate the real-world threat landscape without sacrificing effective learning (unless the learning point is simply to illustrate how complex the cyber threat landscape can be). For example, if a cyber wargame focuses on the protection of critical national infrastructure, side-tasks covering several other industries are likely to confuse, rather than assist, participants in achieving the desired learning goals.

Recommendations

How should we best approach the challenge of effective cyber wargame design?

We propose that designed cyber wargames must be in line with the following four principles:

  • Include ‘partial knowledge’ states. If the player has full knowledge of the game state, the game becomes nothing more than an algorithmic recall activity in which a player can predict which actions are likely to result in successful outcomes. Ludic uncertainties can be included to induce partial knowledge, simulating the fog of war as required for each game.
  • Include ‘asymmetric positions’ for the players. The character of cyberwar is better modelled through asymmetric relationships between players, and designers should consider the benefits of building this asymmetry into the game.
  • Confirm learning objectives and knowledge transfer type before commencing design. Both low road and high road transfer games are valuable, but they serve different functions in the learning environment. Whether the game promotes low road or high road transfer should be a conscious choice, confirmed before design commences.
  • Scope the game clearly to explore specific challenges. A well-scoped, smaller game increases players’ willingness to replay it multiple times, allowing them to experiment with different strategies.
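The first two principles can be sketched in code. A minimal sketch, assuming a hypothetical game engine: the engine holds the full network state, but the defender sees compromise status only for hosts under sensor coverage, while the attacker learns a host’s criticality only if reconnaissance succeeds. All host names, fields, and probabilities are invented for illustration:

```python
import random

# The engine's ground-truth state (omniscient view).
FULL_NETWORK = {
    "web-server":  {"compromised": True,  "critical": False},
    "db-server":   {"compromised": False, "critical": True},
    "workstation": {"compromised": True,  "critical": False},
}

def defender_view(network, sensor_coverage):
    """Partial knowledge: compromise status is visible only on monitored hosts."""
    return {
        host: (dict(state) if host in sensor_coverage
               else {"compromised": "unknown", "critical": state["critical"]})
        for host, state in network.items()
    }

def attacker_view(network, recon_success=0.5, seed=0):
    """Asymmetric position: the attacker learns each host's criticality
    only with probability recon_success."""
    rng = random.Random(seed)
    return {
        host: {"critical": (state["critical"]
                            if rng.random() < recon_success else "unknown")}
        for host, state in network.items()
    }

blue = defender_view(FULL_NETWORK, sensor_coverage={"web-server"})
red = attacker_view(FULL_NETWORK)
print(blue["db-server"]["compromised"])  # unknown - outside sensor coverage
```

Each side then plans only from its filtered view, so neither player can reduce the game to algorithmic recall, and the information available to attacker and defender is deliberately asymmetric.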

Conclusion

As both cybersecurity and wargames increase in importance and visibility, so does research on the use of cyber wargaming as a pedagogical tool for practitioners, policymakers, and the military. Existing principles within the games design profession around clear scoping of goals, game narratives, and appropriate player capabilities may all be applied to enhance existing cyber wargame design. The inclusion of partial knowledge states and asymmetric player capabilities both reflect crucial aspects of the cyber domain, while explicit attention to a game’s desired learning objectives and scope ensures that the resulting designs are as effective as possible. In a world in which cyberspace is only expected to become a more common feature of modern conflict, it is strongly advised that the MoD’s Defence Wargaming Centre leverages these tools and training opportunities. In the asymmetric and unpredictable field of cyber warfare, we need all the advantages we can get.

 

Amy Ertan is a cybersecurity researcher and information security doctoral candidate at Royal Holloway, University of London, and predoctoral cybersecurity fellow at the Belfer Center, Harvard Kennedy School. She is an exercise designer for cyber incident management scenarios for The CyberFish Company. As a Visiting Researcher at the NATO Cooperative Cyber Defence Center of Excellence, Amy has contributed to strategic scenario design for the cyber defence exercise, Locked Shields 2021. You can follow her on twitter: @AmyErtan, or via her personal webpage: https://www.amyertan.com

Peadar Callaghan is a wargames designer and lectures in learning game design and gamification at the University of Tallinn, Estonia. His company, Integrated Game Solutions, provides consultancy and design services for serious games and simulations, with a focus on providing engaging training outcomes. You can find him at http://peadarcallaghan.com/

Filed Under: Blog Article, Feature Tagged With: amy ertan, cyber domain, cyber war, cyber wargames, Cybersecurity, Cyberwar, cyberwarfare, military, NATO, peadar callaghan, Red Teams, UK Ministry of Defence, war games, wargaming

