
Strife

The Academic Blog of the Department of War Studies, King's College London


Cyberwar

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part I

June 17, 2021 by Ed Stacey and Amy Ertan

Photo Credit: Mike MacKenzie, licensed via Creative Commons.

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Amy Ertan to discuss offensive cyber in the context of artificial intelligence (AI) and military innovation. For part three of Strife’s Offensive Cyber Series, Ms Ertan discusses the current role of AI in offensive cyber and potential future trajectories, including effects on the offence-defence balance and arms racing, as well as her PhD research, which explores the unforeseen and unintended security consequences of developing and implementing military AI.

Ed Stacey: Amy, could you start by briefly defining AI in the context of offensive cyber. Are we really just talking about machine learning, for example?

Amy Ertan: Artificial intelligence is not just machine learning algorithms – it is a huge range of technologies. There is a whole history of AI going back to before the mid-1970s and the late-80s: rule-based and knowledge-based AI, which is, as it sounds, learning based on rules and logic. Then in the last decade or so we have seen a huge uptick in machine learning-based algorithms and their various sub-branches, including deep learning and neural networks, which are incredibly complex algorithms that we cannot actually understand as humans. So, in summary, AI is a big umbrella term for different kinds of learning technologies.

At the same time, there is some snake oil on the market and a lot of what people call AI can just be probabilistic statistics. Being generous, some of the start-ups that you see are doing if-then algorithms that we could probably do in Excel. That does not, of course, account for the tech giant stuff. But when we talk about AI, we have everything from the super basic things that are not really AI to the incredibly well-financed, billion-dollar projects that we see at Amazon, Microsoft and so on.

Machine learning is where a lot of today’s cutting-edge research is. The idea is that you can feed data – potentially untagged data, in the case of unsupervised learning – into an algorithm, let the algorithm work through it and then make predictions based on that data. So, for example, you feed in three million pictures of cats and, if the algorithm works as intended, it will then recognise what is and is not a cat.
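As a concrete illustration of that distinction (ours, not from the interview), the following sketch uses scikit-learn with synthetic feature vectors standing in for real images: a supervised classifier is fitted on labelled ‘cat / not-cat’ examples and scored on unseen data, while an unsupervised clustering pass receives no labels at all.

```python
# Minimal sketch of supervised vs unsupervised learning (illustrative only;
# synthetic vectors stand in for the "three million pictures of cats").
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # 1,000 samples, 10 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # label: "cat" (1) or "not a cat" (0)

# Supervised: fit on labelled examples, then predict on unseen data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Unsupervised: the algorithm sees no labels and must find structure itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```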

In terms of how that fits into offensive cyber, AI is another tool in the toolkit. A learning algorithm, depending on how it is designed and used, will be just like any other cyber tool that you might have, only with learning technology within it. I would make the point that it is not something that we see being utilised today in terms of pure cyber attacks because it is not mature enough to be creative. The machine learning AI that we have right now is very good at narrow tasks, but you cannot just launch it and there is no “AI cyber attack” at the moment.

ES: How might AI enhance or facilitate offensive cyber operations?

AE: As I said, AI is not being used extensively today in offensive cyber operations. The technology is too immature, although we do see AI doing interesting things when it has a narrow scope – like voice or image recognition, text generation or predictive analytics on a particular kind of data set. But looking forward, there are very feasible and clear ways in which AI-enabled technologies might enhance or facilitate cyber operations, both on the offensive and defensive side.

In general, you can talk about the way that AI-enabled tools can speed up or scale up an activity. One example of how AI might enhance offensive cyber operations is through surveillance and reconnaissance. We already see, for example, AI-enabled tools being used in intelligence processing for imagery, like drone footage, saving a huge amount of time and vastly expanding the capacity of that intelligence processing. You could foresee the same being used to survey a cyber network.

Using AI to automate reconnaissance, to do that research – the very first stage of a cyber attack – is not a capability that you have now. But it would certainly enhance a cyber operation in terms of working out the best target at an organisation – where the weak link was, the best way in. So there is a lot that could be done.

ES: Are we talking then about simply an evolution of currently automated functions or does AI have the potential to revolutionise offensive cyber?

AE: In terms of whether AI will be just a new step or a revolution, generally my research has shown that it will be pretty revolutionary. AI-enabled technology has the power to revolutionise conflict and cyber conflict, and to a large extent that is through an evolution of automated functions and autonomous capabilities. I think the extent to which it is a full-blown revolution will depend on how actors use it.

Within cyberspace, there is the prospect of AI versus AI cyber conflict in the future, where your offensive cyber tool – your intrusion, your exploit tool – goes head-to-head with your target’s AI-enabled cyber defence tools, which might be intrusion prevention or spam filtering tools that are already AI-enabled. It really depends on how capabilities are used. You will have human creativity, but an AI algorithm makes decisions in ways that humans do not, so that will change some aspects of how offensive cyber activity takes place.

There is debate as to whether this is a cyber attack or information warfare, but I think deep fakes are an example of a technology that is already being used to falsify information and that has revolutionised information warfare because of the scale and nature of the internet today. So how far AI revolutionises offensive cyber will depend not only on its use but also on a complex set of interconnections between AI, big data, online connectedness and digital reliance that will come together to change the way conflict takes place online.

That is a complicated, long answer to say: it depends, but AI definitely does have the potential to revolutionise offensive cyber.

ES: No, thank you – I appreciate that revolutionary is a bit of a loaded term.

AE: Yes, there is a lot of hyperbole when you talk about AI in warfare. But through my doctoral research, every industry practitioner and policy-maker that I have spoken to has agreed that it is a game-changer. Whether or not you agree with the hype, it changes the rules of the game because the speed completely changes and the nature of an attack may completely change. So you definitely cannot say that the power of big data and the power of AI will not change things.

ES: This next question is from Dr Daniel Moore, who I spoke to last week for part two of this series. He was wondering if you think that AI will significantly alter the balance between offence and defence in cyberspace?

AE: I am going to disappoint Danny and say: we do not know yet. We do already see, of course, this interesting balance that states are choosing when they pick their own defence versus offence postures. And I think it is really important to note here that AI is just one tool in the arsenal for a team that is tasked with offensive cyber capabilities. At this point, I do not predict it making a huge difference.

At least when we talk about state-coordinated offensive cyber – sophisticated attacks that take down adversaries or target critical national infrastructure, for example – such operations require such sophisticated, niche tools that the automation capabilities provided by AI are unlikely to offer any cutting-edge advantage there. So it depends. AI cyber defence tools, by contrast, streamline a huge amount of activity: picking abnormal events out of your network or your logs eliminates a huge amount of the manual analysis that cyber defence analysts would otherwise have to do and gives them more time for meaningful analysis.
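As a rough illustration of that defensive triage (our sketch, not a tool Ertan describes), an off-the-shelf anomaly detector can flag the handful of unusual records in a large pile of log-like data, so that analysts inspect only the outliers.

```python
# Minimal sketch of AI-assisted log triage (illustrative; the numeric
# features are synthetic stand-ins for parsed log fields such as bytes
# transferred, failed logins, or session length).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
routine = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))   # normal activity
unusual = rng.normal(loc=6.0, scale=1.0, size=(10, 4))     # rare outliers
logs = np.vstack([routine, unusual])

detector = IsolationForest(contamination=0.005, random_state=1).fit(logs)
flags = detector.predict(logs)  # -1 = anomalous, 1 = normal
print("records flagged for human review:", int((flags == -1).sum()))
```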

AI speeds up and streamlines activity on both the offensive and defensive side, so I think it simply fits into the wider policy discussions for a state. It is one aspect but not the determining aspect, at the moment anyway or in the near future.

ES: And I guess the blurring of the lines between offence and defence in some cyber postures complicates the issue a little?

AE: Yes, especially when you look at the US and the way they define persistent engagement and defending forward. It will be interesting to see where different states draw their own lines on reaching outside their networks to take down the infrastructure of someone they know is attacking them – offensive activity for defensive purposes. So I think the policy question is much bigger than AI.

ES: Thinking more geopolitically, the UK’s Integrated Review was heavy on science and new technologies and other countries are putting a lot of resources into AI as well. There seems to be some element of a security dilemma here, but would you go so far as to say that we are seeing the start of a nascent AI arms race – what is your view of that framing?

AE: I think to an extent, yes, we do see aspects of a nascent AI arms race. But it is across all sectors, which comes back to AI as a dual-use technology. The Microsoft AI capability that we use now to chat with friends is also being used by NATO command structures and other military structures in command and control infrastructure, albeit in a slightly different form.

Because cutting-edge AI is being developed by private companies, which have the access and resources to do this, it is not like there is this huge arsenal of inherently weaponised AI tools. On the flip side, AI as a dual-use technology means that everything can be weaponised or gamed with enough capability. So it is a very messy landscape.

There have been large debates around autonomous systems in conflict generally, like drones, and I think there is an extent to which we can apply this to cyberspace too. While there is this security dilemma aspect, it is not in any state’s interest to escalate into full-blown warfare that cannot be de-escalated and that threatens its citizens, so tools and capabilities should be used carefully.

Now there is a limit to how much you can apply this to cyberspace because of its invisible nature, the lack of transparency and a completely different deterrence structure. But there is an argument that states will show restraint in weaponizing AI where it is not in their interest. You see this conversation taking place, for example, around lethal autonomous weapons at the United Nations Group of Governmental Experts, where it is generally considered that taking the human out of the loop is highly undesirable. But it is complicated and early days.

Looking at the UK, my research has shown that there is pressure to develop AI capabilities in this space and there are perceptions of an AI arms race across the private sector, which is who I spoke to. And there is this awareness that AI investment must happen, in large part because of the anticipated behaviour of adversary states – the idea that other states do not have the same ethical or legal constraints when it comes to offensive cyber or the use of military AI, which is what my PhD thesis focuses on. The only preventative answer to stop this security dilemma building up into an AI arms race seems to be some kind of consensus mechanism, whereby like-minded states agree not to weaponize AI in this way. That is why my research has taken me to NATO, to look in the military context at what kinds of norms can be developed and whether there is a role for international agreement.

If I had to summarise that argument into one or two sentences: there are trends suggesting that there is an AI arms race which is bigger than conflict, bigger than the military and bigger than cyber. So you have to rely on the security interests of the states themselves not to escalate and to potentially form alliance agreements to prevent escalation.


Part II of this interview will be published tomorrow, Friday 18th June 2021.

Filed Under: Blog Article, Feature, Series Tagged With: amy ertan, Cyberwar, cyberwarfare, ed stacey, offensive cyberwarfare, offensive cyberwarfare series, Series, Strife series

Enhancing Cyber Wargames: The Crucial Role of Informed Games Design

January 11, 2021 by Amy Ertan and Peadar Callaghan


“Risk – Onyx Edition (Ghosts of board games past)” by derekGavey.
Licensed under Creative Commons

 

‘A game capable of simulating every aspect of war would become war.’

Martin van Creveld, Wargames: From Gladiators to Gigabytes, 2013.

 

The launch of the Ministry of Defence (MoD) Defence Science and Technology Laboratory’s first Defence Wargaming Centre in December 2019 is an opportunity for future wargame design. While current games do enable some knowledge transfer, the tried-and-tested techniques employed by the serious games community would enhance these exercises with more effective strategising and training mechanisms. This article highlights how the characteristics of cyberspace require a distinct approach to wargames, and provides recommendations for the improved development and practice of cyber wargames by drawing on established games design principles.

The use of games in educational settings has been recognised since the 4th century BC. Wargames, however, are a more recent invention, first emerging in modern times in the Prussian Army. Kriegsspiel, as it was called, was used to teach tactics to officers as part of the Prussian military reforms that followed the army’s devastating defeats at the hands of Napoleon. Ever since, wargames have been a staple of military training. The MoD’s Red Teaming Guide defines a wargame as ‘a scenario-based warfare model in which the outcome and sequence of events affect, and are affected by, the decisions made by the players’. These games, as noted by the MoD’s Wargaming Handbook, can be used to simulate conflicts in a low-risk, table-top style setting across all levels of war and ‘across all domains and environments’. Wargames have repeatedly proved a reliable method for communicating and practising military strategy, one that can be applied to explore all varieties of warfare.

As cyber becomes an increasingly important warfighting domain, both on its own and in combination with other domains, cyber wargames have begun to be played with the same frequency and seriousness as games in the traditional domains. Since 2016, the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) has annually coordinated Crossed Swords, which focuses on technical training; NATO’s annual Cyber Coalition pursues goals including information-sharing and collaboration; and the Atlantic Council’s Cyber 9/12 focuses on strategic policy-making. Military examples include the U.S. Naval War College’s Defending Forward wargames in which, in the simplest form, cyber defenders (‘blue teams’) defend against cyber adversaries (‘red teams’). While these games are a great step forward in understanding, analysing, and preparing for the problems of cyberwarfare, the exercises tend to draw on existing conceptions of traditional serious games. This represents a missed opportunity: the cyber domain differs from traditional conflict in ways that warrant a fresh look at the design of wargames.

By design, wargames create an abstracted model of reality containing primary assumptions and simplifications that allow the model to be actionable. Underlying assumptions include: that the enemy is known, rational and ruthless; that the conflict being modelled is zero-sum in nature; that the games are effective tools even without specifically conceptualising how knowledge transfer takes place; and that the scope of the game should mirror reality as closely as possible. While these assumptions are appropriate for—or at least not detrimental to—traditional models of kinetic warfare, they are problematic for cyber wargame design. The challenges with each underlying assumption are described in turn.

The Known, Ruthless, and Rational Enemy

As Larry Greenemeier noted a decade ago, the fog of war is exacerbated in cyberspace. While traditional warfare often limits available knowledge of an adversary’s location, in the cyber domain defenders may not know who the enemy is, nor what their goals are. When the enemy is unknown, they can appear to act irrationally, at least from the perspective of the defender. This is due to the inherent asymmetry favouring the attacker: through reconnaissance, the attacker will more than likely hold more information about the intended targets than the defenders do. Each of these issues, individually and collectively, is typically under-emphasised in most rigid wargames.

A Zero-Sum Nature of Conflict

Rigid wargames use a unity of opposites in their design: the goals of one side are diametrically opposed to those of the other. This creates a zero-sum game in which the goal of both the red and blue teams is the destruction of the other side. However, cyber conflict has features of non-zero-sum games, such as the victory of one side not always coming with an associated loss to the other. Additionally, this introduces an asymmetry that should be addressed at the game design stage.
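The distinction can be made concrete with a toy payoff table, with numbers invented purely for illustration: in a zero-sum game every outcome’s payoffs cancel out, whereas in a cyber-flavoured non-zero-sum game both sides can lose at once, for instance when a destructive attack also ‘burns’ the attacker’s exploit.

```python
# Toy payoff matrices, (attacker, defender) utilities per outcome.
# All numbers are invented purely to illustrate the zero-sum distinction.
zero_sum = {
    ("attack", "defend"): (-1, +1),   # attacker's loss is defender's gain
    ("attack", "ignore"): (+1, -1),
}
non_zero_sum = {
    ("attack", "defend"): (-2, -1),   # exploit burned AND clean-up costs
    ("attack", "ignore"): (+1, -3),   # gains and losses need not balance
}

for name, game in [("zero-sum", zero_sum), ("non-zero-sum", non_zero_sum)]:
    sums = {moves: a + d for moves, (a, d) in game.items()}
    print(name, "payoff sums:", sums)  # all zero only in the zero-sum game
```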

Knowledge Transfer: What is Actually Being Taught?

Another assumption made in the deployment of wargames is that they teach. However, what is being taught is rarely examined as closely. In general, serious games can be categorised into two broad types: low road (or reflexive transfer) games and high road (or mindful transfer) games. Low road transfer games are concerned with directly training a stimulus and a response in a controlled environment that is as similar as possible to the real-life context the player will face – a flight simulator, for example. High road games, the second type, are designed to encourage players to mindfully make connections between the context of play and the real world. Reflexive games are more likely to emphasise speed, whereas mindful transfer games are more likely to emphasise communication between players. Games must be designed using the knowledge transfer type most appropriate to their intended learning outcomes.

Overenthusiastic Scoping

Cyber operations do not exist in isolation from traditional models of warfare. The integration of cyber operations with kinetic warfare, however, dramatically increases complexity. Attempting to capture the whole cyber landscape in a single game runs a real risk of detail overload and decision paralysis, distracting the player from the game’s intended learning objectives. The longer it takes to learn to play, the less time the player has to learn from the play. In reality, one cannot accurately simulate the real-world threat landscape without sacrificing effective learning (unless the learning point is simply to illustrate how complex the cyber threat landscape is). For example, if a cyber wargame focuses on the protection of critical national infrastructure, then side-tasks involving several other industries are likely to confuse, rather than assist, participants in achieving the desired learning goals.

Recommendations

How should we best approach the challenge of effective cyber wargame design?

We propose that cyber wargames be designed in line with the following four principles:

  • Include ‘partial knowledge’ states. If the cyber wargame player has full knowledge of the game state, the game becomes nothing more than an algorithmic recall activity in which the player can predict which actions are likely to result in successful outcomes. Ludic uncertainties can be introduced to induce ‘partial knowledge’, simulating the fog of war as required for each game (a minimal sketch follows this list).
  • Include ‘asymmetric positions’ for the players. The character of cyberwar is better modelled through asymmetric relationships between players, and cyber wargame designers should consider the benefits of building this asymmetry into the game.
  • Confirm learning objectives and knowledge transfer type before commencing design. Both low road and high road transfer games are valuable, but they serve different functions in the learning environment. A conscious choice of whether the game is attempting to promote low road or high road transfer should be made before design commences, to ensure the game is fit for purpose.
  • Scope the game clearly to explore specific challenges. A well-scoped, smaller game increases players’ willingness to replay it multiple times, allowing them to experiment with different strategies.
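As a minimal sketch of the first two principles (our illustration, with invented hosts and fields, not drawn from any real exercise), a cyber wargame engine can hold one true game state with the umpire while each side queries only a filtered, asymmetric view of it:

```python
# Minimal sketch of 'partial knowledge' and 'asymmetric positions':
# one true state, two filtered views. All fields are invented examples.
from dataclasses import dataclass, field

@dataclass
class GameState:
    compromised_hosts: set = field(default_factory=lambda: {"hr-01", "db-02"})
    attacker_c2: str = "203.0.113.7"                    # umpire/attacker only
    detections: set = field(default_factory=lambda: {"hr-01"})

    def attacker_view(self) -> dict:
        # The attacker knows its footholds, not what the defender has spotted.
        return {"footholds": self.compromised_hosts, "c2": self.attacker_c2}

    def defender_view(self) -> dict:
        # The defender sees only the alerts that have actually fired.
        return {"suspected_hosts": self.detections}

state = GameState()
print(state.attacker_view())
print(state.defender_view())  # neither side can treat the game as solved
```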

Conclusion

As both cybersecurity and wargames increase in importance and visibility, so does research on the use of cyber wargaming as a pedagogical tool for practitioners, policymakers, and the military. Existing principles within the games design profession around clear scoping of goals, game narratives, and appropriate player capabilities may all be applied to enhance existing cyber wargame design. The inclusion of partial knowledge states and asymmetric player capabilities both reflect crucial aspects of the cyber domain, while explicit attention to a game’s desired learning objectives and scope ensures that the resulting designs are as effective as possible. In a world in which cyberspace is only expected to become a more common feature of modern conflict, it is strongly advised that the MoD’s Defence Wargaming Centre leverages these tools and training opportunities. In the asymmetric and unpredictable field of cyber warfare, we need all the advantages we can get.

 

Amy Ertan is a cybersecurity researcher and information security doctoral candidate at Royal Holloway, University of London, and a predoctoral cybersecurity fellow at the Belfer Center, Harvard Kennedy School. She is an exercise designer for cyber incident management scenarios at The CyberFish Company. As a Visiting Researcher at the NATO Cooperative Cyber Defence Centre of Excellence, Amy has contributed to strategic scenario design for the cyber defence exercise Locked Shields 2021. You can follow her on Twitter: @AmyErtan, or via her personal webpage: https://www.amyertan.com

Peadar Callaghan is a wargames designer and lectures in learning game design and gamification at the University of Tallinn, Estonia. His company, Integrated Game Solutions, provides consultancy and design services for serious games and simulations, with a focus on providing engaging training outcomes. You can find him at http://peadarcallaghan.com/

Filed Under: Blog Article, Feature Tagged With: amy ertan, cyber domain, cyber war, cyber wargames, Cybersecurity, Cyberwar, cyberwarfare, military, NATO, peadar callaghan, Red Teams, UK Ministry of Defence, war games, wargaming

Strife Series on Cyberwarfare and State Perspectives: Strategic effectiveness in Cyberspace – Introduction

July 10, 2018 by Shivali Bhatt


 

Soldiers on the digital battlefield

 

Over the past couple of decades, the world has witnessed a seemingly unstoppable rise in cyber-attacks and acts of digital warfare. Just over ten years ago, the Israeli government successfully disabled a Syrian air defence system near a nuclear facility, allowing it to destroy the site without the Syrians putting up a fight. This event marked a critical turning point for state warfare, as it exemplified the way cyberspace and digital technology can become an accessory to broader military strategy. A few years later, a jointly built American-Israeli cyberweapon, known as Stuxnet, unleashed havoc in Iran and a few other countries. This highly sophisticated attack not only infiltrated a significant portion of cyberspace and thousands of computers, but is also believed to be an explanatory factor behind the rate at which states have since been investing in, and advancing, their cyber capabilities.

Today, over two hundred thousand samples of malware are launched daily, and states are engaged in a ‘cyber arms race’ or ‘technology arms race’. States, especially the United States and China, are competing for military edge by investing in and developing skills in innovative technologies like artificial intelligence [1]. One of the main reasons behind this interest in technological superiority is that the rules of global politics and warfare are changing. Cyberwarfare has become, and continues to develop as, one of the most highly regarded domains of political strategy, yet each state has a different perspective and reality in this evolving context.

The purpose of this series, therefore, is to shed light on the perspectives of states with varying cultural, geopolitical and economic contexts. A prominent narrative today is that cyberwarfare, and cyberspace more generally, is changing the balance of power in the international system. However, such arguments are often presented without the critical analysis needed to contextualise the reality and trajectory of modern cyberwarfare. The states examined in this series engage with cyberspace in different ways which can, at times, be explained by a set of underlying factors. They offer the reader a compelling contrast, and should help frame further discussion and research on the extent to which cyberwarfare is strategically effective.

In the first article, PhD researcher Andreas Haggman analyses the cyber capabilities of two ‘medium’ powers, Australia and Sweden. He identifies how cyber capabilities enhance their existing traditional military strategies, placing particular emphasis on the relevance of geopolitical context.

In the second article, PhD researcher Amy Ertan examines the strategic value of ‘false flags’ in the context of state-led cyberwarfare, using Russia as a critical case study. She analyses how geopolitics can act as a catalyst for states faced with the problem of attribution.

In the final piece, Shivali Bhatt approaches the domain of cyberwarfare through the lens of American policymakers, critiquing current narratives circulating in popular media and in certain academic communities. Her argument emphasises the underlying factors that, in the case of the United States, increase strategic leverage.

We hope this series offers readers greater insight into state perspectives on cyberwarfare and a more critical understanding of the domain’s strategic effectiveness.

Thanks for reading!


 

Shivali is currently pursuing her MA in Intelligence and International Security at the Department of War Studies, King’s College London. She is also a Series Editor at Strife and a creative writer at the cybersecurity startup PixelPin, where she contributes thought-leadership articles encouraging readers to approach security issues through innovative means. Prior to that, she spent some time in Hong Kong with the InvestHK and EntrepreneurHK organisations, engaging with the cybersecurity and tech scene on the East Coast. Her core research interests include modern warfare and contemporary challenges, cybersecurity, and strategic policy analysis. You can follow her on Twitter @shivalixb


 

Filed Under: Blog Article, Uncategorized Tagged With: Cyberwar, strategy, Strife series, Stuxnet

Strife Feature, Abstract: A Beginner’s Guide to the Musical Scales of Cyberwar

December 15, 2016 by J. Zhanna Malekos Smith

By: Jessica “Zhanna” Malekos Smith

Musical scales of cyberwar: the graphic of a piano keyboard illustrates how the core principles of the law of war apply to cyberspace

In Strife’s long-form feature piece for December, Jessica Malekos Smith offers a beginner’s guide to the ‘musical scales’ of cyberwar. Using the analogy of a piano keyboard, her article aims to promote an understanding of what constitutes a use of force in cyberspace and how a state may lawfully respond. Understanding the legal confines of offensive and defensive cyber operations is a burgeoning area of study. In his famous remarks at U.S. Cyber Command’s Inter-Agency Legal Conference in 2012, Harold Koh posed the following question to the audience: “how do we apply old laws of war to new cyber-circumstances, staying faithful to enduring principles, while accounting for changing times and technologies?”[1]

To help answer that question, Jessica uses the concept of Middle C and the musical intervals known as octaves to explain the range of permissible state conduct during times of conflict. By juxtaposing the law of war with a piano keyboard, she illustrates the arcane legal precepts by which states evaluate the scale and effects of a cyber operation and determine a basis for using force under the Law of Armed Conflict. Music is a universally understood language, and the analogies used here should encourage readers to learn about the law of war and help us collectively strategise better ways to mitigate conflict in the cyber domain.


Jessica “Zhanna” Malekos Smith is a Postdoctoral Fellow of the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. Her feature was published on 29th December 2016. 


Notes:

[1] Harold Hongju Koh, International Law in Cyberspace, Yale University Faculty Scholarship Series. Paper 4854 (2012), http://digitalcommons.law.yale.edu/fss_papers/4854.

Image credit: https://www.goodfreephotos.com/albums/vector-images/piano-keyboard-with-notes-vector-file.png

Feature image credit: https://www.goodfreephotos.com/albums/other-photos/hand-playing-keyboard-keys.jpg

Filed Under: Announcement Tagged With: Cybersecurity, Cyberwar, feature

Film Review: Zero Days (2016)

September 21, 2016 by Cheng Lai Ki

Gibney, A. Zero Days, Jigsaw Productions, (2016). (PG-13) More information from: http://gb.imdb.com/title/tt5446858/.



“The science fiction cyberwar scenario is here…” This statement comes from members of the United States National Security Agency (NSA) and others in the intelligence community, role-played by actress Joanne Tucker. Zero Days, directed and narrated by documentarian Alex Gibney – who produced the award-winning documentaries Enron: The Smartest Guys in the Room (2005) and Taxi to the Dark Side (2007) – explores the evolving nature of computer network exploitations (CNEs). In a world where critical infrastructure (e.g. energy supply, telecommunications), military communication grids (e.g. the US Global Information Grid, GIG) and diplomatic communications all run on information-communication technologies (ICTs), the documentary illuminates the uncomfortable realities and vulnerabilities of cyberspace.

Zero Days explores StuxNet, a computer worm developed through a joint US-Israeli effort to cripple uranium enrichment at the Natanz enrichment plant in Iran. The documentary debuted at the 2016 Berlin film festival and was awarded a four-star review by the Guardian’s Peter Bradshaw, who described it as ‘intriguing and disturbing’. Named after the technical term ‘zero day’, which denotes a computer network vulnerability known only to the attacker, the investigative documentary follows Gibney’s journey in uncovering ‘the truth’ behind StuxNet’s technical capabilities and attributed political motives. Although it discusses a cybersecurity threat, the documentary goes beyond the technical landscape and introduces various geopolitical elements, such as Israeli disapproval of Iran cultivating a national nuclear capability. Given the relatively basic nature of its discussions, the documentary appears to be intended for the general public rather than specialists in the field. Gibney follows an investigative journalistic approach (something he is undoubtedly famous for) and guides the viewer along what is essentially a cyber-attribution journey implicating the US and Israeli agencies. The documentary is constructed from strategically cut interviews with cybersecurity specialists (e.g. from Kaspersky and Symantec), former senior leaders of ‘three-letter’ government agencies, industry experts (e.g. Ralph Langner, a German control system security consultant) and pioneers of investigative journalism (e.g. David Sanger), discussing StuxNet’s discovery and capabilities. In addition to these interviews, Gibney wanted a more ‘real’ source of information, which is where the anonymous NSA employees come in. By collectively voicing transcripts of these employees through actress Joanne Tucker, Gibney incorporated an inside source that gives the documentary a little more power behind its claims.

A collection of Programmable Logic Controllers (PLCs), crucial technological components within most critical infrastructure. The StuxNet worm specifically targeted the Siemens Simatic S7-300 PLC CPU with three I/O modules attached.

The documentary excels in unveiling to the general public that: i) cybersecurity is not purely a software issue, but also a hardware one; and ii) digital malware can easily be weaponised for intelligence-gathering and strike purposes.

First, Symantec Security Response specialist Eric Chien states in an interview: ‘…real-world physical destruction. [Boom] At that time things became really scary for us. Here you had malware potentially killing people and that was something that was Hollywood-esque to us; that we’d always laugh at, when people made that kind of assertion.’ Symantec specialists conducted a simple experiment in which they infected a Programmable Logic Controller (PLC) – the main computer control unit of most facility control systems – with the StuxNet worm. Under normal conditions, the PLC was programmed to inflate a balloon and stop after five seconds. After being infected with the StuxNet worm, however, the PLC ignored the command to stop and the balloon burst as it was continuously filled with air. Through this simple experiment, the specialists (and Gibney) revealed the devastating potential of vulnerable computer systems that control critical national infrastructure or dangerous facilities such as Natanz.
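A toy control loop (our reconstruction for illustration, not Symantec’s code) captures the logic of that experiment: the clean controller honours the five-second stop condition, while the compromised one ignores it and drives the process past its safe limit.

```python
# Toy PLC control loop (invented for illustration, not Symantec's code):
# the pump should stop after five seconds; the "infected" logic ignores
# the stop condition and the balloon bursts.
BURST_PRESSURE = 12

def run_plc(compromised: bool) -> None:
    pressure = 0
    for second in range(1, 11):
        if second > 5 and not compromised:
            print(f"t={second}s: stop honoured, final pressure={pressure}")
            return
        pressure += 2  # pump adds pressure each second
        if pressure >= BURST_PRESSURE:
            print(f"t={second}s: BURST at pressure={pressure}")
            return
        print(f"t={second}s: inflating, pressure={pressure}")

run_plc(compromised=False)  # stops safely after five seconds
run_plc(compromised=True)   # stop logic overridden -> physical damage
```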

Second, the NSA employees who decided to talk to Gibney revealed whom the US cyber intelligence community recruits and, more importantly, its capacity to create digital techniques for intelligence gathering – or, in the case of StuxNet, for strike purposes. Cybersecurity specialists analysing the StuxNet code discovered older versions that focused on data collection; it was not until the later versions that more offensive objectives became apparent within the code. According to the NSA employees who came forward, this shift in the code was made by the Israeli foreign intelligence service (Mossad) and not the American agencies. Regardless, Zero Days does an excellent job of revealing the highly adaptive nature of cyber ordnance.

The United States National Security Agency (NSA) headquarters at Fort Meade, Maryland, where information technology experts developed multiple versions of the StuxNet worm within the Cyber Command unit (USCYBERCOM) established in 2009.

However, for security academics the documentary suffers from several limitations that undermine its credibility. Its two main limitations are: i) an excessive focus on investigative attribution; and ii) an inherently negative portrayal of government personnel and activity.

First, as mentioned earlier, the documentary is at its core a journey of cyber-attribution – much akin to the work of investigative journalist David Sanger. To see this, consider its structure. It begins with the cybersecurity incident: how the worm was found and how it baffled cybersecurity specialists. Next, it explains the geopolitical and security tensions between the US, Israel and Iran, along with the American position on Iran’s nuclear capabilities. It then progresses to the technical and security domains, explaining the infrastructure of American and Israeli cyber-intelligence capabilities and operations. Finally, Gibney asks harder questions about implications and opinions in his interviews with American intelligence, security and military subjects. Obviously, for reasons of national security and secrecy, these could not be answered; it would appear Gibney asks them to highlight his disgust at the lack of transparency within the security sector. Throughout the latter part of the documentary, he supplements various claims with an informal interview with the NSA employees, using Joanne Tucker as an avatar. To the general public, the documentary is undoubtedly an interesting journey of exploration and revelation about American and Israeli cyber capabilities. While it highlights several concerns afflicting cybersecurity specialists in the governmental and industrial sectors, however, it quickly narrows its attribution towards the United States and Israel – leaving little room for alternative arguments.

Second, for security specialists the documentary leaves out several key considerations, such as the importance of effective intelligence-collection and pre-emptive strike capabilities for national security. In interviews, government leaders either explained the structure of their national intelligence agencies and capabilities or described how certain operations were handed over between presidents – StuxNet was known within the American government as ‘Olympic Games’. As such, government interviewees played only an informative role, participating in few discussions. Another comment concerns the NSA employees who decided to speak out. Playing devil’s advocate, certain questions of credibility and accuracy can be raised: How do we know these were really NSA employees from the agency’s cyber divisions? Do we know whether they spoke out because they wanted to, or were they instructed to? A significant amount of blame is placed on Mossad for ‘weaponizing’ the StuxNet code when the Americans supposedly wanted to use it solely for intelligence collection; within the realm of intelligence, this sounds more like disinformation than truth. To some civil servants from security or intelligence backgrounds, the documentary portrays such government operations in a negative light and promotes transparency with little regard for its ramifications. Sometimes, knowing the ‘truth’ might do more harm than good.

Zero Days is an excellent documentary and investigative source that raises awareness of cybersecurity issues and their importance in our modernised era. First, its innovative use of animations, coupled with strategically chosen interviewees from various backgrounds, lends it credibility and persuasiveness in discussing StuxNet. Second, it shows how vulnerabilities in digital and hardware systems can have significantly harmful consequences. However, in its quest for transparency around government intelligence operations, Zero Days promotes a dangerous notion. Operational secrecy is not inherently negative; it is sometimes vital for national security. The ubiquitous nature of cyberspace opens nations, like Pandora’s box, to a new dimension of threats that cannot be defended against as easily as those in the air, land or sea domains, and increased transparency can do much more harm. Regardless of your position on the motives behind Zero Days, it remains an excellent documentary for raising cybersecurity awareness.


 

Cheng served as an Armour Officer and Training Instructor at the Armour Training Institute (ATI) of the Singapore Armed Forces (SAF) and now holds reservist status. His master’s research revolves around security considerations within the Asia-Pacific region, specifically cybersecurity, maritime security and intelligence studies. His graduate thesis explores the characteristics and trends defining China’s emerging cybersecurity and cyberwarfare capabilities. He participated in the April 2016 Cyber 9/12 Student Challenge in Geneva and was published in IHS Jane’s Intelligence Review in May 2016. You can follow him on Twitter @LK_Cheng

 

Notes:

Bradshaw, P., ‘Zero Days review – a disturbing portrait of malware as the future of war’, The Guardian, 17 February 2016. Available from: https://www.theguardian.com/film/2016/feb/17/zero-days-review-malware-cyberwar-berlin-film-festival

Gibney, A., ‘Director Profile’, Jigsaw Productions. Available from: http://www.jigsawprods.com/alex-gibney/ (accessed October 2016).

Internationale Filmfestspiele Berlin 2016, Film File: Zero Days (Competition). Available from: https://www.berlinale.de/en/archiv/jahresarchive/2016/02_programm_2016/02_Filmdatenblatt_2016_201608480.php#tab=filmStills

Langner, R., ‘Cracking Stuxnet, a 21st-century cyber weapon’, TED Talk, March 2011. Available from: https://www.ted.com/talks/ralph_langner_cracking_stuxnet_a_21st_century_cyberweapon/transcript?language=en

Lewis, J.A., ‘In Defense of Stuxnet’, Military and Strategic Affairs, 4(3), December 2012, pp. 65-76.

Macaulay, S., ‘Wrong Turn’, Filmmaker, 2008. Available from: http://www.filmmakermagazine.com/archives/issues/winter2008/taxi.php#.V-A8_Tvouu5

Scott, A.O., ‘Those You Love to Hate: A Look at the Mighty Laid Low’, The New York Times, 22 April 2005. Available from: http://www.nytimes.com/2005/04/22/movies/those-you-love-to-hate-a-look-at-the-mighty-laid-low.html?_r=1

Image Source (1): https://i.ytimg.com/vi/GlC_1gZfuuU/maxresdefault.jpg

Image Source (2): https://upload.wikimedia.org/wikipedia/commons/8/82/SIMATIC_different_equipment.JPG

Image Source (3): https://upload.wikimedia.org/wikipedia/commons/8/84/National_Security_Agency_headquarters,_Fort_Meade,_Maryland.jpg

 

 

Filed Under: Film Review Tagged With: Cybersecurity, Cyberwar, feature, Iran, Israel, National Security Agency, nuclear, Stuxnet

