
Strife

The Academic Blog of the Department of War Studies, King's College London


Amy Ertan

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote-controlled Wheelbarrow Mark 8 counter-IED robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have impact in 20 to 30 years’ time. So we are seeing that strategic discussion, but the costs are so high that for most states it is a matter of buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies, or with authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so, and governments now have less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI, because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain? 

AE: There are two aspects to this: one is the technical security angle and the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed deep learning or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong, and we have seen examples of that in the civilian space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but whether the system is picking up the right thing. Obviously, within a conflict environment you do not want to detect a threat where there is not one, or miss one that is there.

Second, there is the threat that your algorithm or data may be compromised and you would not know. This could affect the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm, or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that this is not abnormal activity and no flags are raised. So threat modelling that fails to account for the creativity of attackers, or for the insufficiency of the algorithm, could lead to something being deployed that is not fit for purpose.
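To make the data-tampering risk concrete, here is a minimal, hypothetical sketch in Python. It is not anything described in the interview: it simply uses scikit-learn’s IsolationForest as a stand-in for a network anomaly detector, with invented synthetic “traffic” features, to show how a training baseline poisoned with the attacker’s own activity stops raising flags on that activity.

```python
# Hypothetical sketch: an anomaly detector trained on a poisoned baseline.
# Feature values and the model choice are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # benign flows
attacker_traffic = rng.normal(loc=4.0, scale=0.5, size=(50, 4))   # distinctive malicious flows

# Clean baseline: the detector learns only benign behaviour and flags the attacker.
clean_model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print("flags (clean baseline):", int((clean_model.predict(attacker_traffic) == -1).sum()))

# Poisoned baseline: the attacker's activity was present during training, so the
# detector is far more likely to score that same activity as "normal".
poisoned = np.vstack([normal_traffic, attacker_traffic])
poisoned_model = IsolationForest(contamination=0.01, random_state=0).fit(poisoned)
print("flags (poisoned baseline):", int((poisoned_model.predict(attacker_traffic) == -1).sum()))
```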

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
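The hospital example can be illustrated with a toy targeting filter – again entirely hypothetical and far simpler than anything that would be deployed: if the exclusion decision rests on observable features, an adversary who controls those features can dress a site up to match them.

```python
# Toy illustration of gaming an exclusion rule; the keywords and threshold are invented.
HOSPITAL_KEYWORDS = {"hospital", "clinic", "patient", "emergency"}

def looks_like_hospital(site_metadata: dict) -> bool:
    """Naive rule: exclude any site whose metadata 'looks medical'."""
    return len(set(site_metadata.get("keywords", [])) & HOSPITAL_KEYWORDS) >= 2

industrial_site = {"keywords": ["scada", "turbine", "grid"]}
print(looks_like_hospital(industrial_site))   # False -> not excluded from targeting

# An operator who guesses the rule simply adds the right-looking features:
disguised_site = {"keywords": ["scada", "turbine", "grid", "hospital", "patient"]}
print(looks_like_hospital(disguised_site))    # True -> wrongly excluded
```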

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system, and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are far fewer images of tanks in the Baltic region or of particular kinds of weapon. The data is much more limited in secretive military contexts, and it is potentially not being shared between nations to the extent that might be desirable for building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.

Turning to broader conversations, norms and regulation: I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time, because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space; there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. We already see, for example, that while the UK and US have the power and resources to invest on their own, groups of countries across NATO are coming together to look at certain problems – to procure items together, for example – which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies – not the most secretive ones, nor the raw data, but, for example, the principles it abides by or certain open source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.


Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part I

June 17, 2021 by Ed Stacey and Amy Ertan

Photo Credit: Mike MacKenzie, licensed via Creative Commons.

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Amy Ertan to discuss offensive cyber in the context of artificial intelligence (AI) and military innovation. For part three of Strife’s Offensive Cyber Series, Ms Ertan discusses the current role of AI in offensive cyber and potential future trajectories, including effects on the offence-defence balance and arms racing, as well as her PhD research, which explores the unforeseen and unintended security consequences of developing and implementing military AI.

Ed Stacey: Amy, could you start by briefly defining AI in the context of offensive cyber. Are we really just talking about machine learning, for example?

Amy Ertan: Artificial intelligence is not just machine learning algorithms – it is a huge range of technologies. There is a whole history of AI that goes back to the mid-1970s and late-80s and before: rule-based and knowledge-based AI, which is, as it sounds, learning based on rules and logic. Then in the last decade or so we have seen a huge uptick in machine learning-based algorithms and their various sub-branches, including deep learning and neural networks, which are incredibly complex algorithms that we cannot actually understand as humans. So, in summary, AI is a big umbrella term for different kinds of learning technologies.

At the same time, there is some snake oil in the market and a lot of what people call AI can just be probabilistic statistics. Being generous, some of the start-ups that you see are doing if-then algorithms that we could probably do on Excel. That does not, of course, account for the tech giant stuff. But when we talk about AI, we have everything from the super basic things that are not really AI to the incredibly well-financed, billion dollar projects that we see at Amazon, Microsoft and so on.

Machine learning is where a lot of today’s cutting-edge research is. The idea is that you can feed data – potentially untagged data, in unsupervised learning – into an algorithm, let the algorithm work through it and then make predictions based on that data. So, for example, you feed in three million pictures of cats and, if the algorithm works as intended, it will then recognise what is and is not a cat.
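As a purely illustrative aside (none of this code comes from the interview), the contrast between the rule-based AI and the machine learning Ertan describes can be sketched in a few lines of Python, with two synthetic numeric features standing in for image data and scikit-learn supplying the learned model:

```python
# Minimal sketch contrasting rule-based "AI" with supervised machine learning.
# The two numeric features stand in for image data; everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "cat" (label 1) and "not cat" (label 0) examples, two features each.
cats = rng.normal([2.0, 2.0], 0.5, size=(500, 2))
not_cats = rng.normal([0.0, 0.0], 0.5, size=(500, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 500 + [0] * 500)

# Rule-based approach: a human writes the if-then threshold by hand.
def rule_based_is_cat(sample):
    return sample[0] > 1.0 and sample[1] > 1.0

# Machine learning approach: the model infers its own decision boundary from the labels.
model = LogisticRegression().fit(X, y)

new_sample = np.array([[1.8, 2.2]])
print("rule-based:", rule_based_is_cat(new_sample[0]))
print("learned:   ", bool(model.predict(new_sample)[0]))
```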

In terms of how that fits into offensive cyber, AI is another tool in the toolkit. A learning algorithm, depending on how it is designed and used, will be just like any other cyber tool that you might have, only with learning technology within it. I would make the point that it is not something that we see being utilised today in terms of pure cyber attacks because it is not mature enough to be creative. The machine learning AI that we have right now is very good at narrow tasks, but you cannot just launch it and there is no “AI cyber attack” at the moment.

ES: How might AI enhance or facilitate offensive cyber operations?

AE: As I said, AI is not being used extensively today in offensive cyber operations. The technology is too immature, although we do see AI doing interesting things when it has a narrow scope – like voice or image recognition, text generation or predictive analytics on a particular kind of data set. But looking forward, there are very feasible and clear ways in which AI-enabled technologies might enhance or facilitate cyber operations, both on the offensive and defensive side.

In general, you can talk about the way that AI-enabled tools can speed up or scale up an activity. One example of how AI might enhance offensive cyber operations is through surveillance and reconnaissance. We see already, for example, AI-enabled tools being used in intelligence processing for imagery, like drone footage, saving a huge amount of time and vastly expanding the capacity of that intelligence processing. You could predict that being used to survey a cyber network.

Using AI to automate reconnaissance, to do that research – the very first stage of a cyber attack – is not a capability that you have now. But it would certainly enhance a cyber operation in terms of working out the best target at an organisation – where the weak link is, the best way in. So there is a lot that could be done.

ES: Are we talking then about simply an evolution of currently automated functions or does AI have the potential to revolutionise offensive cyber?

AE: In terms of whether AI will be just a new step or a revolution, generally my research has shown that it will be pretty revolutionary. AI-enabled technology has the power to revolutionise conflict and cyber conflict, and to a large extent that is through an evolution of automated functions and autonomous capabilities. I think the extent to which it is a full-blown revolution will depend on how actors use it.

Within cyberspace, you have this aspect that there might be AI-versus-AI cyber conflict in the future, where your offensive cyber tool – your intrusion or exploit tool – goes head-to-head with your target’s AI-enabled cyber defence tools, which might be intrusion prevention or spam filtering tools that are already AI-enabled. It really depends on how capabilities are used. You will have human creativity, but then an AI algorithm makes decisions in ways that humans do not, so that will change some aspects of how offensive cyber activity takes place.

There is debate as to whether this is a cyber attack or information warfare, but I think deep fakes would be an example of a technology or tool that is already being used, falsifying information, that has revolutionised information warfare because of the scale and the nature of the internet today. So how far AI revolutionises offensive cyber will depend not only on its use but also a complex set of interconnections between AI, big data, online connectedness and digital reliance that will come together to change the way that conflict takes place online.

That is a complicated, long answer to say: it depends, but AI definitely does have the potential to revolutionise offensive cyber.

ES: No, thank you – I appreciate that revolutionary is a bit of a loaded term.

AE: Yes, there is a lot of hyperbole when you talk about AI in warfare. But through my doctoral research, every industry practitioner and policy-maker that I have spoken to has agreed that it is a game-changer. Whether or not you agree with the hype, it changes the rules of the game because the speed completely changes and the nature of an attack may completely change. So you definitely cannot say that the power of big data and the power of AI will not change things.

ES: This next question is from Dr Daniel Moore, who I spoke to last week for part two of this series. He was wondering if you think that AI will significantly alter the balance between offence and defence in cyberspace?

AE: I am going to disappoint Danny and say: we do not know yet. We do already see, of course, this interesting balance that states are choosing when they pick their own defence versus offence postures. And I think it is really important to note here that AI is just one tool in the arsenal for a team that is tasked with offensive cyber capabilities. At this point, I do not predict it making a huge difference.

At least when we talk about state-coordinated offensive cyber – sophisticated attacks that take down adversaries or target critical national infrastructure, for example – these require such sophisticated, niche tools that the automation capabilities provided by AI are unlikely to offer any cutting-edge advantage there. So it depends. AI cyber defence tools streamline a huge amount of activity, whether that is picking out abnormal activity in your network or in your logs; that eliminates a huge amount of manual analysis that cyber defence analysts would otherwise have to do and gives them more time for meaningful analysis.
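The triage point can be illustrated with a deliberately simple, hypothetical sketch (invented data, not a real detection pipeline): an automated filter surfaces only the statistically unusual log entries, so an analyst reviews a handful of events rather than thousands.

```python
# Hypothetical sketch of automated log triage: flag only the statistical outliers.
# Features and the threshold are illustrative, not a real detection pipeline.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic log feature: bytes transferred per session, mostly routine traffic
# with a handful of unusually large transfers mixed in.
routine = rng.normal(5_000, 1_000, size=9_990)
unusual = rng.normal(50_000, 5_000, size=10)
bytes_per_session = np.concatenate([routine, unusual])

# Escalate sessions more than four standard deviations above the mean.
z_scores = (bytes_per_session - bytes_per_session.mean()) / bytes_per_session.std()
flagged = np.where(z_scores > 4)[0]

print(f"{len(bytes_per_session)} sessions logged, {len(flagged)} escalated for manual review")
```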

AI speeds up and streamlines activity on both the offensive and defensive side, so I think it simply fits into the wider policy discussions for a state. It is one aspect but not the determining aspect, at the moment anyway or in the near future.

ES: And I guess the blurring of the lines between offence and defence in some cyber postures complicates the issue a little?

AE: Yes, especially when you look at the US and the way they define persistent engagement and defending forward. It will be interesting to see where different states draw their own lines on reaching outside their networks to take down the infrastructure of someone they know is attacking them – offensive activity for defensive purposes. So I think the policy question is much bigger than AI.

ES: Thinking more geopolitically, the UK’s Integrated Review was heavy on science and new technologies and other countries are putting a lot of resources into AI as well. There seems to be some element of a security dilemma here, but would you go so far as to say that we are seeing the start of a nascent AI arms race – what is your view of that framing?

AE: I think to an extent, yes, we do see aspects of a nascent AI arms race. But it is across all sectors, which comes back to AI as a dual-use technology. The Microsoft AI capability that we use now to chat with friends is also being used by NATO command structures and other military structures in command and control infrastructure, albeit in a slightly different form.

Because cutting-edge AI is being developed by private companies, which have the access and resources to do this, it is not like there is this huge arsenal of inherently weaponised AI tools. On the flip side, AI as a dual-use technology means that everything can be weaponised or gamed with enough capability. So it is a very messy landscape.

There have been large debates around autonomous systems in conflict generally, like drones, and I think there is an extent to which we can apply this to cyberspace too. While there is this security dilemma aspect, it is not in any states’ interests to escalate into full-blown warfare that cannot be deescalated and that threatens their citizens, so tools and capabilities should be used carefully.

Now there is a limit to how much you can apply this to cyberspace because of its invisible nature, the lack of transparency and a completely different deterrence structure. But there is an argument that states will show restraint in weaponising AI where it is not in their interest. You see this conversation taking place, for example, around lethal autonomous weapons at the United Nations Group of Governmental Experts, where it is generally considered that taking the human out of the loop is highly undesirable. But it is complicated and early days.

Looking at the UK, my research has shown that there is pressure to develop AI capabilities in this space and that there are perceptions of an AI arms race across the private sector, which is who I spoke to. And there is this awareness that AI investment must happen, in large part because of the anticipated behaviour of adversary states – the idea that other states do not have the same ethical or legal constraints when it comes to offensive cyber or the use of military AI, which is what my PhD thesis focuses on. The only preventative answer to stop this security dilemma building up into an AI arms race seems to be some kind of consensus mechanism, whereby like-minded states agree not to weaponise AI in this way. That is why my research has taken me to NATO, to look in the military context at what kinds of norms can be developed and whether there is a role for international agreement in this way.

If I had to summarise that argument into one or two sentences: there are trends suggesting that there is an AI arms race which is bigger than conflict, bigger than the military and bigger than cyber. So you have to rely on the security interests of the states themselves not to escalate and to potentially form alliance agreements to prevent escalation.


Part II of this interview will be published tomorrow on Friday 18th June 2021.


Enhancing Cyber Wargames: The Crucial Role of Informed Games Design

January 11, 2021 by Amy Ertan and Peadar Callaghan


“Risk – Onyx Edition (Ghosts of board games past)” by derekGavey.
Licensed under Creative Commons

 

‘A game capable of simulating every aspect of war would become war.’

Martin van Creveld, Wargames: From Gladiators to Gigabytes, 2013.

 

The launch of the UK Ministry of Defence (MoD) Defence Science and Technology Laboratory’s first Defence Wargaming Centre in December 2019 is an opportunity for future wargame design. While current games do enable some knowledge transfer, the tried-and-tested techniques employed by the serious games community would enhance these exercises with more effective strategising and training mechanisms. This article highlights how the characteristics of cyberspace require a distinct approach to wargames, and provides recommendations for improved development and practice of cyber wargames by drawing on established games design principles.

The use of games in educational settings has been recognised since the 4th century BC. Wargames, however, are a more recent invention, first emerging in modern times via the Prussian Army. Kriegsspiel, as it was called, was used to teach tactics to officers as part of the Prussian Military Reforms in the wake of their devastating defeats at the hands of Napoleon. Ever since, military wargames have been a feature of training military personnel. The MoD’s Red Teaming Guide defines a wargame as ‘a scenario-based warfare model in which the outcome and sequence of events affect, and are affected by, the decisions made by the players’. These games, as noted by the MoD’s Wargaming Handbook, can be used to simulate conflicts in a low-risk, table-top style setting across all levels of war and ‘across all domains and environments’. Wargames have repeatedly proved themselves a reliable method for communicating and practising military strategy that can be applied to explore all varieties of warfare.

As cyber becomes an increasingly important warfighting domain, both by itself and in combination with other domains, cyber wargames have begun to be played with the same frequency and importance as wargames in the traditional domains. Since 2016, the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) has annually coordinated Crossed Swords, which focuses on technical training, while NATO’s annual Cyber Coalition focuses on goals including information-sharing and collaboration, and the Atlantic Council’s Cyber 9/12 focuses on strategic policy-making. Military examples include the U.S. Naval War College’s Defending Forward wargames, in which, in their simplest form, cyber defenders (‘blue teams’) defend against cyber adversaries (‘red teams’). While these games are a great step forward in understanding, analysing, and preparing for the problems of cyberwarfare, they tend to draw on existing conceptions of traditional serious games. This represents a missed opportunity: the cyber domain differs from traditional conflict in ways that warrant a fresh look at the design of wargames.

By design, wargames create an abstracted model of reality containing primary assumptions and simplifications that allow the model to be actionable. Underlying assumptions include: that the enemy is known, rational and ruthless; that the conflict being modelled is zero-sum in nature; that the games are effective tools even without specifically conceptualising how knowledge transfer takes place; and that the scope of the game should mirror reality as closely as possible. While these assumptions are appropriate for—or at least not detrimental to—traditional models of kinetic warfare, they are problematic for cyber wargame design. The challenges with each underlying assumption are described in turn.

The Known, Ruthless, and Rational Enemy

As Larry Greenemeier noted a decade ago, in cyberspace the fog of war is exacerbated. While traditional warfare often limits available knowledge of an adversary’s location, in the cyber domain defenders may not know who the enemy is, nor what their goals are. When the enemy is unknown, they can appear to act irrationally, at least from the perspective of the defender. This is due to the inherent information asymmetry favouring the attacker: through reconnaissance, the attacker will more than likely hold more information about the intended targets than the defenders do. Each of these issues, individually and collectively, is typically under-emphasised in most rigid wargames.

A Zero-Sum Nature of Conflict

Rigid wargames use a unity of opposites in their design: the goals of one side are diametrically opposed to those of the other. This creates a zero-sum game in which the goal of both the red and blue teams is the destruction of the other side. However, cyber conflict has features of non-zero-sum games, such as when the victory of one side does not come with an associated loss for the other. Additionally, there is an asymmetry introduced that should be addressed at the game design stage.
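To see the structural difference, here is a tiny illustrative snippet (the actions and payoff numbers are invented purely to show the shape of the two framings): in the zero-sum matrix every outcome’s payoffs cancel, while in the non-zero-sum matrix one side can gain more than the other loses, or both can lose at once.

```python
# Illustrative payoff matrices (actions and payoffs invented for this example).
# Each entry maps a pair of chosen actions to (red_payoff, blue_payoff).

zero_sum = {                       # classic wargame framing: one side's gain is the other's loss
    ("attack", "defend"): (+1, -1),
    ("attack", "expose"): (-1, +1),
}

non_zero_sum = {                   # cyber framing: outcomes need not cancel out
    ("steal_data", "patch_later"): (+2, -1),       # red gains more than blue loses
    ("noisy_attack", "rapid_response"): (-1, -1),  # both sides lose, e.g. shared infrastructure damaged
}

for name, matrix in (("zero-sum", zero_sum), ("non-zero-sum", non_zero_sum)):
    sums = {actions: red + blue for actions, (red, blue) in matrix.items()}
    print(name, "payoff sums per outcome:", sums)
```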

Knowledge Transfer: What is Actually Being Taught?

Another assumption made in the deployment of wargames is that they teach. However, what is actually being taught is examined far less closely. In general, serious games can be categorised into two broad types: low road (or reflexive transfer) games and high road (or mindful transfer) games. Low road transfer games are concerned with the direct training of a stimulus and a response in a controlled environment that is as similar as possible to the context the player faces in real life – a flight simulator, for example. The second type, high road games, are designed to encourage players to mindfully make connections between the context of play and the real world. Reflexive games are more likely to emphasise speed, whereas mindful transfer games are more likely to emphasise communication between players. Games must be designed using the knowledge transfer type most appropriate to the intended learning outcomes.

Overenthusiastic Scoping

Cyber operations do not exist in isolation from traditional models of warfare. The integration of cyber operations with kinetic warfare, however, dramatically increases the complexity. Even attempting to capture the whole cyber landscape in a single game runs the real risk of detail overload, decision paralysis, and distracting the player from the game’s intended learning objectives. The longer it takes to learn to play, the less time the player has available to learn from the play. In reality, one cannot accurately simulate the real-world threat landscape without sacrificing effective learning (unless the learning point is simply to illustrate how complex the cyber threat landscape might be). For example, if the cyber wargame is focusing on the protection of critical national infrastructure, then side-tasks focusing on several other industries are likely to confuse, rather than assist, participants in achieving the desired learning goals.

Recommendations

How should we best approach the challenge of effective cyber wargame design?

We propose that cyber wargames be designed in line with the following four principles:

  • Include ‘partial knowledge’ states. If the cyber wargame player has full knowledge of the game state, the game becomes nothing more than an algorithmic recall activity in which a player can predict which actions are likely to result in successful outcomes. Certain ludic uncertainties can be included to induce ‘partial knowledge’, simulating the fog of war as required for each game (a minimal sketch of this and the next principle follows this list).
  • Include ‘asymmetric positions’ for the players. The character of cyberwar is better modelled through asymmetric relationships between players. Cyber wargame designers need to consider the benefits of having this asymmetry inside the game.
  • Confirm learning objectives and knowledge transfer type before commencing design. Both low road and high road transfer games are valuable, but they serve different functions in the learning environment. A conscious choice about whether the game is attempting to promote low road or high road transfer should be confirmed before game design commences to ensure the appropriateness of the game.
  • Clearly scope the game to explore specific challenges. A well-scoped, smaller game increases players’ willingness to replay it multiple times, allowing them to experiment with different strategies.
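As a minimal sketch of the first two principles (the data structures, node names and fields are invented for illustration, not drawn from any existing exercise), an umpire can hold the full game state while each side receives only a filtered, asymmetric view of it:

```python
# Minimal sketch of 'partial knowledge' and 'asymmetric positions' in a cyber
# wargame state. All structures and fields are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    name: str
    vulnerable: bool          # known to the attacker after reconnaissance
    compromised: bool = False # ground truth, held by the umpire
    detected: bool = False    # becomes True only when a detection event resolves

@dataclass
class GameState:
    nodes: List[Node] = field(default_factory=list)

    def attacker_view(self) -> List[Tuple[str, bool, bool]]:
        # Reconnaissance asymmetry: the attacker sees which nodes are vulnerable
        # and which it has already compromised.
        return [(n.name, n.vulnerable, n.compromised) for n in self.nodes]

    def defender_view(self) -> List[Tuple[str, str]]:
        # Fog of war: the defender sees its asset inventory and alerts, but not
        # vulnerability knowledge or undetected compromises.
        return [(n.name, "alert" if n.detected else "no alert") for n in self.nodes]

umpire_state = GameState(nodes=[
    Node("mail-server", vulnerable=True, compromised=True),
    Node("hr-database", vulnerable=False),
])
print("attacker sees:", umpire_state.attacker_view())
print("defender sees:", umpire_state.defender_view())
```

A structure like this lets the facilitator tune how much fog of war each side experiences simply by changing what the view functions expose.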

Conclusion

As both cybersecurity and wargames increase in importance and visibility, so does research on the use of cyber wargaming as a pedagogical tool for practitioners, policymakers, and the military. Existing principles within the games design profession around clear scoping of goals, game narratives, and appropriate player capabilities may all be applied to enhance existing cyber wargame design. The inclusion of partial knowledge states and asymmetric player capabilities both reflect crucial aspects of the cyber domain, while explicit attention to a game’s desired learning objectives and scope ensures that the resulting designs are as effective as possible. In a world in which cyberspace is only expected to become a more common feature of modern conflict, it is strongly advised that the MoD’s Defence Wargaming Centre leverages these tools and training opportunities. In the asymmetric and unpredictable field of cyber warfare, we need all the advantages we can get.

 

Amy Ertan is a cybersecurity researcher and information security doctoral candidate at Royal Holloway, University of London, and predoctoral cybersecurity fellow at the Belfer Center, Harvard Kennedy School. She is an exercise designer for cyber incident management scenarios for The CyberFish Company. As a Visiting Researcher at the NATO Cooperative Cyber Defence Center of Excellence, Amy has contributed to strategic scenario design for the cyber defence exercise, Locked Shields 2021. You can follow her on twitter: @AmyErtan, or via her personal webpage: https://www.amyertan.com

Peadar Callaghan is a wargames designer and lectures in learning game design and gamification at the University of Tallinn, Estonia. His company, Integrated Game Solutions, provides consultancy and design services for serious games and simulations, with a focus on providing engaging training outcomes. You can find him at http://peadarcallaghan.com/

