
Strife

The Academic Blog of the Department of War Studies, King's College London


Strife series

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part I

June 17, 2021 by Ed Stacey and Amy Ertan

Photo Credit: Mike MacKenzie, licensed via Creative Commons.

On Wednesday 17th March, Strife Interviewer Ed Stacey sat down with Amy Ertan to discuss offensive cyber in the context of artificial intelligence (AI) and military innovation. For part three of Strife’s Offensive Cyber Series, Ms Ertan discusses the current role of AI in offensive cyber and potential future trajectories, including effects on the offence-defence balance and arms racing, as well as her PhD research, which explores the unforeseen and unintended security consequences of developing and implementing military AI.

Ed Stacey: Amy, could you start by briefly defining AI in the context of offensive cyber. Are we really just talking about machine learning, for example?

Amy Ertan: Artificial intelligence is not just machine learning algorithms – it is a huge range of technologies. There is a whole history of AI that goes back to before the mid-1970s and late-80s: rule-based AI and knowledge-based AI which is, as it sounds, learning based on rules and logic. Then in the last decade or so we have seen a huge uptick in machine learning-based algorithms and its various sub-branches, including deep-learning and neural networks, which are incredibly complex algorithms that we cannot actually understand as humans. So, in summary, AI is a big umbrella term for different kinds of learning technologies.

At the same time, there is some snake oil in the market and a lot of what people call AI can just be probabilistic statistics. Being generous, some of the start-ups that you see are doing if-then algorithms that we could probably do on Excel. That does not, of course, account for the tech giant stuff. But when we talk about AI, we have everything from the super basic things that are not really AI to the incredibly well-financed, billion dollar projects that we see at Amazon, Microsoft and so on.

Machine learning is where a lot of today’s cutting edge research is. So the idea that you can feed data, potentially untagged data – unsupervised learning – into an algorithm, let the algorithm work through that and then make predictions based on that data. So, for example, you feed in three million pictures of cats and if the algorithm works as intended, it will then recognise what is and is not a cat.

In terms of how that fits into offensive cyber, AI is another tool in the toolkit. A learning algorithm, depending on how it is designed and used, will be just like any other cyber tool that you might have, only with learning technology within it. I would make the point that it is not something that we see being utilised today in terms of pure cyber attacks because it is not mature enough to be creative. The machine learning AI that we have right now is very good at narrow tasks, but you cannot just launch it and there is no “AI cyber attack” at the moment.
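To make the cat-picture example above a little more concrete, here is a minimal, purely illustrative sketch of the kind of narrow, task-specific learning Ms Ertan describes. It uses scikit-learn and its small built-in digits dataset as a stand-in for labelled cat photos; the dataset, library and numbers are illustrative assumptions and do not come from the interview.

    # Illustrative only: a narrow, supervised learner of the kind described above.
    # The digits dataset stands in for the "three million pictures of cats".
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 greyscale images with labels 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=2000)  # a simple supervised algorithm
    model.fit(X_train, y_train)                # "feed in" the labelled examples

    # The trained model can label images it has never seen, but only for this
    # one narrow task; it has no creativity beyond what it was trained on.
    print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2f}")

The narrowness is the point: the model does one task well and nothing else, which is why, as Ms Ertan notes, there is no general-purpose "AI cyber attack" today.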

ES: How might AI enhance or facilitate offensive cyber operations?

AE: As I said, AI is not being used extensively today in offensive cyber operations. The technology is too immature, although we do see AI doing interesting things when it has a narrow scope – like voice or image recognition, text generation or predictive analytics on a particular kind of data set. But looking forward, there are very feasible and clear ways in which AI-enabled technologies might enhance or facilitate cyber operations, both on the offensive and defensive side.

In general, you can talk about the way that AI-enabled tools can speed up or scale up an activity. One example of how AI might enhance offensive cyber operations is through surveillance and reconnaissance. We see already, for example, AI-enabled tools being used in intelligence processing for imagery, like drone footage, saving a huge amount of time and vastly expanding the capacity of that intelligence processing. You could predict that being used to survey a cyber network.

Using AI to automate reconnaissance, to do that research – the very first stage of a cyber attack – is not a capability that you have now. But it would certainly enhance a cyber operation in terms of working out the best target at an organisation – where the weak link was, the best way in. So there is a lot that could be done.

ES: Are we talking then about simply an evolution of currently automated functions or does AI have the potential to revolutionise offensive cyber?

AE: In terms of whether AI will be just a new step or a revolution, generally my research has shown that it will be pretty revolutionary. AI-enabled technology has the power to revolutionise conflict and cyber conflict, and to a large extent that is through an evolution of automated functions and autonomous capabilities. I think the extent to which it is a full-blown revolution will depend on how actors use it.

Within cyberspace, you have this aspect that there might be AI versus AI cyber conflict in the future. Where your offensive cyber tool – your intrusion, your exploit tool – goes head-to-head with your target’s AI-enabled cyber defence tools, which might be intrusion prevention or spam filtering tools that are already AI-enabled. It really depends on how capabilities are used. You will have human creativity but then an AI algorithm makes decisions in ways that humans do not, so that will change some aspects of how offensive cyber activity takes place.

There is debate as to whether this is a cyber attack or information warfare, but I think deep fakes would be an example of a technology or tool that is already being used, falsifying information, that has revolutionised information warfare because of the scale and the nature of the internet today. So how far AI revolutionises offensive cyber will depend not only on its use but also a complex set of interconnections between AI, big data, online connectedness and digital reliance that will come together to change the way that conflict takes place online.

That is a complicated, long answer to say: it depends, but AI definitely does have the potential to revolutionise offensive cyber.

ES: No, thank you – I appreciate that revolutionary is a bit of a loaded term.

AE: Yes, there is a lot of hyperbole when you talk about AI in warfare. But through my doctoral research, every industry practitioner and policy-maker that I have spoken to has agreed that it is a game-changer. Whether or not you agree with the hype, it changes the rules of the game because the speed completely changes and the nature of an attack may completely change. So you definitely cannot say that the power of big data and the power of AI will not change things.

ES: This next question is from Dr Daniel Moore, who I spoke to last week for part two of this series. He was wondering if you think that AI will significantly alter the balance between offence and defence in cyberspace?

AE: I am going to disappoint Danny and say: we do not know yet. We do already see, of course, this interesting balance that states are choosing when they pick their own defence versus offence postures. And I think it is really important to note here that AI is just one tool in the arsenal for a team that is tasked with offensive cyber capabilities. At this point, I do not predict it making a huge difference.

At least when we talk about state-coordinated offensive cyber – sophisticated attacks, taking down adversaries or targeting critical national infrastructure, for example – those operations require such sophisticated, niche tools that the automation capabilities provided by AI are unlikely to offer any cutting-edge advantage there. So that depends. On the defensive side, AI cyber defence tools streamline a huge amount of activity, whether that is picking out abnormal activities in your network or your logs. That eliminates a huge amount of manual analysis that cyber defence analysts might otherwise have to do and gives them more time for meaningful analysis.

AI speeds up and streamlines activity on both the offensive and defensive side, so I think it simply fits into the wider policy discussions for a state. It is one aspect but not the determining aspect, at the moment anyway or in the near future.
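As a rough illustration of the defensive streamlining Ms Ertan describes, the sketch below flags abnormal records in invented network-log data using scikit-learn's IsolationForest, leaving a human analyst to judge the handful of cases it surfaces. The feature names, numbers and model choice are illustrative assumptions and are not drawn from the interview.

    # Illustrative only: anomaly detection over made-up network-log features,
    # the sort of triage an AI-enabled cyber defence tool might automate.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Invented features per log record: [bytes out, failed logins, distinct ports]
    normal_traffic = rng.normal(loc=[5_000, 1, 3],
                                scale=[1_000, 1, 1],
                                size=(1_000, 3))
    suspicious = np.array([[250_000, 40, 60],   # looks like bulk exfiltration
                           [900, 35, 2]])       # looks like a brute-force attempt

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)                # learn what "normal" looks like

    # -1 marks records the model considers anomalous and worth an analyst's time
    print(detector.predict(np.vstack([normal_traffic[:5], suspicious])))

The division of labour is what matters here: the algorithm handles the high-volume filtering, and the analyst keeps the time for meaningful analysis.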

ES: And I guess the blurring of the lines between offence and defence in some cyber postures complicates the issue a little?

AE: Yes, especially when you look at the US and the way they define persistent engagement and defending forward. It is interesting as to where different states will draw their own lines on reaching outside their networks to take down the infrastructure of someone they know is attacking them – offensive activity for defensive purposes. So I think the policy question is much bigger than AI.

ES: Thinking more geopolitically, the UK’s Integrated Review was heavy on science and new technologies and other countries are putting a lot of resources into AI as well. There seems to be some element of a security dilemma here, but would you go so far as to say that we are seeing the start of a nascent AI arms race – what is your view of that framing?

AE: I think to an extent, yes, we do see aspects of a nascent AI arms race. But it is across all sectors, which comes back to AI as a dual-use technology. The Microsoft AI capability that we use now to chat with friends is also being used by NATO command structures and other military structures in command and control infrastructure, albeit in a slightly different form.

Because cutting-edge AI is being developed by private companies, which have the access and resources to do this, it is not like there is this huge arsenal of inherently weaponised AI tools. On the flip side, AI as a dual-use technology means that everything can be weaponised or gamed with enough capability. So it is a very messy landscape.

There have been large debates around autonomous systems in conflict generally, like drones, and I think there is an extent to which we can apply this to cyberspace too. While there is this security dilemma aspect, it is not in any states’ interests to escalate into full-blown warfare that cannot be deescalated and that threatens their citizens, so tools and capabilities should be used carefully.

Now there is a limit to how much you can apply this to cyberspace because of its invisible nature, the lack of transparency and a completely different deterrence structure. But there is an argument that states will show restraint in weaponizing AI where it is not in their interest. You see this conversation taking place, for example, around lethal autonomous weapons at the United Nations Group of Governmental Experts, where it is generally considered that taking the human out of the loop is highly undesirable. But it is complicated and early days.

Looking at the UK, my research has shown that there is pressure to develop AI capabilities in this space and there are perceptions of an AI arms race across the private sector, which is who I spoke to. And there is this awareness that AI investment must happen, in a large part because of anticipated behaviour of adversary states – the idea that other states do not have the same ethical or legal constraints when it comes to offensive cyber or the use of military AI, which is what my PhD thesis focuses on. The only preventative answer to stop this security mechanism building up into an AI arms race seems to be some kind of consensus mechanism, whereby like-minded states agree not to weaponize AI in this way. That is why my research has taken me to NATO, to look in the military context at what kinds of norms can be developed and whether there is a role for international agreement in this way.

If I had to summarise that argument into one or two sentences: there are trends suggesting that there is an AI arms race which is bigger than conflict, bigger than the military and bigger than cyber. So you have to rely on the security interests of the states themselves not to escalate and to potentially form alliance agreements to prevent escalation.


Part II of this interview will be published tomorrow on Friday 18th June 2021.


Offensive Cyber Series: Dr Daniel Moore on Cyber Operations, Part II

June 11, 2021 by Dr Daniel Moore and Ed Stacey

Photo Credit: Ecole polytechnique / Paris / France, licensed with CC BY-SA 2.0.

This is part II of Ed Stacey’s interview with Dr Daniel Moore on cyber operations for Strife’s Offensive Cyber Series. You can find Part I here.


ES: Thinking about alliances more broadly, what sort of opportunities and challenges do allies face when conducting joint operations in cyberspace?

DM: Allied operations on networks – I am not a fan of the term cyberspace – are contentious as well. They are a good measure more sensitive than any conventional equivalent that you can think of. It is not like having a joint military operation: it means putting your sensitive infrastructure and capabilities on the line alongside an ally. That is not to say it does not happen, and there have been documented cases which were purportedly joint operations by multiple countries. So I think it will happen, but there are complexities involved. I know that NATO has already declared that they are together, as an alliance, bringing forward cyber capabilities that they will use jointly. I welcome that declaration, even if I am sceptical as to what it actually means.

I would tend to believe that, considering how porous NATO is as an entity and how there are varying levels of trust within NATO, truly sensitive capabilities will be kept off the table by individual member states in favour of their own arsenals and sets of strategic capabilities. This is not to say it is not possible, but it is unlikely that at a NATO level you will see joint operations that are truly strategic in nature. What you might see is allied members that are operating together. I do not think that, for example, a joint UK-US operation against a target is out of the question, especially if one brings a certain set of capabilities to the table and one brings others – somebody gives the tools, this unit has the relevant exploits, this intelligence organisation had already developed access to that adversary and so on. Melding that together has a lot of advantages, but it requires a level of operational intimacy that is higher than what you would be able to achieve at the NATO alliance level.

ES: Moving beyond the state, what role does the private sector play in the operational side of offensive cyber? Do we have the equivalent of private military contractors in cyberspace, for example?

DM: There is a massive role for the private sector across the entire operational chain within offensive cyber operations. I would say a few things on this. Yes, they cover the entire chain of operations and that includes vulnerability research, exploit development, malicious tool development and then even specific outfits that carry out the entire operational lifecycle, so actually conduct the intrusion itself for whatever purposes. In some cases, it is part of an industrial-defence complex like in the US, for example, where you have some of the giant players in defence developing offensive capabilities, both on the event- and presence-based side of things. And ostensibly you would have some of those folks contributing contractors and operators to actually facilitate operations.

But in other countries that have a more freeform or less mature public sector model for facilitating offensive cyber operations, the reliance on third party private organisations is immense. If you look, for example, at some of the US indictments against Iranian entities, you will see that they charged quite a few Iranian private companies for engaging in offensive cyber operations. The same happens in China as well, where you see private sector entities engaging in operations driven by public sector objectives. In some cases, they are entirely subsumed by a government entity, whereas in others they are just doing work on their behalf. In some cases, you actually see them use the same infrastructure in one beat for national security objectives, then the workday ends and they pivot and start doing ransomware to get some more cash in the evenings – using the same tools or infrastructure, or something slightly different. So, yes, the private sector plays an immense role throughout this entire ecosystem, mostly because the cost of entry is low and the opportunities are vast.

ES: Just to finish, you have a book coming out soon on offensive cyber. Can you tell us anything about what to expect and does it have a title or release date yet?

DM: The book is planned for release in October. It will be titled Offensive Cyber Operations: Understanding Intangible Warfare, and it is basically a heavily processed version of my PhD thesis that has been adapted, firstly, with some additional content to reflect more case studies, but also to appeal to anybody who is interested in the topic without necessarily having a background in cyber or military strategy and doctrine. So it is trying to bridge the gap and make the book accessible, exactly to dispel some of the ambiguities around the utility of cyber operations. Questions like: how are they currently being used? What can they be used for? What does the “cyber war” narrative mean? When does an offensive cyber operation actually qualify as an act of cyber warfare? And, most importantly, what are the key differences between how different countries approach offensive cyber operations? Things like organisational culture, different levels of maturity, strategic doctrine and even just circumstance really shape how countries approach the space.

So I tackle four case studies – Russia, the US, China and Iran – and each one of those countries has unique advantages and disadvantages, they bring something else to the table and have an entirely different set of circumstances for how they engage. For example, the Iranians are incredibly aggressive and loud in their offensive cyber operations. But the other side to this is that they lack discipline, their tools tend to be of a lower quality and while they are able to achieve tactical impact, it does not always translate to long-term success.

The US is very methodical in its approach – you can see, taste and smell the bureaucracy in every major operation that it does. But that bureaucratic entanglement and the constant tension between the National Security Agency, Cyber Command and other involved military entities results in a more ponderous approach to cyber operations, although those organisations obviously bring a tonne of access and capability.

With the Russians, you can clearly see how they do not address cyber operations as a distinct field. Instead, they look at the information spectrum more holistically, which is of pivotal importance to them – so shaping what is “the truth” and creating the narrative for longer-term strategic success is more important than the specifics. That being said, they are also one of the most prolific offensive actors that we have seen, including multiple attacks against global critical infrastructure and various aggressive worms that exacted a heavy toll from targets. So for Russia, if you start looking at their military doctrine, you can see just how much they borrow, not only from their past in electronic warfare but also their extensive past in information operations, and how those blend together to create a broader spectrum of information capabilities in which offensive cyber operations are just one component.

And finally, the Chinese are prolific actors in cyber espionage – provably so. They have significant technical capabilities, perhaps somewhat shy of their American counterparts but they are high up there. They took interesting steps to solidify their cyber capabilities under a military mandate when they established the Strategic Support Force, which again – like the NCF – tried to resolve organisational tensions by coalescing those capabilities. But they are largely unproven in the offensive space. They do have an interesting scenario on their plate in which cyber could and may play a role, which is any attempt at reclaiming Taiwan – something I look at extensively in the book, and how that shapes their offensive posture.

So the book is a combination of a broader analysis of the significance of cyber operations and then how they are concretely applied by different nations for different purposes.


The next interview in Strife’s Offensive Cyber Series is with Amy Ertan on AI and military innovation. It will be released in two parts on Thursday 17th and Friday 18th June 2021.


Offensive Cyber Series: Dr Tim Stevens on Offensive Cyber in the 2020s, Part I

June 3, 2021 by Ed Stacey and Dr Tim Stevens

Photo Credit: AirmanMagazine, licensed under CC BY-NC 2.0

On Wednesday 3rd March, Strife Interviewer Ed Stacey sat down with Dr Tim Stevens to discuss the state of play in offensive cyber in the 2020s. As part one of Strife’s Offensive Cyber Series, Dr Stevens introduces the topic and offers his thoughts on a range of topical debates, from the utility of offensive cyber capabilities to questions around international law and ethics and the UK’s recently avowed National Cyber Force.

Ed Stacey: Tim, as you know, this interview series is all about offensive cyber. This is quite a slippery term, so could you perhaps kick us off with a working definition?

Tim Stevens: You will be unsurprised to hear that there is no working definition, or at least no consensus on a definition, of what offensive cyber is. Obviously, it is a term that attempts to draw some kind of analogy from other capabilities that can be used for offensive purposes – one of which is obviously weapons, another would be munitions. But actually, offensive cyber is a lot more difficult to pin down because it is not kinetic in any conventional sense: it is not something that you can throw, shoot or drop on someone to cause damage.

But what offensive cyber tries to get at is the idea that through computer code, so little packets of software that can be sent through computer networks, you are going to attempt to deny, degrade, disrupt or even destroy something that your enemy holds to be of value. This principally could be data itself or it could be the computer systems and computer networks that data is held on.

Now offensive cyber is also being used not just in a military context but an intelligence context too, so it has some relationships with espionage or at least the covert activities of intelligence agencies. It could conceivably be used not in the kind of military break things sense but in the more inflected activities of intelligence, like subversion or sabotage, that occupy a slightly weird space and do not look like acts of war, for example.

ES: Terms such as cyber war, cyber attack and cyber weapons are used quite loosely in public discourse. Do you think we need to be more precise with our language when we are talking about offensive cyber?

TS: I think it would help if we had in common discourse some understanding that perhaps we are overhyping some of the phenomena that we are describing, and using heavily militarised language like cyber war really does not help. Cyber attacks are usually nothing of the sort and cyber weapons usually cannot be classed as weapons, for example.

To take the cyber war example. When we think about cyber war, these days it usually means some kind of state of hostilities operating between two states, in which they are battering each other with cyber weapons of some description or another. Now apart from the fact that we have not seen this, it is also unlikely that we will see it. I think if two states are to be in a declared or actual state of cyber hostilities, there will be other issues – other types of operations in other domains – that are going to be just as relevant. So this idea of a standalone cyber war is not helpful.

Cyber warfare, on the other hand, is helpful because that is what militaries and intelligence agencies arguably are involved in at present – they are fighting, conflicting and contesting cyberspace as an operational domain. And they are doing that through offensive cyber, in part, but also through other activities that they can bring to bear on that domain. So cyber warfare has some utility; it is a form of warfighting or conflict through cyber means.

Cyber attacks – well, that term is just used to denote anything that you do not like. Whether it is an attack in any kind of conventional or attenuated sense is really irrelevant. If your adversary – whether they are a criminal, terrorist, state or proxy – has done something to your networks that you do not like, you call it a cyber attack, even though it might be nothing of the sort. It might be one of billions of automated pings or bots that confront your networks every day as a matter of course. Or it could be a cunning, socially-engineered and sophisticated cyber operation against something that you hold of value. The two are clearly not the same, but they are all being called cyber attacks in popular discourse, and the media are just as guilty of this as politicians and occasionally academics and civil society too. So I do think it is important to make these distinctions.

The issue with cyber weapons is whether these types of capabilities can actually be described as weapons, and again there is no consensus. Conventionally, weapons have to have the capacity to hurt by virtue of, say, ballistics. If you think about discussions around chemical and biological weapons, people are sometimes uncomfortable calling them weapons in any conventional sense too. And the thing about cyber weapons is that, as of yet, no direct physical harm has been caused by any of those capabilities. Instead, what happens is that there is attenuated secondary harm that would be caused when, for example, you change the 1s and 0s in an incubator in an intensive care unit and as a result of that someone dies, but it does not directly harm that person. So that is the kind of debate that is being had about whether these capabilities are weapons or not.

ES: Thinking about the utility of offensive cyber, why are states developing these types of capabilities and what do they offer that other capabilities do not?

TS: To think about the broader utility or the framing of these capabilities is, I think, to return to the [revolution in military affairs] of the late 1980s and early 1990s, which carried on through subsequent decades in western military affairs – the suggestion that we are shifting towards informationalised, precision-strike, stand-off warfare that prioritises our own force protection and the ability to cause effects hundreds, if not thousands, of miles away.

Clearly, if you are sitting at a computer in one part of the world and you wish to attack another computer on the other side of the world, it is much easier to do that through computer networks than it is through conventional means: the mode of operation, the platform and the technology are much easier to get hold of. And if you can create the same effects remotely as you could from a hundred yards or half a mile away, then why would you not? You do not have to put your troops, or indeed your intelligence agents, in harm’s way. If you do not have to put a human asset into a foreign country to achieve an effect, why would you? These are the kind of attractions that states are finding in these sorts of capabilities.

Another one, of course, is that it is relatively cheap. It is much easier to hire people to develop these kinds of capabilities than it is to develop a new weapon system. Essentially, if the weapon system you need is, if not quite an off-the-shelf computer system, then something existing that can be adapted, it is much cheaper than trying to develop a new line of fighter jet, precision-guided munition, helicopter or battleship of any description. So that is an attraction there.

Another thing is this idea of effects. As I mentioned previously, if you can create some kind of effect that generates advantage – mainly operational or strategic, but also tactical – over your adversary through the use of computer networks, that has to be attractive: if it is cheaper, if it does not put your troops in harm’s way and, importantly, if it does not immediately escalate to something that looks like a conventional shooting war. Because if people are not being directly harmed, yet you are causing your adversary to change their mind or behaviour in some fashion, that is incredibly seductive for a commander or state that is looking to improve, enhance or extend their operational and strategic toolbox. So that is the general idea behind why these capabilities are attractive.

ES: Looking at the other side of things, what are the limits of offensive cyber?

TS: That is a good question and an open one too. These kinds of capabilities may be attractive to countries and their militaries and intelligence agencies, but the jury is out on how effective they actually are. Because it turns out, for various reasons, that it is actually quite difficult to get your adversary to do what you want through cyber means. Partly this is because they are not as easy to control as we might think, and partly it is because, as I mentioned earlier, causing kinetic effects to actually change someone’s mind in a visceral sense is very difficult.

It is also difficult because you cannot keep doing it with the same capabilities. Once you have developed an advanced offensive cyber capability, essentially you can only use it once because then your enemy will see the code, understand the vulnerability that has been exploited, patch their systems and then that vulnerability disappears. So you cannot keep holding your enemy’s assets at risk, which means that even if something happens once – and given that no computer system is demonstrably secure, it is going to happen at some point – you know that it is a one-off attack. Because you know, or at least you hope, that your adversary has not got the capability to keep punishing you in that way. So that means that if you can roll with the punches if you get attacked or exploited, you are not expecting a follow-up that is really going to double down and force you to change your mind or your behaviour.

So for all the attraction of these capabilities, there are limits. Now that is not to say that there are limits to the imagination of people who wish to develop and deploy these things, and I am not saying for a second that, with this realisation that there are limits to their utility, states are going to stop developing them, because they are not. In fact, what I think is going to happen is what you are seeing at the moment, which is that states and other actors are going to continue to experiment with them until they find some way of generating the higher-level effects that they wish.

To bring that round to a conclusion: tactically, they can be very useful; operationally, they can generate some really interesting effects; strategically, it looks very difficult to generate the effects that you want.

Part II of this interview will be published tomorrow on Friday 4th June 2021.


Series Introduction: Conflict and Health in the Eastern Mediterranean

April 25, 2021 by Dr Anas Ismail


Photo by Andrey Metelev on Unsplash

Conflict is known to affect health in a myriad of ways[1] including, but not limited to, attacks on health workers and facilities, disruption of access to health services, deterioration of water and sanitation infrastructure, and interruption of training and education for medical students and junior doctors. The more severe and long-lasting the conflict, the more significant its effects on health and the harder it will be to recover and improve health in post-conflict situations. Moreover, the impacts of conflict extend beyond the national borders in which they occur, and bleed over into neighbouring states whose governance, in the face of conflicts’ destabilizing effects and mass refugee flows, is significantly challenged.

The Eastern Mediterranean is home to one of the oldest conflicts, that between Israel and Palestine, and one of the most devastating, the Syrian civil war.

In Palestine, over five decades of occupation have taken their toll on all aspects of healthcare, from medical education to patient care and living conditions – aspects further explored in this series. In Syria, the conflict has been characterized by the weaponization of healthcare, i.e. attacking or withholding healthcare as a strategy of war. Moreover, while the parties to these conflicts suffer extensively, other countries in the region are also struggling with their spill-over effects alongside their own internal divisions and unrest. Lebanon continues to endure an unprecedented economic crisis, deep internal divisions, and a lack of proper governance at the political level – all whilst hosting nearly 1.5 million Syrian refugees. These pressures have had a significant impact on the country’s youth, whose mental health needs seem far from being met in a country where psychiatric and psychological services are largely neglected, with a treatment gap close to 90%. Jordan, on the other hand, enjoys relatively stable governance and economic conditions, but it has been strained by the surrounding instability. As a result of the protracted Israeli-Palestinian conflict, it hosts 2.2 million Palestinian refugees, the most of any country, and over the past decade these have been joined by 1 million Syrian refugees, meaning that over 30% of Jordan’s total resident population are displaced people.

This series aims to capture various experiences of healthcare delivery and medical education in the region. The first article brings to light how medical education is being provided completely online for the first time in Gaza under settings of protracted conflict, chronic electricity shortage, and economic impoverishment. The second piece discusses the underlying causes of the looming epidemic of mental health disorders among the youth in Lebanon and how the economic crisis is impacting it. The third article illuminates the intricacies and security implications of patients in Gaza needing, and seeking, treatment abroad. In the fourth piece, the author, a Syrian doctor in the diaspora, discusses how attacking the healthcare workforce in Syria is affecting the provision of health and what needs to be done. The final piece describes the far-reaching impact of war-related amputations on civilians in the Gaza Strip and argues that modern warfare, such as drone attacks, is causing an increase in such injuries.

Series Publication Schedule

  • 26 April 2021 – Part I: Medical education under blockade, protracted conflict and constant warfare. By Alaa Ismail
  • 27 April 2021 – Part II: Lebanon in Ashes: A Looming Mental Health Crisis? By Loubaba Al Wazir
  • 28 April 2021 – Part III: Medical Referrals in Gaza: Uncertainty and Agony for Palestinian Patients. By Anas Ismail
  • 29 April 2021 – Part IV: How does the Syrian Civil War affect health care workers? By Abdullah Al Houri
  • 30 April 2021 – Part V: Life after traumatic amputations in the Gaza Strip. By Hanne Heszlein-Lossius

[1] Howard, Natasha, Mazeda Hossain, and Lara Ho. “Effects of Conflict on Health.” In Conflict and Health, 25–32. Berkshire: McGraw-Hill Education, 2012.


Anas Ismail is a medical doctor originally from Gaza, Palestine, where he received his education and training. As a citizen of Gaza, and later as a medical student, he personally experienced the impact that conflict has on life in general and on healthcare in particular. This led him to an interest in global health as a means of learning more about the relationship between conflict and health.

With a joint scholarship from the Chevening Awards and the Said Foundation, Anas is currently studying for an MSc in Global Health with Conflict and Security at King’s College London. He is the Production Manager for Strife Blog and a Series Editor at Strife.


Call for Papers: Rethinking States of Exception Series

February 4, 2021 by Strife Staff

 

Strife is pleased to announce the call for contributions to its ‘Rethinking States of Exception’ Series.

This series is looking to publish examinations of contemporary productions of “states of exception” that go beyond Carl Schmitt and Giorgio Agamben’s traditional theorizations, and incorporate critical, transnational, post-colonial, as well as feminist perspectives to produce a more nuanced theoretical paradigm. The theory’s crux lies in its ability to explain the everyday use of mechanisms of power in today’s liberal democracies, as well as in contexts of despotic power and complete impunity. Nevertheless, the theory’s traditional framework falls short of fully accounting for these mechanisms as a result of its ungendered, unracialized, and ahistorical approach. Only in recognizing the ways in which gender, race, and colonial legacies affect distributions of power can we truly uncover the conditions under which states of exception and the entity of homines sacri are produced today.

Themes could include but are not limited to:

  • Refugee Politics and Statelessness
  • Arbitrary Detention, Mass Incarceration, and the Carceral State
  • Negotiations and Imaginaries of Borders, Sovereignty, and States of Exception
  • Covid-19 and Viruses as Metaphors of Exception
  • The War on Terror
  • Intersex, Transexual, and Queer Bodies as States of Exception
  • The Violence Produced by Exceptions

Questions could include but are not limited to:

  • How do we account for and respond to the contemporary production of states of exception?
  • Should responses to violent crises always be constitutional?
  • What does the “state of exception” mean in a neoliberal world order where executive power is inseparable from the interests of the private sector?
  • What is the relationship between human reproduction and governance?
  • What function does the securitisation of irregular migration fulfill? In what strategies is it integrated?

This interdisciplinary series is not limited to geopolitical perspectives and welcomes contributions from diverse fields, such as gender studies, law, economics, contemporary arts, and much more. Articles should be around 1000-1200 words in length and meet all of the submission guidelines. Articles will be subject to review by the Series Editor and the Blog Coordinating Editor prior to acceptance to the series. Articles that do not meet referencing and formatting guidelines risk being rejected for publication.

Articles should be submitted by 2nd March 2021. If you are interested in submitting an article for publication, or have an idea or query you wish to discuss, please contact our editorial team at:  blog.coordinating.editor@strifeblog.org


