
Strife

The Academic Blog of the Department of War Studies, King's College London


Cybersecurity

Brainjacking: The uncomfortable truth of bio-technology

October 3, 2016 by Cheng Lai Ki

By: Cheng Lai Ki

Modular Prosthetic Limb (MPL) was developed as part of a four-year program by the Johns Hopkins Applied Physics Laboratory, along with Walter Reed National Military Medical Center and the Uniformed Services University of the Health Sciences. The brain-controlled prosthetic has nearly as much dexterity as a natural limb, and allows independent movement of fingers. (Source: Wikimedia)

On July 25, 2013, renowned hacker and information security expert Barnaby Jack was discovered dead in his San Francisco apartment. As a bearer of an implanted device himself, he was known for exposing security vulnerabilities in implanted medical devices, such as insulin pumps ‘that could be [programmed] to dispense a fatal dose by a hacker 300ft away.’ His exposés even led some medical companies to review the cybersecurity protocols of their products. Jack’s work undoubtedly uncovered an important but under-discussed area of cybersecurity: cybernetics and brainjacking.

The term ‘cybernetics’ was coined in 1948 by Norbert Wiener in his book Cybernetics: or Control and Communication in the Animal and the Machine, which inspired an entire generation of engineers and technical enthusiasts. More recently, David Mindell defined cybernetics as ‘the study of human/machine interaction guided by the principle that numerous different types of systems can be studied according to principles of feedback, control and communication.’[1] At its core, cybernetics simply describes the interaction between man and machine, a concept elucidated by Thomas Rid in his new book, Rise of the Machines: A Cybernetic History.[2]

As our technological capabilities continue to advance, so too does the importance of cybernetics. While our understanding of cybernetics remains vital within cybersecurity domains, Jack’s work emphasized the increasing man-machine merger and the need to review the security systems of medical and augmentative devices. Today, the medical and defence communities have progressively developed advanced prosthetics by ‘taking advantage of the latest robotic technologies to enable [individuals] injured in battle to lead normal lives [and even regain capabilities] better than the original limb’. With current technology, prosthetics have come to replicate internal biological functions (e.g. pacemakers), information-processing functions (e.g. optical implants) and interactive functions (e.g. robotic hands). As fictionally reflected by Robin Williams’s character in Bicentennial Man (1999), almost all internal organs and external limbs can be technologically replicated. In a prepared brief for the 2013 Black Hat conference, Barnaby Jack wrote: ‘[i]n 2006 approximately 350,000 pacemakers and 173,000 ICDs (Implantable Cardioverter Defibrillators) were implanted in the US alone…[t]oday there are well over 3 million pacemakers and over 1.7 million ICDs in use.’ The fictional idea of the ‘cyborg’ (an entity which is both man and machine) is not looking so fictional anymore.

Currently, most devices and prosthetics do not require a direct neurological connection. Insulin pumps and pacemakers, for instance, rely on a small programmable logic controller (PLC) to regulate the dosages or impulses required for optimal organ function. The cybersecurity considerations of such devices are similar to those of other PLC-dependent systems that regulate fluids and/or voltages (e.g. hydroelectric dams). While this category has attracted cybersecurity attention for some time – especially since the discovery of the StuxNet worm and its effects on the Iranian uranium enrichment facility at Natanz – a comparable focus on neuro-linked devices is still absent.

According to an article by João Medeiros in WIRED, advanced and personalised prosthetics for amputees are becoming more affordable and readily available.[3] Most importantly, as Jack’s research discovered, commercially available prosthetics are becoming increasingly programmable – guided by convenience and marketing ideals. Currently, most prosthetics operate with external sensors. However, technologists have made significant strides in developing prosthetics operated by implanted neurological sensors. The LifeHand2, for example, was developed with a technique called intracortical microstimulation, in which neural impulses are mapped and then used to elicit specific body movements depending on the stimulus, relaying real-time information and sensory feedback directly to the amputee. This exposes augmented humans to a ‘bio-cybersecurity’ issue of human-hacking – in the literal sense.

Technological foundations of the LifeHand2, an advanced prosthetic that can be operated via neurologically implanted sensors to provide functional feedback for amputees. (Source: LifeHand2)

‘Brainjacking’

Coined in a World Neurosurgery article published in August 2016, brainjacking refers to the act of corrupting neurological implants with malicious code to exert involuntary control over motor functions or impulse-control systems within the patient/host. On a technical level, neurological implants convert digitised code into neuro-electric impulses mirroring those fired by neurons (brain cells). If the PLCs within these neuro-implants convert digital code into electric impulses, then carefully crafted code could potentially create the right levels of neuro-stimuli to – in effect – blackmail, control, inhibit or even kill the individual.

A Bio-Cybersecurity Concern?

The cybersecurity community must take steps to ensure that neuromodulation-based platforms are protected at the digital level. As highlighted in Jack’s work, the wireless and programmable components within prosthetics and medical implants carry their own vulnerabilities that can be exploited by malicious actors. This bio-cybersecurity concern must be addressed, especially since advanced prosthetics – like much technology throughout human history (e.g. ARPANET, the precursor to the Internet) – are being developed within military projects.

Such projects include DARPA’s Reliable Neural-Interface Technology (RE-NET) program, which is developing high-performance neurological interfaces for advanced prosthetics. Soldiers and other service personnel require the full faculties of their brains to carry out their missions – usually in highly stressful and hazardous environments. As such, the security and operational ramifications of a faulty prosthesis (and, by extension, of an augmented soldier) are no different from those of a faulty transmission signal or a virus infection aboard autonomous or remote-controlled platforms (e.g. drones).

Humans have consistently used technology for capability enhancement and augmentation. For individuals who have lost limbs or the full functionality of various bodily components, technological advancements have produced adaptive (and upgradable) prosthetics and implanted devices to help them regain full functionality. There is no doubt that advanced prosthetics can significantly improve the lives of such individuals. However, with more platforms being biologically integrated, cybersecurity practitioners and prosthetic technicians now face a hybridised domain of security considerations – both biological and technological. With the growing number of bio-technological devices, biologists and technical specialists need to collectively address the uncomfortable possibilities afflicting an ever-growing cyborg community – before it is too late.

Cheng served as an Armour Officer and Training Instructor at the Armour Training Institute (ATI) in the Singapore Armed Forces (SAF) and now possesses reservist status. His master’s research revolves around security considerations within the Asia-Pacific region, specifically in the areas of cybersecurity, maritime security and intelligence studies. His Master’s thesis explores the characteristics and trends defining China’s emerging cybersecurity and cyberwarfare capabilities. He participated in the April 2016 9/12 Cyber Student Challenge in Geneva and was published in IHS Jane’s Intelligence Review in May 2016. You can follow him on Twitter @LK_Cheng

Notes:

[1] Mindell, D.A. ‘Cybernetics: Knowledge domains in Engineering Systems’, MIT, Available from: http://web.mit.edu/esd.83/www/notebook/Cybernetics.PDF, (Fall 2000)

[2] Rid, T. Rise of the Machines: A Cybernetic History, (W.W. Norton & Company: New York), 2016.

[3] Medeiros, J. ‘Humans Becoming Bionic: The next generation of prosthetics will be bespoke, adaptable – even desirable’, The WIRED World in 2016, (2016), pp. 55–56

Image Source [1]: United States Navy, ‘Modular Prosthetic Limb’, Available from: https://commons.wikimedia.org/wiki/File:Flickr_-_Official_U.S._Navy_Imagery_-_The_Modular_Prosthetic_Limb_(MPL)..jpg (Mar 23 2012)

Image Source [2]: LifeHand2, http://www.discovery-zone.com/technology-amputee-feels-real-time-bionic-hand/ (Oct 1 2016)

Filed Under: Blog Article Tagged With: Bionics, Cybersecurity, DARPA, feature, military, Robotics

Film Review: Zero Days (2016)

September 21, 2016 by Cheng Lai Ki

Gibney, A. Zero Days, Jigsaw Productions, (2016). (PG-13) More information from: http://gb.imdb.com/title/tt5446858/.

By: Cheng Lai Ki


“The science fiction cyberwar scenario is here…” This statement comes from members of the United States National Security Agency (NSA) and others in the intelligence community, role-played by actress Joanne Tucker. Zero Days, directed and narrated by documentarian Alex Gibney – who produced the award-winning documentaries Enron: The Smartest Guys in the Room (2005) and Taxi to the Dark Side (2007) – explores the evolving nature of computer network exploitations (CNEs). In a world where critical infrastructure (e.g. energy suppliers, telecommunications networks), military communication grids (e.g. the US Global Information Grid, GIG) and diplomatic communications all run on information-communication technologies (ICTs), the documentary illuminates the uncomfortable realities and vulnerabilities of cyberspace.

Zero Days explores StuxNet, a computer worm developed in a joint US-Israeli effort to cripple the uranium enrichment capabilities of the Natanz enrichment plant in Iran. The documentary debuted at the 2016 Berlin film festival and was awarded a four-star review by the Guardian’s Peter Bradshaw, who described it as ‘intriguing and disturbing’. Named after the technical term ‘zero day’ – a computer network vulnerability known only to the attacker – the investigative documentary follows Gibney’s journey to uncover ‘the truth’ behind StuxNet’s technical capabilities and attributed political motives. Although it discusses a cybersecurity threat, the documentary goes beyond the technical landscape and introduces various geopolitical elements, such as Israeli disapproval of Iran cultivating a national nuclear capability. Given the relatively basic nature of its discussions, the documentary appears intended for the general public rather than specialists in the field. Gibney follows the investigative journalistic approach he is famous for, guiding the viewer along what is essentially a cyber-attribution journey implicating US and Israeli agencies. The documentary is constructed from strategically cut interviews with cybersecurity specialists (e.g. Kaspersky, Symantec), former senior leaders of ‘three-letter’ government agencies, industry experts (e.g. Ralph Langner, a German control system security consultant) and pioneers of investigative journalism (e.g. David Sanger), who discuss StuxNet’s discovery and capabilities. In addition to these interviews, Gibney wanted a more ‘real’ source of information; this is where the anonymous NSA employees came in. By using transcripts of their testimony (voiced by actress Joanne Tucker), Gibney incorporated an inside source that gives the documentary more power behind its claims.

A collection of Programmable Logic Controllers (PLCs), crucial technological components within most critical infrastructure. The StuxNet worm specifically targeted the Siemens Simatic S7-300 PLC CPU with three I/O modules attached.

The documentary excels in unveiling to the general public that: i) cybersecurity is not purely a software issue, but also a hardware one; and ii) digital malware can easily be weaponised for intelligence-gathering and strike purposes.

First, Symantec Security Response specialist Eric Chien states in an interview: ‘…real-world physical destruction. [Boom] At that time things became really scary for us. Here you had malware potentially killing people and that was something that was Hollywood-esque to us; that we’d always laugh at, when people made that kind of assertion.’ To demonstrate, Symantec specialists conducted a simple experiment in which they infected a Programmable Logic Controller (PLC) – the main computer control unit of most facility control systems – with the StuxNet worm. Under normal conditions, the PLC was programmed to inflate a balloon and stop after five seconds. Once infected with StuxNet, however, the PLC ignored the command to stop and the balloon burst after being continuously filled with air. Through this simple experiment, the specialists (and Gibney) revealed the devastating potential of vulnerable computer systems that control our national critical infrastructure or dangerous facilities such as Natanz.
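
The logic of that demonstration can be captured in a few lines of Python. The sketch below is purely illustrative – it is not Symantec’s test code, and the tick counts and pressure values are invented – but it shows how a compromised controller can simply skip the safety check its operator relies on.

    def run_plc(stop_after_ticks: int = 5, burst_pressure: float = 12.0,
                infected: bool = False) -> str:
        # Toy model of the balloon demonstration: a control loop that should stop
        # inflating after `stop_after_ticks`; the 'infected' variant skips that check.
        pressure = 0.0
        for tick in range(1, 100):
            if tick > stop_after_ticks and not infected:
                return f"stopped safely at pressure {pressure}"  # healthy firmware obeys its stop condition
            pressure += 1.0                                      # one more pump of air into the balloon
            if pressure >= burst_pressure:
                return f"balloon burst at tick {tick}"           # the physical system fails
        return "loop ended"

    print(run_plc(infected=False))  # stops safely after five ticks
    print(run_plc(infected=True))   # the stop command never takes effect and the balloon bursts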

Second, the NSA employees who decided to talk to Gibney revealed whom the US cyber intelligence community recruits and, more importantly, their capacity to create digital techniques for intelligence gathering – or, in the case of StuxNet, for strike purposes. Cybersecurity specialists analysing the StuxNet code discovered older versions that were focused on data collection; only in later versions did more offensive objectives become apparent within the code. According to the NSA employees who came forward, this shift in the code was the work of the Israeli foreign intelligence service (Mossad) and not the American agencies. Regardless, Zero Days does an excellent job of revealing the highly adaptive nature of cyber ordnance.

The United States National Security Agency (NSA) headquarters at Fort Meade, Maryland, where information technology experts at the Cyber Command unit (USCYBERCOM), established in 2009, developed the multiple versions of the StuxNet worm.

However, to security academics this documentary suffers from several limitations that undermine its credibility. Two of its main limitations are: i) an over-centralisation on investigative attribution; and ii) an inherently negative portrayal of government personnel and activity.

First, as mentioned earlier, the documentary is at its core a journey of cyber-attribution – much akin to the work of investigative journalist David Sanger. To see this, consider the structure of the documentary. It begins by discussing the cybersecurity incident, how the worm was found, and how it baffled cybersecurity specialists. It then explains the geopolitical and security tensions between the US, Israel and Iran, as well as the American position on Iran’s nuclear capabilities. Next it moves to the technical and security domains, explaining the infrastructure of American and Israeli cyber-intelligence capabilities and operations. Finally, Gibney asks harder questions about implications and opinions during his interviews with American intelligence, security and military subjects. Obviously, for reasons of national security and secrecy, these could not be answered. It would appear that Gibney asks these questions to highlight his frustration with the lack of transparency within the security sector. Throughout the latter part of the documentary he supplements various claims with an informal interview with the NSA employees, using Joanne Tucker as an avatar. To the general public, this documentary is undoubtedly an interesting journey of exploration and revelation about American and Israeli cyber capabilities. But while it highlights several concerns afflicting cybersecurity specialists in the government and industrial sectors, it quickly narrows its attributive direction towards the United States and Israel, leaving little room for alternative arguments.

Second, to security specialists this documentary leaves out several key considerations, such as the crucial importance of effective intelligence collection and pre-emptive strike capabilities for national security. During interviews, government leaders either explained the structure of their national intelligence agencies and capabilities or discussed how certain operations were transferred between presidents – StuxNet was known within the American government as ‘Olympic Games’. As such, government interviewees played only an informative role, participating in few discussions. Another comment concerns the NSA employees who decided to be vocal. Playing devil’s advocate, certain questions about credibility and accuracy can be raised: How do we know these were really NSA employees from its cyber divisions? Do we know whether they spoke because they wanted to, or because they were instructed to? A significant amount of blame was placed on Mossad for ‘weaponising’ the StuxNet code when the Americans supposedly wanted to use it solely for intelligence collection. Within the realm of intelligence, this sounds more like disinformation than truth. To some civil servants from security or intelligence backgrounds, the documentary portrays such government operations in a negative light and promotes transparency with little regard for its ramifications. Sometimes, knowing the ‘truth’ might do more harm than good.

Zero Days is an excellent documentary and investigative source of information that raises awareness of cybersecurity issues and their importance in our modernised era. First, its innovative and effective use of animation, coupled with the strategic use of interviewees from varied backgrounds, lends it credibility and persuasiveness when discussing StuxNet. Second, it increases awareness of how vulnerabilities in digital and hardware systems can have significantly harmful consequences. However, in its quest for transparency behind government intelligence operations, Zero Days promotes a dangerous notion. Operational secrecy is not inherently negative; it is sometimes vital for national security. The ubiquitous nature of cyberspace, like Pandora’s Box, opens nations to a new dimension of threats that cannot be defended against as easily as those in the air, on land or at sea, and increased transparency can do much more harm. Regardless of your position on the motives behind Zero Days, it remains an excellent documentary for raising cybersecurity awareness.

Zero Days (2016) Documentary Trailer:

 

Cheng served as an Armour Officer and Training Instructor at the Armour Training Institute (ATI) in the Singapore Armed Forces (SAF) and now possesses reservist status. His master’s research revolves around security considerations within the Asia-Pacific region, specifically in the areas of cybersecurity, maritime security and intelligence studies. His graduate thesis explores the characteristics and trends defining China’s emerging cybersecurity and cyberwarfare capabilities. He participated in the April 2016 9/12 Cyber Student Challenge in Geneva and was published in IHS Jane’s Intelligence Review in May 2016. You can follow him on Twitter @LK_Cheng

 

Notes:

Bradshaw, P. ‘Zero Days review – a disturbing portrait of malware as the future of war’, The Guardian, Available from: https://www.theguardian.com/film/2016/feb/17/zero-days-review-malware-cyberwar-berlin-film-festival, (17 Feb 2016).

Gibney, A. ‘Director Profile’, JigSaw Productions, Available from: http://www.jigsawprods.com/alex-gibney/ (Accessed October 2016).

Internationale Filmfestspiele Berlin 2016, Film File: Zero Days (Competition), Available from: https://www.berlinale.de/en/archiv/jahresarchive/2016/02_programm_2016/02_Filmdatenblatt_2016_201608480.php#tab=filmStills (2016)

Langner, R. ‘Cracking Stuxnet, a 21st-century cyber weapon’, TEDTalk, Available from: https://www.ted.com/talks/ralph_langner_cracking_stuxnet_a_21st_century_cyberweapon/transcript?language=en, (Mar 2011)

Lewis, J.A. ‘In Defense of Stuxnet’, Military and Strategic Affairs, 4(3), Dec 2012, pp.65 – 76.

Macaulay, S. ‘Wrong Turn’, FilmMaker, Available from: http://www.filmmakermagazine.com/archives/issues/winter2008/taxi.php#.V-A8_Tvouu5, (2008).

Scott, A.O. ‘Those You Love to Hate: A Look at the Mighty Laid Low’, The New York Times, Available from: http://www.nytimes.com/2005/04/22/movies/those-you-love-to-hate-a-look-at-the-mighty-laid-low.html?_r=1, (Apr 22 2005).

Image Source (1): https://i.ytimg.com/vi/GlC_1gZfuuU/maxresdefault.jpg

Image Source (2): https://upload.wikimedia.org/wikipedia/commons/8/82/SIMATIC_different_equipment.JPG

Image Source (3): https://upload.wikimedia.org/wikipedia/commons/8/84/National_Security_Agency_headquarters,_Fort_Meade,_Maryland.jpg

 

 

Filed Under: Film Review Tagged With: Cybersecurity, Cyberwar, feature, Iran, Israel, National Security Agency, nuclear, Stuxnet

‘Authentication – Crypto-Wars’ new frontline

August 1, 2016 by Yuji Develle

By: Yuji Develle

Image credit: https://netzpolitik.org/wp-upload/23390123_b6caaefc16_o.jpg

9 February 2016: the FBI requested that Apple unlock an iPhone belonging to a suspect in the San Bernardino terror shootings. Given until 26 February to respond, Apple flatly refused. So began a drawn-out legal battle and an ongoing public debate over the merits of encryption, pitting the national security community and the tech world against each other. Captains of industry and five-star generals faced off in fiery declarations. As the FBI hired Japan-based Sun Corp to unlock the iPhone for close to $1 million, WhatsApp (April 5th) and Viber (April 18th) raced to complete end-to-end encryption roll-outs on their products. Across the pond, just last week the second reading of the Investigatory Powers Bill was discussed in the House of Lords – a Bill that appears just steps away from authorising state-sanctioned “equipment interference”. A new chapter in the Crypto-Wars has begun, and it is reaching a new zenith.

Most “battles” in this Crypto-War occur in the legal and policy spheres. This is primarily due to the requirement that intelligence services and law enforcement request the right to access the encrypted data of individuals in specific cases (lawful intercept). Lawful intercept has been a hallmark of the telecoms industry for decades, as network managers were compelled by law to provide data that might help with criminal investigations. As made apparent by the public uproar over the ‘Regulation of Investigatory Powers Act (RIPA 2000)’ and the recent ‘Investigatory Powers Bill’ (dubbed the ‘Snooper’s Charter’), many policy-makers are striving to create greater legal leeway for intelligence and law enforcement. Meanwhile, academics such as Thomas Rid (Rise of the Machines) of the War Studies Department at King’s College London have discussed the place of encryption in society’s moral compass and whether such leeway is morally justifiable. Law, policy and academia interact reflexively in a constantly shifting Crypto-War landscape.

One aspect of this conflict is certain, however: both the national security establishment and the tech world are developing surveillance and encryption technologies far faster than laws or policies can keep pace. Just as Daniel Moore’s and Thomas Rid’s Cryptopolitik and the Darknet exposed the critical chasm between Westminster’s understanding of the Darknet and real traffic trends, the technologies driving encryption out-pace the laws sanctioning “equipment interference”. These technologies cover a variety of areas, from F-Secure’s Freedome (better VPNs) to Silent Circle’s head-to-toe phone encryption, but appear most notably in the field of web authentication.

The very fabric of the internet hinges on the idea of trust. Without trust, it would be impossible to be certain that a file from Mr. Smith actually comes from Mr. Smith. E-commerce, e-banking and in particular e-voting rely on the trust of both their users and their servers to function properly. One major structure in charge of maintaining this trust is web authentication: the structures of authentication and certification in place to make sure, for instance, that a certain ‘Mr. Smith’ is actually who he says he is. One system, the Public Key Infrastructure (PKI), has dominated this space since the Internet’s humble beginnings.

The Public Key Infrastructure is a centralised model for assigning a certain number (or key) to each individual machine attempting to gain access to a given server on the internet. If Alice wishes to access a server, she will be put through a multi-step process before gaining access (a toy sketch of this flow follows the list below):

  1. A Registering Authority (RA) notes down Alice’s Public Key (her unique credentials).
  2. A Certificate Authority (CA) records the Public Key in a Central Directory.
  3. The CA issues a certificate based on Alice’s Public Key; this certificate is Alice’s digital signature.
  4. This signature is matched with the server’s Private Key to grant access.
  5. The signature is verified again by a Validation Authority, in charge of double-checking the validity of digital signatures/certificates.
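
The toy Python sketch below walks through that flow; every name and data structure here is invented for illustration and does not correspond to any real certificate authority’s interface.

    import hashlib

    CENTRAL_DIRECTORY = {}   # the CA's store of registered public keys
    ISSUED_CERTS = {}        # certificates the CA has issued

    def register(name: str, public_key: str) -> None:
        # Steps 1-2: the RA notes the public key; the CA records it in the central directory.
        CENTRAL_DIRECTORY[name] = public_key

    def issue_certificate(name: str) -> str:
        # Step 3: the CA derives a certificate (the 'digital signature') from the stored key.
        cert = hashlib.sha256(f"{name}:{CENTRAL_DIRECTORY[name]}".encode()).hexdigest()
        ISSUED_CERTS[name] = cert
        return cert

    def validate(name: str, presented_cert: str) -> bool:
        # Steps 4-5: the server and the Validation Authority re-check the certificate
        # against what the CA recorded before granting access.
        return ISSUED_CERTS.get(name) == presented_cert

    register("alice", public_key="alice-public-key-material")
    cert = issue_certificate("alice")
    print(validate("alice", cert))                  # True: access granted
    print(validate("alice", "forged-certificate"))  # False: access denied

Note that the two dictionaries stand in for the ‘Central Directory’ – a single, attractive target – which is precisely the weakness discussed next.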

It quickly becomes apparent how such a system can lead to serious vulnerabilities. The Public Key Infrastructure is a chain of events that relies on the integrity of the initial Public Key, and on the reliable recording of this key at each following step. Alice’s identity on the internet is directly bound to her key. Because of this, after being registered by RAs, Public Keys are stored by CAs in “Central Directories”. The PKI paradigm relies on storing this type of identification information in supposedly “air-tight” info-caches.

Much like keeping a list of username and password pairings in an office drawer, “Central Directories” are inherently dangerous and have been the cause of some of the largest security breaches in web history (see the 2011 DigiNotar breach). The repeated communication between different steps of the PKI also means that “replay” attacks – in which a hacker eavesdrops until they are able to replicate a given communication or operation – are easier to undertake. Moreover, governments have worked with companies to issue fake certificates for sanctioned spyware and malware; one example being Gogo Inflight Internet’s alleged use of fake Google certificates, sanctioned by the FCC in a publicly available letter. The top 5 Certificate Authorities are all based in the United States - food for thought!

In light of these bedrock vulnerabilities, the tech world has been busy. The Web of Trust model gives each network the freedom to gradually accumulate its own list of “trusted introducers”, or trusted users, placed on a white-list. The idea is that the more white-list endorsements a user accumulates, the more authentic that user is considered. This circumvents the need to pass through CAs hundreds of times, as is usually the case with any given web application. The most innovative approach, however, is the Distributed Trust model.

A Distributed Trust Infrastructural Model

In a Distributed Trust model, Alice would, for instance, only have to supply two different pieces of information (Step 1: multi-factor authentication) – a pin code and the fact that she is using Google Chrome – before being assigned a “Unique Cryptographic Authentication Key” (Step 2) and thus accessing the server. Alice does not have to surrender any passwords, keys or personal information to any “Central Directory” to be identified and authenticated. To prove that Alice’s pin and browser type are correct, the information is matched against two or more partial key-holders (called “Trusted Authorities”, or TAs). The TAs constitute a block-chain of key-parts that together form the key. At no time does any single TA have access to the full key, nor does any information get stored in any registry. Every authentication key is unique.
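
A minimal sketch of the central property – that no single Trusted Authority ever holds the complete key – is given below. It uses a simple XOR split of the key purely for illustration; a production system would use a proper threshold secret-sharing scheme, and all names here are invented.

    import os
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key: bytes, holders: int) -> list:
        # Split a key into `holders` shares whose XOR reconstructs it.
        # No single share (no single Trusted Authority) reveals the key.
        shares = [os.urandom(len(key)) for _ in range(holders - 1)]
        shares.append(reduce(xor_bytes, shares, key))
        return shares

    def reconstruct(shares: list) -> bytes:
        # Only the combination of every share yields the authentication key.
        return reduce(xor_bytes, shares)

    session_key = os.urandom(16)                   # a fresh, unique key per authentication
    ta_shares = split_key(session_key, holders=3)  # distributed across three TAs
    assert reconstruct(ta_shares) == session_key
    print(any(share == session_key for share in ta_shares))  # False: no TA holds the full key

Because each share is random noise on its own, compromising any single TA (or any single registry) yields nothing useful – the property the Distributed Trust model relies on.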

The Distributed Trust model eliminates two of the most damaging sources of cyber breaches: password-related breaches and ‘man-in-the-middle’ attacks. Without any directories to poach information from, it near-eliminates the possibility of ID theft (think of the 2014 OPM hack, last April’s Mexican voter breach, or the LinkedIn breach). More relevant to the Crypto-Wars, this technology prevents a common method by which government agencies – including intelligence services – have implanted spyware into social-media, e-mail and banking apps. At the same time, Distributed Trust would protect government servers from sophisticated attacks. Eliminating the need for passwords and public-key registries makes web security far closer to air-tight.

In light of the resurgence of the Crypto-Wars in public debate, the fate of Distributed Trust hangs in the balance. Should governments make headway in adopting Distributed Trust, it would limit the ability of opposition parties (in some countries) and hacktivists to penetrate public servers. And while widespread private-sector adoption would lead to a much more secure internet, it would remove many of the spy tools available to law enforcement and intelligence services – the very tools now being made legal under electronic surveillance legislation.

Yuji Develle is an Undergraduate Representative and Editor for Strife Blog. A French and Japanese War Studies graduate, he is currently working for a London start-up specialised in cryptography. His interests lie in cybersecurity, energy security and other emerging security issues.

Filed Under: Blog Article Tagged With: Cybersecurity, Encryption, feature, internet

Cyber risks to governance, Part III: Hyper-connectivity and its impact on state power

August 31, 2015 by Strife Staff

By: Christy Quinn

In an era of Snowden, Wikileaks, the Dark Web and data breaches, there have never been so many cyber risks associated with governance. This article is the third of a three-part Strife series examining three diverse aspects of cyber risks to governance. Andreas Haggman began by looking at Silk Road and its transformation of the online marketplace. Yuji Develle and Jackson Webster then examined cyber attribution in policymaking, and finally Strife editor Christy Quinn examines the implications of hyper-connectivity.

A 2011 study by the technology company Cisco predicted that by 2020 over 50 billion devices would be connected via the internet.[1] Relatively little research has been done into the impact of such a huge growth in networking technologies upon the power and shape of the state. The first boom in global communication technologies, triggered by the invention of the telegraph in 1837 and the spread of railways, had a transformative impact on the state’s ability to maintain social control and wage war. The telegraph massively increased the speed of information exchange between cities and outposts connected by railway routes. While this allowed for the rapid mobilisation and transportation of troops, it also sped up the spread of ideas and ideologies between urban centres, threatening states’ capabilities for censorship and for curtailing the spread of revolutionary movements. During the European ‘Springtime of the Peoples’ revolutions of 1848, news of the successful uprisings of anti-monarchist revolutionaries spread like wildfire through the major cities, inspiring a plethora of local efforts to capitalise on long-held feelings of disenfranchisement.[2]

While the development and expansion of network and communication technologies in the 19th century had a relatively limited direct impact on the average citizen in Europe, the rapid expansion of 21st-century network technologies threatens to upturn the relationship between state and citizen on a much more fundamental level. The growth and increased density of digital networks through the internet, coupled with the average citizen’s access to many different forms of communication technology, is producing societal ‘hyper-connectivity’: speed and quantity of communication coupled with complex many-to-many socio-technical networks. These networks can empower citizens by providing them with vast quantities of free information and the ability to expand their social relationships far beyond their physical limitations. The huge increases in the volume of information and communications exchanged between citizens have also been seen by policymakers as a threat to their capability to monitor society for threats to national security.[3]

What is important to note is that these huge changes in power relationships brought by hyper-connectivity empower all sectors of society. The same effects that allow farmers in rural China to access weather forecasts or micro-finance for their crops allow political extremists to organise remotely and propel their political message across huge distances. Thomas Rid and Marc Hecker have suggested that these network effects are particularly useful for militant extremists on the fringes of society and political debate. By using network communication technologies to organise and attract new followers, extremists can self-organise and maintain their own distinct political space without having to attract followers from wider society.[4] Just as ‘bronies’ – a subculture of mostly young men who are committed followers of the ‘My Little Pony’ TV series – can have an outsized cultural impact despite their esoteric tastes, militant jihadists can dictate the terms of politics through selective violent interventions. As a result, hyper-connectivity challenges the power of the state to dictate political values to society.

The complexity and unpredictability of a hyper-connected society also pose challenges to state power. State power reflects in part the ability of the state to respond quickly to societal developments and threats to its sovereignty. Post-structural theorist Paul Virilio has argued that the processes of urbanisation in European medieval societies forced states to adapt their means of enforcing sovereignty, moving away from simply building walls around their holdings and instead increasing the speed and manoeuvrability of their military forces.[5] Hyper-connectivity poses further challenges in time and space: challenges to state power can emerge at any point within societal networks. For example, an investigation by cyber security firm TrapX found that medical devices in a hospital had been implanted with malware designed to steal patient records.[6] The implantation of network technologies into every facet of life has brought security vulnerabilities that can be exploited by malicious actors, creating new spaces to contest state controls on the spread of information.

One of the most significant network technologies at the centre of the growth in information exchange is encryption. Encryption, and the public key infrastructure (PKI) that supports it, is central to a hyper-connected society, giving economic transactions such as online commerce – as well as personal communications – guarantees of security. However, the huge growth in the volume of encrypted communications, particularly since the Snowden revelations of 2013, also has the power to disrupt the sovereignty of states.[7] The growth of crypto-currencies such as Bitcoin, which circumvent traditional banking systems and are extremely difficult to track because they rely on complex cryptographic ‘blockchains’, poses a direct challenge to the state’s ability to regulate and control economic activity within its own sovereign territory. Hidden services offered through Tor encrypted networks, such as the Silk Road drugs market, demonstrate the potential of these technologies to challenge the state’s ability to enforce moral and legal codes in the economy.
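
For readers unfamiliar with the term, the following is a deliberately simplified Python illustration of the ‘blockchain’ idea: each block commits to a hash of the previous block, so tampering with past transactions invalidates everything that follows. Bitcoin’s real data structures, signatures and proof-of-work are far more involved, and the transactions below are invented.

    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain: list, transactions: list) -> None:
        # Each new block commits to the hash of the previous block.
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def verify(chain: list) -> bool:
        # The chain is valid only if every block still matches its successor's prev_hash.
        return all(chain[i + 1]["prev_hash"] == block_hash(chain[i])
                   for i in range(len(chain) - 1))

    chain = []
    add_block(chain, ["alice pays bob 1 coin"])
    add_block(chain, ["bob pays carol 0.5 coin"])
    print(verify(chain))                                      # True
    chain[0]["transactions"] = ["alice pays mallory 1 coin"]  # tamper with history
    print(verify(chain))                                      # False: later blocks no longer match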

Clearly, a hyper-connected society poses huge challenges for the bureaucratic modern state. The difficulties experienced by state security services and law enforcement in tackling Islamic State (ISIL) online recruitment and the rapid development of cyber crime networks are ultimately just the visible tip of the iceberg. The by-products of hyper-connectivity – huge increases in the volume of information flows, increasing levels of highly encrypted communications and new societal behaviours such as cyber stalking – all threaten major social upheavals over the next few decades. States such as Russia and Iran are seeking to limit the connective capacity of their own citizens through the creation of ‘sovereign internets’ that can be controlled and separated from global networks at will. A more promising approach is to increase the reactive capacity of the state by adapting to this new reality. Whether policymakers like it or not, we now live in a hyper-connected society, and it is time to consider how a hyper-connected state could work with it.

Christy Quinn studied International History at the London School of Economics & Political Science and is currently reading for an MA in Intelligence & International Security at King’s College London. His research interests are cyber security, national security strategy and the Asia-Pacific region. He is a Guest Editor at Strife. Follow him on Twitter @ChristyQuinn.

[1] Dave Evans, “The Internet of Things: How the Next Evolution of the Internet Is Changing Everything,” (Cisco, 2011).

[2] Mike Rapport David McKeever, “Technology and the Revolutions of 1848 and 2011: How Technology Can Work Towards Catalyzing Popular Revolutions,” Konrad-Adenauer-Stiftung, http://www.kas.de/brasilien/en/publications/34903/.

[3] “Access to Communications Data by the Intelligence and Security Agencies,” (Intelligence and Security Committee, 2013).

[4] Thomas Rid and Marc Hecker, “The Terror Fringe,” Policy Review, no. 158 (2009).

[5] John Armitage, “Beyond Postmodernism? Paul Virilio’s Hypermodern Cultural Theory,” Ctheory 90, no. 1 (2000).

[6] Kelly Jackson Higgins, “Hospital Medical Devices Used as Weapons in Cyberattacks,” Darkreading, http://www.darkreading.com/vulnerabilities—threats/hospital-medical-devices-used-as-weapons-in-cyberattacks/d/d-id/1320751.

[7] Patrick Howell O’Neill, “The State of Encryption Tools, 2 Years after Snowden Leaks,” The Daily Dot, http://www.dailydot.com/politics/encryption-since-snowden-trending-up/.

Filed Under: Blog Article Tagged With: cyber, Cybersecurity, governance

Cyber risks to governance, Part II – The Attribution Game: the challenges and opportunities of cyber attribution in policymaking

August 28, 2015 by Strife Staff

By Yuji Develle and Jackson Webster:


In an era of Snowden, Wikileaks, the Dark Web and data breaches, there have never been so many cyber risks associated with governance. This article is the second of a three-part Strife series examining three diverse aspects of cyber risks to governance. Last week Andreas Haggman began by looking at Silk Road and its transformation of the online marketplace. This week, Yuji Develle and Jackson Webster examine cyber attribution in policymaking, and next Strife editor Christy Quinn will examine the implications of hyper-connectivity.

‘Human lives and the security of the state may depend on ascribing agency to an agent. In the context of computer network intrusions, attribution is commonly seen as one of the most intractable technical problems… as dependent mainly on available forensic evidence.’ – ‘Attributing Cyber Attacks’, Prof. Thomas Rid & Ben Buchanan

The question of “whodunnit?” dominates all efforts from the crime scene to the court of law; a case can only be considered solved when the culprit of the crime has been identified and convicted. In the era of DNA identification and video monitoring, this strict guilty-versus-innocent divide poses little issue in the physical realm, where an excellent standard of criminal investigation can be observed in most developed countries.

This vision is nevertheless out of touch with the reality of the attribution process in cyberspace. While forensic evidence can be acquired – ‘Indicators of Compromise’ (IP addresses, domain names, etc.) and unique attack signatures (patterns of behaviour, malware utilised, etc.) – it is extremely difficult for experts to identify any one set of culprits without significant risk. Sophisticated cyber attacks are typically designed to cloak the identities of their designers and to keep the target from realising the true extent of the damage incurred until it is too late, often infecting IT networks without visible effect for months after the network intrusions were made. This lag allows infections to blend into the crowd of internet traffic before the attack by displaying ordinarily innocent patterns of behaviour. For example, cyber security firm FireEye’s investigation of Operation Poisoned Hurricane in 2014 detailed how malware attempted to infiltrate the networks of several Asia-based internet service providers and other private businesses by disguising itself as routine internet traffic bearing genuine digital certificates. Since the extent of the damage of cyber attacks is unknown, hidden or unforeseen, ‘digital crime scenes’ cannot be investigated in the same vacuum that forensic experts enjoy in the tangible sphere.
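
As a concrete, deliberately simplified illustration of what working with ‘Indicators of Compromise’ can look like in practice, the Python sketch below scans log lines for known-bad IP addresses and domains; all indicator values and log entries are invented. A match is evidence of compromise, not proof of who was responsible – which is exactly the gap the attribution debate turns on.

    # Indicator values and log lines below are invented purely for illustration.
    KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
    KNOWN_BAD_DOMAINS = {"update-check.bad-domain.example"}

    def match_iocs(log_lines: list) -> list:
        # Return log lines containing a known-bad IP or domain. A hit suggests
        # compromise; it does not, by itself, identify who was behind the attack.
        hits = []
        for line in log_lines:
            if any(ip in line for ip in KNOWN_BAD_IPS) or \
               any(domain in line for domain in KNOWN_BAD_DOMAINS):
                hits.append(line)
        return hits

    sample_log = [
        "2015-08-01 12:00:01 OUT 10.0.0.5 -> 203.0.113.7:443",
        "2015-08-01 12:00:02 DNS query update-check.bad-domain.example",
        "2015-08-01 12:00:03 OUT 10.0.0.5 -> 93.184.216.34:80",
    ]
    for hit in match_iocs(sample_log):
        print("IoC hit:", hit)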

The problems of attribution are both what makes cyber such an enticing realm for would-be attackers and such a problematic issue for statesmen. Extending these issues into the context of international relations, it is easy to see how incorrect attribution can cause a cascade of undue escalation and insult by the accusing party. Tracing a given attack to a server or network of servers within a state does not clearly implicate that state’s government as the perpetrator, nor does it mean that the state is passively complicit or even aware of the attacks being launched. Individuals and small groups are perfectly capable of launching major cyber attacks, as the computer is the ultimate force multiplier, and IP addresses can easily be ‘spoofed’, or bounced endlessly around the globe through proxies, to confuse solid attribution.

Many policymakers may be willing to make logical jumps in the attribution process because of its inherent lack of clarity. The lack of certainty surrounding cyber attack attribution allows statesmen to blame geopolitical adversaries for attacks; no one is standing in the room pointing a smoking gun at the targeted computer. Furthermore, ‘militant’ cyber actors are not necessarily associated with a state, and governments can easily distance themselves from inconveniently uncovered hacking groups they covertly support. For example, were an attack akin to the 2012 Shamoon attacks on the Saudi state oil firm Aramco, perpetrated by the Shia-affiliated ‘Cutting Sword of Justice’, to happen again, Iran would inevitably be blamed for tacit complicity if not direct involvement, regardless of its actual agency in the attacks. Attribution in this circumstance is concerned not with technical evidence of guilt, but with the Saudi government’s foreign policy narrative that Iran is behind all seditious actions in the region, from chemical weapons in Syria to Shi’ite militia atrocities in Iraq to the Houthi movement in Yemen.

On the other hand, intentional misattribution – in the form of scapegoating non-state actors – presents a convenient tool for statesmen in some circumstances. Offence is at a massive advantage in cyber: when securing a network from attack, one must ensure the constant safety of every single system on the defended network, whereas an attacker need only compromise one server or device to gain access. The logical threshold for the use of force in cyberspace is thus low. This incentive towards offensive action is amplified by the fact that statesmen can easily pass off responsibility and liability onto non-state actors, such as so-called ‘hacktivists’, from whom they can disassociate state intelligence agencies and militaries. ‘Hacktivists’ represent both an easy scapegoat for aggressor states and a convenient culprit for victim states because, as a WIRED article pointed out last year, ‘[hacktivists’] geopolitical interests and motives often jibe with a state’s interests.’ Cyber is not simply a revolutionary gimmick to be dealt with by niche experts and private corporations. Just as the airplane quickly went from being invented to being a crucial part of national defence, commerce and transportation, states are already realising that the political utility the Internet provides is central to the execution of policy. Cyber is both damaging and useful to states’ national interests, but it cannot be ignored, as its uses and effects are clearly set to increase, not decrease.

The inaccuracy of cyber attribution has also led to a growing mistrust of the public sector. Some corporate actors have even sought help from private contractors that hire ex-hackers to conduct retaliatory attacks on their behalf. This lack of confidence in the state’s ability to perform its most basic security duties is a threat to the very raison d’être of law enforcement, and it reduces the state’s ability to control its response in the face of potentially politically damaging cyber attacks. Furthermore, as Thomas Rid notes, states retain an advantage when it comes to conducting investigations, as they, unlike private companies, often have the mandate to collect from a wider scope of information, covertly or otherwise. The outsourcing of cyber security dulls the credibility and efficiency of a state’s response to cyber attacks.

Ultimately, attribution is what the actor makes of it. Avoiding ‘attribution fixation’ – an obsession with ascribing agency to an actor – will be essential to how successfully governments and companies can use cyberspace as a means to their ends. Attribution can be a tool for geopolitical advancement, a technical obstacle to overcome, or a damaging libel risk for states with active domestic hacking communities. Cyberspace cannot be viewed as a problem, nor as a solution. It is an operational space like any other, though currently popularly misunderstood and lacking the regulations and norms of kinetic battle spaces.

Yuji Develle is a French and Japanese student reading a B.A. (Hons) in War Studies, with a strong interest in cybersecurity and a Russia & CIS regional specialisation.

Jackson Webster, a native of Manhattan Beach, California, is reading a degree in International Relations at the King’s College London Department of War Studies, with a specialisation in the politics of the Middle East and a strong interest in multilateral security practices.

Filed Under: Blog Article Tagged With: cyber, Cyber Security, Cybersecurity, hacking
