
Strife

The Academic Blog of the Department of War Studies, King's College London

Artificial Intelligence

Can Artificial Intelligence shape the way we conduct war in the future?

July 19, 2021 by Arnaud Sobrero

Photo by Hitesh Choudhary on Unsplash

The country that leads in artificial intelligence (AI) development ‘will be the ruler of the world’. Those are the words of Vladimir Putin, whose government was indirectly involved in the 2020 conventional war between Armenian and Azeri forces for control of the Nagorno-Karabakh region. That conflict saw drones used in an unprecedented way. In addition to weaponized drones, swarm tactics are disrupting conventional warfare, and drones are being integrated with fifth- and sixth-generation fighter aircraft. The common denominator across these recent technological developments is the emergence and implementation of artificial intelligence, which is shaping the way wars will be conducted in the future.

Artificial intelligence, or more precisely machine learning, can provide a tactical advantage on the battlefield. As demonstrated by the recent conflict between Armenia and Azerbaijan over the Nagorno-Karabakh region, weaponized drones can offer a significant edge. Azerbaijan, supported by the Turkish military, deployed a large fleet of Unmanned Aerial Vehicles (UAVs) with increasing autonomy and surveillance capabilities. The deployment of these drones, such as the Turkish TB2 unmanned combat aerial vehicle (UCAV), had a substantially disruptive impact on the battlefield: Azeri forces were able to destroy 47% of Armenia’s combat vehicles and 93% of its artillery. This is a significant breakthrough in conventional warfare, as low-cost drones can deliver robust air power while successfully disrupting the enemy’s air defence systems. The implications for the future of warfare are ‘game-changing’, according to UK Defence Secretary Ben Wallace.

As an extension of weaponized UAVs, drone swarms also have the potential to disrupt conventional warfare. An AI-powered, fully autonomous drone swarm would combine mass, firepower and speed, and could overwhelm the enemy’s defensive systems through coordinated, synchronized attacks. The potential use of swarm tactics has generated anxiety among top Western defence officials. General John M. Murray, head of the United States Army Futures Command (AFC), has expressed concern that humans may not be able to adequately address the challenges posed by emerging drone swarm threats. General Murray posits that an AI engine would ultimately be better equipped than a human to counter swarm attacks, since a human would not be able to keep up.

Furthermore, AI-enabled unmanned underwater vehicle (UUV) swarm systems are also being tested to track and potentially damage submarines, forcing them to lose their stealth characteristics vital for their survival and strategic nuclear deterrence. In addition, Chinese UUV swarms could potentially take out an entire aircraft carrier group in the near future, as argued by Franz-Stefan Gady in a hypothetical scenario published in the International Institute for Strategic Studies’ 2021 Regional Security Assessment.

Machine learning could also revolutionize warfare by integrating manned and unmanned military systems. The best-known example of this application is the ‘loyal wingman’ concept, embodied by aircraft such as the Kratos XQ-58A. The U.S. Air Force is working on an AI application, dubbed Skyborg, that will allow the autonomous operation of drones. In line with the 2018 United States Air Force (USAF) Artificial Intelligence Strategy, the Skyborg program aims to integrate this system into various unmanned platforms, allowing them to operate alongside fourth- and fifth-generation fighter aircraft like the Boeing F-15EX or the Lockheed Martin F-35 and to take on missions too risky for human pilots. The immediate benefit of an AI-enabled system is faster decision-making, improving aerial combat manoeuvres and weapons employment by creating a virtual ‘co-pilot.’ The program received a boost with its first successful flight in April 2021, moving the USAF one step closer to fielding an uncrewed ‘loyal wingman’ for human pilots.

The U.S. is not the only country working on AI-enabled systems to enhance its air force capabilities: the Okhotnik-B, the Russian equivalent of a loyal wingman, is an upcoming sixth-generation heavy stealth drone designed to fly alongside the fifth-generation Su-57 fighter. Fully autonomous, the Okhotnik-B would be able to track multiple targets while accompanying the Su-57.

However, not all of these technological developments have been widely adopted yet. Within NATO, there is a reluctance to deploy such AI systems due to a lack of trust. Giving complete control of a weapon system to an AI-enabled machine raises moral and ethical questions with significant political ramifications in NATO countries like France and Germany. For now, the use of AI-enabled systems in NATO countries is likely to be confined to improving logistics and performing predictive maintenance.

Countries with fewer ethical constraints are currently increasing their capabilities in AI-driven UAV systems. Lagging in that domain, NATO countries may find themselves at a disadvantage against traditional and emerging threats such as Russia and China. NATO needs to assess what these advances in artificial intelligence mean for warfare and develop a realistic, strategic approach to incorporating them into existing military systems.

The data explosion and the emergence of new AI-powered systems are likely to change how wars are conducted in the 21st century, as illustrated by the Nagorno-Karabakh war and recent technological developments.

Filed Under: Blog Article, Feature Tagged With: Arnaud Sobrero, Artificial Intelligence, drones

Offensive Cyber Series: Amy Ertan on AI and Military Innovation, Part II

June 18, 2021 by Ed Stacey and Amy Ertan

A soldier operates the remote-controlled Mark 8 Wheelbarrow counter-IED robot. Photo Credit: UK Ministry of Defence, licensed under Creative Commons

This is part II of Ed Stacey’s interview with Amy Ertan on AI and military innovation for Strife’s Offensive Cyber Series. You can find part I here.


ES: I feel like there is a whole interview to be had on this idea of an AI arms race, especially with some of the signals from governments about the importance of these technologies.

AE: We talk about an AI arms race, but actually the number of countries that have the resources to invest in this is really small. The US, of course, is investing billions and billions, and they have their Joint Artificial Intelligence Center which is coordinating AI, including AI for use in cyberspace. The UK invests a huge amount as well and so do a few other states within Europe, for example France. But for the majority of states, say across NATO, AI in conflict is not something that is currently top of the agenda: it is something that is discussed at the strategic level and people know that it will hit and have an impact in 20 to 30 years’ time. So we are seeing that strategic discussion, but it costs so much that it is just a matter of states buying solutions from the private sector, so lots of questions there too.

ES: On that note, given the private sector is so important in the development of AI, do you think that the advantage lies with liberal democratic states and their innovative, free-market economies or with authoritarian states that have greater control over private companies, enhancing military-civil fusion? Or alternatively, is that dichotomy a bit of a cliché?

AE: That dichotomy is a bit of a cliché. I will say, though, that the states that do have control and oversight over their industry, China for example, have a significant advantage when it comes to military-civil fusion and access to big data. China places either top or joint top with the US at the moment – I think there is a separate computing race – when it comes to AI. And when you look at conversations, in the US and UK for example, public-private partnerships are a major focus with AI because you need to partner with companies like Microsoft, IBM, Amazon and Google.

The free-market economy is not something I think has an inherent advantage, which sounds strange to say. But there is an interesting aspect in that for a lot of private sector leaders in AI, governments are not their main target market – they do not need to work for them. There is controversy around what they do, for example with Google and Project Maven.

There has been a shift in the way that military innovation takes place over the last half-century or so and the government now has less control over who works with them than before. So public-private partnership is something that states like the UK and US would love to improve on. There are also challenges for government procurement cycles when it comes to technologies like AI because you need a much faster procurement cycle than you do for a tank or a plane. So working with the private sector is going to become increasingly central to Ministry of Defence procurement strategies moving forward.

ES: Your PhD research explores the unforeseen and unintended security consequences of developing and implementing military AI. Could you speak a little to how these consequences might materialise in or through the cyber domain? 

AE: There are two aspects to this: one is the technical security angle and the second is the strategic security angle. In terms of cyber security aspects, first, you have the threat that your AI system itself may not be acting as intended. Now especially when we think about sophisticated machine learning techniques, you often cannot analyse the results because the algorithm is simply too complicated. For example, if you have developed a deep learning model or a neural network, there will potentially be hundreds of thousands of nodes and no “explainability” – you have a “black box” problem as to what the algorithm is doing. That can make it very difficult to detect when something goes wrong and we have seen examples of that in the civilian space, where it has turned out many years after the fact that an algorithm has been racist or sexist. It is a slightly different challenge in the military sphere: it is not so much about bias but rather whether it is picking up the right thing. Obviously, within a conflict environment you do not want to detect a threat where there is not one or miss something.
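
To make the “black box” point concrete, here is a minimal, purely illustrative Python sketch (the network, its weights and the sensitivity probe are all invented for this post, not drawn from any real system discussed in the interview). The raw weights of even a small network are not human-readable, so analysts fall back on indirect probes, such as nudging each input and watching how the score moves, which yields a rough sensitivity ranking rather than a true explanation.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy two-layer network standing in for a much larger "black box".
    # Its ~8,000 weights are individually meaningless to a human reader.
    W1 = rng.normal(size=(64, 128))
    W2 = rng.normal(size=(128, 1))

    def model(x):
        """Score one observation (e.g. a vector of sensor features) in (0, 1)."""
        h = np.tanh(x @ W1)
        return float(1.0 / (1.0 + np.exp(-(h @ W2).item())))

    x = rng.normal(size=64)           # a single observation
    baseline = model(x)

    # Indirect probe: perturb each input feature and see how the score moves.
    # This gives a rough sensitivity ranking, not an explanation of the logic.
    eps = 1e-3
    sensitivity = np.array(
        [(model(x + eps * np.eye(64)[i]) - baseline) / eps for i in range(64)]
    )

    print("score:", round(baseline, 3))
    print("most influential inputs:", np.argsort(-np.abs(sensitivity))[:5])

Nothing in this probe tells an operator whether the model is “picking up the right thing”; it only hints at which inputs drive the score.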

Second, there is the threat that your algorithm or data may be compromised and you would not know. So this could be the input data that you are feeding in or the system itself. For example, you may have a cyber defence algorithm that picks up abnormal activity on your network. A sophisticated attacker could interfere with the programming of that algorithm or tamper with the data so that the algorithm thinks that the attacker has been there all along and, therefore, that it is not abnormal activity and no flags are raised. So threat modelling that fails to consider the creativity of attackers, or the insufficiency of the algorithm itself, could lead to something being deployed that is not fit for purpose.
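
As a purely illustrative sketch of this second threat (the detector, the traffic model and every number below are invented), consider a naive statistical monitor that flags any traffic volume far from its learned baseline. An attacker who can drip their own activity into the window the detector learns from shifts that baseline, so the eventual attack never trips the threshold and no flags are raised.

    import numpy as np

    rng = np.random.default_rng(1)

    def is_anomalous(history, observation, k=3.0):
        """Flag an observation more than k standard deviations from the baseline."""
        mu, sigma = history.mean(), history.std()
        return abs(observation - mu) > k * sigma

    normal_traffic = rng.normal(loc=100.0, scale=5.0, size=500)   # benign volumes
    attack_burst = 160.0                                          # clearly abnormal

    # Trained on clean data, the detector flags the burst.
    print(is_anomalous(normal_traffic, attack_burst))             # True

    # Poisoned training window: the attacker drip-feeds elevated traffic,
    # inflating the learned mean and variance until the burst looks normal.
    poison = rng.normal(loc=150.0, scale=20.0, size=200)
    poisoned_history = np.concatenate([normal_traffic, poison])
    print(is_anomalous(poisoned_history, attack_burst))           # False

Real detectors are far more sophisticated, but the underlying failure mode, a baseline quietly learned from attacker-influenced data, is the same.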

Third, adversarial AI. This is the use of techniques to subvert an AI system, again making something that is deployed fallible. For one perhaps theoretical but technically feasible example, you could deploy an algorithm in cyberspace that would only target certain kinds of infrastructure. Maybe you would want it to not target hospitals, but that could be gamed – everyone could attempt to make their site look like a hospital to the algorithm.
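
A minimal sketch of the “look like a hospital” idea, using a hypothetical linear classifier with invented weights and features: because the model’s decision is just a weighted sum, a small, targeted shift of the input features along the weight direction is enough to push a non-hospital site over the decision boundary.

    import numpy as np

    # Hypothetical linear classifier: a positive score means "hospital".
    # Weights and features are invented purely for illustration.
    w = np.array([2.0, -1.0, 1.5, 0.5])
    b = -3.0

    def looks_like_hospital(x):
        return float(w @ x + b) > 0.0

    site = np.array([1.0, 0.5, 0.8, 0.4])          # an ordinary facility
    print(looks_like_hospital(site))               # False

    # Evasion: step the features along the weight vector, the direction that
    # raises a linear model's score fastest, just far enough to cross zero.
    score = float(w @ site + b)
    step = (abs(score) + 0.1) * w / float(w @ w)
    adversarial_site = site + step
    print(looks_like_hospital(adversarial_site))   # True
    print(round(float(np.linalg.norm(step)), 3))   # the perturbation stays small

Deployed systems are nonlinear, but gradient-based and query-based attacks achieve the same effect against far more complex models.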

Right now, the technology is too immature and we do not have direct explainability. It is also very difficult to know the right level of confidence to have before deploying an AI system and there are questions around oversight. So while technical challenges around explainability and accuracy may be solved through strict verification and validation procedures that will mature in time with AI capabilities, some of these unintended consequences come down to human factors like trust, oversight and responsibility. For example, how do humans know when to override an AI system?

Those societal and policy questions will be tricky and that is what leads you into the strategic debate. For example, what is the appropriate use of AI in an offensive manner through or beyond cyberspace? What is a legitimate target? When it comes to AI and offensive cyber, all of the main questions around offensive cyber remain the same – the ones that traditionally apply to cyber conflict and the ones that we want to start thinking about with sub-threshold conflict. With AI, I think it is the way in which it can be mis-utilised or utilised to scale up inappropriate or unethical activity that is particularly problematic.

ES: How should states go about mitigating those risks? You touched on norms earlier, but because a lot of this work is super secretive, how can we have those conversations or develop regulation when states are, perhaps for good reason, not willing to reveal what they are doing in this space?

AE: Absolutely. Military innovation around AI will always be incredibly secretive. You will have these proprietary algorithms that external parties cannot trust, and this is really difficult in the military space where the data is so limited anyway. I mentioned earlier that you can feed three million pictures of cats into an algorithm that then learns to recognise a cat, but there are way fewer images of tanks in the Baltic region or of particular kinds of weapons. The data is much more limited in secretive military contexts and it potentially is not being shared between nations to the extent that might be desirable when it comes to building up a better data set that would lead to more accurate decisions. So encouraging information sharing to develop more robust algorithms would be one thing that could mitigate those technical risks.
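
The point about scarce data can be illustrated with a small synthetic experiment (the data and classifier below are invented stand-ins, not military imagery): a simple nearest-centroid classifier trained on a handful of examples per class is noticeably less accurate than the same classifier trained on thousands, which is why pooling data across partners would tend to yield more accurate decisions.

    import numpy as np

    rng = np.random.default_rng(2)

    def make_data(n_per_class, dim=50):
        """Two overlapping synthetic classes (stand-ins for 'tank' vs 'not tank')."""
        a = rng.normal(loc=0.0, size=(n_per_class, dim))
        b = rng.normal(loc=0.6, size=(n_per_class, dim))
        return a, b

    def centroid_accuracy(n_train, n_test=2000):
        """Fit a nearest-centroid classifier and measure its test accuracy."""
        a_tr, b_tr = make_data(n_train)
        a_te, b_te = make_data(n_test)
        ca, cb = a_tr.mean(axis=0), b_tr.mean(axis=0)

        def predict_b(x):
            # True where a point is closer to class b's centroid.
            return np.linalg.norm(x - ca, axis=1) > np.linalg.norm(x - cb, axis=1)

        correct = (~predict_b(a_te)).sum() + predict_b(b_te).sum()
        return correct / (2 * n_test)

    for n in (5, 50, 500, 5000):
        print(f"{n:>5} examples per class -> test accuracy {centroid_accuracy(n):.3f}")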

As for broader conversations, norms and regulation: I think regulation is difficult. We have seen that with associated technologies: regulation moves quite slowly and will potentially fail to capture what happens in 10, 15 or 20 years’ time because we cannot foresee the way in which this technology will be deployed. Norms, yes, there is potential there. You can encourage principles, not only in the kinetic space but there are also statements and agreements around cyberspace – NATO’s Cyber Defence Pledge, for example, and the Paris Call. States can come together and agree on baseline behaviours of how to act. It is always difficult to get consensus and it is slow, but once you have it that can be quite a powerful assurance – not confirmation that AI will not be used in offensive cyber in undesirable ways, but it gives some assurance to alliance structures.

And those kinds of conversations can provide the basis for coming together to innovate as well. So we already see, for example, that while the UK and US have the power and resources to invest on their own, across NATO groups of countries are coming together to look at certain problems, for example to procure items together, which may well be the path towards military AI.

It is difficult and you cannot force states to cooperate in this way, but it is also in the interests of some states. For example, if the US has invested billions in military AI for cyber purposes, it is also in its interest that its allies are secure as well and that the wider ecosystem is secure. So it may choose to share some of those capabilities with allies: not the most secretive ones, nor the raw data, but, for example, the principles it abides by or certain open-source tools. Then we start thinking about trust networks, whether that is the Five Eyes, NATO or other alliance structures too. So it is not hopeless.


The final interview in Strife’s Offensive Cyber Series is with Dr Jacquelyn Schneider on cyber strategy. It will be released in two parts on Thursday 24th and Friday 25th June 2021.

Filed Under: Blog Article, Feature, Series Tagged With: AI, amy ertan, Artificial Intelligence, cyber, Cybersecurity, cyberwarfare, ed stacey, Military Cyber, offensive cyberwarfare, offensive cyberwarfare series

Who’s Driving This Train? Intelligent Autonomy and Law

December 19, 2018 by J. Zhanna Malekos Smith

In August 2018, the United Nations Group of Governmental Experts (UN GGE) held its second session on autonomous weapons systems in Geneva. The delegation examined a variety of subjects, including the human-machine interface, accountability, and intelligent autonomy.

This article first describes the concept of intelligent autonomy and then offers a rather pointed critique of one view expressed in the UN GGE Chair’s Report on the delegation’s discussion, an advance copy of which is available here.

Intelligent Autonomy

Autonomy refers to the ability of a machine to function without a human operator.

The UN GGE’s report describes autonomy as a spectrum; it notes that there are variations based on machine performance and technical design characteristics like ‘self-learning’ and ‘self-evolution,’ which is essentially machine-based learning without human design input.

Bearing in mind that autonomous systems function differently from automatic systems, the U.S. Department of Defense’s report Unmanned Systems Integrated Roadmap FY 2011- 2036 describes automatic systems as largely self-steering: ‘follow[ing] an externally given path while compensating for small deviations caused by external disturbances.’

In contrast to these systems, according to DoD Directive 3000.09, an autonomous system ‘can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system[.]’

Although fully autonomous weapons (FAW) systems operate according to control algorithms set by system operators, they do not require human command to perform combat and support functions. Currently, these specialized systems are being developed by the US, China, the UK, Russia, Israel, and South Korea.  The Congressional Research Service’s report U.S. Ground Forces Robotics and Autonomous Systems provides specific examples of how other states have integrated armed robots into warfighting. Per the report, ‘South Korea has deployed a robot sentry gun to its border with North Korea. Israel has sent an armed robotic ground vehicle, the Guardium, on patrol near the Gaza Border. Russia is building an array of ground combat robots and has plans to build a robot tank.’

A Critique of the UN GGE Chair’s Summary Report

One point of friction in the summary report concerns the vitality of the relationship between law and autonomous weapons.

For instance, section three, paragraph B(27)(e) reads:

‘Autonomy in the military targeting and engagement cycle has to be studied further keeping in view that autonomy can exist throughout or during parts of the targeting cycle and could start to be applied increasingly in other contexts as close combat.’ (emphasis added)

However, section three, paragraph E(33) states: ‘As IHL [international humanitarian law] is fully applicable to potential lethal autonomous weapons systems a view was also expressed that no further legal measures were needed.’ (emphasis added)

Really? No additional inquiry is necessary to develop legal measures addressing autonomous weapons, but we must continue testing these systems in military targeting?

How can ‘no further legal measures be needed’ if the summary report is silent on how international law applies to:

• Situations where a non-state actor uses an autonomous weapon system to harm persons, or objects.

• How the international legal principle of state responsibility extends to this technology.

• How the international legal principle of reciprocity applies here.

• How the use of FAWs influences the way states should inform their decision on ‘when to resort to force.’

• And how a state’s inherent right to self-defense under Article 51 of the United Nations Charter might be challenged if proper and timely attribution to the FAWs attack is encumbered.

This simultaneous call for continued research and development, and the implicit support for the stagnation of international law, is befuddling. This situation is much like a train conductor urging travelers on the station platform to hop aboard the train before it departs, while at the same time barring all entry on or off.

Case in Point: Reciprocity and FAWs

Focusing on the challenges of reciprocity: the functioning of international humanitarian law and the law of armed conflict (IHL/LOAC) depends largely on states agreeing to be held accountable for their actions, so how will the legal concept of reciprocity translate into a control algorithm for FAWs?

Reciprocity is the legal and diplomatic concept that whatever rules and customs states agree to, each shall abide by the terms. In jus in bello, reciprocity encourages combatants to abide by the state-sponsored customs of war. For example, a predominant feature of IHL/LOAC recognizes the need to reduce the means and methods of warfighting that risk unnecessary suffering to combatants and civilians. Human Rights Watch argues that FAWs risk unnecessary suffering because they ‘lack the human qualities necessary to meet the rules of international humanitarian law.’

Responding to this concern, international legal scholar Michael Schmitt provides countervailing evidence about FAWs capabilities. ‘Modern sensors can, inter alia, assess the shape and size of objects, determine their speed, identify the type of propulsion being used, determine the material of which they are made, listen to the object and its environs, and intercept associated communications or other electronic emissions,’ he explains.

On the issue of target discrimination, however, The Verge reports that military commanders are leery of ‘surrendering control to weapons platforms partly because of a lack of confidence in machine reasoning, especially on the battlefield where variables could emerge that a machine and its designers haven’t previously encountered.’ With these compelling counter-viewpoints and burgeoning areas of law yet to be explored, how can the position that ‘no further legal measures are needed’ be reasonably supported?

Attempts to interpret the delegation’s intent are further muddled when read alongside paragraph C(b):

‘Where feasible and appropriate, inter-disciplinary perspectives must be integrated in research and development, including through independent ethics reviews bearing in mind national security considerations and restrictions on commercial proprietary information.’

This passage signposts that there are international legal issues yet to be grasped. And yet, the ‘train conductor’ in paragraph E(33) takes the stance that ‘none shall pass.’

Pressing Ahead – Intelligent Law

Discussions at the 2019 UN GGE meeting on lethal autonomous weapons systems must include, and cannot sacrifice, an examination of how IHL/LOAC applies to the areas outlined above, in order to develop greater granularity. ‘Reason is the life of the law,’ as the 16th-century English jurist Sir Edward Coke observed, and indirectly encouraging lethargy in legal analysis is neither a healthy nor a reasonable approach to driving this train.

Editor’s note: This article was originally published in Lawfire on 7 December 2018.


Jessica ‘Zhanna’ Malekos Smith, J.D., the Reuben Everett Cyber Scholar at Duke University Law School, served as a Captain in the U.S. Air Force Judge Advocate General’s Corps. Before that, she was a post-doctoral fellow at the Belfer Center’s Cyber Security Project at the Harvard Kennedy School. She holds a J.D. from the University of California, Davis; a B.A. from Wellesley College, where she was a Fellow of the Madeleine Korbel Albright Institute for Global Affairs; and is finishing her M.A. with the Department of War Studies at King’s College London.


Image source: https://www.theverge.com/2018/9/8/17833160/pentagon-darpa-artificial-intelligence-ai-investment

Filed Under: Blog Article Tagged With: Artificial Intelligence, autonomous weapons systems, autonomy, international law, reciprocity, United Nations Group of Governmental Experts

Why the Terminator might turn out to be Chinese: China’s bid for an AI-empowered military

April 13, 2018 by Clément Briens

A mock robot used as part of the ‘Campaign to Stop Killer Robots’, which aims to ban autonomous AI weapons systems (Credit Image: Getty)

Science fiction’s portrayal of Artificial Intelligence (AI) has very often been centered on the fear of “killer robots”, almost always manufactured by the United States (US) government. The malevolent Skynet AI in the Terminator franchise, for instance, is developed by Cyberdyne Systems, a US military contractor. The AI in I, Robot was also manufactured by a fictional US government contractor, the aptly named US Robots and Mechanical Men Inc. However, reality has strayed from these popular conceptions as they approach half a century of age. In practice, AI will have many military applications other than “killer robots” as we picture them, and if we ever do see the rise of a Terminator in our lifetimes, it will not be built with Arnold Schwarzenegger’s iconic Austrian accent. Rather, the Terminator will most likely speak Mandarin Chinese.

China’s march towards AI

In June 2017, China’s State Council released its AI Development Plan, which shows Beijing’s clear intent to boost its efforts to develop its own AI capabilities. The document reveals China’s ambition to become “the world’s premier AI innovation centre” by 2030, which will likely increase its competitiveness in the economic and military-industrial sectors, a worrying thought for US policymakers. Even more worrying for US leaders is the fact that China will most likely piggyback on US innovation to achieve this, either through industrial espionage (which China has a long history of resorting to) or by legitimately investing in American AI startups (which has already begun).

Beijing’s dual strategy to fulfil its ambitions is clear. First, it will seek to acquire some of these technologies by investing in the US. While some of these investments may seem like normal Foreign Direct Investments (FDIs) at first glance, the risk remains because AI is a dual-use technology. That is, it “can be used for both civilian and military applications”, such as nuclear power or GPS. Furthermore, dual-use technologies are notoriously hard to control, as demonstrated by the difficulty of the JCPOA negotiations with Iran in 2015. Ensuring compliance through inspectors and safeguard mechanisms is difficult, as demonstrated by the alleged construction of underground nuclear enrichment facilities, although their actual existence remains unconfirmed. Similar measures to marshal AI research could be just as problematic.

Second, Beijing will not only seek to exploit investments in sensitive US technology firms, it will also look to stimulate its own local start-up community. A report by Eurasia Group and Sinovation Ventures states that “China’s startup scene has transformed dramatically over the past 10 years, from copycat versions of existing applications to true leapfrog innovation”, in part due to “supportive government policies”. This seems far from the reality of Washington’s policies with regard to AI, as Donald Trump’s 2019 budget proposal reportedly “wants to cut US public funding of “intelligent systems” by 11 per cent, and overall federal research and development spending by almost a fifth.” Indeed, the Trump presidency lays out a stark future for US development of AI and its hopes of keeping pace with China.

Terminator or J.A.R.V.I.S.?

With recent news of China’s advances in AI, it is important to clarify what the security implications of AI actually are, and to debunk the “killer robot” myth. The public’s main perception of AI is one of “killer robots”, fuelled by the sci-fi portrayals cited above. Boston Dynamics recently published a series of viral videos showing its advances in developing dog-like SpotMini robots mounted with robotic arms. In these videos, the robots were capable of taking a beating from a nearby engineer and of opening doors. Internet users reacted to the videos with mixed feelings:

Screenshot of comments on Boston Dynamics’ viral “Testing Robustness” video showing off their SpotMini robot (Credit YouTube)

Some of these reactions have been much stronger than others. For instance, an open letter by 116 scientists sparked discussions at the United Nations (UN) of a ban on autonomous “AI-empowered” weapons systems. Additionally, campaigns to ban “killer robots” have sprung up worldwide.

But while the public fear of Terminator-type killer robots is understandable, as AI-controlled weapons systems cross innumerable ethical and moral lines, other worrying applications of AI exist. AI-empowered cyber weapons, for example, seem more likely than killer robots and could have destructive effects that would endanger Western liberal democracy. A report titled “The Malicious Use of Artificial Intelligence” outlines the risks that AI poses in helping to automate cyber-attacks. Although AI is already being employed as a form of cyber defence, the report outlines how AI can be used to empower “spear phishing”, a technique where hackers target individuals through social engineering in order to obtain passwords and sensitive data.

One can imagine how AI could also be employed to spread disinformation on a much larger scale, and more efficiently, than “dumb” botnets. The Internet Research Agency in St. Petersburg, Russia, allegedly had a hand in interfering in the 2016 US presidential election. Such troll factories may become a thing of the past as AI becomes able to autonomously mimic online profiles, feeding off huge amounts of data on Twitter and Facebook to automatically formulate opinions, spread political messages and retweet or share disinformation. While the US Department of Justice was able to attribute, identify and indict thirteen of these so-called trolls, attribution of AI-controlled social media accounts will be even harder. Thus, AI-empowered disinformation may have far-reaching consequences for our electoral processes.

How the US can stay ahead

Many are skeptical of the UN’s ability to actually ban autonomous weapons platforms, and the fact that Russia and China would probably not abide by such a ban makes it much less appealing to US policymakers.

How then can the US stay on par with China’s rising economic power and its fine-tuned strategy?

Firstly, to remain competitive, the US needs to limit the spread of dual-use AI technology originating from its Silicon Valley laboratories. An unreleased Pentagon report obtained by Reuters highlights the need to curb risky Chinese investments, which most likely provide access to AI advances without triggering review by the Committee on Foreign Investment in the United States (CFIUS). Implementing the Pentagon’s proposed reforms would be a welcome first step for US lawmakers.

Secondly, the US needs to bank on foreign nationals by investing in and retaining top talent. A comprehensive report by Elsa B. Kania for the Center for a New American Security argues that it is “critical to sustain and build upon the current U.S. competitive advantage in human capital through formulating policies to educate and attract top talent”. Modifying immigration laws so that top Chinese students in AI and other high-tech domains can stay after their studies could help mitigate the “brain-drain” effect that Silicon Valley is currently enduring, and offer a way to gain an edge on Beijing.

Lastly, the US should also rely on its private sector, in which most of the world’s cutting-edge AI research is taking place. According to a CB Insights report, 39 of the top 50 AI companies are based in the US. However, the best-funded of them remains a Chinese firm, ByteDance, which has raised $3.1bn according to the report. The US has a flourishing industry that needs backing from its leaders. With industry leaders such as Google’s DeepMind venture or even Tesla investing billions in AI research, competing with Chinese firms is more than achievable. MIT’s list of the world’s 13 “smartest” AI ventures names five US companies as its top five picks: NVidia, SpaceX, Amazon, 23andMe, and Alphabet, Google’s parent company.

While the US is still ahead of China in AI research, its rival’s clear intent and strategy, and the lack thereof from the current US administration, seem likely to hand Beijing the edge over the next decade of AI research. US allies such as the United Arab Emirates (UAE) and France have both unveiled AI-oriented national strategies. The Trump administration is lagging behind in presenting an American equivalent of similar scale, despite urgent calls from academics and cybersecurity professionals.


Clément Briens is a second-year War Studies & History Bachelor’s degree student. His main interests lie in cyber security, counterinsurgency theory, and nuclear proliferation. You can follow him on Twitter @ClementBriens


Image Source 

Banner: https://blogs.spectator.co.uk/2017/08/we-should-regulate-not-ban-killer-robots/

Image 1: https://www.youtube.com/watch?v=aFuA50H9uek

Filed Under: Blog Article Tagged With: Artificial Intelligence, China, Cyber Security, feature, USA
