Monday, 23 December 2019

Autonomous Weapons in Security: Should They Be Banned?


3rd December 2019
What uses of artificial intelligence in today's and tomorrow's wars should be allowed, which restricted, and which completely excluded?
Artificial Intelligence (AI) may seem a matter of science fiction and technology alone; however, like other major technological leaps, it raises major questions about society, politics and security.
Being aware of the inherent bias of the social sciences, I acknowledge my point of departure: war is immoral and should be eradicated. That utopia, however, should not obstruct theorizing about the meantime, with the aim of diminishing human suffering in the wars to come.
First of all, we need a definition of AI: a system that autonomously and algorithmically decides on actions given determined circumstances. In the field of security and war, AI is applied in Autonomous Weapons Systems (AWS), which can select targets on their own through the process just described.

AWS pose serious legal, practical and moral problems which lead to the conclusion that their use ought to be severely restricted, if not banned. To begin with, International Humanitarian Law rests on two important principles: distinction and proportionality. The former is the differentiation between military and civilian targets[1], and AI may not reliably recognize a civilian, a surrendering soldier or a prisoner of war. The latter prohibits actions that may cause more damage than the actual military advantage gained[2], but AWS will lack the contextual understanding to strike this balance: algorithms have aims and means, while material and above all human destruction is not quantitatively measurable by machines, as “only humans can make context-specific judgements” [3].
Moreover, the Martens Clause introduced by the Hague II Convention of 1899 states that matters not covered by the regulations should be guided by the “laws of humanity and (…) public conscience” [4]. Even if the definition of those terms is arguable, for me and many citizens their aim is to reduce suffering and to conduct wars humanely, which cannot be achieved through the development of the so-called “Killer Robots”.

Secondly, regarding the practical issues, we will dig into the quantitative concerns of war. Even if there is no agreement among experts on whether AWS would favour the attacker or the defender, or on whether they would incentivize or deter armed conflict, we can determine that as wars became more abstract to the public they would become more likely[5]. Added to this, fewer human deaths – on the offensive side – would carry fewer political consequences, i.e. no more Vietnams. Nevertheless, technological gaps would become disruptive, with terrible lethal consequences for the human-based armies of less developed states or private factions.
Besides, we should bear in mind the lobbying of the new military-industrial-technological complex pushing governments towards fully autonomous militaries, along with the absence of “costly or hard-to-obtain raw materials” in AWS, which would make them ubiquitous among military powers[6] – an issue that would lead to a dangerous, already-lived arms race. Moreover, non-state actors would gain far greater capacities to destabilize state and personal security through the hybrid capabilities that AI will enhance.

Another argument is that of responsibility. As the advocates of an AWS prohibition note, “humans may fade out of the decision-making loop” [7]; the inevitable question then arises: who will be responsible when AWS misbehave? The designers and programmers? The commander who gave the broad order? The governmental owners of these machines? No one could be held responsible before a national or international court, because the very characteristic of autonomy shifts responsibility onto nothing but the machine itself, which for evident reasons cannot be held accountable to human justice. Therefore, “a war cannot be waged justly if no-one could be morally and legally held accountable for such serious wrongings” [8], because justice can only be applied when the perpetrator could recognize beforehand the wrongdoing of his or her conduct.

Thirdly and finally, we should look at the moral matters. When talking about this, we tend to assume that robots or AWS will lack moral constraints, whereas soldiers have “humane treatment, compassion and empathy for their fellow humans” [9]. That is partially true, but in warfare soldiers also commonly display “fear, hatred and cruelty”, while AWS will have no emotions, neither the good nor the bad ones; therefore, it can be argued that robots would follow the rules of humanitarian law and the military code more strictly[10] [11]. However, as many power organizations – public or non-state – tend to be “ethically corrupt or even evil”, the programming and the orders AWS follow will be the key issue, because autonomous robots will have immense capabilities for accomplishing military objectives, and those objectives can be humanly immoral[12].
Nowadays there are many discussions over the morality of the issue. Is it ethical to delegate the decision over life and death to autonomous machines[13]? It can also be strongly questioned that probabilities – upon which algorithms, and hence AWS, are based – should determine life, or rather its termination.
To end this part, a final point has rarely been raised by experts. From a human security perspective, AWS would be a source of insecurity, bearing in mind the concept of freedom from fear. The argument is that, given the widespread current perceptions of killer robots, people and societies could see Autonomous Weapons as dangerous, incomprehensible and uncontrolled machines – weapons systems with no emotions or feelings. These issues would most probably lead to a generalized sense of distrust, instability and, eventually, insecurity within communities.

In conclusion, the international community should enforce control over AI warfare applications in accordance with the following prescriptions: (a) a ban on fully autonomous weapons; (b) regulation ensuring meaningful human control over specific attacks by semi-autonomous and supervised autonomous weapons, and by autonomous weapons if the aforementioned ban proves impracticable; (c) the human capacity to suspend or abort attacks prior to impact; and (d) regulation allowing non-lethal AI warfare applications such as transportation, reconnaissance and the like. International regulation is a must if we want to avoid the uncontrolled development of weapons which, as we have seen, raise many questionable points.

Total number of words: 998
References
-          Brookings Institution (April 5, 2018). Autonomous weapons and international law [Video]. Retrieved from: https://www.youtube.com/watch?v=-vvryhsECno
-          Campaign to Stop Killer Robots (2018). No country would be safe from fully autonomous weapons. Retrieved from: https://www.stopkillerrobots.org/learn/
-          Casanovas O. & Rodrigo A. (2012). Compendio de Derecho Internacional Público. Madrid: Ed. Tecnos.
-          Convention with respect to the Laws and Customs of War on Land (Hague, II). July 29, 1899.
-          Cummings, M. (2017). Artificial Intelligence and the Future of Warfare. International Security Department and US and the Americas Programme – Chatham House, The Royal Institute of International Affairs.
-          Future of Life Institute (2015). Autonomous Weapons: An open letter from AI & robotics researchers. Retrieved from: https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1
-          Human Rights Watch & International Human Rights Clinic (2018). Heed the Call: A Moral and Legal Imperative to Ban Killer Robots.
-          International Committee of the Red Cross (2018). Autonomous weapons: States must agree on what human control means in practice. Retrieved from: https://www.icrc.org/en/document/autonomous-weapons-states-must-agree-what-human-control-means-practice
-          Kaspersen, A., Barth, E., et al. (2016). 10 trends for the future of warfare. World Economic Forum.
-          Müller, V. & Simpson, T. (2014). Autonomous Killer Robots Are Probably Good News. In Sociable Robots and the Future of Social Relations – Proceedings of Robo-Philosophy 2014 (pp. 297-305). (Frontiers in Artificial Intelligence and Applications; Vol. 273). IOS Press. Retrieved from: https://doi.org/10.3233/978-1-61499-480-0-297
-          Noah Harari, Y. (2018). 21 lliçons per al segle XXI. Barcelona: Edicions 62
-          Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS). CCW/CONF.V/2 (2016) - Fifth Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.


[1] Casanovas & Rodrigo, 2012, p. 502
[2] Idem, 2012, p.502
[3] International Committee of the Red Cross, 2018
[4] Convention with respect to the laws of war on land (Hague II), 29 July 1899
[5] Kaspersen & Barth, 2016
[6] Future of Life Institute, 2015
[7] Campaign to Stop Killer Robots, 2018
[8] Müller & Simpson, 2014, p. 7
[9] Human Rights Watch & International Human Rights Clinic, 2018, p. 2
[10] Noah Harari, 2018, p. 105-107
[11] Brookings Institution, 2018, min. 21:49-22:45
[12] Noah Harari, op. cit.
[13] Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems – CCW/CONF.V/2, p. 10
