Supervised by: Gayathree Devi Kalliyat Thazhathuveetil, BA LLB Hons (GNLU), BCL (Oxon), MPhil (Oxon). Gayathree is a DPhil (Law) candidate at the University of Oxford. In addition to her LLB, she holds an MPhil and a Bachelor of Civil Law; during her BCL studies, she received prizes for her exemplary work in International Law & Armed Conflict and International Economic Law. Before coming to Oxford, she completed her BA LLB (Hons) at Gujarat National Law University, India. Gayathree has worked with graduate students on the Diplomatic Studies Programme at Oxford in a mentoring capacity, and has assisted undergraduate students at Hertford College, Oxford and SOAS, University of London.

Introduction

From advancing medicine to optimizing business processes, artificial intelligence (AI) and machine learning (ML) are rapidly changing the face of science, business and everyday life. At the time of writing in 2022, self-driving vehicles are on the verge of becoming a common sight on our roads. These advances are accompanied by another development: weapon systems are becoming increasingly automated. Recent progress in automated systems, and the possibilities it opens up, has generated both interest and anxiety within some militaries and defense ministries, raising several questions about the safety and protection of civilians and combatants during conflict.

 

Lack of Regulations Surrounding AI

With Russia’s invasion of Ukraine as the backdrop, the United Nations recently held a meeting to discuss the use of autonomous weapons systems, commonly referred to as “killer robots”. These are essentially weapons that are programmed to find a class of target, then select and attack a specific person or object within that class, with little human control over the decisions that are made. Russia took center stage in this discussion, in part because of its potential capabilities in this space, but also because its diplomats thwarted the effort to discuss these weapons, saying that sanctions made it impossible to participate properly. Russia’s aggression is becoming an unfortunate test case for how artificial-intelligence-fuelled warfare can, and likely will, proceed.

The technology behind some of these weapons systems is immature and error-prone, and there is little clarity on how the systems function and make decisions. Some of these weapons will invariably hit the wrong targets, and competitive pressures might result in the deployment of more systems that are not ready for the battlefield. To avoid the loss of innocent lives and the destruction of critical infrastructure in Ukraine and beyond, we need nothing less than the strongest diplomatic effort to prohibit in some cases, and regulate in others, the use of these weapons and the technologies behind them, including AI and machine learning. This is critical because when military operations are proceeding poorly, countries might be tempted to use new technologies to gain an advantage. 

Data fed into AI-based systems can teach remote weapons what a target looks like and what to do upon reaching that target. As with facial recognition tools, experts have raised concerns about bias and error; but AI technologies built for military use carry graver implications, because they are meant to destroy and kill, and because they are being introduced into dynamic war contexts. And while Russia may have been successful in thwarting real-time discussion of these weapons, it is not alone in pursuing them: AI is already employed in the development of swarming technologies and loitering munitions, also called kamikaze drones.

Rather than the futuristic robots seen in science-fiction movies, these systems are built on existing military platforms that leverage AI technologies. AI technologies have repeatedly been shown to be biased, particularly against women and people in minority communities, and if recent social media discussions within the AI community are any indication, the developers creating these technologies, largely in the private sector, are often unaware of the impact of what some militaries are already deploying. At the moment, the United States is at the head of the pack, but a joint statement by Russia and China in early February notes that they aim to jointly build international relations of a new type, and specifically points to their intention to shape the governance of new technologies, including what may well be military uses of AI.

How software uses data, particularly through sophisticated, ever-evolving algorithms, is becoming ever more consequential as nations integrate artificial intelligence into their services, procedures, and decision-making. According to the EU, which is once again setting the pace with its proposal for an AI legal framework, regulation is crucial to the creation of trustworthy AI technologies.

Concerns about the potential misuse or unintended consequences of AI have prompted efforts to examine and develop standards. The US National Institute of Standards and Technology (NIST), for example, has convened workshops and discussions with the public and private sectors around the development of federal standards that could serve as building blocks for reliable, robust, and trustworthy AI systems.

In April 2021, the European Commission released its 108-page proposal to regulate artificial intelligence, describing it as an attempt to ensure a well-functioning internal market for artificial intelligence systems that is based on EU values and fundamental rights. It is the bloc’s first major attempt to comprehensively regulate such systems, and could have global repercussions. 

The war in Ukraine should be a wake-up call regarding the use of technology in warfare and the need to regulate AI technologies to ensure civilian protection. If the international community does not properly manage the development, proliferation, and use of military AI, international peace and stability could be at stake. It is not too early for national governments to take action; the big question is what the right tools for doing so are.

While the proposed EU regulation broadly acknowledges these risks, neither it nor the current rules meaningfully protect people’s rights to safety, social security and an adequate standard of living.

There are many reasons a simple treaty is unlikely to be the answer. First, capturing what military AI and its risks are in the language of a treaty is no easy task. AI is a fuzzy technological area, and experts themselves still disagree about what it is and isn’t. The arms control community would face significant conceptual and political hurdles if it pursued a blanket regulation on military AI. In fact, governments have yet to articulate use cases for most applications of military AI. 

Second, it might take years or even decades for governmental negotiators to reach an agreement on an AI arms control treaty. Given how fast AI technology evolves, officials may find that the eventual outcome of any international negotiation is out of tune with technological reality and obsolete from the get-go, especially if the treaty is based on technical characteristics.

Third, the political outlook for a new arms control agreement is gloomy. As tensions rise between Russia, China, and the United States, it’s difficult to imagine these countries having many constructive discussions on military AI going forward. 

The White House has also recently released guidance for the regulation of AI applications, establishing a framework that future rulemaking or legislation can build upon. The good news is that the administration is committed to a sectoral approach: since AI is essentially a collection of statistical techniques that can be used throughout the economy, it makes little sense to have a federal AI commission enforcing one-size-fits-all rules. In a recent op-ed, former White House official R. David Edelman makes a similar point about not regulating AI as if it were a single thing. However, the idea that measures such as transparency, accountability, and fairness might themselves promote AI growth and innovation is foreign to this framework. The guidance is a good first step, but the current and proposed regulations still have their flaws. The real task for AI regulators today is to create a rules structure that both protects the public and promotes industry innovation, rather than trading one off against the other, while avoiding regulatory or non-regulatory actions that needlessly hamper AI’s development.

 

The Impact of Gender and Race Bias in AI

The Disregard of Minorities

MIT researcher Joy Buolamwini discovered that she had to wear a white mask to get a facial recognition service to detect her face. That revelation prompted Buolamwini to test software from brand-name tech firms such as Amazon, uncovering much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men. This simple example highlights how easily machine learning can overlook minorities. Many other researchers have shown that AI systems, which look for patterns in large amounts of data, will reproduce the pre-existing biases of the data they learn from. If, for example, AI systems are developed using images of mostly white men, they will work best at identifying white men, and may discriminate against people on the basis of race or gender. Imagine the dangers this poses for targeting decisions, where there would be no way of knowing which minority groups could erroneously fall into the category of legitimate targets. The principle of distinction requires parties to an armed conflict to distinguish at all times between combatants and military objectives on the one hand, and civilians and civilian objects on the other, and to attack only the former. Such bias is a clear threat to the principle of distinction: it would be impossible to determine which minorities might be misclassified as legitimate targets, potentially leading to attacks on civilians or civilian objects simply because an algorithm placed them in the wrong category.
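
The underlying mechanism can be demonstrated in a few lines of code. The sketch below is purely illustrative and uses synthetic data and assumed parameters (it is not drawn from any of the systems discussed): a classifier trained on data in which one group is heavily under-represented ends up with a markedly higher error rate for that group.

```python
# Illustrative sketch with synthetic data: a model trained mostly on one group
# performs noticeably worse on an under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the relationship between features and label differs slightly per group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is badly under-represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(250, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("group A (well represented)", 0.0), ("group B (under-represented)", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: error rate = {1 - model.score(X_test, y_test):.1%}")
```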

Lack of Transparency 

The purpose of transparent artificial intelligence, often called explainable AI, is to make it possible to analyze and communicate how a particular model reaches its outcomes. It enables us, as humans, to understand the inner workings responsible for certain outputs and why particular decisions are generated. Unlike traditional algorithms, decisions made by machine learning processes can be impossible for humans to trace and understand; in a targeting context, there might be no explanation at all for why a particular, potentially destructive decision was made. With this lack of explainability, we can only imagine the dangers such unpredictability will generate when it is not possible to decipher which features were responsible for classifying a person or an object as a legitimate target. Our databases are so inherently biased that we may no longer be able to account for certain decisions taken in conflict. Not only does this transgress international humanitarian law, it also has a clear impact on our rights as human beings: the obscurity of AI systems can prevent individuals from recognizing why their rights were violated and therefore from seeking redress for those violations.
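
For conventional machine-learning models, post-hoc explanation techniques do exist. The sketch below applies one such technique, permutation importance, to a toy model trained on synthetic data (an assumed setup for illustration, not an account of any fielded system); it shows the kind of “which inputs mattered” answer that explainability tools aim to provide, and which is largely unavailable for opaque targeting systems.

```python
# Toy example of a post-hoc explainability technique (permutation importance)
# on synthetic data; not a model of any real targeting system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                # four hypothetical input features
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)   # only the first two actually drive the label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {imp:.3f}")
# Features 0 and 1 come out as important, 2 and 3 near zero. A decision that
# cannot be interrogated even at this level offers no basis for accountability.
```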

Automated Face Recognition 

Facial recognition systems have supplanted passwords and fingerprint identification on our phones, laptops and watches. Who is to say that they will not be used to automate kill decisions in urban warfare? Facial recognition is used not only by major companies such as Apple; it has also become an important tool in banks, airports and border checks. It could be argued that it would provide an excellent tracking tool with which to identify and kill ‘high value targets’. But although face recognition algorithms boast classification accuracies of over 90%, those results do not hold everywhere, and the technology suffers from the same issues as every other AI algorithm. In eight trials carried out in London between 2016 and 2018, 96% of the alerts raised were “false positives”, cases in which the software incorrectly told police that a person passing through the scanning area matched a photo on the database, misidentifying members of the public as potential criminals; only 4% of the matches were correct. Just imagine the humanitarian consequences if robotic weapons had been selecting targets on that basis.
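
Part of the reason such high false-alert rates arise is simple arithmetic: when genuine matches are rare in the scanned population, even an accurate classifier produces mostly false positives. The figures in the sketch below are assumptions chosen for illustration, not numbers from the London trials.

```python
# Back-of-the-envelope base-rate calculation with assumed figures.
prevalence = 1 / 10_000       # assumed: one person of interest per 10,000 passers-by
sensitivity = 0.95            # assumed: 95% of genuine matches are detected
false_positive_rate = 0.01    # assumed: 1% of non-matches are wrongly flagged

scanned = 1_000_000
true_matches = scanned * prevalence
true_alerts = true_matches * sensitivity
false_alerts = (scanned - true_matches) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts raised: {true_alerts + false_alerts:,.0f}")
print(f"share of alerts that are genuine matches: {precision:.1%}")  # roughly 1%
```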

Bias Issues in Data 

It is no secret that what we value as a society is constantly changing, with large-scale shifts over the last several years in what is acceptable to say in the media or in public, and in what is classified as prejudiced. Yet most of society’s old, discriminatory values remain deeply embedded in the internet, from which the majority of the data used by machine learning algorithms is derived. In a 2015 scandal, Google’s face recognition technology labelled two Black American users as gorillas, the result of biased inputs and incomplete training. In another well-known experiment, recruiters preferred resumes with “white-sounding” names; an algorithm trained on that data naturally learned to filter out and exclude “black-sounding” names, selecting candidates on subjective criteria and perpetuating evident racial discrimination. The societal push towards greater rationality and justice is being held back by unjust historical assumptions about race, poverty, and gender, and in a humanitarian context there is no reason to believe that bias in targeting data would be any different.
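
The resume experiment shows how bias enters through the labels themselves. The sketch below is a deliberately simplified, synthetic reconstruction of that dynamic (none of the variable names or coefficients come from the original study): if historical hiring decisions rewarded certain names, a model trained on those decisions scores otherwise identical candidates differently.

```python
# Synthetic illustration: biased historical labels teach a model to discriminate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
qualification = rng.normal(size=n)            # identical skill distribution for everyone
group = rng.integers(0, 2, size=n)            # 1 = name historically favoured by recruiters
# Past callbacks depended on qualification AND on the name, reflecting recruiter bias.
callback = (qualification + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

model = LogisticRegression().fit(np.column_stack([qualification, group]), callback)

# Two candidates with the same qualification but different name groups:
same_skill = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(same_skill)[:, 1])  # the favoured name gets a much higher score
```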

The complexity and growing volume of data in healthcare suggest that artificial intelligence will be increasingly applied in the field. AI and machine learning software will enable medical professionals to identify healthcare needs and solutions more rapidly and accurately, and may provide valuable assistance to medical staff in emergencies. Corti, for example, is an AI tool that assists emergency staff: it can alert staff members when it detects a heart attack by analyzing the caller’s voice, background noise and relevant data from the patient’s medical history. If machine learning is already being deployed within the healthcare sector, it can reasonably be argued that it may be of significant value during armed conflict in treating and caring for the wounded. But if such systems carry the biases described above, their use could violate the Geneva Conventions, particularly Article 12 of Convention I, which states that the wounded and sick shall be respected and protected without discrimination on the basis of sex, race, nationality, religion, political beliefs or other criteria. With Russia’s invasion of Ukraine and increasingly prevalent discussions around new means and methods of warfare, there is a real risk that particular groups will be targeted on a larger scale than others as a result of historically biased databases, in direct violation of international humanitarian law.

International Humanitarian Law Applicability in the Use of AI

This section will address the relevance of international humanitarian law (IHL) and how it governs AI, as well as new means of cyber and information warfare. It will go into more detail about the Geneva Conventions and the regulations concerned, but most importantly it will answer the question: with AI in use during warfare, is IHL still applicable today?

The Geneva Conventions and their Additional Protocols are the cornerstone agreements of international humanitarian law. Their goal is to give individuals who become victims of armed conflicts minimum protections, standards of humane treatment, and fundamental guarantees of respect, while also allowing States to meet their military objectives. The Conventions were adopted on August 12, 1949, and have since been ratified by virtually every State. Considering what has transpired on the world’s battlefields since the Second World War, one could be led to believe that this is something to celebrate: in “traditional” or “conventional” wars like those fought in Korea, Vietnam, the Middle East, the Falklands/Malvinas, and the Gulf region, the rules of international humanitarian law were generally upheld, and despite a few horrifying exceptions, the law did protect thousands of victims of these conflicts.

However, the world has also witnessed massive transgressions of international humanitarian law in conflicts around the globe, most recently in Yemen, Somalia, Syria, Iraq, Ukraine, and many other places. It is hard to avoid asking the question we face today: is this law still applied by States?

People firmly believe in the immense worth of international humanitarian law, but its applicability should never be taken for granted. To remain relevant and useful, the law requires debate, discussion, critical examination, assessment of its implementation, gap analysis, and follow-through with the actors on the ground. Relevance, therefore, is not a given; it results from a strong commitment to the law.

IHL violations remain one of the biggest obstacles to protecting civilians, aid workers, and medical personnel. Attacks on aid organization buildings, convoy hijackings, and murder or abduction of staff members all occur. As a result of this brutality, millions of people are denied access to life-saving aid. 

Examining how the rules of customary international humanitarian law struggle to govern autonomous artificial intelligence in warfare is another way to test the relevance of IHL today. Lethal Autonomous Weapons Systems (LAWS) have become more prevalent in recent years, which has many people worried. According to the UN, they are weapons that “select, find, and engage targets without human supervision”; in simple terms, they are capable of operating unattended. AI and machine learning-enabled cyber capabilities may automatically look for vulnerabilities to exploit, execute counterattacks, or automatically defend against cyberattacks. These kinds of changes could alter the nature and severity of attacks as well as their magnitude and scope. Some of these technologies could even be considered “digital autonomous weapons”, raising concerns about human control comparable to those surrounding physical autonomous weapons.

Concerns have also been voiced over a possible legal “accountability gap” in the event of IHL violations arising from the employment of autonomous weapon systems. The choice to deploy and operate a weapon will always be made by a person, to whom culpability may be attributed, but because of the autonomy of these systems it is not always clear who is responsible. A State may be responsible for IHL violations brought about by the use of an autonomous weapon system under the doctrine of state responsibility: under general international law, States are accountable for internationally wrongful acts, including IHL violations committed by their armed forces while using autonomous weapon systems. A State would also be liable if it deployed an autonomous weapon system without proper testing or evaluation.

The limits of human control over an autonomous weapon system, however, may make it difficult to hold those involved in its design, development, and deployment accountable for IHL offenses under international criminal law. Given that such systems may choose and attack targets on their own once activated, the individuals involved may lack the knowledge or intent necessary to be held accountable: programmers may not know the specific circumstances in which the weapon system will later be used and where IHL violations might occur, and commanders may not know the precise time and place of an attack at the moment of activation.

The use of AI and machine learning in the digital sphere for information warfare, specifically the production and dissemination of false information with the intent to mislead (disinformation), as well as the dissemination of false information without such an intent (misinformation), is a related application. Even if not all of these innovations involve AI and machine learning, they appear destined to alter the scope, nature, and possible effects of information warfare. Artificial intelligence-enabled systems have been extensively used to manufacture bogus information, whether text, audio, photographs, or videos, which is becoming harder to tell apart from real material. It could have a significant impact on the ground if conflict parties use these tools to amplify tried-and-true propaganda techniques in order to sway opinion and influence decisions. The ICRC is concerned that civilians may experience arrest or ill-treatment, discrimination, attacks on their person or property, or denial of access to necessary services as a result of digital disinformation or misinformation.

Furthermore, IHL contains rules that regulate the types of weapons that may be deployed and used: “The use of means and methods of warfare which are of a nature to cause superfluous injury or unnecessary suffering is prohibited”; “The anti-personnel use of incendiary weapons is prohibited, unless it is not feasible to use a less harmful weapon to render a person hors de combat”; and “The use of weapons which are by nature indiscriminate is prohibited.” These rules, however, all set a high threshold, meaning that LAWS are unlikely to be subject to an outright ban under them. Suffering is not “unnecessary” if it serves a military objective, so even excessive agony is not automatically precluded. For a weapon to fail the test, it would have to cause unnecessary suffering in all circumstances, beginning with the weapon’s very design and extending to its use. Nothing about the way LAWS are designed suggests that they would always produce such outcomes; one may even argue that, because of their increased accuracy, they strengthen the connection between the harm done and the intended military goal in many instances.

The first difficulty in regulating LAWS within the parameters of existing legal systems is determining proportionality. The principle of proportionality, codified in Article 51(5)(b) of Additional Protocol I, prohibits “any attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”. For instance, bombing a whole building to take out a single sniper would be excessive, because the collateral damage would outweigh the military benefit.

The accountability gap is the second significant issue in regulating LAWS. Accountability makes it easier to punish wrongdoing and works to stop it from happening again; it encourages healing and assures victims that justice has been served. Under international humanitarian law, war crimes require individual responsibility. Given the significance of responsibility for the administration of justice in times of war, how are we to assign guilt to a computer? These weapons have the capacity to carry out crimes for which nobody would be held accountable; human producers and operators may be too far removed from the criminal conduct to be held accountable in any meaningful way. Moreover, even if the law were changed to encompass such devices, a judgment against a machine would not serve the goals of punishment for a victim of a war crime: a machine cannot be “punished” or deterred in any real sense.

 

Impact on Human Rights When AI is Used in Combat

The introduction of AI into combat raises many questions about an autonomous weapons system’s (AWS) ability to carry out high-level decision-making processes, such as proportionality assessments in sieges or firefights, and to discern between a military target and a civilian caught in harm’s way. There is no doubt that an AWS would excel at executing a mission or engaging a target. Although very few fully autonomous weapons exist in the world at present, the issue remains that these systems must be programmed with decision-making capabilities comparable to those of a human being. The rules of war must still apply to these machines, whatever their objective requires them to do: chemical warfare, torture, the intentional killing of innocent civilians and the like are all prohibited, even for an AWS. An AWS must also be able to select and attack targets without further human intervention beyond, perhaps, maintenance and the receipt of orders. Intervening too deeply in the AI’s decisions would take away its autonomy; that would not necessarily be a bad thing, but it would stall the progression of machines becoming more autonomous. Leaving major decisions to people would, however, remove the worry of AI making mistakes in high-risk or high-casualty situations.

As mentioned previously, an AI might carry out a proportionality assessment, a task that normally falls to a commanding officer: assessing the situation in an ongoing or upcoming battle and deciding whether the casualties, civilian or allied military, would outweigh the benefits of continuing the operation. This is all based on perceived rather than assured advantages, so from a purely logical standpoint an AI might be superior in such a situation, provided the machine had the ability to differentiate between civilian and military targets, which brings up another question.

Though it may sound like a simple distinction, there are many factors that must be recognized and understood before an AWS could be trusted in such a role. For example, most medics carry a sidearm for their own safety and the safety of those in their care, not to join the ongoing battle; would the AI understand that such a person is not a target despite carrying a weapon? Perhaps it could identify certain insignia that would require the system not to engage. Similarly, would an AWS be able to identify a civilian or hostage caught in the middle of combat as innocent based simply on hand gestures, body movement, or certain phrases? The ability to distinguish military personnel from innocent civilians would keep military operations from causing unnecessary damage in areas of high civilian population, preserving people’s rights to life, to a home, and to live safely. Most sieges and battles take place near homes and in cities, where military presence is not always high and where fighting causes unnecessary additional damage to both people and property; if an AI could quickly make these distinctions and thereby prevent casualties of this sort, then perhaps deploying such systems would do more good than harm.

Yet despite AI being advanced and efficient in most situations, human rights appear to be more at risk than assured when it comes to fully utilizing AWS in the field. As noted above, there is a real chance of AI violating the principle of distinction, which boils down to separating combatants from noncombatants, something that is at times difficult even for soldiers in the midst of battle. In 2013, a group of scientists called for a ban on lethal autonomous robots, citing the lack of evidence that robots could have the “functionality required for accurate target identification, situational awareness, or decisions regarding the proportional use of force” (International Committee for Robot Arms Control).

A lack of control during combat operations could also leave lasting effects beyond the war effort, such as the destruction of shops or hospitals that serve civilians far more than the small, perhaps even singular, military presence within them. Such outcomes disregard and destroy the rights of those caught in the crossfire or affected by the aftermath of a skirmish.

 

Conclusion

The introduction of AI into combat has been described as the third revolution in warfare, after gunpowder and nuclear arms, and the full deployment of autonomous weapons systems may be feasible within less than a decade. Just as gunpowder and nuclear weapons spread rapidly, an arms race over military AI can be expected. In the rush to gain the upper hand, AI’s reputation as a new kind of intelligence used for the benefit of humanity could be tarnished, its focus and meaning shifted towards dismantling the alliances the world has spent so long building and strengthening. Even without deployment on such a massive scale, the primary concern remains that an AWS has the potential to cause more damage than good.

 

Bibliography

Lack of Regulations Surrounding AI

https://www.scientificamerican.com/article/ai-influenced-weapons-need-better-regulation/

https://thebulletin.org/2021/03/regulating-military-ai-will-be-difficult-heres-a-way-forward/

https://hbr.org/2021/09/ai-regulation-is-coming 

https://www.brookings.edu/research/ai-needs-more-regulation-not-less/

https://www.hrw.org/news/2021/11/10/how-eus-flawed-artificial-intelligence-regulation-endangers-social-safety-net#_How_can_the_EU_ensure2

https://www.ncsl.org/research/telecommunications-and-information-technology/2020-legislation-related-to-artificial-intelligence.aspx

https://www.researchgate.net/publication/351963707_THE_REGULATION_OF_THE_USE_OF_ARTIFICIAL_INTELLIGENCE_AI_IN_WARFARE_between_International_Humanitarian_Law_IHL_and_Meaningful_Human_Control

https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk

 

The Impact of Gender and Race Bias in AI

https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai

https://apnews.com/article/north-america-ap-top-news-artificial-intelligence-ma-state-wire-technology-24fd8e9bc6bf485c8aff1e46ebde9ec1

https://international-review.icrc.org/articles/ai-humanitarian-action-human-rights-ethics-913#:~:text=The%20obscurity%20of%20AI%20systems,ordinary%20people%20do%20not%20possess

https://www.independent.co.uk/news/uk/home-news/facial-recognition-london-inaccurate-met-police-trials-a8898946.html

https://www.educative.io/blog/racial-bias-machine-learning-algorithms

 

International Humanitarian Law Applicability in the Use of AI

https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols#:~:text=The%20Geneva%20Conventions%20and%20their,become%20victims%20of%20armed%20conflicts

https://international-review.icrc.org/sites/default/files/reviews-pdf/2021-03/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913.pdf

https://civil-protection-humanitarian-aid.ec.europa.eu/what/humanitarian-aid/international-humanitarian-law_en#:~:text=IHL%20violations%20continue%20to%20be,and%20personnel%20murdered%20or%20kidnapped

https://www.forbes.com/sites/madelinehalpert/2022/04/13/russia-committed-clear-violations-of-humanitarian-law-and-war-crimes-osce-says/?sh=595a5c04686a

https://www.wnyc.org/story/role-ai-russia-ukraine-war/

file:///Users/karma/Downloads/autonomous_weapon_systems_under_international_humanitarian_law.pdf

https://www.icrc.org/en/war-and-law/weapons/ihl-and-new-technologies

https://www.icrc.org/en/doc/resources/documents/misc/57jq6v.htm

 

Impact on Human Rights When AI is Used in Combat

https://deliverypdf.ssrn.com/delivery.php?ID=082112002094022097113016087106022117099072067012079043078012091123092107102064000011021023022028055040113123028119109072015120107025064010065103016067083102112000104021092060121115085026115072086066125120122124028118119088005077084086118007102031009008&EXT=pdf&INDEX=TRUE

https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/

https://www.degruyter.com/document/doi/10.1515/9781478007135-069/html?lang=en