Supervised by: Gayathree Devi Kalliyat Thazhathuveetil, BA LLB Hons (GNLU), BCL (Oxon), MPhil (Oxon). Gayathree is a DPhil (Law) candidate at the University of Oxford. In addition to her LLB, she holds an MPhil and a Bachelor of Civil Law. During her BCL studies, she received prizes for her exemplary work in International Law & Armed Conflict and International Economic Law. Before coming to Oxford, she completed her BA LLB (Hons) at Gujarat National Law University, India. Gayathree has worked with graduate students on the Diplomatic Studies Programme at Oxford in a mentoring capacity, as well as assisting undergraduate students at Hertford College, Oxford and SOAS, University of London.

Introduction

This paper analyses international human rights in an ever-changing technological world in which artificial intelligence can be designed and used as a vehicle of both positive and harmful impact, while human rights law inadequately addresses these concerns. Human rights are defined as the fundamental rights to which all individuals are entitled internationally, and artificial intelligence is defined as any technique that enables computers to mimic human intelligence. The paper will specifically discuss the right to privacy, the right to freedom of assembly, and the right to equality, with regard to racial, socio-economic, and disability discrimination.

 

Section 1 – Right to Privacy

The invasion of privacy in a digital age

In the current digital world, where almost everything is available and conducted online, online activity is monitored through AI-operated cookie systems that track individuals across the web, collecting large amounts of data on them. These cookies store personal information from any electronic device a person uses. The data collected covers virtually everything one does online, including one’s name, address, workplace, current location, texts and emails sent and received, websites visited, items bought, search history, IP address and even where the cursor hovers on the screen. This is a massive amount of sensitive information that people have the right to keep confidential, as stated in the International Covenant on Civil and Political Rights. Part III, Article 17 of the Covenant provides that: “1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.”
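To make the tracking mechanism concrete, the following is a minimal sketch, not the code of any actual vendor, of how a website might assign a persistent identifier cookie to each visitor and log page views against it. It assumes a Python server using the Flask library; the cookie name, route, and log structure are hypothetical, and real tracking systems are far more elaborate.

```python
# Minimal sketch: a server assigns each new visitor a persistent identifier
# cookie, then logs every page view against that identifier.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
visit_log = {}  # identifier -> list of page-view records

@app.route("/<path:page>")
def track(page):
    uid = request.cookies.get("uid") or str(uuid.uuid4())
    # Every request quietly adds another entry to the profile held against this identifier.
    visit_log.setdefault(uid, []).append(
        (page, request.headers.get("User-Agent"), request.remote_addr)
    )
    resp = make_response(f"Serving {page}")
    # A long-lived cookie lets the identifier follow the visitor for a year.
    resp.set_cookie("uid", uid, max_age=60 * 60 * 24 * 365)
    return resp
```

Once such an identifier exists, everything logged against it, pages visited, devices used, locations inferred, accumulates into the kind of profile the following paragraphs describe.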

However, websites insist that users accept ‘essential’ cookies, and some do not allow the site to be used at all unless these cookies are accepted. This directly infringes human rights, as users are forced to accept data collection on themselves simply to carry on with their intended work. Furthermore, the option to accept only essential cookies is usually presented in plain white, whilst the option to accept ‘all cookies’ is usually highlighted in a bright colour in order to sway people towards it, since businesses want to collect as much personal information as possible. This is one way corporations manipulate people, as most do not even realise that accepting only essential cookies is an option. The most concerning example is Google’s data collection on its users: before using Google’s services, people must either accept all cookies or decline all cookies, with no option to accept only essential ones. This strongly influences people to accept all rather than decline, because declining causes major inconvenience, essential cookies are not saved either, and Google signs the user out of everything each time the browser is closed. This is a major human rights violation, as Google services such as Gmail store information about all the emails an individual sends and receives, which are private conversations, and Google Maps holds information on where a person lives, works, where their children go to school, and so on. This clearly violates the principle that “no one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence”, as the data collected on individuals is reviewed by Google itself to deliver better services, make improvements, and customise experiences. Therefore, while this makes Google’s services more efficient, effective, and usable, it still infringes on people’s privacy, their security and their right to control the details of their identity.

Additionally, Google shares data about users’ favourite products, the types of items they search for, the websites they usually visit and their favourite mobile games with third parties, usually advertisers. While Google only does this with the user’s consent to share personal information, accepting all cookies usually triggers a clause in the privacy policies of Google and other companies stating that personal information will be used for marketing purposes. Therefore, by accepting data collection through cookies, most users also consent to sharing basic data with third-party advertisers. While this information is not necessarily sensitive, it is still personal information and is therefore protected under the right to privacy. Yet Google makes $147 billion in ad revenue annually by providing information to advertisers for a more personalised ad experience based on user activity.

Between 2018 and 2020, Google fired 80 employees for abusing internal tools to access user data and spy on people, with some even sharing personal information outside the company. Other tech companies, such as Facebook, have experienced similar abuse of user data by employees; Facebook fired 52 people between 2014 and 2015. One Facebook engineer used his access to Facebook user data to track a woman he was travelling with who had left their hotel room after a fight on vacation. Another instance occurred at Snapchat, where employees improperly accessed user information through internal tools that allowed them to spy on individuals, viewing their location information, saved Snaps, and personal details including phone numbers and email addresses. It is therefore evident that the existing privacy policies and regulations, both within businesses and in national legislation governing online activity, are not sufficient to protect the international human right to privacy against data misuse.

The legal regulations on online data collection

According to the International Covenant on Civil and Political Rights, “everyone has the right to the protection of the law against such interference or attacks”. This implies that it is the responsibility of domestic governments to impose legal restrictions that safeguard individuals’ privacy rights against these corporations. However, the law as it currently stands is not sufficient to protect the right to privacy to which individuals are entitled, as there are no regulations governing the actual collection of these large volumes of data.

The European cookie law (the ePrivacy Directive) requires websites to obtain prior consent from users when employing cookies to process personal data from individuals inside the European Union. This is effective in that users’ personal data cannot be collected and reviewed without their consent. In reality, however, it offers virtually no protection to the right to privacy, as some websites do not allow access unless the cookies are accepted. This coerces people into accepting, because if they refuse the cookies they simply cannot use the website for the purpose they needed to fulfil. The Data Protection Act 2018 in the UK, by contrast, regulates how personal information is used by organisations, businesses, or the government, which must follow strict rules called ‘data protection principles’. This law, too, does not realistically stop the collection of sensitive data; rather, it tries to ensure that data is “used fairly, lawfully and transparently”, “for specified, explicit purposes” and “in a way that is adequate, relevant and limited to only what is necessary”. Therefore, the large amounts of data collected on an individual, forming a ‘profile’ containing all the knowledge known about that person, remain legal even though they infringe on privacy. The law simply ensures that this data is used appropriately so that individuals’ rights are not exploited by corporations acting as data brokers to third parties without the user’s knowledge.

The use and consequences of data sets

As AI technology is used to build data profiles on each individual containing everything known about them, a variety of serious threats to the human right to privacy arise. First, data breaches by experienced hackers have become very common; the hackers either sell the sensitive information stored in a person’s profile, chiefly social security numbers, credit card details, and bank information, on the dark web or use it for their own advantage, exploiting that person’s privacy. Cyberhackers stole nearly 294 million consumer records containing sensitive personal information according to the 2021 End-of-Year Data Breach Report from the Identity Theft Resource Center. This number was 310 million in 2020 and 883 million in 2019. Using stolen personal information, hackers can apply for credit card accounts, government benefits and loans in the victim’s name, make fraudulent purchases, and use the victim’s health insurance to access medical care. The danger of such data breaches is that an individual’s financial security is stolen and abused: victims lose large amounts of money to fraud and suffer identity theft that damages their credit and inflicts harm that takes years to undo.

Furthermore, as Google states in its privacy policy, it will share user data “if they receive government requests to share data for legal reasons”. As the International Covenant on Civil and Political Rights only prohibits “arbitrary or unlawful interference” with privacy, most countries’ domestic law provides that the right to privacy can be subject to lawful restrictions when necessary to protect national security, public order, public health or morals, or the rights and freedoms of others. The government can therefore request that the data profile on an individual be shared simply by stating that it believes the individual is a threat to national security or that the data is needed for the prevention of a crime. It can be argued that this is still a breach of the principles underlying the international right to privacy since, so long as the government states that it perceived a threat to national security, its actions remain legally protected even if that perception turns out to be false. This erodes the line separating the individual from the state, especially in a democratic society, where the balance of power is then shifted towards the state. Thus, the human right to privacy is not sufficiently regulated, as there are virtually no restrictions preventing its abuse so long as the state claims it needed to protect national security or public health. An additional harm of this data collection and profiling is that a person may be incorrectly flagged as a threat to national security or a potential offender because of inaccuracies in the AI technology used to create the profile. While such flagging helps governments identify potentially suspicious behaviour for public safety and protection, the flags could be entirely erroneous due to AI software issues, which infringes on the individual’s right to privacy because the government can then look through all of their personal data. Currently, at least 75 of 176 countries globally are actively using AI technologies for surveillance purposes.
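The scale of the false-flag problem can be illustrated with a short worked example. The figures below are invented purely for illustration; the point is that when the behaviour being screened for is rare, even an apparently accurate model produces far more false flags than true ones, and each false flag exposes an innocent person’s data profile to review.

```python
# Illustrative arithmetic only; every number here is invented for the example.
population = 1_000_000      # people whose data is screened
true_threats = 100          # genuinely dangerous individuals (rare)
sensitivity = 0.99          # share of real threats the model correctly flags
false_positive_rate = 0.01  # share of innocent people wrongly flagged

true_flags = true_threats * sensitivity
false_flags = (population - true_threats) * false_positive_rate
share_correct = true_flags / (true_flags + false_flags)
print(f"Flagged people who are actually threats: {share_correct:.1%}")
# Roughly 1%: about 99 of every 100 flagged individuals are innocent,
# yet each flag can open that person's entire data profile to scrutiny.
```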

The UN High Commissioner for Human Rights, Michelle Bachelet, warned that AI’s use as a forecasting and profiling tool could have a negative impact on the right to privacy. In addition, against the backdrop of the Pegasus spyware revelations of 2021, she warned of an “unprecedented level of surveillance across the globe by state and private actors” that she insisted is “incompatible” with human rights. The Pegasus malware infects electronic devices, enabling operators of the tool to obtain messages, photos and emails, record calls, and even activate microphones. The leak contained a list of more than 50,000 phone numbers belonging to people identified as persons of interest by clients of the company behind Pegasus, which included some governments. Authorities were using surveillance tools from a variety of sources, tools supposedly intended to promote public safety, to hack the phones and computers of people conducting legitimate journalistic activities, monitoring human rights, and expressing dissent or political opposition. This is evidently an alarming, large-scale violation of the human right to privacy, with even greater implications if worldwide government surveillance continues alongside the current stream of online data collection.

The implications of legal regulations being insufficient in protecting privacy rights

The right to privacy is a fundamental human right that is necessary for the dignity and security of persons. The use of AI systems to collect and compile large amounts of data on an individual inarguably infringes on this right. According to the International Covenant on Civil and Political Rights, governments must not only refrain from violating human rights but must also take positive steps to protect them. Yet the current legislation in place does not effectively safeguard individuals’ right to privacy; it merely seeks to stop the exploitation and misuse of their data. Business enterprises must also respect all internationally recognised human rights, yet they appear to have found a way around the right to privacy by pointing to the wide range of benefits to user experience and effectiveness. The UN Guiding Principles on Business and Human Rights do set out appropriate guidance for safeguarding human rights, under which States have the duty to protect and corporations have the responsibility to respect these principles. Although this framework is useful in preventing, addressing, and providing remedies for human rights abuses committed in business operations, it is not binding on all states and businesses. Thus, corporations in many countries still face no effective restrictions preventing them from abusing international human rights. Furthermore, the use of these data profiles by governments themselves for surveillance infringes on people’s autonomy and dignity. As the UN High Commissioner for Human Rights states, “AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights”. The law must therefore adapt to the new challenge of protecting individual privacy in a technological world governed by developing AI software, with greater transparency and stronger guardrails being key to safeguarding the rights of individuals.

 

Section 2 – Right to Peaceful Assembly

Freedom of peaceful assembly, frequently used interchangeably with freedom of association, refers to the right or capacity of individuals to gather together and collectively express, promote, pursue, and defend their collective or shared beliefs. As the world innovates, this and other human rights are being challenged by the development and widespread deployment of artificial intelligence, and the impact of these new technologies has steadily become a cause for concern. Nonetheless, new technologies have arguably impacted this fundamental human right, the right to assemble protected under Article 11 of the Human Rights Act 1998, in positive as well as negative ways.

Positive impact of new technologies

Emerging technologies, such as information and communications technology (ICT), have a potentially positive impact. New technologies can enable people to exercise their right to assembly and related human rights. Peaceful protests, the formation of networks and coalitions, and better awareness of assemblies and the reasons behind them can drive social change. These technologies may also increase transparency and accountability during protests and assemblies, for example where violations and abuses occur. ICT can be used to organise assemblies swiftly and accessibly, communicating quickly and efficiently and spreading messages to a large audience at low financial cost, which is becoming increasingly important in today’s world. In some instances, social media networks themselves are used to organise and plan assemblies. The speed with which information now travels through these technologies has made assemblies easier to arrange and supports spontaneous demonstrations. Another example is a recently developed smartphone application that coordinates the movement of protests towards critical geographic areas to maximise their effectiveness. Automated chatbots, created by civil society, are a further new technology that provides help, specifically legal aid, to protestors who are being arrested.

Moreover, to make protests and physical demonstrations more accessible, online streaming services and other sharing platforms have been created for communities who may not be able to attend or participate physically, such as people whose disabilities hinder them from attending. ICT also allows these streams to run live, often through social media platforms. Live streaming, or any form of video recording, also supports accountability, as it documents the protest in case of human rights violations, and it serves as a broadcasting and media tool when news reporters cannot or will not cover assemblies or protests. Furthermore, body cameras, typically worn by security officials at demonstrations and more generally, can help encourage accountability and ensure transparency where human rights violations or violence occur. Provided they are used continuously, legally, and legitimately, they offer a valuable record of an event and can be used as evidence; they also discourage officials from resorting to violence, an issue of growing concern. Finally, online spaces for virtual connection have proven an extremely innovative tool in recent years, as in the #MeToo movement. In essence, the rise of new artificial intelligence technologies has reinvented the right to assembly and made it more efficient, accessible, and arguably safer.

Negative impact of new technologies

Nonetheless, while the surge of artificial intelligence technologies has had positive effects on the right to assemble, the use of these new technologies, especially online platforms, has also enabled harm. There has been a sharp increase in hateful speech against some racial and religious groups. Moreover, gender-based discrimination, attacks, and violence, such as violence against women and young girls, have been occurring more frequently due to the accessibility of online platforms, and this can lead minorities to limit their participation on such platforms. This online violence and abuse has led minorities to self-censor or restrict their online actions, inadvertently infringing their freedom of peaceful assembly. Furthermore, a recent Council of Europe publication on algorithms and human rights noted that social media platforms such as YouTube and Facebook have adopted filtering mechanisms to detect violent extremist content; however, the criteria for establishing whether material fits this description have not been made public. While this initiative to stop such material is a step in the right direction, concern should be raised about the lack of transparency of the criteria, as they may infringe on people’s right to free speech and encroach on their freedom of expression. These tensions are also manifested in the new technologies used for facial recognition: while undoubtedly a powerful tool in law enforcement, facial recognition can also be weaponised, especially for recognising and permanently watching people, which infringes their freedoms of assembly, movement, and privacy.

Laws regarding Artificial Intelligence and the right to peaceful assembly

It is also worth discussing whether existing laws are being upheld and whether they reflect this innovative technology. While some regulations protect people’s rights regardless of whether assemblies or protests are physical, the law does not truly keep pace with the speed of technological innovation. In 2018, UN Human Rights Council resolution A/HRC/RES/38/11 stated ‘that although an assembly has generally been understood as a physical gathering of people, human rights protections, including for the rights to freedom of peaceful assembly, of expression and of association, may apply to analogous interactions taking place online.’ Finally, United Nations General Assembly resolution A/RES/73/173 called on ‘States to ensure that the same rights that individuals have offline, including the rights to freedom of expression, of peaceful assembly and of association, are also fully protected online’ and to ‘respect to all individuals exercising their rights to freedom of peaceful assembly, of expression and of association, online and offline, in cases of threat, harassment, violence, discrimination, racism and other violations and abuses committed against them.’ Evidently, some importance is placed on updating and improving laws regarding the right to peaceful assembly; in my opinion, however, they remain insufficient. For any democratic state, the importance of exercising the right to peaceful assembly cannot be overstated, and states must therefore make sure it is entirely feasible. Any restrictions and limits on peaceful assembly must have a legal basis and must not display any form of prejudice. Moreover, states must maintain and establish greater responsibility for the breaches and abuses that new technology brings in the context of online assemblies.

All in all, in the context of the human right to peaceful assembly, international human rights, and innovative technological advancement, the impact of these technologies has been both positive and negative. Human rights law, however, arguably struggles to keep pace with the concerns that artificial intelligence raises for the right to assemble.

 

Section 3 – Racial/Socioeconomic Discrimination

Preface: 

Regarding equality and inclusion, technology is commonly perceived as a neutral agent benefiting or harming people in areas such as employment, education, health care, and criminal justice. However, algorithms often reproduce bias because of implicit biases or human faults in their design and data. Examples of these ingrained issues include “skews, gaps,” “faulty assumptions,” “signal problems,” and “demographic non- or under-representation because of the unequal ways in which data were created or collected” (UNGA Doc. A/HRC/44/57, 2020). To explore the impacts of these issues, specific examples spanning several racial and socioeconomic areas will be analysed to illustrate both the depth and the reach of artificial intelligence.

A. Representation in the Digital Divide

1. Inaccurate Representation in Data Collection

1A. Process of Collecting Data

“Dirty data” describes “inaccurate, missing and poorly represented data, often manipulated intentionally or distorted by biases” (UNGA Doc. A/HRC/44/57, 2020). Where data collection is this inadequate, technology proves to be an ineffective and harmful tool that can lead to discrimination and inequality.

One example of inaccurate data collection is poor facial recognition. Such systems have consistently proven less apt at identifying people with darker skin, and misidentification is a common issue because of the limitations of some of this technology. Because such technology becomes more accurate as the information fed into it becomes more diverse, the lack of data collected from certain groups of people can account for these mistakes.

According to the 2018 “Gender Shades” project, inaccurate identification was most common for subjects who were female, Black, and between 18 and 30 years old (B. F. Klare, 2012). Testing in which “eight total cohorts are isolated based on gender (male and female), race/ethnicity (Black, White, and Hispanic), and age group (18–30, 30–50, and 50–70 years old)” showed that algorithms consistently produced more false matches for females with darker skin than for males with lighter skin.

Data from the National Institute of Standards and Technology (U.S. Department of Commerce) also confirms that darker-skinned females are most commonly misidentified: the study processed a total of “18.27 million images of 8.49 million people” from over 24 countries “through 189 mostly commercial algorithms from 99 developers” (Grother, P., Ngan, M. and Hanaoka, K., 2019).
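The kind of demographic audit these studies describe can be sketched in a few lines. The following is a minimal illustration, with invented data, of how false match rates might be compared across cohorts; in a real evaluation each cohort would contain thousands of image pairs rather than the handful shown here.

```python
# Minimal sketch of a per-cohort false match audit; the records are invented.
from collections import defaultdict

# Each record: (demographic cohort, model said "match", ground truth "same person")
results = [
    ("darker-skinned female", True, False),   # a false match
    ("darker-skinned female", False, False),
    ("darker-skinned female", True, False),   # another false match
    ("lighter-skinned male",  False, False),
    ("lighter-skinned male",  False, False),
    ("lighter-skinned male",  True, False),
]

false_matches, impostor_trials = defaultdict(int), defaultdict(int)
for cohort, predicted_match, same_person in results:
    if not same_person:                  # only different-person pairs can yield a false match
        impostor_trials[cohort] += 1
        if predicted_match:
            false_matches[cohort] += 1

for cohort in impostor_trials:
    rate = false_matches[cohort] / impostor_trials[cohort]
    print(f"{cohort}: false match rate {rate:.0%}")
# With data like the above, the darker-skinned female cohort shows a higher
# false match rate, which is the disparity the cited reports measure at scale.
```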

1B. Impacts of Poor Identification In Data Collection

The consequences of such mistakes can be severe, even lethal. As artificial intelligence and facial recognition are used in areas such as law enforcement surveillance and airport security, misidentification could cause problems simply because of a person’s skin colour. Repeated use of skewed biometric data could further reinforce racially motivated and unfair targeting of certain people of colour.

One such instance occurred when a Black man was arrested in Detroit, United States, for supposedly shoplifting nearly “$4,000 worth of watches from an upscale boutique.” Because of a facial recognition algorithm match, the innocent man was arrested and locked up (Murray, 2022). In situations or countries where justice may be delayed or punishments are more severe and immediate, misidentification could be life-changing, if not deadly, for instance where the death penalty is in force.

Furthermore, in contexts such as national security and immigration, misidentification of dangerous subjects could lead to potentially disastrous effects such as delayed access to safety or resources. Even in drone strikes or military targeting missions, where human involvement is limited and AI plays a primary role in the decision to fire, human or technological mistakes resulting in the misidentification of a person may be realised all too late.

1C. Solutions to the Data Collection Crisis 

As technology is ever-changing, it can be rapidly improved as more data is fed in. Just as “algorithms developed in East Asia have done better than Western systems at identifying Asian faces,” adding a diverse range of appearances and “expanding the training set should improve accuracy and reduce differential error rates” (Baker, 2022). By increasing exposure to a larger pool of subjects, systems are more likely to produce accurate recognition, in the same way that “psychological research indicates that humans recognise faces of their own race more accurately than faces of other races”, known as the “Other-Race Effect” (Phillips, 2010).

Reports consistently support this notion: “Face recognition performance on race/ethnicity and age cohorts generally improves when training exclusively on that same cohort.” As a result, training recognition systems on datasets that span all demographics is essential to resolving disparities between different races (Klare, 2012).
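One simple way to put the “expand and diversify the training set” idea into practice is to rebalance the data before training. The sketch below assumes an in-memory list of labelled face images and simply oversamples under-represented cohorts; real pipelines would instead collect new images or use augmentation, so this is only an illustration of the principle.

```python
# A minimal sketch of cohort rebalancing; "image" stands in for real image data.
import random
from collections import defaultdict

def rebalance(samples):
    """samples: list of (image, cohort) pairs; returns a cohort-balanced list."""
    by_cohort = defaultdict(list)
    for image, cohort in samples:
        by_cohort[cohort].append((image, cohort))
    target = max(len(group) for group in by_cohort.values())
    balanced = []
    for group in by_cohort.values():
        balanced.extend(group)
        # Duplicate (or, better, augment or newly collect) examples until every
        # cohort contributes the same number of training images.
        balanced.extend(random.choices(group, k=target - len(group)))
    random.shuffle(balanced)
    return balanced

data = [("img1", "cohort_a"), ("img2", "cohort_a"), ("img3", "cohort_a"), ("img4", "cohort_b")]
print(len(rebalance(data)))  # 6: both cohorts now contribute three examples each
```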

2. Power and Diversity of Influence 

As stated by the UN General Assembly on 18 June 2020, “the seven ‘super platforms’ – Microsoft, Apple, Amazon, Google, Facebook, Tencent and Alibaba – account for two thirds of the total market value of the world’s 70 largest platforms.” On the magnitude of this influence, the report finds: “Google has 90 per cent of the global market for Internet searches. Occupying two thirds of the global social media market, Facebook is the top social media platform in more than 90 per cent of the world’s economies. Amazon has an almost 40 per cent share of the world’s online retail activity.”

Accordingly, many of the cultural, economic, and political values of these companies end up shaping the rest of the world because of their influence. While such power has potential benefits, attention and legislation are also needed to prevent situations in which negative impacts result instead. Potential effects pointed out by the Special Rapporteur include “trends in previous reports on neo-Nazi and other white supremacist groups that rely on social media platforms to recruit, raise funds and coordinate.” Other examples include “radical nationalist Buddhist groups and military actors in Myanmar” exacerbating discrimination and violence “against Muslims and the Rohingya ethnic minority”, in which “Facebook’s artificial intelligence systems were unable to detect hate speech” (UNGA Doc. A/HRC/44/57, 2020).

B. Specific Acts of Racial/Socioeconomic Discrimination Enabled by AI

Although the following issues can occur without the use of technology, they are often enabled or exacerbated by it. This by no means limits the range of practices that unfairly target certain groups; rather, it serves to illustrate how technology can be used to promote discriminatory practices.

1. Issue: Predictive Policing

Predictive policing is a system that uses AI to identify common areas of crime based on historical criminal data. By deciphering patterns in the times and locations of past offences, police can use this data to predict where crimes are likely to occur, and deploy officers to those areas accordingly.

While this method of law enforcement can be used to increase efficiency, two problems tend to arise from the use of AI technology. The first arises when police treat the technology as a neutral and unbiased source of evidence when tracking down crime: in doing so, they may defend their own intentions and racial biases by deferring to systems “without opinions.” The second issue is that predictive policing creates a feedback loop of over-policing. By tracking past crimes in one specific area and continuing to target that area, police will continuously record more crime there in a way that skews the data collected in future. Historical areas of crime will only appear worse and worse when attention is concentrated wherever crime is already expected.
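The feedback loop can be made concrete with a toy simulation. All numbers below are invented: two areas have identical underlying crime, but the area that starts with slightly more recorded crime keeps attracting patrols, so more of its crime is detected and the gap in the data widens every year.

```python
# Toy simulation of the predictive-policing feedback loop; all figures invented.
underlying_crime = {"Area A": 10, "Area B": 10}   # true crime is identical
recorded = {"Area A": 12, "Area B": 8}            # Area A starts slightly over-policed

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)     # model sends patrols to the "hotspot"
    for area in recorded:
        # More patrols mean a larger share of the same underlying crime gets recorded.
        detection_rate = 0.9 if area == hotspot else 0.5
        recorded[area] += int(underlying_crime[area] * detection_rate)
    print(f"Year {year}: {recorded}")
# Recorded crime in Area A pulls further ahead each year even though the
# underlying rates never differ, so the model keeps predicting Area A.
```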

Studies on software such as PredPol conducted by Gizmodo and The Markup showed that the system disproportionately predicted crimes for neighbourhoods predominantly inhabited by working-class people, people of colour, and Black people. “For the 11 departments that provided arrest data, we found that rates of arrest in predicted areas remained the same whether PredPol predicted a crime that day or not. In other words, we did not find a strong correlation between arrests and predictions” (Sankin, 2021). 

As stated by the ACLU, “predictive policing software is more accurate at predicting policing than predicting crime. Rather than informing us where criminals will be and when they will commit crimes, these algorithms more reliably predict where the police will deploy” (Edwards, 2016).

2. Issue: Health Care

Artificial intelligence has been used to predict health care costs as a way of estimating the level of risk for certain groups of people. By gauging the costs previously spent on a patient’s care, the system labels some people as higher risk, resulting in more referrals to the healthcare system. This system may be effective in identifying high-risk individuals and helping them receive the care they need. However, healthcare algorithms may also be unintentionally biased in their programming.

As found in a June 2020 report to the UN Human Rights Council, “the algorithm failed to identify less than half the number of black patients at risk of complicated medical needs as white patients”, resulting in fewer referrals to improve their health (UNGA Doc. A/HRC/44/57, 2020).

Such mistakes can be explained in part from a socioeconomic standpoint, for example by accounting for the number of ‘no shows’ at appointments. Working individuals in less stable financial situations may find it hard to take a day off for healthcare appointments, which is one potential explanation for the discrepancy. To combat such systems, artificial intelligence and its programming must account for aspects of an individual’s background in order to address differences in healthcare.
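The core problem, using past spending as a stand-in for medical need, can be illustrated with a small sketch. The patients, figures, and cutoff below are invented: two patients are equally ill, but because less has historically been spent on one of them, the cost-based score ranks that patient as lower risk and no referral is made.

```python
# Minimal sketch of cost-as-proxy bias in a referral algorithm; data is invented.
patients = [
    # (identifier, number of chronic conditions, past annual spending in dollars)
    ("patient_1", 5, 9_000),   # high need, high recorded spending
    ("patient_2", 5, 5_000),   # same need, but less was historically spent on their care
    ("patient_3", 1, 2_000),   # low need, low spending
]

referral_cutoff = 8_000  # refer patients whose predicted cost exceeds this threshold

for identifier, conditions, spending in patients:
    predicted_cost = spending            # spending used as a stand-in for medical need
    referred = predicted_cost > referral_cutoff
    print(identifier, f"conditions={conditions}", "referred" if referred else "not referred")
# patient_1 and patient_2 are equally ill, yet only patient_1 is referred,
# because the proxy (past spending) differs between them.
```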

3. Issue: Advertisement

Using a system-generated profile of a “successful person,” job hiring and recruitment now rely on filters built from the online information gathered about candidates. By filtering on gender, ethnicity, or religion, AI technology lets employers determine how exclusive employment becomes. In certain cases, job ads are shown only to male or only to female users, depending on the target audience chosen. In doing so, discriminatory hiring practices are enabled and can be narrowed down based on interests, religion, background, ethnicity, and so on.

Ad selection can also affect housing, as certain groups can be intentionally steered towards, or away from, certain areas. Combined with selectiveness in areas such as mortgage qualification, this may produce discrimination similar to redlining. With such refined targeting abilities, the potential to discriminate against minorities or other groups of people is enabled through innovative AI technology.
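As a simple illustration of the mechanism, the sketch below shows how an audience filter, with entirely hypothetical users and field names, can quietly exclude people by protected characteristics using exactly the same machinery that targets by interest.

```python
# Minimal sketch of discriminatory ad targeting; users and field names are hypothetical.
users = [
    {"id": 1, "gender": "female", "age": 52, "interest": "engineering"},
    {"id": 2, "gender": "male",   "age": 28, "interest": "engineering"},
    {"id": 3, "gender": "male",   "age": 45, "interest": "engineering"},
]

def job_ad_filter(user):
    # The advertiser's chosen audience: interested in engineering, male, and under 40.
    return user["interest"] == "engineering" and user["gender"] == "male" and user["age"] < 40

audience = [u["id"] for u in users if job_ad_filter(u)]
print(audience)  # [2] - equally interested users 1 and 3 never see the job listing
```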

 

Section 4 – Impact of AI on the Human Rights of People With A Disability

Preface

This section discusses how the rights of people with a disability are negatively impacted by new algorithm-driven recruitment and hiring processes. We will pay particular attention to how these AI-driven recruitment and hiring processes may infringe on the rights of people with a disability and result in their being subject to discrimination in the workplace.

1. Human rights – people with a disability

This paper will use as its core human rights text the Convention on the Rights of Persons with Disabilities (CRPD). The Convention is arguably the most important international instrument of its kind for outlining and attempting to establish a framework for the protection of the rights of people with a disability, given its position as a major international agreement which, as of 6 May 2022, had 185 ratifications or accessions and 164 signatories.

This section will therefore apply the CRPD’s definition of persons with disabilities, which encompasses those “who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others”. While this paper concentrates on where AI might be detrimental to the full enjoyment of the human rights of persons with a disability, there are many well-documented benefits that AI as a technology can deliver to people with disabilities (Chakraborty, 2021).

Of particular importance for our analysis of the human rights implications of AI for people with a disability are Articles 2 and 5 of the Convention. In the case of Article 2 (Definitions), the paper will apply the definition of “discrimination on the basis of disability”. This is defined as “any distinction, exclusion or restriction on the basis of disability which has the purpose or effect of impairing or nullifying the recognition, enjoyment or exercise, on an equal basis with others, of all human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field…”.

Article 5 of the Convention (Equality and Non-Discrimination) will be the primary reference article for evaluating the human rights implications of AI for people with a disability. Of special relevance for this paper is the article’s emphasis on the equality of all persons “before and under the law”, as well as its stipulations that States Parties shall:

i) Forbid all disability-based discrimination and guarantee people with disabilities are provided with “equal and effective legal protection against discrimination on all grounds;”

ii) Undertake the appropriate measures “to promote equality and eliminate discrimination” and

iii) Recognise that “[s]pecific measures” which are necessary “to accelerate or achieve de facto equality of persons with disabilities” shall not be considered discrimination under the terms of the Convention.

2. Recruitment process – eliminating discrimination and ensuring equality

In terms of the potentially negative, discriminatory implications of AI, we will focus on how AI may be used in an inequitable manner against persons with disabilities in the recruitment process. Here, an issue of particular concern is how algorithms are applied and their impact upon the impartiality and fairness of hiring procedures.

When an algorithm-driven hiring system is used, the first step applicants encounter is often the requirement to respond to chatbots before being able to submit their resume. Issues that can arise here include sight-impaired candidates having difficulty navigating the chatbots if they have not been made screen-reader accessible, which can prevent them from submitting their resume at all. In such instances, employers and job recruiters need to pay attention to Article 5 of the CRPD and implement the required AI enhancements or measures to “promote equality and eliminate discrimination.”

At the resume screening stage, there is also a risk that the application of algorithms will result in discrimination against people with disabilities. Where the AI ‘learns’, through the identification of patterns, to favour candidates who have completed a college internship, that being a trait associated with successful past hires, it can discriminate against applicants with a disability. In many instances people with disabilities may have been unable to find a college internship in an accessible environment or an opportunity to do one online. Consequently, they may submit information about equivalent experiences, but if the AI is unable to process this information effectively and it is never assessed by a human recruiter, they may be rejected without due consideration being given to their qualifications and experience. Again, there is a responsibility for employers and recruiters to pay heed to Article 5 of the CRPD to ensure equality for people with a disability.
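The pattern just described can be sketched in a few lines. In the illustration below, which uses invented applicants and a deliberately simplified ‘learned’ rule rather than any real vendor’s model, the screener weights the internship feature because it co-occurs with past hires, so an applicant listing equivalent experience instead scores zero and may never reach a human reviewer.

```python
# Minimal sketch of a resume screener learning a proxy feature; data is invented.
past_hires = [
    {"internship": 1, "equivalent_experience": 0, "hired": 1},
    {"internship": 1, "equivalent_experience": 0, "hired": 1},
    {"internship": 0, "equivalent_experience": 1, "hired": 0},  # few such examples exist
]

def learned_weight(feature):
    # Crude "learning": how often the feature appears among successful past hires.
    hired = [h for h in past_hires if h["hired"]]
    return sum(h[feature] for h in hired) / len(hired)

def score(applicant):
    return sum(learned_weight(f) * applicant[f]
               for f in ("internship", "equivalent_experience"))

applicant_a = {"internship": 1, "equivalent_experience": 0}
applicant_b = {"internship": 0, "equivalent_experience": 1}  # internship was inaccessible
print(score(applicant_a), score(applicant_b))  # 1.0 vs 0.0: b is filtered out automatically
```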

In an AI-controlled hiring process, a frequent step is a virtual preliminary interview, which may take place prior to the standard interview and can be conducted with virtually no human oversight. The AI performing this function will have been trained on its ‘observation’ of interviews conducted by human interviewers; should those human interviewers have exhibited bias, there is a serious risk that the AI will have ‘learnt’ and acquired those biases. In addition, there is the well-known phenomenon of “automation bias”, in which humans believe that AI is less biased than humans simply because it is AI. This is dangerous because it helps biased AI go unnoticed and makes those using the AI less likely to examine its hiring history for signs of bias, as they simply assume it is unbiased. Furthermore, there is the problem of algorithms that are inappropriately designed for conducting virtual interviews with persons with a disability. Algorithms programmed to analyse facial movement and word choice in order to evaluate candidates’ personalities and cultural fit have generally been developed to work on people without a disability. As such, they will not be suitable for candidates with a disability, such as deaf candidates, neurodivergent candidates, and other candidates whose disabilities affect their facial appearance and movements or how they speak. This could result in these candidates not receiving equitable treatment and therefore needs to be addressed to ensure that candidates with a disability are not discriminated against, as per Article 5 of the CRPD.

Finally, in terms of recruitment and AI, there is the pre-employment testing process, which often takes the form of a gamified pre-employment test. This is a test conducted before a candidate is hired which may involve the prospective jobseeker matching “photographs of ideal employee’s facial expressions that indicate empathy to candidate’s facial expressions.” A rote application of an AI-developed methodology can lead to qualified neurodivergent candidates being unfairly eliminated, as they may score quite poorly on such a test given its format. If there is no human input at this stage and the AI has not been developed to take account of their particular situation, these applicants might find themselves rejected despite glowing references attesting to their empathy and social skills. Again, companies engaging workers through the application of AI need to adhere to the tenets of Article 5 of the CRPD to avoid unfair discrimination against persons with a disability and to ensure they receive treatment equitable to that of people without a disability.

 

Discussion

As is clear from our look at the recruitment process, one of the major reasons why AI algorithms can be discriminatory is their data sets, which frequently under-represent people with disabilities and thus prevent the algorithm from treating them in an equitable manner. This is a well-documented problem, as it results in ‘inaccurate’ data sets and/or discriminatory algorithms which fail either to consider reasonable accommodation for people with a disability or to acknowledge their specific needs (Moss, 2021). It can arise from the use of data sets of previous hires, which in many companies, especially older ones, may not be very diverse. This is especially true in terms of disability: even today, despite increased recognition of the importance of equal rights, people with a disability in the United States have an employment rate of only 19.1%, compared to 63.7% for people without a disability.

Another potential reason why AI discriminates against people with disabilities is that the indicators used to estimate how well a candidate would do in the job are all secondary: they do not measure actual job performance but factors which are supposed to contribute to it. The selection of such criteria is a matter of personal judgment by those designing the AI and may not give a clear or unbiased picture, particularly for people with a disability, of which criteria most contribute to enhanced job performance. This, coupled with AI’s proclivity for shortcuts, could mean that the indicators the AI uses have little to do with actual job performance. For instance, in one case an AI determined that the two best indicators of job performance were whether the candidate’s name was Jared and whether they played high school lacrosse.

Based on this analysis, it is clear that while AI can potentially deliver benefits to people with a disability, it is imperative that the State, as well as companies and recruitment agencies, work to eliminate discrimination and ensure that the employment rights of this community are protected, particularly during the recruitment process.

 

Conclusion

To conclude, with regard to international human rights and developing artificial intelligence technology, the relevant areas of the law, both international and domestic, address the positive and negative aspects of AI differently and correspondingly vary in their effectiveness and sufficiency.

In respect of the right to privacy, while the International Covenant on Civil and Political Rights sufficiently defines an individual’s entitlement to this right, local and regional enforcement tends to fall short. The use of AI systems, including cookies and data profiling, therefore results in the infringement and abuse of the right to privacy by both governments and businesses, because States are failing in their responsibility to safeguard human rights from corporate misuse.

In respect of the right to freedom of assembly, artificial intelligence has had both positive and negative effects on human rights. Whilst there are many positives, such as accessibility, inclusivity and accountability, it is undeniable that the advancement of technology has also opened up online assemblies and demonstrations to danger and hateful speech. However, once these infringements of rights are monitored more closely and those responsible are held accountable, AI has an overall more positive effect on the human right to freedom of assembly.

In respect of racial and socioeconomic discrimination, the groups most commonly targeted or harmed unfairly through the use of AI include women, Black people, other minorities, and poorer individuals. While many issues arise in areas such as representation, facial recognition, policing and advertisement, actively adding a diverse range and depth of information will help improve these systems for the future.

In respect of the rights of persons with disabilities, there are several problems today regarding discrimination against people with disabilities at the hands of current AI-driven hiring systems, and this discrimination runs through all levels and stages of the hiring process. The solution is to keep the perspectives of people with disabilities at the centre of the AI creation process and at the centre of drafting legislation for the regulation of AI.

 

Bibliography

Section 1 – Right to privacy 

Bastone, N. Multiple Snap employees reportedly accessed user data improperly — including location information, phone numbers, and saved Snaps. Insider. (2019). <https://www.businessinsider.com/google-fired-employees-abusing-user-data-sharing-personal-info-vice-2021-8>

Canales, K. Google fired 80 employees for abusing user data and spying on people, with some even sharing personal information outside the company, a new report says. Insider. (2021). <https://www.businessinsider.com/google-fired-employees-abusing-user-data-sharing-personal-info-vice-2021-8>

Cookiebot. EU cookie law | ePrivacy Directive and cookies | Compliant EU cookie consent with Cookiebot CMP. (2021). <https://www.cookiebot.com/en/cookie-law/>

OHCHR. Use of spyware to surveil journalists and human rights defenders: Statement by UN High Commissioner for Human Rights Michelle Bachelet. Office of the High Commissioner for Human Rights, Geneva. (July 2021). <https://www.ohchr.org/en/2021/07/use-spyware-surveil-journalists-and-human-rights-defendersstatement-un-high-commissioner>

GOV.UK. Data protection. (2011). <https://www.gov.uk/data-protection#:~:text=The%20Data%20Protection%20Act%202018%20is%20the%20UK’s%20implementation%20of,used%20fairly%2C%20lawfully%20and%20transparently>

ITRC. Annual Data Breach Report. (2021). <https://www.idtheftcenter.org/publication/2021-annual-data-breach-report-2/>

Jackson, S. A Facebook engineer abused access to user data to track down a woman who had left their hotel room after they fought on vacation, new book says. Insider. (2021). <https://www.businessinsider.com/facebook-fired-dozens-abusing-access-user-data-an-ugly-truth-2021-7>

Kerry, C. F. Protecting privacy in an AI-driven world. (2020). <https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/>

Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., and Briggs, M. Artificial intelligence, human rights, democracy, and the rule of law: a primer. The Council of Europe. (2021). <https://www.turing.ac.uk/research/publications/ai-human-rights-democracy-and-rule-law-primer-prepared-council-europe>

Mijatović, D. Safeguarding human rights in the era of artificial intelligence. Commissioner for Human Rights. Human Rights Comments. (2018). <https://www.coe.int/en/web/commissioner/view/-/asset_publisher/ugj3i6qSEkhZ/content/in-the-era-of-artificial-intelligence-safeguarding-human-rights/pop_up?inheritRedirect=true#:~:text=AI%20in%20fact%20can%20negatively,they%20may%20change%20over%20time>

Neuman, S. The U.N. Warns That AI Can Pose A Threat To Human Rights. npr. (2021). <https://www.npr.org/2021/09/16/1037902314/the-u-n-warns-that-ai-can-pose-a-threat-to-human-rights>

UNGA. Promotion and protection of the right to freedom of opinion and expression: Note by the Secretary-General. UNGA 73rd Session A/73/348. (August 2018). <https://freedex.org/wp-content/blogs.dir/2015/files/2018/10/AI-and-FOE-GA.pdf>

UNGA. Right to privacy: Note by the Secretary-General. UNGA 73rd Session A/73/438. (October 2018). <https://documents-dds-ny.un.org/doc/UNDOC/GEN/N18/324/46/PDF/N1832446.pdf?OpenElement>

UNGA. Artificial intelligence and privacy, and children’s privacy: Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci. UNHRC 46th Session A/HRC/46/37. (March 2021). <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G21/015/65/PDF/G2101565.pdf?OpenElement>

UNGA. The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights. UNHRC 48th Session A/HRC/48/31. (September 2021). <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G21/249/21/PDF/G2124921.pdf?OpenElement>

UNHR Office of the High Commissioner. International Covenant on Civil and Political Rights. United Nations. (2021). <https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights>

UN News Global Perspective Human Stories. Pegasus: Human rights-compliant laws needed to regulate spyware. United Nations. (2021). <https://news.un.org/en/story/2021/07/1096142>

UN News Global Perspective Human Stories. Urgent action needed over artificial intelligence risks to human rights. United Nations. (2021). <https://news.un.org/en/story/2021/09/1099972>

 

Section 2 – Right to freedom of Assembly

Ashraf C, ‘Artificial Intelligence and the Rights to Assembly and Association’ (Taylor & Francis Online, 2020) <https://www.tandfonline.com/doi/full/10.1080/23738871.2020.1778760?scroll=top&needAccess=true> accessed 21 August 2022

Mijatović D, ‘In the Era of Artificial Intelligence: Safeguarding Human Rights | OpenDemocracy’ (openDemocracy, 2018) <https://www.opendemocracy.net/en/digitaliberties/in-era-of-artificial-intelligence-safeguarding-human-rights/> accessed 21 August 2022

UNGA ‘Impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests*’ UNHRC 44th Session A/HRC/44/24 (24 June 2020), online <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G20/151/06/PDF/G2015106.pdf?OpenElement>

 

Section 3 – Racial/Socioeconomic Discrimination

Baker S, ‘The Flawed Claims about Bias in Facial Recognition’ (Lawfare, 2 February 2022) <https://www.lawfareblog.com/flawed-claims-about-bias-facial-recognition>

Edwards E, ‘Predictive Policing Software Is More Accurate at Predicting Policing than Predicting Crime’ (American Civil Liberties Union, 31 August 2016) <https://www.aclu.org/blog/criminal-law-reform/reforming-police/predictive-policing-software-more-accurate-predicting>

Grother P, Ngan M and Hanaoka K, ‘Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects’ (National Institute of Standards and Technology 2019) <https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf>

Klare BF and others, ‘Face Recognition Performance: Role of Demographic Information’ (2012) 7 IEEE Transactions on Information Forensics and Security 1789 <http://openbiometrics.org/publications/klare2012demographics.pdf>

Murray S, ‘How AI Can Help Combat Systemic Racism’ (IDSS, 17 March 2022) <https://idss.mit.edu/news/how-ai-can-help-combat-systemic-racism/> accessed 15 August 2022

Phillips P and others, ‘An Other Race Effect for Face Recognition Algorithms’ (2010) <https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=904972>

Sankin A, Mattu S, Gilbertson A and others, ‘How We Determined Predictive Policing Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods’ (Gizmodo, 2 December 2021) <https://gizmodo.com/how-we-determined-predictive-policing-software-dispropo-1848139456>

UNGA ‘Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance*’ UNHRC 44th Session A/HRC/44/57 (18 June 2020), online <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G20/151/06/PDF/G2015106.pdf?OpenElement>

 

Section 4 – Impact of AI on the Human Rights of People with a Disability

Bureau of Labor Statistics, U.S. Department of Labor, The Economics Daily, 19.1 percent of people with a disability were employed in 2021 at <https://www.bls.gov/opub/ted/2022/19-1-percent-of-people-with-a-disability-were-employed-in-2021.htm>

Chakraborty N, Mishra Y, Bhattacharya R and Bhattacharya B, ‘Artificial Intelligence: The Road Ahead For The Accessibility Of Persons With Disability’ [2021] Materials Today: Proceedings; https://doi.org/10.1016/j.matpr.2021.07.374

‘Convention On The Rights Of Persons With Disabilities (CRPD) | United Nations Enable’ (Un.org, 2022) <https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html> accessed 24 August 2022

Dave Gershgorn, ‘Companies Are on The Hook If Their Hiring Algorithms Are Biased’ Quartz (2018) <https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/>

Moss H, ‘Screened Out Onscreen: Disability Discrimination, Hiring Bias, And Artificial Intelligence’ (2021) 98 SSRN Electronic Journal <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3906300>

‘How Good Candidates Get Screened Out’ (Peatworks, 2022) <https://www.peatworks.org/ai-disability-inclusion-toolkit/how-good-candidates-get-screened-out/>

OHCHR, Human Rights Council Forty-ninth session 28 February–1 April 2022 ‘Artificial Intelligence and The Rights of Persons with Disabilities – Report of The Special Rapporteur on The Rights of Persons with Disabilities’ (28 December 2021) UN Doc A/HRC/49/52

PEAT (Partnership on Employment & Accessible Technology – US), ‘Risks Of Bias And Discrimination In AI Hiring Tools – Peatworks’ (Peatworks, 2022) <https://www.peatworks.org/ai-disability-inclusion-toolkit/risks-of-bias-and-discrimination-in-ai-hiring-tools/> accessed 24 August 2022