Supervised by: Almas Shaikh, DPhil Law (Oxon). Almas is a human rights lawyer from India, currently reading for a DPhil at the University of Oxford. She has been awarded the Barbara Mills Graduate Scholarship in Law, the Alpa (Rome) Scholarship, and the LMH Postgraduate Scholarship. Her research focuses on the role of intersectionality in Indian affirmative action, under the supervision of Professor Sandra Fredman. She completed her LLM in International Law at the Graduate Institute of International and Development Studies, Geneva, and her undergraduate BA LLB (Hons.) at the National University of Advanced Legal Studies, Kochi. Almas is qualified with the Bar Council to practise in India and has worked in dispute resolution.

Nationally, she has worked with the Centre for Law and Policy Research, where she gained experience as a litigation lawyer in the High Courts and the Supreme Court on issues of trans rights, caste, and constitutional law. She has also been involved in grassroots and high-level advocacy on legislation of public interest and its implementation. Internationally, she has worked with UNITAR, the UN International Law Commission, a Security Sector Organisation in Geneva, and the Blavatnik School of Government at the University of Oxford. Her research interests include anti-discrimination law, intersectionality, gender law, and human rights. She currently serves as the Communications Director at the Oxford Human Rights Hub and convenes the Criminal Law Discussion Group in the Faculty of Law.

Abstract 

Facial recognition is an application of artificial intelligence that has spread across the world and emerged as a contentious issue in America, amid concerns about its potential to perpetuate discrimination against minority communities and its use by the military. Installed in a majority of public places, facial recognition has strayed from its initial purpose and has discriminated against minority groups to a great extent. The indiscriminate use of facial recognition systems raises serious questions about privacy and equal treatment under the law. While intended to enhance security and efficiency, these systems have been criticised for disproportionately impacting marginalised groups, such as racial and ethnic minorities. The biases embedded in the algorithms used to train these systems can lead to inaccurate identification and wrongful targeting, reinforcing existing societal prejudices. The killing of George Floyd and the response to it illustrate how biases embedded in policing practices and technologies, including facial recognition, can have detrimental consequences for minority groups. As a result, minority communities are at a higher risk of being subjected to wrongful arrests, unwarranted surveillance, and biased treatment.

Introduction

Born of military research, artificial intelligence (AI) technologies have astounded American citizens. In particular, developments in facial recognition technology have vastly increased efficiency in everyday tasks and in the management of valuable goods. However, as the technology grows, the biases caught up in AI's development remain an obstacle for minority groups in the United States.

AI has transformed society and its perspectives, and facial recognition software has begun to reproduce the prejudices of its creators. The extreme polarisation of the media is not only consumed by the public but also shapes the data from which AI learns and how it interprets that data. In reflecting societal biases, facial recognition proves devastating wherever prejudice against minorities already exists. Such predilections accumulate, impeding American citizens' inalienable rights to Life, Liberty, and the Pursuit of Happiness; yet such discrepancies rarely go unnoticed. Americans continually exercise their right to protest, attempting to surmount legal obstructions such as the Patriot Act and pressing for the George Floyd Justice in Policing Act, which gave voice to American concern about systemic racism.

The expansion of facial recognition software as an extension of AI disproportionately targets minority groups, as is starkly seen in military endeavours, in the treatment of different racial and gender demographics, and in the legal developments that have resulted from such prejudice.

In the sections that follow, we examine the extent to which facial recognition technology adversely affects minority groups in the United States. Section 2 assesses the use of facial recognition technology in the military. Section 3 discusses gender and race discrimination in light of facial recognition technology. Section 4 examines the legal developments regarding facial recognition technology in the US.

Military

The use of facial recognition technology in military settings has the potential to perpetuate discrimination and infringe upon civil liberties, disproportionately affecting and harming minority populations, and necessitating the implementation of comprehensive ethical guidelines to mitigate these concerns.

The rapid advancement of AI technology has had a profound impact on the military sector, with applications expanding from the US Department of Defense training computers to mimic human reasoning in the 1960s to the US military's use of drones in recent times (Military Embedded Systems, 2019). Alongside these advancements, facial recognition technology has gained prominence in many countries' military operations, with applications in areas such as surveillance and force protection. While the initial appeal of using AI and facial recognition in the military may seem undeniable, closer examination reveals that the severe consequences significantly outweigh the potential benefits.

For numerous countries worldwide, investing in facial recognition on the battlefield appears an obvious choice. The United States, for instance, employs drones, reducing the risk to military personnel by letting technology carry out operations. There is also a belief that AI and facial recognition could mitigate the biases and errors that humans are prone to and which, in the military context, can be fatal. This notion, however, is far from accurate.

It is crucial to recognise that AI technology reflects the information provided by humans, making it susceptible to the biases and discrimination prevalent in society, particularly against women and minority communities (Scientific American, 2020). The development of Autonomous Weapon Systems (AWS), automated machinery capable of identifying targets based on descriptions or specific classes of people, introduces a high risk of errors and potential harm to minorities (International Committee of the Red Cross, 2018). Facial recognition algorithms often exhibit high false positive rates for darker-skinned individuals, leading to misidentifications and harmful consequences (LA Times, 2023).

This underrepresentation of minorities in the underlying data can lead to facial recognition systems failing to recognise people of colour and causing harm on the basis of skin colour (LA Times, 2023). Additionally, the lack of transparency and understanding regarding the target selection process employed by AWS exacerbates the issue, making it challenging to rectify these faults (International Committee of the Red Cross, 2018). Consequently, the use of such technology on the battlefield should be avoided.

Evidence from UK police forces revealed that NEC's NeoFace technology, when used for facial recognition, produced an average false recognition rate of 95 percent (International Committee of the Red Cross, 2018). Such a level of inaccuracy is deeply concerning given the lasting and devastating effects the technology can have. The negative impacts of NeoFace technology on minority groups and women far outweigh any potential benefits: it further segregates these groups from society while leaving lighter-skinned men largely unaffected. This injustice must be addressed to protect the civil liberties of marginalised groups. The flawed and error-prone nature of facial recognition is its fatal weakness, one that could lead to the loss of innocent lives solely because of the colour of their skin.
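
To see why a figure like this can arise even from a system whose per-comparison error rates look small, consider a simplified calculation. The numbers below are illustrative assumptions, not figures from the NeoFace trials: the point is that when genuine watchlist members are a tiny fraction of the faces scanned, most alerts will be false.

```python
# Illustrative sketch: why most "matches" can be false even when per-comparison
# error rates look small. All numbers are assumptions chosen for illustration;
# they are not taken from the NeoFace trials cited above.

faces_scanned = 100_000      # faces captured by a deployment over some period
watchlist_prevalence = 1e-4  # assume 1 in 10,000 scanned people is genuinely on the watchlist
true_positive_rate = 0.90    # assumed chance of correctly flagging a watchlist member
false_positive_rate = 0.005  # assumed chance of wrongly flagging anyone else

actual_targets = faces_scanned * watchlist_prevalence                     # 10 people
true_matches = actual_targets * true_positive_rate                        # ~9 correct alerts
false_matches = (faces_scanned - actual_targets) * false_positive_rate    # ~500 incorrect alerts

false_share = false_matches / (true_matches + false_matches)
print(f"Share of alerts that are false: {false_share:.1%}")  # roughly 98% under these assumptions
```

Under these assumed numbers, roughly 98 percent of alerts are false even though the system wrongly flags an individual only 0.5 percent of the time; and if the false positive rate is higher for darker-skinned faces, as the studies cited above report, that burden of false alerts falls disproportionately on them.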

However, the pursuit of improved military systems has prompted governments globally, including China's, to expedite the integration of AI and facial recognition technologies into their militaries. This "arms race" mentality has pushed the US military to advance its own technologies while overlooking the ethical concerns surrounding facial recognition and its potentially disastrous effects (CSIS, 2022). Over-reliance on the technology poses a catastrophic threat to civil liberties, particularly for minority communities, and countries must weigh this danger when seeking to enhance their military capabilities.

The United States is promoting the use of facial recognition through RealNetworks, which has been awarded a $729,056 contract to provide SAFR drones equipped with facial recognition capabilities, commonly known as "killer robots" (Vice, 2023). These drones are notorious for their inaccuracy, and their deployment could have catastrophic consequences for people of colour. Factors such as large databases and poor lighting conditions contribute to inaccuracies in facial recognition (Newsweek, 2023). In a military setting, the consequences of these seemingly minor problems can be devastating. Addressing them is essential to safeguard the civil rights of marginalised and racialised groups, as the same algorithmic racial biases present in American law enforcement will be reflected in US military facial recognition systems (Georgetown Security, 2020).

For example, in February 2019, Nijeer Parks was accused of shoplifting and of attempting to hit a police officer with his car in New Jersey. He was falsely identified by a facial recognition system despite living 30 miles from the crime scene. Mr Parks spent 10 days in jail and paid over $5,000 to defend himself (New York Times, 2020). These measures should never have been necessary; he was placed in this situation purely because of the colour of his skin. Such breaches of civil liberties should not be allowed, and facial recognition systems must be improved if they are to be used in such ways.

In the aftermath of the 9/11 attacks, global security measures escalated significantly amid widespread fear, leading to a surge in surveillance initiatives, reflected in the swift introduction of 17 security-focused bills in the United States Congress within weeks of the attacks (Introna and Wood, 2004). Considerable advancements have since been made in facial recognition for surveillance, with the US military investing over $345 million in biometric database technology over the past decade (Gershgorn, 2019). The exorbitant funding allocated to the military also raises concerns about opportunity costs, as it could have been directed to other purposes, such as education.

The expansive surveillance network now in place continually collects biometric data on millions of individuals across the United States. This data enables the US military to readily identify and monitor individuals of interest by placing them on the Biometrically Enabled Watchlist (BEWL) and subsequently subjecting them to scans by surveillance systems on battlefields. Disturbingly, between 2008 and 2017 the Department of Defense (DoD) added over 213,000 individuals to the BEWL and, over the same period, reportedly killed or arrested more than 1,700 people on the basis of biometric information, according to the Government Accountability Office (USGAO, 2017; Gershgorn, 2019).

The reliability of this biometric data is itself a concern, as it relies heavily on facial recognition systems that have been shown to be flawed. It is therefore crucial to ask how many innocent individuals may have been wrongfully prosecuted because of inaccuracies in these systems.

Furthermore, it is important to address the issue of surveillance disproportionately affecting communities of colour. In 2020, numerous Florida police departments used facial recognition systems to identify participants in the Black Lives Matter protests, mistakenly regarding them as suspicious and adding their faces to law enforcement databases (Center for Democracy and Technology, 2022).

Ultimately, it is vital for the US government to address the problems surrounding facial recognition before deploying it in the military, as the technology can disproportionately discriminate against already marginalised groups whose civil liberties must be protected.

In this section, the uses and effects of facial recognition technologies within the military have been discussed. The following section will discuss the axes of gender and race discrimination in light of facial recognition technologies.

Axes of Discrimination

a. Gender

Facial recognition software, in encoding gendered assumptions, creates a disparity in treatment between women and non-heteronormative people on the one hand and their cis-male counterparts on the other.

The lack of representation of women in computer science has produced bias in facial recognition technology through the inputs used to train machine-learning models. Inequitable access to computing discourages female participation in developing AI technologies. Men came to dominate the advancement of AI not out of obligation but through the exclusion of other demographic groups. Women in particular were systematically excluded from AI's conception because society at the time seldom expected women to attend school, assigning them instead to domestic roles (Chauhan and Kaur, 2022). This system has prioritised men and perpetuated the lack of opportunity for female involvement in the development of modern AI, which is reflected in glaring deficiencies in facial recognition (FR) software (Rankin, 2020). Discourse on AI often claims that FR is "universally" applicable (Barrett, 2020); this conjecture implies that all groups are equally vulnerable to surveillance when, in fact, the contrary is true. With men making up nearly 80 percent of academics working in AI at leading US universities (Hall and Clapton, 2021), Joy Rankin of the AI Now Institute, drawing on the institute's Discriminating Systems report, identifies the feedback loop that arises when men are primarily responsible for building algorithmic technologies: considerable, disturbing harm directed toward women (Rankin, 2020). For example, the National Institute of Standards and Technology (NIST) found wrongful criminal identification to be between two and five times more likely for women than for men, exposing women to unwarranted law-enforcement searches in the name of security (Barrett, 2020). The most notable use of FR is identity verification, which identifies an individual by cross-referencing images on file (Feeney, 2022). However, gender bias built into this technology results in a higher prevalence of false positives when identifying criminal suspects, skewed toward women. Hall and Clapton accordingly discuss the detrimental effects of gender bias in AI development and its threat to civil liberties through a high propensity to misidentify women (Hall and Clapton, 2021). The accuracy of FR software depends on training models with pertinent, readily available data. Because women have had little involvement in the development of FR technology, they are vastly underrepresented in model inputs and, in law-enforcement applications, are therefore at higher risk than men of being misidentified as criminals (Zalnieriute, 2021). These misidentifications, rooted in patriarchy, disproportionately harm women.
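
As a concrete illustration of the identity-verification step described above, the sketch below reduces it to its core decision: compare the distance between two face embeddings against a single global threshold. The "embedding model" here is a fixed random projection standing in for a trained network; it is an assumption for illustration, not any vendor's actual system.

```python
# Minimal sketch of 1:1 face verification: embed two images and compare the
# distance between the embeddings to a threshold. The random projection below
# is an illustrative stand-in for a learned face-embedding network.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(128, 64 * 64))  # stand-in for a trained model's weights

def embed(face_image: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale face image to a unit-length 128-dimensional vector."""
    vector = PROJECTION @ face_image.reshape(-1)
    return vector / np.linalg.norm(vector)

def same_person(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.4) -> bool:
    """Declare a match when the cosine distance between embeddings is below the threshold."""
    cosine_distance = 1.0 - float(embed(probe) @ embed(reference))
    return cosine_distance < threshold

# Two images of the "same" face (one with slight noise) fall within the threshold.
face = rng.random((64, 64))
print(same_person(face, face + 0.01 * rng.random((64, 64))))  # True in this illustration
```

The threshold is global, but the geometry of the embedding space is not: one plausible and widely discussed mechanism for the disparities reported above is that a model trained on data in which women are underrepresented produces less separable embeddings for women, so the same threshold yields more false matches for them than for men.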

The sophistication of FR algorithms corresponds to the quantity and quality of the data used as input. As models are trained, the precision of FR increases and can be used to infer specific traits, such as sexual orientation. In a study by Kosinski and Wang, an algorithm given a single image correctly distinguished homosexual from heterosexual men 81 percent of the time and homosexual from heterosexual women 74 percent of the time, compared with 61 percent and 54 percent for human judges; its accuracy rose to 91 percent and 83 percent when given five facial images per person (Kosinski and Wang, 2018). Predictions of this accuracy enable the detection of the most intimate characteristics of members of the LGBTQ+ community, threatening their rights to privacy and security. Governments and private companies could use FR technology in ways that increase the likelihood of employment discrimination, a practice contested in Bostock v. Clayton County (2020) and Price Waterhouse v. Hopkins (1989) (Wilkinson, 2021). In Bostock, the Supreme Court extended protection against employment discrimination under Title VII of the Civil Rights Act of 1964 to sexual orientation and gender identity. Individuals may also invoke the Fifth Amendment's protection against self-incrimination where their sexuality, and therefore their credibility, is questioned (Hall and Clapton, 2021). The systematic exclusion of large swaths of society perpetuates bias that becomes ingrained in computer algorithms, which in turn marginalise members of the LGBTQ+ community. Given the vast future applications of this technology, discriminatory practices against the LGBTQ+ community are likely to remain institutionalised. Members of the community are also at heightened risk of privacy violations stemming from infringements of the right to "define one's own concept of existence" recognised in Lawrence v. Texas (2003) (Wilkinson, 2021). Although advances in FR algorithms benefit from an abundance of data, their costs to the basic tenets of human rights outweigh any such rewards.

Os Keyes, a PhD candidate in the University of Washington's Department of Human Centered Design & Engineering, has drawn on their expertise in gender and technology to show that developers do not train most FR systems to accommodate gender on a non-binary scale (Barrett, 2020). Automated Gender Recognition (AGR), a subsidiary technology of FR that uses machine learning to infer the gender of the subject of a photo or video (Feeney, 2022), cannot classify individuals except as male or female. Such assessments of gender ignore the impact of androgyny on gender classification (Cairns, 2021) and pose an even greater risk to transgender and gender-nonconforming people. The refusal to acknowledge transgender people as valid, and therefore to train FR systems to recognise them, signifies an attempted erasure of non-heteronormative identities (Barrett, 2020). Certain jurisdictions feign indifference to the matter yet implement controversial "bathroom bills" that label customers and employees as male or female, conforming to a binary understanding of gender (Grose, 2019). Private businesses, universities, and schools may follow suit, segregating spaces by gender under the guise of security enhancement and privacy protection to avoid criticism (Feeney, 2022). AGR-equipped bathrooms detect "violations" (that is, transgender individuals visiting the restroom most consistent with their identity) and involve the police, from whom transgender individuals have little protection (Grose, 2019).
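
The structural point about AGR can be made very simply: a classifier can only ever output the categories it was built with. The fragment below is a minimal sketch of that limitation; the tiny stand-in model and its weights are illustrative assumptions, not a real AGR system.

```python
# Minimal sketch of why typical automated gender recognition (AGR) cannot
# represent nonbinary people: the output space is fixed when the model is built.
import numpy as np

LABELS = ["female", "male"]  # the only categories this classifier can ever emit

def toy_agr_scores(face_embedding: np.ndarray) -> np.ndarray:
    """Stand-in classifier returning one score per label (illustrative weights)."""
    weights = np.array([[0.3, -0.2, 0.1],
                        [-0.1, 0.4, 0.2]])  # shape: (2 labels, 3 features)
    return weights @ face_embedding

def classify_gender(face_embedding: np.ndarray) -> str:
    scores = toy_agr_scores(face_embedding)
    return LABELS[int(np.argmax(scores))]  # every subject is forced into one of two boxes

print(classify_gender(np.array([0.5, 0.1, 0.9])))  # always "female" or "male", never anything else
```

However well such a model performs on the two categories it knows, it has no way to return an answer outside them, which is why the misclassification of agender, genderqueer, and nonbinary people discussed below is total by construction.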

Moreover, with FR tools at their disposal, corporations play a significant role in sustaining employment discrimination against the transgender community: 53 percent of transgender individuals report that discrimination has harmed their chances of employment. Uber, for instance, imposes a binary standard of gender identification, so that nonconformity leads to failure in identity verification (Barrett, 2020). Gender recognition tools built by IBM, Amazon, and Microsoft misidentified transgender men as women in 38 percent of cases and, making no provision for agender, genderqueer, or nonbinary identities, misclassified those individuals 100 percent of the time (Feeney, 2022). Furthermore, as with sexual orientation detection, AGR risks stripping transgender individuals of the autonomy to decide whether to disclose their identity (Wilkinson, 2021). If a transgender individual chooses to conceal that identity, they risk public exposure as a liar or deceiver, scrutiny, and even violence (Hall and Clapton, 2021). AGR inherently violates individual privacy, and its use, especially against transgender individuals, enables corporations to exploit non-heteronormative people and minority groups.

Now that the relationship between FR and gender discrimination has been established, we turn to the role of FR as a catalyst for racial discrimination.

b. Race

This section explores how FR has reinforced prevailing racial discrimination and prejudice and deepened societal inequalities. It examines cases in which FR systems treat White men and women differently from Black men and women, and the adverse effects of that treatment; it then considers the intersection of race and gender, assessing gender as a compounding factor.

One of the primary concerns about FR is the misidentification of individuals, especially people of colour. Its consequences have acted as catalysts for existing social biases regarding race. FR is developed mostly in the West, and its applications inherit the assumptions of its developers. Every individual carries built-in prejudices that stand in the way of a racially equal society.

If FR software exhibits racial biases, it can lead to wrongful arrests, discrimination, and the mistreatment of individuals from minority groups. According to the federal government of the United States, there are notable biases within FR software (Crockford, 2020). The reinforcement of this social bias can be seen in the Michigan case in which a Black man was wrongfully arrested for shoplifting because, to the software, he looked similar to the suspect. In fact, he looked nothing like the suspect and had to be released; nevertheless, he was wrongfully held in jail for hours for a crime he had not committed (Hill, 2020), merely because of an error in the FR software. The software had been tested largely on White men, and such an error almost cost an innocent man his liberty. Another case of misidentification was the wrongful arrest of Nijeer Parks (Hill, 2020). The incident took place in New Jersey, where a report was lodged that a man had been stealing snacks from a gift shop. The police used FR and identified Parks as the culprit even though he was 30 miles away at the time of the incident. Such errors lead to racial discrimination, as the predominantly White male developers of FR software embed biases that favour faces like their own. The lack of diversity among FR developers leads to unintended consequences: the software may struggle to identify individuals from minority groups, producing inequality and thus discrimination. A 2019 study by the National Institute of Standards and Technology (NIST) showed that many algorithms gravely misidentified Asian and African-American faces compared with White faces (NIST, 2019). The inequality created by FR software catalyses the enormous racial prejudices already present in individuals and in society at large.

The effects of FR do not stop there; they intersect with gender. Alongside racial discrimination, FR has also produced gender discrimination. FR systems are typically most accurate for White men and White women, and Black men are, to a large extent, recognised more reliably than Black women. What about Black women? Black women arguably suffer the most, because their identity lies at the often disadvantageous intersection of race and gender, and FR amplifies that disadvantage. Black women have been marginalised by both racial and gender discrimination. Throughout history they have played an integral role in shaping the world, yet their contributions have been obscured by gender and racial discrimination. This injustice is also reflected in the field of AI, and in FR in particular. Joy Buolamwini, an MIT researcher, found that she had to wear a white mask for FR software to detect her face (Hardesty, 2018). Buolamwini then tested the algorithms and revealed high error rates, especially in classifying darker-skinned women compared with lighter-skinned men. In 2018, the "Gender Shades" project analysed three gender classification algorithms, including those developed by IBM and Microsoft (Buolamwini and Gebru, 2018). All three performed worst on darker-skinned women, with error rates of up to 34 percent, far higher than for lighter-skinned men. These high error rates can be attributed in part to the predominance of White male developers in the field of FR. A further study was conducted by the American Civil Liberties Union on Amazon's FR software, Rekognition (Snow, 2018): the tool incorrectly matched 28 members of Congress with individuals in a mugshot database, and those misidentified were disproportionately people of colour, including Black women. This lack of accuracy and accountability raises questions about the legitimacy of the algorithm.
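
The disparities reported by NIST and the Gender Shades project come from a straightforward kind of audit: run a system over a labelled benchmark and compare error rates across demographic subgroups. The sketch below shows the shape of such an audit; the records are illustrative assumptions, not the actual benchmark data used in those studies.

```python
# Minimal sketch of a demographic error-rate audit in the spirit of Gender Shades:
# compare a classifier's error rate across subgroups. The records below are
# illustrative assumptions, not real benchmark data.
from collections import defaultdict

# Each record: (subgroup, true label, predicted label)
results = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    # ... a real audit would include hundreds of labelled faces per subgroup
]

errors, totals = defaultdict(int), defaultdict(int)
for subgroup, truth, prediction in results:
    totals[subgroup] += 1
    errors[subgroup] += int(truth != prediction)

for subgroup in totals:
    print(f"{subgroup}: error rate {errors[subgroup] / totals[subgroup]:.1%} "
          f"over {totals[subgroup]} samples")
```

An audit of this shape is what surfaces figures such as a 34 percent error rate for one subgroup against a far lower rate for another; it does not remove the bias by itself, but it makes the disparity measurable and comparable across vendors.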

The lack of transparency and accountability in the use of FR in the United States needs to change. FR should be limited to situations where it is necessary, and its use should be monitored. The absence of guidelines and oversight allows individuals to exploit the software. To prevent this, a clear legal framework stating the permissible uses of FR is essential. The law should outline the specific situations in which FR can be used, such as national security threats and investigations of serious crimes. Law enforcement agencies must also be transparent about their use of the technology and, most importantly, should seek warrants or court orders before using FR software. Such guidelines must be implemented because, without them, the prejudices already present among the US population may deepen racial discrimination.

In conclusion, biases in FR algorithms, whether arising from data collection or from the design of the algorithm itself, have fostered racial discrimination, especially against minority groups. The examples of misidentification and wrongful arrest caused by algorithmic errors support the claim that FR plays an important role in fuelling prejudice, with damaging consequences. These biases not only breach the human rights of those wrongfully accused but also inflame the already sensitive issue of racism. Addressing these issues in a structured manner would allow oversight of law enforcement practices and curb the illegitimate use of FR software. A technological revolution that was supposed to be a boon to humankind is instead, because of the biases and discrimination built into its development, eroding human rights at an alarming rate.

Having examined the intersection of gender and race, we now turn to the legal developments that have taken place in the United States. The killing of George Floyd will be discussed, and the legitimacy of FR will be assessed in relation to the legal developments that followed.

AI and Legal Development in the US

Legal developments following the George Floyd incident and the Black Lives Matter movement have revealed adverse effects of FR on minority groups in the US.

On May 25, 2020, George Floyd, a 46-year-old Black man, died after his arrest in Minneapolis, Minnesota. Derek Chauvin, a White police officer, knelt on Floyd's neck while he was pinned to the ground (BBC, 2020). The incident began with an allegation that Floyd had used a counterfeit $20 note, which led to a confrontation between Floyd and the police (BBC, 2020). Chauvin kept his knee on Floyd's neck for about nine minutes. Several witnesses filmed the incident, and the footage went viral on social media. Floyd became unresponsive and, despite appeals from onlookers, the police did not check his pulse. He was later pronounced dead at the hospital (BBC, 2020). The viral video refuelled the Black Lives Matter (BLM) movement in the US and around the world. BLM is a decentralised movement formed in 2013 with the aim of bringing attention to the racism, discrimination, and racial inequality faced by Black individuals, focusing in particular on police brutality and racially motivated violence. BLM trended again from May 26 to June 7, 2020, with over 37 million tweets condemning police brutality and demanding justice for Black people (Cohen, 2020). The legal developments that followed have highlighted the adverse effects of FR technology on minority groups in the US.

The use of FR technology has adverse consequences for the African-American community. Extensive studies and real-world instances have exposed disparities in accuracy, most notably higher rates of misidentification and false positives for individuals with darker skin tones. A 2019 investigation by the National Institute of Standards and Technology (NIST) revealed that numerous FR algorithms incorrectly identified Black or East Asian faces at rates 10 to 100 times higher than White faces (Findley, 2020). African-Americans have also been subjected to overt violations of their privacy and assembly rights, as demonstrated by the case of Derrick "Dwreck" Ingram, a BLM activist targeted by the New York Police Department using FR technology (Amnesty International, 2020). The growing reliance on mugshot databases compounds these issues, since such databases disproportionately include members of marginalised communities. Indiscriminate surveillance and deployment of this technology therefore constitute a substantial danger to privacy and civil rights, burdening minority communities disproportionately and reinforcing the societal imbalances already observed in the use of FR during the BLM protests in the US.

Recent legal developments surrounding FR technology have been triggered by the killing of George Floyd and the Black Lives Matter movement. Notably, Microsoft, Amazon, and IBM have restricted the use of FR technology by police departments (Najibi, 2020). Microsoft has explicitly stated that it will withhold sales of the technology from law enforcement until national regulations are established (BBC, 2020); Amazon imposed a one-year prohibition on law enforcement use of its FR technology; and IBM discontinued sales of the technology for mass surveillance or racial profiling. These measures stem directly from the extensive demonstrations against police brutality and racial discrimination, and they represent notable steps in fortifying the legal landscape surrounding FR technology after the George Floyd incident and the BLM movement. Furthermore, the George Floyd Justice in Policing Act represents a significant legal development in response to the incident. The Bill aims to reform US police departments and address police misconduct and violence through various measures. It proposes banning federal law enforcement officers from using chokeholds, which were implicated in Floyd's death, and requires police departments that receive federal funds to adhere to the same rule (Lee, 2022). The Bill also prohibits the use of FR on body cameras and patrol-vehicle cameras (Lee, 2022). Removing FR from those devices would reduce biased identifications and wrongful arrests, helping to address ethical concerns such as privacy, surveillance, and bias through regulations that ensure the responsible and fair use of this technology (MacCarthy, 2020).

Learning of the adverse impact of FR through the BLM movement has prompted US lawmakers to introduce Bills to prevent harm to minorities. US Representative Pramila Jayapal, alongside other representatives and senators, introduced the Facial Recognition and Biometric Technology Moratorium Act, which aims to prohibit the use of FR and other biometric technologies by US government agencies (Evans, 2021). This was a reaction to the unregulated use of FR by law enforcement following an alarming increase in the misidentification of individuals from racial and gender minorities. The measure also ties federal grant funds to the adoption of moratoriums on the use of FR and biometric technologies by state and municipal governments (Evans, 2021). This legislation would help prevent innocent members of minority groups from being targeted as a result of FR bias. In 2022, Representative Ted Lieu and other members of the House of Representatives introduced the Facial Recognition Act of 2022, which limits the use of FR to investigations of serious violent offences and forbids its use for real-time monitoring of protests and in combination with body cameras (Lieu, 2022). The measure also mandates regular reviews, reporting, and independent testing of FR technologies, creates a private right of action for anyone affected by FR, and requires the removal of specific data from FR databases (Lieu, 2022). Civil liberties organisations and law enforcement experts have backed the proposal, which would be an essential response to FR bias and a protection of citizens' privacy. Both Bills are important for curbing the targeting of minorities who are vulnerable to this technology, though both remain at an early stage and will need further development.

The legal developments that followed the killing of George Floyd and the BLM movement revealed the detrimental impact of FR technology on minority communities in the United States. These events prompted an examination of the potential harms and discriminatory outcomes associated with FR, particularly its effects on marginalised groups. A US Government Accountability Office (GAO) report revealed that six federal agencies, including the FBI, used FR technology during the 2020 BLM protests that followed the murder of George Floyd (De Leon, 2021). The agencies employed the technology for criminal investigations and, owing to the COVID-19 pandemic, for remote identity verification (De Leon, 2021). These findings validate activists' concerns about surveillance, retaliation, and the privacy implications for protesters. The report also emphasises the lack of transparency in agencies' reliance on third-party technology companies and the need for better tracking to ensure accuracy and protect privacy. Grassroots efforts and legislative actions, such as the proposed Bill to permanently ban FR technology, highlight the growing recognition of the technology's negative effects on marginalised communities and the need for stricter regulation to protect minority rights and privacy. The revelations and discussions surrounding the George Floyd incident and the BLM movement have deepened understanding of how FR technology can disproportionately affect minority rights, prompting greater attention and the introduction of legal and regulatory measures aimed at mitigating its consequences for underprivileged populations. The current debate seeks to guarantee that legislative safeguards are in place to protect minority groups' rights and to prevent discriminatory practices linked to FR technology.

In conclusion, the legal developments surrounding FR technology in response to the George Floyd incident and the BLM movement have been significant. Companies such as Microsoft, Amazon, and IBM have restricted the use of FR technology by police departments, with Microsoft explicitly refusing sales until national regulations are established. The George Floyd Justice in Policing Act proposes banning chokeholds and the use of FR on body cameras and patrol vehicles, addressing concerns about police misconduct and privacy. Additionally, lawmakers have introduced Bills, including the Facial Recognition and Biometric Technology Moratorium Act and the Facial Recognition Act of 2022, to prohibit or restrict the use of FR technology and to establish safeguards against bias and privacy violations. These legal developments reflect a growing recognition of the adverse impact of FR technology on marginalised communities and the need for stricter regulations to protect minority rights and privacy. The debate and actions surrounding these developments aim to ensure the responsible and fair use of FR technology while mitigating its potential discriminatory consequences.

Conclusion

In summation, the rapid growth of AI technology, particularly in FR, has had a significant influence on the military sector. While using AI and FR for military objectives may seem appealing at first, it is critical to acknowledge the serious ramifications and ethical considerations involved with this technology. The inherent biases and discrimination prevalent in society, particularly against women and minority communities, are reflected in FR algorithms, leading to misidentifications and harmful consequences. The high levels of inaccuracy and false recognition rates further exacerbate the negative impact on marginalised groups, disproportionately affecting their civil liberties and potentially leading to the loss of innocent lives.

The pursuit of improved military systems, driven by an “arms race” mentality, has prompted militaries worldwide to expedite the integration of AI and FR without sufficient consideration of the ethical concerns surrounding these technologies. The reliance on flawed and error-prone FR systems poses a catastrophic threat to civil liberties, especially for minority communities. It is essential for countries to prioritise the potential discriminatory effects and the risks associated with FR technology when seeking to enhance their military capabilities.

Furthermore, the gender disparities in FR software highlight the urgent need for algorithmic fairness and inclusivity. The biases ingrained in these algorithms perpetuate gender discrimination, leading to misidentifications and biased treatment against women and non-heteronormative gendered individuals. Efforts should be made to enhance the diversity of training datasets, develop gender-neutral algorithms, and incorporate ethical guidelines and regulations to rectify these biases and ensure equitable treatment.

Similarly, FR technology has fuelled racial discrimination and prejudices, further exacerbating existing social biases. The lack of diversity in the development of FR software contributes to the perpetuation of racial inequalities, as these algorithms are often biased in favour of White individuals. The intersectionality of race and gender worsens the discrimination, with Black females being particularly affected and rendered invisible in society.

To address these issues, comprehensive solutions are required. This includes enhancing the diversity of training data, developing gender-neutral and racially unbiased algorithms, incorporating ethical guidelines and regulations, and fostering interdisciplinary collaborations to understand and mitigate biases in FR technology. Transparency, accountability, and oversight are required to guarantee that new technologies are consistent with societal ideals including equity, justice, and individual rights.

In conclusion, the serious ramifications and ethical problems raised by the use of FR technology in the military, and its impact on disadvantaged groups, must not be neglected. To protect civil rights, correct biases, and promote algorithmic fairness and inclusion in facial recognition systems, a thorough and cautious approach is required. Only by addressing these concerns can we hope to achieve a future in which FR technology treats all persons with the same respect, accuracy, and justice, regardless of gender or colour.

Bibliography

Alex Najibi, ‘Racial Discrimination in Face Recognition Technology’ (Science in the News, Harvard University, 24 October 2020) <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/> accessed 21 June, 2023

Ali Breland, ‘How White engineers built racist code – and why it’s dangerous for Black people’ (The Guardian, 4 December 2017) <https://www.theguardian.com/technology/2017/dec/04/racist-facial-recognition-White-coders-Black-people-police> accessed 20 June, 2023

‘Artificial Intelligence Timeline’ (Military Embedded Systems, 24 January 2019) <https://militaryembedded.com/ai/machine-learning/artificial-intelligence-timeline> accessed 20 June, 2023

Lindsey Barrett, ‘Ban Facial Recognition Technologies for Children and for Everyone Else’ (2020) 26 Boston University Journal of Science & Technology Law 223

Branka Marijan, ‘AI-Influenced Weapons Need Better Regulation’ (Scientific American, 30 March 2020) <https://www.scientificamerican.com/article/ai-influenced-weapons-need-better-regulation/> accessed 19 June, 2023

Buchanan L, Bui Q and Patel JK, ‘Black Lives Matter May Be the Largest Movement in U.S. History’ (The New York Times, 3 July 2020) <https://www.nytimes.com/interactive/2020/07/03/us/george-floyd-protests-crowd-size.html> accessed 21 June, 2023 

Stephanie Cairns, ‘Defanging Facial Recognition: A Statistical Approach to Bias Mitigation and Policy Conditions for Responsible Use’ [2021] McGill University (Canada)

Chauhan P and Kaur G, ‘Gender Bias and Artificial Intelligence: A Challenge within the Periphery of Human Rights’ (2022) 8 Hasanuddin Law Review 46

Jason Cohen, ‘#Blacklivesmatter Hashtag Averages 3.7 Million Tweets per Day during Unrest’ (PCMAG, 20 July 2020) <https://www.pcmag.com/news/Blacklivesmatter-hashtag-averages-37-million-tweets-per-day-during-unrest> accessed 21 June, 2023 

Dave Gershgorn, ‘This Is How the US Military’s Massive Facial Recognition System Works’ (Medium, 6 November 2019) <https://onezero.medium.com/exclusive-this-is-how-the-u-s-militarys-massive-facial-recognition-system-works-bb764291b96d> accessed 19 June, 2023

Radhamely De Leon, ‘Six Federal Agencies Used Facial Recognition On George Floyd Protestors’ (Vice, 30 June 2021) <https://www.vice.com/en/article/3aqpmj/six-federal-agencies-used-facial-recognition-on-george-floyd-protestors> accessed 21 June, 2023 

Pramila Jayapal, ‘Jayapal Introduces Bill to Ban Use of Facial Recognition Tech’ (Congresswoman Pramila Jayapal, 28 April 2022) <https://jayapal.house.gov/2021/06/15/ban-facial-recognition-tech/#:~:text=Facial%20recognition%20technology%20has%20been%20shown%20time%20and%20time%20again,use%20of%20this%20dangerous%20technology> accessed 22 June, 2023 

Matthew Feeney, ‘When AI Meets the Transgender Community’ (Cato Institute, 20 May 2022) <https://www.cato.org/blog/when-ai-meets-transgender-community> accessed 23 June, 2023

Beth Findley, ‘Why Racial Bias Is Prevalent in Facial Recognition Technology’ (Harvard Journal of Law & Technology, 4 November 2020) <https://jolt.law.harvard.edu/digest/why-racial-bias-is-prevalent-in-facial-recognition-technology> accessed 21 June, 2023 

‘George Floyd: Microsoft Bars Facial Recognition Sales to Police’ (BBC News, 11 June 2020) <https://www.bbc.com/news/business-53015468> accessed 21 June, 2023

‘George Floyd: What Happened in the Final Moments of His Life’ (BBC News, 16 July 2020) <https://www.bbc.com/news/world-us-canada-52861726> accessed 21 June, 2023

Gregory C Allen, ‘One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk’ (CSIS, 20 May 2022) <https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk> accessed 19 June, 2023

Thomas Grose, ‘Transcending Bias’ (2019) 29 Washington: American Society for Engineering Education

Lucy Hall and William Clapton, ‘Programming the Machine: Gender, Race, Sexuality, AI, and the Construction of Credibility and Deceit at the Border’ (2021) 10 Internet Policy Review

Jacob Snow, ‘Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots’ (ACLU, July 26, 2018) <https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28> accessed 22 June 2023

Jake Laperruque, ‘Limiting Face Recognition Surveillance: Progress and Paths Forward’ (Center for Democracy and Technology, 23 August 2022) <https://cdt.org/insights/limiting-face-recognition-surveillance-progress-and-paths-forward/> accessed 21 June, 2023

Jess Thomson, ‘DoD AI Drones That Can Recognize Faces Pose Ethical Minefield’ (Newsweek, 27 February 2023) <https://www.newsweek.com/ai-drone-facial-recognition-ethic-issues-1784088> accessed 18 June, 2023

Kade Crockford, ‘How is Face Recognition Surveillance Technology Racist’ (ACLU, June 16, 2020) <https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist> accessed 18 June, 2023

Kashmir Hill, ‘Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match’ (The New York Times, December 29, 2020) <https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html> accessed 18 June, 2023

Kashmir Hill, ‘Wrongfully Accused by an Algorithm’ (The New York Times, June 24, 2020) <https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html> accessed 18 June, 2023

Lucas D Introna and David Wood, ‘Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems’ (SSOAR, 2004) <https://www.ssoar.info/ssoar/bitstream/handle/document/20067/ssoar-surveillance-2004-23-introna_et_al-picturing_algorithmic_surveillance_the_politics.pdf?sequence=1> accessed 19 June, 2023

Matthew Gault, ‘US Military Signs Contract to Put Facial Recognition on Drones’ (Vice, 27 February 2023) <https://www.vice.com/en/article/7k85qe/us-military-signs-contract-to-put-facial-recognition-on-drones> accessed 14 June, 2023

Mark MacCarthy, ‘AI Needs More Regulation, Not Less’ (Brookings, 2020) <https://www.brookings.edu/research/ai-needs-more-regulation-not-less/> accessed 21 June, 2023

National Institute of Standards and Technology, ‘NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software’ (NIST, December 19, 2019) <https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software> accessed 21 June, 2023

‘Ban the Scan: New York City’ (Amnesty International) <https://banthescan.amnesty.org/nyc/#stories> accessed 21 June, 2023

Nicol Turner Lee and Caitlin Chin “ Police surveillance and facial recognition: Why data privacy is imperative for communities of color”, (brookings.edu, 12 April 2022) <https://www.brookings.edu/research/police-surveillance-and-facial-recognition-why-data-privacy-is-an-imperative-for-communities-of-color/> accessed 21 June, 2023

Noel Sharkey, ‘The Impact of Gender and Race Bias in AI’ (ICRC Humanitarian Law & Policy Blog, 28 August 2018) <https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/> accessed 14 June, 2023

Rankin JL, ‘Gender Bias in Technology: How Far Have We Come and What Comes Next?’ (Centre for International Governance Innovation, 19 March 2020) <https://www.cigionline.org/articles/gender-bias-technology-how-far-have-we-come-and-what-comes-next/> accessed 23 June, 2023

‘Reps Ted Lieu, Sheila Jackson Lee, Yvette Clarke, and Jimmy Gomez Introduce Bill to Regulate Law Enforcement Use of Facial Recognition Technology’ (Congressman Ted Lieu, 7 February 2023) <https://lieu.house.gov/media-center/press-releases/reps-ted-lieu-sheila-jackson-lee-yvette-clarke-and-jimmy-gomez-introduce#:~:text=Specifically%2C%20the%20Facial%20Recognition%20Act,committed%20a%20serious%20violent%20felony> accessed 22 June, 2023

Sofia Gomez, ‘The Dangers of Militarizing Racist Facial Recognition Technology’ (Georgetown Security Studies Review, 30 September 2020) <https://georgetownsecuritystudiesreview.org/2020/09/30/the-dangers-of-militarizing-racist-facial-recognition-technology/> accessed 14 June, 2023

United States Government Accountability Office, Report to Congressional Committees GAO-17-580 (August 2017) <https://www.gao.gov/assets/gao-17-580.pdf> accessed 4 July, 2023 

Wang Y and Kosinski M, ‘Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images’ (2018) 114 Journal of Personality and Social Psychology 246

Wilkinson PHC, ‘The Legal Implications of Sexual Orientation Detecting Facial Recognition Technology’ (2021) 20 Dukeminier Awards: Best Sexual Orientation Law Review Articles 301

Monika Zalnieriute, ‘From Digital Platforms to Facial Recognition Technologies: Structural Challenges to Women’s Activism’ [2021] SSRN Electronic Journal 1