Abstract
Predictive policing is the use of advanced data analysis techniques to identify and predict potential criminal activity before it occurs. This paper critically examines the rise of AI-driven predictive policing and its deep-rooted entanglement with historical injustices and unfair bias within law enforcement. While predictive policing is often portrayed as a futuristic and efficient method of crime prevention, the data that drives these systems stems from racially discriminatory practices. This study explores how such systems risk reproducing the very inequalities they claim to address, creating harmful feedback loops that disproportionately affect marginalised communities. Global case studies, legal frameworks and the role of private tech companies are analysed to highlight key challenges such as algorithmic opacity, lack of accountability and the blurring line between policing and surveillance. The paper argues that without significant change centred on ethical data use and legislative safeguards, predictive policing technologies will continue to automate injustice rather than correct it. It therefore calls for a justice-oriented approach to law enforcement that prioritises fundamental human rights.
“It’s labelled a crime hotspot. So, when the police enter the area, they’re in the mindset of ‘we’re in a dangerous community – the people here are dangerous.’ It doesn’t matter if they’re young people, they’re still ‘dangerous’ and therefore ‘we can police them violently’ and they do police them violently.” – A resident of Grahame Park, North London, an area deemed a high-crime location (Amnesty, 2025).
Introduction
Predictive policing is a system that uses artificial intelligence (AI) or other predictive methods to improve the efficiency of police resource allocation and crime prevention (US Department of Justice, 2024). Historical crime data and algorithmic models are used to assess the risk of a crime being committed. Such systems may predict whether a person or group of people is likely to be involved in a crime, known as person-based predictive policing, or flag a particular place as a likely site of future crime. From these predictions, police can take measures to prevent the anticipated crime by directing resources to high-risk areas or groups (Amnesty, 2025). According to the National Association for the Advancement of Colored People (NAACP), predictive policing is meant to increase objectivity and reliability in law enforcement, but there are growing concerns that, in reality, it perpetuates systemic biases and discrimination. For example, in 2013, the Chicago Police Department’s predictive “heat list” algorithm labelled hundreds of young Black men as likely perpetrators or victims of gun violence, despite many of them having no criminal history. Some were visited or monitored by police simply for being on the list (Harcourt, 2015).
This paper seeks to analyse whether predictive policing allows bias to arise and persist in policing, and what role artificial intelligence should play in law enforcement in the future. The first section focuses on the history of policing and asks whether its origins, deeply intertwined with the enforcement of slavery and the control of marginalised and oppressed groups, continue to influence current predictive policing systems (Lepore, 2020). The second section investigates the present-day impact of predictive policing, including the benefits and concerns emerging from its use. Finally, the third section turns to the future of predictive policing and the ways in which it can be made more equitable.
Research Questions
– Does predictive policing perpetuate bias?
– What is the future of AI in policing?
Research Aims
This research paper aims to map the history of policing and its influence on today’s predictive policing. A crucial objective is to critically examine the moral, ethical, safety and justice-related implications of predictive policing technologies, focusing on how such systems may absorb and perpetuate societal biases, especially those based on race, gender and socio-economic status. The paper highlights the impacts of these technologies on different members of society – law enforcement agencies, communities subjected to heightened surveillance and individuals wrongly targeted through algorithmic profiling – within different national legal and social contexts. It then explores how predictive policing may evolve in the future and the ways in which it can be made more effective and equal, such as through enhanced oversight, fairer data and increased accountability. Ultimately, this research is intended to support the creation of global guidelines for how AI is used in policing, with the goal of ensuring that AI in law enforcement is fair, just and respectful of human rights everywhere.
Research Methodology
In this paper, a desk-based socio-legal human rights methodology is utilised. This methodology combines approaches from the legal and sociological fields, studying how the law interacts with its social, historical and economic contexts (Peck, 2023). The paper maps the historical context of policing and analyses the law behind predictive policing. The authors believe that the law relating to predictive policing is inseparable from its social context, and we focus on how predictive policing affects particular stakeholders, such as the individuals placed under surveillance. The interdisciplinary nature of desk-based socio-legal human rights research therefore makes it the most effective methodology for this paper.
Section I: The Racial Roots of Policing
What does it mean to police? To police means to maintain law and order. Yet the idea that the police exist simply to “fight crime” is a simplification of a far more complex history. The roots of policing reach deep into human civilisation, originally bound to governance, control and social order rather than crime-fighting. As legal theorist Markus Dubber warns: “under the rule of the law, people are equals; under the rule of the police, we are not”. It is important to differentiate between lawful social governance and the exertion of policing power, particularly in its unjust forms, when tracing the roots of what would later develop into predictive policing.
The term “police” stems from polis, the Greek word for “city-state”, which referred to the governance and organisation of the city-state (Lepore, 2020). In classical Athens, maintaining order was less about enforcing formal rules and laws than about upholding social norms. There was no concept of preventing crime as we would understand the term today; rather, policing was a means of maintaining the social fabric. The Greek concept passed into Latin as politia and then into Middle French as “police”, meaning the duty of maintaining law and order. By the late Middle Ages, this evolved in England into localised systems charged with keeping the public peace under the crown’s authority (Lepore, 2020). However, it was only in 1829 that England established its first formalised police force: the Metropolitan Police (Lambert, 2021). This marked a shift towards systematic policing, further blurring the line between state governance and the exertion of authoritative rule. When Britain colonised North America, it brought many traditions and practices with it, policing among them – but in a darker form (Lepore, 2020).
In the early 1700s, slave patrols became one of America’s first policing systems in the South. Contrary to popular belief at the time, these patrols did not exist to protect communities; they were created to capture runaway slaves, suppress revolts and control enslaved populations by instilling fear and discouraging them from “acting out” or attempting anything that might “inconvenience” their owners (Hassett-Walker, 2020). The existence of slave patrols was legalised through slave codes: sets of laws that defined the status of enslaved people and the rights of their owners, premised on the idea that enslaved people were property, not individuals (Robinson, 2017). The codes were meant to control every aspect of enslaved people’s lives, including where they could live, work and travel. Their goals were to return escaped slaves to their owners, deter potential revolts through terror and discipline slaves for breaking plantation rules outside the law (Potter, 2013). The codes legalised severe and brutal punishments for disobedience, escape attempts or any behaviour classed as rebellious, creating a climate of fear designed to suppress any uprising or resistance (Hassett-Walker, 2020). Beyond restricting enslaved individuals, the laws also sought to maintain a strict racial hierarchy, embedding white supremacy into society. Even though white people were deemed superior, the codes often included severe penalties for any white person caught aiding a slave or questioning the system (Hassett-Walker, 2020). In southern colonies such as Virginia and South Carolina, slave codes served as legal systems that not only established but normalised racial oppression, paving the way for later discriminatory laws such as the Black Codes and Jim Crow laws (Hassett-Walker, 2020). Slave codes were more than mere regulations; they formed a system of control that blurred the boundary between law enforcement and racial terror. These unjust principles were the origin of prejudiced racial control becoming embedded in the DNA of policing.
Modern policing is thought to have begun in 1909, when August Vollmer became police chief in Berkeley, California (Lepore, 2020). Vollmer, often referred to as the father of modern policing, implemented many innovations drawn from his service in the Philippine-American War. For example, he introduced the idea of police divisions devoted entirely to intelligence gathering and analysis to track criminals, a practice still present today (Go, 2020). Intelligence gathering in policing is now being used in new ways, such as predictive policing. Predictive policing’s bias towards certain groups can also be traced back to Vollmer, who said that the police were conducting a war against society’s enemies, among whom he counted immigrants and Black people (Lepore, 2020).
Vollmer’s view of the purpose of policing deepens our understanding of the racial biases and discrimination present in the modern era of policing, which were exacerbated by Jim Crow laws. These laws fundamentally separated Black people from white people and shaped how each group was policed (Lepore, 2020). In the 1900s, police were tasked with exerting excessive brutality on African-Americans who violated Jim Crow laws (NAACP, n.d.).
According to historian Khalil Gibran Muhammad, the disproportionate arrest, prosecution and sentencing of Black people by police, prosecutors and judges led social scientists to conclude that Black people were biologically predisposed to criminality. This is illustrated by the fact that, in the 1920s, 25% of all those arrested in Philadelphia were Black, even though Black residents made up only 7.4% of the population (Lepore, 2020). The police had thus been turned into an oppressive tool rather than one meant to uphold law and order.
Ian Loader, a British criminologist, explains that the police matter because the way they operate is an indicator of how well a society attends to the security and wellbeing of its citizens (Loader, 2013). By this logic, the brutal history of American policing implies that American society has failed to maintain everyone’s security and wellbeing, owing to the presence of racial biases. The oppressive nature of policing has superseded its ostensible purpose: maintaining the security and wellbeing of all citizens.
From the 1960s to the 1980s, the administrations of Lyndon B. Johnson, Richard Nixon and Ronald Reagan all prioritised policing over other social services, though in different ways. Johnson distributed funds to 80,000 crime control projects and diverted money intended for social programmes, including health and welfare, to police operations instead (Lepore, 2020). Reagan deprived even more social services of funding, forcing some to close. According to historian Elizabeth Hinton, more Americans went to jail in this period than in the previous century. The US government, in other words, has historically focused more on catching criminals than on other forms of social assistance, which likely had negative ramifications for many Americans. This illustrates how the police have failed at their intended purpose of securing everyone’s security and wellbeing, with that failure supported by powerful governmental figures.
In the modern day, guns greatly affect American policing. Jill Lepore notes an American study indicating that the number of American men aged 15 to 34 treated in emergency rooms for injuries inflicted by police is almost as high as the number of pedestrians killed in motor vehicle accidents. US Bureau of Justice Statistics show that, in a given year, “an estimated 1 million civilians experience police threat of or use of force resulting in a conservative estimate of 75,000 non-fatal injuries requiring hospital treatment and 600-1100 deaths. Both Black/African-Americans and Hispanics/Latinos are twice as likely to experience threat of or use of force during police initiated contact” (Law Enforcement Epidemiology Project, 2023). Guns also carry more weight in American policing than in other nations: Finnish police fired six bullets in all of 2013, whereas in a single incident in 2015, three American police officers in Pasco, Washington fired seventeen bullets, killing an unarmed Mexican orchard worker (Lepore, 2020). While guns can in theory harm anyone, the killing of an immigrant here again illustrates how immigrants and people of colour are disproportionately affected by police violence. The fact that the rate of fatal police shootings among Black Americans is far higher than for any other ethnicity underscores this, showing that the American policing system is more oppressive than helpful (NAACP, n.d.). Throughout the modern era, minorities have been treated unfairly by police, and it is this treatment that forms the historical data on which predictive policing relies.
By the 19th century, criminologists and statisticians had begun to explore the idea of using data to understand and potentially reduce crime rates. The Belgian astronomer, mathematician, statistician and sociologist Adolphe Quetelet introduced the idea that crime rates followed predictable social patterns, his work suggesting that certain areas and demographic groups could be statistically linked to higher crime rates (Donnelly, 2016). This mindset would later become the foundation for predictive policing models. The assumption was simple yet dangerous: past behaviour could predict future crime, turning data into a tool for surveillance rather than social support.
The concept of modern predictive policing is largely credited to William Bratton, the former chief of the Los Angeles Police Department (LAPD). Under Bratton, the LAPD began using predictive analytics to anticipate gang violence and support real-time crime monitoring. However, according to the London School of Economics, the historical crime data that such systems collect, store and analyse comes from registers and systems designed to identify and target the “undesirable” or “dangerous”. As an example of a harmful historical data store, South Africa’s colonial governments collected information about people of Indian heritage in order to limit immigration (Amnesty, 2025). This illustrates how historical crime data can negatively impact minorities.
Today, predictive policing, using data and algorithms to forecast potential crime, is often considered a leap forward in technological law enforcement – it promises efficiency, crime reduction and forward-looking policing. However, history warns us to tread carefully; the same logic that governed slave patrols and racial profiling under the guise of prevention still echoes in algorithmic bias and the over-policing of marginalised communities. If left unchecked, predictive policing risks entrenching further inequality rather than preventing crime. Dubber’s quote serves as a reminder that the rule of law is meant to guarantee equality and safety for everyone, whilst policing has historically had the opposite effect. From Ancient Greece’s polis to the data-driven approaches of the 20th century, policing has been about more than just law enforcement; it reflects society’s structures, its power dynamics and the ever-narrowing gap between order and authoritarian control. Thus, we must be cautious when approaching predictive policing. The need of the hour is to prevent it from becoming the logical continuation of policing’s long and complex history embedded in systemic racism and discrimination.
The historical roots of policing, deeply entwined with racial bias, social control and oppressive power, continue to shape modern law enforcement. Predictive policing, though it may at first glance seem an innovative, data-driven system, is built on this same historical prejudice, drawing on crime data shaped by decades of discriminatory practice. To fully understand today’s AI-driven systems, it is crucial to examine how these historical patterns have evolved into current predictive policing models and where they persist or change.
Section II: Current Landscape of AI-Driven Predictive Policing: Policies, Perspectives, and Challenges
Predictive policing technologies have rapidly expanded worldwide, offering the promise of improving crime prevention by using artificial intelligence to analyse historical crime data and forecast future incidents. While this sounds like a breakthrough in efficiency and public safety, the reality is far more complicated. The core of the controversy lies in the fact that these systems are trained on past law enforcement data, which is already shaped by years of racial and social inequality. Richardson and colleagues, writing in the New York University Law Review, explain that these historical “dirty data” sets are often riddled with structural bias and flawed assumptions, especially in how minority communities have been policed in the past (Richardson et al., 2019). Once these patterns are built into the system, predictive policing tools may begin to replicate the very inequalities they are meant to reduce. This creates a feedback loop: over-policed neighbourhoods keep getting flagged as high-risk, leading to more patrols, more stops and more data reinforcing their risk level (Richardson et al., 2019). This is exactly the danger, because it creates an illusion of neutrality, an illusion that AI systems are fair because they are data-driven, when in fact they are merely mechanising the discrimination that was already there. This is not simply a technical flaw; it reflects broader societal injustice being coded into technology. Predictive policing does not just echo the past, it locks people into it.
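The mechanics of this feedback loop can be made concrete with a deliberately simplified simulation. The sketch below is a hypothetical illustration rather than a model of any deployed system: it assumes two districts with identical underlying crime rates, a patrol allocation that follows whichever district the recorded data flags as riskier, and a higher “discovery” rate for incidents wherever patrols concentrate. All names, counts and rates are invented for illustration.

```python
import random

# A deliberately simplified, hypothetical sketch of the feedback loop described
# above: two districts share the SAME underlying crime rate, but district A
# starts with more recorded incidents because it was historically over-policed.
TRUE_CRIME_RATE = 0.1            # identical in both districts
DISCOVERY_IF_PATROLLED = 0.9     # share of incidents recorded where patrols concentrate
DISCOVERY_OTHERWISE = 0.2        # share of incidents recorded elsewhere

recorded = {"A": 60, "B": 20}    # historically biased arrest counts

random.seed(0)
for day in range(365):
    # "Predictive" allocation: patrol the district the data flags as riskier.
    patrolled = max(recorded, key=recorded.get)
    for district in recorded:
        incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(100))
        discovery = DISCOVERY_IF_PATROLLED if district == patrolled else DISCOVERY_OTHERWISE
        recorded[district] += sum(random.random() < discovery for _ in range(incidents))

print(recorded)  # district A's recorded "risk" keeps growing despite equal true rates
```

Because district A starts with more recorded incidents, it attracts the patrols, records still more incidents and remains “high-risk” indefinitely, even though both districts behave identically. This is the self-reinforcing dynamic that Richardson and colleagues describe.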
A well-known example that illustrates this cycle comes from Chicago’s Strategic Subject List. This tool used historical arrest records to identify individuals at risk of being involved in gun violence, either as victims or offenders. However, critics found that the list overwhelmingly targeted Black and Latino men, many of whom had never committed a violent crime (Harcourt, 2015). This kind of algorithmic profiling shows how predictive policing can label minorities not because of what they have done but because of who they are. This reflects the deeply flawed idea that past interaction with the police automatically equals future criminal behaviour.
These issues become even more apparent in recent policy reviews. A June 2025 report by the Stanford Center for Racial Justice reviews the use-of-force policies of the 100 largest police departments in the United States. It finds that while some reforms have taken place following the nationwide protests after George Floyd’s death in police custody, which sparked a global reckoning with systemic racism and police brutality, major problems remain. For instance, only 79% of departments require officers to attempt de-escalation before using force, even fewer limit the use of pepper spray on people who are handcuffed, and only about half clearly define deadly force as a last resort (Sutton & Dahir, 2025). These inconsistencies across the country are not just policy gaps; they are part of the larger environment that shapes how police data is collected. Since predictive algorithms learn from the data that gets reported, uneven standards in policing end up embedded in the technology, producing flawed systems that fuel the current distrust of AI in policing and of predictive policing itself. The picture below, sourced from Amnesty’s 2025 report, illustrates this discriminatory impact of predictive policing.

Source: Amnesty’s Report on Automated Racism, 2025
Globally, countries are handling predictive policing very differently, just as they differ in their legislative systems. In the European Union, strict regulations such as the General Data Protection Regulation force police departments to evaluate AI tools carefully, especially in terms of privacy and human rights (European Commission, 2023). The Council of Europe has warned that predictive policing, if not properly regulated, can threaten freedom and increase discrimination against marginalised groups (Council of Europe, 2020). In the Netherlands and the UK, for example, programmes such as SyRI and the Met Police’s facial recognition efforts have sparked public backlash and legal action over concerns about racial profiling and lack of transparency (Van Brakel & De Hert, 2011).

Source: Cimphony, 2024
Adding to these concerns, the reported accuracy of crime prediction technologies varies drastically. The table above highlights the differing levels of accuracy reported by different studies, from 90% in a University of Chicago study to 0.6% for the Plainfield PD’s Geolitica software (Cimphony, 2024). Outside Europe and North America, the differences are even more extreme. China has used predictive policing and facial recognition in regions such as Xinjiang in ways that many human rights groups say amount to surveillance-based oppression of Uyghur Muslims (Creemers, 2019). In contrast, Japan and South Korea are still experimenting cautiously with these technologies amid strong public debate over privacy rights and the appropriate limits of state surveillance (Kim & Park, 2024). These global differences highlight that predictive policing is not a neutral tool; the way it is used always reflects the political, cultural and legal norms of the country deploying it.
Another pressing concern is that many predictive policing systems are created by private companies rather than public institutions, increasing the risk of biased motives and improper handling of data. These companies often treat their algorithms as trade secrets, which means that neither law enforcement nor the public can see how the tools work or what assumptions they rely on. This lack of transparency is sometimes called the “black box” problem. Amnesty International has raised concerns that when people are placed under increased surveillance or even arrested on the basis of algorithmic assessments, they have no clear way to challenge those decisions or understand how they were made (Amnesty International, 2021). This inability to challenge decisions, caused by the black box problem, does not only reveal the technical flaws of predictive policing; it also demonstrates a deeper problem: a genuine and fundamental failure within our legal system itself. Under the rule of law, laws are meant to protect human rights and ensure equality for every citizen, thereby making society a better place to live. The reality, however, is different. Laws which enable corporations to treat their algorithms as confidential trade secrets, and thus prohibit their disclosure to other organisations, block transparency and enable the over-policing of marginalised groups. It is people from these groups who are disproportionately affected by these tools.
How do we balance privacy with transparency and accountability in predictive policing? Both privacy and transparency are indispensable elements of the legal systems of democratic societies: privacy promotes innovation, while transparency upholds public accountability and protects the human rights of marginalised groups. The difficulty is that these two essential legal values can pull in opposite directions, and the law typically offers vague or insufficient guidance on resolving the tension (Bennett & Raab, 2003). Scholars such as Frank Pasquale might argue that the two values should be weighed equally in the name of fairness, but the result may be that one side suffers more from the compromise; in this case it is marginalised groups who bear the greater burden, since they are more likely to be misrepresented or over-policed by the current system. If justice means that everyone deserves equal rights and equal protection of the law, then a predictive policing system that allows harms to be distributed unevenly, even in the name of balancing both values, must be considered a failure to achieve justice, because partial justice is never true justice. It is therefore evident that the law can sometimes fail to fulfil its role and meet the requirements of the rule of law, owing to its complexity, conflicts between fundamental values and the diversity of perspectives involved in decision-making.
Looking through the lens of ethnic studies and critical race theory, it is clear that AI and technology are not free from cultural or political influence. As Ruha Benjamin and other scholars point out, technology reflects the values of the people and systems that create it (Benjamin, 2020). Predictive policing is not just about computers and statistics. It is about continuing or resisting the history of racialised surveillance. There is an assumption that machines are somehow more objective than humans. This is a dangerous myth. If anything, algorithms make it easier to scale up biased decision-making under the false appearance of neutrality.
Because of these risks, many advocacy organisations are pushing for stronger rules or even bans on predictive policing tools until there is clear evidence that they can be used ethically and fairly. The AI Now Institute and the Campaign to Stop Killer Robots have both called for moratoriums on these technologies, arguing that current systems do more harm than good, especially to already over-policed communities (Kak & Myers West, 2023). They argue that without a complete overhaul of how police data is collected and used, AI systems will simply automate racial injustice instead of solving it.
At the same time, some law enforcement groups still believe that predictive policing can be useful. There are examples from Canada and Australia where targeted programmes have reportedly reduced certain types of crime (Tang & Hiebert, 2025). Another example comes from Richmond, Virginia: “Every New Year’s Eve, Richmond, Va. would experience an increase in random gunfire. Police began looking at data gathered over the years, and based on that information, they were able to anticipate the time, location and nature of future incidents. On New Year’s Eve 2003, Richmond police placed officers at those locations to prevent crime and respond more rapidly. The result was a 47% decrease in random gunfire and a 246% increase in weapons seized. The department saved $15,000 in personnel costs” (Pearsall, 2010). Sean Malinowski, a lieutenant with the LAPD, asserts that predictive policing does not deny people their civil rights and that police make arrests only when they have probable cause. He further emphasised that “predictive policing methods do not identify specific individuals; instead, they anticipate particular times and locations where crime is likely to occur” (Pearsall, 2010). Critics of these programmes, however, point out that the results are mixed and not well studied, and that successes in limited areas do not justify the risks of systemic harm.
In conclusion, the current landscape of predictive policing is complex. On the one hand, AI has great potential; on the other, it remains steeped in systemic racism. Countries vary widely in how they implement and regulate these systems, just as they differ in their broader legal and ethical frameworks. The gap between innovation and ethics remains significant, especially when AI tools are shaped more by the private sector than by public accountability. Who designs these systems, who trains them and whose data is used – all of it matters. Predictive policing cannot be separated from broader issues of justice, inequality and power. If we fail to confront these deeper questions, AI in law enforcement will not only fall short of its promises, it will deepen the very harms it claims to resolve. This will be especially dangerous for communities already targeted because of their race, socioeconomic status or past. Technology may change, but without justice at its core, the outcomes will stay the same.
But can we hope for a more equitable future?
Section III: Potential Solutions to the Defects of Predictive Policing: Prevention and Correction
Every democratic society longs for fairness. While policing is supposed to be a cornerstone of fundamental fairness, that is, fairness before the law, history reveals a different tale (as shown in Section I). With technological advances, today’s police systems face more challenges than ever before. As emphasised in Section II, technology can be both an opportunity and an obstacle. If the existing challenges in crime prediction technologies can be overcome, there is hope for an equitable future (Brown, 2013).
Although there are still significant biases and racial disparities in predictive policing, one predominant view is that its main problems stem from historical injustices and current structural racism rather than from defects in the technology itself. The root cause of bias in predictive policing, as a system supported by big data, lies in “dirty data” rather than in the concept of predictive policing as such (Richardson et al., 2019). Improvements therefore can and should be made to establish fairer crime data collection models and error correction mechanisms, rather than abolishing predictive policing altogether. This section explores possible improvements to predictive policing in the future.
The key to solving a problem is often not remedy but prevention, so this section focuses first on preventive measures that can stop predictive policing from contributing to systemic racism. One potential solution lies in addressing the biases in predictive policing algorithms, which result from computers extrapolating from past data. Ensuring that the algorithms are fair is therefore critical, but the underlying data collections, as argued in the previous section, are often shaped by discriminatory policing patterns, which is why society is sceptical of predictive policing. The problem is exacerbated by the fact that data collection is now widely outsourced to private contractors whose operations lack consistent public oversight. Fara Sheila points out that the “United States has failed to create a legislative framework for AI innovators to follow, allowing the technology’s bounds to be defined by the people who developed the technology” (Sheila, 2025). This regulatory vacuum has led to unconstrained data collection practices, increasing the risk of systemic bias. To mitigate these issues, predictive policing data collection must be transparent, publicly accountable and subject to federal oversight. Establishing rigorous review and accountability mechanisms for the data pipelines that feed predictive policing systems can reduce algorithmic bias at the source, improving integrity and fairness. Another major challenge is the crudeness of crime statistics, which reduce complex social realities to raw numbers such as arrests while ignoring the voices and experiences of the people being counted and the marginalised communities in which they live. This one-sidedness exacerbates inequality and prejudice. A further solution is therefore community-centred design, involving human rights groups and policing, local authority and Police and Crime Commissioner (PCC) officials (Lyon, 2025). Predictive policing systems should incorporate qualitative data and the perspectives of marginalised communities into decisions on data selection and risk indicators, so that the algorithm and its database rest on refined qualitative evidence rather than on simple, crude quantitative counts (Lyon, 2025). David Lyon also recommends establishing a multilateral independent oversight committee composed of civil rights advocates, data scientists and local stakeholders to review data collection and algorithmic predictions, assess their social impact and recommend corrective measures. Incorporating community engagement and external oversight into predictive policing design can help prevent statistical errors and historical biases from translating into modern systemic discrimination, thereby promoting fairness and public trust (Lyon, 2025). One simplified illustration of what such a data-pipeline review might involve is sketched below.
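As a rough, hypothetical illustration of a data-pipeline review step, the snippet below compares each district’s share of recorded incidents with its share of population and flags large gaps for human review. The district names, figures, field labels and the 1.5x threshold are all invented assumptions; a real oversight body would choose its own metrics and combine them with the qualitative evidence Lyon describes.

```python
# Hypothetical audit step for a predictive-policing data pipeline: compare each
# district's share of recorded incidents with its share of population and flag
# large gaps for human review. All values and the threshold are illustrative.
records = [
    {"district": "North", "population": 40_000, "recorded_incidents": 900},
    {"district": "South", "population": 45_000, "recorded_incidents": 300},
    {"district": "East",  "population": 15_000, "recorded_incidents": 250},
]

total_pop = sum(r["population"] for r in records)
total_inc = sum(r["recorded_incidents"] for r in records)

FLAG_RATIO = 1.5  # review anything recorded at more than 1.5x its population share

for r in records:
    pop_share = r["population"] / total_pop
    inc_share = r["recorded_incidents"] / total_inc
    ratio = inc_share / pop_share
    status = "REVIEW" if ratio > FLAG_RATIO else "ok"
    print(f"{r['district']:<6} population share {pop_share:.2f}  "
          f"incident share {inc_share:.2f}  ratio {ratio:.2f}  {status}")
```

Such a check cannot prove bias on its own, but it gives reviewers a concrete, auditable starting point before the data is allowed to train a model.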
However, if preventive measures against statistical errors in predictive policing are not in place, or if other errors occur, policymakers need a strong error correction mechanism to compensate for the harms caused by flawed data. A robust error correction mechanism is one that provides timely data updates and bias remediation. Static historical data locks predictive policing models into past policing practices, many of which were affected by structural inequality (Pearsall, 2010). For example, arrest records in communities that have historically been over-policed because of racial discrimination may not reflect their actual crime levels, only the fact that the area was once a focus of law enforcement. Left uncorrected, these biases are fed into the predictive algorithm, forming a “feedback loop” that justifies biased behaviour today on the basis of history (Pearsall, 2010). According to Wu et al., to prevent this, predictive policing systems must adopt dynamic data updates, so that new inputs regularly refresh the model to reflect the current situation rather than relying on outdated or unchanging data. Another effective bias-remedy approach makes micro-adjustments to existing data to compensate for deficiencies in past models caused by historical discrimination or outdated factors; compared with collecting data afresh, this is a relatively cheap and simple corrective measure (Wu et al., 2022). Bias remedies could further include adjusting data to correct for historical over-representation and racial discrimination, ensuring that the predictive model presents a fairer picture. The authors of this paper firmly believe that, by using dynamically updated and balanced data, the predictive policing model can eventually be transformed from a tool that replicates historical discrimination into a much fairer one. A simplified sketch of such reweighting follows.
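The sketch below is a minimal, hedged illustration of these two ideas, dynamic updating and micro-adjustment, and is not a reproduction of Wu et al.’s architecture. The over-policing multipliers and the half-life are hypothetical inputs; in practice they would have to come from the kind of independent review discussed earlier rather than from the biased records themselves.

```python
from collections import defaultdict

# Minimal sketch: down-weight over-policed records (bias remedy) and discount
# older records (dynamic update) before the data is used to train a model.
# (district, year, recorded_incidents) -- all values are illustrative.
history = [
    ("North", 2015, 900), ("North", 2023, 400),
    ("South", 2015, 300), ("South", 2023, 350),
]

# Estimated over-policing multipliers per district and era (assumed inputs,
# e.g. supplied by an external audit, not learned from the biased data itself).
over_policing_factor = {
    ("North", 2015): 3.0, ("North", 2023): 1.2,
    ("South", 2015): 1.0, ("South", 2023): 1.0,
}

def recency_weight(year: int, current_year: int = 2025, half_life: float = 5.0) -> float:
    """Dynamic update: older records count for less, so the model tracks the present."""
    return 0.5 ** ((current_year - year) / half_life)

adjusted = defaultdict(float)
for district, year, incidents in history:
    corrected = incidents / over_policing_factor[(district, year)]   # bias remedy
    adjusted[district] += corrected * recency_weight(year)           # dynamic update

print(dict(adjusted))  # the fairer, time-weighted counts a model would train on
```

The point is not the particular numbers but the shape of the mechanism: correction factors and recency weights are explicit, inspectable parameters that an oversight committee can query and revise, which is precisely what a static historical dataset does not allow.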
To reduce algorithmic bias and move towards a fairer tool, two steps are crucial: prevention and correction. First, predictive policing needs a transparent and supervised data collection process to ensure the fairness of its data. Second, a strong correction mechanism should be established to fix the errors in existing algorithms and correct their injustices. Accomplishing this requires not only a government-backed supervisory body but also an independent, community-centred oversight committee, ensuring that predictive models reflect a broader social reality rather than historical discrimination. The authors of this paper strongly advocate incorporating transparency, accountability and inclusive policies into predictive policing in order to reduce systemic bias and promote fairness and public trust.
History has shown us that, when unaccountable, policing can easily become a tool of oppression rather than fairness. From the slave patrols of early America to today’s predictive algorithms, the same forces of control, exclusion and unequal power threaten to resurface in new forms. If we are to break this cycle, we must build systems that learn not only from the data but also from the voices of those affected by it; only by acknowledging the unjust past can we create an unprejudiced future. Fairness cannot be programmed; it must be pursued. Predictive policing should not be just another chapter in the long history of injustice; it must be a turning point.
Conclusion
Predictive policing, though seen as a technological leap and praised by many, carries the weight of a history rooted in racial bias, social control and oppressive power structures. As Dubber emphasised, the law is intended to guarantee equality and safety for everyone, yet policing has often conflicted with these goals. From Ancient Greece’s polis to the present day, policing has been about more than law enforcement – it mirrors societal structures and power dynamics. This history reminds us to approach predictive policing with caution, ensuring it protects liberty and justice for all communities rather than continuing past injustices. This paper set out to ask whether predictive policing perpetuates bias, and Sections I and II answered that question in the affirmative. Historically, the police have unfairly policed minorities, and this is reflected in the historical crime data on which predictive policing relies. AI looks fair because it is data-driven, but it merely automates old biases. Section II stresses how algorithmic profiling labels people by identity, not behaviour, and numerous examples show how AI and predictive policing perpetuate systemic racism. The gap between innovation and ethics remains significant, especially when AI tools are shaped and controlled predominantly by the private sector rather than by public accountability. The paper then turned to the future of AI in policing. Section III emphasises that reducing algorithmic bias requires two crucial steps: prevention and correction. Transparent data collection, independent oversight, timely bias correction and active community participation are the need of the hour. Only by embedding justice and accountability into these systems can technology serve as a tool for fairness rather than a continuation of oppression. Technology may evolve, but without justice at its core, the outcomes will remain unchanged.
Bibliography
Amnesty International (2021) ‘Europe: Proposed legislation too weak to protect us from dangerous AI systems’. Amnesty Online. <https://www.amnesty.org/en/latest/news/2021/04/eu-legislation-to-ban-dangerous-ai-may-not-stop-law-enforcement-abuse-2/> Accessed 24 July 2025.
Amnesty International (2025) ‘Automated Racism Report’. Amnesty Online. <https://www.amnesty.org.uk/predictive-policing> Accessed 24 July 2025.
Benjamin, R. (2020) ‘The Shiny, High-Tech Wolf in Sheep’s Clothing’. (Department of African American Studies, Princeton University) <https://aas.princeton.edu/news/shiny-high-tech-wolf-sheeps-clothing> Accessed 24 July 2025.
Bennett, C. & Raab, C. (2003) The Governance of Privacy Policy Instruments in Global Perspective (Routledge).
Brown, J. (2013) The Future of Policing (Routledge).
Cimphony (2024) ‘AI Predictive Policing Accuracy: 2024 Analysis’. Cimphony. <https://www.cimphony.ai/insights/ai-predictive-policing-accuracy-2024-analysis> Accessed 24 July 2025.
Council of Europe (2020) ‘Justice by Algorithm: The Role of Artificial Intelligence in Policing and Criminal Justice Systems’. Committee on Legal Affairs and Human Rights. <https://pace.coe.int/en/files/28723/html> Accessed 24 July 2025.
Creemers, R. (2019) ‘The International and Foreign Policy Impact of China’s Artificial Intelligence and Big-Data Strategies’ in Wright, N. (ed.), Artificial Intelligence, China, Russia, and the Global Order (Air University Press).
Dodd, V. (2025) ‘UK use of predictive policing is racist and should be banned, says Amnesty’. The Guardian (London).
Donnelly, K. (2016) Adolphe Quetelet, Social Physics, and the Average Men of Science, 1796-1874 (University of Pittsburgh Press).
Go, J. (2020) ‘The Imperial Origins of American Policing: Militarization and Imperial Expansion’. (125:5) American Journal of Sociology 1193.
Harcourt, B.E. (2006) Against Prediction: Profiling, Policing and Punishing in an Actuarial Age (University Chicago Press).
Harcourt, B.E. (2015) Exposed: Desire and Disobedience in the Digital Age (Harvard University Press).
Hassett-Walker, C. (2020) ‘The racist roots of American policing: From slave patrols to traffic stops’. The Conversation. <https://theconversation.com/the-racist-roots-of-american-policing-from-slave-patrols-to-traffic-stops-112816> Accessed 24 July 2025.
Kak, A. & Myers West, S. (2023) ‘AI Now 2023 Landscape: Confronting Tech Power’. AI Now Institute. <https://ainowinstitute.org/publications/research/2023-landscape-confronting-tech-power> Accessed 24 July 2025.
Kim, M. & Park, J. (2024) ‘AI-powered crime prediction to transform safety in S. Korea’. The Chosun Daily (Seoul). <https://www.chosun.com/english/national-en/2024/07/19/6> Accessed 24 July 2025.
Lambert, T. (2021) ‘A Brief History of the British Police’. Local Histories. <https://localhistories.org/a-brief-history-of-the-british-police/> Accessed 24 July 2025.
Law Enforcement Epidemiology Project (2023) ‘Facts and Figures on Injuries Caused by Law Enforcement’ (University of Illinois Chicago). <https://policeepi.uic.edu/data-civilian-injuries-law-enforcement/facts-figures-injuries-caused-law-enforcement> Accessed 24 July 2025.
Lepore, J. (2020) ‘The Invention of the Police’. The New Yorker.
Loader, I. (2013) ‘Why Do the Police Matter? Beyond the Myth of Crime Fighting’ in Brown, J. The Future of Policing (Routledge).
Lyon, D. (2025) ‘Community-centred design and predictive policing in England and Wales: socio-legal dimensions, benefits, and challenges’ (Conference Paper). <https://repository.canterbury.ac.uk/item/9q4zv/community-centred-design-and-predictive-policing-applications-in-england-and-wales-socio-legal-dimensions-benefits-and-challenges> Accessed 24 July 2025.
NAACP (n.d.) ‘The Origins of Modern Day Policing’. NAACP. <https://naacp.org/find-resources/history-explained/origins-modern-day-policing> Accessed 24 July 2025.
Pearsall, B. (2010) ‘Predictive Policing: The Future of Law Enforcement?’ National Institute of Justice Journal. <https://nij.ojp.gov/topics/articles/predictive-policing-future-law-enforcement> Accessed 24 July 2025.
Peck, T. (2023) ‘Interdisciplinary Methodological Approaches to Desk-Based Socio-Legal Human Rights Research’. Law and Method.
Potter, G. (2013) ‘The History of Policing in the United States’. Eastern Kentucky University Police Studies Online. <https://www.denvertaskforce.org/wp-content/uploads/2023/02/the_history_of_policing_in_the_united_states.pdf> Accessed 24 July 2025.
Richardson, R., Schultz, J. & Crawford, K. (2019) ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice’. (94) New York University Law Review Online 15.
Robinson, M.A. (2017) ‘Black Bodies on the Ground: Policing Disparities in the African American Community’. (48:6) Journal of Black Studies 551.
Sheila, F. (2025) ‘How AI is Setting the Stage for a Digital Jim Crow Era’. CHCI. <https://chci.org/wp-content/uploads/2025/03/Rodriguez_Fara_Predictive-Policing-How-AI-is-Setting-the-Stage-for-a-Digital-Jim-Crow-Era-.pdf> Accessed 24 July 2025.
Sutton, D. & Dahir, F. (2025) ‘Police Use of Force Policies Across America: Regulations from 100 Cities, Post-Floyd Policy Reforms, and Revisiting Constitutional Standards’. Stanford Center for Racial Justice. <https://law.stanford.edu/2025/06/25/police-use-of-force-policies-across-america/> Accessed 24 July 2025.
Tang, J. & Hiebert, K. (2025) ‘The Promises and Perils of Predictive Policing’. Centre for International Governance Innovation. <https://www.cigionline.org/articles/the-promises-and-perils-of-predictive-policing/> Accessed 24 July 2025.
U.S. Department of Justice (2024) ‘Artificial Intelligence and Criminal Justice (Final Report)’. U.S. Department of Justice. <https://www.justice.gov/olp/media/1381796/dl> Accessed 24 July 2025.
Van Brakel, R. & De Hert, P. (2011) ‘Policing, Surveillance and Law in a Pre-Crime Society: Understanding the Consequences of Technology Based Strategies’. Cahiers Politiestudies.
Wu, J., Abrar, S.M., Awasthi, N., Frias-Martinez, E. & Frias-Martinez, V. (2022) ‘Enhancing short-term crime prediction with human mobility flows and deep learning architectures’. (11) EPJ Data Science 53.