Abstract 

The absence of legal regulation can cause serious harm, yet legislative branches around the world still hesitate to regulate technological advancements. The unpredictable character of technology can threaten communal well-being; however, regulating technology can also limit advances. This collection of academic papers showcases the impact technology has had on law across various sectors. The first paper explains AI’s integration into law enforcement and its impact on the global industry, while considering how to balance the safety of society against proper regulation of law enforcement. The second paper evaluates technological bias’s impact on asylum seekers and demonstrates the many ways the universal right to equality is breached across the US and Australia. The third paper presents the effect AI in the media has had on the law, consistently emphasising the lack of primary legislation. The fourth paper explores different countries’ approaches to limiting harm from blockchain and cryptocurrency technology. Finally, the fifth paper demonstrates how traditional EU and UK merger controls must be adapted to handle data merging.

Global Expansion of Artificial Intelligence in Government & Law Enforcement

Introduction

AI and the use of advanced computing technology such as facial recognition in policing pose definite risks to society when not properly deployed and managed. There is a risk of building racial, gender and other biases into AI; these must be mitigated and overseen to ensure compliance with agreed standards and to protect society. Some countries (China, Russia, Iran) already use AI to track citizens and build software to analyse their movements, identify their friends and associates, and even determine whether they are likely to be government agitators. The question is whether the technology’s benefits for good outweigh its uses for evil. It is also a question of remaining competitive with other countries, particularly antagonistic ones, regardless of how they choose to use their technology.

I believe there will always be cases of inappropriate use of technology, but limiting or denying police services the use of advanced technologies will increasingly hand an advantage to criminals and antagonistic countries, who will continue to deploy these technologies against their citizens and rivals.

 

Benefits

Artificial intelligence (AI) is increasingly being used by law enforcement to supplement officers and agencies. AI is seen as a critical component of law enforcement because it can boost effectiveness in a variety of ways, including increasing efficiency. AI is rapidly being utilised for facial recognition and to screen video footage for abnormalities in crowd control and surveillance. If the fundamental purpose of law enforcement is to help deter and solve crimes while also providing society with a sense of security and certainty, AI can be used in a variety of ways to achieve that goal. Facial recognition software is now being used to help locate and identify missing people. According to NCIC statistics on missing and unidentified persons, in the United States alone nearly 88,000 persons went missing in 2017 (1). Law enforcement agents can use facial recognition technology in crowded public areas such as stadiums, parks and airports to identify missing persons.

AI has also proven useful against non-violent crimes such as counterfeiting, petty theft, and financial crimes. Since police departments are often focused on violent crimes, they rarely have the resources or manpower to investigate these offences efficiently. Now, with the help of AI, identifying fake or counterfeit products has never been easier. A startup named Entrupy has designed a device that takes microscopic pictures of materials such as leather, metal, or wood and analyses them for signs of counterfeiting (1).

AI algorithms can also help with financial crimes such as expense-report fraud or stolen credit cards. Banks can use historical data to train AI algorithms, which can then be used in conjunction with transaction monitoring systems to validate or flag unusual transactions.
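To make the mechanism concrete, below is a minimal sketch of this kind of transaction screening, assuming a small table of historical transaction features; the features, figures and contamination rate are invented for illustration, not a production fraud model.

```python
# Minimal sketch: flagging unusual card transactions with an anomaly
# detector trained on historical behaviour. All data is illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions: [amount_usd, hour_of_day, merchant_category_id]
history = np.array([
    [12.50, 9, 3], [40.00, 13, 1], [8.75, 19, 3],
    [55.20, 12, 2], [23.10, 17, 1], [31.40, 11, 2],
])

# Train on past behaviour; assume roughly 2% of transactions are anomalous.
detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(history)

# Score incoming transactions: -1 = flag for human review, 1 = pass.
incoming = np.array([[18.00, 14, 1], [4200.00, 3, 9]])
for tx, label in zip(incoming, detector.predict(incoming)):
    status = "FLAG" if label == -1 else "ok"
    print(f"{status}: amount=${tx[0]:.2f} hour={int(tx[1])}")
```

In practice, a flagged transaction would be routed to a human investigator rather than blocked outright.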

A recent study found that smart technologies such as AI could help cities reduce crime by 30 to 40 per cent and reduce response times for emergency services by 20 to 35 per cent (2). The same study found that cities have started to invest in real-time crime mapping, crowd management and gunshot detection. To protect public safety, law enforcement agencies have implemented in-car and body cameras with facial recognition and biometric technologies. Although less common, these technologies are also found in drones and aerial surveillance, as well as in crowdsourced crime reporting and emergency apps. The AI Global Surveillance (AIGS) Index states that 56 of 176 countries use AI-powered safe city platforms for surveillance, although with different approaches (2). Many countries are exploring the possibilities of using AI to predict crime from surveillance data. This is achievable through machine learning (ML) and big data analytics, which make it possible to navigate large amounts of data in a short time. The objective is to create flexible security systems that can search criminal and terrorist databases in order to deter and, ideally, intercept criminal activity.

 

How it is being used

  • Faster video analysis and redaction

Video analysis is a time-consuming and tedious process for officers; many agencies have reported spending up to two hours redacting a five-minute clip for public release. Officers may likewise spend hours going through footage just to find a single suspect. AI can filter footage for the specific characteristics an officer is looking for, making video analysis far more time-efficient, and can automate redaction, as in the sketch below.
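As a rough illustration of the redaction side, this sketch blurs every face an off-the-shelf detector finds in a clip before release; the file names are placeholders, and a real system would use far more robust detection plus human review.

```python
# Minimal sketch: automated face redaction for body-camera footage using
# OpenCV's bundled Haar cascade. File names are placeholders.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

reader = cv2.VideoCapture("bodycam_clip.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
w = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("redacted.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = reader.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Blur every detected face before the clip is released publicly.
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 5):
        frame[y:y+fh, x:x+fw] = cv2.GaussianBlur(
            frame[y:y+fh, x:x+fw], (51, 51), 0)
    writer.write(frame)

reader.release()
writer.release()
```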

  • Audio analysis and transcription

An officer’s radio transmissions can be a crucial piece of evidence in a criminal investigation. However, sifting through hours of audio can be tiresome and overwhelming for officers. Fortunately, AI can filter through audio clips and highlight essential information; many departments have adopted such tools to make this process faster and less laborious, as in the sketch below.
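For instance, once the audio has been machine-transcribed into timestamped segments, a simple filtering pass can surface the portions worth an officer’s attention; the transcript and keywords below are invented for illustration.

```python
# Minimal sketch: surfacing the relevant portions of transcribed radio audio.
# Assumes a speech-to-text step has already produced timestamped segments;
# the transcript and keyword list are invented.
transcript = [
    (0.0,   "unit 12 responding to the 400 block"),
    (42.5,  "suspect seen leaving in a blue sedan heading north"),
    (118.2, "requesting backup at the intersection"),
    (305.9, "blue sedan located, plate partially visible"),
]

keywords = {"sedan", "plate", "suspect"}

def highlight(segments, terms):
    """Return only the segments mentioning any case-relevant term."""
    return [(t, text) for t, text in segments
            if terms & set(text.lower().split())]

for seconds, text in highlight(transcript, keywords):
    print(f"[{seconds:7.1f}s] {text}")
```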

  • Real-time vocal search

Thanks to advances in audio technology, officers can now vocally input a name and date of birth, and an AI system will quickly read back the suspect’s driver’s licence details, warrant information, recent arrests, and more. Not only is this convenient, it is also much safer for officers who need information on a suspect while driving, as it eliminates distractions.

  • Gunshot detection

Gunshot detection systems are one of the most rapidly expanding applications of AI and ML in law enforcement. Gunshot locator technology can detect where a shot was fired, how many rounds were fired, and even which kind of gun was used; researchers are currently working to extend it to detect which kind of bullet was fired. This will help tremendously with homicide and armed robbery investigations.
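The core of the “where” estimate is a time-difference-of-arrival calculation across several microphones. A toy version, with invented sensor coordinates and timings, might look like this:

```python
# Toy sketch of the core calculation in a gunshot locator: estimate the
# shot's position from the times its sound reached fixed microphones.
# Sensor layout and timings are invented; real systems also classify the
# sound and handle noise, echoes and sensor error.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # metres per second

sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
arrival_times = np.array([1.175, 0.729, 1.344, 0.978])  # consistent with a
                                                        # shot near (350, 200)

def residuals(params):
    # Unknowns: shot position (x, y) and firing time t0.
    x, y, t0 = params
    distances = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return distances / SPEED_OF_SOUND - (arrival_times - t0)

guess = [*sensors.mean(axis=0), arrival_times.min() - 1.0]
x, y, _ = least_squares(residuals, guess).x
print(f"estimated shot location: ({x:.0f} m, {y:.0f} m)")
```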

  • Crime forecasting

AI systems are able to sort through large volumes of historical crime data and use that information to predict where, when, how, and which types of crimes are likely to occur. Place-based predictive policing is commonly practised in a variety of countries: these systems find patterns in where and when crimes have previously occurred so that more officers can be allocated to those areas, as in the sketch below.
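A stripped-down sketch of that pattern-finding step, counting past incidents per map-grid cell and ranking the busiest cells, might look like the following; the coordinates are invented, and real systems add time-of-day, seasonality and decay weighting.

```python
# Minimal sketch of place-based forecasting: grid historical incidents and
# rank cells by past counts. Incident data is invented.
from collections import Counter

CELL = 0.01  # grid size in degrees (roughly 1 km)

incidents = [  # (latitude, longitude) of past reports
    (43.651, -79.383), (43.652, -79.381), (43.655, -79.410),
    (43.651, -79.384), (43.700, -79.400), (43.652, -79.382),
]

def cell_of(lat, lon):
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Allocate extra patrols to the historically busiest cells.
for cell, n in counts.most_common(3):
    print(f"grid cell {cell}: {n} past incidents")
```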

AI’s ability to synthesise large amounts of data, in a fraction of the time a human would need, is what makes it so powerful. However, the main concern with predictive policing is that it relies on historical data, which can reinforce existing biases within the system.

There is also an issue of transparency between police departments and the community. Although many countries have denied this, certain departments use predictive policing to identify individuals and their likelihood of committing a crime. Citizens want to know how AI is being used within law enforcement, but the information released is often vague. This is not to say that AI cannot be used for good, but how it is used varies by country. It is important to recognise how AI can be more advantageous than human intelligence, but it is equally important to acknowledge that it can be used unethically.

 

How Different Countries Utilise AI

China Using AI against Uighur Muslim Population (3)

China’s government has incarcerated up to a million Uighur Muslim citizens in what it calls “re-education camps”, an attempt at ethnic cleansing that has horrified human rights activists all over the world. Among its various efforts is an artificial intelligence system that the government claims can forecast terrorists and identify those in need of “re-education” at the slew of newly constructed camps. The kinds of behaviour now being monitored include which language you speak at home, whether you talk to relatives in other countries, and how often you pray. This information is gathered and used to decide whether people should be subjected to political re-education in the camps.

Surveillance and artificial intelligence technologies are being deployed throughout China. Cameras with AI-powered facial recognition can be found on every street corner, and various pilot projects use AI to give people a “social credit” score, punishing some for certain behaviour and rewarding others for what the government considers good citizenship. Tech giant Huawei alone supplies AI surveillance technology to over 50 countries. Since China is the main driver of AI surveillance, it may only be a matter of time before other countries adopt its unethical practices.

The Constitution of the People’s Republic of China guarantees freedom of religion in Article 36, and Article 4 guarantees the equality of all nationalities and the right to speak any language and practise any religious customs (3). The way China is targeting its Uighur Muslim population violates both articles. However, administrative policies and decisions often contradict these legal protections for “security reasons”. China’s use of surveillance also violates Article 3 (fair and just) of the Artificial Intelligence Industry Alliance, which Chinese president Xi Jinping signed.

Israel’s Use of Biometrics (4)

Israel is utilising biometric data in an attempt at ‘counter-terrorism’. Three particular practices have caught the eye of human rights activists: automated checkpoints between Israeli and Palestinian territories, biometric IDs, and facial recognition in surveillance. In 2017, the Israeli parliament passed a law requiring all Israeli and Palestinian citizens to carry biometric IDs to ensure that Palestinian citizens do not enter Israeli territory unless granted access, and automated checkpoints now rely heavily on these IDs.

The most common deployment of facial recognition technology within the occupied region is at recently updated fixed checkpoints in the West Bank, which govern the movement of persons in and out of the territory. This move toward automation is part of Israel’s larger plan to “humanize” its existing checkpoint system. AnyVision Interactive Technologies, an Israeli business, is said to have supplied the software for the cameras at the access control systems. Until recently, it received venture capital funding from Microsoft, but the tech giant has since terminated the funding over ethical concerns about how this technology is being used. Prior to AnyVision’s advancements, Israeli checkpoints were infamous for long wait times, frequent delays, and the use of unnecessary force against Palestinians. At the upgraded checkpoints, Palestinian workers with permits to work in Israel and biometric IDs simply approach an optical turnstile, scan their digital IDs, and stare into a camera; AnyVision’s facial recognition software can verify their identity in a matter of seconds, and electronic panels open if they are verified, allowing them to enter Israeli territory.

Not only does this practice pose security and privacy risks, it can also normalise military occupation and deepen segregation between the two communities. Israel is able to systematically displace Palestinians, who have been deemed a “threat to national security”, and can now further restrict their movement through automated checkpoints. Israelis, on the other hand, are allowed to enter Palestinian territory freely.

This use of biometrics raises the question of how Israel is legally able to observe its citizens in this manner. Surprisingly, the legal framework surrounding the use of biometric data enables Israeli authorities to deploy this technology. In 2007, the Law of Criminal Procedures (Enforcement Powers – Communications Data) (5) was enacted, allowing Israeli law enforcement to obtain information about citizens from telecommunication companies, including customers’ names, details about their calls, the locations where calls took place, and more. These acts sharply contradict Israel’s Protection of Privacy Law of 1981, which forbids any invasion of privacy, including but not limited to close observation of a person, intercepting private conversations, and photographing people in a private place. However, that law grants immunity to security authorities such as the Israeli police, intelligence departments, and military.

Furthermore, statutes that directly violate the rights of Palestinians still exist today. The Nationality and Entry into Israel Law (5) denies citizenship and entry to citizens living in areas where operations pose a threat to Israel; these areas are typically Palestinian territories, including Gaza and the West Bank. All Palestinians carry green biometric IDs with ‘09’ written on the front, and citizens with this ID who attempt to enter Unified Jerusalem are charged with an illegal stay and sent to prison, a process that is also legal under the Nationality and Entry into Israel law. In 2008, another bill was passed allowing the Israeli government to maintain a database of citizens’ fingerprints and facial features; according to the Ministry of Interior, this database is mandatory for issuing passports and “smart identity cards”. All of these procedures clearly violate human rights, yet Israel has found a way to implement them without breaching its own privacy laws.

 

My conclusions

  • There are legitimate reasons for concern over inappropriate usage and application of AI systems in policing
  • Greater focus should be put on creating transparency with the public to ensure tools are being used in an appropriate manner
  • However, the good associated with using these AI tools outweighs the privacy concerns and potential for incorrect outcomes
  • We should not be restraining our police forces from using the tools available (see Toronto police case)
  • There are numerous positive applications for AI, from solving cold cases to tracking and locating stolen vehicles to preventing cybercrime
  • Criminals are moving quickly to learn and use AI – law enforcement needs to stay as far ahead of them as possible

 

How the Human Rights of Asylum Seekers and Refugees Are Being Violated by Technology

 

Introduction

Systemic racism and discrimination against ethnic minority groups have been apparent and controversial issues around the world. As technology evolves rapidly, many agencies are finding unorthodox ways to use it, sometimes to the detriment of many people. This misconduct is enacted in the name of “national security” and “convenience” and is demonstrated through algorithmic bias, technological experimentation and more. Scrutiny surrounding the mistreatment of asylum seekers, refugees and immigrants has prompted calls for investigation and greater transparency into the infringement of their basic human rights. This paper will examine why governments take precautions against migration and identify how the rights of asylum seekers may be breached in the process. Questions surrounding the involvement of racial prejudice in the screening process will be raised, showing that more needs to be done to uphold the universal right to equality. This examination will be furthered by highlighting the role of technology in violating these rights. Finally, an exploration of the US and Australia’s current laws and regulations will provide insight into the legislation that needs to be implemented or changed to protect the rights of minorities.

 

Treatment of asylum seekers

Whilst most countries uphold the universal right to asylum, by holding individuals in detention centres they disregard freedom from torture and freedom from arbitrary arrest (6, 7). Investigations by the UN have raised serious concerns about the treatment of asylum seekers, especially in prominent countries such as Australia and the US (8). Australia has come under heavy criticism for the appalling conditions asylum seekers live in whilst in its care. This is specifically a concern on Christmas Island, a territory located off the mainland where an environment of fear and violence has been cultivated through a combination of isolation and racism by the guards. After being subjected to abuse and assault, and being referred to as “boaties”, many detainees developed mental health trauma such as PTSD. This is furthered in Nasir v Australia CCPR/C/116/D/2229/2012, where Mr Nasir was convicted of aggravated people smuggling, having been a cook on a boat that brought asylum seekers to Australia. He was imprisoned for 146 days before being charged, violating his freedom from arbitrary arrest.

In the US, the UN’s investigation yielded similar results. Children were being separated from their families and were unable to maintain contact with them; parents were reportedly unaware of where their children were being held or the conditions they had to face alone. Issues were raised regarding both the ethical and the medical impacts of these separations: children requiring additional support did not receive the care they needed, babies were separated from breastfeeding mothers, and lasting mental health trauma may result.

 

Government precautions against migration

    Countries have an obligation to investigate an individual’s claim for asylum; however, it has been alleged that racial prejudice motivates some of these investigations. Although the treatment of asylum seekers is deemed unethical, it is important to explore the Australian government’s reasons for keeping asylum seekers away from the mainland. Whilst a claim to asylum is being investigated, the government takes precautions to ensure these individuals are not a threat to the nation’s best interests, including mandatory health evaluations and reviewing whether a person poses a threat to national security.

     

    Infectious disease

    The rapid spread of infectious diseases has been linked to asylum seekers fleeing from less economically developed (LED) countries. Population movements have played a critical role in the transmission and relocation of diseases across different regions, including HIV infections (9). Less economically developed countries have higher infection rates, as many of their citizens have less access to the information and healthcare needed to prevent or treat disease. HIV is often carried by refugees, and migrants were likely involved in the spread of the disease from Central Africa to America in the 1960s, which later developed into a global pandemic (10). As of 2020, 37.7 million people around the world were living with HIV, with 680,000 deaths that year. 20.6 million people with HIV (55%) were in eastern and southern Africa, 5.7 million (15%) in Asia and the Pacific, 4.7 million (13%) in western and central Africa, and 2.2 million (6%) in western and central Europe and North America. Only 75% of these people were receiving treatment to reduce transmission rates; 9.5 million still had no access to treatment, most of them citizens of LED countries. Because contagious viruses and bacteria can spread through communities quickly, screening arrivals is seen as the best way to avoid another epidemic. Countries must therefore take precautions to protect both asylum seekers who originate from countries with higher infection rates and their own populations.

     

    Character evaluations

    It is important that governments investigate an asylum seeker’s character; however, the results may be distorted, as the examination is often conducted in an unfair environment. Reports show that migrants and refugees are more susceptible to domestic abuse, with one in three women saying they have experienced some form of violence, which may affect their responses in these examinations (11). To be thorough in these character evaluations, governments suggest keeping asylum seekers under observation for a prolonged period in order to see how they interact with others. However, more transparency and improvement in this process is required, as there are concerns about unfair testing: the examination is conducted whilst asylum seekers are kept in stressful environments where their rights to freedom and freedom from torture are being violated. Detention centres may cause them to develop mental health issues and trauma, and could cause them to act violently as an adaptation to survive their situation. This is not a fair evaluation of their character, and more needs to be done to give them a fair opportunity to gain refugee status.

     

    Terrorism

    Furthermore, these character exams are deemed necessary to identify potential threats to national security; however, racial prejudice regarding previous terrorist attacks stems from them. Historically, some terrorist attacks have been committed by people of extreme religious belief, mostly from countries in the Middle East and Africa, where many asylum seekers are from. This is evident in notable incidents such as the infamous 9/11 attack, which claimed the lives of 2,977 people (12, 13). In 2017, an estimated 26,445 people died from terrorism globally. Beyond mortality rates, the attacks left many people fearful and, in turn, changed the way society operated. This is heightened by the media, which has cultivated an environment of fear and paranoia surrounding terrorist attacks; terrorism dominates media coverage, making its true extent difficult to measure. Consequently, border control has been tightened and stricter regulations have been placed on asylum seekers. Part of the character exam results from societal pressure on governments to avoid another attack. This has affected asylum seekers’ treatment and their application process. By isolating them from society and letting our prejudices dictate how we treat them, we are infringing upon multiple human rights, including equality.

    To illustrate this point in practical terms, this paper will reflect on the double standards of migration regulations (14). As of May 2022, the US had exceeded 200 mass shootings in the year, including 27 school shootings, among them the elementary school shooting in Uvalde, Texas, which resulted in the deaths of 21 people. 17,300 people have been killed as a result of gun violence (15). Yet reports show that 97.7% of mass shootings are committed by males, 52.3% of them white. The country’s most dominant demographic is thus also the main perpetrator of these mass shootings.

    Although it is not predominantly people from Middle Eastern and African countries who commit these crimes, they are the ones targeted by laws and regulations. This reveals how racial inequality is embedded within our laws and regulations, highlighting a double standard.

     

    Data collection

    Technology is evolving rapidly, and laws are unable to adapt as quickly; as a result, many unethical practices such as data collection operate outside the boundaries of the legal system. Article 12 of the UN Declaration of Human Rights, established 10 December 1948, states that people have a right to privacy. However, privacy in the online community has been a large issue affecting millions. Many are aware of ongoing privacy concerns, yet there is a common misconception that people control the information they put out on the internet and social media platforms, a misconception reinforced by cookie prompts and the ability to “delete” one’s history. In reality, big platforms like Google and Facebook run multiple algorithms whenever their products are used (16). The information they store includes a person’s age, gender, when and where they use the site, who they talk to and even their race. Once information is stored in a file or uploaded to the internet, it is almost impossible to delete from the web.

     

    Data collection against asylum seekers

    This has particularly affected minority groups such as refugees, whose data is exploited in order to target others from the same ethnic group. On arrival in a country, asylum seekers have sensitive information about their entire lives and families collected. Often this information is stored in a database, where it remains for an indefinite period, leaving them vulnerable to attacks on their privacy and safety. Laws require governments to ask for consent before accessing any electronic data stored about a person; however, many exceptions have been made that allow agencies to take information without consent (17). In the first half of 2021 alone, there were over 63,000 data requests from the US government to Facebook. Stemming from this is the overarching problem that many people do not know who their information is given to or what the recipients intend to do with it. For example, Palantir Technologies, a company worth over $20 billion, specialises in data analytics and has profited by selling data about asylum seekers and refugees to government agencies; this data was used to track and separate families and to aid deportation processes. Data collection continues largely unsanctioned, placing asylum seekers under the microscope without their consent.

     

    Algorithmic bias

    Furthermore, the data stored about asylum seekers contributes largely to the underlying issue of algorithmic discrimination. The UN Declaration of Human Rights ensures the equality of everyone, regardless of ethnicity or gender. Today this right is being violated by algorithms that determine employment, entry into a country and more. As we shift to a more efficient way of life, algorithms have been integrated into the workforce to help assess a person’s capabilities, comparing their age, gender, schooling and work experience with those of previously successful employees. However, these AI systems were built with bias and prejudice, as they compare minorities to the “ideal candidates” at the time of testing, predominantly white males; specific demographics such as women, people of colour and people with disabilities were not included in the testing process. Algorithms were thus more likely to influence employers to hire people with traits most similar to those white males, creating racial tension within society as minority groups are discriminated against. Further difficulties arise over who should take accountability for these algorithms, whether manufacturers, employers or the Federal Trade Commission, which is responsible for the oversight of AI technologies and algorithms. Under a complaint system, people can hold employers accountable when they feel prejudice was involved in their employment process; however, when an AI algorithm is involved, such influences are hard to determine, as there is little transparency about how the algorithms were made. Algorithmic bias has affected the world by secretly discriminating against individuals for not being the “ideal” demographic, impacting their employment chances.
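    To see the mechanism concretely, consider the deliberately simplified sketch below: a model trained on past hiring decisions that favoured one group will score an identically skilled candidate from the other group lower, even though group membership says nothing about ability. All data is synthetic.

```python
# Deliberately simplified illustration of how training on biased historical
# hires reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)          # skill is equal across groups
group = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority

# Historical decisions favoured the majority group regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# For two identically skilled candidates, the model scores the
# minority-group member markedly lower.
same_skill = [[1.0, 0], [1.0, 1]]
print(model.predict_proba(same_skill)[:, 1])
```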

     

    Algorithmic bias against asylum seekers

    The most affected minority groups tend to be people of colour and those with asylum seeker status. Algorithmic bias is also often used against people even after they have undergone the legal procedure and become refugees, limiting their ability to live a safe life as their human right to equality is infringed upon. It hinders them in ever more important aspects of daily life: beyond employment, algorithms influence whether banks will provide them loans. Refugees are also targeted by government agencies using facial recognition cameras because they may be deemed “suspicious”, in other words, of the same ethnicity as former terrorists. Additionally, Hungary, Latvia and Greece have introduced an AI-powered lie detector at border checkpoints (18). Developed under the iBorderCtrl project, it uses facial recognition software to determine whether a person is lying and alerts patrol officers if the person needs further evaluation. Concerns about whether iBorderCtrl can accurately read different shades of skin tone have challenged its effectiveness, and it overlooks cultural differences and the trauma an individual may have experienced. This leads to miscommunication that prevents people from entering these countries. Algorithmic bias continues to be embedded in every aspect of daily life; while it provides convenience, it remains deeply flawed, violating the right to equality by discriminating against asylum seekers and refugees.

     

    Privacy and data laws and regulations for AI

    Australia

    Australia does not have any specific laws targeting the protection of online privacy; instead it relies on the Privacy Act 1988, which broadly governs the handling of personal information, including the collection, use, storage and disclosure of personal information in the federal public sector and in the private sector. Although no new legislation has been introduced, the Privacy Act has been amended over the years; for example, a 2021 proposal sought to enhance online privacy protections by binding social media and other online platforms to stronger penalties. Australia does not currently have a specific law that protects against or regulates algorithmic bias. Australia’s privacy and AI laws have been slow to adapt to the rapid pace of these innovations and are thus allowing agencies to act unsanctioned.

    US

    Historically, the US has contributed largely to the discrimination of individuals as a result of systems based on algorithmic bias. Through the exploitation of facial recognition software to gather information on minority groups deemed “suspicious”, the US has unethically acquired information about people of Middle Eastern, Asian and African descent (19). Although the US has come under criticism for this collection of information from multiple sources, it has been constrained by the law in recent years. In Illinois, a 2008 law entitled the Biometric Information Privacy Act (BIPA) allowed people to sue Clearview AI for harvesting billions of social media photos in order to create facial recognition software, leading to swift restrictions being placed on the company under government monitoring. Algorithms used by businesses are monitored by the Federal Trade Commission (FTC), which enforces federal consumer laws such as the Fair Credit Reporting Act of 1970; this protects consumers’ information from being given to bodies outside the act and places legal responsibilities on companies providing information to agencies. The FTC also enforces the Equal Credit Opportunity Act (1974), which prohibits discrimination in credit transactions on the basis of race, colour, religion, national origin, sex, marital status, age, receipt of public assistance, or the good-faith exercise of consumer rights. Although the US has introduced bills aimed at preventing discrimination, more needs to be done, especially with regard to algorithmic bias and online data collection.

     

    What legislation needs to be implemented?

    Australia and the US are representative of how legal systems around the world are failing to adapt to the advancement of technology, leaving asylum seekers liable to discrimination and attacks on their privacy. Countries need to take immediate action and implement new legislation and regulation in order to stop the violation of human rights by technology. Some promising bills have been introduced that could secure the privacy and protection of individuals. For example, a bill introduced in 2019, the Algorithmic Accountability Act, would require commercial firms to conduct assessments of high-risk systems that involve automated decisions or the handling of personal information (20); this covers algorithms prone to inaccuracy or bias, those handling sensitive information, those raising security and privacy issues, and more. Although the FTC enforces legislation in the US that protects against discrimination and data misuse, more needs to be done to improve its effectiveness (21). One proposal suggests that an international governing body should set global standards for regulating AI and privacy. This would include working closely with developers during the AI coding process to ensure that no prejudice is involved, as well as monitoring and investigating developments and trends in the use of AI, especially in areas of particular human rights risk. Australia, the US, and countries around the world need to place more restrictions on the information gathered by, and the discrimination caused by, technology.

     

    Conclusion

      Laws and regulations around the world need to be tightened to prevent technology from violating the human rights of asylum seekers and refugees. It is imperative that governments do their due diligence in conducting mandatory evaluations of asylum seekers; however, alarming reports have poignantly exposed the inhumane conditions to which asylum seekers are subjected. Their human rights to protection from arbitrary arrest and torture have been violated as a result of their imprisonment in detention centres. These injustices have been accentuated by technology through data collection and algorithmic bias: asylum seekers’ sensitive information is stored and exploited in a movement against ethnic minority groups motivated by prejudiced beliefs, and it is used to discriminate against refugees through algorithms that influence employment and loan processes. These human rights violations can be prevented through the proposed bills and legislation implemented on a global scale.

      Automated journalism and the failure of legal regulation

      Introduction

      The current era has been dubbed the information age: technological advancement and information distribution are at the core of everything. The issue is that the English and Welsh law-making system is slow. Compared with technological innovations that advance at lightning speed, the law cannot address all of the questions raised by the public and judicial bodies. In particular, one of the latest advancements is the integration of AI, software, and algorithms into journalism. This advancement, known as automated journalism (also referred to as algorithmic journalism and robot journalism), is said to be one of the greatest revolutions in history.

      Automated journalism is a way to create a greater number of stories more quickly, more cheaply, and potentially with fewer errors. Such algorithms transform data into written reports by applying templates and natural language generation techniques. These reports can even be personalised to a specific audience, and it is often virtually impossible to tell that an algorithm is the author. This technology requires legal regulation, though the speed at which regulation should be enacted is debated: delayed legal involvement might lead to civil rights breaches and risks to human health and the environment, while hastily created regulation can produce flawed legislation. Nonetheless, automated journalism is a new creation; it is neither perfect nor thoroughly researched, yet it already has vast implications for our world and is only continuing to develop. The law must therefore have regulations to limit the risks from automated journalism and supervise such technological advancements, particularly given the growing concern that automated journalism will replace human journalists in the future (22).
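      A minimal sketch of the template-filling core of such a system appears below; the fixture data and wording are invented, but the mechanism, slotting structured data into canned sentence patterns, is how the simplest automated stories are produced.

```python
# Minimal sketch of template-based news generation: structured data in,
# narrative sentence out. The match data and templates are invented.
def sports_report(d: dict) -> str:
    if d["home_score"] > d["away_score"]:
        lead = "{home} beat {away}"
    elif d["home_score"] < d["away_score"]:
        lead = "{away} won away against {home}"
    else:
        lead = "{home} and {away} drew"
    return (lead + " {home_score}-{away_score} at {venue} on {date}.").format(**d)

print(sports_report({
    "home": "Rovers", "away": "United",
    "home_score": 3, "away_score": 1,
    "venue": "City Stadium", "date": "12 March",
}))
# -> Rovers beat United 3-1 at City Stadium on 12 March.
```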

      Throughout this article, I will analyse the different issues that arise from current laws’ inability to regulate new technologies. There are three main sections: liability, regulation, and relevance to tort law. By separating these three topics, I am able to explore each possibility in depth. In addition, each section has a subsection addressing some form of current regulation, be that legislation, precedent, or soft law; these provide additional support for the argument while going deeper into the topic. I will explore concepts such as the responsibility gap, the question of available regulation for AI, the public’s perception of algorithmic writing, impacts on private law, and future possibilities. Throughout, I will continuously highlight the issue of the law’s speed, arguing that, ideally, just as technology constantly develops, the law must keep adapting at the same pace.

      This article will also embody the theory of technology as a disruptive innovation (23), in contrast to the theory of technology as adaptive (an evolutionary adjustment to specific environmental changes). Certain technologies “disrupt the status quo, alter the way people live and work, and rearrange value pools” (24). Law is the body that should undertake the adaptive role: it should be able to regulate such technological advancements, thus bringing certainty and stability.

       

      Liability

      The English and Welsh legal system strives to support a free, inquisitive press; however, with AI’s integration into modern journalism, obstacles such as the responsibility gap arise. False information and statements can have detrimental economic, political and social implications. Hence, in this section, I will investigate the liability question: where the liability lies. I will do this by analysing the likelihood of liability falling onto each of the parties involved in making an automated journalism article: the AI itself, the programmers, and the editors or media company that publishes the information. Evaluating past case law will serve as the foundation of the argument, alongside an overarching issue emphasised throughout: the lack of concrete legal regulation.

      Relevant Case Law

      This section will go over past cases that have contributed to answering the liability question, and possible defences against defamation claims. Firstly, a landmark defamation case (replaced since 2013) is Reynolds v Times Newspapers (25). This case created the Reynolds test which, if passed, would provide the qualified privilege defence, and sparked debate on whether the Reynolds test is a qualified privilege defence or a public interest defence. It has ultimately been named the test of “responsible journalism”, as it balances individual reputation against freedom of speech. In Loutchansky v Times Newspapers Ltd, the ten Reynolds factors were narrowed down to two, which further emphasised the concept of “responsible journalism” (26). Shortly before the Reynolds defence was replaced by legislation, it was applied in Jameel v Wall Street Journal Europe Sprl and Flood v Times Newspapers Ltd (27, 28). In both cases, the court ruled that because the defendants took appropriate steps to verify their statements before publication, they were not liable.

      If the Reynolds test were still in place today, it could be used to deter harm caused by inaccurate automated journalism articles. Such a “responsibility test” would also encourage journalists to conduct their jobs properly and avoid libel cases (29).

      Furthermore, in McCartan Turkington Breen (A Firm) v Times Newspapers Limited, Lord Nicholls highlighted the importance of the investigative aspect of journalism (30, 31). This statement justifies the importance of journalists ensuring their sources are credible to provide accurate information to the public.

      Artificial Intelligence Liability

      When automated journalism is used, the robot collects information, analyses it, and assembles it into a report to create a narrative. This portion of the paper explores whether an error occurring in these steps is sufficient grounds for liability to be placed on the software itself. A key factor here is the concept of “legal entities”: the law has developed legal entities that can be treated as persons and held liable, including corporations, trusts, families, unions, and even ships (32). Therefore, the possibility of AI being recognised as a legal entity should be considered.

      Unfortunately, the AI currently used in automated journalism has no cognitive thinking capabilities. It can collect and analyse information, but it does not know right from wrong. In a sense, the algorithms can be too “non-discriminating”: they lack independent decision-making and are not autonomous enough to be attributed authorship. Hence, the answer to that possibility is no, at least not yet.

      Automated journalism is limited, and there are specific criteria the foundational information needs to meet for it to function. These criteria loosely divide narratives into two categories: “clear-cut” and “edge” stories. Clear-cut stories rely on databases that are very concise and simple for a robot to interpret (e.g. the occurrence of “A” led to this amount of “B”); sports and weather reports are the most common examples. Edge stories, by contrast, pose a more complex job: the software has to access the data, identify what would appeal to the audience, classify and prioritise it, and only then produce the article, as with election result narratives. Comparing the two, an “edge” narrative carries a higher possibility of an erroneous statement being produced, and hence a greater likelihood of causing harm, than a “clear-cut” narrative. From there arises the question of liability.
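      In contrast to the simple template fill sketched earlier, an “edge” story needs a prioritisation step before any text is generated. A hedged sketch of that step, ranking invented election results by newsworthiness, might look like this:

```python
# Sketch of the prioritisation step an "edge" story (e.g. election results)
# requires before any text is generated: decide what is newsworthy.
# Constituency figures are invented.
results = [
    {"seat": "Northfield", "swing": 1.2, "changed_hands": False},
    {"seat": "Harbourside", "swing": 9.8, "changed_hands": True},
    {"seat": "Eastgate", "swing": 4.1, "changed_hands": True},
]

def newsworthiness(r: dict) -> float:
    # Seats that changed hands matter most; big swings come next.
    return (10.0 if r["changed_hands"] else 0.0) + abs(r["swing"])

for r in sorted(results, key=newsworthiness, reverse=True):
    note = " (seat changed hands)" if r["changed_hands"] else ""
    print(f'{r["seat"]}: swing {r["swing"]}%{note}')
```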

      Programmer Liability

      News outlets like the BBC, Thomson Reuters, and the Press Association use automated journalism software provided by software vendors or in-house developers. This software already has an abstract framework that dictates certain decisions the algorithm makes, which the programmers indirectly determine during the production stage. “Either you have human beings, a.k.a. editors, making those decisions, or you have algorithms making those decisions. And who writes the algorithms? People.” (33). Therefore, in the scenario where a false narrative is published and enough harm is caused for a claim to be made, some argue that the programmers should hold the liability.

      In his study “Automated Journalism and Freedom of Information”, Matteo Monti suggests that programmers should be held liable to an extent: “some ad hoc laws could be necessary to assign liability to programmers.” On the other hand, S.C. Lewis et al. propose that the only scenario in which programmers could be held liable for harms caused by automated journalism is “if an algorithm were intentionally programmed to develop and produce false content”. Rather than programmers intentionally causing malice through their software, it is more likely that harm is caused unintentionally through underlying AI bias (34): since algorithms reflect their programmers’ views, they are likely to carry bias.

      Overall, the liability falling onto the programmer would, in most cases, be an exception rather than a default situation, bearing in mind that there is most likely a contractual agreement between the software vendor and media company. Unfortunately, the question cannot be answered with certainty at this time due to a lack of legislation and court precedents.

      Journalist / Media Company Liability

      The name automated journalism, alongside the description “an AI algorithm [that] independently collects and analyses data and then writes news articles”, suggests that the algorithm operates on its own (35). However, that is not entirely the case. Rather than being completely autonomous, it functions collaboratively: humans still provide a range of data and are always involved in the editing stage. Thus, I will explore the possibility that liability rests on the editor’s or media company’s failure to carry out those processes adequately.

      The vital job of the journalists in this process is to provide the sources and edit the final output, while the algorithms take the data provided to them and classify it by priority. The issue occurs with edge stories: such stories require more care, and there is a journalistic responsibility to check and revise the sources. Poor data equals poor results, which could lead to the spread of false information or legal suits. The second job of the journalists is editing, which involves revising the final outcome and fact-checking that the information presented is correct (36). As there is a lack of legislation regarding automated works and their authorship, there is a corresponding lack of certainty regarding liability.

      Arguably the only piece of legislation addressing the question of authorship is the Copyright, Designs and Patents Act, which suggests that authorship lies with the person who undertook the arrangements necessary for the work’s creation.

      One way AI integration in journalism can be regarded is as a “tool” used by journalists. This idea supports the suggestion that authorship and liability may lie with the journalists organising the article. Nonetheless, in many cases a civil claim is brought against the publishing company, which might therefore take on the liability. According to an interview featured in “I, Robot. You, Journalist. Who is the Author?”, many media organisations emphasise the collaborative nature of their work: “There is no single person responsible for the creation of that content, the way we do it today” (37). Even though some organisations are willing to take on liability as an organisation, that is not always the case. The absence of legislation or case law attributing liability to one specific body creates flexibility and, therefore, unpredictability, contradicting the rule of law (38).

      Conclusive Summary

      One of the main determining factors of liability regarding AI will be autonomy. Summarising the former points: firstly, at its current stage, AI cannot be attributed authorship and hence cannot be held liable. Secondly, it is doubtful that liability will fall on the programmer, although specific case facts must be taken into consideration. Thirdly, the likeliest bearer of liability in the case of false news or defamatory statements produced by automated journalism is the journalist responsible for the article. Due to the absence of legislation specifying to whom authorship of automated work should be attributed, the question of liability cannot be answered with certainty.

       

      Regulation

      “It is difficult if not impossible for a slow-moving regulatory apparatus to take aim against such a fast-moving target,” Gary Marchant et al. write, reiterating the struggle of the legislature in the face of technology (39). Nonetheless, this difficulty does not mean that regulation should go unattempted. This section argues two things: first, that there is a glaring absence of regulation of AI in the media field; and second, that regulation in the form of legislation is preferable. It will first go over the current Acts in place, then compare US and EU AI regulations, and conclude by discussing the ideal form of regulation.

        Current Regulations

        The long-established common law system of England and Wales and the media companies operating within it are not in sync with the unfamiliar development of automated journalism. In many cases, the private sector looks for ways to develop, and media outlets search for means to increase the speed of publication. In the absence of aid from the legislative and judicial branches in regulating the implications of AI, “soft law” and self-regulation have been proposed to limit adverse outcomes.

        Such “soft law” includes the IPSO Code of Practice, which newspapers, magazines, and electronic publishers have contractually agreed to follow (40). The first clause, “Accuracy”, is the most relevant, compelling publishers to “take care not to publish inaccurate, misleading or distorted material including pictures” and to correct mistakes promptly (41). It continues: “Newspapers, whilst free to be partisan, must distinguish clearly between comment, conjecture and fact.”

        Similarly, there is the NUJ “Code of Conduct”, agreed upon by NUJ members (42). Clause 2 requires that “information disseminated is honestly conveyed, accurate and fair.” The issue with both of these “soft law” regulations is that AI was not considered during their creation, and even though they could be updated, they remain unchanged. These codes create an ethical level of responsibility; however, not only do they not apply to AI, they carry no punishment if broken. Current regulations are too weak, and AI is too imperfect not to cause harm. Therefore, it is critical that proper, fixed legislation be established to protect the media and the public.

        Comparison

        One of the main reasons the legislative branch is so reluctant to create legal regulation is uncertainty about how to balance allowing innovation to prosper against protecting citizens’ civil rights. The US and EU regulations showcase two imperfect but more developed examples of AI regulation. One of the goals of the US’s NIR was to promote human-robot collaboration in different environments (43). The EU’s “RoboLaw” project, meanwhile, embraced a new manner of regulation, shifting from “the regulation of the technology to forms of regulating human behaviour with technology” (44, 45). While the NIR emphasises the accessibility and effectiveness of the robot, RoboLaw focuses on the protection of human personhood, limiting risks to people.

        Legislation and Precedent

        Law has a duty to adapt to technology’s unpredictability and provide certainty. AI in the media raises many concerns over liability and authorship in relation to breaches of tort; therefore, the debate over who should take primary responsibility for creating regulation must be raised. Legislators taking primary responsibility would be able to use their democratically enacted power to address how technology is regulated with certitude, for example by creating an Artificial Intelligence Act covering the scenarios in which AI would be liable. Currently, however, legislators prefer to leave the matter to precedent, because precedent is “incremental”, as Jonathan Morgan argues in his “Tort and Technology” paper (46). Such a characteristic is ideal in tort cases: deciding on a case-by-case basis allows judges to gain knowledge and experience about how to remedy harms arising from technological advancements.

        On the other hand, the power of judges is constrained, and the certainty of legislation is vital in the regulation of AI, given its immense impact on our society. Lord Goff supports this point: “…it is more appropriate for strict liability in respect of operations of high risk to be imposed by Parliament, than by the courts. If such liability is imposed by statute, the relevant activities can be identified, and those concerned can know where they stand.” (46).

         

        Relevance to Tort Law

        The legal regulation of AI poses broad questions; with automated journalism in particular, a breach of tort, for example defamation, is the most likely harm. This section will first explore the existing statutes that could shed light on liability, then explain the effect on defamation and how the public’s heightened perception of algorithmic credibility influences it.

        Current Regulation

        Specific legislation on AI regulation or automated work is yet to be established. However, some relevant points have been raised in other legislation, such as the Automated and Electric Vehicles Act, the Copyright, Designs and Patents Act, and the Defamation Act (47, 48, 49).

        The Automated and Electric Vehicles Act 2018 states that in the case of an accident, liability falls on the insurer if the vehicle was insured. Moreover, section 9 of the Copyright, Designs and Patents Act offers another possible answer to the liability question by specifying to whom authorship is attributed: “‘author’, in relation to a work, means the person who creates it”. These pieces of legislation provide an insight into Parliament’s perspective on liability. It is possible to assume that in the case of AI, the legislature would prioritise protecting individuals’ reputations while still balancing innovation against individual rights.

        The Defamation Act 2013 complicates the liability question. Section 4, which abolished the previously discussed Reynolds defence, provides a concrete defence for those publishing material on a matter of public interest, applied at the judgement of editors (50). It also imposes stricter conditions on the burden of proof, thus strengthening freedom of speech (51). Such an addition grants journalists a greater threshold of liberty; however, it also creates greater uncertainty in light of concepts like automated journalism.

        Correlation to Tort Law

        Journalism runs along a thin line between freedom of speech and the possibility of ruining someone’s reputation. In certain situations, journalism has failed to comply with legislation against defamation, and journalists have been found liable. This portion will highlight the importance of creating legal regulation, given the perception of automated journalism’s increased credibility and its effect on libel.

        In the occurrence of a libel claim, it is safe to assume the responsibility will not lie with the algorithm, because the algorithm is not the one that failed to fulfil a duty. Even though a defamatory statement would be the basis of the civil claim, the underlying failure is one of negligence: it is the human journalists’ responsibility, under multiple codes of conduct and their professional duty, to ensure that what is provided to the public is correct and not harmful. The claim would therefore likely be brought against the individual journalist or media outlet for failing to act with reasonable care in performing their professional duties.

The issue with the absence of regulation is that, without it, there is no deterrent encouraging journalists to act with reasonable care. Furthermore, the harm caused by a defamatory statement written by an algorithm may be more significant than one written by a human, because the public is likely to perceive the algorithmically written article as more credible. This theory is supported by a study in “Enter the Robot Journalist” (52). Presenting two stories with the same facts, Christer Clerwall asked participants which they thought was written by an algorithm and which by a human. Over a third of the interviewees mistook automated writing for human writing, and the algorithmic writing received a higher score on trustworthiness. These findings pose a significant threat in the event that automated writing produces a defamatory statement, as people are likely to overlook its automated character.

In another study, reported in “When Reporters Get Hands-on with Robo-Writing,” a Thomson Reuters journalist (H) reflected that “automation is only as good as the data” (53, 54). Likewise, another Thomson Reuters journalist (B) did not think that “[automated journalism] could be trusted to go out unchecked at the moment.” Other interviewees believed that the algorithms’ inability to highlight shortcomings could suggest that data had been tampered with, possibly leading to a defamation claim. Overall, the participants’ generalised view was that automated journalism was not perfect and certainly not yet autonomous. Even the experiences of experts in the field suggest that the enactment of legal statutes is essential to promote deterrence.

         

        Conclusion

Journalists’ negligent performance is closely tied to media companies’ aims. Media outlets seek to gain revenue by cutting costs and increasing publication speed; as a result, they sacrifice the accuracy of the information published. Furthermore, the pressure on speed does not allow journalists to perform their jobs properly: an overworked employee may miss a flawed algorithmic input, leading to the publication of false information. Many factors might lead to libel cases, and not all of them can ever be mitigated.

On the other hand, the legal system of England and Wales can adapt to new technological advancements by developing legislation, thus promoting deterrence and limiting the harms caused by technology such as automated journalism. With the planned increase and spread of such algorithms, this has to be done sooner rather than later to bring stability.

The European Parliament’s resolution on Civil Law Rules on Robotics contemplated “… creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons …” (55). Attributing legal personality to robots or algorithms would provide a completely new outlook on the liability question: while there is currently no possibility of AI itself being liable, with the status of an electronic person there would be. However, this status will only merit consideration if automated journalism becomes entirely autonomous.

The ideas this article proposes are only a fraction of what needs to be considered before legislation is made. The legislature should take into account the effects of automated journalism on other areas of law, such as criminal law and contract law, as well as the possible remedies and punishments that would follow from the verdicts of such cases. However, it is unlikely that legislation will be created in the near future; until then, precedent should provide sufficient support in dealing with technological advancements. Nevertheless, for the English and Welsh system to thrive, it needs to look ahead rather than work only on a case-by-case basis. Legislation and precedent make up the common law system, and each needs the other to function at its best.

          How cryptocurrency is regulated around the world

This article compares different countries’ responses to cryptocurrency and blockchain, as well as their regulatory approaches. The biggest concern with blockchain technology is its decentralised nature and the inability to trace transactions. Such concerns have had a considerable impact on banking industries and on certain fields of law, such as commercial law and contract law. This article explores those challenges and how countries plan to mitigate further impacts.

          Introduction

As technology continues to advance, changing the way we interact in our everyday lives, it has affected important aspects of society, including our monetary system through the implementation of cryptocurrency. Many countries, such as Switzerland, Singapore, and Malta, have developed bespoke regimes in order to attract business activity as cryptocurrency hubs. Other countries, however, have either placed significant restrictions on cryptocurrencies or banned them entirely. This article explores how different countries have responded to the use of cryptocurrency and how they plan to mitigate the legal concerns and impacts.

           

          History

The idea for cryptocurrency was first formulated in the late 1980s: a currency that could be exchanged untraceably, without the need for centralised entities such as banks. The earliest version materialised in the form of DigiCash, created and implemented by American cryptographer David Chaum in 1995. DigiCash used software to withdraw money from a bank and required specific encrypted keys before the money could be transferred to the recipient. A predecessor to Bitcoin, Bit Gold, was designed in 1998 by Nick Szabo and required users to expend computing power to solve cryptographic puzzles in order to receive their reward. However, Bit Gold was never implemented, unlike its successor Bitcoin in 2009, the first successfully implemented cryptocurrency, which at the time of writing is both the highest-valued and the most popular form of cryptocurrency.

           

          Code

Cryptocurrency is digital currency which can be exchanged over computer networks without the use of centralised authorities such as banks (56). In other words, rather than owning something tangible, holders of cryptocurrency own a key that allows them to move units of currency to a recipient without the use of a third party. To ensure security during such transactions, cryptography is used to transmit and store data in advanced mathematical encoding, with the encrypted keys ensuring that only authorised recipients can process and read the data stored.
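To make the key mechanism concrete, the following minimal sketch in Python (assuming the widely used third-party "cryptography" package; the transaction string and all values are invented for illustration, not any particular coin's real format) shows how a private key signs a transfer and how anyone holding the matching public key can verify it.

# Minimal sketch: authorising a transfer with a private key and verifying it
# with the public key. Assumes the third-party "cryptography" package; the
# transaction data is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # held secretly by the owner
public_key = private_key.public_key()                  # shared with the network

transaction = b"transfer 0.5 units to recipient-key-123"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

try:
    # Any node can check that the owner authorised this exact transaction.
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("transaction authorised")
except InvalidSignature:
    print("transaction rejected")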

           

Mining

Units of cryptocurrency are created through mining, a process in which computing power is directed at solving complex mathematical problems in order to generate coins (57). There are various forms of mining: CPU, GPU, ASIC and cloud. In the early days of cryptocurrency, the central processing unit (CPU) method was most commonly used: the CPU performs the necessary calculations to mine cryptocurrency. However, this method has largely been discarded, as it incurs high electricity and cooling costs for a process that requires months to garner small profits.

The graphics processing unit (GPU) method uses multiple GPUs to maximise the computational power directed at mining, and is more efficient than CPU mining. Application-specific integrated circuit (ASIC) mining uses computers designed specifically for mining cryptocurrency and is therefore even more efficient than GPU mining. However, its efficiency comes at the price of increased expense, and as the level of mining difficulty increases, such computers are quickly rendered obsolete. As such, cloud mining has become the more popular method, whereby individuals rent a mining rig online from a cloud mining host for a specific time period.
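For a sense of what "solving complex mathematical problems" means in practice, here is a toy proof-of-work loop in standard-library Python. It is a sketch under simplified assumptions (an invented block header and an artificially low difficulty), but the structure, hashing the same data with successive nonces until the output meets a target, is what CPU, GPU, ASIC and cloud rigs all race to perform at enormous scale.

# Toy proof-of-work miner: hash the header with successive nonces until the
# digest starts with enough zeros. Header and difficulty are illustrative.
import hashlib

def mine(block_header, difficulty):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # a valid proof of work earns the reward
        nonce += 1

nonce, digest = mine("example-header", difficulty=4)
print(f"nonce {nonce} produced {digest}")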

           

          Blockchain

Cryptocurrencies operate on a public ledger known as a blockchain, which is essentially a database shared across the nodes of a computer network (58). Blockchains record all transactions made and held by holders of cryptocurrency, and they are the key component in decentralising the currency and removing the need for centralised authorities.

What differentiates blockchains from regular databases is how the data is stored and structured: whereas data is typically stored in tables, in blockchains information is stored in groups known as blocks. Each block has a certain capacity for information, and when the storage limit is reached, the block closes and links itself to the previously closed block, forming a chain of information and data, hence the name blockchain. When a block is added to the chain, the information stored becomes set in stone and the block is allocated an exact timestamp, effectively creating an irreversible timeline of data, given that it is implemented within a decentralised ecosystem. This allows information to be recorded and distributed but protects it from being edited, ensuring the security of the data against external alteration.
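The sketch below (standard-library Python, with invented transactions) illustrates that structure: each block stores its predecessor's hash and a timestamp, so editing any earlier block changes its hash and visibly breaks every later link.

# Toy blockchain: blocks carry a timestamp and the previous block's hash,
# so any retrospective edit invalidates the chain. Data is illustrative.
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    chain.append({
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

def chain_is_valid(chain):
    # Each block must point at the true hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["A pays B 1 unit"])
add_block(chain, ["B pays C 2 units"])
print(chain_is_valid(chain))                       # True
chain[0]["transactions"] = ["A pays B 99 units"]   # tamper with history
print(chain_is_valid(chain))                       # False: the edit broke the links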

          US

In the US, cryptocurrency is not considered legal tender, but exchanges are legal and fall under the Bank Secrecy Act (BSA). This means that crypto exchange providers are responsible for registering themselves with the Financial Crimes Enforcement Network (FinCEN), implementing an anti-money laundering programme, and submitting reports and records to the authorities. The US Treasury has introduced a new regulation requiring more data collection from exchange providers: they must submit suspicious activity reports for transactions over 10,000 USD, and wallet owners must identify themselves when making a transaction over 3,000 USD. This regulation is expected to be implemented in fall 2022. The Biden administration’s Infrastructure Bill classified crypto exchange providers as brokers, who must comply with AML (anti-money laundering) and CFT (combating the financing of terrorism) obligations.

          China

In China, cryptocurrency is not considered legal tender, nor are exchanges allowed. The People’s Bank of China (PBOC) banned financial institutions from handling bitcoin in 2013, then banned initial coin offerings and cryptocurrency exchanges in 2017. Cryptocurrency is recognised as property only for the purpose of inheritance. In 2021, China banned all cryptocurrency mining, which had a significant impact on the crypto market, since most mining took place in China. It is unlikely that these bans will be lifted, but China has been working on its own digital currency, the e-CNY, which is intended to replace cash and coins and to be used to pay for goods and services (59).

          Singapore

Singapore does not consider cryptocurrency legal tender, but exchanges are legal, and exchange providers must be registered with the Monetary Authority of Singapore (MAS). MAS requires all crypto firms to obtain an operating licence. Although cryptocurrency is not legal tender, it is treated as a “good” under Singapore’s Goods and Services Tax. MAS continuously warns the public about investing in cryptocurrencies and has prohibited crypto firms from advertising their services to the public. Since China banned all crypto transactions, many high-profile crypto businesses have migrated to Singapore, leading to speculation about more progressive regulations in the future.

          Australia

Cryptocurrency and crypto transactions are legal in Australia. Exchange providers must be registered with the Australian Transaction Reports and Analysis Centre (AUSTRAC), which requires crypto exchanges to register, identify and verify users, maintain records, and comply with AML and CFT obligations. In 2017, Australia legalised all cryptocurrencies and stated that they should be treated as property, making them subject to Capital Gains Tax. In 2019, AUSTRAC forced exchanges to delist privacy coins (anonymous cryptocurrencies). Australia has also been moving towards a retail central bank digital currency, and its progressive stance on cryptocurrencies has caused an increase in crypto transactions: from 2018 to 2019, crypto transactions increased by 63%.

          UK

The UK does not consider crypto to be legal tender; however, exchanges are legal. Exchange providers must register with the Financial Conduct Authority (FCA), which requires them to comply with AML and CFT requirements. Any gains or losses on cryptocurrencies are subject to Capital Gains Tax (60).

           

          Legal complications

          Jurisdiction

One of the main reasons the use of crypto and blockchain technology has skyrocketed is its decentralised nature and resistance to tracing. Although this system has benefits, it raises many jurisdictional concerns. Since there is no central authority over these technologies, suspicious transactions cannot be interfered with, and bitcoins cannot be confiscated. Preventative measures exist to limit the number of suspicious transactions; however, these regulations vary depending on the country in which the bitcoin was purchased. The issue is that, since bitcoin cannot be traced, it is nearly impossible to track down its origin, which means that many conflicting legal frameworks can apply (61).

          Commercial Use

As previously mentioned, bitcoin is anonymous and untraceable, which can be dangerous when users decide to use crypto as collateral. Although bitcoin is generally eligible to be used as collateral, it is not exactly safe: since it is untraceable, a user could be investing in someone else’s collateral without being aware of it. This is also dangerous for the person posting crypto as collateral, as someone could invest in it at any moment and it would be impossible to get back. Using crypto as collateral is becoming increasingly popular, which is why users are pushing for more legislative protection and for waiver clauses to facilitate liquidity.

          Contract Law

Blockchain developers are currently working on smart contracts, computer programs that encode a contract and automate its execution once the agreed terms have been met. Smart contracts are likely to become permissible as long as they follow a code-only model. The worry with smart contracts is that they may be unable to meet certain legal requirements. One of the largest concerns is that a smart contract could enforce terms that are not legally permissible. Another is that many types of contracts (wills, divorces, and other family law contracts) cannot be signed electronically. These two issues make it difficult to predict whether smart contracts will be permissible around the world (62).
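As a loose illustration of the code-only idea (a plain-Python analogy, not how on-chain contracts are actually written; the parties and terms are invented), the sketch below shows the defining feature: once the coded conditions are satisfied, execution happens automatically, with no editor, lawyer, or signature in the loop, which is exactly why contracts requiring formalities sit uneasily with it.

# Toy escrow-style "smart contract": execution is automatic once every coded
# condition holds. Parties and terms are hypothetical; real smart contracts
# run on a blockchain, not on a single machine like this.
class EscrowContract:
    def __init__(self, seller, buyer, amount):
        self.seller, self.buyer, self.amount = seller, buyer, amount
        self.conditions = {"payment_deposited": False, "goods_delivered": False}
        self.executed = False

    def record(self, condition):
        self.conditions[condition] = True
        # No human discretion: the code alone decides when to perform.
        if all(self.conditions.values()) and not self.executed:
            self.executed = True
            print(f"released {self.amount} units from {self.buyer} to {self.seller}")

contract = EscrowContract("Alice", "Bob", 10.0)
contract.record("payment_deposited")  # nothing happens yet
contract.record("goods_delivered")    # both terms met: executes automatically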

           

          Conclusion

Cryptocurrency is reflective of the problems imposed on the law by the rapid advancement of technology, and it demonstrates how countries are learning to adapt to the surrounding issues through their own laws and regulations. Having surveyed the different laws and regulations for crypto around the world, this paper calls for an international governing body to maintain a global legal standard. This would reduce the inconsistencies between legal systems and provide transparency as to how crypto transactions should be regulated.

          Impact of data mergers on merger controls

          Introduction

Competition law supports high-quality consumer benefits through the promotion of market competition. However, obstacles such as abuse of a dominant position and foreclosure through merger acquisitions have arisen. Moreover, the digitalisation of the economy has brought a new perspective on harm to competition. With digital giants merging their data, the concentration of market power has become exceedingly possible, as the law is not quick enough to adapt. While a fight to collect valuable data occurs in the paid sector of the market, some argue that such data only increases acquirers’ market power and deprives consumers of benefits, while others say that the diversification of data will lead to a positive increase in indirect network effects, as more data attracts more advertisers. Members of the antitrust community and regulatory bodies such as the CMA and the European Commission have acknowledged that not all aspects of digitalisation are harmless. They have recognised the need to balance promoting consumer welfare with supporting a robust digital economy (63), and have thus called for a reconstruction of merger control; however, the extent to which this should be done remains debated.

This article explores the necessary adaptation of new strategies for merger controls in the digital market. It does not suggest that the current competition regulatory bodies have been unsuccessful in controlling mergers; on the contrary, it shows that merger controls need further adjustment to eliminate any chance of harm to competition. I have structured this paper into three main sections: an introduction, the motivations behind merger control adaptation, and an analysis of data merger case law, concluding with a discussion and final remarks. The introduction presents the main subject of the paper, first introducing the importance of data and the perspective of data as an asset, followed by an overview of merger control and its necessity in regulating the digitalisation of the market; the section also clarifies the advantages of data merging for firms. The second section breaks down, idea by idea, the inconsistencies between older merger controls and what must be adapted in them to suit the digital market. These ideas reinforce the necessity of new merger control strategies and include thresholds, indirect network effects, Article 102 TFEU (abuse of a dominant position), input foreclosure, and the absence of proper tools and frameworks for carrying out merger controls. The final section analyses three recent cases (EU and UK, but involving UK companies), some presenting a successful adaptation of new merger control strategies and some less so. The article concludes with a discussion of possible future strategies and a reflection on the main idea.

           

          The Power of Data

Data has become an asset, a currency. Big data is a term defined differently by different demographics; however, there is a common understanding: the term illustrates the power of data, especially in the digital economy. The data supply in the digital market has created an incredibly comfortable environment for consumers, in which services are free of charge and any need is taken care of in advance. The rise of Big Data has subsequently allowed for the advancement of network effects, in particular indirect network effects (64). This data can be described by its variety, value, and velocity, which demonstrate the significant scale of its impact on the market. Likewise, its benefits, improving production processes, forecasting market trends, and enabling targeted advertising based on observed social behaviour, have appealed to many advertisers and compelled more data giants to undertake data merging (65, 66), further expanding the scope of data collection. Data being a competitive asset with zero marginal cost is bound to cause a clash (67). Notwithstanding, attempts have been made to regulate the abuse of such powerful data through competition law, specifically merger review controls.

           

          Necessity of Merger Controls

First, the critical factor to consider is whether merger controls are even necessary. Since 1990, 8,000 mergers have occurred, but only 30 of them were prohibited (68). Some therefore argue that merger controls need to be reconsidered overall, and not only in the digital market sector. A prohibition rate of roughly 0.4% suggests the underperformance of merger controls, and the concern regarding the uncertain future of the digital market’s development arises. There is no previous standard that can set an example, current theories of harm and metrics cannot be applied, and in the extreme scenarios that merger control cannot handle, the consumer may be harmed. The three domains of competition law would require reevaluation in the context of digitalisation (69). The prescription of appropriate remedies, the redefinition of tools, and the consideration of effects on a multi-sided market are all factors that showcase the need for competition agencies to adopt new merger controls.

           

          Acquirer’s Perspective: Advantages of Data Merging

Data merging is the next generation of acquisitional merging, one that exemplifies the digitalisation of the economy. It is a way for firms in data-driven markets to progress and develop even further. Examples of such entities include social media platforms, data-sharing firms, and companies that generally provide indirect networking services. Consumers may think that the personalised ads they receive are the result of their long-term use of a service, with the platform predicting their desires. However, that is not entirely the case: such targeted ads and posts that cater to our desires are, in most cases, based on a large amount of processed personal data.

Data is an asset. The long-term use of services is not necessary but beneficial, as it provides the entity with more assets. More assets mean better-targeted ads, which mean more interested advertisers. However, as the variety of data dictates its value, data merging poses a convenient prospect. Acquiring a company for its data is common in data-driven markets: the data of two firms can be combined to produce better-quality data that can be sold or used. In the context of competition, having more data allows for better-quality services; hence a competitor must spend more money to match the same quality of service.

           

          Causes for New Merger Controls

The digitalisation of the economy through data has caused several scenarios that were not considered during the establishment of merger control strategies in either the EU or the UK. The issue with current merger controls can be generalised as one of the quantification of factors, such as turnover values, harms incurred, and share amounts. Data, however, is not strictly quantifiable; it constantly develops and progresses, as Big Data demonstrates. As a result, the law was unprepared for new threats to competition such as indirect networking services, data foreclosure, and killer acquisitions. This section explores the causes behind integrating new merger control strategies.

The merging of entities is a complicated process that competition agencies have successfully regulated in many cases. The M&A appraisal process has several steps before an acquisition can occur, the primary one being merger control proceedings. The relevant regulatory body of the jurisdiction conducts such proceedings: for the EU it is the European Commission, while for the UK it is the Competition and Markets Authority (CMA). Each body has its own powers, and their merger control processes may differ; nevertheless, their core purpose is to determine whether a merging of entities will substantially lessen competition.

Before these merger controls are carried out, the regulatory bodies need to be informed of an acquisition. A point to consider is that the EU’s and UK’s merger notification regimes are distinct: the EU adopts a fully mandatory notification system, while the UK’s remains voluntary. Before an investigation takes place, the regulatory bodies have to ensure the entities meet certain thresholds, and the universal threshold metric is monetary turnover.

           

          Thresholds

For any acquisition to occur in the EU, the merging parties have to seek approval and undergo an investigation by the European Commission under the mandatory notification system (70). However, notification is only required if the established monetary thresholds are passed. Examinations are not exclusive to circumstances in which the thresholds are exceeded, but this is the most common criterion. The EU thresholds are met where the combined worldwide turnover of the undertakings exceeds 5,000 million euros and the EU-wide turnover of each of at least two undertakings exceeds 250 million euros. Similarly, under the Enterprise Act, for a merger to undergo a CMA examination in the UK it must amount to a “relevant merger situation” (71). Such a situation arises where the target’s turnover exceeds £70 million, or where the transaction creates a share of the supply of goods or services of 25% in the UK or a substantial part of the UK (72). This is where the issue of data mergers arises: these thresholds do not account for the indirect network effect aspects of the digital economy.
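Purely to illustrate how mechanical these tests are, and therefore how a zero-revenue data platform can slip through them, the following sketch encodes the figures just described as simple checks. The example firm's numbers are invented, and real merger assessment involves far more than arithmetic.

# Illustrative encoding of the turnover tests described above; the example
# firm's figures are invented and real assessments are far richer.

def eu_notification_required(worldwide_turnover_m_eur, eu_turnovers_m_eur):
    # Combined worldwide turnover above EUR 5,000m, and at least two
    # undertakings each with EU-wide turnover above EUR 250m.
    return (worldwide_turnover_m_eur > 5_000
            and sum(t > 250 for t in eu_turnovers_m_eur) >= 2)

def uk_relevant_merger_situation(target_uk_turnover_m_gbp, share_of_supply):
    # Target UK turnover above GBP 70m, or a 25% share of supply in the UK.
    return target_uk_turnover_m_gbp > 70 or share_of_supply >= 0.25

# A data-driven platform with millions of users but no revenue:
print(eu_notification_required(0.0, [0.0, 0.0]))   # False: no EU notification
print(uk_relevant_merger_situation(0.0, 0.10))     # False: no CMA examination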

           

          Indirect Network Effects

Indirect network effects are a relatively new concept, predominantly exhibited by media-sharing platforms. “Indirect network effects occur in multi-sided markets where positive spillover network effects from one side of the market gain the other,” Oskar Törngren explains (65). In simple terms, this framework allows users to use networks without paying for them. Hence there is no significant revenue, especially at the beginning of a data-driven business. This poses a significant obstacle to merger controls dictated by monetary thresholds: without turnover, the threshold cannot be triggered. Therefore, several companies, especially in the UK, may never go through a CMA examination because they are not required to.

           

          Abuse of Dominant Position

Under Article 102 TFEU, the abuse of a dominant position breaches competition law; it causes disruption and poses a threat to the dynamic market. If the threshold is triggered, or the competition regulatory bodies consider that a merger has the potential to disturb competition and require an examination, a necessary step is to define the “relevant market.” The relevant market is essential for assessing, ex ante, the post-merger effects and their extent.

Regulators and acquirers commonly debate the definition of the market: firms argue that they are part of a broader market, while regulators argue for a significantly narrower scope, each position being more advantageous to its holder. The issue is that defining the market of a data-driven firm is even more difficult. Due to digitalisation, the lines classifying digital markets are hazy. For example, digital platforms allow people to share information, yet advertisers are a big part of them, and consumers can also promote products. The interconnectedness of digital spaces makes it challenging to define a relevant market, and regulators cannot predict exact ex-ante scenarios if they cannot comprehend the scope in which those scenarios could occur. In the Facebook/WhatsApp case (73), for example, the EU decided to classify the relevant market of the platforms as “consumer communication services”; in reality, the EC was considering both active and inactive users, too broad a scope to achieve accurate prediction.

Consequently, confusion arises when considering the UK’s share-of-supply threshold. To identify a firm’s share of a market, the market first needs to be defined. Thus, if the market cannot be defined, the shares cannot be either, and a merger cannot reach a relevant merger situation.

           

          Input Foreclosure

Input foreclosure entails the restriction of access to data. Data merging is, at its core, firms merging to diversify their data, and an important factor is the value of the data gained as a result: data used as an asset. When firms with giant databases merge, they can create a database so significant and extensive that other players in the market are unable to compete. In some cases, companies can even restrict their competitors’ access to this data, known as downstream input foreclosure. For example, an argument against the Google/DoubleClick merger was that the combined database would be so significant that no one else on the market could compete (74). It is “crucial to measure the role of the dynamic feature in digital markets in its effects dispelling “foreclosure concerns” and its relevance to acquisitions of an intention to stifling innovation” (75). Foreclosure is a risk that competition law knows how to handle; however, it requires reevaluation in the context of the digital economy.

           

          Non-Parameter Aspects

Turnover, the chance of foreclosure, and the possibility of a substantial lessening of competition (SLC) are considered when appraising a merger. However, concentrations can also be affected by many “non-parameter” aspects, including consumer preference, innovation, privacy, and more. All such factors apply to data. For example, the ability to predict the development of a social platform relative to its competitors would be helpful in assessing long-term disruption to competition, yet it cannot currently be taken into account. The absence of suitable tools and analytical frameworks makes these parameters impossible to consider, and therefore calls for the adjustment of merger control strategies.

           

          Merger Aspects in Action

Current competition law has not been unsuccessful in the area of data merging; however, that is because new merger control strategies were adopted. There are successful data merger cases and less successful ones; nonetheless, both highlight the necessity of adapting merger control strategies to deal appropriately with newly arising situations. Moreover, the interventionism of the CMA has had effects on the competition structure and merger regulations.

Tobii v CMA

Tobii v CMA concerned the acquisition of Smartbox, a British AAC solutions company providing devices that enable non-verbal people to communicate, by Tobii, a Swedish AAC solutions company (76, 77, 78). After a CMA examination, the merger was denied on the ground of a substantial lessening of competition (79). However, the CMA could have considered the long-term benefits the technology would bring and applied non-parameter aspects. Current merger controls focus on promoting competition; however, with the development of technology, it is essential to consider upholding innovation and the long-term benefits to the consumer. Although this was not a data merger case, it showcased the areas for improvement in UK merger controls.

          Google / Waze

The Google and Waze merger was put through a CMA (at the time, OFT) examination despite not satisfying the “turnover test” (80). The Office of Fair Trading considered that the transaction met the Enterprise Act’s definition of enterprises carried on for gain or reward, because Waze was earning advertising revenues (81). As a result, the merger passed the share-of-supply threshold in the supply of turn-by-turn navigation applications for mobile devices in the UK (80). This shows that the issue of the monetary threshold can be solved by applying the share-of-supply threshold.

Furthermore, the killer acquisition theory describes the ex-ante harm of digital giants acquiring firms in order to gain benefits and subdue a future competitor. This theory was also considered in the Google and Waze case. The competition agency concluded that such mergers do not lessen competition or disrupt innovation, reasoning that the growth of innovation cannot be stopped and is always taken up by other competitors with potential (75).

          Telefónica UK / Vodafone UK / Everything Everywhere / JV

The Telefónica UK/Vodafone UK/Everything Everywhere/JV merger case showcased merger controls using many tools to predict an accurate scenario (82). The concern with the joint venture was that the data provided by the three companies, combined with the data the joint venture itself would collect, would result in such a significant variety of data that competition would be hindered, leaving other companies in the same market dependent on the joint venture. However, the European Commission “analysed questionnaires” sent to mobile operating system providers, financial institutions, retailers, analytic data providers, and others (82). As a result, the EC concluded that input foreclosure was not possible: the realistic scenario was that the data available to internet service providers was also available to other players in the market, who therefore had access to the critical input.

           

          The Impact of CMA

The growing power of the CMA in the UK competition regulation sphere has had its impacts. The CMA’s role is to regulate competition, including concentrations, through the application of theories of harm grounded in factual and economic evidence. Moreover, it considers the relations between the interested parties and identifies the constraints on competition and counterfactual situations, all with the goal of securing fair standards of competition, an aim similar to the European Commission’s. Since the UK left the European Union, the CMA is yet to be considered a global competition regulatory body, but it has had its impacts.

The introduction of initial enforcement orders (IEOs) has been one of the most significant recent changes. Under the Enterprise Act, the CMA has the power to place a completed or ongoing merger under an IEO to prevent pre-emptive action (83). Similarly, the CMA can also enforce interim measures prohibiting any actions and transactions of merging parties that would cause pre-emptive action (83). These newly awarded powers allow the CMA to act quickly, accommodate the voluntary notification system, and deal with Brexit’s implications, such as the one-stop shop, which can no longer be applied in the UK and hence creates the opportunity to investigate mergers that have not notified the EU. Some would argue that the CMA’s increasingly interventionist approach has been beneficial compared to the EU’s: “Merger litigation in the EU is not quick enough to be usable in fast-moving commercial contexts.” (84). However, others argue that the CMA has failed to provide derogations in appropriate situations, as in the Facebook v CMA case (85), in which the increased power and interventionism exercised by the CMA affected the global economy by imposing a template IEO on all Facebook companies and subsidiaries, thus freezing the entire competition market structure.

Overall, an increase in merger litigation seems a logical consequence, and the CMA’s expanded role and increased tendency to intervene only support that prediction. A positive aspect is that the CMA will focus on issues in tech-driven markets, such as non-price competition and two-sided marketplaces. Even if the CMA has been quite extreme, it is doing its job and offers hope for the effective regulation of data mergers.

           

          Discussion

The merger control system of England and Wales has made changes to adjust to data merger scenarios. For example, it adopted the share-of-supply threshold and continued to lower it through the National Security and Investment Act (86), which states that the Secretary of State can give notice of merger acquisitions if he suspects or is aware of a “trigger event” taking place. A trigger event is defined as a person gaining control of a “qualifying entity” or “qualifying asset” (86). In addition, it is possible that UK merger control will move to a mandatory notification system, at least for high-value mergers. This would limit the possibility of harmful data mergers avoiding CMA examination; however, it would require primary legislation and would make the CMA a global regulatory body.

This article has only touched on the necessity of merger controls in digital markets; there is much more to consider given the complexity of competition law and merger controls, for example theories of harm. Theories of harm provide stability and certainty to antitrust assessments, and this stability is necessary in the unpredictable environment of technological development. Nor has this article delved into the issue of privacy. Competition agencies should nonetheless consider it, especially with data being regarded as a currency and with the effect on personal privacy when large databases full of personal data are combined. Data protection is a separate area closely intertwined with the digital economy, and hence with competition law. As presented in Mergers in Big Data-Driven Markets, a possible theory of harm is that the price consumers pay for free services is their privacy (65).

Similarly, remedies are a concept not explored here but one that ought to be considered: whether and how remedies should be adjusted. Overall, more evaluation of the subject is needed; one theory, proposed by Christian Ahlborn, is the necessity of behavioural remedies (68). He acknowledged the CMA’s support for them, but also others’ belief in their ineffectiveness. Many thresholds, remedies, and theories of harm will differ depending on the jurisdiction; however, all of them need to undergo reconsideration.

           

          Conclusion

Competition, consumer welfare, and merger controls are essential, and the heavy burden on competition regulatory bodies is to balance them, which they have done well to an extent. The concept of a digital economy is new, and its complete and perfect integration into existing merger controls is virtually impossible. However, adjusting merger control strategies to suit data merger acquisitions is something that can be developed. The European Commission and the CMA are adjusting their review systems, investigation proceedings, and powers to deal effectively with potentially harmful situations. With the immense daily use of devices, Big Data will only evolve into a more significant concept, making it all the more important that merger controls regulate transactions effectively. Competition law is vital in the new digital market.

          Overall Conclusion

The law’s ability to regulate technology effectively remains in question, and the debate over whether it should be regulated at all continues. It would seem only logical to control something as unpredictable as technology, which can breach our privacy and cause discrimination; however, regulatory bodies must weigh the negative implications of such regulation on technological advancement. There are many clashing views on creating such laws across jurisdictions, and entirely different regulation may be appropriate for different areas of law. As the power technological advancements hold in our society only increases, their relationship with the law will need to be observed and adjusted.

          Bibliography

            1. https://www.techtarget.com/searchenterpriseai/feature/AI-in-policing-is-controversial-but-delivers-results
            2. https://www2.deloitte.com/fi/fi/pages/public-sector/articles/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html
            3. https://strathprints.strath.ac.uk/71586/1/Daly_GISW2019_Algorithmic_oppression_Chinese_characteristics_AI_against_Xinjiang_Uyghurs.pdf
            4. https://privacyinternational.org/sites/default/files/2021-06/PI%20Counterterrorism%20and%20Biometrics%20Report%20Israel_Palestine%20v7.pdf
            5. https://books.google.ca/books?hl=en&lr=&id=JlUuCgAAQBAJ&oi=fnd&pg=PA199&dq=%E2%80%9CLegal+Analysis+and+Critique+of+Some+Surveillance+Methods+Used+by+Israel%E2%80%9D&ots=JZyK3GUqqW&sig=3A6tNCD0rMUihiV4p_wAj641vyk#v=onepage&q=%E2%80%9CLegal%20Analysis%20and%20Critique%20of%20Some%20Surveillance%20Methods%20Used%20by%20Israel%E2%80%9D&f=false
            6. Humanrightsmeasurement.org. 2022. Human rights abuses of refugees and asylum seekers in Australia – Human Rights Measurement Initiative
            7. Human Rights Law Centre. 2022. UN finds Australia’s treatment of asylum seekers violates the Convention Against Torture | Human Rights Law Centre.
            8. Asylum Seeker Resource Centre. 2022. Report from the Christmas Island Detention Centre – ASRC
            9. Shao, Y., & Williamson, C. (2011). The HIV-1 Epidemic: Low- to Middle-Income Countries. Cold Spring Harbor Perspectives In Medicine
            10. HIV.gov. 2022. Global Statistics. [online] Available at: <https://www.hiv.gov/hiv-basics/overview/data-and-trends/global-statistics> [Accessed 26 May 2022].
            11. The Conversation. 2022. One third of migrant and refugee women experience domestic violence, major survey reveals
            12. Encyclopedia Britannica. 2022. September 11 attacks | History, Summary, Timeline, Casualties, & Facts
            13. Ritchie, H., Hasell, J., Appel, C. and Roser, M., 2022. Terrorism. [online] Our World in Data. 
            14. Npr.org. 2022. [online] Available at: <https://www.npr.org/2022/05/24/1101050970/2022-school-shootings-so-far> [Accessed 28 May 2022]. 
            15. National Institute of Justice. 2022. Public Mass Shootings: Database Amasses Details of a Half Century of U.S. Mass Shootings with Firearms, Generating Psychosocial Histories
            16. What Does Google Do With Your Data?. 2022. What Does Google Do With Your Data?. [online] Available at: <https://www.avast.com/c-how-google-uses-your-data#:~:text=According%20to%20the%20Google%20Safety,like%20your%20age%20or%20gender> [Accessed 28 May 2022].
            17. Statista. 2022. Facebook: user data requests from federal agencies by country 2021 | Statista. [online] Available at: <https://www.statista.com/statistics/287845/global-data-requests-from-facebook-by-federal-agencies-and-governments/> [Accessed 4 June 2022].
            18. Fmreview.org. 2022. New technologies in migration: human rights impacts | Forced Migration Review. [online] Available at: <https://www.fmreview.org/ethics/molnar> [Accessed 24 May 2022].
            19. Cato.org. 2022. [online] Available at: <https://www.cato.org/blog/algorithmic-bias-under-biden-administration> [Accessed 1 June 2022].
            20. https://www.congress.gov/bill/116th-congress/house-bill/2231/all-info [Accessed 1 June 2022].
            21. Tech.humanrights.gov.au. 2022. [online] Available at: <https://tech.humanrights.gov.au/sites/default/files/2021-05/AHRC_RightsTech_2021_Final_Recommendations.pdf> [Accessed 29 May 2022].
22. Waleed Ali and Mohamed Hassoun, ‘Artificial Intelligence and Automated Journalism: Contemporary Challenges and New Opportunities’ (2019) 5 International Journal of Media, Journalism and Mass Communications <https://www.arcjournals.org/pdfs/ijmjmc/v5-i1/4.pdf> accessed 15 May 2022.
            23. Robert Birnbaum, Clayton M Christensen and Michael E Raynor, ‘The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail’ (1997) 91 Academe <http://web.mit.edu/zolot/Public/Innovator’s%20Dilemma%20excerpt.pdf>.
            24. James Manyika and others, ‘Disruptive Technologies: Advances That Will Transform Life, Business, and the Global Economy’ (McKinsey Global Institute 2013) <https://s3.us-west-2.amazonaws.com/us-west-2.files.campus.edublogs.org/sites.gsu.edu/dist/c/1669/files/2015/08/disruptive-technologies-25x119j.pdf> accessed 16 May 2022.
            25. [2001] 2 AC 127 (HL)
            26. [2002] QB 783
            27. [2006] UKHL 44 (HL)
            28. [2012] UKSC 11
            29. See “Regulation – Relevance to Tort Law” section for more detail
            30. [2000] 262 N.R. (HL)
31. “… it should be kept in mind that one of the contemporary functions of the media is investigative journalism. This activity, as much as the traditional activities of reporting and commenting, is part of the vital role of the press”
            32. Elizabeth Kirley, ‘The Robot as Cub Reporter: Law’s Emerging Role in Cognitive Journalism’ (2016) 7 European Journal of Law and Technology, 11.
33. Ken Doctor, ‘Newsonomics: Fake-News Fury Forces Google and Facebook to Change Policy’ (Nieman Lab, 15 November 2016) <https://www.niemanlab.org/2016/11/newsonomics-fake-news-fury-forces-google-and-facebook-to-change-policy/> accessed 17 May 2022.
            34. Sandra Wachter, ‘Affinity Profiling and Discrimination by Association in Online Behavioural Advertising’ (2019) 35 SSRN Electronic Journal.
35. Matteo Monti, ‘Automated Journalism and Freedom of Information: Ethical and Juridical Problems Related to AI in the Press Field’ (2019) 1 Opinio Juris in Comparatione <https://ssrn.com/abstract=3318460> accessed 18 May 2022.
36. In this part, editors, fact checkers, etc. are all generalised as “journalists”, for the reason that each of those professionals’ responsibilities differs based on the media company
37. Tal Montal and Zvi Reich, ‘I, Robot. You, Journalist. Who Is the Author?’ (2016) 5 Digital Journalism.
            38. Tom Bingham, The Rule of Law (Penguin 2011).
            39. Gary E Marchant and Douglas J Sylvester, ‘Transnational Models for Regulation of Nanotechnology’ (2006) 34 The Journal of Law, Medicine & Ethics 714.
40. Independent Press Standards Organisation (IPSO), ‘Editors’ Code of Practice’
            41. “A significant inaccuracy, misleading statement or distortion must be corrected, promptly and with due prominence, and – where appropriate – an apology published. In cases involving IPSO, due prominence should be as required by the regulator”
            42. National Union of Journalists, ‘Code of Conduct’.
            43. National Science Foundation, ‘National Robotics Initiative’.
            44. European Commission, ‘Regulating Emerging Robotic Technologies in Europe RoboLaw Project’ <www.robolaw.eu>.
45. Bibi van den Berg and Ronald Leenes, ‘Abort, Retry, Fail: Scoping Techno-Regulation and Other Techno-Effects’, vol 25 (Dordrecht: Springer 2013) 67–87.
46. Jonathan Morgan, ‘Torts and Technology’ [2017] The Oxford Handbook of Law, Regulation and Technology.
            47. Automated and Electric Vehicles Act 2018, s 2.
            48. Copyright, Designs and Patents Act 1988, s 9.
            49. Defamation Act 2013
            50. (4) In determining whether it was reasonable for the defendant to believe that publishing the statement complained of was in the public interest, the court must make such allowance for editorial judgement as it considers appropriate
            51. (1) A statement is not defamatory unless its publication has caused or is likely to cause serious harm to the reputation of the claimant.
            52. Christer Clerwall, ‘Enter the Robot Journalist’ (2014) 8 Journalism Practice 519.
            53. This study consisted of assembling 12 journalists from corporations such as BBC, The Times, Thomson Reuters, etc and allowing them to gain experience writing with automated journalism algorithms.
            54. Neil Thurman, Konstantin Dörr and Jessica Kunert, ‘When Reporters Get Hands-on with Robo-Writing’ (2017) 5 Digital Journalism 1240.
            55. Commission on Civil Law Rules on Robotics 2017.
            56.  Investopedia. 2022. Explaining the Crypto in Cryptocurrency. [online] Available at: <https://www.investopedia.com/tech/explaining-crypto-cryptocurrency/> [Accessed 31 May 2022].
            57.  Freeman Law. 2022. Mining Explained: A Detailed Guide on How Cryptocurrency Mining Works. [online] Available at: <https://freemanlaw.com/mining-explained-a-detailed-guide-on-how-cryptocurrency-mining-works/> [Accessed 29 May 2022].
            58. Investopedia. 2022. Blockchain Explained. [online] Available at: <https://www.investopedia.com/terms/b/blockchain.asp> [Accessed 1 June 2022].
            59. https://arxiv.org/pdf/1810.02466.pdf
            60. https://complyadvantage.com/insights/cryptocurrency-regulations-around-world/
            61. https://academic.oup.com/jfr/article/7/1/48/6248122?login=false#261836354
            62. https://corpgov.law.harvard.edu/2018/05/26/an-introduction-to-smart-contracts-and-their-potential-and-inherent-limitations/
            63. Ariel Ezrachi & Jay Modrall, ‘Rising to the Challenge – Competition Law and the Digital Economy’ (2019) 15 Competition L Int’l 117
64. Nicholas Johnson, ‘What Are Network Effects? Indirect and Direct Network Effects’ (Applico, 28 November 2018) <https://www.applicoinc.com/blog/network-effects/>.
65. Oskar Törngren, ‘Mergers in Big Data-Driven Markets’ (Thesis in EU Law, 2017) 7.
            66. OECD, ‘Bringing Competition Policy to the Digital Era’ (2016) <https://one.oecd.org/document/DAF/COMP(2016)14/en/pdf> p8.
67. Terrell McSweeny and Brian O’Dea, ‘Data, Innovation, and Potential Competition in Digital Markets: Looking beyond Short-Term Price Effects in Merger Analysis’ (2018) <https://www.ftc.gov/system/files/documents/public_statements/1321373/cpi-mcsweeny-odea.pdf>.
            68. Christian Ahlborn, ‘The Meaning of Competition in Digital Merging Control’, Assertive Antitrust Enforcement and the Digital Economy (Institute of Economic Affairs 2021).
            69. Cartels, Abuse of dominant position and concentrations.
            70. COUNCIL REGULATION (EC) No 139/2004 2004 article 4.
            71. Enterprise Act 2002.
            72. National Security and Investment Act 2021 s8.
73. Case No COMP/M7217 – Facebook/WhatsApp (European Commission).
            74. Google/DoubleClick (UK OFT).
            75. Siyou Zhou, ‘Merger Control in Digital Era’ (FIDE 2021).
            76. [2020] CAT 1.
            77. British AAC solutions company
            78. Swedish AAC solution company
79. Aidan Robertson, ‘Recent Merger Decision Challenges’ (Competition Law Seminar Series: Merger Control, 8 December 2020).
            80. NoME/6167/13 Google/Waze (UK OFT).
            81. Enterprise Act 2002, s129.
82. Case No COMP/M6314 – Telefónica UK/Vodafone UK/Everything Everywhere/JV (European Commission).
            83. Enterprise Act 2002, s73.
84. Sarah Abram, ‘Merger Control after the End of the Brexit Transition Period’ (Competition Law Seminar Series: Merger Control, 15 December 2020).
            85. [2020] CAT 23.
            86. National Security and Investment Act 2021.