Abstract
This article explores the multifaceted impacts of Artificial Intelligence (AI) on society, focusing on its environmental, economic, labour, ethical and social implications. AI offers transformative benefits, enhancing efficiency, automating tasks and driving innovation across various sectors. However, its rapid adoption also presents significant challenges, including increased energy consumption, job displacement, economic inequality and ethical concerns related to bias and privacy. While AI has the potential to revolutionise industries, it also raises questions about sustainability, fairness and the future of work. The article discusses the hidden costs of AI, such as the environmental footprint of data centres, the exploitation of low-wage workers in developing nations and the erosion of creativity and human decision-making. It also highlights the need for sustainable practices, worker retraining and stronger regulations to mitigate these impacts. The research calls for a balanced approach to AI deployment that maximises its benefits while addressing its risks, ensuring that AI serves the greater good without exacerbating existing inequalities.
1.0. Introduction
Artificial Intelligence (AI) has emerged as one of the most disruptive and transformative forces of the 21st century, reshaping economies, labour markets and social interactions on a global scale. Defined broadly as the ability of machines to simulate human intelligence processes such as learning, reasoning and problem-solving, AI has moved rapidly from experimental research labs into mainstream applications (Russell & Norvig, 2021). From healthcare diagnostics and autonomous vehicles to financial forecasting and customer service chatbots, AI systems are now deeply embedded in daily life, signalling a shift comparable to earlier technological revolutions such as the Industrial Revolution and the rise of the internet (Brynjolfsson & McAfee, 2017).
1.1. Significance of the research
The rapid diffusion of AI technologies has been enabled by exponential growth in computational power, the availability of massive datasets and advances in machine learning algorithms, particularly deep learning (LeCun, Bengio & Hinton, 2015). Global investment in AI is also expanding significantly, with estimates suggesting that the AI industry could contribute up to $15.7 trillion to the global economy by 2030 (PwC, 2017). Governments and corporations alike frame AI as a key driver of competitiveness, national security and long-term prosperity. Proponents argue that AI offers solutions to pressing societal challenges, from improving healthcare outcomes and optimising energy use to enabling personalised education and advancing scientific discovery (Floridi, 2019).
Despite its promise, the deployment of AI is not without significant risks and controversies. One major concern is its potential to deepen economic inequality by displacing low- and medium-skill jobs while concentrating wealth and power in the hands of those who control AI technologies (Frey & Osborne, 2017; Acemoglu & Restrepo, 2019). At the same time, AI development is resource-intensive, relying on energy-hungry data centres and hardware, which raises urgent questions about its contribution to climate change and the sustainability of digital economies (Strubell, Ganesh & McCallum, 2019). Moreover, the use of biased datasets in training AI systems has already demonstrated how algorithmic decision-making can reproduce, and even amplify, social inequalities, particularly in areas such as hiring, policing and credit scoring (Noble, 2018; Benjamin, 2019).
In addition to economic and environmental concerns, ethical issues loom large. Scholars and policymakers have raised alarms about AI’s potential misuse in surveillance, misinformation, autonomous weapons and mass manipulation (Crawford, 2021). The lack of transparency in many AI systems, particularly those employing “black-box” deep learning models, complicates accountability and raises questions about whether individuals and institutions can effectively contest or understand algorithmic decisions (Burrell, 2016). These risks highlight the need for comprehensive frameworks of AI governance that balance innovation with protection of human rights, fairness and sustainability (Jobin, Ienca & Vayena, 2019).
By situating AI within these debates, the article contributes to the growing body of critical scholarship that seeks to move beyond technological determinism and towards a more holistic understanding of AI’s role in society. Ultimately, the discussion emphasises that the trajectory of AI’s impact will depend less on the technology itself than on how governments, corporations and communities choose to regulate, contest and adapt to it. Addressing the challenges of inequality, sustainability and ethics is therefore essential not only for ensuring responsible innovation but also for building a future in which the benefits of AI are distributed more equitably and sustainably.
Against this backdrop, this paper explores these hidden costs of AI in depth, examining the environmental footprint of AI, its impacts on labour and economic systems, and the ethical and social dilemmas it creates. The goal is to provide a comprehensive understanding of AI’s broader societal implications and suggest strategies for mitigating its adverse effects while maximising its benefits.
2.0. Exploring the Hidden Costs of AI Development
This article explores the hidden costs of AI development through the following areas: environmental, labour, economic, social and ethical. These themes were chosen not only because they are the most visible and widely debated consequences of AI, but also because they cut across technical, social and political domains. Together, they demonstrate that AI cannot be understood solely as a technological tool – it must also be analysed as part of a broader socio-technical system that reflects global power relations, environmental trade-offs and deeply contested notions of justice.
2.1. Environmental Cost
The environmental consequences of AI are increasingly central to debates about the future of technology and sustainability. While AI promises innovation and efficiency across multiple sectors, its ecological footprint has raised urgent concerns. Training and deploying large-scale AI models demands vast computational resources, which translates into significant energy consumption and greenhouse gas emissions. Strubell, Ganesh and McCallum (2019) estimated that training a single large natural language processing model could emit more than 284 tonnes of CO₂-equivalent, comparable to the lifetime emissions of five average cars. As the scale and sophistication of AI systems grow, so does the demand for energy-intensive data centres, many of which are concentrated in regions with heavy reliance on fossil fuels (Henderson et al., 2020).
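Estimates of this kind rest on a simple accounting of hardware power draw over training time, scaled by data-centre overhead (PUE) and the carbon intensity of the local grid. A minimal sketch of that calculation follows; all input values (wattage, duration, PUE, grid intensity) are illustrative assumptions, not figures from the cited paper.

```python
# Rough CO2-equivalent estimate for one model training run, following the
# power x time x PUE x grid-carbon-intensity accounting used in energy
# reporting. Every numeric input below is an illustrative assumption.

def training_co2_kg(avg_power_kw: float,
                    hours: float,
                    pue: float = 1.58,            # assumed data-centre overhead factor
                    kg_co2_per_kwh: float = 0.4   # assumed grid carbon intensity
                    ) -> float:
    """Return estimated kilograms of CO2-equivalent for one training run."""
    energy_kwh = avg_power_kw * hours * pue       # total facility energy drawn
    return energy_kwh * kg_co2_per_kwh

# Example: 8 accelerators drawing ~0.3 kW each, trained for 30 days.
estimate = training_co2_kg(avg_power_kw=8 * 0.3, hours=30 * 24)
print(f"~{estimate:,.0f} kg CO2e")
```

The same arithmetic, applied to the much larger accelerator counts and training durations of frontier-scale models, is what produces estimates in the hundreds of tonnes.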
The environmental impact is not confined to carbon emissions alone. Data centres require substantial amounts of water for cooling, raising questions about their effects on local ecosystems and communities. Research shows that large data infrastructures can exacerbate water scarcity and contribute to biodiversity loss in already fragile environments (Gupta et al., 2022). The clustering of these facilities in certain geographies has also been linked to localised air pollution, altered microclimates and ecological imbalances (García-Martínez & Fiala, 2021).
Hardware production for AI adds another layer of environmental cost. The mining of rare earth elements and other critical minerals necessary for AI hardware leads to deforestation, land degradation and water pollution. Communities in countries such as the Democratic Republic of Congo, where cobalt mining is concentrated, face severe ecological and human costs (Boccardo et al., 2023). The short lifespan of hardware used for AI accelerates the global crisis of electronic waste, which reached 53.6 million metric tons in 2019 and continues to rise sharply (Forti et al., 2020). Improper disposal of these materials introduces toxins into soil and water systems, posing long-term risks to both human and ecological health.
Yet, paradoxically, AI also holds potential to support environmental sustainability. AI-powered systems are increasingly being applied to optimise energy grids, improve the integration of renewable sources and reduce inefficiencies in industrial processes (Rolnick et al., 2022). They are also vital in climate modelling, natural disaster prediction and monitoring environmental degradation such as deforestation, illegal fishing and biodiversity decline (Vinuesa et al., 2020). These applications suggest that, if guided by deliberate policy and ethical frameworks, AI could play a critical role in addressing environmental crises rather than exacerbating them.
Mitigating AI’s environmental footprint requires coordinated interventions from both industry and government. Shifting data centres to renewable energy sources, investing in energy-efficient cooling technologies and extending the life cycle of hardware through circular economy approaches are some of the solutions being explored (Henderson et al., 2020). Furthermore, transparency in reporting the environmental costs of AI training and deployment is essential to hold companies accountable and ensure that AI development aligns with global sustainability targets such as the Paris Agreement (Vinuesa et al., 2020). Without such interventions, the rapid growth of AI risks undermining climate action and intensifying ecological degradation.
2.2. Labour Cost
AI has profound implications for the labour market, reshaping the nature of work across industries. While AI promises increased productivity and efficiency, it also raises concerns about large-scale job displacement. Goldman Sachs (2023) estimates that automation driven by AI could affect the equivalent of 300 million full-time jobs worldwide, particularly those involving routine and repetitive tasks, echoing earlier projections of widespread computerisation (Frey & Osborne, 2017; International Labour Organization, 2023). This disruption disproportionately affects workers in sectors such as manufacturing, logistics and customer service, where machines increasingly outperform humans in speed and precision.
However, AI’s labour market impact is not exclusively negative. New opportunities emerge in fields such as data science, AI system development, ethics oversight and digital content creation (World Economic Forum, 2020). These new roles require advanced digital skills, highlighting the importance of upskilling and reskilling initiatives. Without significant investment in retraining programmes, workers risk being left behind in an economy that increasingly rewards technical expertise. This uneven transition could exacerbate existing inequalities between high-skilled and low-skilled workers, as well as between countries in the Global North and South (Susskind, 2020).
Beyond displacement, AI also introduces new forms of precarious labour. The rise of platform-based gig work, often powered by AI-driven algorithms, has transformed employment relationships. Workers are subjected to algorithmic management systems that dictate schedules, monitor performance and even determine pay, frequently without transparency or recourse (Zuboff, 2019). Similarly, the hidden workforce of data annotators – many based in developing countries – perform essential but undervalued tasks to train AI systems, raising questions about fairness, recognition and labour rights (Gray & Suri, 2019).
These challenges require coordinated policy and industry responses. Governments and corporations must implement labour protections for gig and contract workers, invest in large-scale reskilling initiatives and establish frameworks for ethical AI deployment in workplaces. Ensuring that AI complements human labour rather than wholly replacing it is key to fostering inclusive growth. If handled responsibly, AI could enhance human creativity and innovation while reducing drudgery; however, without proactive measures, it risks deepening global inequalities in employment.
2.3. Economic Cost
AI offers transformative potential for global economies by increasing productivity, enabling innovation and facilitating more efficient decision-making (Brynjolfsson & McAfee, 2017). AI-powered systems can optimise supply chains, enhance financial forecasting and automate complex processes, contributing to economic growth and improving living standards. For instance, AI-driven predictive analytics allow businesses to minimise waste, reduce operational costs and identify new market opportunities, thereby enhancing competitiveness (Manyika et al., 2017).
However, alongside these benefits, AI introduces profound economic challenges. One of the most pressing is income and wealth inequality. AI adoption tends to disproportionately benefit highly-skilled workers, corporations and investors, while displacing lower- and middle-skilled employees (Cornelli, Frost & Mishra, 2023). Research indicates that the wealthiest 10% of the global population have experienced income gains due to AI integration, whereas the poorest 10% have seen a reduction in their share of income (Pacelli, 2023). This disparity exacerbates existing social and economic divides, threatening social cohesion and economic stability (IMF, 2024).
Job displacement is another significant concern. While AI creates new roles in technology, data management and algorithm auditing, it also automates tasks across a broad spectrum of industries, including finance, law, healthcare and manufacturing (Brynjolfsson & McAfee, 2023). The IMF (2024) estimates that up to 40% of global jobs are exposed to AI automation, with advanced economies at particular risk due to their reliance on cognitive labour. Unlike previous industrial revolutions, AI affects not only manual labour but also high-skill, white-collar occupations, creating a phenomenon known as job market polarisation, where mid-level positions decline while high- and low-level jobs remain. This polarisation undermines the middle class, reduces social mobility and increases the economic vulnerability of ordinary workers.
Market concentration is a further economic implication of AI. Developing and deploying AI requires substantial computational infrastructure, access to large datasets and significant capital investment, factors that favour large tech corporations (Brynjolfsson & McAfee, 2023). As a result, a handful of dominant firms control critical AI resources, enabling monopolistic behaviour that limits competition, stifles innovation among smaller companies and concentrates wealth and economic power (Brookings Institution, 2024). This trend risks driving out small and medium enterprises that cannot compete with AI-driven giants, undermining local economies and exacerbating economic inequality.
The stability of labour markets is also affected. AI-driven platforms introduce heightened surveillance, algorithmic management and unpredictable work schedules (ILO, 2023; OECD, 2023). Workers in AI-managed roles often face precarious contracts, wage suppression and fragmented employment opportunities, creating a dualised workforce with secure, high-paying jobs for a few and insecure, low-paying roles for the majority. This economic instability may have broader societal consequences, including increased reliance on social safety nets, reduced consumer spending and heightened political tensions.
Moreover, reliance on AI introduces systemic economic risks. The concentration of AI infrastructure and control in a few countries or corporations makes economies susceptible to disruptions such as cyberattacks, software failures or monopolistic practices (World Economic Forum, 2023). Over-dependence on AI-controlled decision-making in sectors like healthcare, finance and transportation can also lead to economic misallocations if algorithms malfunction or perpetuate biases, potentially causing cascading effects across industries and national economies (Brynjolfsson & McAfee, 2023).
To address these economic challenges, policymakers and businesses must adopt strategies that ensure inclusive and sustainable growth. This includes investment in worker retraining programmes, promoting equitable AI access for small and medium-sized enterprises, implementing taxation policies that prevent excessive wealth concentration and regulating AI deployment to avoid monopolistic practices (Cornelli et al., 2023; OECD, 2023). By integrating AI as a complement to human labour rather than a complete replacement, societies can harness economic benefits while mitigating risks to employment, equity and market stability.
Overall, AI presents both significant economic opportunities and serious risks. While it can drive productivity, innovation and global competitiveness, it also exacerbates income inequality, disrupts labour markets and concentrates economic power in the hands of a few. Proactive measures – including fair regulation, education and inclusive economic policies – are essential to ensure that AI contributes to broad-based prosperity rather than deepening existing disparities.
2.4. Ethical and Social Costs
AI has revolutionised industries and transformed daily life with unprecedented speed and scale. While AI offers significant benefits such as improving efficiency, automating repetitive tasks and enabling data-driven decision-making, it also raises complex ethical and social concerns that require careful examination (Crawford, 2021; Marwala, 2024). These challenges encompass fairness, bias, privacy, labour exploitation, societal inequality and threats to human creativity, highlighting the need for a comprehensive ethical framework for AI deployment.
Job displacement and human value are among the most visible social implications of AI. While AI can automate routine and repetitive tasks, it also risks eroding the nuanced roles humans play in workplaces (Chalmers, 2025). For instance, the Commonwealth Bank of Australia attempted to replace 45 call centre workers with AI-powered voice bots, but customer dissatisfaction forced the company to reverse the decision, demonstrating the irreplaceable value of human interaction in service industries (Chalmers, 2025). This example underscores a broader concern: while AI may enhance productivity, it cannot fully capture empathy, critical judgement or complex problem-solving, highlighting the ethical importance of human oversight in automated systems.
Algorithmic bias represents another critical ethical challenge. AI systems learn patterns from data, which can reflect historical inequalities, social biases or underrepresentation of minority groups. Avoidable bias stems from flawed datasets, homogenous development teams or inadequate testing, while unavoidable bias arises from the inherent limitations of AI models that operate as “black boxes”, where even designers may not fully understand the decision-making process (Marwala, 2024). These biases can lead to discriminatory outcomes in hiring, lending, criminal justice and welfare distribution, potentially exacerbating social inequality and undermining public trust in AI systems.
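One common way such disparities are surfaced in practice is a statistical audit of outcome rates across demographic groups, often called a demographic-parity check. The sketch below uses synthetic decision data and an illustrative 0.1 disparity threshold; it is a minimal example of the technique, not a complete fairness assessment.

```python
# Minimal demographic-parity audit: compare the rate of positive outcomes
# (e.g. loan approvals) between two groups. The data and the 0.1 disparity
# threshold below are illustrative assumptions, not values from any study.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) decisions within one group."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Synthetic model decisions: 1 = approved, 0 = rejected.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% approved

gap = parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")
if gap > 0.1:                        # illustrative fairness threshold
    print("warning: outcome rates differ substantially between groups")
```

Audits of this kind cannot explain *why* a model discriminates, but they make disparities measurable, which is a precondition for the regulatory and accountability frameworks discussed below.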
AI’s impact on labour equity extends beyond automation. Many AI systems rely on “invisible labour” performed by low-wage workers in developing countries, who annotate, label and moderate data that enables AI functionality (Cheah, 2025). Despite their essential contribution, these workers are often underpaid and underrecognised, while the economic benefits of AI accrue primarily to large corporations and high-skilled engineers (Crawford, 2021). This raises critical questions about fairness, justice and the equitable distribution of AI’s societal gains.
Privacy and surveillance are also major social concerns. AI systems require access to large volumes of personal data to function effectively, creating risks of data misuse, breaches and unauthorised surveillance (Prajapati, 2025). AI technologies deployed for facial recognition, predictive policing or workplace monitoring can undermine civil liberties and increase social control, demanding careful regulation to balance security and personal freedom.
Another ethical dimension is the impact on human creativity and knowledge work. AI is increasingly capable of performing tasks that require analysis, synthesis and creative problem-solving. For example, AI can generate historical analyses, write news articles or produce creative content. While this augments human capability, it risks homogenising ideas and reducing diversity of thought, potentially eroding the value of expertise in fields like academia, journalism and the arts (Smith, 2025).
Despite these challenges, AI also offers opportunities to enhance social wellbeing. By automating routine tasks, AI can free workers to focus on higher-value, creative or interpersonal work, potentially improving job satisfaction and societal productivity (Patel, 2025). To achieve this balance, proactive regulation, governance and ethical oversight are essential. This includes developing standards for algorithmic fairness, enforcing data privacy protections, ensuring fair labour practices and promoting transparency and accountability in AI development (Krimmelbein, 2025).
Furthermore, inclusive governance can ensure that AI benefits are shared equitably. Policymakers, corporations and civil society must collaborate to create frameworks that prevent discrimination, protect vulnerable communities and foster socially responsible AI adoption (Crawford, 2021). Public engagement, ethical education for developers and diversity in AI research teams are additional measures to align AI with human values and social justice.
In conclusion, while AI presents enormous potential for societal advancement, it also introduces profound ethical and social risks. Addressing these risks requires a multidimensional approach that balances technological innovation with fairness, accountability and human-centred design. By prioritising ethical governance, inclusive practices and social responsibility, AI can serve as a tool for equitable progress rather than a source of inequality or harm.
3.0. Conclusion, Recommendations & Summary
3.1. Conclusion
AI has the potential to transform society in unprecedented ways, offering remarkable benefits across industries from healthcare and education to finance and logistics. It improves efficiency, automates routine tasks, enhances decision-making and drives innovation, ultimately increasing productivity and potentially improving the quality of life (Brynjolfsson & McAfee, 2017; Haenlein & Kaplan, 2019). AI can also free human workers from repetitive work, enabling a shift toward more creative, strategic and interpersonal roles (Patel, 2025).
However, these benefits are accompanied by significant hidden costs across environmental, labour, economic and ethical dimensions. Environmentally, AI relies on energy-intensive data centres, contributing to high electricity consumption, carbon emissions, water usage and electronic waste (Strubell et al., 2019; International Energy Agency, 2024; Global E-Waste Monitor, 2024). Labour markets face profound disruption, with AI threatening routine and mid-skill jobs, while simultaneously creating demand for highly specialised roles, potentially widening inequality between high-skilled and low-skilled workers (Goldman Sachs, 2023).
Economically, AI adoption can exacerbate wealth concentration and market monopolisation, posing risks to small businesses, labour protections and national economic stability (Cornelli et al., 2023; World Economic Forum, 2023). Ethically and socially, AI systems risk perpetuating bias, eroding privacy, exploiting low-paid “invisible labour” and diminishing creativity in knowledge-based work (Crawford, 2021; Cheah, 2025; Smith, 2025).
3.2. Recommendations
To ensure that AI serves society equitably, a comprehensive policy approach is essential:
1. Environmental Sustainability
- Transition AI data centres to renewable energy and implement energy-efficient cooling systems.
- Promote lifecycle management of hardware, including recycling and sustainable production of rare earth elements.
- Encourage “green AI” practices that prioritise algorithmic efficiency and minimal environmental footprint.
2. Labour Market Adaptation
- Invest in large-scale retraining programmes to equip workers with AI-complementary skills.
- Protect vulnerable workers through updated labour laws, wage protections and job security measures.
- Recognise and fairly compensate low-wage workers engaged in AI training, data labelling and moderation.
3. Economic Governance
- Prevent monopolistic concentration in AI through competition policy and support for small and medium enterprises to adopt AI responsibly.
- Monitor systemic risks related to dependence on centralised AI infrastructures.
- Encourage equitable wealth distribution through taxation and public investment in AI-enabled growth sectors.
4. Ethical and Social Oversight
- Develop robust regulatory frameworks to address algorithmic bias, discrimination and unfair AI decision-making.
- Enforce data privacy and security standards to protect individuals from unauthorised surveillance or exploitation.
- Promote inclusive AI governance with diverse development teams, public consultation and transparent reporting of AI impacts.
3.3. Summary
In summary, AI presents extraordinary opportunities for human progress, but it also introduces complex challenges that cannot be ignored. By adopting a balanced, multi-dimensional policy framework – addressing sustainability, labour, economic equity and ethics – governments, corporations and civil society can ensure AI maximises societal benefits while mitigating its risks. Responsible AI governance is not merely a technical necessity but a moral imperative to ensure that AI contributes to a fair, sustainable and inclusive future for all.
Bibliography
Bessen, J. E. (2019). AI and jobs: The role of demand. NBER Working Paper No. 24235.
Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. Article DOI: https://doi.org/10.1016/j.patter.2021.100205
Boccardo, P., Gamba, P., Tuia, D. & Zanuttigh, P. (2023). AI and the environment: Risks and opportunities of large-scale technologies. Environmental Science & Policy, 142, 120–130. Article DOI: https://doi.org/10.1016/j.envsci.2023.03.008
Brynjolfsson, E., & McAfee, A. (2017). Machine, Platform, Crowd: Harnessing Our Digital Future. W. W. Norton & Company.
Brynjolfsson, E., & McAfee, A. (2023). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company.
Cheah, Z. (2025). AI’s invisible labor is tech’s biggest blind spot. TechRadar. Available online: https://www.techradar.com/pro/ais-invisible-labor-is-techs-biggest-blind-spot
Chui, M., Manyika, J. & Miremadi, M. (2016). Where machines could replace humans—and where they can’t (yet). McKinsey Quarterly. Available online: https://www.mckinsey.com/~/media/mckinsey/business%20functions/mckinsey%20digital/our%20insights/where%20machines%20could%20replace%20humans%20and%20where%20they%20cant/where-machines-could-replace-humans-and-where-they-cant-yet.pdf
Cornelli, G., Frost, J. & Mishra, S. (2023). AI and income inequality: The impacts on global economies. Bank for International Settlements. Available online: https://www.bis.org/publ/work1135.htm
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Davenport, T. H. & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.
De Stefano, V. (2016). The rise of the “just-in-time workforce”: On-demand work, crowdwork, and labour protection in the gig economy. Comparative Labor Law & Policy Journal, 37(3), 471–504.
Forti, V., Baldé, C. P., Kuehr, R. & Bel, G. (2020). The Global E-waste Monitor 2020: Quantities, flows and the circular economy potential. United Nations University, International Telecommunication Union, and International Solid Waste Association.
Frey, C. B. & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. Article DOI: https://doi.org/10.1016/j.techfore.2016.08.019
García-Martínez, I. & Fiala, D. (2021). The environmental footprint of AI: Data centers, energy, and sustainability. Journal of Cleaner Production, 298, 126789.
Goldman Sachs. (2023). How will AI affect the global workforce? Goldman Sachs. Available online: https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce
Gray, M. L. & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt.
Gupta, M., Kaushik, S. & Kumar, A. (2022). Water-energy nexus in AI data centers: Sustainability challenges and future pathways. Renewable and Sustainable Energy Reviews, 162, 112421.
Haenlein, M. & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14.
Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D. & Pineau, J. (2020). Towards the systematic reporting of the energy and carbon footprints of machine learning. Journal of Machine Learning Research, 21(248), 1–43.
International Energy Agency (IEA). (2024). AI, data centers, and their role in global electricity consumption. International Energy Agency. Available online: https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
ILO. (2023). The impact of AI on employment: Fragmentation and insecurity. International Labour Organization. Available online: https://www.ilo.org/publications/artificial-intelligence-adoption-and-its-impact-jobs
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Patel, M. (2025). Worried that AI will take away your job? Here is the good news from MIT. The Economic Times. Available online: http://economictimes.indiatimes.com/magazines/panache/worried-that-ai-will-take-away-your-job-soon-here-is-the-good-news-from-mit/articleshow/123429683.cms?from=mdr
Prajapati, S. B. (2025). Data privacy and security in AI: Strategies for protecting user data while maintaining the functionality and scalability of AI solutions. World Journal of Advanced Research and Reviews, 25(1), 2142–2146. Article DOI: https://doi.org/10.30574/wjarr.2025.25.1.0268
Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., … & Bengio, Y. (2022). Tackling climate change with machine learning. ACM Computing Surveys, 55(2), 1–96. Article DOI: https://doi.org/10.1145/3485128
Smith, J. (2025). Will historians be replaced by AI? They remain skeptical. The Washington Post. Available online: https://www.washingtonpost.com/business/2025/08/24/ai-job-replacement-historians/
Strubell, E., Ganesh, A. & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. Article DOI: https://doi.org/10.18653/v1/P19-1355
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., … & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 233. Article DOI: https://doi.org/10.1038/s41467-019-14108-y
Wood, A. J., Graham, M., Lehdonvirta, V. & Hjorth, I. (2019). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56–75.
World Economic Forum. (2023). Global Risks Report 2023. World Economic Forum. Available online: https://www.weforum.org/publications/global-risks-report-2023/digest/