Introduction (NL)

Artificial intelligence (AI) in medicine is not a new concept; it is the culmination of previous advancements and work spanning multiple decades. AI dates back to the 1950s when British mathematician Alan Turing first proposed the Turing Test, which questioned whether a man-made machine could mimic human intelligence. In the 1970s, AI was integrated into healthcare for the first time with INTERNIST-1, the first artificial medical consultant capable of utilising a search algorithm to deduce clinical diagnoses based on patient symptoms (Hirani et al., 2024). In the early 2000s, advancements in language processing allowed AI to extract and analyse data from a large range of sources to arrive at an answer (Hirani et al., 2024), expanding its usage in the medical field. For example, in the 2010s, AI was used to find RNA binding proteins associated with amyotrophic lateral sclerosis, and the AI “Pharmbot” was used to educate patients and their families about treatments and medication processes. 

Recent advancements in AI have helped improve healthcare globally by strengthening existing medical and public health infrastructure, adding essential capacity in countries facing shortages of doctors, public health workers and aid specialists, and accelerating vaccine development in ways that can dramatically improve health outcomes (Cooper & Green, 2023). Specifically, AI now aids in screening, radiology, psychiatry, disease diagnosis and telemedicine. According to Xsolis (n.d.), an X-ray or mammogram image can be scanned and analysed by AI in a fraction of the time it takes a radiologist. Additionally, AI can facilitate drug and vaccine research by running simulations, calculations and experiments at great speed, decreasing the time and cost of research (Cooper & Green, 2023). This is particularly promising for managing pandemics or rampant disease in low-income countries.

Credibility, Ethics and Global Equity (MM)

Artificial intelligence has become increasingly useful in the field of medicine, being used as a revolutionary tool in managing disease outbreaks, remote patient monitoring, telemedicine and oncology. It is already being used for earlier detection, more precise treatment planning and improved access to specialised care. However, the global integration of AI into medicine comes with many ethical concerns.

Those in favour often highlight that AI-driven imaging and pathology systems can identify subtle lesions that may be missed by the human eye, and can predict treatment responses more accurately than human clinicians. For example, a meta-analysis covering 8 studies with 120,950 patients found that AI systems had a higher overall sensitivity of 0.85 compared to radiologists (0.77), while maintaining similar specificity (Hashim et al., 2025). The AI models also achieved a higher overall diagnostic accuracy (AUC) of 0.89 versus radiologists (0.82). This aligns with the movement toward precision medicine, reducing unnecessary interventions and addressing workforce shortages in underserved regions.
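These accuracy figures rest on standard diagnostic metrics. As an illustration of how sensitivity and specificity are derived from confusion-matrix counts, the sketch below uses invented counts chosen to mirror the reported rates; it is not the meta-analysis data.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Compute sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # proportion of true cancers flagged
    specificity = tn / (tn + fp)   # proportion of healthy scans cleared
    return sensitivity, specificity

# Invented counts, chosen only to echo the reported 0.85 vs 0.77 sensitivity.
ai_sens, ai_spec = diagnostic_metrics(tp=850, fn=150, fp=100, tn=900)
rad_sens, rad_spec = diagnostic_metrics(tp=770, fn=230, fp=100, tn=900)
print(f"AI: sensitivity={ai_sens:.2f}, specificity={ai_spec:.2f}")
print(f"Radiologists: sensitivity={rad_sens:.2f}, specificity={rad_spec:.2f}")
```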

On the other hand, there is also evidence that over-reliance on AI can worsen human performance, as illustrated in a colonoscopy study where clinicians accustomed to AI assistance performed worse when it was absent, with a 6% fall in detection rates (Jeyaretnam, 2025). This raises serious concerns about implicit bias, de-skilling and the reliability of clinician expertise. Issues like these accentuate the necessity of human oversight and constant monitoring, as AI models are not fixed and are therefore vulnerable to performance degradation when patient populations or clinical conditions change. This was seen when an oncology AI tool’s accuracy declined by 7% during the COVID-19 pandemic because it had not been adjusted for rapidly changing clinical conditions. Maintaining such tools demands additional infrastructure, personnel and resource-intensive validation, further widening the gap between well-funded and resource-limited health systems.

Similarly, AI systems are frequently trained on data from high-income, majority populations, meaning they can fail in global contexts where disease epidemiology, population genetics and care infrastructures differ, which may worsen inequities and cause systemic failures. This is supported by Sendra-Balcells et al.’s (2023) study, in which an AI model trained on foetal ultrasound data from Spain was deployed in five African countries. The model underperformed there compared with its performance in European settings; after it was retrained with localised data, recall rose to 99.2% ± 0.4%, emphasising the vital need for diverse datasets and regionally adapted trials.

Generally, patients support AI when it is used to assist doctors, but there are concerns surrounding impersonal care, privacy risks and unfair bias. Many current AI systems are “black-box” models: they give the result of a decision without showing the reasoning behind it, so patients and clinicians are often reluctant to trust AI, especially for life-altering decisions and higher-risk illnesses such as cancer. Rezaeian et al. (2025) conducted a study involving 28 clinicians assessing breast cancer diagnoses, who were given results from both black-box and explainable AI models. The majority of the clinicians trusted the explainable models more, as they facilitated a clearer understanding of the results and the decision-making process behind each output.

There is also the pressing issue of determining accountability in cases of error. If an AI tool makes a mistake, does responsibility fall on the developers, the clinicians or whole institutions? Current legal and regulatory frameworks are not ready for AI models capable of making independent decisions, restricting the implementation of these tools in the healthcare industry. To address this, standards bodies such as ISO and IEC are developing responsible AI management standards, and trial guidelines (e.g., SPIRIT-AI and CONSORT-AI) promote transparency. The World Health Organization (WHO) has also proposed an “AI Hippocratic Oath” to embed principles of human rights, transparency and equity into the design and use of AI tools.

While AI has significant potential to positively transform cancer care globally, it will only be trusted and truly helpful if its integration is balanced with fairness, transparency and good governance. 

AI in Early Warning and Disease Outbreak Management (DB)

Artificial intelligence has become an increasingly transformative tool for early disease warning and outbreak detection in global health. Traditional surveillance systems – relying on physician reports, laboratory confirmations and manual data collection – often operate with significant delays, allowing localised outbreaks to grow unchecked before interventions can be deployed. AI enhances these traditional methods by detecting subtle signals, recognising complex patterns that human analysts might miss and providing public health decision-makers with timely, actionable insights to prioritise interventions. Its ability to process vast, heterogeneous datasets makes AI particularly well-suited for a hyper-connected world where infectious diseases can spread across continents in mere hours. By enabling proactive detection and forecasting, AI can reshape how health systems anticipate and respond to emerging threats, saving lives and resources in the process.

At the core of AI detection is predictive analytics, which uses machine learning algorithms to identify early warning signs of outbreaks and forecast the potential spread of infectious diseases. Platforms like BlueDot, a Canadian AI-driven surveillance system, analyse airline travel, news feeds, official health bulletins and local epidemiological reports to identify unusual clusters of illness. In late December 2019, BlueDot successfully flagged a cluster of pneumonia cases in Wuhan, 9 days before the World Health Organization issued its first official alert (Bogoch et al., 2020). By modelling potential transmission routes and estimating the risk of international spread, BlueDot demonstrated that AI could provide early warnings for emerging infectious threats. Similarly, HealthMap, developed at Boston Children’s Hospital, aggregates news reports, eyewitness accounts, official bulletins and social media posts to create a near real-time global disease surveillance network (Brownstein et al., 2008). This integration of heterogeneous data sources allows HealthMap to detect anomalies and outbreak signals that might otherwise remain invisible to traditional public health monitoring. Beyond these systems, next-generation platforms such as BEACON (Biothreats Emergence, Analysis and Communications Network) combine AI, including natural language processing models, with expert human oversight to rapidly analyse, translate and disseminate information on emerging threats worldwide (Boston University, 2021; News-Medical, 2021). Open-source tools like BEACON strengthen global early warning infrastructure by providing timely, accessible intelligence to public health agencies in multiple countries.

Pattern recognition is another essential component of AI in outbreak management. Early signals of disease are rarely obvious; they often appear as scattered anomalies across multiple datasets. AI models excel at identifying these subtle patterns. For example, during the COVID-19 pandemic, AI-enhanced wastewater surveillance allowed communities in the United States, Europe and Asia to anticipate infection surges days or even weeks before confirmed case numbers rose, giving hospitals and local authorities critical lead time to allocate resources and implement mitigation measures (Peccia et al., 2020). 
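The kind of anomaly flagging such surveillance relies on can be sketched in miniature: a rolling-baseline z-score over a simulated daily viral-load series. This is illustrative only; the deployed systems cited above use far more sophisticated models and data sources.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=7, z_threshold=2.0):
    """Flag indices where the reading exceeds the rolling baseline by
    more than z_threshold standard deviations (illustrative sketch)."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

# Simulated daily viral-load readings: stable for ten days, then a surge.
readings = [10, 11, 9, 10, 12, 10, 11, 10, 9, 11, 30, 42]
print(flag_anomalies(readings))  # indices of the surge days
```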

Beyond detection, AI also optimises outbreak responses. Public health authorities must make high-stakes decisions regarding the allocation of vaccines, hospital beds, personal protective equipment (PPE), ventilators and other critical resources, often under extreme time pressure. AI models, including reinforcement learning and optimisation algorithms, can simulate alternative intervention strategies to identify approaches that maximise public health benefits while minimising shortages (Bertsimas et al., 2020). During COVID-19, AI-assisted logistics models predicted bottlenecks in PPE and ventilator distribution, ensuring resources reached areas most in need (Ivanov, 2020). Similarly, AI has been applied to vaccine allocation strategies by integrating factors such as demographic vulnerability, regional infection rates and healthcare system capacity, supporting equitable distribution policies. 
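The allocation idea can be illustrated with a toy sketch: splitting a fixed vaccine supply across regions in proportion to a simple need score (infection rate times vulnerable population). The regions, rates and weighting below are invented for illustration and are not drawn from the cited models.

```python
def allocate_doses(supply, regions):
    """Split a fixed supply proportionally to a crude need score.
    A toy stand-in for the optimisation models cited in the text."""
    scores = {name: r["infection_rate"] * r["vulnerable_pop"]
              for name, r in regions.items()}
    total = sum(scores.values())
    return {name: round(supply * s / total) for name, s in scores.items()}

regions = {  # hypothetical figures
    "North": {"infection_rate": 0.04, "vulnerable_pop": 50_000},
    "South": {"infection_rate": 0.01, "vulnerable_pop": 80_000},
    "East":  {"infection_rate": 0.02, "vulnerable_pop": 20_000},
}
print(allocate_doses(10_000, regions))
```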

Globally, AI has been used to successfully complement human expertise, guiding decision-makers with data-driven insights that improve efficiency and outcomes. During the 2014-2016 Ebola outbreak in West Africa, predictive modelling helped allocate medical teams and monitor potential transmission zones. In Latin America, AI-driven surveillance has been applied to the Zika virus, forecasting outbreak hotspots based on climate conditions, mosquito breeding sites and travel patterns. In Asia, AI models have supported influenza tracking by combining hospital admissions, public transportation data and climate metrics.

These examples demonstrate that AI is not limited to a single region or disease; it is a versatile tool that can enhance preparedness for diverse pathogens worldwide.

AI in Oncology: Remote Patient Monitoring and Telemedicine (HP)

Over the last decade, there have been significant efforts to integrate AI-enabled remote patient monitoring, evaluation and telemedicine into standard practice. This transition was dramatically accelerated by the stress placed on global healthcare systems during the COVID-19 pandemic, which exposed vulnerabilities in traditional healthcare delivery (Aziz et al., 2025; Sain et al., 2023). In this context, remote patient monitoring and telemedicine emerged as essential tools for maintaining patient care. With continued advancements in machine learning algorithms and increased computational power, AI has the potential to become a practical tool across numerous healthcare sectors, including oncology (Aziz et al., 2025). 

A significant application of AI in this domain is its ability to predict clinical outcomes and patient needs without the need for continuous in-person care. For example, AI models have demonstrated remarkable accuracy in predicting clinical deterioration. A study by Liu et al. (2023) used machine learning algorithms to analyse data from wearable devices and predict 7-day mortality in patients with terminal cancer. The study identified heart rate as a critical predictor, showcasing how continuous, passive monitoring facilitated by AI can enable proactive care. This also points to a possible future application of telemedicine in cancer care.

AI is also proving useful in optimising clinical workflows and resource allocation. In radiation oncology, a machine learning-based planning workflow for prostate LDR brachytherapy has shown the potential to improve operational efficiency, offering an alternative to manual planning while reducing planning time (Nicolae et al., 2020). Similarly, AI is being tested for scheduling surgical procedures more accurately. A randomised clinical trial by Strömblad et al. (2021) deployed a machine learning model to predict surgical case duration for gynecologic and colorectal oncology cases. The algorithm demonstrated superior accuracy compared to surgeons’ estimates (p<0.03), leading to a reduction in patient wait times. These examples illustrate how AI can improve the patient experience and optimise timings in cancer treatment.
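Accuracy comparisons in scheduling trials of this kind typically come down to a prediction-error metric. The sketch below compares mean absolute error for model versus clinician estimates, using invented case durations rather than the trial’s data.

```python
def mean_abs_error(predicted, actual):
    """Average absolute prediction error, in the same units as the data."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical case durations in minutes (not from the cited trial).
actual  = [120, 95, 150, 80]
surgeon = [90, 120, 120, 100]
model   = [110, 100, 140, 85]
print(mean_abs_error(surgeon, actual))  # clinician estimate error
print(mean_abs_error(model, actual))    # model estimate error
```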

As cancer care increasingly moves towards remote and home-based models, the role of oncology nurses in monitoring patients and managing symptoms becomes even more important. AI-based systems can analyse a patient’s self-reported outcomes and physiological data in real-time, flagging concerns to nurses. A review by O’Connor et al. (2024) highlights the development of AI-based decision support systems designed to be used by oncology nurses. These systems aim to aid in decision-making and enhance care by providing data-driven analytics. By automating the initial analysis of large streams of patient data, these tools can help nurses prioritise patients, identify subtle signs of deterioration earlier and deliver targeted and effective care via telehealth platforms.

For telemedicine, the real-world success of these AI-driven systems depends critically on the patient experience, including adherence, satisfaction and usability. The effectiveness of a remote patient monitoring tool is directly tied to a patient’s willingness and ability to engage with it consistently. While studies generally report positive patient satisfaction, practical challenges remain significant barriers. For instance, in end-of-life care, the burden of managing technology often falls on caregivers, and physical discomfort from wearable devices can be a notable issue for these patients (Liu et al., 2023). Therefore, successful implementation of this technology requires a deep understanding of the user experience and a commitment to patient-centred design that minimises burden and maximises ease of use, particularly for vulnerable populations.

AI in Oncology: Supporting Mental Health in Breast Cancer Care (SP)

Breast cancer is one of the most common cancers in women and a major health issue affecting millions of individuals globally. In addition to the physical challenges that patients face upon diagnosis, the psychological impact of the disease cannot be overstated. When dealing with cancer, patients often have to learn to navigate a range of difficult emotions and experiences, which is why a psychologically-supportive approach is so important.

This section explores how the integration of AI can change our approach to patients’ wellbeing for the better, specifically in cancer treatment, and examines the challenges and ethical considerations that come with it. When receiving a breast cancer diagnosis, it is common for patients to experience feelings of depression as they try to rationalise their experience and make sense of their new reality. Furthermore, the uncertainty and stress of treatment can intensify these feelings, leading to heightened levels of anxiety. Research shows that nearly 1 in 3 cancer patients experience significant psychological distress, highlighting the essential role of mental health support in oncological care (Liu et al., 2022). Moreover, the experience of undergoing treatment such as chemotherapy or radiation can further worsen a patient’s psychological wellbeing. The significant side effects of treatment, combined with the social pressures associated with cancer, can lead to feelings of isolation and hopelessness. Recognising these emotions and each patient’s individual experience is crucial for healthcare providers, and addressing these mental health concerns can significantly improve the overall quality of life for breast cancer patients.

The integration of AI into mental health care can provide numerous benefits for breast cancer patients, including emotional support and mental health monitoring. For instance, the use of AI-driven chatbots can provide 24/7 support and immediate access to information about emotional management and self-care procedures. These chatbots are able to imitate human interactions, allowing users to express their feelings and seek help without judgement, at any time and from non-clinical settings, increasing accessibility to mental health support. 

Additionally, advances in sentiment analysis allow AI systems to adapt their responses based on the tone of the conversation, offering responses tailored to the patient’s specific situation. Moreover, AI can analyse patient data and identify those who may be at a higher risk for depression or anxiety, allowing healthcare providers to intervene at an earlier stage. AI models can also adapt their output based on individual patient data. For instance, if a patient experiences frequent anxiety attacks, AI can provide targeted calming techniques and resources.
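As a rough illustration of sentiment-adaptive response selection, the sketch below uses a keyword lookup standing in for the trained sentiment models real chatbots use; all cue words and response labels are invented.

```python
def choose_response(message):
    """Toy sentiment-aware reply selection. Real systems use trained
    sentiment models; this keyword sketch only illustrates the idea."""
    distress_cues = {"anxious", "panic", "scared", "hopeless", "alone"}
    words = {w.strip(".,!?") for w in message.lower().split()}
    if words & distress_cues:
        return "calming"        # e.g. offer a breathing exercise
    return "informational"      # e.g. answer the question directly

print(choose_response("I feel anxious about my next chemo session"))
print(choose_response("When should I take my medication?"))
```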

A number of hospitals have already successfully integrated AI to support breast cancer patients. One relevant example is Woebot, a conversational AI chatbot that employs cognitive behavioural therapy techniques to help users manage their mental health. Since its launch in 2017, Woebot has shown promising outcomes for patients dealing with chronic illnesses, offering a supportive platform users can rely on at any time. Another example is the digital mental health app MyJourney, which provides users with personalised resources and tools tailored to their specific experience. By using AI algorithms, this platform tracks progress and adapts its support to the changing emotional state of cancer patients. Both projects demonstrate how AI can play a significant role in improving emotional wellbeing for patients diagnosed with breast cancer. 

Conclusion

Overall, artificial intelligence has the potential to make revolutionary changes in medicine, from improved disease outbreak management to holistic oncological care. However, its integration relies on building trustworthy and safe systems that prioritise patient confidentiality, unbiased outputs and transparent decision-making. By leveraging AI responsibly, strategically and globally, we can build more proactive, coordinated and resilient health systems, ultimately improving patient outcomes and overall wellbeing.

Bibliography

Allen, C. & Brownstein, J.S. (2008) Real-time influenza surveillance using hospital network data, PLoS ONE, 3(10), e2591.

Aziz, F., Bianchini, D., Olawade, D.B. & Boussios, S. (2025) The Impact of AI-Driven Remote Patient Monitoring on Cancer Care: A Systematic Review, Anticancer Research, 45(2), pp. 407–418.

Bogoch, I.I., Watts, A., Thomas-Bachli, A., Huber, C., Kraemer, M.U. & Khan, K. (2020) Pneumonia of unknown aetiology in Wuhan, China: potential for international spread via commercial air travel, Journal of Travel Medicine, 27(2), pp. 1-3.

Brownstein, J.S., Freifeld, C.C. & Madoff, L.C. (2009) Digital disease detection — harnessing the web for public health surveillance, New England Journal of Medicine, 360(21), pp. 2153–2157.

Carreira, H., Williams, R., Müller, M., Harewood, R., Stanway, S. & Bhaskaran, K. (2018) Associations between breast cancer survivorship and adverse mental health outcomes: A systematic review, Journal of the National Cancer Institute, 110(12), pp. 1311–1327.

Cookson, C. (2025) Routine AI assistance hits skills of health experts performing colonoscopies, The Financial Times [online]. <https://www.ft.com/content/74b82366-1ea1-4f90-80aa-e84a1e655d28>

Cooper, H. & Green, D. (2023) 7 Use Cases for Artificial Intelligence in Global Health, ICT Works [online]. <https://www.ictworks.org/use-cases-artificial-intelligence-global-health/>

Cortlana, V., Itodo, K., Leyfman, Y., et al. (2025) Artificial Intelligence in Cancer Care: Addressing Challenges and Health Equity, Oncology (Williston Park), 39(3), pp. 105-110.

Eysenbach, G. (2009) Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet, Journal of Medical Internet Research, 11(1), e11.

Field, H. (2025) Some doctors got worse at detecting cancer after relying on AI, The Verge [online]. <https://www.theverge.com/ai-artificial-intelligence/758672/some-doctors-got-worse-at-detecting-cancer-after-relying-on-ai>

Fortin, J., Leblanc, M., Elgbeili, G., Cordova, M.J., Marin, M.F. & Brunet, A. (2021) The mental health impacts of receiving a breast cancer diagnosis: A meta-analysis, British Journal of Cancer, 125(11), pp. 1582–1592.

Ghasemi, A., Hashtarkhani, S., Schwartz, D.L. & Shaban-Nejad, A. (2024) Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review, arXiv, 2407.12058.

Ginsberg, J., Mohebbi, M.H., Patel, R.S., Brammer, L., Smolinski, M.S. & Brilliant, L. (2009) Detecting influenza epidemics using search engine query data, Nature, 457(7232), pp. 1012–1014.

Hantel, A., Clancy, D.D., Kehl, K.L., et al. (2022) A Process Framework for Ethically Deploying Artificial Intelligence in Oncology, Journal of Clinical Oncology, 40(34).

Hasei, J., Hanzawa, M., Nagano, A., Maeda, N., Yoshida, S., Endo, M., Yokoyama, N., Ochi, M., Ishida, H., Katayama, H., Fujiwara, T., Nakata, E., Nakahara, R., Kunisada, T., Tsukahara, H. & Ozaki, T. (2025) Empowering pediatric, adolescent, and young adult patients with cancer utilizing generative AI chatbots to reduce psychological burden and enhance treatment engagement: A pilot study, Frontiers in Digital Health, 7(1543543).

Hashim, H.T., Alhatemi, A.Q.M., Daraghma, M., et al. (2025) Artificial intelligence versus radiologists in detecting early-stage breast cancer from mammograms: a meta-analysis of paradigm shifts, Pol J Radiol, 90, e1-e8.

Healthcare in Europe (2023) Experts raise ethics concerns posed by patient-facing AI in oncology, Healthcare in Europe [online]. <https://healthcare-in-europe.com/en/news/ethics-concerns-patient-facing-ai-oncology.html>

Hirani, R., Noruzi, K., Khuram, H., Hussaini, A.S., Aifuwa, E.I., Ely, K.E., Lewis, J.M., Gabr, A.E., Smiley, A., Tiwari, R.K. & Etienne, M. (2024) Artificial Intelligence and Healthcare: A Journey through History, Present Innovations, and Future Possibilities. Life, 14(5), p. 557.

Jeyaretnam, M. (2025) New Study Suggests Using AI Made Doctors Less Skilled at Spotting Cancer, Time Magazine [online]. <https://time.com/7309274/ai-lancet-study-artificial-intelligence-colonoscopy-cancer-detection-medicine-deskilling/>

LaBrecque, S. (2024) Can artificial intelligence extend healthcare to all?, Reuters [online]. <https://www.reuters.com/sustainability/can-artificial-intelligence-extend-healthcare-all-2024-03-25/>

Lin, S.J., Sun, C.Y., Chen, D.N., Kang, Y.N., Hoang, K.D., Chen, K.H. & Chen, C. (2024) Chatbots for breast cancer education: A systematic review and meta-analysis, Supportive Care in Cancer, 33(1), p. 55.

Liu, J.-H., Shih, C.-Y., Huang, H.-L., et al. (2023) Evaluating the Potential of Machine Learning and Wearable Devices in End-of-Life Care in Predicting 7-Day Death Events Among Patients With Terminal Cancer: Cohort Study, Journal of Medical Internet Research, 25, p. e47366.

Liu, J.K., Kaji, A.H., Roth, K.G., Hari, D.M., Yeh, J.J., Dauphine, C., Ozao‑Choy, J. & Chen, K.T. (2022) Determinants of psychosocial distress in breast cancer patients at a safety net hospital, Clinical Breast Cancer, 22(1), pp. 43–48.

Nicolae, A., Semple, M., Lu, L., et al. (2020) Conventional vs machine learning–based treatment planning in prostate brachytherapy: Results of a Phase I randomized controlled trial, Brachytherapy, 19(4), pp. 470–476.

O’Connor, S., Vercell, A., Wong, D., et al. (2024) The application and use of artificial intelligence in cancer nursing: A systematic review, European Journal of Oncology Nursing, 68, p. 102510.

Olver, I.N. (2024) Ethics of artificial intelligence in supportive care in cancer, The Medical Journal of Australia [online]. <https://www.mja.com.au/journal/2024/220/10/ethics-artificial-intelligence-supportive-care-cancer>

Rezaeian, O., Asan, O. & Bayrak, A.E. (2025) The impact of AI explanations on clinicians’ trust and diagnostic accuracy in breast cancer, Applied Ergonomics, 129(104577).

Sain, B., Gupta, A., Ghose, A., et al. (2023) Impact of COVID-19 on cancer services and patients’ outcomes: a retrospective single-center study, Annals of Translational Medicine, 11(9), p. 310.

Sendra-Balcells, C., Campello, V.M., Torrents-Barrena, J., et al. (2023) Generalisability of fetal ultrasound deep learning models to low-resource imaging settings in five African countries, Scientific Reports, 13(2728).

Shaban, M., Osman, Y.M., Mohamed, N.A. & Shaban, M.M. (2025) Empowering breast cancer clients through AI chatbots: Transforming knowledge and attitudes for enhanced nursing care, BMC Nursing, 24(994).

Strömblad, C.T., Baxter-King, R.G. & Meisami, A. (2021) Effect of a Predictive Model on Planned Surgical Duration Accuracy, Patient Wait Time, and Use of Presurgical Resources, JAMA Surgery, 156(4), p. 315.

Tahir, D. (2025) AI was meant to cut health costs. It turns out to need expensive human support, San Francisco Chronicle [online]. <https://www.sfchronicle.com/health/article/ai-health-care-needs-costly-human-oversight-20028092.php>

Tschandl, P., Rinner, C., Apalla, Z., et al. (2020) Human-computer collaboration for skin cancer recognition, Nat Med, 26(8), pp. 1229-1234.

Woebot Health (n.d.) About Us, Woebot [online]. <https://woebothealth.com/>

World Health Organization (2021) Ethics and governance of artificial intelligence for health, WHO [online]. <https://www.who.int/publications/i/item/9789240029200>

World Health Organization (2023) Strengthening the global architecture for health emergency preparedness, response and resilience, WHO [online]. <https://www.who.int/publications/m/item/strengthening-the-global-architecture-for-health-emergency-prevention–preparedness–response-and-resilience>

Xsolis (n.d.) The Evolution of AI in Healthcare, Xsolis [online]. <https://www.xsolis.com/blog/the-evolution-of-ai-in-healthcare>

Xu, B., Kraemer, M.U., Xu, B., Gutierrez, B., Mekaru, S., Sewalk, K., Loskill, A., Wang, L., Cohn, E., Hill, S. & Zarebski, A. (2020) Epidemiological data from the COVID-19 outbreak, real-time case information, Scientific Data, 7(1), pp. 1–6.

Yan, S., Li, J. & Wu, W. (2023) Artificial intelligence in breast cancer: Application and future perspectives, Journal of Cancer Research and Clinical Oncology, 149(17), pp. 16179–16190.