Abstract
Artificial intelligence (AI) is rapidly emerging as a transformative tool in breast cancer care, with applications ranging from mammogram interpretation to risk stratification and clinical decision support. While these technologies hold considerable promise for improving diagnostic accuracy, reducing clinician workload and enabling earlier interventions, their integration into healthcare raises significant ethical issues. This paper examines four critical areas of this implementation: ethical guidelines, data privacy, responsibility and accessibility. Expectations and regulations in the medical field highlight the need for transparency and choice in the use of AI within screening programmes, while concerns persist regarding automation bias, data governance and disparities in availability across healthcare systems. Addressing these issues requires careful regulation, strong clinical oversight and engagement with patient perspectives to build trust. The analysis presented argues that the successful implementation of AI in breast cancer care depends on balancing innovation with ethical safeguards. By ensuring fairness, accountability and respect for autonomy, AI can be responsibly integrated into practice while maintaining the integrity of patient-centred care.
Introduction
Artificial intelligence (AI) is transforming breast cancer care, from early detection to treatment planning. In the UK and internationally, healthcare systems are exploring how AI can be used to address workforce shortages, enhance diagnostic accuracy and improve patient outcomes. However, despite this promise, integrating AI into such a sensitive area of medicine means the technology cannot be evaluated on its technical merits alone. The ethical, legal and social implications that AI brings are just as critical: in particular, potential bias, privacy concerns, unclear responsibilities and unequal access.
Breast cancer is the most common cancer among women, and trust, transparency and human judgement therefore remain central to its care. The challenge is not simply whether AI can outperform professionals and clinicians in certain tasks, but whether it can be safely and responsibly embedded within the healthcare system without undermining the fundamental principles of medical ethics. This article examines the balance between innovation and responsibility by exploring four key areas: ethical guidelines, data privacy, responsibility and accessibility. These domains highlight the promise and paradox of AI in oncology and radiology, offering a chance to enhance patient outcomes while demanding careful safeguards to preserve medical ethics.
1. Ethical Guidelines
Artificial intelligence (AI) promises improved breast cancer detection and management, but it must be implemented under clear ethical principles and regulations to ensure patient safety, autonomy and trust. In the UK, where the NHS is working to integrate AI into breast cancer services, guidance must not only meet national standards but also reflect global principles to ensure fair and responsible use (Kyle & Streeting, 2025). To achieve this, four key areas need attention: grounding AI in ethical pillars, establishing mandates for adoption, regulating human-machine roles and setting conditions for when AI should be recommended.
1.1 Grounding AI in the Ethical Pillars of Medicine and Cancer Care
Leading frameworks stress that AI systems should keep ethics and human rights at the centre of their design and use (Pathology Innovation, 2025). For example, the WHO’s 2021 guidance on AI in health defines six core principles: protecting human autonomy; promoting well-being and safety; transparency; accountability; inclusiveness and equity; and sustainability (Pathology Innovation, 2025; Abujaber & Nashwan, 2024). These principles align with the classic pillars of medical ethics: respect for autonomy, beneficence, non-maleficence and justice (Abujaber & Nashwan, 2024). For instance, respecting patient autonomy means informing patients about how AI uses their data (Varkey, 2021). Beneficence and non-maleficence, in turn, require that AI algorithms be designed and used under the same obligations as a doctor, maximising patient benefit while minimising harm, and reflecting the clinical aim of detecting cancer early while avoiding unnecessary anxiety from false-positive results (Varkey, 2021; Taylor-Phillips et al., 2022).
Ethical guidelines also emphasise the need to address bias so that AI systems produce fair and reliable outcomes across diverse patient populations. Traditional AI models for breast cancer detection have shown poor performance across different populations because of the datasets used in their development (RSNA, 2023). While newer models demonstrate improvements, without sufficient safeguards algorithms risk reinforcing healthcare inequalities. This is particularly critical in breast cancer, where diagnostic accuracy may vary by age, ethnicity or genetic background. Such disparities could lead to misdiagnosis, with potentially harmful consequences for patients’ mental and physical health if treatment is initiated unnecessarily or withheld when it is required.
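To make this concrete, the following minimal sketch audits a classifier’s sensitivity and specificity separately for each demographic group, the kind of check that would surface the performance gaps described above. The data, group labels and predictions are entirely hypothetical stand-ins, not outputs of any real screening model.

```python
# Hypothetical subgroup audit: per-group sensitivity/specificity of a
# screening classifier, so performance gaps across populations are visible.
import numpy as np
from sklearn.metrics import confusion_matrix

def subgroup_audit(y_true, y_pred, groups):
    """Report sensitivity and specificity for each demographic group."""
    for g in np.unique(groups):
        mask = groups == g
        tn, fp, fn, tp = confusion_matrix(
            y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        print(f"group={g}: sensitivity={sens:.2f}, specificity={spec:.2f}")

# Simulated labels, predictions and demographic groups (illustrative only).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
groups = rng.choice(np.array(["A", "B", "C"]), 500)
subgroup_audit(y_true, y_pred, groups)
```

An equitable deployment would require audits of this kind before and after clinical rollout, with corrective action whenever any group’s performance falls below the accepted standard.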
Ethical principles such as transparency are vital. In high-stakes contexts such as cancer screening, it is ethically inadequate to deploy “black box” systems that provide outputs without rationale. Instead, explainability should be proportionate to risk: systems that influence medical decisions, such as biopsy recommendations or treatment plans, must be understandable to clinicians if their outputs are to be justified. Importantly, guidelines should emphasise that technical accuracy is not an end in itself but a means of delivering safe and equitable care. AI in breast cancer should therefore be evaluated not just on predictive performance but also on how well it enhances clinical decision-making, preserves trust between patients and doctors, and upholds the ethical principles at the core of medicine.
1.2 Establishing Mandates
Ethical guidelines for AI in breast cancer care extend beyond theoretical principles and must include practical mandates for adoption. Such mandates are an ethical necessity for the safe and responsible use of AI: they clarify the respective responsibilities of the clinician and the AI system, ensuring that accountability remains with qualified healthcare professionals (Pathology Innovation, 2025). In practice, however, enforcing mandates can be challenging, especially since AI systems evolve faster than regulatory frameworks. This lag risks AI systems entering use before comprehensive validation, undermining principles such as non-maleficence (Topol, 2019). For example, while AI can significantly enhance breast cancer prediction models, existing UK and NHS mandates are not fully equipped to monitor AI. Current regulations were primarily designed for traditional medical devices and human-led diagnostics, and they often lack mechanisms to assess continuously learning algorithms, update protocols for rapidly evolving models or enforce standardised validation across diverse populations. This regulatory gap can result in AI systems being deployed before thorough clinical validation, leaving potential biases, inaccuracies or inequities unaddressed and undermining patient safety and trust (Goh et al., 2025).
A notable advance in this area, however, is the development of an AI risk assessment model that predicts breast cancer risk without racial bias. The model, developed using mammographic images alone, outperforms traditional risk assessment models across different racial groups (RSNA, 2023). While this demonstrates AI’s potential to enhance equity in healthcare, current mandates must explicitly ensure that such innovations are systematically implemented and monitored so that their ethical benefits are realised in practice. Without clear mandates, even promising AI systems risk inconsistent use across healthcare settings, limiting their contribution to justice and beneficence. In other words, deploying the technology without clear policies could lead to incorrect use and inequity, contrary to the pillars of medicine (Pathology Innovation, 2025; Abujaber & Nashwan, 2024).
1.3 Regulating Human-Machine Roles
A major ethical consideration is how authority and responsibility are divided between human specialists and AI systems. Clear guidelines are necessary to prevent both automation bias, the over-reliance on AI output, and its opposite, the rejection of valid AI signals. In research conducted by Dratsch et al. (2023), radiologists across all levels of experience were studied to determine whether automation bias influenced their mammogram readings. The key finding was that readers at every level of expertise tended to follow the results suggested by the AI, whether right or wrong, highlighting the risk of misplaced reliance on AI systems (Dratsch et al., 2023).
To avoid this, the consensus in both UK and international frameworks is that breast cancer AI must operate under human supervision. AI is not intended to replace radiologists or oncologists, but to bridge the gap created by workforce shortages: routine and time-consuming tasks can be delegated to AI systems while radiologists and clinicians retain face-to-face interactions with patients. AI’s role in medical practice should therefore be supportive rather than authoritative. By clearly defining responsibilities, healthcare can ensure that AI augments human expertise without undermining accountability, thereby fostering trust and protecting patients from the risks of automation bias.
1.4 Setting Conditions for When AI Should Be Recommended
Another major consideration is when AI should or should not be used. As previously mentioned, the role of AI is not to undermine clinicians but to play a supportive role, allowing faster analysis of results while leaving the final decision to professionals. However, this does not fully settle what should prompt the use of AI in practice. AI in healthcare, especially for breast cancer, should always be about improving patient care, and it is important for patients to know how AI is being used in their treatment and what its benefits and potential risks are. In the context of breast cancer, where AI is used to analyse mammograms, patients should be given the choice to opt in or out. This ensures that patient autonomy is respected while promoting transparency in how their data are used. As AI continues to evolve, ethical frameworks must prioritise consent throughout the patient journey, allowing patients to opt out at any point during treatment. The WHO’s 2021 guidelines state that patient autonomy, wellbeing and trust must remain central to any AI application in healthcare (Pathology Innovation, 2025).
Ultimately, the integration of AI in breast cancer care depends on balancing its use against patient safety as the primary aim. This means the goals of AI applications must align with human judgement and clinical ethics.
2. Data Privacy
In recent years, the use of AI and digital technologies in breast cancer research, screening and treatment has increased (Singh et al., 2024). Despite the many advantages, data privacy protection remains a crucial ethical issue. The databases underpinning these systems contain genetic sequences, private health information and sensitive medical images, and poor data handling can lead to data abuse, privacy violations and a decline in patient trust. Mitigating the problem requires attention to a range of factors: the assessment of informed consent and ownership, the implementation of strong breach defences, the difficulties of anonymisation and the advancement of privacy-enhancing technologies.
2.1 Informed Consent, Ownership and Data Governance
Standard consent forms in clinical environments may permit broad, vaguely defined uses of patient data, including future AI training. If the implications are not clearly explained, patient autonomy could be jeopardised (Carter et al., 2020); where transparency is lacking, consent becomes a mere formality rather than truly informed agreement. Moreover, patient data such as genomic sequences or images may be used to generate profit without the patient’s knowledge, raising questions about data ownership rights and the division of profits (Milne et al., 2021).
The first step in establishing good governance over this matter is informing patients and gathering their consent. User-friendly mechanisms such as dynamic consent would be particularly helpful, allowing individuals to adjust their choices at any given point. To ensure that data practices align with patients’ expectations, retain stakeholder trust and uphold ethical norms, ethical supervision by institutional review boards, transparent policies and regulated data-sharing agreements should be implemented (Carter et al., 2020).
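As an illustration of how dynamic consent might be recorded in software, the sketch below keeps per-purpose consent choices that a patient can change at any time, with every change logged for audit. The purposes and field names are hypothetical, not drawn from any real consent platform.

```python
# Hypothetical dynamic-consent record: per-purpose choices, revocable at any
# time, with an append-only audit trail of every change.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DynamicConsent:
    patient_id: str
    choices: dict = field(default_factory=dict)   # purpose -> granted?
    audit_log: list = field(default_factory=list)

    def set_consent(self, purpose: str, granted: bool) -> None:
        self.choices[purpose] = granted
        self.audit_log.append((datetime.now(timezone.utc), purpose, granted))

    def permits(self, purpose: str) -> bool:
        return self.choices.get(purpose, False)   # default is no consent

consent = DynamicConsent("patient-001")
consent.set_consent("ai_model_training", True)
consent.set_consent("commercial_research", False)
consent.set_consent("ai_model_training", False)   # withdrawn at any time
print(consent.permits("ai_model_training"))       # False
```

The essential design choice is that the absence of a recorded choice defaults to no consent, mirroring the opt-in principle discussed in Section 1.4.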
2.2 Data Security and the Risk of Breaches
The datasets that underlie AI-supported breast cancer care, such as imaging files, genomic markers and personal histories, are extremely sensitive and vulnerable to exploitation by cybercriminals (Shabani et al., 2015). The ethical and legal principle of non-maleficence includes protecting patient privacy: breaches of health data can result in identity theft, unwitting disclosure of genetic predispositions and economic discrimination in insurance and employment. AI in healthcare is often associated with private technology companies, raising the stakes on questions of data custody and security (Milne et al., 2021). This is why security protocols, access controls, multi-factor authentication, encryption, auditability and regulatory scrutiny must be enforced (National Data Guardian, 2016).
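Of the safeguards listed above, encryption at rest is the simplest to illustrate. The sketch below uses the Fernet recipe from Python’s cryptography package; the record contents and the in-memory key handling are illustrative assumptions, not a production key-management policy.

```python
# Minimal sketch of encrypting a sensitive record at rest with Fernet
# (symmetric authenticated encryption from the "cryptography" package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice, held in a key-management system
cipher = Fernet(key)

record = b"patient_id=12345;brca1=positive"   # hypothetical sensitive record
token = cipher.encrypt(record)                # ciphertext, safe to store
assert cipher.decrypt(token) == record        # recoverable only with the key
```

Encryption alone does not satisfy the other requirements, such as access control and auditability, but it ensures that a stolen copy of the database is unreadable without the key.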
2.3 Anonymisation, Re-identification and Privacy-Enhancing Technologies
Advances in AI have reduced the effectiveness of anonymisation, long relied upon to protect patient privacy. De-identified data can often be re-identified, especially in genomic or imaging datasets (Gymrek et al., 2013; Cohen et al., 2018). For example, mammographic images can often be re-identified through unique anatomical features that a typical de-identification process would miss, providing a false sense of privacy protection. Privacy-enhancing technologies are therefore increasingly seen as important tools for addressing privacy risks in breast cancer AI research. Federated learning keeps sensitive data decentralised: models are trained locally on each institution’s data and only model updates are shared, rather than pooling raw data centrally where it could become a security risk (Yang et al., 2019; Kaissis et al., 2020).
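A minimal sketch of the federated-averaging idea follows, assuming a simple linear model trained by gradient descent; the three simulated “hospitals”, their data and all dimensions are hypothetical. Only model weights ever leave each site; the raw records never do.

```python
# Minimal federated-averaging (FedAvg) sketch: each site trains locally on
# its own data and only the resulting weights are aggregated centrally.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution's local training; raw (X, y) never leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Three simulated hospitals, each holding its own private dataset.
hospitals = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):                          # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)     # server averages weights only

print("aggregated model weights:", global_w)
```

In production systems the aggregation is typically weighted by each site’s sample size and combined with secure aggregation, but the privacy principle, sharing parameters rather than patient data, is the same.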
Differential privacy adds calibrated statistical noise to datasets or query results, preventing the re-identification of individuals while maintaining statistical validity (Dwork & Roth, 2014). Stronger cryptographic methods, including secure multi-party computation and homomorphic encryption, allow data to be computed on while it remains encrypted or distributed across several parties, guaranteeing that private information stays hidden even during analysis. Collectively, these methods protect patient privacy far better than traditional anonymisation approaches.
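To make the noise-addition idea concrete, here is a minimal sketch of the Laplace mechanism described by Dwork & Roth (2014), applied to a counting query over hypothetical risk scores; the threshold and privacy budget epsilon are illustrative assumptions.

```python
# Laplace mechanism sketch: a counting query has sensitivity 1 (one patient
# changes the count by at most 1), so noise with scale 1/epsilon gives
# epsilon-differential privacy for the released count.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, threshold, epsilon):
    """Differentially private count of patients above a risk threshold."""
    true_count = int(np.sum(np.asarray(values) > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

risk_scores = rng.uniform(0, 1, size=1000)   # hypothetical cohort scores
print(dp_count(risk_scores, threshold=0.8, epsilon=0.5))
```

Smaller values of epsilon give stronger privacy at the cost of noisier, less useful statistics, which is precisely the validity-versus-privacy trade-off noted above.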
3. Responsibility
The integration of artificial intelligence into breast cancer diagnostics has sparked significant debate about responsibility and liability. Traditionally, radiologists, oncologists and surgeons carried the legal and ethical weight of decision-making; a human expert was responsible for every mammogram reading, biopsy and treatment plan. With AI systems now screening mammograms, triaging images and even suggesting treatment strategies, the lines of accountability are becoming blurred. Patients and clinicians are confronted with the challenge of deciding who takes responsibility when an AI tool makes an error, whether through misdiagnosis, poor image-quality interpretation, biased training data that skews results or limitations in handling atypical cases. To address this issue, three areas are considered: the blurring of medical responsibility, shared responsibility, and patient trust and transparency.
3.1 Blurring Medical Responsibility
The adoption of AI in breast cancer diagnostics, such as screening mammography, histopathological analysis and treatment planning, introduces a shift in accountability. Historically, clinicians assumed full legal and ethical responsibility for diagnoses and treatment decisions. As AI increasingly acts as a “second reader” or standalone triage system, liability becomes obscured. Studies reveal stark patient preferences: fewer than 5% of patients are comfortable with AI reading mammograms alone, while nearly 90% insist on radiologist review of any suspicious AI findings (Carter et al., 2020). This underlines the importance of clinicians in emotionally charged contexts such as breast cancer care.
Simultaneously, many AI models operate as opaque “black boxes”. This opacity puts clinicians in a position where they may legally bear the consequences of following AI recommendations they cannot interpret. Kellis (2025) emphasises that, under current tort law, clinicians remain accountable even if harm stems from flawed algorithms or biased training data. Another review argues that AI integration into breast cancer care should not be treated as purely technical, as it involves ethical, legal and societal challenges that must be proactively addressed through oversight and public engagement (Carter et al., 2020).
3.2 Shared Responsibility
Recognising the limitations of physician-centric liability, scholars are advocating shared models that distribute responsibility among clinicians, institutions and AI developers. Concepts such as common enterprise liability, under which the costs of medical error are shared, or even assigning legal entity status to AI systems, have been proposed (Tucci et al., 2021). The EU’s AI Act designates medical AI as “high-risk”, mandating rigorous validation, transparency and human oversight, thereby holding manufacturers to stricter standards than in some North American jurisdictions. Meanwhile, analyses of AI in histopathology caution that algorithmic models require ongoing calibration, bias testing and performance oversight; failure in these areas may represent negligence on the part of developers or implementing institutions (McKinney et al., 2020). Moreover, ambiguity over insurer coverage of AI-based diagnoses may lead to elevated premiums or outright denial, pointing to the need for AI-specific liability insurance mechanisms (Kellis, 2025).
3.3 Patient Trust: Transparency and Consent
Breast cancer screening is emotionally challenging. Consequently, patients deserve clarity on who is responsible for AI-assisted decisions and what recourse is available if errors occur. Ethical standards require meaningful informed consent: patients must be told whether AI is involved in their diagnosis, how its outputs will be reviewed and what limitations exist (McKinney et al., 2020). Notably, AI models have demonstrated tangible clinical gains. For example, one algorithm reduced false-positive callbacks by 31.1% and benign biopsies by 7.4%, and decreased radiologist workload by 41.6%, without reducing cancer detection rates (McKinney et al., 2020). While such improvements show AI’s promise, they also raise the liability stakes, reinforcing the importance of explainability.
Explainable AI (XAI) offers a path forward, improving transparency and helping clinicians interpret AI outputs. Techniques such as SHAP (SHapley Additive exPlanations) are prevalent in breast cancer detection and risk prediction, enhancing interpretability and trust in AI systems (Samek et al., 2021). Combining XAI with complementary tools further bolsters diagnostic accountability and interpretability in mammography. However, the relationship between explanation and clinician trust is not straightforward: recent studies find that increasing levels of AI explanation do not always enhance clinician trust or diagnostic performance, highlighting the need for well-designed interpretability tools (McKinney et al., 2020).
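As an illustration of the kind of feature attribution SHAP provides, the sketch below trains a classifier on scikit-learn’s built-in Wisconsin diagnostic breast cancer dataset, a tabular stand-in for the imaging-derived features a real screening model would use, and ranks the features that most influence its predictions. The model choice and settings are assumptions for demonstration only.

```python
# Minimal, hedged sketch of SHAP feature attribution for a breast cancer
# classifier on scikit-learn's tabular Wisconsin diagnostic dataset.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shapley values attribute each prediction to individual input features, so
# a clinician can see which measurements drove a given classification.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, binary classifiers yield either a list of
# per-class arrays or one 3-D array; normalise to (samples, features).
if isinstance(shap_values, list):
    per_sample = np.abs(np.stack(shap_values)).mean(axis=0)
else:
    per_sample = np.abs(shap_values).mean(axis=-1)

# Rank features by mean absolute contribution across the test set.
importance = per_sample.mean(axis=0)
for i in np.argsort(importance)[::-1][:5]:
    print(f"{data.feature_names[i]}: {importance[i]:.4f}")
```

In a clinical deployment, per-case attributions of this kind would accompany each AI flag, giving the radiologist a concrete basis for accepting or overriding the recommendation.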
The path to responsible AI use in breast cancer care involves collaboration across disciplines. Legal scholars, ethicists, clinicians and technologists must develop guidelines that clarify liability without stifling innovation. Suggestions include requiring explainability where possible, mandating clinician override protocols, establishing joint liability and educating jurors about AI limitations. By balancing technological potential with accountability, healthcare can harness AI’s strengths of speed, accuracy and scalability while protecting patients from harm and clinicians from unfair blame.
4. Accessibility of AI
When balancing AI with ethics in breast cancer care, accessibility should not be overlooked. We propose four major pillars for this integral part: cost and affordability; equity of access; policy and regulation, examining where, how and why an AI system is permitted to be used; and global disparities, examining the gaps facing low- and middle-income countries (LMICs) and how these affect the use of AI in the breast cancer treatment process.
4.1 Cost and Affordability
AI can bring benefits such as increased accuracy, spotting abnormalities that may not be visible to the human eye, and reduced costs from fewer repeated mammograms, biopsies and rounds of chemotherapy. Rubio (2024) suggests that, when implementing such schemes, the resulting savings could be redirected elsewhere, such as towards medication or daily necessities; this should encourage patients and screening non-attenders to pursue personalised treatment faster and more cheaply, while finding the process less emotionally and financially draining. We propose that a department be set up to help patients manage their expenses, such as an accounts group within the hospital.
4.2 Equity of Access
When using AI for screening or treatment, it is vital that everyone is treated equally and given the same opportunity to receive the care they need, rather than aggravating existing health injustices. The core issue lies in data bias. AI algorithms are trained on existing data, which is often incomplete and skewed towards certain populations: typically wealthier, white, female groups from high-income countries (Carter et al., 2024). When applied to a different population, biased algorithms cannot account for genetic and demographic differences and hence perform poorly. This leads to inaccurate diagnoses and widening gaps in treatment for already disadvantaged groups. To combat this bias, more diverse datasets should be used so that algorithms learn to produce accurate diagnoses across populations.
4.3 Policy and Regulation
The use of AI should be strictly regulated to prevent potential risks such as misuse or overuse. Implementation should include action by the government, regulators or a dedicated hospital department to manage the use of AI. To avoid a situation in which there are no clear standards for who is legally and morally responsible when an AI system causes harm, a responsible body must be established before widespread implementation to ensure AI is introduced safely and effectively. Conducting rigorous clinical trials with the public to determine what kind of AI is acceptable, rather than simply accepting what developers offer, also helps maintain a strict environment and ensures that data are protected rather than misused or leaked (Carter et al., 2024; Carter et al., 2020; Roadevin & Hill, 2025).
4.4 Global Disparities
The main reason for implementing AI is to allow everyone to receive equal treatment regardless of where they are. However, global disparity poses real challenges to implementing AI in breast cancer care. For instance, the five-year survival rate exceeds 90% in high-income countries, whereas it falls to 66% in India and 40% in South Africa (WHO, 2021). AI developed in high-income countries may not transfer to low- and middle-income countries because of differences in genetic risk factors and cancer epidemiology and, most importantly, the lack of necessary infrastructure in LMICs, such as stable electricity, internet access and electronic health records. The high installation and maintenance costs of these systems are a further barrier to widespread adoption in developing nations, widening the gap in cancer care globally. To aid implementation, local hospitals can work with charities and fundraisers that focus on improving public health infrastructure, addressing social determinants of health (such as poverty and education), using technology to increase access and strengthening health workforces.
Although applying AI in real-world situations presents challenges, these can be overcome with the necessary precautions. This technology should be accessible to all: with the application of AI, women around the world stand a better chance of diagnosis, treatment and remission, and it should therefore be incorporated in all parts of the world.
Conclusion
Artificial intelligence is gradually demonstrating its potential to transform breast cancer care, from early detection to treatment strategies. Nonetheless, its full benefits will only be realised once robust ethical frameworks are established. Across the four pillars of ethical principles, data protection, accountability and accessibility, it can be concluded that the success of AI in healthcare hinges not just on technology but on trust, responsibility and equity. The foundation should rest on ethical principles, ensuring that AI aids rather than replaces human experts, while mandates and regulations specify when and how AI should be used. Protecting data privacy is essential, with informed consent, strong security and privacy-enhancing technologies safeguarding patient safety and trust. Responsibility must be clearly defined so that clinicians, developers and institutions remain accountable and gaps in patient care are prevented. Ultimately, accessibility means guaranteeing that AI does not exacerbate existing disparities but promotes equitable and affordable healthcare for diverse populations across the globe.
The integration of AI in breast cancer treatment must be recognised as a means of improving quality of life, rather than merely a technical enhancement. Implemented properly, AI can enhance accuracy, reduce the burden on healthcare systems and improve patient outcomes worldwide. However, if rigour in privacy, accountability, fairness and ethical conduct is not upheld, the technology’s potential could devolve into a source of harm and inequality. The path forward requires combining technological innovation with clear ethical practice, ensuring AI supports clinicians, protects patient rights and upholds established standards of medical care.
Bibliography
Abujaber, A.A. & Nashwan, A.J. (2024) Ethical framework for artificial intelligence in healthcare research: A path to integrity, World Journal of Methodology, 14(3), 94071.
Carter, S.M., Rogers, W., Win, K.T., Frazer, H., Richards, B. & Houssami, N. (2020) The ethical, legal and social implications of using artificial intelligence systems in breast cancer care, The Breast, 49, pp. 25–32.
Carter, S.M., Popic, D., Marinovich, M.L., Carolan, L. & Houssami, N. (2024) Women’s views on using artificial intelligence in breast cancer screening: A review and qualitative study to guide breast screening services. The Breast, 77(103783).
Cohen, J.P., Luck, M. & Honari, S. (2018) Distribution Matching Losses Can Hallucinate Features in Medical Image Translation, arXiv, 1805.08841.
Dratsch, T., Chen, X., Mehrizi, M.R., Kloeckner, R., Mähringer-Kunz, A., Püsken, M., et al. (2023) Automation Bias in Mammography: The Impact of Artificial Intelligence BI-RADS Suggestions on Reader Performance, Radiology, 307(4).
Dwork, C. & Roth, A. (2014) The Algorithmic Foundations of Differential Privacy, Foundations and Trends in Theoretical Computer Science, 9(3–4), pp. 211–407.
Goh, S., Goh, R.S.J., Chong, B., Ng, Q.X., Koh, G.C.H., Ngiam, K.Y. & Hartman, M. (2025) Challenges in Implementing Artificial Intelligence in Breast Cancer Screening Programs: Systematic Review and Framework for Safe Adoption, Journal of Medical Internet Research, 27, e62941.
Gymrek, M., McGuire, A.L., Golan, D., Halperin, E. & Erlich, Y. (2013) Identifying Personal Genomes by Surname Inference, Science, 339(6117), pp. 321–324.
Kaissis, G.A., Makowski, M.R., Rückert, D. & Braren, R.F. (2020) Secure, privacy-preserving and federated machine learning in medical imaging, Nature Machine Intelligence, 2, pp. 305–311.
Kellis, D.M. (2025) Existential Health Care Ethics, AMA Journal of Ethics, 27(8), E543-548.
Kyle, P. & Streeting, W. (2025) World-leading AI trial to tackle breast cancer launched, Gov UK [online]. <https://www.gov.uk/government/news/world-leading-ai-trial-to-tackle-breast-cancer-launched>
McKinney, S.M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., et al. (2020) International evaluation of an AI system for breast cancer screening, Nature, 577(7788), pp. 89–94.
Milne, R., Morley, K.I., Almarri, M.A., Anwer, S., Atutornu, J., et al. (2021) Demonstrating trustworthiness when collecting and sharing genomic data: public views across 22 countries, Genome Medicine, 13(92).
National Data Guardian for Health and Care (2016) Review of Data Security, Consent and Opt-Outs, National Data Guardian [pdf]. <https://assets.publishing.service.gov.uk/media/5a7502feed915d3c7d529b39/data-security-review.PDF>
Pathology Innovation CC Alliance (2025) Ethics and Governance of Artificial Intelligence, PIcc Alliance [online]. <https://pathologyinnovationcc.org/projects/ethics-and-governance-of-artificial-intelligence>
Radiological Society of North America (2023) AI Model Predicts Breast Cancer Risk Without Racial Bias, RSNA 2023 Daily Bulletin [online]. <https://dailybulletin.rsna.org/db23/index.cfm?pg=23thu15>
Roadevin, C. & Hill, H. (2025) AI interventions in cancer screening: balancing equity and cost-effectiveness, Journal of Medical Ethics.
Rubio, M. (2024) Can AI and Machine Learning Revolutionize the Mammogram?, Breast Cancer Research Foundation [online]. <https://www.bcrf.org/blog/ai-breast-cancer-detection-screening/>
Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J. & Muller, K.-R. (2021) Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications, Proceedings of the IEEE, 109(3), pp. 247–278.
Shabani, M., Knoppers, B.M. & Borry, P. (2015) From the principles of genomic data sharing to the practices of data access committees, EMBO Molecular Medicine, 7, pp. 507–509.
Singh, A., Singh, A. & Bhattacharya, S. (2024) Research trends on AI in breast cancer diagnosis, and treatment over two decades, Discover Oncology, 15(1), 772.
Taylor-Phillips, S., Seedat, F., Kijauskaite, G., Marshall, J., Halligan, S., Hyde, C., et al. (2022) UK National Screening Committee’s approach to reviewing evidence on artificial intelligence in breast cancer screening, The Lancet Digital Health, 4(7), pp. 558–565.
Topol, E.J. (2019) High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), pp. 44–56.
Tucci, V., Saary, J. & Doyle, T.E. (2021) Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review, Journal of Medical Artificial Intelligence, 5.
Varkey, B. (2021) Principles of Clinical Ethics and Their Application to Practice, Medical Principles and Practice, 30(1), pp. 17–28.
World Health Organization (2021) New global breast cancer initiative highlights renewed commitment to improve survival, WHO [online]. <https://www.who.int/news/item/08-03-2021-new-global-breast-cancer-initiative-highlights-renewed-commitment-to-improve-survival>
Yang, Q., Liu, Y., Chen, T. & Tong, Y. (2019) Federated Machine Learning: Concept and Applications, ACM Transactions on Intelligent Systems and Technology, 10(2), pp. 1–19.