Why AI in Healthcare Raises Ethical Questions
AI tools in clinical settings present a distinctive ethical challenge because they function in ways that are often opaque, probabilistic, and context-dependent. Unlike a clinical guideline or a diagnostic protocol that a clinician can read and evaluate, an AI system may produce recommendations based on patterns in large datasets that are not readily interpretable by the clinician using them.
This raises pressing questions about accountability, informed consent, the appropriate boundaries of clinical reliance on AI outputs, and the professional responsibility of clinicians when AI tools make, or contribute to, errors.
Regardless of the tool used in a clinical encounter — whether it is a stethoscope, a risk stratification algorithm, or an AI diagnostic tool — the regulated clinician remains professionally and ethically responsible for the decisions made about their patient's care. Technology does not transfer responsibility.
The Six Key Ethical and Professional Issues for UK Clinicians
1. Accountability for Clinical Decisions
The GMC's Good Medical Practice and the NMC Code are both explicit that registered professionals cannot delegate accountability for clinical decisions. Where an AI tool provides a recommendation, whether a diagnostic suggestion, a risk score, or a treatment pathway, the clinician using that tool bears responsibility for how that recommendation is acted upon.
This means that relying on an AI output without applying your own professional judgement to it is a professional failing, in exactly the same way that blindly following a junior colleague's recommendation without appropriate scrutiny would be. The AI is not the decision-maker. You are.
2. Informed Consent and Transparency
Patients have a right to know how decisions about their care are being made. Where AI tools play a significant role in a clinical decision (for example, an AI system that analyses a mammogram or triages an ECG), questions arise about whether patients should be informed of this and whether their consent to AI involvement is required.
While the regulatory position on AI-specific consent continues to develop, the GMC's core principles on shared decision-making and transparency are clear: patients are entitled to understand how decisions about their care are made and to have their questions about those decisions answered honestly. Concealing the use of AI tools from patients when they ask, or when it would materially affect their decision, is inconsistent with the duty of candour and the GMC's honesty standards.
The professional duty of candour — enshrined in GMC, NMC, and GDC standards — requires healthcare professionals to be open and honest with patients, including about the tools and methods used in their care. This duty does not pause when the tool in question is an AI system.
3. Working Within the Limits of Your Competence
All UK healthcare regulators require professionals to work within the limits of their competence and to seek help when they encounter situations outside those limits. This principle applies directly to AI tools. Using an AI system in a clinical context requires a level of understanding sufficient to evaluate its outputs appropriately, recognise its limitations, and identify when its recommendations should not be followed.
A clinician who uses an AI diagnostic tool without understanding its validated performance characteristics, known error rates, or demographic biases is not working within the bounds of informed professional practice. Competence in the use of clinical AI is becoming an expectation, not an optional extra.
4. Bias, Equity, and Health Inequalities
AI systems trained on historical healthcare data may replicate and entrench existing health inequalities. Well-documented examples include pulse oximetry accuracy differences across skin tones and diagnostic algorithms that perform differently across demographic groups. Clinicians using AI tools have an ethical responsibility to be aware of these limitations and to apply clinical judgement that compensates for them.
The ethical obligation to provide equitable, non-discriminatory care does not diminish because an AI system has generated a recommendation. If anything, where AI tools are known to perform less well in certain patient populations, the professional responsibility to apply additional scrutiny is heightened.
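To make this concrete, here is a minimal, purely illustrative Python sketch of the kind of subgroup performance check the preceding paragraphs describe. All figures are invented and describe no real device: a hypothetical AI tool with 90% sensitivity in one demographic group and 70% in another can look acceptable in aggregate while quietly under-serving the second group.

```python
# A minimal sketch (not a clinical validation protocol) showing why
# aggregate performance can hide subgroup differences. All figures
# below are invented for illustration only.

def metrics(records):
    """Compute sensitivity, specificity, and PPV from (truth, prediction) pairs."""
    tp = sum(1 for truth, pred in records if truth and pred)
    fn = sum(1 for truth, pred in records if truth and not pred)
    fp = sum(1 for truth, pred in records if not truth and pred)
    tn = sum(1 for truth, pred in records if not truth and not pred)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return sensitivity, specificity, ppv

# Hypothetical validation results: (ground truth, AI prediction, subgroup).
validation = (
    [(True, True, "A")] * 90 + [(True, False, "A")] * 10      # group A: 90% sensitive
    + [(False, False, "A")] * 880 + [(False, True, "A")] * 20
    + [(True, True, "B")] * 70 + [(True, False, "B")] * 30    # group B: 70% sensitive
    + [(False, False, "B")] * 870 + [(False, True, "B")] * 30
)

for group in ("A", "B"):
    subset = [(t, p) for t, p, g in validation if g == group]
    sens, spec, ppv = metrics(subset)
    print(f"group {group}: sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```

The point is not the code but the habit it illustrates: before relying on a tool's output for a given patient, ask whether its validated performance was actually measured in patients like them.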
5. Confidentiality and Data Protection
AI tools in clinical settings typically require access to patient data to function. This creates professional obligations under data protection law, including the UK GDPR, and under professional standards relating to confidentiality. Healthcare professionals who use AI tools in ways that involve the unauthorised processing of patient data, or who use consumer AI tools that do not meet healthcare information governance standards, risk both regulatory action and the trust of their patients. In practice, this means:
- Do not enter patient-identifiable data into AI tools that are not approved for clinical use by your organisation (a minimal illustrative check is sketched after this list)
- Understand whether any AI tool you are using stores, shares, or uses patient data for training purposes
- Follow your organisation's information governance policies on the use of AI and technology in clinical practice
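As a purely illustrative sketch of the first point above, the following Python fragment shows the shape of a pre-send check that refuses to pass text containing obvious identifier-like patterns to any tool not on an approved list. The tool identifiers and regex patterns here are hypothetical and deliberately crude; real safeguards come from your organisation's approved systems and information governance policy, not from pattern matching.

```python
import re

# Crude, purely illustrative patterns for patient-identifiable data.
# Real information governance does not rest on regexes; this sketch
# only shows the shape of a pre-send check.
IDENTIFIER_PATTERNS = [
    re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # NHS-number-like (10 digits)
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),    # date-of-birth-like
]

APPROVED_TOOLS = {"trust_clinical_ai"}  # hypothetical organisation-approved tool IDs


def safe_to_send(text: str, tool_id: str) -> bool:
    """Allow sending only if the tool is approved and no identifier-like
    pattern appears in the text."""
    if tool_id not in APPROVED_TOOLS:
        return False
    return not any(p.search(text) for p in IDENTIFIER_PATTERNS)


# Blocked: non-approved tool, and identifiable data respectively.
assert not safe_to_send("NHS no. 943 476 5919, chest pain", "consumer_chatbot")
assert not safe_to_send("NHS no. 943 476 5919, chest pain", "trust_clinical_ai")
# Permitted: approved tool, no identifier-like content.
assert safe_to_send("anonymised summary: chest pain, normal ECG", "trust_clinical_ai")
```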
6. Reporting Patient Safety Incidents Involving AI
The professional duty to report patient safety incidents applies to incidents where AI tools may have contributed to an adverse event or near-miss. Clinicians who suspect that an AI tool has produced an incorrect or misleading output that affected patient care must report this through their organisation's incident reporting system. Suppressing or downplaying such incidents is inconsistent with the professional duty of candour and the regulatory obligation of openness.
What UK Regulators Currently Expect
The GMC, NMC, GDC, and GPhC have each begun to address AI in their guidance and published statements, though comprehensive, AI-specific regulatory frameworks are still developing. The consistent message from all regulators is that existing professional standards — in particular those relating to accountability, consent, working within competence, honesty, and patient safety — apply fully in AI-assisted clinical contexts.
Clinicians who want to demonstrate that they have engaged seriously with the ethical dimensions of AI in their CPD portfolio, revalidation submission, or appraisal are well-placed to reference courses in healthcare ethics and professional standards that address these emerging responsibilities. Our online ethics and professional standards courses include relevant content for all regulated healthcare professionals.
All Probity & Ethics courses are certified by the CPD Certification Service (CPDUK). Our online courses in healthcare ethics and professional standards are relevant for all regulated UK healthcare professionals seeking CPD that reflects contemporary clinical practice, including the emerging ethical dimensions of AI use.
Healthcare Ethics CPD for the Modern Clinician
CPD UK Certified courses in ethics, probity, and professional standards — relevant for GMC, NMC, GDC, GPhC, and HCPC registrants. Online. Self-paced. Instant certificate.
Explore Online Courses
Frequently Asked Questions
Can AI replace clinical judgement?
No. Regulators including the GMC are clear that the clinical professional retains responsibility for decisions made about patient care, regardless of whether an AI tool was used in reaching that decision. AI tools are aids to clinical decision-making, not substitutes for it. A clinician who defers to an AI output without applying their own professional judgement remains accountable for the outcome.
What should I do if I think an AI tool has made an error?
Trust your clinical judgement. If you have concerns about an AI output, do not proceed on the basis of it without independent clinical verification. Report the concern through your organisation's governance channels and, where patient safety is involved, through the appropriate incident reporting system. Document your reasoning and the steps you took.
Does the GMC have specific guidance on using AI in clinical practice?
The GMC's Good Medical Practice and guidance on decision making, delegation, and consent all apply to AI-assisted clinical practice. AI-specific regulatory guidance continues to develop across all UK regulators. The GMC's core principles — patient safety, informed consent, accountability, honesty, and working within competence — apply fully to the use of AI tools.
Is using AI in clinical practice covered by standard professional indemnity?
This depends on your indemnity arrangement and the specific AI tool in question. You should check with your medical defence organisation or indemnity provider before relying on AI tools in clinical practice. Using AI tools not approved by your organisation for clinical use may have implications for your indemnity cover.
This article is for general informational and CPD purposes only. Regulatory guidance on AI in healthcare is developing rapidly. Healthcare professionals should consult their regulator's current published guidance and their organisation's information governance policies when using AI tools in clinical practice.