When AI gets cancer diagnosis wrong: how algorithmic bias can harm patients
Artificial intelligence promises faster, more consistent and more personalised cancer diagnoses, but a recent study led by Kun‑Hsing Yu (Harvard Medical School / Blavatnik Institute) raises an urgent red flag: AI models trained to read tumour images can inadvertently learn and use demographic cues—age, sex or ethnicity—introducing systematic biases into diagnostic decisions. For women who care about health, prevention and fair access to care, this is not a theoretical debate: it can directly affect who gets the right diagnosis and treatment at the right time.
Why pathology was supposed to be objective — and why AI challenges that
Pathology—the microscopic analysis of tissue samples—is traditionally seen as an objective discipline: pathologists examine cellular architecture, staining patterns and structural anomalies to classify tumours. Those visual cues should speak for themselves, independent of who the patient is. Yet the study found that several widely used AI models can infer demographic information from the same images and let those inferred features influence diagnostic outputs. In short: the machine sees patterns correlated with both diagnosis and patient attributes, and without constraints it may use either.
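To make that concrete, here is a minimal, hypothetical sketch of the kind of "probe" experiment researchers use to test whether an image model's internal representations encode patient attributes. The embeddings, labels and sizes below are invented for illustration; they are not data from the study.

```python
# Hypothetical probe: if a simple classifier can predict a demographic
# attribute from an image model's embeddings better than chance, those
# embeddings carry demographic information that could leak into diagnoses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 256))  # stand-in for tumour-image embeddings
sex = rng.integers(0, 2, size=500)        # stand-in demographic label

probe = LogisticRegression(max_iter=1000)
auc = cross_val_score(probe, embeddings, sex, cv=5, scoring="roc_auc").mean()
print(f"Probe AUC: {auc:.2f}  (0.5 means no demographic signal)")
```

On random data like this the score hovers around 0.5; the study's central finding is that, for real pathology models, a demographic signal is there to be found.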
Real‑world consequences: who is at risk?
The biases uncovered are not abstract statistical quirks: they translate into measurable disparities in how accurately the models perform across patient groups. In practice, that can mean delayed or incorrect treatment choices for some patients, with tangible impacts on prognosis and quality of care.
How do these biases emerge?
These biases have several interlocking root causes. Models can pick up demographic cues embedded in the tissue images themselves, and when those cues are correlated with diagnostic labels in the training data, an unconstrained model may lean on them as shortcuts. Training cohorts that under-represent some groups compound the problem, because the model sees fewer examples of what disease looks like in those patients.
Why aggregate accuracy is not enough
A model can report excellent overall accuracy while masking very poor performance on specific groups. For a tool used in clinical settings, that concealment is dangerous. Regulatory and clinical evaluations must examine subgroup performance, not just global metrics. Patients deserve transparency about whether AI assisted their diagnosis and how the tool performs across diverse populations.
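As an illustration of why a single aggregate number is not enough, the toy report below uses entirely invented results: a respectable overall accuracy coexists with much weaker performance for one group.

```python
# Toy subgroup report with invented numbers: the aggregate hides the gap.
import pandas as pd

results = pd.DataFrame({
    "group":   ["A"] * 8 + ["B"] * 4,
    "correct": [1, 1, 1, 1, 1, 1, 1, 0] + [1, 0, 0, 0],
})

print(f"Overall accuracy: {results['correct'].mean():.0%}")             # 67%
print(results.groupby("group")["correct"].mean().round(2).to_string())  # A 0.88, B 0.25
```

A report that stops at the first line looks reassuring; the second line is the one that matters for the patients in group B.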
Practical steps to reduce AI bias
Scientists and clinicians can act now to limit harm: validate models on diverse cohorts before they reach the clinic, report performance separately for each demographic group instead of relying on a single aggregate figure, disclose when and how AI contributed to a diagnosis, and keep a human expert responsible for the final call. One of these levers, rebalancing an under-represented group in the training data, is sketched below.
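The sketch assumes a deliberately imbalanced, invented cohort and uses a standard inverse-frequency reweighting trick; it illustrates the principle and is not the study's method.

```python
# Hedged sketch: inverse-frequency sample weights for an imbalanced cohort,
# so the under-represented group "B" is not drowned out during training.
# The cohort and column name are invented for illustration.
import pandas as pd
from sklearn.utils.class_weight import compute_sample_weight

train = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 10})

weights = compute_sample_weight(class_weight="balanced", y=train["group"])
print(train.assign(weight=weights).groupby("group")["weight"].first())
# Group A gets weight ~0.56, group B ~5.0; passing these to a model's
# fit(..., sample_weight=weights) makes group B's examples count more.
```

Reweighting is only one lever, and it cannot substitute for the more fundamental fix the article calls for: collecting genuinely diverse cohorts in the first place.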
The clinician’s role remains central
AI should be an aid, not an oracle. Pathologists and oncologists must retain responsibility for final interpretation, integrating AI outputs with clinical context, genetic tests and patient history. For patients, a second human review and open discussion about how AI was used are essential safeguards.
What patients can do
Patients can ask whether AI played any role in reading their biopsy, request that a pathologist review an AI-assisted result, and discuss with their care team how the tool has been validated for people like them. None of this requires technical expertise; it simply keeps transparency and human oversight on the table.
Research and policy must move together
Technical fixes alone will not suffice. Addressing algorithmic bias requires coordinated action: funders and research groups must prioritise diverse cohorts; hospitals must demand transparency and subgroup reporting; regulators should set standards for fairness in clinical AI. Without this governance, the technology risks entrenching health inequities rather than alleviating them.
A cautious optimism
The promise of AI in oncology is real: better triage, standardised readings, and new insights extracted from complex images. The study’s findings are a vital corrective: they remind us that this power comes with responsibility. With rigorous validation, diverse data and clinician oversight, AI can become a reliable partner in cancer care—one that benefits everyone, not just those who already receive the best care.