Bias by Design: AI and global health inequity
The promise of artificial intelligence in global health is grand — but will it deliver for the many, or just the privileged few?
Open any newspaper, scientific journal or popular science magazine today, and chances are you’ll stumble upon the latest AI breakthrough in healthcare – from algorithms that predict diseases before symptoms emerge, to robotic-assisted surgeries, or systems designed to forecast the next pandemic.
These advances are often impressive. But more often than not, they remain far from real-world clinical integration. Beyond the headlines, one pressing question looms large: can AI in healthcare truly improve outcomes for everyone – or will it deepen existing health inequalities?
As of 2025, global healthcare remains a fractured landscape. While low- and middle-income countries face chronic underfunding and poor access to care, even wealthy nations are not immune to stark disparities – across socioeconomic groups, ethnicities, and geographies.
In this context, in silico medicine offers a tantalising promise: to bring high-quality, scalable healthcare solutions to underserved populations. But, as with every technological revolution, the potential benefits come with complex trade-offs.
Consider the story of Ellen Kaphamtengo, a pregnant woman in Malawi whose life – and that of her baby – was saved by an AI-enabled fetal monitoring tool. As reported by The Guardian in December 2024, the software tracks fetal signs in real time, alerting clinicians to early indications of distress. It helped cut stillbirths by 82% in just three years at one clinic. Remarkably, only 10% of the staff there were trained in traditional monitoring techniques, while a much larger share could proficiently use the AI-enabled software – a sign of AI's promise to deliver advanced care with fewer resources and less specialised training.
This is a compelling success story – but also a rare one. The reality is far more complex: AI cannot simply be parachuted into struggling health systems like humanitarian aid packages.
AI models are only as good as the data they’re trained on, and data are far from neutral.
In a landmark 2019 Science study, Ziad Obermeyer and colleagues uncovered racial bias in a widely used healthcare algorithm in the United States. The algorithm predicted patients' future healthcare needs from their past healthcare spending – a flawed proxy. Because historically less money has been spent on Black patients with similar conditions, the system incorrectly concluded that they were healthier than white patients with the same level of need. The result? Black patients were half as likely to be flagged for dedicated care.
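The mechanism behind this proxy bias is simple enough to sketch in a few lines of Python. The numbers below are synthetic and purely illustrative: when an algorithm treats past spending as its label for health need, any group that historically received less spending is scored as healthier, regardless of actual illness.

```python
# Illustrative sketch (synthetic data): using past spending as a proxy
# label for health need reproduces historical disparities.

# Two hypothetical patients with identical health need
# (same number of chronic conditions) but different historical spending.
patients = [
    {"group": "A", "conditions": 4, "past_spending": 9000},
    {"group": "B", "conditions": 4, "past_spending": 5000},
]

THRESHOLD = 7000  # spending cut-off used to flag patients for extra care

for p in patients:
    flagged = p["past_spending"] >= THRESHOLD
    print(f"Group {p['group']}: flagged for care programme = {flagged}")
# Group A is flagged, Group B is not - despite identical health need.
```

Real risk models are far more sophisticated than a single threshold, but the failure mode is the same: the bias lives in the label, not the model.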
This isn’t an isolated example. Bias in AI can encode, and amplify, existing inequities – often invisibly. As many experts warn, algorithms trained on data from wealthy countries can lead to poor, even dangerous, decisions when deployed elsewhere. Moreover, AI-based tools may depend on infrastructure that is scarce, inconsistent, or simply unavailable in other contexts, rendering them useless. These aren’t just technical challenges; they are systemic barriers.
Health inequalities are not just about income, but about power and representation. While the term “Global South” defies strict definition, it typically refers to a vast group of countries across Africa, Latin America, and parts of Asia – home to over 6 billion people, around 75% of the world’s population. Yet only a sliver of AI health tools are developed with their realities in mind.
Take omics data: despite Africa’s extraordinary genetic diversity, only around 1% of global omics datasets include African populations. This lack of representation not only limits the generalisability of AI models, but also risks reinforcing systems that exclude the people most in need.
That’s why initiatives like the Africa-Canada Artificial Intelligence and Data Innovation Consortium (ACADIC) stand out. ACADIC brings together researchers, institutions, and governments across Africa and Canada to co-develop AI tools tailored to local health challenges.
Co-led by Prof. Jude Kong and Prof. Bruce Mellado, ACADIC has piloted AI and big data analytics to guide COVID-19 vaccine rollouts in South Africa, using deep learning to identify priority groups. These insights are now being adapted and scaled across other African countries like Nigeria, Rwanda and Eswatini.
But ACADIC’s most important contribution may not be technological: it is a shift in paradigm towards locally led, community-owned innovation. Its core lessons include:
- Engage local community health workers and grassroots organisations;
- Secure buy-in from policy and decision-makers early on;
- Blend research with real-world implementation;
- Establish networks for mutual support and capacity-building;
- Embrace citizen science and simple, anonymous data-reporting tools;
- Advocate for clear AI governance and legal frameworks tailored for the Global South;
- Strengthen modelling capacity from within.
These aren’t just technical tweaks; they are foundational principles for ethical, inclusive AI in healthcare – principles that would serve fair implementation in high-income countries just as well.
A meaningful transformation in global health won’t come from simply exporting AI solutions. It must be rooted in the realities of the people it aims to serve.
And yet, researchers from the Global South remain starkly underrepresented in leadership, authorship, and funding – even in projects that claim to serve their communities. As a growing chorus of scholars has pointed out, this perpetuates the very inequalities AI claims to address.
To build truly equitable systems, AI in healthcare must be inclusive by design. That means investing in local data infrastructures, training local experts, and enabling countries to co-lead, not just consume, the next generation of digital health tools.
Because, in the end, AI won’t fix healthcare inequalities unless we fix the inequalities behind the AI.
Resources:
- Globalizing Fairness Attributes in Machine Learning: A Case Study on Health in Africa – EAAMO ’24: Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 2024
- Exploring the Impact of AI on Global Health in Developing Nations – Journal of Primary Care & Community Health, 2024. Reviews ethics issues such as data privacy, bias, interpretability, and infrastructure gaps in low-resource settings.
- Artificial Intelligence for Public Health Surveillance in Africa: Applications and Opportunities – arXiv, 2024
- How AI monitoring is cutting stillbirths and neonatal deaths in a clinic in Malawi – The Guardian, 2024
- Dissecting racial bias in an algorithm used to manage the health of populations – Science, 2019
- Decolonizing AI Ethics in Africa’s Healthcare: An Ethical Perspective – AI & Ethics, 2025. Explores how Western-centric AI ethics often clash with African communal values and argues for locally relevant ethical principles.
- Can Artificial Intelligence Extend Healthcare to All? – Reuters, 2024