AI and Health Equity: Re-Imagining the Future of Clinical Decision Support

Artificial intelligence (AI) is increasingly being explored to support a wide range of healthcare applications. Over the past decade, a major focus has been on training machine learning models to predict clinical outcomes and risks, with the goal of enhancing decision-making around assessment and care, an approach known as clinical decision support. The hope is that integrating these predictions into the clinic at the point of decision-making will meaningfully improve patient care, translating into more accurate assessment, personalized treatment, and targeted prevention or intervention.
Yet a growing body of evidence shows that AI models used in healthcare can reinforce or even amplify existing biases, particularly against marginalized or underrepresented patient groups. Although clinical oversight, or the presence of a “human in the loop”, is often proposed as a safeguard, it remains unclear whether such oversight is sufficient to prevent deployed AI systems from exacerbating health inequities.
In this presentation, we explore the health equity implications of AI-based clinical decision support for assessing inpatient violence risk in acute psychiatric settings. Drawing on our interdisciplinary research at the Centre for Addiction and Mental Health (CAMH), we present findings from two studies showing how social and systemic biases can become embedded in training datasets and, in turn, shape AI model predictions in ways that disadvantage socially and racially marginalized patients. We also share insights from our human-computer interaction experiments, which suggest that clinical oversight may not be sufficient to mitigate these biases. We conclude with an overview of our future research, arguing for the need to re-imagine how we design and deploy AI in contexts where it risks exacerbating existing health inequities.