Demographic biases in AI-generated simulated patient cohorts: a comparative analysis against census benchmarks

Miriam Veenhuizen, Andrew O'Malley*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Background  Generative artificial intelligence models are being introduced as low-cost tools for creating simulated patient cohorts in undergraduate medical education. Their educational value, however, depends on the extent to which the synthetic populations mirror real-world demographic diversity. We therefore assessed whether two commonly deployed large language models produce patient profiles that reflect the current age, sex, and ethnic composition of the UK.

Methods  GPT-3.5-turbo-0125 and GPT-4o-mini-2024-07-18 were each prompted, without demographic steering, to generate 250 UK-based ‘patients’. Age was returned directly by the model; sex and ethnicity were inferred from given and family names using a validated census-derived classifier. Observed frequencies for each demographic variable were compared with England and Wales 2021 census expectations by chi-square goodness-of-fit tests.
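
To illustrate the workflow, the sketch below shows the two steps the Methods describe: one unsteered generation call and one chi-square goodness-of-fit test against census proportions. It is a minimal Python reconstruction, not the authors' code; the prompt text, the observed counts, and the census proportions are assumptions made for illustration only.

```python
# Minimal sketch only: assumes the `openai` and `scipy` packages are installed
# and OPENAI_API_KEY is set. Prompt wording, counts, and census proportions
# below are illustrative placeholders, not the study's materials or data.
from openai import OpenAI
from scipy.stats import chisquare

client = OpenAI()

# Step 1 (illustrative): request one patient profile with no demographic
# steering; the study repeated this to build a 250-patient cohort per model.
response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    messages=[{
        "role": "user",
        "content": "Generate a simulated UK-based patient profile "
                   "with a full name and an age.",
    }],
)
print(response.choices[0].message.content)

# Step 2: chi-square goodness-of-fit of observed sex counts against
# England and Wales 2021 census expectations (approximate proportions).
observed = [162, 88]                 # male, female (placeholder counts)
census_props = [0.49, 0.51]          # placeholder census sex split
expected = [p * sum(observed) for p in census_props]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square(1 df) = {stat:.1f}, p = {p_value:.3g}")  # df = categories - 1
```

The same test extends directly to the age and ethnicity variables reported below, with 17 and 10 degrees of freedom respectively (one fewer than the number of categories).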

Results  Both cohorts diverged significantly from census benchmarks (p < 0.0001 for every variable). Age distributions showed an absence of very young and older individuals, with certain middle-aged groups overrepresented (GPT-3.5: χ²(17) = 1310.4, p < 0.0001; GPT-4o-mini: χ²(17) = 1866.1, p < 0.0001). Neither model produced patients younger than 25 years; GPT-3.5 generated no one older than 47 years and GPT-4o-mini no one older than 56 years. Sex proportions also differed markedly, skewing heavily toward males (GPT-3.5: χ²(1) = 23.84, p < 0.0001; GPT-4o-mini: χ²(1) = 191.7, p < 0.0001): male patients constituted 64.7% and 92.8% of the two cohorts, respectively. Name diversity was limited: GPT-3.5 yielded 104 unique first–last-name combinations, whereas GPT-4o-mini produced only nine. Ethnic profiles were similarly imbalanced, featuring overrepresentation of some groups and complete absence of others (χ²(10) = 42.19, p < 0.0001).

Conclusions  In their default state, the evaluated models create synthetic patient pools that underrepresent or entirely exclude younger, older, female, and most minority-ethnic patients. Such demographically narrow outputs threaten to normalise biased clinical expectations and may undermine efforts to prepare students for equitable practice. Baseline auditing of model behaviour is therefore essential, providing a benchmark against which prompt-engineering or data-curation strategies can be evaluated before generative systems are integrated into formal curricula.
Original language: English
Article number: 58
Pages (from-to): 1-8
Number of pages: 8
Journal: Advances in Simulation
Volume: 10
Issue number: 1
Publication status: Published - 18 Nov 2025
