Algorithms can now combine large amounts of data such as medical records, demographic information, and genetic data to create models that provide personalised predictions about an individual’s risk of developing diseases.
This paves the way for medical treatments, preventative interventions, and screening decisions to be made before the individual even gets sick, offering an opportunity to revolutionise healthcare by influencing policy, saving healthcare systems money, and preventing illness. It is imperative that the algorithms used to make health decisions are fair – that is, that they provide equally good predictions for all individuals, regardless of race, ethnicity, gender, age, or socioeconomic background – as all individuals have the right to equitable medical treatment.
However, this is not straightforward in practice: existing models often perform worse for certain demographic groups. This is because the limited data used to develop these models are not representative of the entire population, so predictions reflect the data rather than the population.
This is not intentional, but the underlying biases and inequities of many existing predictive tools have not been deeply analysed, which may have serious implications. For example, a disease risk calculation tool widely deployed in the USA was found to identify the need for care in less than half as many Black patients as white patients at the same level of risk (Obermeyer et al., 2019). This study illustrates why it is important to examine what fairness means in healthcare in the era of big data, where many algorithms are being proposed for, and translated into, the clinic.
Addressing this problem is not straightforward – there are contrasting ideas about what fairness means, both in healthcare and in algorithms – and no simple definitions. Therefore, the first goal for this project is to attempt to address the question: what is fairness in healthcare? This will be done by exploring the literature on relevant definitions, to understand fairness in the context of society.
Then, I will look at how these definitions can be used to analyse the fairness of existing risk prediction tools in healthcare. I will do this by exploring how predictions differ across subgroups of society and how models may be constrained with fairness criteria to produce more equitable predictions. Building on this understanding of how fair current healthcare algorithms are, I plan to focus on an emerging area of disease risk prediction: “polygenic risk scores”.
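To make the idea of auditing a model against a fairness criterion concrete, the sketch below compares a risk model's error rates across two subgroups – one common operationalisation of "equally good predictions" known as equalised odds. All labels, predictions, and group names here are invented for illustration, not drawn from any real tool.

```python
# Hypothetical audit: compare true-positive and false-positive rates
# between two subgroups. "Equalised odds" asks both gaps to be near zero.
# All data below are toy values fabricated for the sketch.

def group_rates(y_true, y_pred):
    """Return (true-positive rate, false-positive rate) for one subgroup."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

# Toy outcomes (1 = developed disease) and model predictions per subgroup:
tpr_a, fpr_a = group_rates([1, 1, 0, 0, 1], [1, 1, 0, 1, 1])  # subgroup A
tpr_b, fpr_b = group_rates([1, 1, 0, 0, 1], [1, 0, 0, 0, 0])  # subgroup B

# Large gaps indicate the model performs unequally across the groups:
tpr_gap = abs(tpr_a - tpr_b)
fpr_gap = abs(fpr_a - fpr_b)
```

In this toy example the model catches every case in subgroup A but only a third of the cases in subgroup B, the kind of disparity a fairness constraint would aim to shrink.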
These algorithms use genomic data to calculate individuals’ risks of developing different diseases and are expected to be deployed in healthcare globally in the near future. However, the data used to build them are often not representative of the populations in which they will be used. For example, the majority of genomic data comes from white participants, suggesting there may be hidden racial, ethnic, or ancestry biases, or unknown outcomes for individuals not represented in the models. The extent to which this could negatively affect certain subgroups of society is unclear. It is therefore important that the potential for bias and (conscious or unconscious) discrimination caused by these predictions is studied and mitigated according to fairness criteria.
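At its simplest, a polygenic risk score is a weighted sum: for each risk variant, the individual's allele count (0, 1, or 2 copies) is multiplied by an effect size estimated from a genome-wide association study, and the products are summed. The variant identifiers and weights below are invented purely to illustrate the arithmetic.

```python
# Minimal sketch of a polygenic risk score (PRS) calculation.
# Effect sizes would come from a genome-wide association study;
# the values and variant IDs here are hypothetical.

effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
genotype     = {"rs0001": 2,    "rs0002": 1,     "rs0003": 0}  # allele dosages

# PRS = sum over variants of (effect size × allele count)
prs = sum(effect_sizes[v] * genotype[v] for v in effect_sizes)
# 0.12*2 + (-0.05)*1 + 0.30*0 = 0.19
```

Because the effect sizes are estimated from the study cohort, a score built on data from predominantly white participants may transfer poorly to individuals of other ancestries, which is exactly the representativeness concern raised above.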
These models should not be rolled out unless their equitability can be improved. In order to do this, I will address to what extent it is possible to create fair models, and whether it is possible to verify this equitability before these models are used in practice. Using this knowledge, I plan to develop the next generation of disease risk prediction algorithms, integrating medical records, demographic information, and genomic data – with fairness as a central focus.
It is likely that a standard one-size-fits-all model is inappropriate for use population-wide, and that nuances in the population should be captured by using different models. The more fairness is studied in this context, the surer we can be that these disease risk prediction models will benefit society as a whole, identifying health risks and improving healthcare equitably, to ensure that those disadvantaged by historical biases in the data are not left behind.
Obermeyer, Z., Powers, B., Vogeli, C. and Mullainathan, S., 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), pp.447–453.