Medical Algorithm with a Racial Bias

From Privacy Wiki
Short Title Medical Algorithm Exhibited Racial Bias Against Black Patients
Location United States
Date October 2019

Solove Harm Aggregation, Decisional Interference
Information Identifying, Transactional, Medical and Health
Threat Actors The United States' health care system

Affected Black patients whose profiles were analyzed by the algorithm
High Risk Groups Ethnic Minority
Tangible Harms Lost Opportunity, Financial Cost

A medical algorithm that was meant to determine which patients had the most complex medical needs was found to exhibit a bias against black patients.


In 2019 a team of researchers announced the findings of a study examining an algorithm widely used by health care professionals in the United States. The study showed that the algorithm, which affected millions of patients, had an implicit bias that caused black patients to be consistently underserved.

The tool was supposed to determine which patients had the most complex medical needs and who would benefit the most from increased medical intervention. It specifically excluded race from the processed data but nonetheless developed a bias against black patients.

The bias occurred because the algorithm used health care costs as a proxy for health needs. In the United States, less money is spent on black patients with the same level of need, so the algorithm falsely concluded that black patients were healthier than equally sick white patients. In other words, because little money had been spent on a patient's medical care, the algorithm decided that little care was needed.
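The mechanism can be illustrated with a minimal simulation, a sketch rather than the actual system: two hypothetical groups have identical distributions of medical need, but one group receives less spending at the same need level (the group labels, spending gap, and selection cutoff below are illustrative assumptions, not figures from the study). Ranking patients by cost then systematically under-selects the lower-spending group.

```python
# Illustrative sketch of cost-as-proxy bias (not the actual algorithm).
# Group labels "A"/"B", the 30% spending gap, and the top-200 cutoff
# are hypothetical assumptions chosen for demonstration.
import random

random.seed(0)

def simulate_patient(group):
    """Return (true_need, observed_cost) for one hypothetical patient."""
    need = random.uniform(0, 10)   # underlying medical need, same for both groups
    cost = need * 1000             # spending roughly tracks need...
    if group == "B":
        cost *= 0.7                # ...but group B receives less at equal need
    return need, cost

patients = [("A", *simulate_patient("A")) for _ in range(1000)] + \
           [("B", *simulate_patient("B")) for _ in range(1000)]

# A cost-based "risk score": select the highest spenders for extra care.
patients.sort(key=lambda p: p[2], reverse=True)
selected = patients[:200]

share_b = sum(1 for group, _, _ in selected if group == "B") / len(selected)
print(f"Group B share of selected patients: {share_b:.0%}")
```

Because both groups draw need from the same distribution, an unbiased score would select roughly half of the patients from each group; ranking by observed cost instead selects almost none from the lower-spending group.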

Such automated decision-making can be seen as a form of Decisional Interference. The algorithm invaded the private sphere of individuals and created severe consequences for patients' personal affairs by limiting the amount of state-financed medical help they would receive.

Another privacy violation identified here is Aggregation: data from multiple individuals were aggregated, and identifying information was combined with transactional information. This effectively distinguished black individuals within the group, even though racial information was intentionally excluded from processing. Comparing the costs of medical treatment across individuals (even without knowing their race) produced the bias, because the algorithm simply aggregated the data without accounting for the fact that medical spending is shaped by longstanding social, cultural and institutional biases, and therefore cannot represent patients' real medical needs.

Laws and Regulations