Recommended Reads
March 3rd, 2020
AI harms people indirectly by cementing a worldview that excludes them
AI and machine learning can harm people in direct, obvious ways (for example, imprecise facial recognition technology sweeps up dozens of similar faces when attempting to identify suspected criminals).
But these systems can also harm people indirectly, by dictating a worldview that excludes those who sit outside of the categories the algorithm has been trained on.
For example, this article discusses software that analyses medical data to gain an understanding of device failure by patient demographic. It's useful, but the process it uses to determine a patient's gender is archaic: if a patient refers to a husband, they are assumed to be a woman; if the patient has a prostate, they are assumed to be a man. This blunt-instrument approach not only flattens individuals into a gender binary, it also produces bad data (intersex people, trans people and non-heterosexual people will be miscategorised).
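The article's description boils down to a pair of brittle rules. A minimal sketch of that kind of heuristic (the function name and record fields here are assumptions for illustration, not the actual software's schema) shows exactly how it misfires:

```python
# Hypothetical reconstruction of the rule-based gender inference the
# article describes. The field names ("notes", "has_prostate") are
# illustrative assumptions, not the real system's data model.

def infer_gender(record: dict) -> str:
    """Guess a patient's gender from proxy signals in their record."""
    notes = record.get("notes", "").lower()
    if "husband" in notes:          # mentions a husband => assumed female
        return "female"
    if record.get("has_prostate"):  # prostate present => assumed male
        return "male"
    return "unknown"

# A gay man who mentions his husband is miscategorised:
patient = {"notes": "Lives with husband.", "has_prostate": True}
print(infer_gender(patient))  # -> "female", despite the prostate
```

The rules are cheap to write and will look plausible in aggregate statistics, which is precisely why the people they misclassify disappear from the data without anyone noticing.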
Progress driven by poorly designed AI comes at the cost of silencing people who don’t fit the frame.