Microsoft Researcher Details The Real-World Dangers Of Algorithm Bias
However quickly artificial intelligence evolves, however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, etc. -- it can't outpace the biases of its creators, humans. Microsoft researcher Kate Crawford delivered an incredible keynote speech, titled "The Trouble with Bias," at the Neural Information Processing Systems (NIPS) conference in Long Beach, California, on Tuesday.
In her keynote, Crawford presented a fascinating breakdown of the different types of harm done by algorithmic bias.
As she explained, the word "bias" has a mathematically specific definition in machine learning, usually referring to errors in estimation or to over- or under-representing populations when sampling. Less discussed is bias in the sense of the disparate impact machine learning might have on different populations. There's a real danger in ignoring the latter type of bias. Crawford detailed two types of harm: allocative harm and representational harm.
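To make that distinction concrete, here is a minimal, hypothetical sketch (in Python, with invented rates and sample sizes) of bias in the statistical sense: when one group is badly under-sampled, the estimate for that group is much noisier, and any decision built on it is more likely to go wrong for them -- one route from a sampling problem to a disparate impact.

```python
# A toy sketch of bias in the statistical sense: a sampling process that
# under-represents one group produces a much noisier estimate for that group.
# All rates and sample sizes here are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# True (unknown) repayment rates for two groups -- identical by construction.
true_rate = {"group_a": 0.70, "group_b": 0.70}

# A skewed sampling process: group_b is heavily under-represented.
sample_sizes = {"group_a": 5000, "group_b": 50}

for group, n in sample_sizes.items():
    outcomes = rng.random(n) < true_rate[group]        # simulated observations
    estimate = outcomes.mean()                          # estimated repayment rate
    std_err = np.sqrt(estimate * (1 - estimate) / n)    # uncertainty of the estimate
    print(f"{group}: n={n:5d} estimate={estimate:.3f} +/- {std_err:.3f}")

# The under-sampled group's estimate is far less reliable, so a decision rule
# built on it (say, a loan-approval threshold) will be wrong for that group
# more often -- which is how a statistical bias can feed an allocative harm.
```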
"An allocative harm is when a system allocates or withholds a certain opportunity or resource," she began. It's when AI is used to make a certain decision, let's say mortgage applications, but unfairly or erroneously denies them to a certain group. She offered the hypothetical example of a bank's AI continually denying mortgage applications to women.
She then offered a startling real-world example: a risk-assessment AI routinely rated black defendants as higher risk than white defendants. (Black defendants were referred to pre-trial detention more often because of those assessments.)
Representational harms "occur when systems reinforce the subordination of some groups along the lines of identity," she said -- essentially, when technology reinforces stereotypes or diminishes specific groups. "This sort of harm can take place regardless of whether resources are being withheld." Examples include Google Photos labelling black people as "gorillas" (a harmful stereotype that has historically been used to say black people literally aren't human), or AI that assumes East Asians are blinking when they smile.
Crawford tied together the complex relationship between the two harms by citing a 2013 paper by Latanya Sweeney. Sweeney famously documented the algorithmic pattern whereby googling a "black-sounding" name was more likely to surface ads for criminal background checks.
In her paper, Sweeney argued that this representational harm of associating blackness with criminality can have an allocative consequence: employers, when searching applicants' names, may discriminate against black applicants because their search results are tied to criminality.
"The perpetuation of stereotypes of black criminality is problematic even if it is outside of a hiring context," Crawford explained. "It's producing a harm of how black people are represented and understood socially. So instead of just thinking about machine learning contributing to decision making in, say, hiring or criminal justice, we also need to think about the role of machine learning in harmful representations of identity."
Search engine results and online ads both represent the world around us and influence it. Online representation doesn't stay online. It can have real economic consequences, as Sweeney argued. It also didn't originate online -- these stereotypes of criminality/inhumanity are centuries old.
As her speech continued, Crawford detailed various types of representational harm, their connections to allocative harms and, most interestingly, ways to diminish their impact. A commonly suggested quick fix is to break problematic word associations or remove problematic data, an approach often called "scrubbing to neutral".
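As a rough illustration of what "breaking a word association" can look like in practice, here is a minimal, hypothetical sketch of one common scrubbing technique: projecting an estimated bias direction out of a word embedding. The 3-d vectors and the gender axis below are toy values invented for this example; this is not Google's actual fix or anything Crawford endorsed, and her point is precisely that such scrubbing forces a choice about whose "neutral" gets encoded.

```python
# A toy sketch of "scrubbing to neutral": project an estimated bias direction
# out of a word vector so a problematic association is removed. All vectors
# here are invented 3-d toys, not real trained embeddings.
import numpy as np

def remove_direction(vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Return vec with its component along `direction` projected out."""
    d = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, d) * d

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: "engineer" starts out closer to "he" than to "she".
he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
engineer = np.array([0.6, 0.9, 0.3])

bias_direction = he - she                      # crude estimate of a gendered axis
scrubbed = remove_direction(engineer, bias_direction)

print("before:", cosine(engineer, he), cosine(engineer, she))
print("after: ", cosine(scrubbed, he), cosine(scrubbed, she))
# After scrubbing, "engineer" sits equally close to "he" and "she" -- but
# someone still had to decide which direction counts as bias and which words
# to scrub, which is exactly the question Crawford raises below.
```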
When Google image search was shown in 2015 to have a pattern of gender bias, returning almost entirely men for searches like "CEO" or "executive," Google eventually reworked the algorithm to deliver more balanced results. But this technique has its own ethical concerns.
"Who gets to decide which terms should be removed and why those ones in particular?" Crawford asked. "And an even bigger question is whose idea of neutrality is at work? Do we assume neutral is what we have in the world today? If so, how do we account for years of discrimination against particular subpopulations?"
Crawford opts for interdisciplinary approaches to issues of bias and neutrality, drawing on the logic and reasoning of ethics, anthropology, gender studies, sociology and so on, and rethinking the idea that there's any one, easily quantifiable answer.
"I think this is precisely the moment where computer science is having to ask much bigger questions because it's being asked to do much bigger things."