when “move fast and break things” crosses the line

Areena Akhter

the case against facial recognition for predicting human behaviour

march 19th, 2022

Shoshana Zuboff, Harvard University professor and author of “The Age of Surveillance Capitalism,” has observed that “in virtually every economic sector, from insurance to automobiles to health, education, finance… every product is described as ‘smart’ and every service described as ‘personalized.’” As the tech industry races to release personalized products built on consumer metrics, the next step appears to be using facial recognition technology (FRT) to predict consumer behaviour.

However, critics of FRT have noted that this approach to building predictive technology mirrors the now-discredited science of physiognomy, which sought to “link physical appearance, especially of the face, with in-born character traits, including emotional capacity and intelligence.” Although the technology industry may be tempted to leverage FRT to predict user behaviour, the technology is neither accurate enough nor scientifically justified enough to be deployed fairly across a diverse population. It would be unethical to create an FRT tool that predicts user behaviour: developing it would primarily serve to affirm the biases of its creators, and using it would disproportionately place underrepresented groups in danger. Building such a tool would ignore the historical misuse of facial data and lead down a slippery slope towards technological racism.

Should a computer be allowed to look at my face and predict who I will be when I grow up?

Creating an FRT tool to predict human characteristics is unethical because facial features have already been discredited as a source of behavioural data, owing to their historical use to justify racism. Physiognomy was discredited as a science in the late 20th century, after its practitioners used it to correlate European facial features with positive behavioural characteristics, justifying white supremacy. In Nazi Germany, physiognomy was used as a tool to promote institutionalized racism through misinformation. A state-sponsored children’s textbook published in 1938 cautioned that “it is often hard to recognize the Jew as a swindler and criminal […] How to tell a Jew: the Jewish nose is bent. It looks like the number six.” The Nazis actively taught children to distinguish Jewish facial features and to associate those traits with criminal behaviour, leveraging physiognomy to justify their a priori assumption that Jewish people were inferior. Nazi scientists could then argue that this interpretation was objective because it was grounded in the “science” of physiognomy, enabling them to entrench racist theories as fact in textbooks. This pseudoscientific data was used to target and persecute Jewish people during the Holocaust, in which six million Jews were murdered, on the basis that their personal characteristics were deemed scientifically undesirable. Historical evidence clearly demonstrates the danger of reintroducing facial features as a source of actionable behavioural data, as they have already been used to justify inhumane discrimination.

Like physiognomy in the 20th century, today’s FRT tools have been found to perpetuate confirmation bias, which is “the tendency to process information by looking for, or interpreting, information that is consistent with one’s existing beliefs.” It would be unethical for computer scientists to take advantage of the fact that facial data exists and use it to predict human behaviour, because the practice is not scientifically justified and is ultimately morally wrong. Consider a controversial Stanford study by Wang and Kosinski, which argued that an AI model using FRT and facial characteristics could predict sexual orientation better than a human could. The computer science community has criticised the study for perpetuating “junk science,” since it ignored the fact that a model considering differences in grooming patterns achieved the same level of accuracy as one that considered differences in facial features. FRT systems are clearly subject to the same confirmation bias that plagued physiognomy: by striving to associate physical attributes with behaviour, computer scientists may overlook the possibility that the two are not related at all. This has dangerous moral implications for society. Like the seemingly factual textbooks published in Nazi Germany, predictive algorithms are increasingly becoming ‘black-box’ arbiters of truth, whose definition of fact becomes impossible to dispute. Society increasingly relies on these algorithms for decision-making: in a 2019 opinion piece, New York City Police Commissioner James O’Neill stated that “[i]t would be an injustice to the people we serve if we policed our 21st-century city without using 21st-century technology.” Given this growing reliance on predictive tools for factual evidence, computer scientists who link facial features to human behaviour enable racist ideologies to become embedded in society’s notion of truth. The practice erodes users’ right to be protected from discrimination, because they cannot dispute a narrative written by the creators of a potentially racist FRT algorithm.

Moreover, by choosing to use FRT for behaviour prediction, computer scientists would perpetuate the oppression of marginalised groups. These systems already demonstrate demographic bias, which occurs when “there are significant differences in how [an algorithm] operates when interacting with different demographic groups. Consequently, certain groups of users are privileged while other groups are disadvantaged.” A tool that uses FRT to predict human characteristics would therefore produce disproportionately inaccurate results for marginalised groups, compounding its discriminatory effects. Steed and Caliskan corroborate this concern, finding that “if off-the-shelf [FRT] can learn biased trait inferences from faces and their labels, then application domains using [FRT] to make decisions are at risk of propagating harmful prejudices.” A seminal study on the inadequacies of industry-standard datasets for FRT, by Joy Buolamwini and Timnit Gebru, found that “leading tech companies’ commercial AI systems [for FRT] significantly mis-gender women and darker skinned individuals.” If FRT already fails to accurately identify the gender of black and brown individuals, how can it be expected to predict complex behavioural characteristics for these marginalised groups? Buolamwini and Gebru also found that when a company like Microsoft does misgender a face, 93.6% of the time the subject is darker-skinned. Without better datasets, any predictive tool built on FRT would disproportionately return inaccurate results for minorities, worsening their position in systems where they are already disadvantaged. Applying the Microsoft statistic, if such a tool were used by police to predict criminal behaviour, roughly nine out of every ten of its mistakes would fall on darker-skinned individuals, compounding the racial profiling these individuals already face from the criminal justice system. FRT tools are clearly inaccurate when applied to diverse populations, and they should not be used to provide consumer insights when they would inevitably cause more harm than good to their users.
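To make the arithmetic behind this concern concrete, here is a minimal sketch in Python of a per-group error audit for a face classifier. The numbers are entirely hypothetical (they are not the actual Gender Shades figures); the point is only to show how a single aggregate accuracy score can look acceptable while one group absorbs almost all of the errors.

```python
# Minimal sketch of a per-group error audit for a face classifier.
# All numbers are hypothetical, chosen only to show how an aggregate
# accuracy figure can conceal large demographic disparities.
from collections import Counter

# Each record: (demographic_group, prediction_was_correct)
results = (
    [("lighter-skinned", True)] * 950 + [("lighter-skinned", False)] * 50
    + [("darker-skinned", True)] * 650 + [("darker-skinned", False)] * 350
)

total = len(results)
correct = sum(1 for _, ok in results if ok)
print(f"Overall accuracy: {correct / total:.1%}")  # 80.0% -- looks tolerable

# Per-group error rates tell a very different story.
errors_by_group = Counter(group for group, ok in results if not ok)
counts_by_group = Counter(group for group, _ in results)
for group, n in counts_by_group.items():
    print(f"{group}: error rate {errors_by_group[group] / n:.1%}")

# Share of all errors borne by each group -- the kind of statistic
# Buolamwini and Gebru report (e.g. most misgendering errors landing
# on darker-skinned subjects).
total_errors = sum(errors_by_group.values())
for group, n in errors_by_group.items():
    print(f"{group}: {n / total_errors:.1%} of all errors")
```

With these made-up numbers, overall accuracy is 80%, yet the error rate is 5% for one group and 35% for the other, and 87.5% of all mistakes fall on the darker-skinned group. Reporting only the headline accuracy would hide exactly the disparity described above, and any behaviour-predicting tool built on such a model would inherit that skew.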

It is clear that the data-driven technology industry aspires to build FRT tools for behavioural prediction. Surveillance technology startup Clearview AI recently stated that “everything in the future, digitally and in real life, will be accessible through your face.” Given this external pressure to build first and question later, it is imperative that computer scientists pause to ask whether it is morally right to use a given data source at all. There is overwhelming evidence that predicting behavioural traits with FRT would be morally wrong, because these predictions would be rooted in racial bias and would aggravate existing injustices against marginalised groups. Ultimately, computer scientists must know when to refuse to build harmful technologies. Otherwise, they become equally complicit in the systemic racism that would be perpetuated by continuing to “move fast and break things.”

***
Thank you for coming along on my journey of learning about how facial recognition systems are entrenched in historical prejudice! I’m a computer science and human-computer interaction student at uWaterloo, aiming to relearn technology through the lens of history, cognition, and systems of power. This essay was written for my AI ethics class, CS 497 at Waterloo. If you’d like to chat more about this topic, find me on LinkedIn or Twitter. I also run u:wait, an organization for undergraduates working towards algorithmic integrity. Come out to our first event to learn more about algorithmic bias, from much smarter people than me!

References

Laidler, John. “Harvard Professor Says Surveillance Capitalism Is Undermining Democracy.” Harvard Gazette, March 4, 2019. https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/.

“Physiognomy.” Race Deconstructed, University of North Carolina Libraries digital exhibit. Accessed March 7, 2022. https://exhibits.lib.unc.edu/exhibits/show/race-deconstructed/physiognomy.

Agüera y Arcas, Blaise. “Physiognomy’s New Clothes.” Medium, May 20, 2017. https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a.

Daub, Adrian. “The Return of the Face.” Longreads, September 29, 2018. https://longreads.com/2018/10/03/the-return-of-the-face/.

“Confirmation Bias.” Encyclopædia Britannica. Accessed March 7, 2022. https://www.britannica.com/science/confirmation-bias.

O’Neill, James P. “How Facial Recognition Makes You Safer.” The official website of the City of New York, June 10, 2019. https://www1.nyc.gov/site/nypd/news/s0610/how-facial-recognition-makes-you-safer.

“What Is Demographic Bias in Biometrics?” Mitek Systems, April 15, 2021. https://www.miteksystems.com/blog/what-is-demographic-bias-in-biometrics.

Steed, Ryan, and Aylin Caliskan. “A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of Appearance Bias in the Wild.” AI and Ethics 1, no. 3 (2021): 249–60. https://doi.org/10.1007/s43681-020-00035-y.

Buolamwini, Joy, and Timnit Gebru. “Gender Shades.” Accessed March 7, 2022. http://gendershades.org/overview.html.

Harwell, Drew. “Facial Recognition Firm Clearview AI Tells Investors It’s Seeking Massive Expansion beyond Law Enforcement.” The Washington Post, February 19, 2022. https://www.washingtonpost.com/technology/2022/02/16/clearview-expansion-facial-recognition/.
