Algorithmic Racism and Bias

In Alana Lentin’s chapter, the basic argument is that racism is inherently built into algorithms.  Users do not readily see it and are not necessarily aware of it: information is inputted and results are outputted, but the algorithm functions as a “black box” that masks the racism built in between input and output.[1]  Similarly, authors Gipson, Corry, and Noble discuss the inherent bias built into algorithms.[2]

In our HUM 501 course lab exercise, we explored the bias built into the Google Scholar search algorithm’s “black box.”  Searching the term “Digital Humanities” with and without the term “Gender” returned publications by white male and female university professors.  The results were not diverse, indicating the class bias and ethnic racism built into the algorithm.  This contrasted sharply with a similar search on Amazon using major textbooks in Digital Humanities (Uncertain Archives: Critical Keywords for Big Data).  Those results included suggestions for books in digital feminism, oral history, and Black studies/history, among other subjects.  The wider variety of resulting subjects suggests that the Amazon algorithm has less racism and bias built inside its “black box.”  The takeaway lesson is that creators of DH content should be aware that racism and bias are built into algorithms.

Gipson et al. suggest that employing intersectionality is a method to overcome white, patriarchal, heteronormative bias (306).  They claim that the application of intersectionality “overcomes traditional empiricism, which is predicated on the experiences and standards put forth by white male Western Europeans” (307).  Additionally, they state that “…intersectional frameworks shake the very foundations upon which claims of big data’s power rest…”  Yet it sounds like the authors are asserting their claims on the basis of the same positivism underlying white male Western science that they claim to be challenging.
Does substituting female, non-white, and homonormative categories into an algorithm eliminate racism and bias?  Is it possible that one “black box” is simply being swapped out for another?  Algorithms are used because they take masses of big data and collate it into something that is easier for humans to comprehend and utilize.  If people could interpret such data without algorithms, they would.  But I wonder whether, and how, intersectionality can truly eliminate racism and bias.  Is there any method or algorithm from which bias of any and all sorts can be completely eliminated?


[1] Alana Lentin, “Algorithmic Racism,” in Uncertain Archives: Critical Keywords for Big Data, ed. Nanna Bonde Thylstrup, Daniela Agostinho, Annie Ring, Catherine D’Ignazio, and Kristin Veel (Cambridge, MA: MIT Press, 2021), 57-64.

[2] Brooklyne Gipson, Frances Corry, and Safiya Umoja Noble, “Intersectionality,” in Uncertain Archives: Critical Keywords for Big Data, ed. Nanna Bonde Thylstrup, Daniela Agostinho, Annie Ring, Catherine D’Ignazio, and Kristin Veel (Cambridge, MA: MIT Press, 2021), 305-312.