Wednesday, October 2, 2019

Core Post: Long Tails




In Algorithms of Oppression, Safiya Noble argues that Google’s search engine algorithms are not neutral; quite the opposite, they prioritize corporate interests, automate cultural biases, and push damaging stereotypes to the top of the list. The resulting representations have deleterious effects both on the groups represented and on those forming opinions about different races, genders, religions, and other marginalized groups.

In the context of PageRank functionality, it is also important to note that Google Search is unwaveringly the most visited and linked-to site, ensuring its own supremacy as an epistemic model: "[algorithm’s] logics are self-affirming” (Gillespie). In 2016, however, Google officially turned off its public display of PageRank data; the public was no longer invited to see the live rankings (or the unconditional supremacy of Google itself). Interestingly, around the same time, Google supplemented PageRank with RankBrain, which employs machine learning to interpret queries and deliver results. With this shift, Google claims that it can locate users’ true intent, reinvoking the same issues of relevance and credibility at the heart of Noble’s critique. RankBrain also specifically addresses the question of how to respond to long tail queries that have never been entered before.

Jumping ahead in the text, I was particularly interested in Noble’s examination of long tails. Noble articulates the increasingly reductive nature of query keywords: users generally enter the simplest possible word combinations, casting a wide net and relying on the search engine to return relevant, popular, and credible results. Consequently, advertisers are incentivized to frame content under the most generic terms, and as paid content rises to the top, critical perspectives are forced further down the list.
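The self-affirming logic Gillespie names is visible in PageRank’s own arithmetic: pages that already attract links attract rank, which in turn attracts more links. A minimal power-iteration sketch makes this concrete (the four-page web, link pattern, and damping value are all illustrative assumptions, not Google’s actual graph or parameters):

```python
import numpy as np

# Toy web of 4 pages; links[i][j] = 1 means page i links to page j.
# Page 3 is the most linked-to page, standing in for a dominant hub.
links = np.array([
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def pagerank(adj, damping=0.85, iters=100):
    n = adj.shape[0]
    # Each page splits its "vote" evenly among the pages it links to,
    # giving a column-stochastic transition matrix.
    out_degree = adj.sum(axis=1, keepdims=True)
    transition = (adj / out_degree).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Standard PageRank update: random jump plus link-following.
        rank = (1 - damping) / n + damping * transition @ rank
    return rank

ranks = pagerank(links)
print(ranks)  # page 3, the most linked-to page, ends up ranked highest
```

The point of the toy model is the feedback: the already-central node absorbs rank from everywhere, which is exactly the "rich get richer" dynamic Noble and Gillespie describe.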
Locating non-commercial content now often requires more descriptive and specific search terms; these ‘long tails’ are often the only tools for directing algorithms away from commercial and pornographic content (Noble, 88). Even if RankBrain is more responsive to long tail queries, questions of simplification, bias, and causality carry even higher stakes in the context of machine learning, as Taina Bucher lays out in If...Then: Algorithmic Power and Politics. These algorithms operate as cultural logics with productive power, shaping subjects and events. In The Relevance of Algorithms, Tarleton Gillespie asserts that “it is important that we conceive of this entanglement not as a one-directional influence but a recursive loop between the calculations of the algorithm and the ‘calculations’ of people.” Although machine learning algorithms are fundamentally human, seeded by human values through their training processes, the recursive loop creates so much complexity that predictive algorithms in particular are increasingly beyond their engineers’ comprehension. Algorithms and effective computability may represent a new model of knowledge, but it is one we can’t quite wrap our heads around.
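Gillespie’s recursive loop can be simulated in a few lines: a ranker that orders results by accumulated clicks, and users who mostly click whatever sits on top. Everything here is an illustrative assumption (the page names, the 80% top-result click share, the equal starting counts), not a claim about any real ranking system:

```python
import random

random.seed(7)

# Toy recursive loop: the ranker sorts by accumulated clicks, and users'
# clicks feed straight back into the ranking, so early popularity compounds
# regardless of any intrinsic quality difference between pages.
clicks = {"page_a": 1, "page_b": 1, "page_c": 1}  # equal starting priors

for _ in range(10_000):
    ranking = sorted(clicks, key=clicks.get, reverse=True)
    # Assume 80% of users click the top result; the rest browse deeper.
    chosen = ranking[0] if random.random() < 0.8 else random.choice(ranking[1:])
    clicks[chosen] += 1

print(sorted(clicks.items(), key=lambda kv: -kv[1]))
```

Whichever page takes an early lead keeps it, since the ranking it produced is also the ranking that feeds it: the loop, not the content, decides the winner.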



