Wednesday, October 2, 2019

core post: my abstraction of their abstractions

To borrow Finn’s titling of his sub-section, one “thread” throughout the readings is algorithm as abstraction. For each author these abstractions carry different implications, or consequences, reflected in their approaches to and descriptions of algorithms. Whether the algorithms abstract “effective computability,” “desired output,” or power and oppression, there is a mutual recognition that some kind of currency (linguistic, political, racial) is imbued within any algorithm. The notion of the non-cultural, apolitical algorithm, given the genealogy of the algorithm through mathesis universalis, the Turing machine, information theory, etc., is held under scrutiny to varying intensities. Among these different intensities, I found Bucher’s and Noble’s selections to be more compelling and, frankly, easier to read because of their immediacy and urgency toward the consequences of, and around, algorithms.

Both Bucher and Noble immediately foreground the urgency of their work, which guided me through their arguments and stakes. Referencing Bucher’s inclusion of the “if-then statement”: if there are future cases similar to the discriminatory incidents with Google’s photo app, its search engine, or Flickr’s racist labeling, then what does that say about the conditions/parameters/desired outputs that Google and Flickr implemented for those algorithms or models? Bucher and Noble hold someone (Google, Flickr, and Facebook as corporate giants comprised of design engineers) accountable instead of something (the algorithm’s ontology or epistemological claims), seeing past the algorithm’s computational “nature” or essence (Noble 22).

One distinction between Bucher’s and Noble’s selections is the scope/scale framing their arguments:

Bucher’s concerns lie within the broad and varied scope from which algorithmic power can emerge: “society and human interaction” and “states of domination,” generally referring to governments/governmentality (Bucher 33). However, algorithmic power varies according to the types of algorithms she lists: one is more deterministic because it produces the same outputs from an established set of rules/conditions, and the other is “predictive” or “machine-learning.” Finn similarly categorized these distinctions as the engineer’s algorithm, geared toward defining solutions for problems, and the computationist’s algorithm, which hinges on the algorithm’s “desire” to learn what is “effectively computable” (Finn 23).

Noble points to technological racialization within a clearly defined landscape of neoliberal America, one that has historically developed social and economic policies written with promises of individualism, “personal creativity,” participation, and bridging the (digital, racial, class) divide: promises that remain undelivered. I also want to share my awe at her critique of American neoliberalism/neoliberal capitalism through allegorizing the “democratic landscape” of the internet through Google and Alphabet (Noble 84); it was such an effective, compelling, and convincing move.

Thus, with respect to their scopes, I found Noble’s selection to be more generative and exemplary for future considerations of similar projects, insofar as the relations of power (racial, economic, etc.) were specific: the historical continuity of racial and gender formations, along with racial and gender normativity, around White-American-(male)ness, reconfigured through forms of algorithmic oppression in contemporary, neoliberal, capitalist America. Lastly, I admire Noble’s unrelenting and bold tone/rhetoric in her critique of engineering curricula across the biggest research universities, as well as her characterization of Google and Alphabet as agents of cultural imperialism colonizing the web.

On a lighter note, one way I might imagine myself exerting some kind of “power” over algorithms and their corporate overlords is by not listening to navigational apps or their suggested routes. Small victories. For example, Google Maps lists three metro-transportation routes I can take back home. Google Maps’s navigational algorithms are based on the logic of Dijkstra’s algorithm and A*, both of which attempt to “solve” the “problem” of computing the shortest possible route. For these algorithms, “shortest” means some combination of distance traveled and ETA, presuming that users desire this kind of efficiency. However, along the most efficient route home is an inescapable walkway filled with bird crap, because it runs underneath a bridge where a bunch of pigeons nest and roost along the ceiling construction. Google Maps insists that it’s the “fastest” or best route, but I would like to see any Google Maps software/app engineer walk through hills of bird poop, roadkill, or dead animals.
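To make the point concrete, here is a minimal sketch of Dijkstra’s algorithm on a toy map of my commute. The graph, node names, and minute-valued edge weights are all hypothetical (not Google’s actual data or code); they just illustrate how the algorithm, optimizing only travel time, happily routes me under the pigeon bridge:

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal.

    graph maps each node to a list of (neighbor, weight) pairs,
    where weight is the "cost" of the edge (here, walking minutes).
    """
    # Priority queue of (cost-so-far, node, path-so-far); the
    # cheapest frontier node is always popped first.
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical toy map: the bridge walkway is the fastest edge,
# even though it is the one covered in bird droppings.
city = {
    "metro_stop": [("bridge_walkway", 4), ("long_way_around", 9)],
    "bridge_walkway": [("home", 3)],
    "long_way_around": [("home", 2)],
}

cost, path = dijkstra(city, "metro_stop", "home")
print(cost, path)  # 7 ['metro_stop', 'bridge_walkway', 'home']
```

Nothing in the cost function knows about pigeons: whatever the weights fail to encode, the algorithm cannot avoid. (A* works the same way, just with an added heuristic estimate of the remaining distance to speed up the search.)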
