Thursday, October 3, 2019

Core post 3: Algorithms as self-affirming fortune cookies

When thinking about power, one cannot escape Paulo Freire and Pedagogy of the Oppressed (1968). The power structures he emphasizes are strikingly applicable to digital technologies, mechanization, and algorithms. As sources of revolutionary –and sometimes political– struggle, we find ourselves yet again before the power structures of the oppressed and the oppressor. Man-made algorithms have turned mechanical, but they cannot escape their sociotechnical character, shaped both by those responsible for creating them and by those who feed them their knowledge, each reinforcing the power embedded in the algorithm. Even when we set aside Freire’s dialectical materialism, these machine-learning processes reinforce stereotypes. I understand that algorithms are created by humans who hold particular understandings of the world, but I cannot help but confront the sociological –and public– impact of their search results. “We have to ask what is lost, who is harmed, and what should be forgotten with the embrace of artificial intelligence in decision making,” reads Safiya Noble’s Algorithms of Oppression. It is a social and collective process, of both thought and organization, that reifies the marginalized: one that returns pornography as the first results for a search on Black girls.

We take pleasure in looking; the gaze is inescapable. Where diasporas are inflected and intersectionality is overthrown by vulgarity and the carnivalesque (Bakhtin, 1968), why do machines, when queried about Black Popular Culture, keep repeating the very stereotypes that scholars and activists have fought against for years? Who mediates this search engine we all rely on? Should it be mediated at all, since that would mean mediating ourselves? We try to understand algorithms’ complex nature, but, embedded with meanings and ways of seeing the world, they are part of the social fabric (Bucher, 2012), which makes disentangling them nearly impossible.

I recently considered the power of the gaze in relation to algorithms and to the conception of publics –or the production of calculated publics, in Gillespie’s terms– and to how something is perceived, the point at which it becomes public, particularly when talking about places and Google Street View. Nevertheless, there is also the possibility of a lack of gaze, when an algorithm decides that something will no longer appear as a result and so never becomes part of the public. Balka’s (2011) shadow bodies also bear directly on the gaze, which can be harmful even when it comes from below in a capitalist society. The possibility of shadowing bodies makes objectivity a complicated concept to grasp, for it remains tied to how people use data, what they decide to search for, and how they see themselves as a source of publics (Gillespie, 2014).

We find patterns of inclusion and we exclude, becoming part of the algorithm in the hope of being included, of being seen and feeling relevant. Like the machine-learning systems themselves, we become chameleonic and volatile, because we know there is no impartiality in algorithms, even as we acknowledge algorithmic logic as self-affirming. We seek affirmation, a connection to the trust system we are embedded in, to make sense of it all. Even so, we also rely on magazine horoscopes and fortune cookies for affirmation. That is, perhaps, what sets us apart from the machine.
