An appropriate set of readings to follow Professor Ruha Benjamin’s
inspiring talk on machine bias, systemic racism, and discriminatory design! All
of the authors this week invite us, in one way or another, to confront the
reality that we are living in the imaginations of others, and, as Professor
Benjamin also reminds us, that is a very dangerous place to be. Cheney-Lippold
elaborates in detail upon the “measurable type” (p. 47) constituted through data-mining
practices, drawing our attention to the problematic and dangerous erasure
of subjectivities that emerges from conflating algorithmically construed
identities with vastly more complex ‘real-life’ (for lack of a better phrase)
ones. Rather than reiterate what many of the authors have already eloquently shed
light on, I would like to draw our attention to something Cheney-Lippold leaves
largely implicit in his chapter and that Couldry and Mejias only briefly touch
upon: a collective awareness of, and consequent resistance to, the
measurable type’s allegiance to data can forge new relationships between the self
and ‘self’, relationships in which the aforementioned conflation becomes as ‘real’ as it is
problematic. I’m thinking here of data-driven self-curation practices in which the
awareness of a ‘quantified self’ regulates an individual’s behavior to self-present
through a strategic appropriation of datafication. This may not be very salient
when the imagined audience of the curating individual is an ambiguous corporate
“they”. But when platforms like Spotify, or the now-dated Last.fm, render visible patterns
of behavior like music consumption, patterns that are themselves knotted into numerous
interpretations of identity and cultural capital, to other members of an individual’s
specific social group, the measurable type becomes something absurdly ‘productive’ for the
quantified individual. Here, the very possibility
of the measurable type can urge the individual to regulate their
behavior, something like a Foucauldian technology of the self, to perform and strategically
produce a ‘desirable’ algorithmic self, as problematic and reductive as that
might be. The conflation thus becomes somewhat ‘real’: the present reality of hyper-mediated
online communication is that these reductive algorithmic identities have
already extended beyond their corporate and institutional uses
(categorization, targeted advertising, objectification, even killing) by powerful
players like Google and the US government, and have become self-adjacent objects
through which individuals themselves are forced to navigate and unpack their
self-concepts. In this sense, in addition to reproducing and congealing in
quantified form the oppressive and discriminatory systems through which a normative
vision of the human past is reinscribed and projected into the future (Crawford
& Joler), algorithmic processes are also equipped with a Foucauldian capacity
to police and newly regulate individuals’ future behaviors and self-concepts.