Tuesday, October 29, 2019

Core Post 1: Robots as Social Beings


I had similar concerns about the presentations as some other students raised in their comments. The superiority and ambition suggested by the title “Emotionally Intelligent Robots: More Human than Human” were not mirrored in the research presented. Emotional intelligence at that level seems to mean the capacity of robotic systems to assess, categorize, and quantify human emotion in service of a specific, mostly monetizable goal. If being human means being a sentient and social being, how can artificial intelligence strive to be more human than human?


The third presentation suggested that children growing up among little robots they relate to as human peers might develop a different relationship to them than earlier generations did. What was missing from the logic of the experiment is that children at that age (5–7) still live in a kind of magical thinking: they believe in the existence of magical creatures and tend to anthropomorphize not only robots but all sorts of objects. How is child development changed, or even endangered, by “emotionally intelligent” robotic playmates and surrogates that are under no ethical control and whose primary goal is commercial and profit-oriented? What does it mean to have these non-human agents become part of our social interactions?

Looking at the child version of Alexa raises ethical questions not only about introducing children to an early and totalitarian system of surveillance but also about the damage surrogate playmates may do to their socialization.

I had to think of Harry Harlow’s experiments with baby monkeys and wire-and-cloth surrogate mothers, experiments that are deemed unethical today.

I also thought of B. J. Fogg, a behavior scientist at Stanford, whose lab is now called the “Behavior Design Lab”, formerly the “Persuasive Tech Lab”. The name change already reveals a shift away from a more transparent description of what this lab is known for.

Fogg presented the results of a simple experiment he had run at Stanford, which showed that people spent longer on a task if they were working on a computer which they felt had previously been helpful to them. In other words, their interaction with the machine followed the same “rule of reciprocity” that psychologists had identified in social life. The experiment was significant, said Fogg, not so much for its specific finding as for what it implied: that computer applications could be methodically designed to exploit the rules of psychology in order to get people to do things they might not otherwise do. In the paper itself, he added a qualification: “Exactly when and where such persuasion is beneficial and ethical should be the topic of further research and debate.”
Fogg called for a new field, sitting at the intersection of computer science and psychology, and proposed a name for it: “captology” (Computers as Persuasive Technologies). Captology later became behavior design, which is now embedded into the invisible operating system of our everyday lives. The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalize on its instincts, quirks and flaws. The techniques they use are often crude and blatantly manipulative, but they are getting steadily more refined, and, as they do so, less noticeable.


