Deception in Social Robotics Flashcards
Paper’s Main Questions
What constitutes deception, when is it wrong, who should be held responsible, and can it be prevented or avoided?
Social robot
physically embodied robot able to socially interact with people
Perspectives on Deception in Social Robotics
P1: Techniques enabling robots to detect human social gestures and respond with human-like social cues constitute deception.
- the robot pretends to have mental or emotional capabilities it does not actually have
P2: Deception occurs only when people are misled into thinking the robot is something it is not (e.g. a human or an animal).
Deception without intention
intention is not necessary for deception
When is deception wrong? Perspectives:
P1: Deception involved in a relationship with a robot is inherently wrong and violates a duty to see the world as it is
P2: Deception in robotics is wrong only when the deceiver deceives for their own interests.
P3 (author’s opinion): whether deception is wrong depends on its actual impact on the deceived
Risks emerging from the development and presentation of social robots:
Those stemming from the deception involved in robots appearing to have emotions and to care for us (especially significant for babies, children, and elderly people)
Those that originate in over-estimation of robots’ ability to understand human behaviour and social situations
Can we prevent deception in social robotics?
Preventing all deception in robotics is impossible, and arguably even undesirable
Who is responsible for deception in social robotics?
The robot itself cannot be held responsible, as it only does what it is programmed to do
Users bear partial but not full responsibility, since they did not create the robot and may also be vulnerable to different degrees
Robot manufacturers and marketers bear most of the responsibility, since building and marketing the product are the two most important factors in whether deception takes place
Suggestions to minimise the negative effects of deception
Legally requiring a robot to continuously remind the user that it is only a machine and has no emotions
Equipping the robot with an emotional system or developing robots with a sense of morality
Requiring manufacturers and sellers to provide evidence, before release, that a given robot application would not cause psychological harm or derogate from any human rights
Preventing robots that masquerade as friends or companions from using their users’ data to manipulate them
Requiring any sharing of information obtained by the robot to be made explicit and transparent
Assessing and limiting promotional descriptions of robots that exaggerate their functionality and benefits
Pilot Assessment Framework for a Quality Mark for AI-based robotics products:
8 principles:
Security
Safety
Privacy
Fairness
Sustainability
Accountability
Transparency
Well-being