Lecture 13 - Humans and Robots Understanding Each Other Flashcards
A single agent has ….. (4):
Goals
Intentions
Actions
Attention
Social interactions need …. (4):
shared goals
shared intentions
joint actions
joint attention
Name at least 5 necessary abilities to perceive intentions:
- reading body language
- reading faces
- detecting eye gaze
- recognizing emotional expressions
- perceiving biological motion
- paying joint attention
- detecting goal-directed actions
- discerning agency, imitation, deception, and empathy
A social agent needs to be aware of ……
the cognitive state of the other agent
With what two theories can we understand other people's actions?
With theory of mind
Or with simulation theory
Explain Theory of Mind
We can automatically attribute beliefs, goals, attitudes, and mental states to ourselves and human co-actors.
People interpret the observed behaviour of other people according to a generic model of human behaviour, and understand that other people can have different mental states.
Explain Simulation Theory
The observer uses his or her own action system to calculate and predict the mental processes and actions of others
Includes intentional behaviour and expressions of emotions
Supported by more experimental evidence than Theory of Mind
Mirror Neurons!
Why is it important that humans and robots understand each other?
Give three reasons
Robots interact more closely with people
People always try to understand the robot’s intentions
Robots will need to deal with (unexpected) human behavior
In order to construct a mental model, the robot should know …. (3):
How to recognize social cues and actions
Rules for social and behavioral interactions
Personal space models
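The last point (a personal space model) can be made concrete with a tiny proxemics sketch. This is only an illustration, not from the lecture: it assumes Hall's commonly cited proxemic zone distances, and the function names are invented for the example.

```python
# Minimal personal-space (proxemics) sketch; zone boundaries follow Hall's
# commonly cited distances in metres. A real robot would adapt these to the
# person, culture, and context.

def proxemic_zone(distance_m: float) -> str:
    """Classify the robot-human distance into a proxemic zone."""
    if distance_m < 0.45:
        return "intimate"
    elif distance_m < 1.2:
        return "personal"
    elif distance_m < 3.6:
        return "social"
    else:
        return "public"

def approach_is_comfortable(distance_m: float) -> bool:
    """Simple rule: stay outside the human's personal zone."""
    return proxemic_zone(distance_m) in ("social", "public")

if __name__ == "__main__":
    for d in (0.3, 0.8, 2.0, 5.0):
        print(d, proxemic_zone(d), approach_is_comfortable(d))
```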
What are the advantages of a robot with human-like behaviour?
- Is perceived as more human-like, animate, and emotional
- Can enhance the perception of social intelligence
- Enables natural interactions
What kind of directional cues can robots give? (3)
Visual cues
Auditory cues
Behavioral cues
What was also found about directional cues?
- LEDs often unnoticed
- LEDs hard to decode
- Speech cues took longer because of orientation
- Results will depend on the type of robot and context
- Movement cue seems effective -> robot needs a clear front
What is the most comfortable and most predictable cue for HRI?
gaze is an effective cue for indicating directional intentions
LED cues also ‘work best’