Anthropomorphism, Uncanny Valley Flashcards
What is the definition of Anthropomorphism?
Human tendency to attribute human characteristics,
motivations, intentions, or emotions to non-human
entities (Epley, Waytz, & Cacioppo, 2007)
What human characteristics are given to non-human entities?
motivations, intentions, emotions
What types of non-human entities?
objects, animals, plants, abstract concepts (e.g. God)
Examples of anthropomorphism
pets, cars, diaries
zoomorphism: the reverse tendency, attributing animal characteristics to humans or objects (e.g. teddy bears)
What factors could play a role?
- agent-specific factors: design characteristics (e.g. of the robot)
- user-specific factors: personality (e.g. of the user)
- contextual factors (situation, use cases)
Examples of factors, situations
strong emotions, being in need of support/help (emotional vulnerability), loneliness, knowledge about AI systems, AI as life advisor, AI companions (girlfriend bots, death bots, … AI takes the role of a human), emotions shown by the agent, type of speech (natural language)
What are consequences? (positive examples)
higher trust, more engagement, greater comfort, better usability …
What are consequences? (negative examples)
overtrust, social isolation …
3 key triggers for anthropomorphism
- Elicited agent knowledge
- Effectance motivation
- Sociality motivation
Epley, Waytz, & Cacioppo (2007)
explain Elicited Agent Knowledge
People anthropomorphize if they are reminded of what they know about fellow humans,
(e.g. triggered by humanlike design features of the agent)
example: “ChatGPT talks like me, therefore it must also have other human characteristics”
explain Effectance Motivation
People anthropomorphize if they are motivated to understand and predict the agent’s behavior, e.g. in order to interact with it effectively, especially if no more technical explanation is at hand
explain Sociality Motivation
People anthropomorphize if they have a need for social connection and belonging,
e.g. when they feel lonely (robot hand case study: people who felt very lonely touched the glass 7.2× as often as people who did not feel lonely at all; “he wants contact with me”)
3 major consequences of anthropomorphizing non-human entities
- Moral care and concern
- Responsibility
- Social influence
explain the consequence: Moral care and concern
perceiving an agent to have a mind means that the agent is capable of conscious experience
and should therefore be treated as a moral agent worthy of care and concern
(cf. Gray et al., 2007)
explain the consequence: Responsibility
perceiving an agent to have a mind means that the agent is capable of intentional action
and can therefore be held responsible for its actions (rather than its developers, training data, …)
explain the consequence: Social influence
perceiving an agent to have a mind means that the agent is capable of observing, evaluating,
and judging a perceiver, thereby serving as a source of normative social influence on the perceiver
example: AI influencers; hints & recommendations from AI systems become more important; a “friend’s” opinions carry more weight (voting, buying, …)
describe online study: LaMDA AI
People read a chat between engineers and the LaMDA LLM and then filled out a standardised questionnaire. The less prior knowledge people had, the more human-like LaMDA was perceived, the more they agreed that it is not OK to turn the AI off if it states that it does not want that, and the more they agreed that LaMDA deserves rights that protect it.
What is a “social robot”?
A social robot is an autonomous robot that interacts and communicates with humans by following social behaviors and rules attached to its role
Cuteness in design
- baby scheme: large head, big eyes, round shapes, small body size, clumsy movement, high-pitched voice
- non-verbal cues: head tilts (Mara & Appel, 2015)
Examples for manipulation by “cute robots”
Field study in Belgium (2020):
40 % of employees let a “cute” robot pass the secure entrance of a Belgian office building (even higher numbers when the robot brought pizza) and revealed a lot of personal information in conversation
Use cases where it is ethically acceptable to increase anthropomorphism through specific design features
… examples …
What is “The Uncanny Valley”?
hypothesis that as robots become more humanlike, they appear more familiar until a point is reached at which subtle imperfections of appearance make them look eerie
Mori, 1970
Anthropomorphism describes our tendency to attribute human characteristics to …
… non-human entities
Who was the first to use the term “anthropomorphism”?
Xenophanes
According to a recent study, being polite to ChatGPT results in higher-quality responses.
True
Psychologist Sherry Turkle argues that politeness toward AI is “a sign of respect (…) to oneself”
True
The Uncanny Valley describes a linear relationship in which more human-likeness comes with less affinity.
False
It describes a non-linear relationship: affinity rises with human-likeness, drops sharply into a valley near full human-likeness, and rises again
Masahiro Mori recommended …
… aiming for a moderate degree of human-likeness in robots (should aim for the peak just left of the uncanny valley)
What is NOT assumed to be a consequence of anthropomorphism according to scientific theory?
Dehumanization of humans
In an online study by Mara & Appel, humanoid robots were perceived as particularly human-like when they …
… tilted their heads.
Why does politeness improve AI performance?
Polite prompts direct AI to use language and sources that align with credibility and respect
How can politeness toward AI benefit humans?
It preserves civility in human interactions and prevents rudeness from becoming habitual
What is the “Pretty Please” feature introduced by Google Assistant?
A feature encouraging polite language in requests, particularly for children
How does AI influence human social norms?
By integrating into daily life, AI interactions may reshape communication expectations and habits
What is anthropomorphism in the context of AI?
Attributing human-like qualities, such as empathy or moral reasoning, to AI
Why is anthropomorphism risky?
It can lead to false expectations and emotional dependency on AI systems
What is the “A-Frame” approach to interacting with AI?
Awareness of AI limitations, appreciation of human connections, acceptance of its constraints, and accountability for outcomes
How can language reinforce anthropomorphism in AI?
Using terms like “thinking” or “understanding” fosters a misleading perception of AI as human-like
How does evolutionary psychology explain the uncanny valley?
Through pathogen avoidance and mortality salience, associating near-human traits with disease or death
What is the violation of expectation hypothesis?
The uncanny feeling arises when an object’s appearance and behavior do not align with human expectations