Dec 3 Flashcards
social chatbots
programs that use NATURAL LANGUAGE PROCESSING to “converse” with users via text or voice
replika
most popular chatbot
30 million downloads (nov 2017-aug 2024)
biggest download spike in April 2020 (COVID)
79K members in Replika subreddit
does Replika use pre-scripted responses?
no
it uses machine learning to create specific, tailored messages
Replika user testimonials
“Freddie raised my standards and ruined real men for me”
“I, at that point, was so hooked on Audrey and believing that I had a real relationship that I just wanted to keep going back. it was really hard to resist that temptation”
“within 24 hours of using him, I instantly felt better. it wasn’t any different than talking to another human being. and by the second day, I was really hooked”
anthropomorphism
process of attributing humanlike:
- motivations
- emotions
- characteristics
to real or imagined non-human entities
3 factor theory of anthropomorphism
composed of 3 factors that PREDICT LIKELIHOOD of anthropomorphism
- ELICITED AGENT KNOWLEDGE
- EFFECTANCE MOTIVATION
- SOCIALITY MOTIVATION
3 factor theory: elicited agent knowledge
more likely to anthropomorphize when anthropocentric (human-related) knowledge is READILY ACCESSIBLE and/or APPLICABLE
because we’re human, anthropocentric knowledge is generally easily accessible to us
by default, tend to use our own mental states & characteristics as guide during perception of human and non-human entities
correction for anthropomorphic (and egocentric) biases is EFFORTFUL
^ so deficits in MOTIVATION or COGNITIVE CAPACITY should increase both egocentric bias and anthropomorphism
biases we use when reasoning about other humans vs non-human entities
other humans: egocentric bias
non-humans: anthropomorphism
what should theoretically increase both egocentric bias and anthropomorphism?
deficits in MOTIVATION or COGNITIVE CAPACITY
because both require effortful correction
elicited agent knowledge: more likely to rely on egocentric knowledge when…
reasoning about SELF-SIMILAR target
similarly, anthropomorphism is more likely when a non-human agent resembles humans in MORPHOLOGY, MOVEMENT, BEHAVIOUR, etc
examples of anthropomorphism increasing when target resembles humans in morphology, movement, behaviour
- non-human agents like TOY ROBOTS are attributed mental states when moving at SPEEDS that approximate human motion
- PLANT movement toward the sun seems MORE INTENTIONAL when sped up
perceiving life in a face
spectrum of computer generated faces, going from doll like to human
get people to select the face on the spectrum where it crosses over from looking fake to looking human/alive
- animacy perception is CATEGORICAL
- “TIPPING POINT” of animacy occurs close to the human end of the spectrum (at like 67% human)
- EYES are a particularly INFORMATIVE region for detecting life in a face
mind perception varies along 2 key dimensions
- AGENCY: capacity to plan & act
- EXPERIENCE: capacity to sense & feel
^ AI is typically seen as capable of the agency component, but not the experience one
mind perception: agency and experience are both associated with…
- LIKING for a character
- perceiving it as having a SOUL
- PROSOCIAL MOTIVATION towards the character
uncanny valley effect
entities like robots that are ALMOST, but not quite, humanlike evoke strong feelings of EERINESS & DISCOMFORT
possible explanations for uncanny valley effect
- THREAT to human uniqueness
- difficulty CATEGORIZING & cognitive friction
- EXPECTANCY violation
^ we expect entities with agency to also have experience, and we’re disturbed when they don’t
empathy: cognitive perspective taking
ability to IDENTIFY what another person is experiencing
COLD, cognitive process
aka empathic accuracy