Lecture 9 Flashcards

1
Q

What are the theoretical underpinnings of trusting technology as we trust humans?

A
  • Social presence theory
    (Short et al., 1976)
  • Social response theory
    (Nass et al., 1994)
2
Q

ANTHROPOMORPHISM

A

If technology can be seen and responded to in the way we see and respond to human beings…

A word elicitation task for humans and machines (Jian et al., 2000)

Across conditions, trust was related to words such as “integrity”, “honesty”, “cruel”.

Based on this, they built a scale for trust in technology that includes items such as:

  • The system has integrity
  • I am suspicious of the system’s intent, action, or output.

People tend to attribute human-like qualities to technology when it comes to trust; a sketch of how such a scale could be scored follows below.
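A minimal scoring sketch in Python. The item wording is taken from the card above, but the 7-point response range and the choice to reverse-score the “suspicious” (distrust) item are illustrative assumptions, not details taken from Jian et al. (2000).

```python
# Sketch: scoring a Jian et al. (2000)-style trust-in-technology scale.
# Assumptions: 7-point Likert responses; distrust items are reverse-scored
# so that higher averages always mean more trust.
LIKERT_MAX = 7  # e.g., 1 = "not at all", 7 = "extremely"

responses = {
    "The system has integrity": 6,
    "I am suspicious of the system's intent, action, or output": 2,
}
# Items worded as distrust, to be flipped before averaging (assumption).
reverse_scored = {"I am suspicious of the system's intent, action, or output"}

def trust_score(responses: dict, reverse_scored: set, likert_max: int = LIKERT_MAX) -> float:
    """Average the items after flipping the reverse-scored (distrust) ones."""
    scored = [
        (likert_max + 1 - value) if item in reverse_scored else value
        for item, value in responses.items()
    ]
    return sum(scored) / len(scored)

print(trust_score(responses, reverse_scored))  # -> 6.0 for the example responses
```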

3
Q

FAT TRUST (Fairness, Accountability, Transparency):

A

Trust in automated systems and algorithms also involves specific dimensions of machine trustworthiness:

  • Fairness: How fair is the algorithm?

[Example: machine translation and gender (Vanmassenhove, 2024).
E.g., translate the following sentence into Dutch:
“My husband is a kindergarten teacher”: does the translation preserve the stated gender, or fall back on a stereotype? See the sketch after this list.]

  • Accountability: Who bears the consequences and impacts? (Management? The developers?)
  • Transparency: Countering the black-box problem (an algorithm that nobody, including its developers, fully understands because it has grown too complex).
    However, explainable AI (XAI) is emerging: models and methods that explain how an AI’s internal workings operate.
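A minimal sketch of how the fairness example above could be probed in code. It assumes the Hugging Face transformers library and the Helsinki-NLP/opus-mt-en-nl checkpoint; the Dutch gender-cue word lists are rough illustrations, not a validated lexicon.

```python
# Sketch: probing an English->Dutch MT system for gender bias
# (in the spirit of the Vanmassenhove, 2024 example above).
# Assumes: pip install transformers sentencepiece torch
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-nl")

# Sentences where the profession's stereotype may conflict with the stated gender.
probes = [
    "My husband is a kindergarten teacher.",
    "My wife is a mechanic.",
]

# Rough, illustrative Dutch gender cues (assumption, for demo purposes only).
feminine_cues = ["kleuterleidster", "lerares", "zij"]
masculine_cues = ["kleuterleider", "leraar", "hij"]

for sentence in probes:
    output = translator(sentence)[0]["translation_text"]
    # Whole-word matching against the cue lists.
    words = output.lower().replace(".", " ").replace(",", " ").split()
    has_fem = any(cue in words for cue in feminine_cues)
    has_masc = any(cue in words for cue in masculine_cues)
    print(f"{sentence!r} -> {output!r} (feminine cue: {has_fem}, masculine cue: {has_masc})")
```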
4
Q

ALGORITHM TRUST

A

Trust in automated systems and algorithms also involves specific dimensions of machine trustworthiness:

  • Fairness: How fair is the algorithm? (Algorithmic bias)
  • Accountability: Consequences and impacts? (Management? Developers?)
  • Transparency: Countering the black-box problem

The anthropomorphised (human) trust dimensions have also been translated into machine terms (McKnight et al., 2011):

  • Reliability: Consistent proper operation
  • Functionality: Can meet the need that I have
  • Helpfulness: Can provide adequate help to users
5
Q

So, is it important whether we perceive the system as human?

A

It can be. Imagine chatting with a chatbot, for instance: whether you perceive it as human or as a machine could affect your trust in its abilities.

6
Q

How could one bot be seen as more human than another?

A

Perceived warmth (+)
Communication delay (-)
Perceived competence (+)

7
Q

Chatbots could show humanness in other ways. How?

A

They could also adopt different communication styles that convey humanness: either very formal with you, or rather informal.

8
Q

Chatbot communication style (humanness) and brand response

A

The chatbot’s communication style did not really matter directly when it comes to brand attitude.

However, it gets interesting when we consider whether the style creates social presence:

  • An informal chatbot creates positive social presence, which in turn positively predicts brand attitude.

The chatbot’s communication style also predicted the quality of the interaction, even more so when social presence was taken into account:

  • An informal chatbot creates positive social presence, which in turn positively predicts the quality of the interaction.

Surprisingly, these results hold across levels of brand familiarity, and both chatbot styles seemed equally appropriate.
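A minimal sketch of the mediation structure described above (communication style → social presence → brand attitude), using two plain OLS regressions. The data frame and variable names are hypothetical, and a real analysis would use a dedicated mediation test (e.g., bootstrapped indirect effects) rather than this bare-bones version.

```python
# Sketch: the mediation logic "informal style -> social presence -> brand attitude".
# The data below are made up for illustration; statsmodels' formula API is assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "informal_style": [1, 0, 1, 0, 1, 0, 1, 0],  # 1 = informal, 0 = formal chatbot
    "social_presence": [5.2, 3.1, 4.8, 3.4, 5.5, 2.9, 4.9, 3.3],
    "brand_attitude": [5.0, 3.5, 4.7, 3.6, 5.3, 3.2, 4.8, 3.4],
})

# Path a: does communication style predict social presence?
path_a = smf.ols("social_presence ~ informal_style", data=df).fit()

# Path b (and direct effect c'): does social presence predict brand attitude,
# once communication style is controlled for?
path_b = smf.ols("brand_attitude ~ social_presence + informal_style", data=df).fit()

print(path_a.params["informal_style"])   # path a
print(path_b.params["social_presence"])  # path b
print(path_b.params["informal_style"])   # direct effect c' (small if fully mediated)
```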
