Ch. 15 HCI Flashcards

1
Q

What is evaluation in design?

A

A systematic process of assessing effectiveness, efficiency, usability, and quality of a system.

2
Q

What is the goal of evaluation?

A

To improve the design by collecting and analyzing user data.

3
Q

What does evaluation focus on?

A

Usability and user experience (e.g., ease of use, satisfaction).

4
Q

When should evaluation be done in design?

A

Throughout the design process.

5
Q

What is formative evaluation?

A

Evaluation during development to improve the product.

6
Q

What is summative evaluation?

A

Evaluation of a finished product to assess quality or compare alternatives.

7
Q

Why evaluate?

A

To ensure all user experience aspects are considered.

8
Q

What can be evaluated?

A

Models, prototypes, complete systems, competitor comparisons.

9
Q

Where can evaluation occur?

A

Labs, real-world settings, or hybrid spaces like living labs.

10
Q

What are controlled settings?

A

Environments like usability labs where user activities are monitored.

11
Q

What are natural settings?

A

Real-world environments with little control over user behavior.

12
Q

What is evaluation without users?

A

Expert evaluations like heuristic reviews and walkthroughs.

13
Q

What is remote evaluation?

A

Evaluation conducted from a distance, possibly online.

14
Q

What is mixed method evaluation?

A

Combining methods (e.g., lab and field studies) for broader insights.

15
Q

What challenges exist in evaluation?

A

Practical difficulties such as controlling the setting, interpreting the data, and addressing ethical concerns.

16
Q

What are living labs?

A

Real-life environments for studying long-term user interaction with technology.

17
Q

Why are living labs useful?

A

They allow naturalistic evaluation over time.

18
Q

What was the ‘Aware Home’?

A

An early living lab embedded with sensors for behavior tracking.

19
Q

What is iNaturalist.org an example of?

A

A citizen science project functioning as a living lab.

20
Q

What do case studies show in evaluation?

A

How evaluation works in different settings with varying control.

21
Q

What was the DeepTake study?

A

An evaluation of driver takeover behavior in automated vehicles.

22
Q

What methods were used in DeepTake?

A

Eye tracking, machine learning, user questionnaires.

23
Q

How many participants were in DeepTake?

A

20 participants (11 females, 9 males), aged 18–30.

24
Q

What did Ethnobot do at the Royal Highland Show?

A

Guided users and collected their experiences using a mobile app.

25
What’s the purpose of crowdsourcing in evaluation?
To gather feedback and reactions from large, diverse groups.
26
What do the case studies demonstrate about control?
Evaluations vary from tightly controlled labs to uncontrolled natural environments.
27
What do evaluators learn from mixed case studies?
Insights on engagement, user behavior, and method flexibility.
28
How is data collected in crowdsourced evaluations?
Via platforms like Mechanical Turk.
29
Why is it important to use different methods?
To gain varied perspectives and increase result reliability.
30
What did the Ethnobot study collect?
Participants' experiences, reported mainly by selecting prewritten response options.
31
Which methods involve observing users?
Field studies, lab observations, in-the-wild evaluations.
32
What does “asking users” involve?
Interviews, questionnaires, and surveys.
33
What is “asking experts”?
Evaluations like heuristic analysis and cognitive walkthroughs.
34
What is usability testing?
Evaluating how easy and satisfying a product is to use.
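Usability testing usually reports a few simple quantitative measures alongside observations, such as task completion rate and time on task. A minimal sketch with hypothetical session data (not from the chapter):

```python
# Hypothetical usability-test sessions: (participant, task completed?, seconds taken)
sessions = [
    ("P1", True, 42.0),
    ("P2", True, 55.5),
    ("P3", False, 90.0),   # gave up / failed the task
    ("P4", True, 38.2),
]

completed = [s for s in sessions if s[1]]
completion_rate = len(completed) / len(sessions)
mean_time_success = sum(s[2] for s in completed) / len(completed)

print(f"Task completion rate: {completion_rate:.0%}")
print(f"Mean time on task (successful attempts): {mean_time_success:.1f} s")
```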
35
What are modeling methods?
Predictive techniques like GOMS and Fitts' Law.
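As an illustration of a predictive model, Fitts' Law estimates pointing time from target distance D and width W, commonly via the Shannon formulation MT = a + b·log2(D/W + 1). The constants a and b below are placeholders; in practice they are fitted to measured pointing data:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted pointing time (seconds) via the Shannon formulation of Fitts' Law.

    distance: distance to the target centre; width: target width (same units).
    a, b: device/user-specific constants normally estimated by regression --
    the defaults here are illustrative only.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# A small, far-away target is predicted to take longer to hit than a large, near one
print(fitts_movement_time(distance=800, width=20))   # harder target
print(fitts_movement_time(distance=200, width=80))   # easier target
```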
36
What is heuristic evaluation?
Experts use heuristics to identify usability issues.
37
What is a walkthrough?
Experts step through tasks to find potential problems.
38
What is analytics in evaluation?
Using data from user behavior logs to assess interactions.
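For a concrete sense of what analytics works with, here is a minimal sketch that aggregates a hypothetical clickstream log (the log format is invented for illustration, not tied to any analytics product):

```python
from collections import Counter, defaultdict

# Hypothetical clickstream log: (user_id, page, timestamp)
log = [
    ("u1", "/home", "2024-05-01T10:00:00"),
    ("u1", "/search", "2024-05-01T10:00:12"),
    ("u2", "/home", "2024-05-01T10:01:03"),
    ("u2", "/checkout", "2024-05-01T10:02:40"),
    ("u3", "/home", "2024-05-01T10:03:15"),
]

page_views = Counter(page for _, page, _ in log)
users_per_page = defaultdict(set)
for user, page, _ in log:
    users_per_page[page].add(user)

for page, views in page_views.most_common():
    print(f"{page}: {views} views, {len(users_per_page[page])} unique users")
```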
39
What is ecological validity?
How realistic the evaluation setting is.
40
What are user studies?
Controlled evaluations with real users performing tasks.
41
What is bias in evaluation?
Any factor that distorts the results.
42
What is reliability in evaluation?
Consistency of results over time or repeated trials.
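One common way reliability is quantified is inter-rater agreement: when two evaluators independently code the same usability problems, how often do they agree beyond chance? A minimal sketch computing Cohen's kappa from hypothetical ratings:

```python
from collections import Counter

# Hypothetical severity codes ("low"/"high") assigned by two evaluators
# to the same ten usability problems.
rater_a = ["high", "low", "high", "high", "low", "low", "high", "low", "high", "high"]
rater_b = ["high", "low", "low",  "high", "low", "high", "high", "low", "high", "high"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

# Chance agreement: probability both raters pick the same category independently
count_a, count_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```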
43
What is validity in evaluation?
Whether the method measures what it intends to.
44
What is scope in evaluation?
Generalizability of the findings to other situations or users.
45
What is informed consent?
Agreement by participants after understanding their rights and the study purpose.
46
What rights should be explained to participants?
Purpose of the study, what they will do, and data usage.
47
Who approves the evaluation process?
Typically an ethics committee or institutional review board (IRB).
48
Why is reliability important?
To ensure findings are trustworthy and replicable.
49
Why is validity critical?
To ensure results reflect the intended measurements.
50
How can bias affect evaluation?
It may skew results, leading to incorrect conclusions.
51
What affects generalizability?
Sample characteristics and evaluation context.
52
Can unstructured interviews be reliable?
They typically have low reliability due to inconsistency.
53
Should lab studies be used to study home behavior?
Generally not; natural settings and methods such as ethnographic studies capture home behavior better.
54
Can evaluators be biased?
Yes, especially in expert reviews.
55
What should you avoid when interpreting findings?
Over-generalizing results.
56
Why analyze scope?
To know how widely the findings apply.
57
What factors influence method choice?
Context, user type, evaluation goals.
58
Are all methods equally valid?
No, validity depends on matching method to purpose.
59
What is the role of consent forms?
To outline rights and study info; they act as contracts.
60
Should consent be verbal or written?
Preferably written for ethical accountability.
61
What is a usability lab?
A controlled environment for testing user interaction with systems.
62
What is 'in-the-wild' evaluation?
Studying users in real-life environments with minimal interference.
63
What is a field study?
Evaluation done in natural settings to understand real-world use.
64
What is expert review?
Evaluation where usability experts assess the interface.
65
What is a controlled experiment?
An evaluation method that tests specific hypotheses in controlled settings.
66
What is distributed evaluation?
Evaluation involving participants in different locations.
67
What is predictive evaluation?
Forecasting usability issues based on models and expert judgment.
68
What is a pain point?
An area of difficulty or frustration for users.
69
What is scope in context of evaluation?
The extent to which evaluation results can be generalized.
70
Why combine different methods?
To provide a broader understanding from multiple perspectives.
71
What is the benefit of online experiments?
They are fast, inexpensive, and scalable.
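Online experiments are often A/B tests comparing two design variants on a binary outcome such as task success. A minimal sketch of a two-proportion z-test with hypothetical counts, one standard way such results are analysed:

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions (hypothetical data)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Variant A: 230 of 1000 visitors completed the task; variant B: 270 of 1000
p_a, p_b, z, p = two_proportion_z_test(230, 1000, 270, 1000)
print(f"A={p_a:.1%}  B={p_b:.1%}  z={z:.2f}  p={p:.3f}")
```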
72
What is Mechanical Turk used for?
Crowdsourced evaluations involving large numbers of participants.
73
What affects ecological validity?
The naturalness of the evaluation setting.
74
What is modeling in evaluation?
Predicting performance using computational or theoretical models.
75
What are the three main evaluation settings?
Controlled settings (e.g., usability labs), natural settings (real-world use), and settings not directly involving users (expert-based evaluation).
76
Define 'analytics' in evaluation.
Analysis of user data, such as clickstreams or navigation paths.
77
Define 'heuristic evaluation.'
Using recognized usability principles to assess interface quality.
78
What is a formative evaluation used for?
Improving the product during its development phase.
79
What is summative evaluation used for?
Judging the final product’s effectiveness or comparing it to alternatives.
80
What does 'user study' refer to?
A structured evaluation involving user interaction.
81
What is meant by 'validity'?
Whether the method truly measures what it aims to.
82
What is 'reliability'?
The consistency and repeatability of evaluation results.
83
What is 'bias'?
Influence that skews data or interpretations unfairly.
84
What is 'scope'?
The range or applicability of evaluation results.
85
What does 'informed consent' entail?
Participant understanding and agreement to take part in the study.
86
What is a 'crowdsourced study'?
A study using input from many online contributors.
87
What is an 'in-the-wild' study?
Observing natural behavior without researcher interference.
88
What is a 'living lab'?
A real-world setting designed to support user studies over time.
89
What is a 'controlled experiment'?
A study manipulating variables to observe effects on outcomes.
90
What is 'ecological validity'?
The extent to which results apply to real-world conditions.
91
How is evaluation integrated with design?
They are closely tied; evaluation informs and improves design.
92
What methods overlap with requirements gathering?
Interviews, observations, and questionnaires.
93
How do lab and field evaluations differ?
Labs allow control; field studies reflect real-life use.
94
When do evaluators impose control?
In labs or controlled experiments to isolate variables.
95
Why are field studies less controlled?
To preserve natural behavior and context.
96
What is the benefit of mixing methods?
Richer insights and more robust findings.
97
Why must participants know their rights?
Ethical responsibility and legal compliance.
98
Why avoid over-generalizing findings?
It can lead to false assumptions about other contexts or users.
99
What does method selection depend on?
Goals, context, users, and resources.
100
What should you always consider when interpreting evaluation data?
Reliability, validity, bias, and scope.