5: Evaluation Flashcards
Why should evaluation be conducted? When should evaluation be conducted?
Evaluation ensures that designs and systems actually behave as we expect and meet user requirements. Evaluation should be conducted throughout the design life cycle, with the results feeding back into modifications of the design.
What are the differences between empirical and predictive evaluation? What are their pros and cons?
Empirical evaluation is conducted with real users, using observation or experiments to get feedback.
Predictive evaluation is conducted with experts rather than users.
The advantage of empirical evaluation is that you learn what users are actually thinking, for example by observing them or using the think-aloud method. The disadvantage is that users may be unclear about what they are supposed to do if the evaluator's role is not well defined.
The advantage of predictive evaluation is that it saves time by letting experts do the work. The disadvantage is that experts can be expensive.
What is a “think aloud” usability study? How do you conduct one? Is it empirical or predictive? Do you conduct it in the lab or the field?
A type of empirical evaluation in which the user is given a task to complete and verbalizes the steps they are taking to complete it. They should also speak up if they run into issues or become confused. An evaluator is present to observe, take notes, and record data, and may prompt users if they become too quiet.
It is empirical, and typically conducted in the lab.
What is the DECIDE framework? How is it useful?
- Determine the aims and goals
- Explore the questions you will ask
- Choose the evaluation approach
- Identify practical issues
- Decide how to deal with ethical issues
- Evaluate, analyze, interpret, and present the data collected
It provides a checklist for planning and conducting evaluation studies, which is especially useful for studies involving many users.
What is a heuristic evaluation? How do you conduct one?
A type of predictive evaluation performed by a small set of expert usability evaluators. Each evaluator independently assesses the system against a set of simple, general heuristics, and their findings are then aggregated.
- Pre-evaluation training: give evaluators the needed domain knowledge and information on the scenario
- Evaluation: evaluators make at least 2 passes through the system and aggregate their results
- Severity rating: problems are recorded in terms of severity and priority (first individually, then as a group)
- Debriefing: review the problems with the design team
What is a cognitive walkthrough? How do you conduct one?
A type of predictive evaluation that assesses learnability and usability by simulating the way users explore and become familiar with the system.
- Construct carefully designed tasks for the system
- Walk through the activities required to go from one screen to another
- Review the actions needed for each task, attempting to predict how users would behave and what problems they would encounter