Block 4 - Unit 1: An evaluation framework Flashcards
Key points for evaluation. (4)
Evaluation is a key activity in the ID lifecycle.
An essential requirement for any interactive product is understanding users’ needs.
Needs of users can be usefully expressed as goals for the product - both usability and UX goals.
Purpose of evaluation - check users can use the product and like it. Assess how well goals have been satisfied in a design.
3 main approaches to evaluation?
Usability testing.
Field studies.
Analytical evaluation.
(Each differs according to its theories, philosophies (beliefs) and practices for evaluation.)
Methods (def and 5 examples)
Practical techniques used to answer questions set in relation to an evaluation goal. They include:
Observing users.
Asking users their opinions.
Asking experts their opinions.
Testing users’ performance.
Modelling users’ task performance.
Opportunistic evaluation.
Designers informally and quickly get feedback from users or consultants to confirm their ideas are in line with users’ needs and are liked.
Generally used early in design and requires only low resources.
‘Quick and dirty’.
Evaluation and accessibility. (Include example).
If a system should be usable by disabled users, you must evaluate both ‘technical accessibility’ (can the user physically use it?) and usability.
Eg. blind user - a screen reader might technically access the data in a table, but the user also needs to read the cells in a meaningful and useful way, eg. access contextual info about cells that relates them to their rows / columns (see the sketch below).
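As an illustration only (not from the unit material), here is a minimal Python sketch of checking one narrow aspect of a table’s technical accessibility: whether its header cells carry the scope attributes a screen reader uses to relate cells to rows and columns. The sample table and the check itself are hypothetical.

```python
# Minimal sketch (hypothetical example): check one narrow aspect of
# 'technical accessibility' - whether a data table's header cells
# carry the scope attributes a screen reader uses to relate each
# cell to its row or column.
from html.parser import HTMLParser

class TableHeaderCheck(HTMLParser):
    """Collects the scope attribute of every <th> cell encountered."""
    def __init__(self):
        super().__init__()
        self.scopes = []

    def handle_starttag(self, tag, attrs):
        if tag == "th":
            self.scopes.append(dict(attrs).get("scope"))

sample_table = """
<table>
  <tr><th scope="col">Month</th><th scope="col">Sales</th></tr>
  <tr><th scope="row">January</th><td>120</td></tr>
  <tr><th scope="row">February</th><td>95</td></tr>
</table>
"""

checker = TableHeaderCheck()
checker.feed(sample_table)

# Headers exist and each one says whether it labels a row or a column.
accessible = bool(checker.scopes) and all(
    s in ("row", "col") for s in checker.scopes
)
print("row/column context available to a screen reader:", accessible)
```

Even when such a check passes, only technical accessibility has been shown; whether a blind user can actually read the table meaningfully is a usability question that still needs evaluating with users.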
6 evaluation case studies (SB)
Early design ideas for a mobile device for rural Indian nurses.
Cell phones for different world markets.
Affective issues - collaborative immersive game.
Improving a design - HutchWorld patient support system.
Multiple methods help ensure good usability - Olympic Messaging System.
Evaluating a new kind of interaction - an ambient system.
DECIDE intro.
Well-planned evaluations are driven by ‘goals’ which aim to seek answers to clear ‘questions’, whether stated up front or emerging as the evaluation proceeds.
Questions help determine the kind of ‘evaluation approach’ and ‘methods’ used.
‘Practical issues’ also impact decisions.
‘Ethical issues’ must also be considered.
Evaluators must have enough time and expertise to evaluate, analyse, interpret and present the ‘data’ they collect.
DECIDE framework checklist.
Determine the ‘goals’.
Explore the ‘questions’.
Choose the ‘evaluation approach and methods’.
Identify the ‘practical issues’.
Decide how to deal with the ‘ethical issues’.
Evaluate, analyse, interpret and present the ‘data’.
(Common to think about and deal with items iteratively, moving backwards and forwards between them. Each is related to the others).
Determine the goals and Explore the questions. (3 points)
Determine ‘why’ you are evaluating - high-level goals.
If evaluating a prototype the focus should match the purpose of the prototype.
Goals identify the scope of the evaluation and need to be specific rather than general; identifying questions based on these goals clarifies the intention of the evaluation further.
Example of general -> specific goal.
‘Help clarify whether users’ needs have been met in an early design sketch.’
More specific goal statement:
‘Identify the best representation of the metaphor on which the design will be based.’
How to make goals operational (DECIDE)
We must clearly articulate questions to be answered.
Eg. what are customers’ attitudes to e-tickets (compared with paper tickets)?
Questions can be broken down into very specific sub-questions to make the evaluation more fine-grained.
Eg. ‘Is the interface poor?’ becomes ‘… difficult to navigate?’, ‘… terminology inconsistent?’, ‘… response slow?’, etc.
What will an evaluation be focused on?
Guided by key questions, and any other questions based on the usability criteria to see how well usability goals have been satisfied.
Usability criteria - specific quantified objectives to assess if goal is met.
Also, how well UX goals have been satisfied - how interaction / experience feels to the user (subjective).
UX is usually evaluated qualitatively, eg. ‘users shopping online should be able to order an item easily without assistance’.
Possible to use specific quantified objectives for UX goals, eg. ‘85%+ should be able to order without assistance’ (see the sketch below).
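As an illustration only (hypothetical data, not from the unit material), such a quantified objective reduces to simple arithmetic over observed task outcomes:

```python
# Minimal sketch (hypothetical data): checking the quantified objective
# '85%+ should be able to order an item without assistance'.
# One boolean per participant: did they complete the order unassisted?
completed_unassisted = [True, True, False, True, True,
                        True, True, True, False, True]

rate = sum(completed_unassisted) / len(completed_unassisted)
print(f"Unassisted completion rate: {rate:.0%}")   # 80%
print("Objective met:", rate >= 0.85)              # False
```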
What affects the choice of approaches / methods of evaluation?
Approach influences the kinds of methods used.
Eg. analytical evaluation - methods directly involving users won’t be used.
Choice of methods:
Where you are in the lifecycle.
Goals being assessed.
Practical issues - time, money, technology, appropriate participants.
What you are evaluating and the type of data being collected.
Eg. low-fi prototypes - can be used at any time in the lifecycle, but are predominantly useful for qualitative data, or for assessing certain UX goals or interface features (eg. the underlying metaphor).
Why use more than one evaluation approach / method?
Often choosing just one approach is too restrictive for evaluation.
Take a broader view - mix and match approaches / methods according to goals, questions and practical / ethical issues.
Eg. methods used in field studies tend to involve observation, interviews or informal discussions.
Combining methods in an evaluation study, especially if they are complementary, can give different perspectives on the evaluation and may help to find more usability problems than a single method would.
Usability defect (problem)
A difficulty in using an interactive product that affects the users’ satisfaction and the system’s effectiveness and efficiency.
Usability defects can lead to confusion, error, delay or outright failure to complete a task on the part of the user. They make the product less usable for the target users.