User Studies and UX Metrics Flashcards
1
Q
Evaluation
A
- Evaluation is a process of systematically assessing a system, guided by questions about the system and evaluation criteria
- Does the system work as it is supposed to work?
- Does it comply with relevant standards?
- Is the performance improved over a previous version?
- You need clear objectives in order to plan your evaluation
- What questions should the evaluation answer?
- If you have developed a system, what claims would you want to be able to make about it?
2
Q
Evaluation of Usability and UX
A
- For systems that are used interactively, a main concern is their usability and user experience
- How well can users learn and use a system to achieve their goals?
- How satisfying is the use of the system?
- Any evaluation should be guided by clear objectives and questions
- Can users complete the tasks the system is meant to support?
- Would a first-time user be able to figure out how to use the system?
- Has a change made in the interface had the desired impact? …
- Can users achieve their goals faster or with less effort, compared to earlier versions or competing products?
3
Q
Forms of usability evaluation
A
- Analytical evaluation
- Informal or formal review, for example using scenarios, guidelines, checklists or models
- By the design team and/or external reviewers (usability experts)
- Empirical evaluation
- Evaluation with users (“User studies”)
- Assessment based on observation
4
Q
User Studies
A
- Why evaluate with users?
- Designers are experts in using their own systems
- That does not make them experts in usability and UX
- Analytical evaluation is limited by the ability of the reviewer to test a system from a user perspective
- Analytical evaluation can answer some questions but not others
- Why evaluate without users, first?
- Many problems can be found analytically
- Rigorous testing of interactive workflows by the design team
- Respect users and their time
5
Q
Types of User Studies
A
- Usability tests
- Focus on identifying usability issues
- Problems that users encounter when they use a system
- Lab studies
- Focus on user performance and user experience
- Controlled experiments, often to compare interfaces or systems
- Field studies
- Focus on use in the real-world
- Little or no control over the interaction, but observing use in context
6
Q
Usability Tests
A
- Users are given typical tasks to perform with a prototype, part of a system, or finished product
- Identifying usability issues for improvement (formative evaluation)
- Validating the design against project goals (summative evaluation)
- Qualitative focus on issues, i.e., problems users encounter when they try to complete tasks
- But usability issues can also be quantified by measuring frequency of issues
7
Q
Usability issues
A
- Usability issues are problems users encounter while trying to achieve their goals
- Something they are not able to do, or find difficult to do
- Something they do that leads to problems
- Examples
- User actions that prevent task completion
- Performing an action that leads away from task success
- Not seeing something that should be noticed
- Participant says the task is completed when it isn't
- User misinterprets some information presented to them
8
Q
Issue-based Metrics
A
- Using metrics to prioritise improvements
- Pareto Principle (80/20 rule): 20% of the effort will generate 80% of the results
- Example: problem frequency in a usability study (see the sketch below)
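A minimal sketch, using made-up issue counts, of how issue frequencies from a usability study can be ranked Pareto-style to prioritise fixes:

```python
from collections import Counter

# Hypothetical issue log: one entry per time a participant hit a problem.
observed_issues = [
    "label unclear", "label unclear", "button hidden", "label unclear",
    "button hidden", "slow feedback", "jargon", "button hidden",
    "label unclear", "slow feedback",
]

counts = Counter(observed_issues)
total = sum(counts.values())

# Rank issues by frequency and report the cumulative share of all
# observed problems that fixing the top issues would address.
cumulative = 0
for issue, count in counts.most_common():
    cumulative += count
    print(f"{issue:15s} {count:2d}  cumulative {cumulative / total:.0%}")
```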
9
Q
“What one thing would you improve”
A
- Asking users at the end of the usability test what one problem they would fix
- Coding responses to identify categories
- Example: the top five categories cover 75% of suggested improvements (see the sketch below)
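A sketch of that coverage computation; the category counts are invented and chosen to reproduce the 75% figure:

```python
from collections import Counter

# Hypothetical coded answers to "What one thing would you improve?"
coded_responses = (["navigation"] * 9 + ["search"] * 6 + ["labels"] * 5
                   + ["speed"] * 5 + ["onboarding"] * 5 + ["colors"] * 4
                   + ["fonts"] * 3 + ["icons"] * 2 + ["help text"])

counts = Counter(coded_responses)
top_five = counts.most_common(5)
coverage = sum(count for _, count in top_five) / len(coded_responses)
print(f"Top five categories cover {coverage:.0%} of suggestions")
```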
10
Q
How many users for a test?
A
“Five Participants is Enough”
- It is widely believed that >75% of issues are found by the first five users (Nielsen's model; see the sketch below)
- "Testing one user is 100 percent better than testing none"
- “You can find more problems in half a day than you can fix in a month” (Steve Krug)
- Do not expect to find and fix all issues
- Some issues can only be discovered after other issues have been fixed
- What works for most people might remain an issue for some people
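The five-user claim rests on Nielsen's problem-discovery model, Found(n) = 1 - (1 - p)^n, where p is the probability that a single user exposes a given problem; 0.31 is the average rate Nielsen and Landauer reported. A minimal sketch:

```python
def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected share of problems found by n users (Nielsen's model)."""
    return 1 - (1 - discovery_rate) ** n_users

# With the default rate, five users uncover roughly 84% of problems;
# each additional user adds less than the one before.
for n in range(1, 8):
    print(n, f"{problems_found(n):.0%}")
```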
11
Q
Lab Studies
A
- Lab studies focus on performance and user experience
- The purpose of design is to achieve an improvement of something
- Develop a prototype, system, app or product that in some respect is better than what we had before
- e.g., more efficient, easier to use, faster to learn, less error-prone, …
- Users are given tasks to perform under controlled conditions
- Observing the effect of specific designs on performance (e.g., completion time, error rate) and/or user experience (user-reported ratings)
12
Q
Comparative Evaluation
A
- Lab studies are usually comparative
- Comparing a new user interface with a previous version
- Is there an improvement?
- Benchmarking of a new interactive system against the best existing solution (comparison against a “baseline”)
- Important in research and innovation
- Comparing alternative designs to see which one works best
- Formative studies
13
Q
Controlled Experiments
A
- Lab studies are conducted as controlled experiments
- Experiments are an empirical research method for answering questions of comparison and causality
- “Does the new feature added to the UI cause a lower error rate?”
- “Is search engine A more effective in finding what users are looking for than search engine B?”
- The aim of an experiment is to determine the cause-effect relationship between variables (see the sketch below)
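As an illustration, a minimal sketch of one common analysis for such a comparison, assuming hypothetical completion-time data from a between-subjects design; the independent-samples t-test stands in for whatever test the actual design calls for:

```python
from scipy import stats

# Hypothetical task completion times (seconds), one group per interface.
times_ui_a = [48.2, 52.1, 45.9, 60.3, 51.7, 49.8, 55.0, 47.4]
times_ui_b = [41.0, 44.5, 39.8, 47.2, 42.6, 40.9, 45.3, 43.1]

# Is the observed difference in mean completion time likely a real effect?
t_stat, p_value = stats.ttest_ind(times_ui_a, times_ui_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```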
14
Q
Principles of Experiment Design
A
- Reduction to observation of specific variables
- Reducing a question about cause and effect to specific variables that can be manipulated and specific variables that can be observed
- Repetition: repeated runs/trials to gain sufficient evidence
- Experiments study a relationship between variables; repetition is necessary to build up evidence of the relationship
- Control to limit confounding factors
- Experiments are controlled to minimize the influence of other variables on the observed effect.