Usability Testing Flashcards
A user test with paper prototyping is an example of a:
a. Down-the-hallway usability test
b. Summative Usability Test
c. Low-fidelity Usability Test
c. Low-fidelity Usability Test
Is usability testing largely a “qualitative or quantitative” research technique?
Qualitative
What are some examples of quantitative observable data?
Time on Task, Success/failure rates, Effort e.g. # of clicks/perception of progress
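A minimal sketch of how those observable numbers might be rolled up after a round of testing, assuming each participant's attempt is logged as a simple record with start/end times, a success flag, and a click count (the Session fields and the summarize helper are hypothetical, not from the source):

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One participant's attempt at a single task (hypothetical log format)."""
    start_seconds: float   # when the participant started the task
    end_seconds: float     # when they finished or gave up
    succeeded: bool        # did they complete the task?
    clicks: int            # effort proxy: number of clicks taken

def summarize(sessions: list[Session]) -> dict:
    """Roll raw observations up into the usual quantitative metrics."""
    n = len(sessions)
    return {
        "avg_time_on_task": sum(s.end_seconds - s.start_seconds for s in sessions) / n,
        "success_rate": sum(s.succeeded for s in sessions) / n,
        "avg_clicks": sum(s.clicks for s in sessions) / n,
    }

# Example: 3 of 4 participants completed the task
sessions = [
    Session(0, 42, True, 9),
    Session(0, 65, True, 14),
    Session(0, 120, False, 31),
    Session(0, 51, True, 11),
]
print(summarize(sessions))
# {'avg_time_on_task': 69.5, 'success_rate': 0.75, 'avg_clicks': 16.25}
```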
What are some examples of qualitative observable data?
Stress responses, Subjective Satisfaction, Perceived effort or difficulty
Why metrics?
Metrics are important because usability testing isn't just a way to refine your own design; it's also a tool for influencing the rest of your team.
Showing that 60% of users struggled with a feature, or that a feature has a 10% success rate, is far more persuasive than saying, "The users didn't like it. The users found it difficult." Metrics make the findings believable and ground the team in the reality of how users will probably use a design.
They also turn design discussions into fact-based rather than opinion-based conversations: if people are arguing over a design, you can say, "Let's look at the data; let's look at how users did." So, what are we measuring? We're measuring behavior, opinions, and the actual data.
Why shouldn’t you ask your testers “Do you like this design”?
If you ask someone whether they like a design, they'll tell you what they like, or what they "think" they like, rather than showing you how they actually behave.
What users think and what they do are…
two different things
What are we measuring in usability tests?
- Behavior: Task performance, speed, efficiency, goal fulfillment, expectation matching
- Opinions: How it looks, Thoughts & opinions
- Data: Visitors, Pages, Documents
What are we looking for when we’re observing user testers?
We're observing whether the user is:
- Not seeing something
- Going in the wrong direction
- Thinking it's correct when it isn't
- Missing a "rule"
What is the think-aloud protocol?
Asking the user to think aloud, telling you what they're thinking and feeling as they work through the task.
- Test the interface not the user
- Ask them to externalize thoughts/feelings
- Verify researcher mental model
According to Frank Spillers, why is time on task not always the best measure of usability?
a. It’s difficult to measure precisely when a user starts performing a task.
b. How fast a task should be done depends on the context.
c. It is not always easy to divide the users’ actions into tasks.
b. How fast a task should be done depends on the context.
What are the top 3 metrics according to Frank Spillers?
- Success rate - do they get the task? e.g., 7/10 users completed the task ("got it right")
- Failure rate - do they not get it? e.g., 3/10 users committed errors
- Partial success - do they get part of it? e.g., 6/10 users had confusions
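A minimal sketch of turning those per-participant outcomes into the three rates; the 7, 3, and 6 out of 10 counts are the example figures above, while the rate helper and variable names are hypothetical:

```python
def rate(count: int, participants: int) -> str:
    """Express a raw count as both a fraction and a percentage."""
    return f"{count}/{participants} ({count / participants:.0%})"

participants = 10          # total users tested
completed = 7              # finished the task ("got it right")
committed_errors = 3       # did not get it
had_confusions = 6         # got part of it, but struggled along the way

print("Success rate:        ", rate(completed, participants))         # 7/10 (70%)
print("Failure rate:        ", rate(committed_errors, participants))  # 3/10 (30%)
print("Partial-success rate:", rate(had_confusions, participants))    # 6/10 (60%)
```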
How many users should you test with?
- You only need 8-15 people for most (formative) tests; summative (statistical) tests need 20-50+
- The number depends on your user types
- User testing is Qualitative Research (the research rules are different)
If you test with 5 users, that’s called…
An agile (weekly) usability test.
In Lean UX / Agile UX you run 3 to 5 users one week, 3 to 5 the next week, and 3 to 5 the week after that, and by the end of the month you have a nice sample of about 15-20 users.
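A minimal sketch of the cumulative-sample arithmetic behind that weekly cadence; the per-week participant counts below are made-up illustrative values in the 3-5 range quoted above:

```python
# Running 3-5 users each week quickly accumulates a useful sample:
# after a month of weekly rounds you are roughly in the 15-20 user
# territory the card describes.
rounds = [5, 4, 3, 5]            # hypothetical participant counts, one per week
cumulative = 0
for week, n in enumerate(rounds, start=1):
    cumulative += n
    print(f"Week {week}: +{n} users, {cumulative} total")
# Week 4 ends at 17 users total, within the 15-20 range from the card.
```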