Technical Interview Questions Flashcards
How do you decide who are appropriate participants in your research?
Starting point
- Population(s) - who am I trying to represent (business, UX goals)?
- Research goals - am I testing a hypothesis? Describing a population or measuring a population parameter? Surfacing new issues/requirements?
Factors I’d fine-tune
- Required characteristics (depends)
- Desired characteristics (depends)
- Demographic distribution (even split unless there’s a reason otherwise)
- Participant suitability (e.g., diary studies - screen against thoughtfulness, clarity)
What are common mistakes that people make when moderating?
Moderating what? This can vary between summative usability, formative usability, interviews, generative research exercises, etc.
- Losing focus on observation
- -> Get note-takers, use transcripts, stay mindful, modulate your involvement
- Asking leading questions or introducing other moderator biases
- Picking the wrong moderation protocol for the study type
Can you tell me about a particularly challenging bit of qualitative data analysis you’ve done in the past?
Latin America home visits
- 2 countries
- 16 2-hr sessions
- 12 observers
- Simultaneous translators
Understanding behaviors/needs WRT specific type of service WITH LANGUAGE BARRIER
- Collect all artifacts (strict data collection process)
- Observation post-its strictly collected between sessions
- Heavy reliance on transcripts, catalogued artifacts (e.g., photos)
- Full team involved in early analysis
- Debriefs on the road
- Every 2 days - affinity diagramming with stakeholders
- Assign stakeholders particular participants (for role-playing exercises later on)
- Synthesize - focus in on what matters to build model
- My own observations, stakeholders observations, session records
- Team synthesis begins the reduction process
- Further reduce it more as researcher to develop overarching model
When designing survey questions with response options that range from “Strongly disagree” to “Strongly agree”, how many response options are ideal? (More importantly, Why?)
Typically, I use the rule of thumb: bipolar construct -> 7-pt scale, unipolar construct -> 5-pt scale
Factors that matter when choosing a scale
- Unipolar or bipolar heuristic
- Total number of items (>10? favor 5-pt to reduce respondent burden)
- Cultural preference
- Already a benchmark? DON’T CHANGE
Techniques to help test
- Interpolation
- Cognitive pretesting
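As a concrete illustration of the rule of thumb above - the exact label wording here is an assumption; real anchors should be cognitively pretested with the target population:

```python
# Illustrative Likert-style label sets (hypothetical wording).
# Bipolar construct (agreement runs negative -> positive): 7 points
# with a neutral midpoint.
BIPOLAR_7 = [
    "Strongly disagree", "Disagree", "Somewhat disagree",
    "Neither agree nor disagree",
    "Somewhat agree", "Agree", "Strongly agree",
]
# Unipolar construct (e.g., importance runs zero -> maximum): 5 points,
# no neutral midpoint needed.
UNIPOLAR_5 = [
    "Not at all important", "Slightly important", "Moderately important",
    "Very important", "Extremely important",
]
```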
Please describe a recent discovery research project and explain how you went from having lots of data to final deliverables. How did you approach the analysis and how did you translate your findings into something that would help the team?
TK
Pretend that a team wants feedback on some designs, but they don’t have time for any usability testing. What kind of feedback can you give the PM when he comes to you for help?
Analytical usability techniques
- Heuristic evaluations
- Cognitive walkthroughs
Name 3 well designed products
- Headspace
- Peloton
- Evernote
Name 3 poorly designed products
- Tinder
- HBO GO
PM dismisses your 6-person study. What do you say?
Not all information that’s meaningful can easily be measured.
Qual - models, stories, experiences, painpoints
Quant - engagement, adoption, retention, break-off points
Short-term responses
1) Do we have user-centered measures we can refer to instead of business-centered ones (from logs data)? Indirect measures of the user experience
2) Where different methods/data sources fail to corroborate each other, treat the gap as a path for further inquiry.
- Gap between what people say and do?
- New feature - usability sessions and logs analysis contradict. We’re running wide-scale surveys and crowdsourcing to figure out why.
Organizational culture (long-term)
1) Thick data - ability to inspire, ideate, identify new opportunities
2) If possible, bring them along to sessions, involve them in group analysis.
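One quantitative counter to the small-n objection - not in the notes above, so treat the specific numbers as an assumption: the classic Nielsen-Landauer problem-discovery model estimates the share of usability problems a formative study surfaces as P = 1 - (1 - p)^n, where p (often cited as roughly 0.31) is the average probability that a single participant encounters a given problem. A minimal sketch:

```python
def proportion_found(n: int, p: float = 0.31) -> float:
    """Expected share of usability problems surfaced by n participants,
    per the problem-discovery model P = 1 - (1 - p)**n.
    p = 0.31 is the commonly cited average per-participant detection rate."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for n in (1, 3, 6, 15):
        print(f"n={n:2d}: ~{proportion_found(n):.0%} of problems found")
```

Under these assumptions, six participants already surface the large majority of problems - which is why a small formative study is defensible even though it can’t estimate population parameters.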