Qualitative Analysis Flashcards
Affinity Diagramming
Method to externalize and meaningfully cluster observations from research.
Capture 50-100 observations on post-its, one observation per note. The group interprets each observation and (either inductively or deductively) clusters together those that share a similar problem or issue.
Elito Method
Method to more easily translate observations into design ideas and business directives/logic.
As a group, translate observations into judgments, values, concepts, etc., in a spreadsheet format.
What’s the difference between a Persona and an Archetype?
Both draw on empathy, but in different ways.
Archetypes
- Complexity
- Earlier stage ideation and exploration
- Who, what, why
- Rich observational data
- People can fall into multiple archetypes
Personas
- Simplification
- Used when defining requirements, prioritizing audiences
- Who the main focus
- Observational data joined with market research metrics
- People should only fall into one persona
Personas were created with quite a straightforward focus in mind: to reduce the risk of creating an unwanted solution through the use of empathy. They were born from the philosophy of reductionism, which is “a search for the associations between phenomena, which can be described in terms of simpler or more fundamental phenomena”. Such an approach is informative and risk-averse, yet not particularly inspirational for humans.
Archetypes relate to our subjective approach to life rather than attempting to analyze the world as an objective set of objects acting and reacting to one another. Such an approach opens doors for richness of interpretation, uncertainty of choice, and space for risk-taking. In this way archetypes seem better equipped to bring change to life, while personas are better at setting the external boundaries for the design space.
They support storytelling not only about the solution but also about the creators who bring it to life. They build on the strengths and aspirations of humans, leaving space for playfulness and imagination.
Persona
Personas are representations of segments of your population of interest, based on what qualitative/quantitative research has told you are the relevant factors to consider.
Male
Around 30 years old
Higher education graduate
Senior-level employee who wants to get the most out of a few days’ vacation
Travels a lot and has already visited many places
Interested in affordable travel offers to new places
Would like to always be connected with his team
Problem - generalizations about beliefs, needs, and values can be made that are stereotypical or otherwise invalid.
A UX team may assume that a female user, 18-25 years old, with a high education level will experience a product one particular way. This often leads to bending the persona to wrongly validate a design decision. A person’s characteristics and behavior do not always align; in some cases the gap between them can be large.
Archetype
Archetypes are more useful for deeper empathetic design and ideation.
With archetypes, you are removing lots of the typical demographic and identity-based characteristics that are prevalent in market research, and instead looking for behavioral factors or deeper underlying characteristics of individuals that can drive how you design for users.
Archetypes are the embodiments of the universal stories that all human beings share, which means that they represent something that each of us has a mental model of, be it an angel, rebel, or citizen. After a moment’s thought we are able to describe what such an archetype means.
For example - users that have an underly
Steps to Coding Qualitative Data
- Remove blanks / gibberish
- Randomly collect a sample of ~400 comments (95% confidence level, ±5% margin of error)
- Take 50 comments and develop an inductive coding scheme OR apply a deductive coding scheme OR hybrid approach
- Codes should be externally heterogeneous and internally homogeneous
- Document your coding schema
- Primary-coder or two-coder approach (with a primary coder, a second person only has to code a sample to check reliability)
- Inter-rater reliability score (Cohen’s Kappa): -1 (systematic disagreement), 0 (chance-level agreement), 1 (perfect agreement)
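The inter-rater reliability step above can be sketched in plain Python. Cohen's kappa corrects observed agreement for the agreement two coders would reach by chance; the comment codes and labels below are hypothetical examples, not from the source.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of items both coders labeled the same.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement: product of each coder's marginal label rates.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to six comments by two coders.
a = ["bug", "bug", "praise", "feature", "bug", "praise"]
b = ["bug", "feature", "praise", "feature", "bug", "praise"]
print(round(cohens_kappa(a, b), 2))
```

A kappa around 0.6-0.8 is commonly read as substantial agreement; below ~0.4, the coding schema usually needs revision before full coding proceeds.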
Analytical Tests of Qualitative Coded Data
Chi-Square Tests
T-Test and ANOVA (mean number of comments of type X for group A v. group B)
Correlational and Regression (to what extent do mentions of a particular feature predict product usage or CSat?)
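The chi-square test above checks whether the distribution of coded comments differs between groups. A minimal pure-Python sketch, using hypothetical counts:

```python
def chi_square(table):
    """Chi-square statistic for a 2-D contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical: do "pricing" comments differ between group A and group B?
table = [[30, 70],   # group A: [pricing comments, other comments]
         [10, 90]]   # group B
stat = chi_square(table)
# df = (2-1)*(2-1) = 1; the critical value at alpha = .05 is 3.841
print(stat > 3.841)  # True -> the groups differ beyond chance
```

In practice you would use `scipy.stats.chi2_contingency`, which also returns the p-value directly.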
Usability Heuristics
- Visibility of system status
- Match between system and real world
- User control and freedom
- Consistency and standards
- Error prevention
- Recognition rather than recall
- Flexibility and efficiency of use
- Aesthetic and minimalist design
- Help users recognize, diagnose, and recover from errors
- Help and documentation
Max Diff
What is MaxDiff Analysis? MaxDiff (otherwise known as Best-Worst scaling) quite simply involves survey takers indicating the ‘Best’ and the ‘Worst’ options out of a given set. Implemented within an appropriate experimental design, this yields a relative ranking for each option.
Conjoint Analysis
‘Conjoint analysis’ is a survey-based statistical technique used in market research that helps determine how people value different attributes (feature, function, benefits) that make up an individual product or service.
The objective of conjoint analysis is to determine what combination of a limited number of attributes is most influential on respondent choice or decision making. A controlled set of potential products or services is shown to survey respondents and by analyzing how they make choices between these products, the implicit valuation of the individual elements making up the product or service can be determined. These implicit valuations (utilities or part-worths) can be used to create market models that estimate market share, revenue and even profitability of new designs.
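Once part-worths are estimated, the market-model step described above can be sketched as a share-of-preference simulation: sum each product's part-worths into a total utility, then convert utilities to shares with a logit (softmax) rule. The attributes, levels, and utility values below are hypothetical.

```python
import math

# Hypothetical part-worths, e.g. from a fitted choice model.
part_worths = {
    ("brand", "A"): 0.6, ("brand", "B"): 0.2,
    ("price", "$10"): 0.5, ("price", "$15"): -0.5,
}

def utility(profile):
    """Total utility of a product profile = sum of its level part-worths."""
    return sum(part_worths[(attr, lvl)] for attr, lvl in profile.items())

def logit_shares(profiles):
    """Share of preference: softmax over total utilities."""
    exps = [math.exp(utility(p)) for p in profiles]
    total = sum(exps)
    return [e / total for e in exps]

market = [
    {"brand": "A", "price": "$15"},  # premium product
    {"brand": "B", "price": "$10"},  # value product
]
shares = logit_shares(market)
```

Here the value product wins share despite the weaker brand, because the price part-worth gap (1.0) outweighs the brand gap (0.4); that is exactly the kind of trade-off conjoint is designed to quantify.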
Van Westendorp Price Sensitivity Meter
Participants in a PSM exercise are asked to identify price points at which they can infer a particular value to the product or service under study.
At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)
At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? (Expensive/High Side)
At what price would you consider the product to be a bargain—a great buy for the money? (Cheap/Good Value)
Point of marginal cheapness (PMC): intersection of “too cheap” and “expensive”
Point of marginal expensiveness (PME): intersection of “cheap” and “too expensive”
The general explanation of intersecting cumulative frequencies varies. A common description of the intersections is that the crossing of “too cheap” and “expensive” can be the lower bound of an acceptable price range. Some describe this as the “point of marginal cheapness” or PMC. Similarly, the intersection of the “too expensive” and “cheap” lines can be viewed as the upper bound of an acceptable price range. An alternative description is the “point of marginal expensiveness” or PME.
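Finding those intersections can be sketched as follows: build the cumulative curves over a price grid and take the first price where the rising curve meets the falling one. The five respondents' answers below are hypothetical.

```python
# Hypothetical answers per participant: (too_cheap, cheap, expensive, too_expensive)
responses = [
    (5, 6, 8, 10), (7, 9, 12, 20), (9, 10, 11, 18),
    (11, 12, 14, 22), (6, 7, 9, 16),
]

def share_at_most(answers, p):
    """Rising curve: fraction whose threshold is at or below price p."""
    return sum(a <= p for a in answers) / len(answers)

def share_at_least(answers, p):
    """Falling curve: fraction whose threshold is at or above price p."""
    return sum(a >= p for a in answers) / len(answers)

too_cheap = [r[0] for r in responses]
cheap     = [r[1] for r in responses]
expensive = [r[2] for r in responses]
too_exp   = [r[3] for r in responses]

grid = range(1, 31)
# PMC: falling "too cheap" curve meets rising "expensive" curve.
pmc = next(p for p in grid
           if share_at_most(expensive, p) >= share_at_least(too_cheap, p))
# PME: falling "cheap" curve meets rising "too expensive" curve.
pme = next(p for p in grid
           if share_at_most(too_exp, p) >= share_at_least(cheap, p))
print(pmc, pme)
```

With this toy data the acceptable price range runs from PMC to PME; real analyses interpolate between grid points for smoother intersections.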
Principal Components Analysis
A statistical technique for feature extraction - meaning, if I have 20 potential variables, what are the 5 components that 1) capture the valuable variance across all variables while 2) deprioritizing the least important ones, fine-tuning our focus.
One drawback to PCA is that it makes your predictor variables less interpretable; for this reason you need to make sure the components (features) you extract match your real-world and theoretical understanding of the subject.
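For two variables, PCA reduces to eigendecomposing a 2x2 covariance matrix, which can be done by hand with the quadratic formula. The toy data below is illustrative; real PCA on many variables would use a library such as scikit-learn or NumPy.

```python
import math

# Two hypothetical, strongly correlated survey measures.
x = [2.5, 0.5, 2.2, 1.9, 3.1, 2.3, 2.0, 1.0, 1.5, 1.1]
y = [2.4, 0.7, 2.9, 2.2, 3.0, 2.7, 1.6, 1.1, 1.6, 0.9]

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    """Sample covariance (n - 1 denominator)."""
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

# 2x2 covariance matrix [[sxx, sxy], [sxy, syy]].
sxx, syy, sxy = cov(x, x), cov(y, y), cov(x, y)

# Eigenvalues of a symmetric 2x2 matrix via trace/determinant.
tr, det = sxx + syy, sxx * syy - sxy * sxy
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc   # lam1 >= lam2

# Share of total variance the first principal component explains.
explained = lam1 / (lam1 + lam2)
print(round(explained, 3))
```

When one component explains most of the variance, as here, the two variables are largely redundant and can be collapsed into a single feature.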
Cluster Analysis
A statistical technique that groups observations into a set of clusters.
High-quality clusters: observations within a group are as similar as possible, while observations belonging to different groups are as different as possible.
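A minimal k-means sketch illustrates the similar-within / different-between idea; this toy version is 1-D with k = 2 and deterministic min/max initialization (real clustering would use e.g. scikit-learn with multiple random restarts).

```python
def kmeans_1d(points, iters=20):
    """Toy 1-D k-means with k = 2, initialized at the min and max point."""
    centroids = [min(points), max(points)]
    clusters = []
    for _ in range(iters):
        clusters = [[], []]
        # Assign: each point joins its nearest centroid's cluster.
        for p in points:
            i = min((0, 1), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        # Update: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) for c in clusters]
    return centroids, clusters

# Hypothetical satisfaction scores forming two obvious groups.
points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
centroids, clusters = kmeans_1d(points)
print(centroids)
```

The assign/update loop is the whole algorithm; the quality criterion it optimizes (within-cluster squared distance) is exactly the "similar within, different between" property described above.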
Factor Analysis v. Cluster Analysis
Factor Analysis
- Explain correlation and relate variables
- Simplification (similar to PCA)
Cluster analysis
- Heterogeneity in data set
- Form of categorization
- Becomes computationally limiting if your dataset is too large
Natural Language Processing
- Field of AI that gives machines the ability to read, understand, and derive meaning from human languages
- Bag of words - representing text as an unordered multiset of its words, keeping counts but discarding grammar and word order
- Tokenization - splitting raw text into units (tokens) such as words or subwords
- Lemmatization - reducing each word to its dictionary base form (lemma), e.g. “running” → “run”
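The three ideas above can be sketched together in a few lines. Note the "lemmatizer" here is a naive suffix-stripping stand-in for illustration only (it can produce non-words); real work would use a proper lemmatizer such as NLTK's or spaCy's.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split on runs of letters (everything else is dropped)."""
    return re.findall(r"[a-z]+", text.lower())

def naive_lemma(token):
    # Toy normalization only: strips a few common English suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def bag_of_words(text):
    """Unordered word counts -- word order is discarded entirely."""
    return Counter(naive_lemma(t) for t in tokenize(text))

bow = bag_of_words("The app crashed. Crashes happened while loading.")
print(bow.most_common(2))
```

Because both "crashed" and "Crashes" normalize to "crash", the bag of words correctly counts them as two mentions of the same concept, which is the point of lemmatizing before counting.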