UX Research Flashcards

1
Q

What should a presentation on UX research consist of?

A
  1. Why we needed this research
  2. Key findings
  3. Influence on design
2
Q

What is a saturated answer to a research question?

A

It means that new interviews will not give us any new information

3
Q

What is good qualitative research?

A

Research after which we clearly understand what to do

4
Q

Why do we need research?

A

To understand where we are now and what plan we need to build a better reality

5
Q

Qualitative or quantitative: which to choose?

A

Both. Quantitative data on its own explains nothing, and qualitative data on its own proves nothing

6
Q

Specifics of qualitative and quantitative research

A
  1. Quantitative — all about numbers; answers ‘What?’
  2. Qualitative — pains, needs, reasons, insights; answers ‘Why?’
7
Q

Quantitative or Qualitative?

A

Quant — more respected; numbers are easy to understand
Qual — more people-oriented; helps better understand the reasons behind behavior

8
Q

Phrase about Quant and Qual

A

Quant research alone explains nothing, Qual research alone proves nothing

9
Q

Which type of research is better for a designer?

A

Qualitative, because it’s more people-oriented and gives insights

10
Q

Types of qual research

A
  1. Usability testing (can also be quant)
  2. In-depth interviews
  3. Observations
  4. Contextual interviews (conducted while the user is using an interface)
  5. Focus groups (not so good for designers)
  6. Diary studies (when we want to analyze experience over a period of time; long-term usability)
  7. Workshops (participatory design)
11
Q

Who is an SME?

A

A subject-matter expert (SME) is a person who has accumulated deep knowledge in a particular field

12
Q

Sources for qual study

A
  1. Users (real and potential)
  2. Non-users (those who quit or never came)
  3. SMEs (subject-matter experts)
  4. Owner (product owner)
13
Q

Triangulation

A

For:

  1. Credibility
  2. Complexity
  3. Depth
  4. Breadth
14
Q

Triangulation examples

A
  1. Start with a quant study (or Google Analytics), discover problems, build hypotheses
  2. Continue with qual, go deeper
  3. Then go back to quant to check/prove
  4. In-depth interview (calm and long), contextual interview (quicker and closer to the action), open survey (anonymous)
  5. Triangulate people (researchers) to avoid confirmation bias
15
Q

The difference between research and study

A

Research is the discipline; a study is a specific piece of research for a specific project

16
Q

Divergent and convergent thinking

A

Phases of this process are either diverging or converging. During a diverging phase, you try to open up as much as possible without limiting yourself, whereas a converging phase focuses on condensing and narrowing your findings or ideas.

17
Q

A landscape of user research methods

A
18
Q

The Attitudinal vs. Behavioral Dimension

A

Kind of contrasting “what people say” versus “what people do” (very often the two are quite different).

19
Q

About card sorting

A

Card sorting is a UX research method in which study participants group individual labels written on notecards according to criteria that make sense to them.

Provides insights about users’ mental model of an information space and can help determine the best information architecture for your product.

Let’s imagine that you’re designing a car-rental site. Your company offers around 60 vehicle models that customers can choose from. How would you organize those vehicles into categories that people can browse to quickly find their ideal car rental? Your company might use technical terms such as family car, executive car, and full-size luxury car. But your users might have no idea of the difference between some of those categories. This is where card sorting can help: ask your users to organize vehicles into groups that make sense to them, and, then, see what patterns emerge.

20
Q

Conducting a card sort

A

Generally, the process works as follows:

  1. Choose a set of topics. The set should include 40–80 items that represent the main content on the site. Write each topic on an individual index card.

Tip: Avoid topics that contain the same words; participants will tend to group those cards together.

  2. User organizes topics into groups. Shuffle the cards and give them to the participant. Ask the user to look at the cards one at a time and place cards that belong together into piles.

Some piles can be big, others small. If the participant isn’t sure about a card, or doesn’t know what it means, it’s OK to leave it off to the side. It’s better to have a set of “unknown” or “unsure” cards than to randomly group cards.

Notes:
There is no preset number of piles to aim for. Some users may create many small piles, others may end up with a few big ones. It all depends on their individual mental models.

Users should be aware that it’s OK to change their mind as they work: they can move a card from one pile to another, merge two piles, split a pile into several new piles, and so on. Card sorting is a bottom–up process, and false starts are to be expected.

  3. User names the groups. Once the participant has grouped all the cards to her satisfaction, give her blank cards and ask her to write down a name for each group she created. This step will reveal the user’s mental model of the topic space. You may get a few ideas for navigation categories, but don’t expect participants to create effective labels.

Tip: It’s important to do this naming step after all the groups have been created, so that the user doesn’t lock herself in to categories while she’s still working; she should be free to rearrange her groups at any moment.

  4. Debrief the user. (This step is optional, but highly recommended.) Ask users to explain the rationale behind the groups they created. Additional questions may include:
    - Were any items especially easy or difficult to place?
    - Did any items seem to belong in two or more groups?
    - What thoughts do you have about the items left unsorted (if any)?

You can also ask the user to think out loud while they perform the original sorting. Doing so provides detailed information, but also takes time to analyze. For example, you might hear the user say, “I might put card Tomatoes into pile Vegetables. But wait, they are really a fruit, they don’t really fit there. I think Fruits is a better match.” Such a statement would allow you to conclude that the user did consider Vegetables a decent match for Tomatoes, even though Fruits was even better. This information could push you into crosslinking from Vegetables to Fruits or maybe even assigning the item to Vegetables if there are other reasons leaning in that direction.

If needed, ask the user for more-practical group sizes. You should not impose your own wishes or biases upon the participant during the original sorting (steps 1–3), but once the user’s preferred grouping has been defined, and after the initial debrief, you can definitely ask the participant to break up large groups into smaller subgroups. Or the opposite: to group small groups into larger categories.

Repeat with 15–20 users. You’ll need enough users to detect patterns in users’ mental models.
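The grouping data from those 15–20 sessions is usually analyzed by counting, for every pair of cards, how many participants put them in the same pile. A minimal sketch of that analysis (hypothetical data and function name, Python):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sorts):
    """For every pair of cards, count how many participants
    placed both cards in the same pile."""
    pairs = Counter()
    for piles in sorts:              # one participant's sort
        for pile in piles:           # one pile of card labels
            for a, b in combinations(sorted(pile), 2):
                pairs[(a, b)] += 1
    return pairs

# Three hypothetical participants sorting four grocery items
sorts = [
    [{"Tomatoes", "Cucumbers"}, {"Apples", "Pears"}],
    [{"Tomatoes", "Apples", "Pears"}, {"Cucumbers"}],
    [{"Tomatoes", "Cucumbers"}, {"Apples", "Pears"}],
]
m = cooccurrence(sorts)
print(m[("Apples", "Pears")])        # 3 — all three grouped them together
print(m[("Cucumbers", "Tomatoes")])  # 2 of 3
```

Pairs with high counts are strong candidates for the same navigation category; digital card-sorting tools compute exactly this kind of similarity matrix for you.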

21
Q

Open Card Sorting vs. Closed Card Sorting

A
  1. Open card sorting is the most common type of card sort and what we described above. Generally, when practitioners use the term card sort, it’s implied that it will be an open card sort. In an open card sort, users are free to assign whatever names they want to the groups they’ve created with the cards in the stack.
  2. Closed card sorting is a variation where users are given a predetermined set of category names, and they are asked to organize the individual cards into these predetermined categories. Closed card sorting does not reveal how users conceptualize a set of topics. Instead, it is used to evaluate how well an existing category structure supports the content, from a user’s perspective. A critique of the closed card sort is that it tests users’ ability to fit the content into the “correct” bucket — to users, it can feel more like solving a puzzle than like naturally matching content to categories. The method does not reflect how users naturally browse content, which is to first scan categories and make a selection based on information scent. Instead of closed card sorting, we recommend tree testing (also known as reverse card sorting) as a way to evaluate navigation categories.
22
Q

Moderated vs. Unmoderated Card Sorting

A
  1. Moderated card sorting includes step 4 in the process outlined above: the debrief (and/or think-aloud during the actual sorting). This step is a highly valuable opportunity to gain qualitative insights into users’ rationale for their groupings. You can ask questions, probe for further understanding, and ask about specific cards, as needed. If it’s feasible for your schedule and budget, we recommend moderating your card sorts to get these insights.
  2. Unmoderated card sorting involves users organizing content into groups on their own, usually via an online tool, with no interaction with a facilitator. It is generally faster and less expensive than moderated card sorting, for the simple reason that it doesn’t require a researcher to speak with each user. Unmoderated card sorting can be useful as a supplement to moderated card sorting sessions. For example, imagine a study involved highly distinct audience groups, and the research team decided to run a card sort with 60 users: 20 users for each of 3 different audience groups. In this case, it can be cost-prohibitive to run 60 moderated card-sorting sessions. Instead, the team may decide to do a small study of 5–10 moderated sessions for each audience group, followed by unmoderated card sorting for the remaining sessions.
23
Q

Paper vs. Digital Card Sorting

A
  1. Paper card sorting is the traditional form of card sorting. Topics are written on index cards and users are asked to create their group on a large workspace. The biggest advantage to paper card sorting is that there is no learning curve for the study participants: all they have to do is stack paper into piles on a table. It’s a forgiving and flexible process: users can easily move cards around or even start over. It’s also easier for people to manipulate a very large number of cards on a big table than it is to manipulate many objects on a computer screen that often can’t show everything within a single view. The downside of paper card sorting is that the researchers have to manually document each participant’s groups and input them into a tool for analysis.
  2. Digital card sorting uses software or a web-based tool to simulate topic cards, which users then drag and drop into groups. This method is generally the easiest for researchers, because the software can analyze the results from all the participants and reveal which items were most commonly grouped together, what category names users created, and the likelihood of two items being paired together. The downside is that the usability of the tool can impact the success of the session — technology problems can cause frustration or even prevent users from creating the exact groups that they want.
24
Q

Card sorting vs. tree testing

A

Card sorting is invaluable for understanding how your audience thinks, but it does not necessarily produce the exact categorization scheme you should follow.

For example, participants in a card sort often create a generic category to hold a few items which don’t seem to fit anywhere else; this is understandable, but if you were to actually include an “other stuff” category in your menu, the same users would avoid it like the plague.

(Website visitors are notoriously reluctant to click on vague labels because they quite rightly suspect they’ll have to do a lot of work to sift through the content.)

For best results, a card sort should be followed up by a tree test to evaluate the proposed menu structure.

25
Q

What is tree testing?

A

A tree test evaluates a hierarchical category structure, or tree, by having users find the locations in the tree where specific tasks can be completed.

Tree testing is incredibly useful as a follow-up to card sorting because it:

  1. Evaluates a hierarchy according to how it performs in a real-world scenario, using tasks similar to a usability test; and
  2. Can be conducted well in advance of designing page layouts or navigation menus, allowing inexpensive exploration and refinement of the menu categories and labels.

To conduct a tree test, you don’t need to sketch any wireframes or write any content. You only need to prepare two things: the tree, or hierarchical menu, and the tasks, or instructions which explain to study participants what they should attempt to find.
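The core tree-test metric is task success: did the participant's click path through the tree end at the correct node? A toy scoring sketch (hypothetical data and function name, Python):

```python
# Hypothetical tree-test scoring: for each task we know the correct
# path through the tree; a participant succeeds if their final
# click path matches it.
def success_rate(correct_path, click_paths):
    hits = sum(1 for path in click_paths if path == correct_path)
    return hits / len(click_paths)

# Four participants looking for lawn-care content
clicks = [
    ["Products", "Lawn Care"],
    ["Services", "Lawn Care"],   # wrong top-level category
    ["Products", "Lawn Care"],
    ["Products", "Lawn Care"],
]
rate = success_rate(["Products", "Lawn Care"], clicks)
print(rate)  # 0.75
```

Tree-testing tools report this per task, often alongside directness (did the user succeed without backtracking?) and time on task.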

26
Q

Risks during tree testing

A

Your tree should be a complete list of all your main content categories, and all their subcategories. Even if you are interested in testing only a specific section of the tree, excluding the other sections is risky because it assumes that users will know which section to go to. For example, if your website had both a Products and a Services category, and you chose to test only the Products tree, you would miss out on finding whether your audience understands the difference between these two categories.

27
Q

Competitive tree testing

A

If you are considering different labels for the same tree category, you may want to test two different trees in order to compare how the terms perform. Such a test is especially easy to do with Userzoom’s tree-testing tool, which allows you to randomly assign participants to different versions of the tree, in a manner similar to an A/B test on a live website. If you do test multiple trees, avoid showing the same user two alternative trees in the same session — users’ behavior when interacting with the second tree would be skewed by their experiences with the first one.

28
Q

Testing locations via tree test

A

There’s no need to prepare and test a separate tree if you just want to compare different locations for a label — such as whether tomatoes should be placed under Fruits or Vegetables. Instead of testing two different trees for each location, you can test a single tree and compare how many users clicked Fruits vs. how many clicked Vegetables. (You’ll also be able to tell which category they tried first, if they clicked on both.)

29
Q

Digital tools for tree testing

A

Userzoom and Treejack are both good options for conducting tree testing.

30
Q

Tree testing tasks

A

Ideally you should include tasks which target:

  1. Key website goals and user tasks, such as finding your most important product (Success rates in your primary navigation tasks can serve as a baseline against which you can compare secondary tasks, and a reference point for future testing.)
  2. Potential problem areas, such as new categories proposed by stakeholders or participants in a card sort
31
Q

Tree testing task phrasing

A
  1. Find information about starting a business.
  2. You are moving to Santa Fe next year, and once you arrive you would like to supplement your income by opening a side business providing lawn-care services. Find out what regulations you will need to follow.
  3. You are considering opening a lawn-care service. See if there are any resources on this site that can help you begin the process.

The 3rd is ❤️

32
Q

Tree testing limitations

A

Tree testing is often executed as a remote, unmoderated study. After recruiting representative users, you simply send them a link to the study, and the testing tool walks them through the process of completing the tasks using their own computer. The testing tool is much better than a human would be at keeping track of exactly which categories users click on.

However, this format does not capture the full context of user behavior (such as comments made while performing a task) and you can’t ask personalized follow-up questions.

To minimize the effects of the format, conduct at least a few moderated pilot sessions before collecting the bulk of your data. In these moderated sessions you can ensure the task wording is understandable and also get a chance to pick up on nuances that might otherwise be hard to spot in the quantitative data. For example, in a recent tree test we noticed in the pilot testing that many users avoided a certain category for the first half of their session, because the label was so broad that they feared the contents would be overwhelming. This trend wasn’t noticeable in the quantitative results due to the task order randomization, but it was quite obvious as you sat through each session and saw task after task where users ignored an obvious choice. That insight alone made the pilot testing a day well spent.

You can also partially compensate for the inability to ask follow-up questions by including a short survey after the tree test. Rather than asking users to recall any labels they found confusing, provide them with a list of labels and ask them to check which were difficult to understand. This question can be followed up with an open-ended question inviting users to share any further comments and feedback, to elicit unexpected assumptions or misunderstandings that may not be apparent from the click history.

33
Q

Generative research methods

A

Research goal:
Find new directions and opportunities

Field studies, diary studies, interviews, surveys, participatory design, concept testing

34
Q

Formative research methods

A

Research goal:
Improve usability of design

Card sorting, tree testing, usability testing, remote testing (moderated and unmoderated)

35
Q

Summative research methods

A

Research goal:
Measure product performance against itself or its competition

Usability benchmarking, unmoderated UX testing, A/B testing, clickstream / analytics, surveys

36
Q

Usability testing (aka usability-lab studies)

A

Participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.

37
Q

Field studies

A

Researchers study participants in their own environment (work or home), where they would most likely encounter the product or service being used in the most realistic or natural environment.

38
Q

Contextual inquiry

A

Researchers and participants collaborate in the participant’s own environment to inquire about and observe the nature of the tasks and work at hand. This method is very similar to a field study and was developed to study complex systems and in-depth processes.

39
Q

Participatory design

A

Participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.

40
Q

Focus groups

A

Groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.

41
Q

Interviews

A

A researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.

42
Q

Eye tracking

A

An eye-tracking device is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.

43
Q

Usability benchmarking

A

Tightly scripted usability studies are performed with larger numbers of participants, using precise and predetermined measures of performance, usually with the goal of tracking usability improvements of a product over time or comparing with competitors.

44
Q

Remote moderated testing

A

Usability studies are conducted remotely, with the use of tools such as video conferencing, screen-sharing software, and remote-control capabilities.

45
Q

Unmoderated testing

A

An automated method that can be used in both quantitative and qualitative studies and that uses a specialized research tool to capture participant behaviors and attitudes, usually by giving participants goals or scenarios to accomplish with a site, app, or prototype. The tool can record a video stream of each user session, and can gather usability metrics such as success rate, task time, and perceived ease of use.

46
Q

Concept testing

A

A researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience. It can be done one-on-one or with larger numbers of participants, and either in person or online.

47
Q

Diary studies

A

Participants use a mechanism (e.g., a paper or digital diary, camera, or smartphone app) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience. Diary studies are typically longitudinal and can be done only for data that is easily recorded by participants.

48
Q

A/B testing (aka multivariate testing, live testing, or bucket testing)

A

A method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.
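Deciding whether the measured difference between designs is real rather than noise usually comes down to a significance test; a common choice for conversion counts is a pooled two-proportion z-test. A sketch with made-up numbers (Python, stdlib only):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test on conversion counts
    from an A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: design A converts 120/1000, design B 150/1000
z, p = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2), round(p, 3))  # z ≈ 1.96, p ≈ 0.05
```

In practice A/B platforms run this (or a Bayesian equivalent) for you; the sketch just shows why sample size matters — the standard error shrinks as n grows.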

49
Q

Surveys

A

A quantitative measure of attitudes through a series of questions, typically more closed-ended than open-ended. A survey that is triggered during the use of a site or application is an intercept survey, often triggered by user behavior. More typically, participants are recruited from an email message or reached through some other channel such as social media.

50
Q

Number of users for each type of study

A
51
Q

SUPR-Q

A

Standardized User Experience Percentile Rank Questionnaire

This is an 8-item questionnaire for measuring the quality of the website user experience, providing measures of usability, credibility, loyalty, and appearance. You can read details about SUPR-Q at www.suprq.com

52
Q

The best number of stakeholders

A

3-7

53
Q

General goal of the research

A

To help business

54
Q

Risks of stakeholder mapping

A

Make sure that (almost) nobody will see it

55
Q

Researcher’s goal explained

A
  1. Where are we now?
  2. What should we do to achieve the better reality and, as a result, the business goals?

Moving from reality 1 to reality 1’

56
Q

Rainbow spreadsheet

A

An approach to analyzing results from usability tests

57
Q

Basic questions at the stakeholder (business) interview

A
  1. What is the general business goal, and what does it consist of?
  2. How does the business see the ‘better reality’ it wants to reach?
  3. How does the business see the way to achieving the goal?
  4. What are the challenges?
  5. What relevant knowledge/data does the business already have?
  6. What data/knowledge is important and desired, but missing?
  7. What are the risks? What is the business afraid of?
  8. What hypotheses do they have, which have already been checked, and what were the results?
  9. How is the business built: structure, processes. What are our possibilities and restrictions?
  10. Who are the owners of the important knowledge?
  11. Who else is involved?
  12. Why?
58
Q

What else is important to understand before the research?

A

Who initiated the research, and then interview them

59
Q

What question should we keep in mind during the business interview?

A

How can I help the business?

60
Q

What activities are good for participatory design workshops

A
  1. Business model canvas
  2. Value proposition canvas
  3. ‘How might we’ questions
  4. Empathy mapping
  5. CJM
  6. Brainstorming/brainwriting
61
Q

What is reframing

A

When, during the kick-off interview, it turns out that the business actually needs something other than what it came with

62
Q

Facilitating activities list

A

https://toolbox.hyperisland.com/
https://www.sessionlab.com/library

63
Q

How do we help the business via UX research

A
  1. Give relevant and accurate data about the current reality
  2. Define what exactly the ‘better reality’ is
  3. Articulate the tasks that will help reach the ‘better reality’
64
Q

Methods of the business interview

A
  1. ❌ Brief/survey
  2. ✅ In-depth interview
  3. ✅ Group interview
  4. ✅ Participatory practices
65
Q

What is important before the interview

A

To understand who initiated the whole process and to interview them

66
Q

What is a research program, or protocol?

A

A document that lays out the context of the problem under study, the hypotheses, and the high-level research questions, and that justifies the chosen research methods and sample.

67
Q

Components of a research program

A
  1. Contextualization: a description of the business problem and the user problem (Background)
  2. Research goals/objectives (Objectives)
  3. Research questions
  4. Hypotheses (Hypothesis)
  5. Data-collection methods (Methodology)
  6. How the study is run (Set-up)
  7. Recruitment criteria & sample
  8. Session scenarios (Discussion guide/scenario)
  9. Documentation protocols (Debrief)
68
Q

What is the English term for a research protocol?

A

Research plan

69
Q

The 7 core components of a user research plan

A
  1. The background of the research project detailing why we are conducting this study. This can also include the internal stakeholders involved
  2. The objectives and goals of the research, what the teams want to learn from the research, or what they would like the outcome to be. I think about objectives this way: We should be able to answer all the objectives at the end of the research project
  3. A breakdown of the participants we are recruiting and how we are recruiting them
  4. How we are conducting the research, which includes the chosen research method
  5. An interview guide as a cheat sheet of instructions/questions to follow during the research session (This includes components within itself, such as the introduction, interview questions and conclusion)
  6. An approximate timeline of when the research will take place, and when a report could be expected
  7. Resources for people to find, such as links to any other documentation
70
Q

What is a research strategy

A
  1. Consists of the main business purpose, which is human-centered and answers the question ‘Why do we need all this?’ Preferably phrased as an HMW (‘How might we’) question
  2. The general purpose breaks down into tasks — each one gets us closer to the business goal, helps users, and has its own separate value
  3. Each task then has its research questions (open-ended)
  4. Each research question has to refer to a method and a source of information
71
Q

One of the benefits of a research strategy

A

Understanding our possibilities and restrictions

72
Q

Rules of research strategy — requirements for the goal, tasks, and research questions

A

Goal — an HMW question, human-centered, sums up all the tasks, valuable for the business

Tasks — completing each one creates partial value for the user and takes the business closer to its goal

Questions — open-ended; what knowledge do we need to complete these tasks?

All methods should be triangulated

73
Q

What is Ethnography

A

To become a part of a social unit in order to understand the people in it

74
Q

Quantitative research criteria

A
  1. Validity
  2. Reliability
  3. Replicability
75
Q

What is the difference between a user panel and a user base (as research sources)?

A

A user panel is a warm base; a user base is a cold base

76
Q

What is a good research?

A

It’s actionable research

77
Q

What is affinity mapping

A

The process of organizing qualitative data to create an affinity diagram. Affinity diagrams help organize information into groups of similar items.

78
Q

How to create a traditional affinity diagram in 5 steps

A
  1. Record all notes or observations on individual cards or sticky notes
  2. Look for patterns in notes or observations that are related and group them
  3. Create a group for each pattern or theme
  4. Give each theme or group a name
  5. Create a statement of what you learned about each group (provide your analysis or key insight)
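In data terms, steps 2–5 boil down to a mapping from theme names to grouped notes, plus one insight per theme. A toy sketch (hypothetical notes and themes, Python):

```python
# Toy affinity diagram: raw observations tagged with a theme
# (steps 2-4), plus a one-line insight per theme (step 5).
notes = [
    ("Checkout asks for too many fields", "friction"),
    ("Didn't notice the discount-code box", "discoverability"),
    ("Gave up when forced to register", "friction"),
]

diagram = {}
for text, theme in notes:
    diagram.setdefault(theme, []).append(text)

insights = {
    "friction": "Users abandon when the flow demands extra effort.",
    "discoverability": "Key controls are being overlooked.",
}

for theme, group in sorted(diagram.items()):
    print(f"{theme} ({len(group)} notes): {insights[theme]}")
```

In a real session the grouping is done on sticky notes or a whiteboard tool, of course; the point is that the output is groups plus named insights, not the raw notes.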
79
Q

CJM

A

An infographic visualization of the process that a persona segment goes through in order to accomplish a goal.

80
Q

How to analyze CJM

A
  1. Look for points in the journey where expectations are not met
  2. Identify any unnecessary touchpoints or interactions
  3. Identify the low points or points of friction
  4. Pinpoint high-friction channel transitions
  5. Evaluate time spent. In your journey map, provide time durations for the major stages of the journey
  6. Look for moments of truth
  7. Identify high points or points where expectations are met or exceeded
81
Q

About superquotes

A

There is no single right answer here, because everything depends on your data. In general, using a “superquote” or POV (Yura should have covered this in the lectures) is useful. Of course, such a generalized quote will never fit perfectly, but it should capture the essence of the segment and show its key difference from the other segments. If you cannot find such a quote for one of the segments, that is not a problem. Simply explain to the client that there was not enough data at this stage, and that you cannot invent or embellish things yourself (because you are researchers), so one segment goes without a POV.

82
Q

About conclusions after a CJM

A

For a CJM, what matters is a clear understanding of the steps, their sequence, and the main differences between segments. Looking at your journey map, a person who did not conduct the research should arrive at the same key conclusions as you did. The short statements you put on sticky notes at each stage are your micro-conclusions, which nudge the reader toward the overall understanding. Everything else you explain verbally or move to separate slides.

83
Q

When are open and closed surveys used?

A

Open — to learn what we don’t yet know. The start of the design process, the divergent phase.

Closed — to evaluate what we already know. The convergent phase.

84
Q

Ways to apply surveys

A
  • To start, to look around
  • To supplement, for triangulation
  • To evaluate
  • For collective mapping
85
Q

A tool for prioritization

A

Impact effort matrix

86
Q

About POV (point of view)

A

Describes the context. There must always be a conflict: “The user really wants such-and-such, but…”, or “…and yet there is a stereotype.” If there is no conflict, just “The user really wants this,” then it is hard to work with.

87
Q

The rule for choosing the level of abstraction for a JTBD (‘I want to live a happy life’ vs. ‘take a step’)

A

Choose the level at which we can actually influence something. Preferably one step more abstract, to get a wider field of action, but within the bounds of common sense.

88
Q

An alternative (and very good) method of working with JTBD, instead of job stories

A

A forces map

89
Q

About JTBD and archetypes

A

Classically, a user’s rich inner world influences them less than context does. But we can assume that a person’s set of jobs depends on who they are in life (a student, a clerk) and on what their life includes in general. The user’s context depends directly on who the user is.

90
Q

Jobs according to Alan Klement

A

(Link in the materials)

91
Q

Warm-up exercises for a workshop

A
  1. Check-in — everyone shares their current mood and why they feel that way
  2. Everyone draws a Times cover with news about our product
92
Q

What is moderated testing

A

Testing with a facilitator present (testing via remote tools without a facilitator is unmoderated)

93
Q

Data vs. Findings vs. Insights

A

Data refers to unanalyzed user observations, findings capture patterns among data points, and insights are the actionable opportunities based on research and business goals.