Analysis Flashcards
To start, I would conduct a competitive analysis to understand what already exists on the market and identify gaps. Based on this, I might formulate hypotheses about which features could be most beneficial for users. Then, I would run surveys to validate these ideas and gather feedback. After that, I would follow up with user interviews to dig deeper into users’ needs and pain points. Once I have enough solid data, I would begin creating wireframes and iterating on them based on the insights.
Regarding the Habit Tracker app, I would start with a competitive analysis to understand what’s already on the market. I would also review user feedback to identify what’s working well and what isn’t. If there’s room for improvement or a unique feature, I would focus on that to create something distinctive. If we don’t yet have a unique idea, we would need to brainstorm and refine our concept based on the research.
My main idea is that, since there are already many habit tracker apps on the market, we need to create something unique from the start that will surprise users or hook them. This is why it’s important to conduct thorough research to build an MVP with distinct features. Afterward, we should test our wireframes to ensure that users understand the features and UI, and that these elements meet their needs.
Once we have the wireframes, we need to conduct usability testing to ensure the app is easy to use and user-friendly. This step is crucial. Afterward, we can move on to design and test the UI, continuing to refine it before implementation.
Understanding Users: Research helps us identify who our users are, their needs, goals, behaviors, and pain points. This allows us to create more relevant and user-centered designs.
Validating Assumptions: We often have assumptions about what users need or what will work, and research helps us confirm or challenge these assumptions. This reduces the risk of designing a solution based on incorrect or incomplete information.
Identifying Market Gaps: Through competitive analysis and user research, we can spot opportunities in the market where existing products fail to meet users’ needs, allowing us to create something unique.
Guiding Design Decisions: Research provides us with data and insights that inform our design decisions. This could be related to the features to include, the UI style, or the specific functionalities users will find valuable.
Testing Hypotheses: Research allows us to test our ideas early in the process through methods like surveys, interviews, or usability testing. This feedback helps us iterate and refine our concepts before investing heavily in design or development.
Building Empathy: Research helps us see things from the user’s perspective, fostering empathy. This is essential for creating designs that are truly useful and intuitive for the target audience.
In short, research helps us make informed decisions, reduces uncertainty, and ensures that we’re solving real problems for real users in a way that’s both effective and efficient.
Research also plays a significant role in cost reduction, which is worth highlighting:
Avoiding costly redesigns: By conducting research early on, we can identify potential usability issues or flaws in the concept before investing time and resources into design and development. This prevents the need for major revisions down the line.
Focusing on valuable features: Research helps us understand what features users truly need and will engage with. This prevents us from spending resources on features that may not add value or resonate with users, reducing wasted time and effort.
Reducing development time: Having clear insights from user research allows us to focus on building the most important functionalities first, which leads to faster and more efficient development, saving costs associated with unnecessary features.
Prioritizing user needs: By identifying the core problems users face and addressing them early, research helps in designing a more streamlined, effective solution, leading to fewer changes in the later stages of the project, saving both time and money.
Before we design, we need to gather data to understand market needs and competitor offerings.
Market research helps define the target audience and the unique value proposition.
In the case of redesigning an existing product:
We need to review current user feedback and analytics to identify what’s not working.
User feedback analysis helps in finding areas for improvement based on real user experiences.
For a mobile app:
First step: Focus on platform-specific research, such as understanding mobile user behaviors, device limitations, and specific mobile UI/UX best practices.
Example phrase: “Understanding mobile behavior and device constraints is key to designing a user-friendly app.”
Flashcard 3: Mobile research involves understanding specific UI/UX needs for smaller screens and touch interactions.
For a web app:
First step: Start by analyzing cross-platform compatibility and optimizing for both desktop and mobile devices, ensuring responsive design.
Example phrase: “We need to ensure the web app is fully responsive across all devices, with a focus on performance.”
Flashcard 4: Cross-platform compatibility ensures that the app works seamlessly across various screen sizes and devices, providing an optimal user experience on both mobile and desktop.
For a corporate website:
First step: Focus on branding research to understand the company’s identity and how it should be communicated online.
Example phrase: “We need to ensure the design reflects the brand values and clearly communicates the company’s mission.”
Flashcard 5: Branding research involves understanding the company’s voice, tone, and identity to reflect it accurately in the design.
For an e-commerce website:
First step: Conduct user journey mapping to define the typical customer path, from browsing to purchasing, and identify key touchpoints.
Example phrase: “User journey mapping helps us understand how customers interact with the site, from product discovery to checkout.”
Flashcard 6: User journey mapping identifies key stages in the purchasing process to improve the shopping experience.
Attitudinal Methods
A class of research methods that collects self-reported data about users’ perceptions and attitudes. Attitudinal data is based on “what users say.” Surveys, user interviews, and focus groups are attitudinal methods. Attitudinal methods are often contrasted with behavioral methods, which collect data about user actions and behaviors.
A/B Test (A/B Testing)
An analytics method that involves randomly deploying two different versions of a product to two different user groups in order to identify which works best. The winning version is usually selected based on metrics such as conversion rate or click-through rate. To conduct an A/B test, you will need to install specialized analytics software.
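To make the mechanics concrete, here is a minimal sketch in Python: deterministic bucketing by hashed user ID and a two-proportion z-test on conversion rates. The split logic, function names, and numbers are all hypothetical; dedicated A/B-testing tools handle this for you.

```python
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically split users 50/50 by hashing their ID."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: is the difference in conversion rates significant?"""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se

# Hypothetical results: 120/2400 conversions for A, 156/2450 for B
z = z_score(120, 2400, 156, 2450)
print(f"A: {120/2400:.2%}  B: {156/2450:.2%}  z = {z:.2f}")  # |z| > 1.96 -> significant at ~95%
```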
Analytics
A class of research methods that involve collecting real-time usage data for a product. Examples of collected metrics include the number of user visits, the number of clicks on a particular element, percentage of users who took a particular action (e.g., checkout, scroll) on a web page. Analytics research methods are not controlled: the data collected reflects users’ behaviors in their natural environment. Using such methods requires that the product is instrumented with analytics software such as Adobe Analytics or Google Analytics.
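As an illustration, the metrics listed above reduce to simple aggregations over an event log. A minimal sketch, assuming a made-up event schema; tools like Google Analytics collect and aggregate such events automatically:

```python
# Hypothetical raw event log; field names are illustrative.
events = [
    {"user": "u1", "page": "/home", "action": "visit"},
    {"user": "u1", "page": "/pricing", "action": "click_cta"},
    {"user": "u2", "page": "/home", "action": "visit"},
    {"user": "u2", "page": "/checkout", "action": "checkout"},
    {"user": "u3", "page": "/home", "action": "visit"},
]

visits = sum(1 for e in events if e["action"] == "visit")
cta_clicks = sum(1 for e in events if e["action"] == "click_cta")
users = {e["user"] for e in events}
checked_out = {e["user"] for e in events if e["action"] == "checkout"}

print(f"visits: {visits}, clicks on CTA: {cta_clicks}")
print(f"% of users who checked out: {len(checked_out) / len(users):.0%}")
```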
Behavioral Methods
A class of research methods that collect data reflecting users’ actions and behaviors. Unlike attitudinal methods, which are based on “what users say,” behavioral methods are based on “what users do.” Examples of behavioral methods include usability testing and analytics methods.
Card Sorting
A research method in which study participants group individual labels according to criteria that make sense to them. This method helps designers to group items into categories and create an information architecture of a site or application. Card sorting can be “open” (if the categories are not defined in advance of the study and participants group similar items into clusters) or “closed” (if a predefined set of categories is given to participants and they are asked to assign items to these categories).
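Open card-sort results are often analyzed by counting how frequently each pair of cards was placed in the same group (a similarity matrix). A minimal sketch with invented sort data for a habit tracker:

```python
from collections import Counter
from itertools import combinations

# Each participant's open sort: lists of cards they grouped together (hypothetical data)
sorts = [
    [["running", "yoga"], ["reading", "journaling"]],
    [["running", "yoga", "journaling"], ["reading"]],
    [["running", "yoga"], ["reading", "journaling"]],
]

pair_counts = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Pairs grouped together most often suggest categories for the information architecture
for pair, count in pair_counts.most_common():
    print(f"{pair}: grouped together by {count}/{len(sorts)} participants")
```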
Clickstream Analytics
An analytics method that involves analyzing the sequence of pages that users visit as they use a site or application. It can provide insights about potential issues, typical navigation routes, and the content that users interact with right before completing key actions on a site or in an application.
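A minimal sketch of this kind of analysis, assuming sessions are recorded as ordered lists of page paths (hypothetical data for the habit tracker):

```python
from collections import Counter

# Hypothetical session logs: the ordered pages each user visited
sessions = [
    ["/home", "/stats", "/habit/new", "/habit/saved"],
    ["/home", "/habit/new", "/habit/saved"],
    ["/home", "/stats", "/home", "/settings"],
    ["/home", "/habit/new"],
]

# Count page-to-page transitions to surface typical navigation routes
transitions = Counter((a, b) for s in sessions for a, b in zip(s, s[1:]))
for (src, dst), n in transitions.most_common(3):
    print(f"{src} -> {dst}: {n} times")

# What do users interact with right before completing the key action?
before_key = Counter(
    s[i - 1] for s in sessions for i, p in enumerate(s) if p == "/habit/saved" and i > 0
)
print("pages preceding a saved habit:", dict(before_key))
```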
Concept Testing
An attitudinal research method that involves collecting users’ thoughts and attitudes about a product idea (“concept”) in its incipient stages, usually through a qualitative survey. It is used very early in the discovery phase of the design process to understand whether a specific product idea meets users’ needs and expectations.
Diary Study
A research method used to collect self-reported data about user behaviors, activities, and experiences over an extended period that can range from a few days to months. During that period, study participants are asked to keep a diary and log specific information about the activities of interest.
Ethnographic Study
A class of qualitative research methods that involves observing users in their natural habitat. In UX, the term is used as a synonym for “field study.” However, in social sciences, ethnographic studies involve immersion in a particular culture or community, to observe the behaviors and rules of that community.
Eyetracking
A behavioral research method that involves tracking users’ eye movements as they interact with a product or perform a specific activity, to determine where they focus their attention. Eyetracking studies require special equipment to capture participants’ eye movements. Eyetracking data can be used to understand which design elements attract users’ attention and which are ignored.
Five-Second Test
An attitudinal research method in which a study participant is shown a design for five seconds and then asked to describe what they saw. A five-second test is meant to gather users’ first reactions to the aesthetic qualities of a design.
Focus Group
A qualitative, attitudinal research method in which a facilitator conducts a meeting with a group of 6–9 people to discuss their experiences with a product or service. The term “focus” relates to the role of the facilitator, who maintains the group’s focus on certain topics during discussions. Focus groups are used in the early discovery stages of product development to gauge users’ mental models and expectations.
Prototype Testing
A type of usability testing in which the interface being tested is a design prototype rather than a live product. The prototype can be presented to the participant on paper (paper prototyping) or using interactive prototyping software. Prototype testing is used before a design is implemented to identify potential usability issues and fix them, or to explore how alternative design solutions fare with users.
Qualitative Method
A type of research method that collects rich, descriptive data about users’ behaviors, attitudes, and interactions rather than numeric metrics. Such data may identify whether particular aspects of the interface are easy or hard to use. Focus groups and user interviews are examples of qualitative methods. Usability testing can also be qualitative when used to uncover issues in a design.
Quantitative Method
A type of research method that collects metrics such as success, satisfaction, conversion, task time, or number of user visits. Quantitative methods focus on numbers. Examples of quantitative methods include analytics-based methods and quantitative usability testing. In quantitative usability testing, metrics such as task time and success are gathered in order to assess whether particular tasks are easy to perform.
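As a worked example, here is how the two most common quantitative usability metrics, success rate and task time, might be computed. The results are invented; the geometric mean is used for times because task-time data tend to be skewed:

```python
import statistics

# Hypothetical results for one task: (completed?, task time in seconds) per participant
results = [(True, 42.0), (True, 58.5), (False, 120.0), (True, 37.2), (True, 64.8)]

success_rate = sum(ok for ok, _ in results) / len(results)

# Task time is conventionally reported for successful attempts only
times = [t for ok, t in results if ok]
geo_mean_time = statistics.geometric_mean(times)

print(f"success rate: {success_rate:.0%}")
print(f"geometric mean task time: {geo_mean_time:.1f} s")
```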
Survey
A research method in which a participant responds to multiple-choice or open-ended questions that are presented to them online, on paper, or by phone. Surveys are an attitudinal research method that collects participants’ self-reported perceptions and attitudes. Surveys can be used to collect qualitative or quantitative data.
Task Analysis
A research method that studies how users perform a specific task: their goals, the different steps they take, the order in which they do them, when and where they do it, and what information they need during the task. Task analysis often involves a mix of interviews and contextual methods such as contextual inquiry, field studies, or diary studies. It is used to inform the design of complex workflows for a product.
Tree Testing
A task-based research method that evaluates a hierarchical category structure (or tree) by having users find the locations in the tree where specific resources or features can be found. It is an information-architecture method used to assess how well the navigational hierarchy of a site matches users’ expectations.
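A minimal sketch of how tree-test results might be scored, with the candidate hierarchy as nested dicts and invented participant paths:

```python
# Hypothetical navigation tree for the habit tracker
tree = {
    "Habits": {"Add habit": {}, "Archive": {}},
    "Stats": {"Weekly report": {}, "Streaks": {}},
    "Settings": {"Reminders": {}, "Account": {}},
}
correct_path = ["Stats", "Streaks"]

# Sanity check: the correct path actually exists in the tree
node = tree
for step in correct_path:
    node = node[step]

# Invented participant click paths through the tree
participant_paths = [
    ["Stats", "Streaks"],            # direct success
    ["Habits", "Stats", "Streaks"],  # success after backtracking
    ["Settings", "Account"],         # failure
]

direct = sum(p == correct_path for p in participant_paths)
overall = sum(p[-1] == correct_path[-1] for p in participant_paths)
print(f"direct success: {direct}/{len(participant_paths)}")
print(f"overall success: {overall}/{len(participant_paths)}")
```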
User Interview
A one-on-one attitudinal research method in which an interviewer asks a participant questions about a topic, listens to their responses, and follows up with further questions to learn more details. User interviews can be used by themselves in discoveries to inform the early stages of product design or can be combined with other methods such as contextual inquiry and usability testing.
Usability (User) Testing
A research method in which a researcher (called a “facilitator” or a “moderator”) asks a participant to perform tasks, usually using one or more specific user interfaces. While the participant completes each task, the researcher observes the participant’s behavior and listens for feedback. Usability testing can be qualitative or quantitative. Qualitative usability testing is used to identify problems in an interface, whereas quantitative usability testing focuses on collecting metrics that help assess the overall user experience of the product.
Discovery Phase (Understanding the problem, users, and market)
This phase is all about gaining insights into the problem, defining the target audience, and identifying user needs.
Methods to use:
Interviews: Conduct one-on-one conversations with potential users to understand their goals, frustrations, and behaviors.
Example: “Tell me about your current experience with habit trackers.”
Surveys/Questionnaires: Reach out to a broader audience to gather quantitative data and insights.
Example: “What features do you expect most from a habit tracking app?”
Competitive Analysis: Study other products in the market to understand what’s working and what’s not.
Example: Analyzing similar habit tracker apps to identify gaps and opportunities.
Contextual Inquiry: Observe users in their natural environment to see how they interact with similar products or perform relevant tasks.
Ideation Phase (Brainstorming, generating ideas, and defining features)
During ideation, you focus on potential solutions and refine the ideas generated based on user needs.
Methods to use:
Brainstorming Sessions: Gather insights from stakeholders and users to generate ideas for features, interactions, and design directions.
Example: “What features can we add to make habit tracking fun and motivating?”
Card Sorting: Helps organize and structure information in a way that makes sense to users. It’s especially useful for defining app navigation and content hierarchy.
Example: “How should we group different categories of habits in the app?”
Personas: Create user personas based on research data to represent the target audience.
Example: A persona for a busy professional who wants to track habits like exercise and productivity.
Design Phase (Creating wireframes, prototypes, and detailed designs)
This phase involves sketching out solutions, creating wireframes, and building prototypes to visualize how the product will function.
Methods to use:
Wireframe Testing: Conduct usability testing on low-fidelity wireframes to understand how users interact with them and if they can navigate through the flow easily.
Example: Test how users understand the navigation or how they interact with habit-tracking tasks.
Prototyping and Usability Testing: Create interactive prototypes to test specific features and flows, gathering feedback to refine the design.
Example: Test how intuitive it is for users to set a habit goal on the app.
A/B Testing: Test two or more versions of a design to compare performance and understand what users prefer.
Example: Test two different onboarding flows for new users to see which one gets better engagement.
Implementation Phase (Developing the product)
At this stage, the product is being built, and you need to ensure that it’s ready for launch and working as intended.
Methods to use:
Beta Testing: Release the product to a small group of users for early feedback before the full launch.
Example: A small group of users test the app for bugs and provide feedback on usability.
Bug Testing and Feedback Loops: Collect continuous feedback from users during real-world use to identify errors, pain points, and usability issues.
Example: Collect real-world feedback on any errors or frustrations users encounter during habit tracking.
Post-Launch Phase (Gathering feedback and iterating)
After the product is launched, you continue collecting data to improve it based on real user experiences.
Methods to use:
Analytics (Quantitative): Track how users are interacting with the app to identify drop-off points, popular features, and areas needing improvement.
Example: Use heatmaps or track user flow to see where users abandon the app (a minimal funnel sketch follows this list).
Follow-up Surveys: Ask users for feedback on their experience with the product after they’ve been using it for a while.
Example: A survey asking users what features they love most and what could be improved in the habit tracker app.
Customer Support Feedback: Gather insights from customer support interactions to identify common issues or feature requests.
Example: Users may complain about the difficulty of syncing the app across devices.
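As referenced in the analytics item above, a minimal funnel sketch with hypothetical step counts for creating a habit:

```python
# Hypothetical funnel: how many users reached each step of creating a habit
funnel = [
    ("opened app", 1000),
    ("tapped 'new habit'", 620),
    ("filled in details", 480),
    ("saved habit", 450),
]

for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {1 - next_n / n:.0%} drop-off")
```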
Early Stages (Discovery & Ideation):
Focus on qualitative methods (interviews, surveys, personas) to understand needs and generate ideas.
Middle Stages (Design):
Use methods like wireframe testing, prototyping, and A/B testing to refine designs.