Product Execution & Analytics Flashcards

1
Q

What part of the PM role frequently involves analytics?

A

Assessing the impact of a feature

2
Q

Why is it important to predict the impact of features ahead of time (before launch)?

A

So you can make feature prioritization decisions.

3
Q

How do analytics tend to change as a company grows/matures?

A

More mature companies tend to have more, and higher-quality, data available about every aspect of the business, so more and more of it becomes quantified.

4
Q

What are the four common contexts in which analytics appears in a PM interview?

A
  • Assessing a product
  • Optimizing a user flow via funnels
  • Running feature experiments via A/B testing
  • Investigating a metrics anomaly
5
Q

What does the TOFU acronym represent in product analytics?

A

The four major buckets of metrics for monitoring product health.

6
Q

List the four major buckets of product health metrics (TOFU)

A
  • Tech (technical infrastructure)
  • Objects (key objects in the product)
  • Finance (financials/business metrics)
  • Users (user behavior, engagement)
7
Q

Give some examples of technical infrastructure metrics

A
  • Page load times
  • API calls
  • Bandwidth usage
8
Q

Explain what key objects metrics are and give an example.

A

Most products are centered on a few key objects: things the user transacts with or manipulates, often pieces of content that may or may not be user generated. These metrics are often used to optimize the content of the product (rather than its structure).

For example, a streaming app like Spotify might track how many songs there are, what proportion of the overall library gets played, or which songs are most popular.

9
Q

Give some examples of financial/business metrics.

A

Revenue, customer acquisition cost (CAC), churn rate, lifetime value.

10
Q

What’s the difference between annual recurring revenue and annual run rate?

A

Annual recurring revenue represents actual recurring revenue commitments from customers, while annual run rate represents an annualized projected revenue that may not be sourced from subscription revenue.
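
The distinction can be made concrete with a small sketch (all figures hypothetical):

```python
# Hypothetical figures. ARR annualizes only committed subscription
# revenue; run rate annualizes total revenue from a recent period,
# including non-recurring sources.
monthly_subscription_revenue = 90_000
monthly_one_time_revenue = 30_000

arr = monthly_subscription_revenue * 12  # -> 1080000
run_rate = (monthly_subscription_revenue + monthly_one_time_revenue) * 12  # -> 1440000
print(arr, run_rate)
```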

11
Q

What is the HEART framework for the categories of user behavior metrics?

A
  • Happiness
  • Engagement
  • Adoption
  • Retention
  • Task Success
12
Q

Give an example of a user happiness metric.

A

Net Promoter Score (“On a scale of 0-10, how likely are you to recommend this product to a friend or colleague?”)
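
As a sketch, the standard NPS calculation (promoters score 9-10, detractors 0-6, passives 7-8) looks like this, with a hypothetical set of survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey: 4 promoters, 1 passive, 3 detractors.
print(nps([10, 9, 9, 10, 8, 5, 6, 3]))  # -> 12.5
```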

13
Q

What’s the difference between an adoption metric and an engagement metric?

A

Adoption metrics tend to be near the top of the funnel (number of sign-ups) while engagement metrics tend to be more mid-funnel (e.g. daily active users).

14
Q

How is conversion rate defined?

A

It’s effectively the fraction of users who make it from one funnel stage to the next, but it is up to product managers to define which two stages most meaningfully constitute “the conversion rate.”
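
A minimal sketch with hypothetical funnel counts, computing every stage-to-stage rate a PM could choose between:

```python
# Hypothetical funnel counts; which stage-to-stage rate gets labeled
# "the conversion rate" is a product decision.
funnel = {"visit": 10_000, "signup": 1_200, "activate": 600, "purchase": 150}

stages = list(funnel)
for a, b in zip(stages, stages[1:]):
    print(f"{a} -> {b}: {funnel[b] / funnel[a]:.1%}")
```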

15
Q

Explain the “task success” part of the HEART framework of user behavior metrics.

A

This subcategory of metrics captures how many users “succeed” at receiving the core value of the product. It is up to the product manager to decide what success means for any given product. For AirBnB it may be the percentage of users that book a stay.

16
Q

What makes an experiment successful?

A

It was well defined, correctly executed, and provided an accurate, statistically significant insight.
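
One common way to check significance for a two-variant conversion experiment is a pooled two-proportion z-test; a sketch with hypothetical numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic comparing two conversion rates, using a pooled rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 4.8% vs 5.6% conversion on 10k users each.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level (two-sided)
```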

17
Q

What are the benefits of explicitly stating a hypothesis before performing an experiment?

A

It forces clarity, gives team members a chance to make a prediction or “put a stake in the ground”, and helps ensure that you set up the treatments in a way that tests the hypothesis.

18
Q

When choosing treatments for an experiment, what two factors should limit how many treatments you select?

A

How many treatments you actually need, and whether you have enough users to properly evaluate all those treatments.

19
Q

What is a perverse incentive in product experimentation that you should look out for?

A

The incentive to make experiments appear successful (which may cause you to misinterpret the results or change which experiments you choose to do).

20
Q

What is a good MECE framework for beginning to investigate a metric anomaly?

A

Dividing potential causes into internal factors and external factors.

21
Q

Give some examples of internal factors that could explain a metrics anomaly.

A
  • Data integrity (e.g., is the metric correctly reported?)
  • New release (e.g., did we release code that broke something?)
  • Experiments (e.g., is an experiment having negative externalities?)
  • Infrastructure failure (e.g., are our own systems going down?)
22
Q

Give some examples of external factors that could explain a metrics anomaly.

A
  • User driven events (e.g., is there a flood of user driven activity on our system due to some event?)
  • Holidays / major cultural events (e.g., is it July 4th or the Super Bowl?)
  • Third party infrastructure (e.g., is there an AWS outage?)
  • Security breaches (e.g., are we under DDoS or some other type of attack?)
23
Q

When investigating a metrics anomaly, what sort of probing questions could you ask to find out more about which users are impacted?

A
  • Is it affecting all users or just some subset of users?
  • Is the effect isolated to certain platforms (e.g., only on iOS, Android, or web)?
  • Is it dependent on the browser the user uses?
  • Is it dependent on the version of the app they’re on?
24
Q

When investigating a metrics anomaly, what sort of probing questions could you ask to find out more about the nature of the change?

A
  • Did the change occur gradually or suddenly?
  • Have we seen a change like this in the past? If so, in what context, or what was the cause?
25
Q

When investigating a metrics anomaly, what are some probing questions one could ask to learn more about how the change was discovered?

A
  • Did a customer submit a bug report or service ticket? What else does the customer know?
  • Was the change noticed in our dashboards?
  • Is the metric derived from simpler metrics that we can decompose it into?
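
Such a decomposition can be sketched like this, assuming a simple hypothetical model where revenue = visitors × conversion rate × average order value:

```python
# Hypothetical numbers: decomposing a compound metric localizes a dip
# to one factor (here, conversion rate fell while the others held).
def revenue(visitors, conversion_rate, avg_order_value):
    return visitors * conversion_rate * avg_order_value

before = revenue(100_000, 0.030, 40.0)
after = revenue(100_000, 0.021, 40.0)
print(round(before), round(after))  # -> 120000 84000
```
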
26
Q

List examples of changes relating to a product that could explain a metrics anomaly.

A
  • New code
  • Experiments
  • Updates to infrastructure
  • New customer support policies
  • Changes in competitors’ products
  • New press coverage or other publicity
27
Q

List four common applications of funnels.

A
  • Sign up
  • Conversion
  • Sharing
  • Purchase
28
Q

What does LTV stand for and what does it mean?

A

Lifetime value. It refers to the sum of discounted cash flows that the company expects to receive from the average customer.
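
A minimal sketch of LTV as a sum of discounted expected cash flows (parameter names and figures are illustrative; exact definitions vary by company):

```python
def ltv(annual_margin, retention, discount_rate, years=20):
    """Sum of discounted expected cash flows from an average customer.

    In year t the customer is still around with probability
    retention**t, and year-t cash is discounted by (1 + discount_rate)**t.
    """
    return sum(
        annual_margin * retention**t / (1 + discount_rate) ** t
        for t in range(years)
    )

print(round(ltv(annual_margin=100, retention=0.8, discount_rate=0.10), 2))
```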

29
Q

Does the LTV formula account for customer acquisition cost (CAC)?

A

The way that Rocketblocks defines it, yes. But in general just make sure you are clear about how you intend to use it.

30
Q

What is a north star metric?

A

A high level metric that indicates the health of a product or component of a product that a particular team is responsible for. E.g. nights booked for AirBnB.

31
Q

When asked “what would you measure”, you should make sure that the metric you choose is…

A
  • Specific
  • Relevant
  • High Priority / Focused
32
Q

What is an “aha moment” metric, and why is it useful?

A

A user behavior that is a highly specific (and, ideally, sensitive) indicator that the user will be retained. It is especially useful because it’s a leading indicator of retention, which normally takes lots of time to measure.