Web Analytics & Testing Flashcards

1
Q

Web Analytics Five Key Dimensions

A
Clickstream
Outcomes
Experimentation
Voice of the Customer (UGC, social listening, online ratings)
The Competition
2
Q

Paywall

A

Subscribers get content or features that free customers do not.

3
Q

Freemium

A

A lower-quality version or a limited amount of the product is offered for free; users pay to upgrade.

4
Q

Clickstream

A
Only the first level of information; insights must be amplified with the other dimensions.
What pages did people visit?
What products did people purchase?
What was the average time spent?
What sources did they come from?
5
Q

Web Analytics 2.0

A
Clickstream
- What pages did people visit? Etc.
Outcomes
- How much revenue was generated?
- How much were costs reduced?
- How much loyalty do users show?
Experimentation
- Why do users behave as they do?
- What drives behavior?
Voice of the Customer
- Ask the customer: surveys, usability testing, UGC.
The Competition
- Competitive intelligence, industry benchmarks.
6
Q

Outcomes

A

How much revenue was generated?
How much were costs reduced?
How much loyalty do users show?

7
Q

Experimentation

A

Why do users behave as they do?

What drives behavior?

8
Q

Voice of the Customer

A

Ask the customer.

Surveys, Usability Testing, UGC.

9
Q

Time on site

A

Be careful: depending on the site's goal, a low time on site may be exactly what you want for customer satisfaction (e.g., users finding answers quickly).

10
Q

The Competition

A

Competitive intelligence, industry benchmarks.

11
Q

Click Stream Foundational Metrics

A
Visitors / Unique visitors
- Pay attention to the definition of "unique" (cookie? date? IP?)
Time on site
- Tricky! Should consider the goal of the site
Page views
- Good for content/brand sites; unclear for other sites
- Increasingly outdated (blogs, Gmail, Flash, dynamic content)
Session data
- PV/Session
- Bounce rate ("One-and-Dones"): % of single-page visits (or % of <5-second visits); filters out non-visitors to reveal the real ones (see the sketch below)
Segment, segment, segment!
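Since PV/session and both bounce-rate definitions from this card are simple ratios over session data, a minimal Python sketch can make them concrete. The session records and field names below are invented for illustration, not any specific analytics API.

```python
# Minimal sketch: PV/session and bounce rate from raw session data.
# Session records are invented for illustration.
sessions = [
    {"id": "s1", "pageviews": 5, "duration_sec": 320},
    {"id": "s2", "pageviews": 1, "duration_sec": 3},
    {"id": "s3", "pageviews": 1, "duration_sec": 45},
    {"id": "s4", "pageviews": 8, "duration_sec": 610},
]

pv_per_session = sum(s["pageviews"] for s in sessions) / len(sessions)

# The two bounce definitions from the card: single-page visits,
# or visits shorter than 5 seconds.
bounce_single_page = sum(1 for s in sessions if s["pageviews"] == 1) / len(sessions)
bounce_under_5s = sum(1 for s in sessions if s["duration_sec"] < 5) / len(sessions)

print(f"PV/session: {pv_per_session:.2f}")
print(f"Bounce rate (single page): {bounce_single_page:.0%}")
print(f"Bounce rate (<5s): {bounce_under_5s:.0%}")
```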
12
Q

GA

A

Google Analytics

13
Q

Segmenting Clickstream

A

Don’t analyze the average user; look at the important segments, even for basic analytics.

14
Q

Goals for Clickstream!

A
“Unique Visitors” tends to be THE metric to follow, BUT instead:
- Set up Goals and measure Conversion Rate and Goal Value (Settings → Edit → Goal)
- Segment by:
  - Referring sites
  - Search engines + keywords
  - AdWords campaigns
- Analyze for leads!
  - e.g., “Wikipedia referrals are more engaged and have a low bounce rate”
15
Q

Clickstream content analytics

A
Top Content
- Why users are coming; what they are looking for
Top Landing (Entry) Pages
- The first impression! Polish them and direct users toward goals
Click Density Analysis
- e.g., CrazyEgg.com
Funnel Analysis (see the sketch below)
- In multi-page processes, where do users abandon?
- Mortgage application at Agency.com → moved the personal-information form later in the flow, after the complex mortgage work
- Abandoned carts/purchases at Lane Bryant → offered free shipping; surveys revealed sticker shock at the shipping price
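Funnel analysis boils down to computing the step-to-step drop-off. A minimal sketch with invented step names and counts:

```python
# Minimal funnel-analysis sketch: invented counts for a multi-page checkout.
funnel = [
    ("view_cart", 10_000),
    ("shipping_info", 6_200),
    ("payment_info", 3_100),
    ("confirm_purchase", 2_800),
]

# Compare each step with the next to find where users abandon.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    drop = 1 - next_n / n
    print(f"{step} -> {next_step}: {next_n}/{n} continue ({drop:.0%} abandon)")
```

The step with the largest abandonment percentage is the one to redesign first, as in the mortgage and shipping examples above.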
16
Q

You can’t manage…

A

You can’t manage what you don’t measure

17
Q

Clickstream referral metrics

A

Where are users coming from?
- How does traffic from different referrers behave?
- How well is this measured?
- What about Social Referrals?

18
Q

Bot Traffic

A

- >60% of traffic is non-human
- Search-engine (good) bots (~30%): indexing the Web
- Bad bots: impersonators, spammers, hackers, scrapers
- You must NOT include the bots in your analytics
  - Otherwise you’ll tailor your analysis to bots and optimize to increase them
  - Bot-detection algorithms are needed; these tend to be fairly accurate (~80-90%)
  - Off-the-shelf algorithms can be downloaded from universities, or data scientists can build one
  - Google Analytics can also remove bots (fairly accurate)
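As a toy illustration only (the detection algorithms the card refers to use behavioral signals, not just user-agent strings), a naive filter might drop hits whose user agent matches known crawler signatures. The marker list and record shape are invented:

```python
# Naive bot filter: a toy illustration, NOT a real detection algorithm.
# It only catches self-declared bots; impersonators would slip through.
KNOWN_BOT_MARKERS = ("googlebot", "bingbot", "crawler", "spider", "scraper")

def is_probable_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)

hits = [
    {"ua": "Mozilla/5.0 (compatible; Googlebot/2.1)"},
    {"ua": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
]
human_hits = [h for h in hits if not is_probable_bot(h["ua"])]
print(f"{len(human_hits)} of {len(hits)} hits kept after bot filtering")
```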

19
Q

Dark Social in Social Referrals

A

Traffic that we know comes from social, but whose referral source is not passed to the website (70%+ of social referrals).

20
Q

5 Critical App Metrics @ Humin

A

Growth
- DAU/MAU
- # Users
- # Relationships
Retention (see the cohort sketch below)
- A7, A15, A30, A45 (how many new starters are still with us X days later)
- Measured in daily cohorts
Engagement: key user actions / DAU
- Profile opens
- Call/Text
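The AX retention numbers are just “what fraction of a daily signup cohort is active again X days later.” A minimal sketch, with invented data, assuming AX counts users active on day X exactly (one common reading of the card):

```python
# Minimal cohort-retention sketch: AX = fraction of a daily signup
# cohort active again X days after signing up. Data is invented.
from datetime import date, timedelta

signups = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 1), "u3": date(2024, 1, 1)}
active_days = {
    "u1": {date(2024, 1, 8), date(2024, 1, 31)},
    "u2": {date(2024, 1, 8)},
    "u3": set(),
}

def retention(cohort_day: date, offset: int) -> float:
    cohort = [u for u, d in signups.items() if d == cohort_day]
    target = cohort_day + timedelta(days=offset)
    still_here = sum(1 for u in cohort if target in active_days[u])
    return still_here / len(cohort)

for x in (7, 30):
    print(f"A{x} for 2024-01-01 cohort: {retention(date(2024, 1, 1), x):.0%}")
```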

21
Q

DAU

A

Daily active users

22
Q

Humin Aha! moments

A

Profile Opens, Call/Text swipes, Voicemail and Location Services (Enablers)

23
Q

FUE Funnel

A

First user experience funnel

24
Q

Importance of Randomization

A

About 90% of randomized experiments can be replicated, versus only about 20% of non-randomized studies (Jim Manzi, Uncontrolled).

25
Q

A/B vs. multivariate testing

A

A/B testing varies one element at a time (e.g., Zynga optimizing its customer-acquisition funnel).
Multivariate testing randomizes several elements at once to measure the impact of each module and the interactions between them, finding the best combination (see the sketch below).
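To make the difference concrete, here is a sketch of full-factorial multivariate assignment. The factor names are invented, and seeding a RNG with the user ID is just one simple way to keep assignments stable per user:

```python
# Sketch: full-factorial multivariate assignment. A/B would randomize a
# single factor; multivariate crosses them all, so interactions between
# modules can be measured. Factor names are invented.
import itertools
import random

factors = {
    "headline": ["short", "long"],
    "button_color": ["green", "blue"],
    "layout": ["one_column", "two_column"],
}

# Every combination is a test cell: 2 x 2 x 2 = 8 cells here.
cells = list(itertools.product(*factors.values()))

def assign(user_id: str) -> dict:
    # Seeding with the user ID keeps each user in a stable cell.
    rng = random.Random(user_id)
    return dict(zip(factors, rng.choice(cells)))

print(assign("user-42"))
```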

26
Q

The cost of knowledge

A

A/B testing always exposes some users to the lower-performing version for some period of time; that exposure is the price of learning which version is better.

27
Q

Segments and testing

A

Analyze test results by segment post hoc (after the experiment) to see whether the effect differs across important segments.

28
Q

Stratified Random Sampling

A

Random sampling within strata, weighted by each stratum's population share (or deliberately over-weighting small segments so they become measurable).
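A minimal sketch of proportional stratified sampling; the strata and counts are invented. Over-weighting would simply inflate the allocation for a small stratum:

```python
# Sketch: stratified random sampling with proportional allocation.
# Strata and sizes are invented.
import random

population = {
    "mobile": [f"m{i}" for i in range(800)],
    "desktop": [f"d{i}" for i in range(180)],
    "tablet": [f"t{i}" for i in range(20)],
}
total = sum(len(v) for v in population.values())
sample_size = 100

sample = []
for stratum, users in population.items():
    # Proportional allocation: 80/18/2 here. Over-weighting a small
    # stratum (e.g., tablet) would replace this k with a larger quota.
    k = round(sample_size * len(users) / total)
    sample.extend(random.sample(users, k))

print(f"Sample of {len(sample)} drawn across {len(population)} strata")
```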

29
Q

Best Test Practice OEC

A

Establish the Overall Evaluation Criterion (OEC):
- Agree early on what you are optimizing
- Getting agreement on the OEC in the org is a huge step forward
- Suggestion: optimize for customer lifetime value, not immediate short-term revenue/growth
- The criterion could be a weighted sum of factors (see the sketch below), such as:
  - Time on site (per time period, say week or month)
  - Visit frequency
- Report many other metrics for diagnostics, i.e., to understand why the OEC changed and to raise new hypotheses
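If the OEC is a weighted sum of factors like time on site and visit frequency, scoring each variant is straightforward. The weights and metric values below are invented; in practice metrics would usually be normalized before weighting, and the weights encode what the org agreed to optimize:

```python
# Sketch: OEC as a weighted sum of factors. Weights and values are
# invented for illustration; real metrics should be normalized first.
weights = {"time_on_site_min_per_week": 0.4, "visits_per_week": 0.6}

variants = {
    "control":   {"time_on_site_min_per_week": 22.0, "visits_per_week": 3.1},
    "treatment": {"time_on_site_min_per_week": 21.5, "visits_per_week": 3.4},
}

for name, metrics in variants.items():
    oec = sum(weights[k] * metrics[k] for k in weights)
    print(f"{name}: OEC = {oec:.2f}")
```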

30
Q

Best Test Practice

A

Run A/A tests: simple, but highly effective.
- Before trusting that two variants are truly different on the OEC at 95% significance, validate operations, execution, sampling, etc. EVERY TIME you run an experiment.
- Run an experiment where the Treatment and Control variants are coded identically and validate the following:
  1. Are users split according to the planned percentages?
  2. Does the data collected match the system of record?
  3. Are the results non-significant 95% of the time? (simulated in the sketch below)
- This is a powerful technique for finding problems:
  - Generating some numbers is easy
  - Getting correct numbers you trust is much harder!
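A minimal simulation of check 3, assuming a continuous metric compared with SciPy's two-sample t-test: when both "variants" draw from the same distribution, a 95%-level test should flag a difference only about 5% of the time. The metric distribution and sizes are invented:

```python
# Sketch: simulated A/A tests. With identically coded variants, a
# 95%-level test should report significance only ~5% of the time.
import random
from scipy import stats

random.seed(0)
false_positives = 0
n_tests, n_users = 1000, 500

for _ in range(n_tests):
    # Both "variants" draw from the same distribution: an A/A split.
    a = [random.gauss(10, 2) for _ in range(n_users)]
    b = [random.gauss(10, 2) for _ in range(n_users)]
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"Significant results: {false_positives / n_tests:.1%} (expect ~5%)")
```

A rate far from 5% means the split, logging, or statistics are broken, which is exactly the kind of problem this practice is meant to surface.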

31
Q

Best Test Practice: Ramp Up

A

Ramp-up:
- Start the experiment at 0.1% of traffic
- Do some simple analyses to make sure no egregious problems can be detected
- Ramp up to a larger percentage, and repeat until 50%
- Minimum sample size is “quadratic in the effect” we want to detect (see the sketch below):
  - Detecting a 10% difference requires only a small sample, and serious problems can be detected during ramp-up
  - Detecting a 0.1% difference requires a population 100^2 = 10,000 times bigger
- Abort the experiment if the treatment is significantly worse on key metrics
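The "quadratic in the effect" rule falls out of a standard sample-size rule of thumb, roughly n ≈ 16σ²/Δ² per group for 80% power at two-sided α = 0.05 (the constant varies with power and significance level): halving the detectable effect Δ quadruples n. A small sketch with assumed σ and baseline:

```python
# Sketch: minimum sample size is quadratic in the detectable effect.
# Rule of thumb per group: n ~= 16 * sigma^2 / delta^2
# (two-sided alpha = 0.05, power = 0.8; the constant 16 is approximate).
def n_per_group(sigma: float, delta: float) -> float:
    return 16 * sigma**2 / delta**2

sigma = 1.0      # assumed standard deviation of the metric
baseline = 10.0  # assumed baseline metric value
for pct in (0.10, 0.01, 0.001):
    delta = pct * baseline
    print(f"detect {pct:.1%} change: n ~= {n_per_group(sigma, delta):,.0f} per group")
```

Going from a 10% to a 0.1% detectable effect multiplies n by 100² = 10,000, matching the card.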

32
Q

Min sample size is quadratic in the effect we want to detect

A

- Detecting a 10% difference requires a small sample
- Detecting a 0.1% difference requires a population 100^2 = 10,000 times bigger than for a 10% difference
- Abort the experiment if the treatment is significantly worse on key metrics

33
Q

How to see your travel history on the internet

A

Google Maps Timeline