A/B Testing Flashcards
Sample Size Equation
n = Z^2 * p * (1 - p) / (margin of error)^2, where Z is the z-value for the chosen confidence level, p is the proportion with the feature, and 1 - p is the proportion without it.
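A minimal sketch of this calculation, assuming SciPy is available (the function name sample_size_per_group and the example numbers are illustrative):

```python
import math

from scipy.stats import norm


def sample_size_per_group(baseline_rate, margin_of_error, confidence=0.95):
    """Sample size needed to estimate a proportion within a given margin of error."""
    z = norm.ppf(1 - (1 - confidence) / 2)  # z-value for the chosen confidence level
    p = baseline_rate                       # proportion with the feature
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)


# e.g. 5% baseline rate, +/-1% margin of error, 95% confidence -> about 1825 users
print(sample_size_per_group(0.05, 0.01))
```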
Type I Error
The null hypothesis is true but is rejected (a false positive).
Type II Error
The null hypothesis is false but is not rejected (a false negative).
Confidence Interval
The range of values expected to contain the true effect, at a confidence level equal to the probability of not making a Type I error (1 - alpha); common levels are 90%-95%.
Null Hypothesis
The hypothesis that the control and treatment have the same impact, i.e., that the tested feature makes no difference.
Statistical Power
The probability of finding a statistically significant result when the null hypothesis is false.
Rejecting the Null Hypothesis
Produces a statistically significant result for the tested feature, i.e., evidence that there is a difference between the treatment and the control.
Relationship between CI and Test Sample
If you want a higher confidence level, you will need a larger sample size.
Definition of Power
The probability that the null hypothesis is rejected when it is false; the bigger the sample size, the bigger the power, in general. Power = 1 - beta, where beta is the Type II error rate.
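A rough sketch of how power grows with sample size for a two-proportion z-test, assuming SciPy is available (the function name and the conversion rates are illustrative):

```python
import math

from scipy.stats import norm


def power_two_proportions(p_control, p_treatment, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    effect = abs(p_treatment - p_control)
    p_bar = (p_control + p_treatment) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)  # SE assuming H0
    se_alt = math.sqrt(p_control * (1 - p_control) / n_per_group
                       + p_treatment * (1 - p_treatment) / n_per_group)  # SE under H1
    z_crit = norm.ppf(1 - alpha / 2)
    # Probability the test statistic clears the critical value when the effect is real
    return (norm.cdf((effect - z_crit * se_null) / se_alt)
            + norm.cdf((-effect - z_crit * se_null) / se_alt))


# Power (1 - beta) rises as the per-group sample size grows
for n in (1000, 5000, 20000):
    print(n, round(power_two_proportions(0.05, 0.06, n), 3))
```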
Alpha - Power
Alpha is the significance level: the acceptable probability of a Type I error, equal to 1 minus the confidence level. The p-value computed from the test data must fall below alpha for us to reject the null hypothesis.
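To illustrate the decision rule, a hedged sketch of a two-proportion z-test whose p-value is compared against alpha (the counts and the function name are made up for the example):

```python
import math

from scipy.stats import norm


def two_proportion_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for a two-proportion z-test with a pooled standard error."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))


alpha = 0.05  # significance level = 1 - confidence level
p_value = two_proportion_p_value(200, 4000, 250, 4000)
print(p_value, "reject H0" if p_value < alpha else "fail to reject H0")
```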
Beta - Power
Beta is the Type II error rate: the probability of failing to detect a real effect of the feature in the population sample.
Assumptions of Power
Beta tells us the chance that a real effect of the feature is missed in the sample set (e.g., beta = 0.2 means the effect is missed 20% of the time, and power is 0.8 in this case).
Jacob Cohen
Suggested that, for most researchers, a Type I error is about four times as serious as a Type II error, which is why alpha = 0.05 is commonly paired with beta = 0.20.
General practice for Power
A power of 0.8 is generally considered enough for an experiment, since the sample size required to push power (or confidence) higher grows disproportionately.
Critical value
The threshold that the test statistic (e.g., a t-statistic) must exceed for the null hypothesis to be rejected; for a t-test it depends on the degrees of freedom and the chosen alpha.
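For example, the critical value for a two-sided test can be looked up with SciPy (the alpha and degrees of freedom below are placeholders):

```python
from scipy.stats import norm, t

alpha = 0.05  # two-sided significance level
df = 98       # e.g. n1 + n2 - 2 for a two-sample t-test

t_crit = t.ppf(1 - alpha / 2, df)   # critical t-value for small samples
z_crit = norm.ppf(1 - alpha / 2)    # critical z-value for large samples

print(f"reject H0 if |t| > {t_crit:.3f} (or |z| > {z_crit:.3f} for large n)")
```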