Stats Flashcards
Conditions for a binomial distribution
- There is a fixed number of trials, each with exactly two outcomes (success/failure)
- The probability of success remains constant
- The trials are independent
- The data is discrete
AND (both events occur)
Multiply the probabilities
OR (either event occurs)
Add the probabilities
by default, how does the binomial cumulative distribution (tables/calculator) measure probabilities?
using ≤, i.e. P(X ≤ x), so other inequalities must be rewritten in that form
P(X < x) =
P(X ≤ x - 1)
P(a ≤ X ≤ b) =
P(X ≤ b) - P(X ≤ a - 1)
P(X ≥ x) =
1 - P(X ≤ x - 1)
P(X > x) =
1 - P(X ≤ x)
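A quick numerical check of the four identities above, using scipy.stats.binom (an assumed choice of library) with illustrative values n = 20, p = 0.3, x = 5, a = 3, b = 8:

```python
from scipy.stats import binom

# Illustrative values (assumed for the example): X ~ B(20, 0.3)
n, p = 20, 0.3
x, a, b = 5, 3, 8

cdf = lambda k: binom.cdf(k, n, p)                    # cumulative probability P(X <= k)

print(binom.pmf(range(x), n, p).sum(), cdf(x - 1))    # P(X < x)       = P(X <= x-1)
print(binom.pmf(range(a, b + 1), n, p).sum(),
      cdf(b) - cdf(a - 1))                            # P(a <= X <= b) = P(X <= b) - P(X <= a-1)
print(binom.sf(x - 1, n, p), 1 - cdf(x - 1))          # P(X >= x)      = 1 - P(X <= x-1)
print(binom.sf(x, n, p), 1 - cdf(x))                  # P(X > x)       = 1 - P(X <= x)
```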
Where do points of inflection occur on a normal distribution curve
1 standard deviation from the mean
standardised normal variable (z-score)
z = (x - μ) / σ
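A minimal sketch of standardising a value, with assumed illustrative numbers X ~ N(165, 5²) and an observation x = 172; scipy.stats.norm is used only to show that the standardised and unstandardised probabilities agree.

```python
from scipy.stats import norm

# Illustrative numbers (assumed): X ~ N(165, 5^2), observed value x = 172
x, mu, sigma = 172, 165, 5

z = (x - mu) / sigma                                   # z = (x - mu) / sigma
print(z)                                               # 1.4 standard deviations above the mean
print(norm.cdf(z), norm.cdf(x, loc=mu, scale=sigma))   # same cumulative probability either way
```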
Normal approximation to binomial test
np>5
nq>5
Normal approximation to binomial conditions
n must be large
p must be close to 1/2
Normal approximation to the binomial
X ~ N(np, npq)
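A sketch comparing an exact binomial probability with its normal approximation, under assumed values n = 100, p = 0.4 (so np and nq are both well above 5); a continuity correction of 0.5 is applied.

```python
from scipy.stats import binom, norm

# Assumed illustrative values: n large, p near 1/2, np > 5 and nq > 5
n, p = 100, 0.4
q = 1 - p
mu, var = n * p, n * p * q                          # approximate X ~ B(n, p) by N(np, npq)

k = 45
exact = binom.cdf(k, n, p)                          # exact P(X <= 45)
approx = norm.cdf(k + 0.5, loc=mu, scale=var**0.5)  # normal approximation with continuity correction
print(exact, approx)                                # close agreement
```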
μ ± σ
approximately 2/3 (68%) of the data
μ ± 2σ
approximately 95% of the data
μ ± 3σ
approximately 99.7% of the data
standard normal distribution
mean = 0 , standard deviation = 1
npq =
variance of a binomial distribution (standard deviation²)
When is a stem-and-leaf diagram a good choice?
when representing small amounts of discrete data
Outlier =
a value outside μ ± 2σ
or
above UQ + 3/2 × IQR
or below LQ - 3/2 × IQR
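A small sketch applying both outlier rules to a made-up data set (the numbers are assumptions chosen so one value is clearly extreme); note that numpy's percentile convention can differ slightly from the textbook quartile method.

```python
import numpy as np

# Made-up data set (assumed) with one obvious outlier
data = np.array([2, 3, 4, 4, 5, 5, 6, 7, 8, 25])

# Rule 1: more than 2 standard deviations from the mean (outside mu +- 2*sigma)
mu, sigma = data.mean(), data.std()
rule1 = data[(data < mu - 2 * sigma) | (data > mu + 2 * sigma)]

# Rule 2: below LQ - 3/2 * IQR or above UQ + 3/2 * IQR
lq, uq = np.percentile(data, [25, 75])
iqr = uq - lq
rule2 = data[(data < lq - 1.5 * iqr) | (data > uq + 1.5 * iqr)]

print(rule1, rule2)   # both rules flag 25 here
```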
When is the median + IQR better than mean and standard deviation?
when there are outliers.
the IQR and median are not affected by outliers
standard deviation from a frequency table
√((Σx²f/Σf) - x̅²)
standard deviation
√((Σx²/n) - x̅²)
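The two standard deviation formulas above can be checked against numpy's population standard deviation; the raw data and the frequency table below are made up for illustration.

```python
import numpy as np

# Made-up raw data (assumed) for the ungrouped formula
x = np.array([3, 5, 5, 6, 8, 9])
sd = np.sqrt((x**2).sum() / len(x) - x.mean()**2)
print(sd, x.std())                          # matches numpy's population sd

# Made-up frequency table (assumed) for the frequency-table formula
values = np.array([1, 2, 3, 4])
f = np.array([5, 8, 4, 3])
xbar = (values * f).sum() / f.sum()
sd_f = np.sqrt((values**2 * f).sum() / f.sum() - xbar**2)
print(sd_f, np.repeat(values, f).std())     # matches sd of the expanded data
```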
frequency density =
frequency / class width
estimating the mean for grouped data
Σ(f × midpoint) / Σf
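A short sketch of the two grouped-data cards above (frequency density and the estimated mean), using an assumed frequency table with equal class widths of 10.

```python
import numpy as np

# Assumed grouped frequency table: classes 0-10, 10-20, 20-30, 30-40
midpoints = np.array([5, 15, 25, 35])
freq = np.array([4, 9, 6, 1])

frequency_density = freq / 10                           # frequency / class width
estimated_mean = (freq * midpoints).sum() / freq.sum()  # sum(f * midpoint) / sum(f)
print(frequency_density, estimated_mean)
```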
position of the nth percentile
(n/100) × total frequency
lower class boundary / upper class boundary
if ≤ / ≥ then it is the actual value
if < / > then it is the value ± 0.5
plotting cumulative frequency
always plot against the upper class boundary
P(A∪B) =
P(A) + P(B) - P(A∩B)
P(A|B) =
P(A∩B) / P(B)
P(A∩B) for mutually exclusive
= 0
For independent events P(A∩B) =
P(A)P(B)
For independent events P(A|B) =
P(A)
sum of probabilities in a probability distribution =
1
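A toy check of the probability rules above on an assumed sample space of two fair dice, where every probability can be counted exactly; Fraction is used to avoid rounding.

```python
from itertools import product
from fractions import Fraction

# Assumed toy sample space: two fair dice, 36 equally likely outcomes
omega = list(product(range(1, 7), repeat=2))
P = lambda event: Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 6              # first die shows a six
B = lambda w: w[0] + w[1] >= 10      # total is at least ten

print(P(lambda w: A(w) or B(w)) == P(A) + P(B) - P(lambda w: A(w) and B(w)))  # addition rule
print(P(lambda w: A(w) and B(w)) / P(B))                                      # P(A|B) = P(A n B) / P(B)

C = lambda w: w[1] == 1              # second die shows a one (independent of A)
print(P(lambda w: A(w) and C(w)) == P(A) * P(C))   # independence: P(A n C) = P(A)P(C)
print(P(lambda w: True) == 1)                      # probabilities over the whole space sum to 1
```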
nCr =
n! / (r!(n-r)!)
how many ways can n objects be arranged
n!
How many ways can n objects be arranged if r of those objects are the same?
n!/r!
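A quick check of the counting formulas with Python's math module; the values n = 8, r = 3 and the 5-object example are assumptions for illustration.

```python
from math import comb, factorial

n, r = 8, 3
print(comb(n, r), factorial(n) // (factorial(r) * factorial(n - r)))  # nCr both ways: 56

print(factorial(5))                    # arrangements of 5 distinct objects: 5! = 120
print(factorial(5) // factorial(3))    # 5 objects where 3 are identical: 5!/3! = 20
```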
For binomial: P(X = r) =
nCr × p^r × (1 - p)^(n - r)
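A sketch verifying the binomial probability formula against scipy's built-in pmf, with assumed values n = 10, p = 0.3, r = 4.

```python
from math import comb
from scipy.stats import binom

# Assumed illustrative values: X ~ B(10, 0.3), exactly r = 4 successes
n, p, r = 10, 0.3, 4
by_formula = comb(n, r) * p**r * (1 - p)**(n - r)
print(by_formula, binom.pmf(r, n, p))   # the two values agree
```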
conditions for a normal distribution
- data is continuous
- data is symmetrically distributed with a single peak
- data tails off either side of the mean
critical value method for a normal hypothesis test
critical value = μ ± kσ/√n
k = the z-value for the significance level, from the inverse normal (e.g. k = Φ⁻¹(1 - α) for an upper-tail test)
if the observed sample mean is more extreme than the critical value, reject H0
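A minimal sketch of the critical value method for an assumed one-tailed test (H0: μ = 50 against H1: μ > 50, σ = 4 known, n = 25, 5% significance level, observed mean 51.8); all numbers are illustrative.

```python
from scipy.stats import norm

# Assumed one-tailed test: H0: mu = 50, H1: mu > 50, sigma known
mu0, sigma, n, alpha = 50, 4, 25, 0.05
observed_mean = 51.8

k = norm.ppf(1 - alpha)                       # z-value cutting off the upper 5% tail
critical_value = mu0 + k * sigma / n**0.5     # critical value for the sample mean
print(critical_value)                         # about 51.32

if observed_mean > critical_value:            # observed mean is more extreme than the critical value
    print("Reject H0")
else:
    print("Do not reject H0")
```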
Normal hypothesis test method (test statistic)
Z = (observed sample mean - expected mean) / (σ/√n)
k = the critical z-value for the significance level (from the inverse normal, Φ⁻¹)
if Z is more extreme than k, reject H0
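The same assumed test carried out with the test-statistic method: standardise the observed sample mean and compare it with the critical z-value.

```python
from scipy.stats import norm

# Same assumed test: H0: mu = 50, H1: mu > 50, sigma = 4, n = 25, 5% level
mu0, sigma, n, alpha = 50, 4, 25, 0.05
observed_mean = 51.8

z = (observed_mean - mu0) / (sigma / n**0.5)   # standardised test statistic
k = norm.ppf(1 - alpha)                        # critical z-value for the 5% tail
print(z, k)                                    # 2.25 vs about 1.645

if z > k:                                      # Z more extreme than k
    print("Reject H0")
else:
    print("Do not reject H0")
```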