Week 5: Inequalities & Limits Flashcards
Independence meaning with random variables
Means X1, …, Xn are mutually independent random variables: knowing the values of any subset gives no information about the rest, so their joint distribution factors into the product of the marginals.
Law of Large Numbers (LLN)
If we take the average of a large number of independent, identically distributed random variables, this average gets closer and closer to the true expected value (mean) as the number of samples grows.
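A quick simulation sketch of the LLN (the fair die here is a hypothetical example distribution, not from the cards):

```python
import random

random.seed(0)

# Rolls of a fair six-sided die have true mean 3.5.
# As n grows, the sample average should settle near 3.5.
def sample_average(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_average(n))
```

Small n gives noisy averages; large n lands close to 3.5.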
CLT (Central Limit Theorem)
Refines the LLN: for iid random variables with finite variance, the sample average approaches the true mean at rate O(n^(-1/2)), and the standardized average converges in distribution to a normal.
Because the error shrinks like 1/√n, each additional sample helps less than the previous one (fast improvement early, slow improvement later): quadrupling n only halves the typical error.
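A sketch of the O(n^(-1/2)) rate (Uniform(0, 1) samples are an assumed example): quadrupling n should roughly halve the spread of the sample mean.

```python
import random
import statistics

random.seed(1)

# Empirical standard deviation of the sample mean at a given n,
# estimated over many independent trials.
def std_of_means(n, trials=2000):
    means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
    return statistics.pstdev(means)

s1 = std_of_means(25)
s2 = std_of_means(100)
print(s1 / s2)  # near 2: quadrupling n halves the spread
```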
Cauchy-Schwarz Inequality
|E[XY]| ≤ (√E[X^2])*(√E[Y^2])
Markov’s Inequality
For a nonnegative random variable X and fixed positive value t, Markov's Inequality bounds how likely it is that X exceeds t (a tail of the distribution).
P(X > t) ≤ E(X) / t
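A numeric check of Markov's bound (the Exponential(1) distribution is an assumed example with E[X] = 1, so the bound is 1/t):

```python
import random

random.seed(2)

# X ~ Exponential(1) is nonnegative with E[X] = 1,
# so Markov gives P(X > t) <= 1 / t.
t = 3.0
samples = [random.expovariate(1.0) for _ in range(100_000)]
tail = sum(x > t for x in samples) / len(samples)
bound = 1.0 / t  # E[X] / t with E[X] = 1
print(tail, bound)
```

The empirical tail (about e^-3 ≈ 0.05) sits well under the bound 1/3: Markov is valid but often loose.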
Chebyshev’s Inequality
P(|X - E(X)| > t) ≤ Var(X) / (t^2)
or, substituting t = k*std:
P(|X - E(X)| > (k*std)) ≤ 1 / (k^2)
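A check of Chebyshev's bound (Uniform(0, 1) is an assumed example; the bound holds for any finite-variance distribution):

```python
import random

random.seed(3)

# X ~ Uniform(0, 1): mean 0.5, std = sqrt(1/12).
# Chebyshev: P(|X - E(X)| > k*std) <= 1/k^2.
k = 1.5
mu, std = 0.5, (1 / 12) ** 0.5
samples = [random.random() for _ in range(100_000)]
freq = sum(abs(x - mu) > k * std for x in samples) / len(samples)
print(freq, 1 / k**2)
```

The empirical frequency (about 0.13) respects the bound 1/k^2 ≈ 0.44.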
Mill’s Inequality
If X ~ N(0,1), then for all t > 0, Mill's Inequality bounds the tail probability without needing the normal CDF (the exact value would require it).
P(X > t) ≤ ( √(2/π) * exp(-0.5t^2) ) / t
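A sketch comparing Mill's bound to the exact standard normal tail, using the identity P(X > t) = 0.5 * erfc(t/√2):

```python
import math

# Mill's bound vs. the exact standard normal tail probability.
for t in (1.0, 2.0, 3.0):
    exact = 0.5 * math.erfc(t / math.sqrt(2))
    mills = math.sqrt(2 / math.pi) * math.exp(-0.5 * t * t) / t
    print(t, exact, mills)
```

The bound always sits above the exact tail and tightens as t grows.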
Weak Law of Large Numbers (WLLN)
Guarantees convergence in probability of the sample average to the true mean µ: for any ϵ > 0, the probability that the sample average is within ϵ of µ approaches 1 as n tends to infinity.
Strong Law of Large Numbers (SLLN)
This is stronger than the WLLN because it guarantees almost sure convergence: P(lim X̄n = µ) = 1 as n tends to infinity. The WLLN only promises that, for each fixed ϵ > 0, the probability of the difference exceeding ϵ goes to 0.
X with line on top meaning
The sample mean: X̄ = (1/n)(X1 + … + Xn).
In CLT (central limit theorem) what are sample mean and sample variance
E(X̄) = µ
Var(X̄) = Var(X) / n
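A quick check that the variance of the sample mean is Var(X)/n (Uniform(0, 1) with Var(X) = 1/12 is an assumed example):

```python
import random
import statistics

random.seed(4)

# For X ~ Uniform(0, 1), Var(X) = 1/12, so the mean of n samples
# should have variance (1/12)/n.
n = 30
means = [sum(random.random() for _ in range(n)) / n for _ in range(5000)]
print(statistics.pvariance(means), (1 / 12) / n)
```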
For Bernoulli distribution, what is the normal approximation by the CLT?
X̄n ≈ N( p, p(1-p)/n )
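A simulation sketch of the Bernoulli approximation (p = 0.3 and n = 200 are assumed example values): the sample proportion should have mean near p and variance near p(1-p)/n.

```python
import random
import statistics

random.seed(5)

# Sample proportion of n Bernoulli(p) trials, repeated many times.
p, n = 0.3, 200
means = [sum(random.random() < p for _ in range(n)) / n for _ in range(5000)]
print(statistics.mean(means), statistics.pvariance(means), p * (1 - p) / n)
```

The empirical mean and variance of the proportions match the CLT's normal approximation parameters.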