Prob & Statistics Flashcards
Define Binomial distr
Let X1,…,Xn be i.i.d. Ber(p) for some fixed p in (0,1).
Define Sn:=X1+…+Xn, then Sn is called a Binomial random variable with parameters n and p. We write Sn~Bin(n,p).
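The Bin(n,p) pmf follows from counting which of the n Bernoulli trials succeed; a quick numeric sketch (the helper name binom_pmf is illustrative, not from the card):

```python
import math

def binom_pmf(k, n, p):
    # P(Sn = k) = C(n, k) * p^k * (1 - p)^(n - k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))
print(round(sum(pmf), 10), round(mean, 10))  # pmf sums to 1; E[Sn] = n*p
```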
Define Geometric distr
Let (Xn)n be i.i.d. Ber(p) with p in (0,1). Define X as the first integer i>=1 for which Xi=1. Then X is called a geometric random variable with success probability p. We write X~Geo(p) or X~G(p).
Define Negative Binomial distr
Let (Xn)n be i.i.d. Bernoulli random variables with success probability p in (0,1). Define X as the first integer i for which Sum(Xj : j in {1,…,i})=r. Then X is called a negative binomial random variable with parameters r and p. We write X~NB(r,p). Note X>=r.
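Both sequential definitions (first success for Geo(p), r-th success for NB(r,p)) can be simulated directly; a sketch with illustrative parameter values, checked against the known means 1/p and r/p:

```python
import random
random.seed(3)

# Simulate the sequential definitions: X_geo = first i with Xi = 1,
# X_nb = first i with X1 + ... + Xi = r, for i.i.d. Ber(p) trials.
def geo_draw(p):
    i = 0
    while True:
        i += 1
        if random.random() < p:
            return i

def nb_draw(r, p):
    i = successes = 0
    while successes < r:
        i += 1
        if random.random() < p:
            successes += 1
    return i

p, r, N = 0.25, 3, 50_000
geo_mean = sum(geo_draw(p) for _ in range(N)) / N
nb_mean = sum(nb_draw(r, p) for _ in range(N)) / N
print(geo_mean, nb_mean)  # theory: E[Geo] = 1/p = 4, E[NB] = r/p = 12
```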
Define Hypergeometric distr
Consider a population with N distinct individuals and composed exactly of D individuals of type I and N-D individuals of type II. Draw from this population n individuals at random and without replacement (an individual cannot be selected more than once). Define X= number of individuals of type I among the n selected ones.
Then, X is called a Hypergeometric random variable with parameters n, D and N. We write X ~ Hype(n,D,N).
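The Hype(n,D,N) pmf comes from the same counting argument (choose k of the D type-I individuals and n-k of the N-D type-II ones); a sketch with illustrative parameters:

```python
import math

def hyper_pmf(k, n, D, N):
    # P(X = k) = C(D, k) * C(N - D, n - k) / C(N, n)
    return math.comb(D, k) * math.comb(N - D, n - k) / math.comb(N, n)

n, D, N = 5, 10, 30
ks = range(max(0, n - (N - D)), min(n, D) + 1)  # support of X
pmf = [hyper_pmf(k, n, D, N) for k in ks]
mean = sum(k * pk for k, pk in zip(ks, pmf))
print(round(sum(pmf), 10), round(mean, 10))     # sums to 1; E[X] = n*D/N
```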
Formulate Bayes Thm
Let A1,…,Am be a partition of some sample space s.t. P(Ai) in (0,1).
Then for j in {1,…,m} and event B s.t. P(B)>0 it holds that:
P(Aj | B) = P(B | Aj)P(Aj) / Sum( P(B|Ai)P(Ai) : i in {1,…,m} )
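A numeric check of the formula on a two-event partition (A1 and its complement A2); all probabilities below are made-up illustrative values:

```python
priors = [0.01, 0.99]       # P(A1), P(A2): a partition of the sample space
likelihoods = [0.95, 0.05]  # P(B | A1), P(B | A2) for some event B
total = sum(p * l for p, l in zip(priors, likelihoods))  # denominator = P(B)
posterior = likelihoods[0] * priors[0] / total           # P(A1 | B)
print(round(posterior, 4))  # noticeably larger than the prior 0.01
```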
If X~ f , what is aX~ ?
For f=U([0,1]),Beta(a,b),Exp(lam),Gamma(a,b), N(0,1)
X~U([0,1]) => aX~U([0,a]) (for a>0)
X~Beta(a,b): aX is not Beta in general (its support becomes [0,a])
X~Exp(lam) => lam*X~Exp(1)
X~Gamma(a,b) => cX~Gamma(a,b/c) (rate parameterization, c>0)
X~N(0,1) => sig*X~N(0,sig^2)
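The scaling fact lam*X~Exp(1) can be checked by simulation (Exp(1) has mean 1); the rate lam=2.5 below is illustrative, and random.expovariate takes the rate parameter:

```python
import random
random.seed(4)

# X ~ Exp(lam) => lam * X ~ Exp(1): scale draws and compare the sample mean
# with E[Exp(1)] = 1.
lam, N = 2.5, 50_000
scaled = [lam * random.expovariate(lam) for _ in range(N)]
mean = sum(scaled) / N
print(mean)  # close to 1
```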
If X1,…,Xn~ f, what is Sum(X)~ ?
For f=U([0,1]),Beta(a,b),Exp(lam),Gamma(a,b), N(mu,sig^2), N(0,1)
Xi~Exp(lam) => X1+…+Xn~Gamma(n,lam)
Xi~Gamma(ai,b) => X1+…+Xn~Gamma(Sum(ai),b)
Xi~N(0,1) => X1+…+Xn~N(0,n)
Xi~N(mu_i,sig_i^2) => Sum(Xi)~N(Sum(mu_i),Sum(sig_i^2))
(Sums of i.i.d. U([0,1]) (Irwin–Hall) or Beta random variables have no standard named law.)
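The last rule (sums of independent normals) can be checked by simulation; the means and standard deviations below are illustrative values:

```python
import random
random.seed(0)

# Sum(Xi) for independent Xi ~ N(mu_i, sig_i^2) should be
# N(Sum(mu_i), Sum(sig_i^2)): here Sum(mu_i) = -0.5, Sum(sig_i^2) = 5.25.
mus = [1.0, -2.0, 0.5]
sigs = [1.0, 2.0, 0.5]
N = 100_000
sums = [sum(random.gauss(m, s) for m, s in zip(mus, sigs)) for _ in range(N)]
m_hat = sum(sums) / N
v_hat = sum((x - m_hat) ** 2 for x in sums) / N
print(m_hat, v_hat)  # close to -0.5 and 5.25
```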
- If X~N(mu,sig^2) what is Z s.t. Z~N(0,1)?
- If Z~N(0,1) what is X s.t. X~N(mu,sig^2)?
- If X~N(mu,sig^2) then F_X(x) = … ? F_X(mu) = … ?
- Z=(X-mu)/sig
- X=mu+sig*Z
- F_X(x)=Phi((x-mu)/sig); F_X(mu)=1/2
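Both identities can be verified with the standard library's statistics.NormalDist (mu=5, sig=2 and x=7 are illustrative):

```python
from statistics import NormalDist

mu, sig = 5.0, 2.0
X = NormalDist(mu, sig)
x = 7.0
z = (x - mu) / sig                    # standardize
print(X.cdf(x), NormalDist().cdf(z))  # equal: F_X(x) = Phi((x - mu)/sig)
print(X.cdf(mu))                      # F_X(mu) = 1/2
```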
Define quantile
Given a cdf F, the quantile t_a of order a in (0,1) is defined as t_a := inf{t | F(t)>=a} =: F^-1(a), where F^-1 denotes the generalized inverse of F. When F is bijective (at least in a neighborhood of t_a), F^-1 is the inverse of F in the classical sense.
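A sketch of the generalized inverse on a discrete cdf, where the classical inverse does not exist because F is a step function (Ber(0.3) is an illustrative choice):

```python
# inf{t : F(t) >= a} for a step cdf given by its sorted jump points (t, F(t)).
def quantile(cdf_pairs, a):
    for t, F in cdf_pairs:
        if F >= a:
            return t
    raise ValueError("a out of range")

bern = [(0, 0.7), (1, 1.0)]  # cdf of Ber(0.3): F(0)=0.7, F(1)=1
print(quantile(bern, 0.5))   # 0, since F(0)=0.7 >= 0.5
print(quantile(bern, 0.9))   # 1
```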
Define converges in distribution
Let (Zn) be a sequence of random variables (not necessarily defined on the same probability space). We say that (Zn) converges in distribution towards Z if F_Zn(x)–>F_Z(x) at every continuity point x of F_Z. In particular, for Z~N(0,1) the limit cdf Phi is continuous, so the convergence must hold for all x.
State the Central Limit Theorem
Let X1,…,Xn be i.i.d. r.v. with expectation mu in R and variance sig^2 in (0,inf). Then Zn:=sqrt(n)(bar(Xn)-mu)/sig [or equivalently (Sn-n*mu)/(sqrt(n)*sig)] converges in distribution towards Z~N(0,1).
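The CLT can be illustrated by simulation: standardized means of U([0,1]) draws (mean 1/2, variance 1/12) should have a cdf close to Phi; n=50 and the evaluation point z=1 are illustrative choices:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
n, reps = 50, 20_000
mu, sig = 0.5, math.sqrt(1 / 12)  # mean and sd of U([0,1])
zs = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    zs.append(math.sqrt(n) * (xbar - mu) / sig)   # the Zn of the theorem
frac = sum(z <= 1.0 for z in zs) / reps           # empirical P(Zn <= 1)
print(frac, NormalDist().cdf(1.0))                # both close to 0.841
```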
What is Var(Sum(vi*Xi))= ?
Var(Sum(vi*Xi))=Sum(vi*vj*Cov(Xi,Xj) : i,j in {1,…,n})
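This bilinear identity holds exactly for empirical moments as well, so it can be checked numerically on correlated sample data (the weights and the way X2 is built from X1 are illustrative):

```python
import random
random.seed(2)

# Check Var(Sum vi*Xi) = Sum vi*vj*Cov(Xi, Xj) on two correlated samples.
N = 1000
X1 = [random.gauss(0, 1) for _ in range(N)]
X2 = [x + 0.5 * random.gauss(0, 1) for x in X1]  # X2 correlated with X1
Xs, v = [X1, X2], [2.0, -3.0]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

Y = [v[0] * a + v[1] * b for a, b in zip(X1, X2)]
lhs = cov(Y, Y)  # Var(v1*X1 + v2*X2)
rhs = sum(v[i] * v[j] * cov(Xs[i], Xs[j]) for i in range(2) for j in range(2))
print(abs(lhs - rhs) < 1e-8)  # True: the identity holds up to float error
```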
Law of iterated expectation?
E[X]=E[E[X|Y]] (whenever E[|X|] < inf)
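A tiny discrete check of E[X]=E[E[X|Y]] on a hand-made joint pmf (the four probabilities are illustrative):

```python
# Joint pmf of (X, Y) on {0,1}^2.
joint = {(0, 0): 0.1, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.4}

EX = sum(x * p for (x, y), p in joint.items())          # direct E[X]
E_cond = 0.0
for y0 in {y for (_, y) in joint}:
    py = sum(p for (x, y), p in joint.items() if y == y0)            # P(Y=y0)
    Exy = sum(x * p for (x, y), p in joint.items() if y == y0) / py  # E[X|Y=y0]
    E_cond += Exy * py                                  # E[E[X|Y]]
print(EX, E_cond)  # both 0.7
```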
State the Jacobian formula
Let X ~ f_X and g in C^1(O) for some open O of R, with g strictly monotone, g'(x)!=0 for x in O, and P(X in O)=1. Then the r.v. Y=g(X) is absolutely continuous with density a.e. equal to f_Y(y)=(f_X o g^-1)(y)/|g' o g^-1|(y)*1_g(O)(y).
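A sketch applying the formula to X~N(0,1) and g(x)=e^x (strictly increasing, g'(x)=e^x!=0 on O=R): it gives f_Y(y)=f_X(log y)/y on (0,inf), the lognormal density, which we sanity-check by numerical integration:

```python
import math
from statistics import NormalDist

phi = NormalDist().pdf  # f_X for X ~ N(0,1)

def f_Y(y):
    # Jacobian formula with g(x) = e^x: g^-1(y) = log y, |g'(g^-1(y))| = y.
    return phi(math.log(y)) / y if y > 0 else 0.0

# Riemann sum of f_Y over (0, 50): should be close to 1.
h = 0.001
integral = sum(f_Y(h * k) for k in range(1, 50_000)) * h
print(integral)  # approximately 1
```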
Define estimator
Let X1,…,Xn be i.i.d. f( . | theta_0) random variables for theta_0 in Theta, a subset of R^d, d>=1.
hat(theta) is an estimator for theta_0, based on X1,…,Xn, if hat(theta) is a statistic of X1,…,Xn, that is, any quantity of the form T(X1,…,Xn), where T is a measurable map on (R^n,B(R^n)).