Ch1 Probability Fundamentals Flashcards

1
Q

A discrete probability distribution is characterized by:

A

its probability mass function (pmf), p(x)
2
Q

A pmf p(x) satisfies:

A

the Kolmogorov axioms:

  1. p(x) = Pr(X = x)
  2. p(x) ≥ 0 for all x
  3. Σ p(x) = 1, summing over all x
3
Q

cdf =

A

cumulative distribution function

F(x) = Pr(X ≤ x) = Σ p(t) over all t ≤ x
4
Q

E[X] =

A

Σ x·p(x) over all x (continuous case: ∫ x·f(x) dx)
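A minimal sketch of the E[X] = Σ x·p(x) formula above, using a fair six-sided die as a hypothetical example (the die is not from the deck):

```python
# Expectation of a discrete random variable as sum(x * p(x)),
# illustrated with a fair six-sided die (hypothetical example).
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}  # fair die

# The pmf must satisfy the axioms: nonnegative, summing to 1.
assert all(p >= 0 for p in pmf.values())
assert sum(pmf.values()) == 1

expectation = sum(x * p for x, p in pmf.items())
print(expectation)  # 7/2
```

Using `Fraction` keeps the arithmetic exact, so the long-run average of a fair die comes out as exactly 7/2.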
5
Q

E[X] in words:

A

the mean of X; the long-run average; the 1st moment of X
6
Q

Var(X) =

A

= E[(X − E[X])²]

= E[X²] − (E[X])²
7
Q

Var(X) = (computational form)

A

E[X²] − (E[X])²
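A quick numerical check that the definitional and computational forms of variance agree, on a small hand-made pmf (the values 0.2/0.5/0.3 are hypothetical, not from the deck):

```python
# Definitional vs computational variance on a toy pmf
# (hypothetical example values; they must sum to 1).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

mean = sum(x * p for x, p in pmf.items())
var_def = sum((x - mean) ** 2 * p for x, p in pmf.items())  # E[(X-E[X])^2]
var_comp = sum(x**2 * p for x, p in pmf.items()) - mean**2  # E[X^2]-(E[X])^2

print(round(var_def, 10), round(var_comp, 10))  # both 0.49
```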
14
Q

rth moment (about the origin)

A

E[X^r]
15
Q

rth central moment

A

E[(X − µ)^r]
16
Q

Var(X) =

A

σ² = E[(X − µ)²]
19
Q

marginal pmf

A

p_X(x) = Σ p(x, y) over all y
20
Q

R.V.'s are independent iff

A

p(x, y) = p_X(x)·p_Y(y) for all x, y
22
Q

Cov(X, Y) =

A

E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y]
26
Q

Corr(X, Y) =

A

Cov(X, Y) / (σ_X σ_Y)
27
Q

Bernoulli distribution

A

p(x) = p^x (1 − p)^(1 − x) for x ∈ {0, 1}; E[X] = p, Var(X) = p(1 − p)
28
Q

Binomial distribution

A

p(x) = C(n, x) p^x (1 − p)^(n − x) for x = 0, 1, …, n; E[X] = np, Var(X) = np(1 − p)
29
Q

Poisson distribution

A

p(x) = e^(−λ) λ^x / x! for x = 0, 1, 2, …; E[X] = Var(X) = λ
30
Q

Normal distribution

A

f(x) = (1 / (σ√(2π))) e^(−(x − µ)² / (2σ²)); E[X] = µ, Var(X) = σ²
33
Q

CLT

A

Central Limit Theorem: for i.i.d. X₁, …, Xₙ with mean µ and finite variance σ², (X̄ − µ) / (σ/√n) → N(0, 1) in distribution as n → ∞
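A Monte Carlo sketch of the CLT, under hypothetical choices (Uniform(0, 1) draws, n = 50, 20,000 replications): standardized sample means should look approximately N(0, 1).

```python
# CLT demo: standardized means of uniform draws approach N(0, 1)
# (sample size, draw count, and distribution are hypothetical choices).
import random
import statistics

random.seed(0)
n = 50                   # sample size per mean
mu = 0.5                 # mean of Uniform(0, 1)
sigma = (1 / 12) ** 0.5  # sd of Uniform(0, 1)

zs = []
for _ in range(20_000):
    xbar = statistics.fmean(random.random() for _ in range(n))
    zs.append((xbar - mu) / (sigma / n**0.5))

# Standardized means should have mean near 0 and sd near 1.
print(round(statistics.fmean(zs), 2), round(statistics.stdev(zs), 2))
```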
34
Q

normal approximation to binomial

A

Binomial(n, p) ≈ N(np, np(1 − p)) when np and n(1 − p) are both large (a common rule of thumb: both ≥ 5)
35
Q

continuity correction (continuous approximation to a discrete distribution, e.g. normal approx to binomial)

A

extend each discrete value by ±0.5, e.g. Pr(X ≤ x) ≈ Φ((x + 0.5 − np) / √(np(1 − p)))
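A sketch of the continuity correction with hypothetical numbers (Binomial(20, 0.5), tail at x = 12), comparing the exact tail probability with the plain and corrected normal approximations:

```python
# Continuity correction: exact Binomial(20, 0.5) tail vs normal approx
# (n, p, and x are hypothetical example values).
from math import comb, erf, sqrt

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p, x = 20, 0.5, 12
mu, sd = n * p, sqrt(n * p * (1 - p))

exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))
approx_plain = phi((x - mu) / sd)
approx_corrected = phi((x + 0.5 - mu) / sd)  # shift by +0.5

print(exact, approx_plain, approx_corrected)
```

The corrected approximation lands much closer to the exact tail (≈ 0.868) than the uncorrected one.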
36
Q

chi-square

A

the sum of k squared independent standard normals is χ²(k); E = k, Var = 2k
37
Q

t-distribution

A

T = Z / √(V/k) with Z ~ N(0, 1) and V ~ χ²(k) independent; for a normal sample, (X̄ − µ) / (S/√n) ~ t(n − 1)
39
Q

Ô is an unbiased estimator of O iff

A

E[Ô] = O
40
Q

Ô is a weakly consistent estimator of O iff

A

for any ε > 0, Pr(|Ô − O| < ε) → 1 as n → ∞; also called convergence in probability, written Ô →p O
41
Q

If Bias(Ô) → 0 and Var(Ô) → 0 as n → ∞, then…

A

Ô is a consistent estimator of O
42
Q

MSE(Ô) =

A

E[(Ô − O)²] = Var(Ô) + Bias²(Ô)
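A numerical sketch of the MSE decomposition with a hypothetical estimator (the shrunken mean 0.9·X̄ of Uniform(0, 1) draws, chosen only to make the bias nonzero):

```python
# MSE = Var + Bias^2, checked on simulated estimates of a
# deliberately biased estimator (hypothetical setup).
import random
import statistics

random.seed(1)
theta = 0.5  # true mean of Uniform(0, 1)

estimates = []
for _ in range(10_000):
    xbar = statistics.fmean(random.random() for _ in range(30))
    estimates.append(0.9 * xbar)  # shrinkage makes the estimator biased

mse = statistics.fmean((e - theta) ** 2 for e in estimates)
var = statistics.pvariance(estimates)
bias = statistics.fmean(estimates) - theta

# mean((e - θ)^2) = pvariance(e) + (mean(e) - θ)^2 is an exact identity.
print(abs(mse - (var + bias**2)) < 1e-9)  # True
```

Note the decomposition holds exactly for the empirical quantities (with population variance, ddof = 0), not just in expectation.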
43
Q

Relative efficiency

A

RE(S₁, S₂) = MSE(S₂) / MSE(S₁)
44
Q

RE(X, Y) < 1

A

MSE(Y) < MSE(X), i.e. X is less efficient than Y
45
Q

Markov's inequality

A

for X ≥ 0 and a > 0: Pr(X > a) ≤ E[X] / a
46
Q

Chebyshev's inequality (words)

A

no more than 1/k² of a distribution's probability can lie more than k standard deviations from the mean
47
Q

Pr(|X − µ| ≥ kσ)

A

Pr(|X − µ| ≥ kσ) ≤ 1/k²: the probability that X differs from its mean by at least k standard deviations is at most 1/k²
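A check of Chebyshev's bound on a small hand-made pmf (the three-point distribution is a hypothetical example):

```python
# Chebyshev: Pr(|X - mu| >= k*sigma) <= 1/k^2, verified on a toy pmf
# (hypothetical example values).
from math import sqrt

pmf = {-2: 0.1, 0: 0.8, 2: 0.1}
mu = sum(x * p for x, p in pmf.items())  # 0.0
sigma = sqrt(sum((x - mu) ** 2 * p for x, p in pmf.items()))

for k in (1.5, 2.0, 3.0):
    tail = sum(p for x, p in pmf.items() if abs(x - mu) >= k * sigma)
    assert tail <= 1 / k**2 + 1e-12  # the bound holds for every k
    print(k, tail, 1 / k**2)
```

The bound is often loose (here the k = 2 tail is 0.2 against a bound of 0.25), which is the price of holding for every distribution with a finite variance.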
48
Q

Slutsky's theorem

A

if Ô₁ →p O₁ and Ô₂ →p O₂, then Ô₁ + Ô₂ →p O₁ + O₂ and Ô₁·Ô₂ →p O₁·O₂
49
Q

E[S²] = ?  Var(S²) = ?

A

remember: for a normal sample, (n − 1)S² / σ² ~ χ²(n − 1), so E[S²] = σ² and Var(S²) = 2σ⁴ / (n − 1)
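A Monte Carlo sketch of these sample-variance facts, under hypothetical parameters (N(0, 4) draws, n = 10):

```python
# E[S^2] = sigma^2 and Var(S^2) = 2*sigma^4/(n-1) for normal samples
# (mu, sigma, n, and replication count are hypothetical choices).
import random
import statistics

random.seed(2)
mu, sigma, n = 0.0, 2.0, 10

s2s = [statistics.variance([random.gauss(mu, sigma) for _ in range(n)])
       for _ in range(40_000)]

mean_s2 = statistics.fmean(s2s)
var_s2 = statistics.pvariance(s2s)

print(round(mean_s2, 2), round(var_s2, 2))
# Theory: E[S^2] = sigma^2 = 4, Var(S^2) = 2*sigma^4/(n-1) = 32/9 ≈ 3.56
```

Note `statistics.variance` uses the n − 1 divisor, which is exactly what makes S² unbiased for σ².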