SOA Probability Flashcards

1
Q

If A ⊂ B, then A ∩ B = ?

A

A ∩ B = A

2
Q

Probability Generating Function (PGF), defined as

Px(t) =

A

PX(t) = E[ t^X ]

3
Q

E[ X(X − 1) ] is the second factorial moment obtained from which generating function?

A

PGF (Probability Generating Function): E[ X(X − 1) ] = PX″(1), the second derivative of the PGF evaluated at t = 1

4
Q

E [X ∣ j ≤ X ≤ k] - continuous case

A

Integrate the numerator from j to k:

( ∫ x ⋅ fX(x) dx )

÷

( Pr( j ≤ X ≤ k ) )

5
Q

Percentile for Discrete Random Variables

A

FX(πp) ≥ p

i.e., the CDF evaluated at the percentile πp must be at least equal to the percentile p

6
Q

E [X | j ≤ X ≤ k] - Discrete Case

A

Sum the numerator from x = j to k:

( ∑ x ⋅ Pr( X = x ) )

÷

( Pr[ j ≤ X ≤ k ] )

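The two conditional-expectation cards above can be checked numerically. A minimal sketch, assuming a fair six-sided die as the discrete example:

```python
# Numeric check of the discrete conditional-expectation formula
# E[X | j <= X <= k] = ( sum_{x=j}^{k} x * Pr(X = x) ) / Pr(j <= X <= k),
# using a fair six-sided die (an assumed example, not from the cards).
die = {x: 1 / 6 for x in range(1, 7)}  # PMF of a fair die
j, k = 3, 5

numerator = sum(x * p for x, p in die.items() if j <= x <= k)
denominator = sum(p for x, p in die.items() if j <= x <= k)
cond_mean = numerator / denominator  # (3 + 4 + 5) / 3 = 4.0
```

The same two-step pattern (restricted numerator over restricting probability) is what the continuous card expresses with an integral in place of the sum.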
7
Q

Percentile for Continuous Random Variable

A

FX(πp) = p

The CDF (not the density) evaluated at πp must equal the percentile p

8
Q

Finding mode of discrete random variable

A

Calculate the probability of each possible value and choose the value that gives the largest probability

9
Q

Finding mode of continuous random variable

A

Take the derivative of the density function, set it equal to 0, and solve for the mode (i.e., find the local maximum of the function)

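A quick numeric sanity check of the continuous-mode card, assuming the density f(x) = 6x(1 − x) on [0, 1], whose derivative 6 − 12x vanishes at x = 1/2:

```python
# Mode of an assumed continuous density f(x) = 6x(1-x) on [0, 1].
# The calculus approach gives f'(x) = 6 - 12x = 0, so the mode is 0.5;
# a coarse grid search should land on the same point.
f = lambda x: 6 * x * (1 - x)
grid = [i / 1000 for i in range(1001)]
mode = max(grid, key=f)  # expected: 0.5
```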
10
Q

Cumulative Distribution Function (CDF) of a probability density function (PDF)

A

Integrate from the lowest value of the support of X up to x itself:

FX(x) = ∫ f(t) dt, taken from the lower bound of the support to x

11
Q

Chebyshev’s Inequality

A

Pr( |X − µ| ≥ kσ ) ≤ ( 1 / k² )

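Chebyshev's inequality can be checked by simulation. A sketch, assuming a standard exponential sample (µ = σ = 1, which is not part of the cards) and k = 2, so the bound is 1/k² = 0.25:

```python
import random

# Simulation check of Chebyshev's inequality:
# Pr(|X - mu| >= k*sigma) <= 1/k^2.
# Assumed example: X ~ Exponential with rate 1, so mu = sigma = 1.
random.seed(0)
n, k = 100_000, 2.0
sample = [random.expovariate(1.0) for _ in range(n)]
mu, sigma = 1.0, 1.0
tail_freq = sum(abs(x - mu) >= k * sigma for x in sample) / n
# the empirical tail frequency must respect the 1/k^2 = 0.25 bound
```

The actual tail probability here is e⁻³ ≈ 0.05, well under the bound, which illustrates that Chebyshev is a worst-case guarantee, not an approximation.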
12
Q

How to break up the inequality of Chebyshev’s Equation

A

Pr( |X-µ| ≥ kσ )

=

Pr( (X-µ) ≥ kσ ) + Pr( (X-µ) ≤ -kσ )

13
Q

Univariate Transformation CDF METHOD

From X to Y

A

1.) Given the PDF of X, find the CDF of X

2.) Perform the transformation, where FY( y ) = P( Y ≤ y ), with substitution

3.) Restate the CDF of Y using the CDF of X,

then substitute the CDF of X found in step 1 into the CDF of Y

4.) Take Derivative of CDF of Y to find PDF of Y

14
Q

Univariate Transformation PDF METHOD

From X to Y

A

1.) Get PDF of X if not given

2.) Find PDF of Y using the formula

fY( y ) = fX( g⁻¹( y ) ) • | (d/dy) g⁻¹( y ) |

3.) Integrate PDF of Y to get CDF of Y if required

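The PDF method can be sanity-checked by simulation. A sketch, assuming the transformation Y = X² with X ~ Uniform(0, 1): here g⁻¹(y) = √y, |(d/dy) g⁻¹(y)| = 1/(2√y), so fY(y) = 1/(2√y) and hence FY(y) = √y.

```python
import random

# Monte Carlo check of the PDF-method result for Y = X^2, X ~ Uniform(0,1)
# (an assumed example). The formula gives F_Y(y) = sqrt(y), so
# Pr(Y <= 0.25) should be close to sqrt(0.25) = 0.5.
random.seed(1)
n = 200_000
ys = [random.random() ** 2 for _ in range(n)]
empirical = sum(y <= 0.25 for y in ys) / n
```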
15
Q

Discrete Uniform PMF

A

1 / ( b − a + 1 )

16
Q

Discrete Uniform E[X]

A

( a + b ) / 2

17
Q

Discrete Uniform Var[X]

A

[ ( b − a + 1 )² − 1 ]

÷

12

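The three discrete-uniform cards above can be verified by brute-force enumeration, here for an assumed range a = 3, b = 10:

```python
# Brute-force check of the discrete uniform formulas on {a, ..., b}:
# E[X] = (a + b) / 2 and Var[X] = ((b - a + 1)^2 - 1) / 12.
a, b = 3, 10  # assumed example endpoints
values = range(a, b + 1)
n = b - a + 1
mean = sum(values) / n
var = sum((x - mean) ** 2 for x in values) / n
```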
18
Q

Bernoulli’s E[X]

A

p

19
Q

Bernoulli’s Var[X]

A

pq

20
Q

Bernoulli’s MGF

A

pe^t + q

21
Q

Bernoulli’s Variance Short-cut for Y = (a − b)X + b

A

( b − a )² • pq

22
Q

Property of Expected One RV: E[c]=

A

E[c]=c, c = constant

23
Q

Property of Expected One RV: E[c⋅g(X)]=

A

E[c⋅g(X)]= c ⋅ E[g(X)]

24
Q

Property of Expected One RV: E[g1(X)+g2(X)+…+gk(X)] =

A

E[g1(X)+g2(X)+…+gk(X)]

=

E[g1(X)] + E[g2(X)]+ …+E[gk(X)]

25
Q

Variance formula for One RV:

Var[X]

Var[g(X)]

A

Var[X] = E[(X − μ)²] = E[X²] − (E[X])²

Var[g(X)] = E[(g(X) − E[g(X)])²] = E[g(X)²] − (E[g(X)])²

26
Q

Property of Variance One RV: Var[c] =

A

Var[c] = 0,

c = constant

27
Q

Property of Variance One RV: Var [aX + b]

A

Var [aX + b] =

a² • Var[X]

a,b = constant

28
Q

Coefficient of Variation for One RV

CV[X]=

A

CV[X] =

( SD[X] )

÷

( E[X] )
29
Q

Binomial Mean E[X]

A

E[X] = np

30
Q

Binomial Variance Var[X]

A

Var[X] = npq

31
Q

Binomial MGF

A

MX(t) = ( pe^t + q )^n

32
Q

Binomial PGF

A

PX(t) = ( pt + q )^n

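The binomial mean, variance, and PGF cards can all be checked by enumerating the PMF, here for assumed parameters n = 6, p = 0.3. This also ties back to the earlier factorial-moment card: P″X(1) = E[X(X − 1)], which for the binomial equals n(n − 1)p².

```python
import math

# Enumeration check for Binomial(n, p) with assumed n = 6, p = 0.3:
# E[X] = np, Var[X] = npq, and E[X(X-1)] = n(n-1)p^2 (the PGF's P''(1)).
n, p = 6, 0.3
q = 1 - p
pmf = [math.comb(n, x) * p**x * q ** (n - x) for x in range(n + 1)]
mean = sum(x * pmf[x] for x in range(n + 1))
var = sum(x * x * pmf[x] for x in range(n + 1)) - mean**2
fact2 = sum(x * (x - 1) * pmf[x] for x in range(n + 1))  # E[X(X-1)]
```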
33
Q

Hypergeometric PMF: Pr(X = x)

A

Pr(X = x)

[( m Choose x ) • ( N - m Choose n - x )]

÷​

( N choose n )

x = successes out of m = total successes

n − x = failures out of N − m = total failures

N = population size; n = sample size
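The hypergeometric PMF can be verified with `math.comb`, for an assumed population of N = 20 with m = 7 successes and a sample of n = 5: the PMF should sum to 1, and the mean should match the standard hypergeometric result E[X] = n·m/N.

```python
import math

# Hypergeometric check with assumed N = 20, m = 7, n = 5:
# Pr(X = x) = C(m, x) * C(N - m, n - x) / C(N, n).
N, m, n = 20, 7, 5
lo, hi = max(0, n - (N - m)), min(m, n)
pmf = {x: math.comb(m, x) * math.comb(N - m, n - x) / math.comb(N, n)
       for x in range(lo, hi + 1)}
total = sum(pmf.values())                 # should be 1
mean = sum(x * p for x, p in pmf.items())  # should be n * m / N = 1.75
```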

34
Q

Geometric PMF

A

Pr(X = x) = (1 − p)^(x−1) • p

where the first success is observed on trial x

Define X as the number of trials needed to get the first success

35
Q

Geometric E[X]

A

E[X] = ( 1 / p )

36
Q

Geometric Var[X]

A

Var[X] = ( 1 − p ) ÷ p²

37
Q

Geometric MGF

A

MX(t) =

( pe^t )

÷

( 1 − (1 − p)e^t )

for t < −ln(1 − p)
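The geometric mean and variance cards can be checked by summing the PMF far into the tail, for an assumed p = 0.4 (so E[X] = 1/p = 2.5 and Var[X] = (1 − p)/p² = 3.75):

```python
# Truncated-series check of the geometric formulas, assumed p = 0.4.
# PMF: Pr(X = x) = (1 - p)^(x-1) * p for x = 1, 2, ...
p = 0.4
xs = range(1, 2001)  # 2000 terms; the remaining tail is negligible
pmf = [(1 - p) ** (x - 1) * p for x in xs]
mean = sum(x * q for x, q in zip(xs, pmf))
second = sum(x * x * q for x, q in zip(xs, pmf))
var = second - mean**2
```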

38
Q

Memoryless Property of a Distribution

A

The memoryless property states that, for a positive integer c,

Pr( X − c = x ∣ X > c ) =

Pr( X = x )

39
Q

Negative Binomial PMF

A

Pr(X = x) =

( x − 1 choose r − 1 ) • p^r • (1 − p)^(x−r)

r = the desired number of “successes”

p = probability of a success

X = number of “trials” until the rth “success”

e.g., X represents the number of coin tosses necessary for three heads to occur

40
Q

Negative Binomial E[X]

A

E[X] = r ÷ p

41
Q

Negative Binomial Var[X]

A

Var[X] = r • ( 1 − p ) ÷ p²

42
Q

Negative Binomial MGF

A

MX(t) =

( pe^t ÷ ( 1 − (1 − p)e^t ) )^r
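The negative binomial cards can be checked the same way as the geometric ones, by summing the PMF over a long range, for assumed r = 3, p = 0.5 (so E[X] = r/p = 6):

```python
import math

# Truncated-series check of the negative binomial mean, assumed r = 3, p = 0.5.
# PMF: Pr(X = x) = C(x-1, r-1) * p^r * (1-p)^(x-r) for x = r, r+1, ...
r, p = 3, 0.5
pmf = {x: math.comb(x - 1, r - 1) * p**r * (1 - p) ** (x - r)
       for x in range(r, 200)}
total = sum(pmf.values())                 # should be (essentially) 1
mean = sum(x * q for x, q in pmf.items())  # should be r / p = 6
```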

43
Q

Geometric Distribution for Pr( X ≥ x )

A

∑ (1 − p)^(k−1) • p, summed from k = x to ∞

=

(1 − p)^(x−1)
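The tail identity above can be checked directly by summation, for an assumed p = 0.3 and x = 4 (so the closed form is 0.7³ = 0.343):

```python
# Check of the geometric tail identity Pr(X >= x) = (1 - p)^(x - 1),
# with assumed p = 0.3 and x = 4, by summing the PMF from x outward.
p, x0 = 0.3, 4
tail = sum((1 - p) ** (k - 1) * p for k in range(x0, 3000))
closed_form = (1 - p) ** (x0 - 1)  # 0.7 ** 3
```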

44
Q

Geometric Distribution Derivation

A

Independent Bernoulli trials​

P(X=x)

= Pr( first “success” on xth “trial”)

= Pr (“failure” on the first x-1 “trials” and “success” on xth “trial”)

= Pr( “failure” on the first x-1 “trials”) • Pr( “success” on the xth “trial”)

= (1 − p)^(x−1) • p

45
Q

Geometric Alternative Form (failures)

A

Let Y be the number of “failures” before the first “success”, rather than the number of “trials”

number of “trials” = number of “failures’’ + number of “successes”

X = Y+ 1 ⇒ Y = X − 1

e.g., let Y be the number of rolls before getting the first success

46
Q

Negative Binomial Distribution Derivation

A

Independent Bernoulli Trials

Pr (X = x )

= Pr (rth “success’’ on xth “trial”)

= Pr( r−1 “successes’’ on the first x−1 “trials” ∩ “success’’ on xth “trial” )​

= Pr( r−1 “successes’’ on the first x−1 “trials” ) • Pr( “success’’ on xth “trial” )

= ( x − 1 choose r − 1 ) • p^(r−1) • (1 − p)^(x−r) • p = ( x − 1 choose r − 1 ) • p^r • (1 − p)^(x−r)

47
Q

Negative Binomial Alternative Form “failures”

A

Let Y be the number of “failures” before the rth “success”: Y = X − r

Pr(Y = y)

= Pr(X − r = y)

= Pr(X = y + r)

e.g., let Y represent the number of tails before getting the third head

48
Q

Exponential MGF

A

MX(t) = 1 / ( 1 − θt )

for t < ( 1 / θ )

49
Q

Exponential Var[X]

A

X ∼ Exponential(θ): Var[X] = θ²

X ∼ Exponential(λ): Var[X] = 1 / λ²

50
Q

X∼Exponential(θ) E[X]

A

X ∼ Exponential(θ): E[X] = θ (mean parameterization)

X ∼ Exponential(λ): E[X] = 1 / λ (rate parameterization)

51
Q

Gamma PDF

A

fX(x) =

( 1 / Γ(α) ) ⋅ ( x^(α−1) / θ^α ) ⋅ e^(−x / θ)

if α is a positive integer, then

Γ(α) = (α − 1)!

52
Q

Gamma E[X]

A

E[X] = αθ

53
Q

Gamma Var[X]

A

Var[X] = αθ²

54
Q

Gamma MGF

A

MX(t) = ( 1 / ( 1 − θt ) )^α, for t < ( 1 / θ )

55
Q

Normal Distribution PDF

A

fX(x) =

[ 1 / ( σ • sqrt(2π) ) ] •

e^( −(x − μ)² / (2σ²) )

56
Q

Standard Normal Distribution

A

fZ(z) =

( 1 / sqrt(2π) ) •

e^( −z² / 2 )

Z = ( X − μ ) / σ

57
Q

Joint Density Function for Pr( X + Y < 2 )

inner integral limit

A

double integration:

Determine limits for inner integral following the picture

∫ ( ∫ fX,Y(x,y) dy ) dx

58
Q

Pr(X ≤ c ∣ Y = y)

A

Integrate from −∞ to c

∫ fX∣Y(x∣y) dx

=

∫ [fX,Y(x,y) / fY(y)] dx

59
Q

Pr(X ≤ x ∣ Y ≤ y)

A

[ Pr( X ≤ x ∩ Y ≤ y )

/ Pr( Y ≤ y ) ]

60
Q

Weighted Average of CDF

A

FY(y) = a1FC1(y) + a2FC2(y)

61
Q

Weighted Average of Survival Function

A

SY(y) = a1SC1(y) + a2SC2(y)

62
Q

To construct the mixed (or unconditional) distribution of Y

A

Pr(Y = y)

=

Pr(Y = y ∣ X = x1)⋅Pr(X = x1) + Pr(Y = y ∣ X = x2)⋅Pr(X = x2) + …

summing over all possible values of X

63
Q

Pr(A ∣ B) + Pr(A′ ∣ B) =

A

[ Pr( A ∩ B ) + Pr( A′ ∩ B )

/

Pr(B) ]

64
Q

Pr(A ∣ B) + Pr(A′ ∣ B) =

A

[Pr(B)

/

Pr(B) ]

=

1

65
Q

Double Expectation

A

E[X] = E[E [X ∣ Y] ]

66
Q

Law of total Variance

A

Var[X]

=

E[Var[X | Y]] + Var[E[X ∣ Y]]

EVVE
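The double-expectation and EVVE cards can be verified on a small two-stage model. A sketch, with an assumed joint distribution (not from the cards): Y is 0 or 1 with equal probability, and given Y, X equals 10Y plus a fair coin flip worth 0 or 2.

```python
# Numeric check of E[X] = E[E[X|Y]] and
# Var[X] = E[Var[X|Y]] + Var[E[X|Y]] (EVVE), on an assumed joint PMF.
joint = {}  # (x, y) -> Pr(X = x, Y = y)
for y in (0, 1):
    for flip in (0, 2):
        joint[(10 * y + flip, y)] = 0.25

ex = sum(x * p for (x, y), p in joint.items())
ex2 = sum(x * x * p for (x, y), p in joint.items())
var_x = ex2 - ex**2  # unconditional variance, computed directly

cond = {}  # y -> (conditional mean, conditional variance)
for y in (0, 1):
    px = {x: p / 0.5 for (x, yy), p in joint.items() if yy == y}
    m = sum(x * q for x, q in px.items())
    v = sum(x * x * q for x, q in px.items()) - m**2
    cond[y] = (m, v)

e_cond_mean = 0.5 * cond[0][0] + 0.5 * cond[1][0]  # E[E[X|Y]]
e_var = 0.5 * cond[0][1] + 0.5 * cond[1][1]        # E[Var[X|Y]]
var_e = (0.5 * (cond[0][0] - ex) ** 2
         + 0.5 * (cond[1][0] - ex) ** 2)           # Var[E[X|Y]]
```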

67
Q

Pr(X = x∣Y ≤ y) =

A

Pr( X = x ∩ Y ≤ y )

/

Pr(Y ≤ y)

68
Q

Pr(X = x∣Y = y)

A

Pr( X = x ∩ Y = y )

/

Pr(Y=y)

69
Q

Cov[X,X]

A

E[X⋅X] − E[X]⋅E[X]

=

E[X²] − (E[X])²

=

Var[X]

70
Q

Cov[a,X]

A

0

71
Q

Cov[a,b]

A

0

72
Q

Cov[aX , bY]

A

ab⋅Cov[X , Y]

73
Q

Var[aX]

A

Cov[aX,aX]

=

a²⋅Cov[X,X]

=

a²⋅Var[X]

74
Q

Cov[X+a , Y+b]

A

Cov[X , Y]

75
Q

Cov[aX + bY , cP + dQ]

A

ac⋅Cov[X,P] + ad⋅Cov[X,Q] + bc⋅Cov[Y,P] + bd⋅Cov[Y,Q]

76
Q

Var[aX + bY]

A

a²⋅Var[X] + 2ab⋅Cov[X,Y] + b²⋅Var[Y]
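The covariance-property cards above can all be exercised at once with a small numeric check. A sketch, using an assumed joint PMF for (X, Y) and assumed constants a, b:

```python
# Numeric check of Var[aX + bY] = a^2 Var[X] + 2ab Cov[X,Y] + b^2 Var[Y]
# on an assumed small joint PMF for (X, Y).
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}
a, b = 2.0, -3.0

def ev(g):
    """Expectation of g(X, Y) under the joint PMF."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

ex, ey = ev(lambda x, y: x), ev(lambda x, y: y)
var_x = ev(lambda x, y: x * x) - ex**2
var_y = ev(lambda x, y: y * y) - ey**2
cov = ev(lambda x, y: x * y) - ex * ey  # Cov[X,Y] = E[XY] - E[X]E[Y]

lhs = (ev(lambda x, y: (a * x + b * y) ** 2)
       - ev(lambda x, y: a * x + b * y) ** 2)      # Var[aX + bY] directly
rhs = a * a * var_x + 2 * a * b * cov + b * b * var_y  # via the identity
```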

77
Q

Cov[X,Y] =

A

E[XY] − E[X]⋅E[Y]

78
Q

ρX,Y=Corr[X,Y]

Coefficient of Correlation

A

Cov[X,Y]

/

SD[X]⋅SD[Y]

79
Q

Multivariate Transformation CDF Method

Case 1: Transforming two variables (X and Y) to one variable (W).

A

1.) Using the equation of transformation, W = g(X,Y), express FW(w) = Pr(W ≤ w) = Pr[ g(X,Y) ≤ w ]

2.) Calculate FW(w) = Pr(W ≤ w) by integrating over the region of integration defined by the domain of fX,Y(x,y) and g(X,Y) ≤ w (the transformation)

3.) Differentiate FW(w) to get fW(w) if required

80
Q

Multivariate Transformation PDF Method

Case 2: Transforming two variables (X1 and X2) to two variables (W1 and W2)

A

1.) Find fX1,X2(x1,x2) if not given

2.) Introduce a dummy variable (for Case 1 only)

3.) Find the inverses of the equations of transformation, h1(w1,w2) and h2(w1,w2)

4.) Calculate the determinant of the Jacobian matrix, J, and take its absolute value

5.) Find fW1,W2(w1,w2) using the following formula:

fW1,W2(w1,w2) = fX1,X2[ h1(w1,w2), h2(w1,w2) ] ⋅ |J|, J ≠ 0