Probability Flashcards
Expected value E(X)
Also known as the mean
Equal to the sum of all x*f(x)
Example: the expected value of the square of a fair die roll is 1/6 + 4/6 + 9/6 + 16/6 + 25/6 + 36/6 = 91/6
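A quick numerical check of this example, as a minimal Python sketch (a fair six-sided die is assumed, so f(x) = 1/6 for each face):

    from fractions import Fraction

    faces = range(1, 7)
    p = Fraction(1, 6)  # fair die: each face has probability 1/6
    print(sum(x**2 * p for x in faces))  # E(X^2) = 91/6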
Variance V(X)
A measure of dispersion or variability.
V(X) = E[(X-μ)^2] = (sum of all f(x)*x^2) - μ^2
Formula linking variance and expected value
V(X) = E(X^2) - E(X)^2
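Both forms can be verified numerically; a minimal Python sketch, again assuming a fair die for illustration:

    from fractions import Fraction

    faces = range(1, 7)
    p = Fraction(1, 6)
    mu = sum(x * p for x in faces)                    # E(X) = 7/2
    var_def = sum((x - mu)**2 * p for x in faces)     # E[(X - mu)^2]
    var_short = sum(x**2 * p for x in faces) - mu**2  # E(X^2) - E(X)^2
    assert var_def == var_short == Fraction(35, 12)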
Expected value of multiple variables
E(X + Y + Z) = E(X) + E(Y) + E(Z)
Variance of multiple variables
V(X + Y + Z) = V(X) + V(Y) + V(Z)
independent variables only
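Both addition rules can be illustrated with two dice. In this Python sketch the independent case uses all 36 equally likely pairs; the fully dependent case Y = X shows why the variance rule needs independence:

    from fractions import Fraction

    faces = range(1, 7)
    p = Fraction(1, 6)
    mu = sum(x * p for x in faces)
    var = sum((x - mu)**2 * p for x in faces)          # V(X) = 35/12
    # Two independent dice: each pair has joint probability (1/6)*(1/6)
    mu_sum = sum((x + y) * p * p for x in faces for y in faces)
    var_sum = sum((x + y - mu_sum)**2 * p * p for x in faces for y in faces)
    assert mu_sum == 2 * mu    # E(X + Y) = E(X) + E(Y) holds in general
    assert var_sum == 2 * var  # V(X + Y) = V(X) + V(Y) needs independence
    # Dependent case Y = X: V(X + X) = V(2X) = 4*V(X), not 2*V(X)
    assert sum((2*x - 2*mu)**2 * p for x in faces) == 4 * var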
E(h(X))
sum of all h(x)*f(x)
So E(X^2) = sum of all x^2 * f(x)
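The rule works for any function h, not just squaring. A small Python sketch with an h chosen arbitrarily for illustration:

    from fractions import Fraction

    faces = range(1, 7)
    p = Fraction(1, 6)
    h = lambda x: abs(x - 3)             # any function of the outcome
    print(sum(h(x) * p for x in faces))  # E(h(X)) = (2+1+0+1+2+3)/6 = 3/2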
E(aX + b)
a*E(X) + b
V(aX + b)
a^2 * V(X)
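A numerical check of both affine rules (a = 2 and b = 5 are arbitrary choices; note the shift b drops out of the variance):

    from fractions import Fraction

    faces = range(1, 7)
    p = Fraction(1, 6)
    a, b = 2, 5
    mu = sum(x * p for x in faces)
    var = sum((x - mu)**2 * p for x in faces)
    mu_ab = sum((a*x + b) * p for x in faces)
    var_ab = sum((a*x + b - mu_ab)**2 * p for x in faces)
    assert mu_ab == a * mu + b
    assert var_ab == a**2 * var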
P(A or B)
P(A) + P(B) - P(A and B)
P(A given B)
P(A and B)/P(B)
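Both identities can be checked on a single die roll. In this Python sketch, A = "roll is even" and B = "roll is greater than 3" are events assumed for illustration:

    from fractions import Fraction

    outcomes = set(range(1, 7))
    A = {2, 4, 6}
    B = {4, 5, 6}
    P = lambda event: Fraction(len(event), len(outcomes))  # equally likely outcomes
    assert P(A | B) == P(A) + P(B) - P(A & B)              # P(A or B)
    assert P(A & B) / P(B) == Fraction(2, 3)               # P(A given B)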
Number of ways of choosing r elements from a set of n, and taking order into account
P(n, r) = n!/(n-r)!
Our set has n objects, with n1 of one type and n2 of a second type, …, nk of a kth type.
Number of possible permutations?
n!/(n1! * n2! * … * nk!)
Number of combinations (unordered arrangements) of r elements in a set of n
C(n, r) = n!/(r! * (n-r)!)
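All three counting formulas above are available via Python's math module (math.perm and math.comb require Python 3.8+); the multiset case is shown with the letters A, B, B, C:

    import math

    assert math.perm(5, 2) == 20  # ordered: 5!/(5-2)!
    assert math.comb(5, 2) == 10  # unordered: 5!/(2! * 3!)
    # Distinct arrangements of A, B, B, C: 4!/(1! * 2! * 1!) = 12
    assert math.factorial(4) // (
        math.factorial(1) * math.factorial(2) * math.factorial(1)
    ) == 12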
Two events are independent…
if and only if P(A and B) = P(A)P(B)
==> P(A given B) = P(A) and P(B given A) = P(B)
Bayes’ theorem
P(A given B) = P(B given A) * P(A)/P(B)
General version of Bayes’ theorem for mutually exclusive and exhaustive events E1, …, En (a partition of the sample space)
P(Ek|A) = P(Ek)P(A|Ek) / [P(E1)P(A|E1) + … + P(En)P(A|En)]
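A worked Bayes example with made-up numbers (the prior, true-positive rate, and false-positive rate below are all assumed purely for illustration):

    p_d = 0.01       # P(D): assumed prior probability of the condition
    p_pos_d = 0.99   # P(+|D): assumed true-positive rate
    p_pos_nd = 0.05  # P(+|not D): assumed false-positive rate
    # Denominator uses the general form: P(+) = P(D)P(+|D) + P(not D)P(+|not D)
    p_pos = p_d * p_pos_d + (1 - p_d) * p_pos_nd
    print(p_d * p_pos_d / p_pos)  # P(D|+) = 1/6, about 0.1667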
Bernoulli distribution
A single random trial with two possible outcomes: success (X=1) and failure (X=0)
Notation: X~Ber(p), where p=P(X=1)
E(X) = p
V(X) = p(1-p)
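Both moments follow directly from the definition of expected value over the two outcomes; a minimal check with p = 0.3 (an arbitrary choice):

    from fractions import Fraction

    p = Fraction(3, 10)
    mu = 1 * p + 0 * (1 - p)  # E(X) over the two outcomes
    var = (1 - mu)**2 * p + (0 - mu)**2 * (1 - p)
    assert mu == p and var == p * (1 - p)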
Binomial distribution
X is the number of successes in n independent trials, each succeeding with probability p
Notation: X~Bin(n, p)
P(X=k) = C(n, k) * p^k * (1-p)^(n-k)
E(X) = np
V(X) = np(1-p)
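A check that the PMF behaves as stated, with n = 10 and p = 0.5 assumed for illustration (float arithmetic, so the comparisons use a tolerance):

    from math import comb

    n, p = 10, 0.5
    pmf = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)
    assert abs(sum(pmf(k) for k in range(n + 1)) - 1) < 1e-12  # sums to 1
    mean = sum(k * pmf(k) for k in range(n + 1))
    var = sum((k - mean)**2 * pmf(k) for k in range(n + 1))
    assert abs(mean - n * p) < 1e-12
    assert abs(var - n * p * (1 - p)) < 1e-12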