Chapters 1,2,3 Flashcards
- sample space: the set of possible outcomes S
- event: a subset of the sample space
- probability: each event A has an associated probability P[A] ∈ [0,1], a numerical measure of how likely the outcome falls in A
- random variable: a function X: S -> R associating a real number to each element of S
Definition: the distribution function
The distribution function of the random variable X is the function F_X : R -> [0,1] given by F_X(x) = P[X ≤ x]
This is not the PDF; it is also known as the cumulative distribution function
Doesn’t have to be continuous! Eg for discrete X it is a step function
Discrete random variable vs continuous random variable
Discrete: integer valued, taking values in a finite/countable set.
p(x) = P[X=x]. R_X is a subset of Z.
The probability function of X is p(x). F_X jumps upwards at each x ∈ R_X; the jump size is p(x) = F(x) - F(x-1) for integer-valued X (in general, F(x) minus its left limit at x). p(x) = 0 if x is not in R_X.
Continuous: takes values in R, not countable. Has a PDF - probability density function.
PDF?
Probability density function
A function f: R -> R is a probability density function if:
1) f(x) ≥ 0 for all x ∈ R
2) ∫_{-∞}^{∞} f(x) dx = 1
Ie A PDF INTEGRATES TO 1
• if f(x) is a PDF then there is a random variable X such that the distribution function of X satisfies F_X(x) = ∫_{-∞}^{x} f(u) du
Reminder for continuous random variables
Relation between PDF and probability distribution first
- if f(x) is a PDF then there is a random variable X such that the distribution function of X satisfies F_X(x) = ∫_{-∞}^{x} f(u) du
- the integral of the PDF over (-∞, x] gives the cumulative distribution function F_X(x) = P[X ≤ x]
Remember continuous PDF!!!
d/dx of F(x) = f(x)
PDF integrates to distribution
Distribution differentiates to PDF
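A minimal numerical sketch of this pair of facts, assuming X ~ Exp(1) as the example (so f(x) = e^{-x} and F(x) = 1 - e^{-x} for x ≥ 0 — standard facts, not from the notes): the PDF integrates to the distribution, and the distribution differentiates to the PDF.

```python
import math

f = lambda x: math.exp(-x)       # PDF of Exp(1)
F = lambda x: 1 - math.exp(-x)   # distribution function of Exp(1)

def integrate(g, a, b, n=100_000):
    # midpoint-rule Riemann sum, good enough for a sanity check
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

x = 2.0
area = integrate(f, 0, x)                      # ∫_0^x f(u) du
slope = (F(x + 1e-6) - F(x - 1e-6)) / 2e-6     # numerical d/dx F(x)
print(area, F(x))    # both ≈ 0.8647
print(slope, f(x))   # both ≈ 0.1353
```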
Relation between PDF and probability distribution second
If we can write the distribution function of a random variable X in the form F_X(x) = ∫_{-∞}^{x} f_X(u) du where f_X is a PDF, then X is a continuous random variable
• only continuous X have a PDF: a discrete X is not continuous and its distribution function (a step function) is not differentiable
Finding P[ x≤ X ≤y]
P[X=x]
(Continuous case)
P[ x ≤ X ≤ y ] = F(y) - F(x) (distribution function)
= ∫_x^y f(t) dt (integral of the PDF)
• in the continuous case P[X=x] = ∫_x^x f(u) du = 0 for all x. The chance of any particular value is 0.
For the discrete case this is not true: P[X=x] = p(x) can be positive.
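Both routes to P[x ≤ X ≤ y] can be checked on an assumed example, X ~ Exp(1) (F(x) = 1 - e^{-x}, f(x) = e^{-x}):

```python
import math

F = lambda x: 1 - math.exp(-x)
f = lambda x: math.exp(-x)

def integrate(g, a, b, n=100_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# P[1 <= X <= 3] computed both ways:
via_F = F(3) - F(1)           # distribution function difference
via_f = integrate(f, 1, 3)    # integral of the PDF over [1, 3]
print(via_F, via_f)           # both ≈ 0.3181

# P[X = 2] in the continuous case: the integral over [2, 2] is 0
point_prob = integrate(f, 2, 2)
print(point_prob)             # 0.0
```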
For a distribution function:
For a general function F: R -> [0,1] we say that F is a distribution function if:
1) 0 ≤ F(x) ≤ 1, with lim_{x→-∞} F(x) = 0 and lim_{x→∞} F(x) = 1.
2) F(x) is non-decreasing in x: if x ≤ y then F(x) ≤ F(y).
3) F is right continuous with left limits:
at every point x_0 in R both one-sided limits exist and F(x_0) = lim_{x→x_0+} F(x)
Continuous on the right.
Conversely if we have a function F satisfying the properties there exists a random variable X with distribution function F
Some values for PDF and probability distributions
The PDF f must be non-negative and F cannot decrease, but a PDF is not a distribution function so it doesn’t need to satisfy properties 1, 2 or 3.
f can be greater than 1 for some values of x.
P[X=x] is not equal to f(x).
If the density has a large value over a small region then the probability of that region is approximately value × size: P[x ≤ X ≤ x + δ] ≈ f(x)·δ.
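A quick sketch of both points, with a hypothetical example: X uniform on [0, 0.5], so f(x) = 2 on that interval. The density exceeds 1, yet every probability stays in [0, 1], and over a small interval the probability is density × size.

```python
# Uniform on [0, 0.5]: f(x) = 2 there, 0 elsewhere (assumed example)
f = lambda x: 2.0 if 0 <= x <= 0.5 else 0.0

x, delta = 0.2, 1e-3
prob = 2 * delta        # exact P[x <= X <= x + delta] for this uniform
approx = f(x) * delta   # density × size of the region
print(f(x))             # 2.0 — a PDF value above 1
print(prob, approx)     # both 0.002
```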
Discrete and continuous expectation.
Expectation of a function of X.
Discrete: E[X] = Σ_{x ∈ R_X} x p(x)
Continuous: E[X] = ∫_{-∞}^{∞} x f(x) dx
In general, for g(X) a function of X:
E[g(X)] = Σ_{x ∈ R_X} g(x) p(x) (discrete) or ∫_{-∞}^{∞} g(x) f(x) dx (continuous)
Discrete or continuous
µ = µ_X = E[X] (the mean)
rth moment: E[X^r], i.e. E[g(X)] with g(x) = x^r for r ∈ N
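The discrete formulas can be sketched on an assumed example, a fair six-sided die with p(x) = 1/6 on R_X = {1, …, 6}:

```python
# Fair die: p(x) = 1/6 for x in {1, ..., 6}
R_X = range(1, 7)
p = {x: 1 / 6 for x in R_X}

mean = sum(x * p[x] for x in R_X)               # E[X] = Σ x p(x)
second_moment = sum(x**2 * p[x] for x in R_X)   # E[g(X)] with g(x) = x², the 2nd moment
print(mean)            # 3.5
print(second_moment)   # 91/6 ≈ 15.1667
```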
Variance
Var(X) = E [(X - µ) ^2] = E[X^2] - µ^2
σ² = Var(X), the variance of X
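The two variance formulas can be checked to agree on the same assumed fair-die example:

```python
# Fair die: p(x) = 1/6 for x in {1, ..., 6}
R_X = range(1, 7)
p = 1 / 6
mu = sum(x * p for x in R_X)                     # E[X] = 3.5
var_def = sum((x - mu) ** 2 * p for x in R_X)    # E[(X - µ)²]
var_short = sum(x**2 * p for x in R_X) - mu**2   # E[X²] - µ²
print(var_def, var_short)   # both 35/12 ≈ 2.9167
```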
Random variables without a mean:
Example is Cauchy distribution
If the sum or integral in the definition of the mean doesn’t converge, then the mean doesn’t exist.
Let X be a random variable with probability density function:
f(x) = 1/( π (1 + x²) ); a random variable with this PDF is said to have a Cauchy distribution
Calculating the mean of X:
∫_{-∞}^{∞} x/( π (1 + x²) ) dx = lim_{s,t→∞} ∫_{-t}^{s} x/( π (1 + x²) ) dx
= lim_{s,t→∞} [ ln(1 + x²) / (2π) ] evaluated from -t to s
= lim_{s,t→∞} (1/(2π)) ( ln(1 + s²) - ln(1 + t²) )
This has no well-defined limit (it depends on how s and t tend to infinity), hence the mean is undefined.
The Cauchy distribution is not the only example of a distribution without a defined finite mean
The weak law of large numbers
The weak law of large numbers states that if we have a sequence of independent random variables X_1, X_2, … with the same distribution and with mean μ, then for any ε > 0, as n tends to infinity,
P[ |X̄_n - μ| > ε ] tends to 0, where X̄_n = (1/n) Σ_{i=1}^{n} X_i
Ie when it exists, the mean is the long-term average of samples.
For a distribution without a defined finite mean this fails: when X_1, …, X_n are independent Cauchy random variables, X̄_n also has a Cauchy distribution regardless of n. So the sample mean does not tend to μ for large n.
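A small simulation sketch of the contrast, assuming Uniform(0, 1) as the well-behaved case and using the standard trick that tan(π(U − 1/2)) has the Cauchy PDF above (assumed, not from the notes):

```python
import math, random

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean(draw, n):
    return sum(draw() for _ in range(n)) / n

# Uniform(0, 1) has mean 1/2: by the weak law, X̄_n settles near 0.5
m_unif = sample_mean(random.random, 100_000)
print(m_unif)   # close to 0.5

# Cauchy draws: X̄_n is again Cauchy for every n, so it never settles down
cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))
m_cauchy = sample_mean(cauchy, 100_000)
print(m_cauchy)  # wildly different from run to run with other seeds
```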
Properties of the normal distribution
For X ~ N(μ, σ²):
aX + b ~ N( aμ + b , a²σ² ) (given in exam)
Standardised normal: Z = (X - μ)/σ ~ N(0, 1) (given in exam)
• FOR INDEPENDENT VARIABLES
the sum of n independent X_i ~ N(μ_i, σ²_i)
~ N( sum of the μ_i’s, sum of the σ²_i’s )
THE GAMMA FUNCTION
Γ: (0, ∞) -> R
Γ(α) = ∫_0^∞ u^{α-1} e^{-u} du
Lemma 2: relating to the gamma function
•Γ(1) = 1
•For α bigger than 1
Γ(α) = (α-1)Γ(α-1)
• for n=1,2,3,…
Γ(n) = (n-1)!
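Python’s `math.gamma` implements Γ, so Lemma 2’s three properties can be checked directly (α = 3.7 is just an arbitrary test value):

```python
import math

g1 = math.gamma(1)                          # Γ(1) = 1
alpha = 3.7
lhs = math.gamma(alpha)
rhs = (alpha - 1) * math.gamma(alpha - 1)   # recursion Γ(α) = (α-1)Γ(α-1)
g5 = math.gamma(5)                          # Γ(5) = 4! = 24
print(g1)                      # 1.0
print(lhs, rhs)                # equal
print(g5, math.factorial(4))   # 24.0  24
```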
THE BETA FUNCTION
For α,β bigger than 0.
B(α,β) = (Γ(α)Γ(β)) / ( Γ(α+β))
(product of the gammas over the gamma of the sum)
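A small helper for B(α, β), built from `math.gamma` (the function name `beta_fn` is just a label chosen here):

```python
import math

def beta_fn(a, b):
    # B(α, β) = Γ(α)Γ(β) / Γ(α+β)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

print(beta_fn(1, 1))   # 1.0
print(beta_fn(2, 3))   # Γ(2)Γ(3)/Γ(5) = 1·2/24 = 1/12 ≈ 0.0833
```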
Lemma 2.3, relating to the gamma function
∫_0^∞ u^{α-1} e^{-βu} du = Γ(α)/β^α
(βu instead of u in the gamma integrand)
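Lemma 2.3 checked numerically for assumed test values α = 2.5, β = 1.5, truncating the upper limit at 50 where the integrand is negligible:

```python
import math

def integrate(g, a, b, n=200_000):
    # midpoint-rule Riemann sum
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

alpha, beta = 2.5, 1.5
lhs = integrate(lambda u: u ** (alpha - 1) * math.exp(-beta * u), 0, 50)
rhs = math.gamma(alpha) / beta ** alpha
print(lhs, rhs)   # both ≈ 0.482
```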
Properties of variables with the gamma distribution
•For two independent variables with gamma distribution with same beta:
X₁~ Ga( α₁ , β). X₂~ Ga( α₂,β). Independent then
X₁+X₂ ~ Ga( α₁+α₂, β)
• sum of n independent Exp(λ) variables is Ga(n, λ)
(Given Ga( 1, lambda) = Exp(lambda))
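The Ga(1, λ) = Exp(λ) identity can be seen directly from the densities, assuming the rate parameterisation f(x) = β^α x^{α-1} e^{-βx} / Γ(α) for x > 0 (which these notes appear to use):

```python
import math

def gamma_pdf(x, alpha, beta):
    # Ga(α, β) density in the rate parameterisation, x > 0
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

# Ga(1, λ) has PDF λ e^{-λx}: exactly the Exp(λ) density
lam = 2.0
pairs = [(gamma_pdf(x, 1, lam), lam * math.exp(-lam * x)) for x in (0.1, 1.0, 3.0)]
print(pairs)   # each pair is identical
```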
UNIVARIATE TRANSFORMS LEMMA 3.1
Suppose that g: R_X -> R is
STRICTLY MONOTONE on R_X
Then the PDF!!!! of Y = g(X)
is
f_Y(y) =
{ f_X( g⁻¹(y) ) · | dg⁻¹/dy |  for y in g(R_X)
{ 0  otherwise
Steps:
• check the PDF of X, identify R_X
• find the function g such that Y = g(X)
• check g is STRICTLY MONOTONE; find g⁻¹, dg⁻¹/dy, g(R_X)
Use a sketch!!!
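A worked example of these steps (assumed, not from the notes): take X ~ Exp(1) with R_X = (0, ∞) and Y = √X. Then g(x) = √x is strictly increasing there, g⁻¹(y) = y², |dg⁻¹/dy| = 2y, so the lemma gives f_Y(y) = f_X(y²) · 2y = 2y e^{-y²} on g(R_X) = (0, ∞).

```python
import math

def integrate(g, a, b, n=200_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Density of Y = √X from the lemma: f_Y(y) = 2y e^{-y²} on (0, ∞)
f_Y = lambda y: 2 * y * math.exp(-y ** 2)

total = integrate(f_Y, 0, 10)   # a valid PDF must integrate to 1
print(total)                    # ≈ 1.0
```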
Univariate transforms when g is not strictly monotone on the range
Work on a specific region and use probabilities to write down F_Y(y).
Differentiate for the PDF.
Eg Y = X²: then X² ≤ y means -√y ≤ X ≤ √y; use integrals to find this probability.
Check the range!!! Be careful!!!!
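A sketch of the non-monotone route on an assumed example: X ~ Uniform(-1, 1) and Y = X². Since g(x) = x² is not monotone on (-1, 1), go via the distribution: F_Y(y) = P[-√y ≤ X ≤ √y] = √y for 0 < y < 1, then differentiate to get f_Y(y) = 1/(2√y).

```python
import math

F_Y = lambda y: math.sqrt(y)          # F_Y(y) = √y for 0 < y < 1
f_Y = lambda y: 1 / (2 * math.sqrt(y))  # its derivative

y, h = 0.25, 1e-6
slope = (F_Y(y + h) - F_Y(y - h)) / (2 * h)   # numerical derivative of F_Y
print(slope, f_Y(y))   # both 1.0
```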