Random Vectors and Joint Distributions: Flashcards
Joint density
▪ Definition
If X and Y are discrete random variables, then (X, Y) is also discrete, and we denote by p(x,y) its density
More explicitly
p(x,y) = P(X=x, Y=y)
called the joint density of X and Y
▪ In this context the densities pₓ and py are called marginal densities
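A minimal Python sketch of this definition (the table values are a made-up example, not from the slides): a joint density stored as a dict keyed by (x, y).

    # Hypothetical joint density p(x, y) = P(X = x, Y = y), stored as a dict.
    # The probabilities must be nonnegative and sum to 1.
    p = {
        (0, 0): 0.3, (0, 1): 0.2,
        (1, 0): 0.1, (1, 1): 0.4,
    }

    assert abs(sum(p.values()) - 1.0) < 1e-12  # sanity check: a valid density

    print(p[(1, 1)])  # P(X = 1, Y = 1) = 0.4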
Marginal densities
▪ Theorems
1)
▪ pₓ (x) = Σy pₓ,y (x,y)
▪ py (y) = Σₓ pₓ,y (x,y)
2) Let X and Y be two discrete random variables, taking values in E and F respectively, and
▪ f: E × F → R, then
▪ E(f(X,Y)) = Σₓ,y f(x,y) pₓ,y (x,y)
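Continuing the sketch above, both parts of the theorem can be checked numerically on the same hypothetical table:

    # Marginal densities obtained by summing the joint density:
    # p_X(x) = sum over y of p(x, y), and p_Y(y) = sum over x of p(x, y).
    p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

    p_X = {}
    p_Y = {}
    for (x, y), prob in p.items():
        p_X[x] = p_X.get(x, 0.0) + prob
        p_Y[y] = p_Y.get(y, 0.0) + prob

    print(p_X)  # {0: 0.5, 1: 0.5}
    print(p_Y)  # ≈ {0: 0.4, 1: 0.6} (up to float rounding)

    # E(f(X, Y)) = sum over (x, y) of f(x, y) * p(x, y); here f(x, y) = x * y:
    E_f = sum(x * y * prob for (x, y), prob in p.items())
    print(E_f)  # 0.4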
Independent random variables
▪ Definition
Let:
▪ X be a random variable with values in E
▪ Y be a random variable with values in F
X,Y are independent if for every
▪ A ⊆ E
▪ B ⊆ F
P(X∈A, Y∈B) = P(X∈A) * P(Y∈B)
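A sketch of the definition in Python: build an independent joint density as a product of marginals and check one choice of A and B (the marginals 0.5/0.5 and 0.4/0.6 are assumed purely for illustration):

    # Independent pair: p(x, y) = p_X(x) * p_Y(y) by construction.
    p = {(x, y): px * py
         for x, px in {0: 0.5, 1: 0.5}.items()
         for y, py in {0: 0.4, 1: 0.6}.items()}

    A, B = {1}, {0, 1}
    lhs = sum(prob for (x, y), prob in p.items() if x in A and y in B)
    rhs = (sum(prob for (x, y), prob in p.items() if x in A)
           * sum(prob for (x, y), prob in p.items() if y in B))
    print(lhs, rhs)  # both 0.5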
Independent random variables
▪ Theorem
1)
X and Y are independent iff
▪ ∀x∈E, ∀y∈F
▪ pₓ,y(x,y) = pₓ(x) * py(y)
2)
Let X,Y
▪ independent
▪ discrete
▪ real valued random variables
Then
E(XY) = E(X) * E(Y)
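A quick numerical check of part 2 on the same hypothetical independent pair:

    # For an independent pair the joint density factorizes, so E(XY) = E(X) * E(Y).
    p_X = {0: 0.5, 1: 0.5}
    p_Y = {0: 0.4, 1: 0.6}
    p = {(x, y): p_X[x] * p_Y[y] for x in p_X for y in p_Y}

    E_X = sum(x * px for x, px in p_X.items())   # 0.5
    E_Y = sum(y * py for y, py in p_Y.items())   # 0.6
    E_XY = sum(x * y * prob for (x, y), prob in p.items())
    print(E_XY, E_X * E_Y)  # both 0.3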
Covariance
▪ Definition
Let X,Y be two real valued random variables
The covariance between X and Y is defined by
Cov(X,Y) = E[(X-E(X))(Y-E(Y))]
or, equivalently,
Cov(X,Y) = E(XY) - E(X)E(Y)
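The two formulas can be checked against each other; a sketch on the hypothetical dependent table from the first card:

    # Covariance computed both ways; the results must agree.
    p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

    E_X = sum(x * prob for (x, y), prob in p.items())       # 0.5
    E_Y = sum(y * prob for (x, y), prob in p.items())       # 0.6
    E_XY = sum(x * y * prob for (x, y), prob in p.items())  # 0.4

    cov_def = sum((x - E_X) * (y - E_Y) * prob for (x, y), prob in p.items())
    cov_short = E_XY - E_X * E_Y
    print(cov_def, cov_short)  # both ≈ 0.1 (up to float rounding)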
Covariance
▪ Properties
1) Cov(X,Y) = Cov(Y, X)
2) Cov(X,aY + bZ) = aCov(X,Y)+bCov(X,Z)
3) Cov(X,X) = Var(X)
4) If X and Y are independent then Cov(X,Y) = 0
If Cov(X,Y) = 0 we say that X and Y are uncorrelated
We have seen that independent ⇒ uncorrelated
(but the converse is not necessarily true; see the counterexample below)
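A sketch of the classic counterexample: X uniform on {-1, 0, 1} and Y = X², which are uncorrelated but not independent.

    # X uniform on {-1, 0, 1}, Y = X**2: the joint density sits on 3 points.
    p = {(x, x**2): 1/3 for x in (-1, 0, 1)}

    E_X = sum(x * prob for (x, y), prob in p.items())       # 0
    E_Y = sum(y * prob for (x, y), prob in p.items())       # 2/3
    E_XY = sum(x * y * prob for (x, y), prob in p.items())  # 0
    print(E_XY - E_X * E_Y)  # Cov(X, Y) = 0 -> uncorrelated

    # Not independent: p(0, 0) = 1/3, but p_X(0) * p_Y(0) = (1/3) * (1/3).
    print(p[(0, 0)], (1/3) * (1/3))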
Correlation coefficient
▪ Definition
The correlation coefficient detects exact or approximate linear relations between random variables
⍴(X,Y) = Cov(X,Y) / ( √Var(X) * √Var(Y) )
(defined when Var(X) > 0 and Var(Y) > 0)
Correlation coefficient
▪ Properties
▪ ⍴(aX + b, cY + d) = ⍴(X,Y) for a, c > 0 (the sign flips if ac < 0)
▪ -1 ⩽ ⍴(X,Y) ⩽ 1
▪ ⍴(X,Y) = 1 iff Y=aX + b for some a>0
▪ ⍴(X,Y) = -1 iff Y=aX + b for some a<0
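A sketch checking the first property numerically; the helper rho and the constants a = 2, b = 1, c = 3, d = -5 are my own choices for illustration:

    from math import sqrt

    p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

    def rho(p):
        # Correlation coefficient of a joint density given as {(x, y): prob}.
        E_X = sum(x * pr for (x, y), pr in p.items())
        E_Y = sum(y * pr for (x, y), pr in p.items())
        cov = sum((x - E_X) * (y - E_Y) * pr for (x, y), pr in p.items())
        var_X = sum((x - E_X) ** 2 * pr for (x, y), pr in p.items())
        var_Y = sum((y - E_Y) ** 2 * pr for (x, y), pr in p.items())
        return cov / (sqrt(var_X) * sqrt(var_Y))

    # Affine change of variables with a = 2, c = 3 > 0 leaves rho unchanged.
    p_affine = {(2 * x + 1, 3 * y - 5): pr for (x, y), pr in p.items()}
    print(rho(p), rho(p_affine))  # equal (up to float rounding)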
Correlation coefficient
▪ Theorem and Corollary
Theorem
Var(X+Y)= Var(X) + Var(Y) + 2Cov(X,Y)
Corollary
If X and Y are uncorrelated (in particular if they are independent), then
Var(X+Y) = Var(X) + Var(Y)
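A numerical sanity check of the theorem on the hypothetical dependent table used earlier (the helper E is mine):

    p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

    def E(f):
        # Expectation of f(X, Y) against the joint density p.
        return sum(f(x, y) * pr for (x, y), pr in p.items())

    var_sum = E(lambda x, y: (x + y) ** 2) - E(lambda x, y: x + y) ** 2
    var_X = E(lambda x, y: x ** 2) - E(lambda x, y: x) ** 2
    var_Y = E(lambda x, y: y ** 2) - E(lambda x, y: y) ** 2
    cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
    print(var_sum, var_X + var_Y + 2 * cov)  # equal (up to float rounding)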
Random vectors
▪ Notions and results in any dimension
▪ If X₁ ,X₂, …, Xₙ are discrete random variables, their joint density is defined by
px₁,x₂,…,xₙ (x₁, x₂, …, xₙ) =
P(X₁ = x₁, X₂ = x₂, …, Xₙ = xₙ)
▪ pxᵢ (xᵢ) = Σxⱼ, j≠i px₁,x₂,…,xₙ (x₁, x₂, …, xₙ)
(the i-th marginal: sum over all coordinates except xᵢ)
▪ E[f(X₁, X₂, …, Xₙ)] =
Σx₁,x₂,…,xₙ f(x₁, x₂, …, xₙ) * px₁,x₂,…,xₙ (x₁, x₂, …, xₙ)
▪ X₁, X₂, …, Xₙ are said to be independent if, for all choices of A₁, A₂, …, Aₙ,
P(X₁∈A₁, X₂∈A₂, …, Xₙ∈Aₙ) = ∏ₖ P(Xₖ∈Aₖ)
or, equivalently,
px₁,x₂,…,xₙ (x₁, x₂, …, xₙ) = ∏ₖ pxₖ (xₖ)
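A closing sketch in n = 3 dimensions (three independent fair bits, a made-up example): the i-th marginal as a sum over the other coordinates, and E[f] as a sum against the joint density.

    from itertools import product

    # Joint density of three independent fair bits: each of the 8 points has mass 1/8.
    p = {xs: 0.5 ** 3 for xs in product((0, 1), repeat=3)}

    # Marginal of X2: sum over all coordinates except the second one.
    p_X2 = {}
    for xs, pr in p.items():
        p_X2[xs[1]] = p_X2.get(xs[1], 0.0) + pr
    print(p_X2)  # {0: 0.5, 1: 0.5}

    # E[f(X1, X2, X3)] as a sum against the joint density; here f = sum of bits.
    E_f = sum((x1 + x2 + x3) * pr for (x1, x2, x3), pr in p.items())
    print(E_f)  # 1.5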