Random Variables Flashcards
Definition Random Variables
A random variable is an object, usually a number or a vector, whose value depends on the outcome of a random experiment.
Definition
Let
–> S be the sample space of a probability model
–> E be a set
A function:
X : S –> E
s –> X(s)
is called a random variable with values in E
Distribution of X
To every subset
B ⊆ E
we associate
P(X ∈ B) = µx(B)
It can be shown that µx is a probability on the subsets of E, called the distribution of X
Discrete Random Variable
A random variable X is said to be discrete if it can attain finitely or countably many values.
More precisely, there exists a finite or countable set
N ⊆ E
such that
P(X ∈ N) = 1
Discrete Random Variable : countable elements
A set N is countable if its elements can be enumerated:
N = {x1, x2, x3, …}
–> countable sets:
- the integers
- the rational numbers
–> not countable:
- the real numbers
Discrete density
If X is a discrete random variable, define
px: E –> [0,1]
x –> P(X=x)
px(x) = P(X=x)
px is called the probability mass function or discrete density of X
By σ-additivity of the probability, for every subset A ⊆ E we have
µx(A) = P(X ∈ A) = Σ_{x∈A} P(X=x)
µx(A) = Σ_{x∈A} px(x)
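A minimal Python sketch of this card (the fair-die density and the helper name mu are illustrative, not from the flashcards): the density is stored as a dictionary x –> px(x), and µx(A) is the sum of px over A.

# Discrete density of a fair six-sided die (illustrative example)
px = {x: 1/6 for x in range(1, 7)}

# mu_X(A) = P(X in A) = sum of px(x) over x in A
def mu(A):
    return sum(px[x] for x in A if x in px)

print(mu({2, 4, 6}))   # probability of an even outcome: 0.5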
Expected value (or mean value)
Let X be a discrete random variable with values in R
Denote by
N = {x1, x2, x3, …}
the finite or countable set of its values
The expected value of X is defined by
E(X) = Σ_i xi px(xi) = Σ_{x∈N} x px(x)
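A quick sketch of the formula on the same illustrative fair-die density:

px = {x: 1/6 for x in range(1, 7)}           # fair-die density (illustrative)
E_X = sum(x * p for x, p in px.items())      # E(X) = sum of x * px(x)
print(E_X)                                    # 3.5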
Properties of expectation
1) Constant random variables
2) Invariance by linear transformation
3) E[f(X)] = Σ_i f(xi) px(xi)
4) Linearity
Property of expectation (1): constant random variable
Let c be a real number. A constant can be viewed as a random variable that attains the value c with probability 1, so:
E(c) = c * 1 = c
Property of expectation (2): Invariance by linear transformation
Suppose
–> X is a:
- discrete
- real-valued
- random variable
–> a, b ∈ R
Then
E(aX + b) = a * E(X) + b
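A numeric check of the property on the illustrative fair-die density (a and b are arbitrary):

px = {x: 1/6 for x in range(1, 7)}            # fair-die density (illustrative)
E_X = sum(x * p for x, p in px.items())       # E(X) = 3.5
a, b = 2.0, 3.0
E_lin = sum((a * x + b) * p for x, p in px.items())   # E(aX + b) from the density of X
print(E_lin, a * E_X + b)                     # both 10.0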
Property of expectation (3)
Suppose X is a:
- discrete random variable
- with values in the set E
Let f: E –> R
Remark: a random variable is itself a function from the sample space S to E, X: S –> E
S –X–> E –f–> R
so f(X) is the composition of two functions, applied one after the other
E[f(X)] = Σ_i f(xi) px(xi)
How to read this formula
If X is a discrete random variable with known density, and f is a generic transformation of X, then the expected value of f(X) can be computed directly from the density of X, without first finding the distribution of f(X).
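A sketch of property (3) on the illustrative fair-die density, with f(x) = x² as the generic transformation:

px = {x: 1/6 for x in range(1, 7)}            # fair-die density (illustrative)
f = lambda x: x ** 2                          # any function f: E -> R
E_fX = sum(f(x) * p for x, p in px.items())   # E[f(X)] = sum of f(x) * px(x)
print(E_fX)                                   # E(X^2) = 91/6 ≈ 15.17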
Property of expectation (4): Linearity
Suppose:
X and Y are two real-valued random variables defined on the same sample space S
If a, b ∈ R, then
E(aX+bY) = aE(X) + bE(Y)
Moments of X
Suppose
X is a discrete real valued random variable
For m>1 consider the random variable X^m
By property 3
E(X^m) = Σ_i xi^m px(xi)
Variance of X
High variance indicates that X attains values far from its expectation with significant probability
Let µ = E(X). The variance of X is defined by
Var(X) = E[(X - µ)²] = Σ_i (xi - µ)² px(xi)
Standard deviation
SD(X) = √Var(X)
Properties of Variance
1) Var(X) = E(X²) - (E(X))²
2) Var(aX+b) = a² * Var(X)
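A sketch checking both variance properties on the illustrative fair-die density:

px = {x: 1/6 for x in range(1, 7)}            # fair-die density (illustrative)
E_X  = sum(x * p for x, p in px.items())
E_X2 = sum(x**2 * p for x, p in px.items())
var_def  = sum((x - E_X)**2 * p for x, p in px.items())   # Var(X) = sum (x - mu)^2 px(x)
var_prop = E_X2 - E_X**2                                   # property (1)
print(var_def, var_prop)                                   # both 35/12 ≈ 2.917

a, b = 2.0, 3.0
var_lin = sum((a*x + b - (a*E_X + b))**2 * p for x, p in px.items())
print(var_lin, a**2 * var_def)                             # property (2): both 35/3 ≈ 11.667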
Bernoulli random variables
A Bernoulli random variable takes only the values 0 and 1
Density –> X~Be(p)
P(X=1) = p
P(X=0) = 1-p
E(X) = p
Var(X) = p(1-p)
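A sketch checking the Bernoulli mean and variance for an illustrative p:

p = 0.3
px = {1: p, 0: 1 - p}                         # Be(p) density (p = 0.3 is illustrative)
E_X   = sum(x * q for x, q in px.items())
Var_X = sum((x - E_X)**2 * q for x, q in px.items())
print(E_X, Var_X)                             # 0.3 and 0.21 = p*(1-p)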
Binomial random variables
Consider a random experiment of repeated independent trials, each with probability of success p
Denote by X the number of successes in the first n trials
px(k) = P(X=k) = (n choose k) p^k (1-p)^(n-k),   k = 0, 1, …, n
Density –> X~Bin(n,p)
E(X) = np
Var(X) = np(1-p)
E(X^m) = np E[(Y+1)^(m-1)]
(with Y ~ Bin(n-1, p))
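A sketch of the Bin(n, p) density and its mean and variance, using math.comb for the binomial coefficient (n and p are illustrative):

from math import comb

n, p = 10, 0.3
px = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
E_X   = sum(k * q for k, q in px.items())
Var_X = sum((k - E_X)**2 * q for k, q in px.items())
print(E_X, Var_X)                             # ≈ 3.0 = np and ≈ 2.1 = np(1-p)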
Poisson random variables
We say that X is a Poisson random variable with parameter λ>0, and we write
X~Pois(λ)
px(n) = P(X=n) = (e^-λ) * (λ^n)/(n!)
Note: this is a density since
Σ_{n≥0} (λ^n)/(n!) = e^λ
Poisson random variables emerge as approximations of binomial random variables
This approximation is used when:
▪ p«1 and n»1
▪ np = λ is neither small nor large
▪ np²«1
E(X) = λ
Var(X) = λ
E(X^m) = λ E((X+1)^(m-1))
If m = 2 then E(X²) = λ²+λ
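A sketch of the approximation: for large n and small p with λ = np, the Bin(n, p) and Pois(λ) densities nearly coincide (the values of n and p are illustrative):

from math import comb, exp, factorial

n, p = 1000, 0.002            # n >> 1, p << 1, n*p^2 = 0.004 << 1
lam = n * p                   # lambda = np = 2
for k in range(5):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    pois  = exp(-lam) * lam**k / factorial(k)
    print(k, round(binom, 5), round(pois, 5))   # the two columns are very close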
Geometric random variables
Consider repeated independent trials with probability of success p, and denote by X the number of the trial on which the first success occurs
X = n <–> the first n-1 trials are failures and the nth trial is a success
px(n) = P(X=n) = p * (1-p)^(n-1)
X~Geo(p)
E(X) = 1/p
Var(X) = (1-p)/p²
Loss of memory property
P ( X>n+k|X>n ) = P ( X>k )
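A sketch of the loss of memory property, using the geometric tail P(X>m) = (1-p)^m (the first m trials are all failures); the values of p, n, k are illustrative:

p = 0.2
tail = lambda m: (1 - p) ** m          # P(X > m) for X ~ Geo(p)
n, k = 4, 3
# P(X > n+k | X > n) = P(X > n+k) / P(X > n)
print(tail(n + k) / tail(n), tail(k))  # both equal (1-p)^k = 0.512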