Ch 3 Flashcards

1
Q

When is a continuous rv uniform? (X~Unif(a,b))

A

If X has a constant pdf
fX(x)=1/(b-a), for a<=x<=b
0, otherwise
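
As a quick sketch (the function name is mine, not from the card), the constant pdf can be written directly:

```python
def uniform_pdf(x, a, b):
    # Constant density 1/(b - a) on [a, b], zero elsewhere.
    return 1.0 / (b - a) if a <= x <= b else 0.0
```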

2
Q

How do we define the indicator function?

A

I(S) := 1, if S is true,
0, if S is false.
Sometimes the statement S is implicit and, for a set A, we write
I_A(x) := I(x ∈ A) = 1, if x ∈ A,
0, if x ∉ A.
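
A minimal sketch of the two forms (function names are illustrative):

```python
def indicator(statement):
    # I(S): 1 if the statement S is true, 0 otherwise.
    return 1 if statement else 0

def indicator_of_set(x, A):
    # I_A(x) := I(x in A) for a set A.
    return indicator(x in A)
```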

3
Q

Define a Bernoulli random variable. (X~Bern(p))

A

A Bernoulli random variable with parameter p (where 0 ≤ p ≤ 1) is a random variable such that P (X = 1) = p
and P (X = 0) = 1 − p

4
Q

Explain simply what a Bernoulli random variable is.

A

A one-off trial that succeeds with probability p.

5
Q

Define Binomial distribution. (X~Binom(n,p))

A

The probability of getting exactly k successes in n trials is given by the binomial probability mass function (PMF):
P(X = k) = nCk * p^k * (1 − p)^(n − k)

Where:
• nCk is the binomial coefficient, which gives the number of ways to choose k successes from n trials.
• p is the probability of success on any given trial.
• (1 − p) is the probability of failure.
• k is the number of successes (where k = 0, 1, 2, …, n).
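
A quick sketch of the PMF (names are mine), with a sanity check that the probabilities over k = 0, …, n sum to 1:

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) = nCk * p^k * (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Sanity check: the pmf sums to 1 over k = 0, ..., n.
total = sum(binom_pmf(k, 10, 0.3) for k in range(11))
```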

6
Q

Explain simply the binomial distribution.

A

a discrete probability distribution that describes the number of successes in a fixed number of independent and identical trials of a binary experiment, where each trial has only two possible outcomes: success or failure. The binomial distribution is characterized by the number of trials n and the probability of success p in each trial

7
Q

Does the binomial distribution model a discrete or continuous random variable?

A

Discrete (the number of successes in n trials).

8
Q

Define geometric distribution. (X~Geom(p))

A

P(X=k) = (1-p)^(k-1) * p, for k = 1, 2, …
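
A sketch of the pmf (names are mine), with a partial-sum check that the probabilities over k = 1, 2, … approach 1:

```python
def geom_pmf(k, p):
    # P(X = k) = (1 - p)^(k - 1) * p: first success on trial k, k = 1, 2, ...
    return (1 - p) ** (k - 1) * p

# The probabilities over k = 1, 2, ... sum to 1; a long partial sum gets close.
approx_total = sum(geom_pmf(k, 0.3) for k in range(1, 200))
```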

9
Q

Explain simply what geometric distribution is.

A

a discrete probability distribution that models the number of trials needed to get the first success in a sequence of independent and identical Bernoulli trials

10
Q

Explain the memoryless quality of the geometric distribution

A

For all k, j ∈ {1, 2, …}: P(X = k+j | X > j) = P(X = k).
If we have already had j failures without a success, then the probability that the first success comes on the k-th subsequent trial is the same as if we had just started.
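
The identity can be checked numerically (a sketch; names and the chosen p, k, j are mine):

```python
def geom_pmf(k, p):
    # P(X = k) = (1 - p)^(k - 1) * p
    return (1 - p) ** (k - 1) * p

def geom_tail(j, p):
    # P(X > j) = (1 - p)^j: the first j trials all fail.
    return (1 - p) ** j

p, k, j = 0.3, 4, 6
lhs = geom_pmf(k + j, p) / geom_tail(j, p)  # P(X = k + j | X > j)
rhs = geom_pmf(k, p)                        # equal, by memorylessness
```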

11
Q

Define Poisson distribution. (X~Pois(lambda))

A

A random variable X such that for k ∈ {0, 1, 2, . . . }
P (X = k) = e^(-λ)*λ^k/k!
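
A sketch of the pmf (names are mine), with a partial-sum check that the probabilities over k = 0, 1, 2, … approach 1:

```python
from math import exp, factorial

def pois_pmf(k, lam):
    # P(X = k) = e^(-lam) * lam^k / k!
    return exp(-lam) * lam ** k / factorial(k)

# Sanity check: the pmf sums to 1 over k = 0, 1, 2, ... (long partial sum).
approx_total = sum(pois_pmf(k, 2.5) for k in range(100))
```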

12
Q

Explain Poisson distribution simply

A

a discrete probability distribution that models the number of events occurring within a fixed interval of time or space, under the assumption that these events occur independently and at a constant average rate

13
Q

What does lambda represent in the Poisson distribution?

A

The expected number of events in the interval (λ = E[X]). In the binomial approximation with large n and small p, λ = n*p.

14
Q

Define an exponential random variable. (X~Exp(lambda))

A

A random variable X with pdf
fX(x) = λe^(−λx) · 1[0,∞)(x) =
0, x < 0,
λe^(−λx), x ≥ 0,
where λ > 0.

15
Q

Explain simply exponential distribution

A

a continuous random variable that models the time between events in a Poisson process, which describes a series of events happening independently at a constant average rate

16
Q

What is the cdf of an exponential random variable?

A

Let X ∼ Exp(λ) for λ > 0. Then the cdf of X is given by FX(x) = (1 − e^(−λx)) · 1[0,∞)(x).
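
A sketch (names and the chosen λ, x are mine) checking numerically that this cdf is the integral of the pdf from the previous card, via a midpoint Riemann sum:

```python
from math import exp

def exp_pdf(x, lam):
    # f_X(x) = lam * e^(-lam * x) for x >= 0, else 0.
    return lam * exp(-lam * x) if x >= 0 else 0.0

def exp_cdf(x, lam):
    # F_X(x) = 1 - e^(-lam * x) for x >= 0, else 0.
    return 1 - exp(-lam * x) if x >= 0 else 0.0

# Midpoint-rule check that the cdf is the integral of the pdf over [0, x].
lam, x, n = 2.0, 1.5, 100_000
dx = x / n
integral = sum(exp_pdf((i + 0.5) * dx, lam) * dx for i in range(n))
```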

17
Q

Explain the memoryless quality of exponential distribution

A

For all t, s ≥ 0:
P(X > t + s | X > s) = P(X > t).
This means that when we have already waited s time units and nothing has happened, the probability of something happening in the next t time units is the same as if we had just started. I.e. there is no memory of the already elapsed time.
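
As with the geometric case, the identity can be checked numerically using the tail P(X > t) = e^(−λt) (a sketch; names and the chosen λ, t, s are mine):

```python
from math import exp

def exp_tail(t, lam):
    # P(X > t) = e^(-lam * t) for X ~ Exp(lam), t >= 0.
    return exp(-lam * t)

lam, t, s = 0.8, 2.0, 3.0
lhs = exp_tail(t + s, lam) / exp_tail(s, lam)  # P(X > t + s | X > s)
rhs = exp_tail(t, lam)                         # equal, by memorylessness
```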

18
Q

When are two random variables independent?

A

We call X and Y independent if for all sets A, B we have
P (X ∈ A and Y ∈ B) = P (X ∈ A) P (Y ∈ B) .

19
Q

If X and Y are discrete random variables, when are they independent?

A

Then X and Y are independent if and only if
P(X = x, Y = y) = P(X = x) P(Y = y) for all x, y.