Exam Questions Flashcards

1
Q

How to argue that something is a Markov-chain?

A

Claim that the events that trigger a change of state are scheduled to take place after a random time with an exponential distribution. Since the exponential distribution is memoryless, the future then depends only on the current state (the Markov property).

2
Q

How to find the transition rates Q?

A

Usually you can set q_{i,i+1} = λ for transitions that increase the state by 1, q_{i,i−1} = iμ for transitions that decrease it by 1, and the diagonal q_{ii} = −λ − iμ, except q_{00} = −λ (no departures from an empty system) and q_{nn} = −nμ (no arrivals into a full system).

Set all other entries to 0, so that each row sums to 0.
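As a sketch of this rule (assuming an M/M/n-style birth-death chain with hypothetical rates λ = 2 and μ = 1), the generator Q can be built like this:

```python
import numpy as np

def build_generator(n, lam, mu):
    """Generator Q for a birth-death chain on states 0..n:
    arrivals at rate lam (state +1), departures at rate i*mu (state -1)."""
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            Q[i, i + 1] = lam        # increase by 1: arrival
        if i > 0:
            Q[i, i - 1] = i * mu     # decrease by 1: departure at rate i*mu
        Q[i, i] = -Q[i].sum()        # diagonal makes each row sum to 0
    return Q

Q = build_generator(3, lam=2.0, mu=1.0)
```

Note how the diagonal comes out as −λ in state 0 and −nμ in state n automatically, because the missing transitions are simply absent.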

3
Q

How to explain the qij for non-trivial values?

A

First describe the distribution of the time until an increase (e.g. Exp(λ)) and the time until a decrease (e.g. Exp(iμ)). Then explain that the minimum of the two is again exponentially distributed, with the two rates added together: Exp(λ + iμ).

The probability that the next event is an arrival is the arrival rate divided by the total rate, λ/(λ + iμ). (Same for the leaver.)

Then by multiplying this probability by the total rate you get the transition rate, e.g. (λ/(λ + iμ)) · (λ + iμ) = λ.
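A small simulation can back this up (hypothetical rates λ = 2 for arrivals and iμ = 3 for departures): the minimum of the two exponentials should have mean 1/(λ + iμ) = 0.2, and the arrival should win with probability λ/(λ + iμ) = 0.4.

```python
import random

random.seed(1)
lam, i_mu = 2.0, 3.0          # hypothetical arrival / departure rates
n = 200_000

count_arrival_first, total_min = 0, 0.0
for _ in range(n):
    t_arr = random.expovariate(lam)    # time until next arrival
    t_dep = random.expovariate(i_mu)   # time until next departure
    total_min += min(t_arr, t_dep)
    count_arrival_first += t_arr < t_dep

mean_min = total_min / n               # should be near 1/(lam + i_mu) = 0.2
p_arrival = count_arrival_first / n    # should be near lam/(lam + i_mu) = 0.4
```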

4
Q

How to reason that a discrete time Markov chain is ergodic?

A

Argue: “All states communicate (we can reach any state from any other state), the state space is finite, and the chain is aperiodic; a finite irreducible chain is positive recurrent, so the chain is ergodic.”

5
Q

What is PASTA?

A

PASTA (Poisson Arrivals See Time Averages) states that the probability 𝛼i that a Poisson arrival finds the system in state i equals the stationary probability: 𝛼i = 𝜋i. Thus the chance that you arrive to the system in state i is equal to the long-run fraction of time the system spends in that state.

6
Q

How to find the number of customers that are lost in a limited (finite-capacity) Markov chain?

A

You can use PASTA: the probability that an arriving customer finds the system full is 𝛼n = 𝜋n (i.e. you just need to find the stationary distribution). The long-run number of customers lost per unit time is then λ𝜋n.
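As a minimal sketch (assuming an M/M/1 queue with capacity N and hypothetical rates λ = 1, μ = 2): by detailed balance 𝜋i is proportional to (λ/μ)^i, and PASTA then gives the loss probability as 𝜋N.

```python
# Stationary distribution of an M/M/1 queue with capacity N (hypothetical rates)
lam, mu, N = 1.0, 2.0, 5
rho = lam / mu

weights = [rho**i for i in range(N + 1)]   # detailed balance: pi_i ∝ rho**i
total = sum(weights)
pi = [w / total for w in weights]          # normalised stationary distribution

p_loss = pi[N]                 # by PASTA: fraction of arrivals finding it full
lost_per_unit_time = lam * p_loss          # long-run losses per unit time
```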

7
Q

What formula states that we can find a long run average?

A

𝜋i = lim (t→∞) (1/t) ∫₀ᵗ 1{X(s) = i} ds (given X(0) = j)

Note: you can leave out the |X(0) = j part, since the long-run average does not depend on the initial state.

8
Q

How to find the limiting distribution (using Q)?

A

𝜋Q = 0, together with the normalisation Σi 𝜋i = 1.
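Numerically, 𝜋Q = 0 plus the normalisation can be solved as one least-squares system; here is a sketch for a small hypothetical 3-state generator:

```python
import numpy as np

# Hypothetical generator of an irreducible 3-state continuous-time Markov chain
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 1.0,  1.0, -2.0]])

A = np.vstack([Q.T, np.ones(3)])        # pi Q = 0  <=>  Q^T pi^T = 0
b = np.array([0.0, 0.0, 0.0, 1.0])      # last equation: sum(pi) = 1
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```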

9
Q

What is the Markov instantaneous reward theorem?

A

lim (t→∞) (1/t) ∫₀ᵗ r(X(s)) ds = Σi 𝜋i r(i)

i.e. the long-run average reward rate equals the reward earned in each state weighted by the stationary probability of being in that state.
10
Q

If you have two different Poisson distributions, what can you do with them?

A

You can easily add them together into one Poisson distribution: the sum of two independent Poisson random variables is again Poisson, with rate λ₁ + λ₂. Equivalently, merging two independent Poisson processes gives a Poisson process with rate λ₁ + λ₂.
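A quick check by simulation (hypothetical rates λ₁ = 2 and λ₂ = 3): the merged counts should behave like Poisson(5), which has mean and variance both equal to 5.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 2.0, 3.0          # hypothetical rates
n = 200_000

# Counts from two independent Poisson sources, added together
merged = rng.poisson(lam1, n) + rng.poisson(lam2, n)

mean, var = merged.mean(), merged.var()   # both should be near lam1 + lam2 = 5
```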

11
Q

How to calculate the probability that the time to the first event is longer than t? (Using Poisson)

A

P(T₁ > t) = P(N(t) = 0) = e^(−λt), since the time to the first event is longer than t exactly when no events occur in [0, t].
12
Q

How to find the expectation of the minimum of two different exponential distributions (e.g. the time to the first event of two Poisson processes)?

A

The minimum of two independent exponentials with rates λ₁ and λ₂ is Exp(λ₁ + λ₂), so its expectation is 1/(λ₁ + λ₂).
13
Q

How to show the memoryless property of the exponential distribution (the interarrival times of the Poisson process)?

A

P(X > s + t | X > s) = P(X > s + t) / P(X > s) = e^(−λ(s+t)) / e^(−λs) = e^(−λt) = P(X > t)

i.e. having already waited s does not change the distribution of the remaining waiting time.

14
Q

What is the PMF (probability mass function) of the Poisson distribution?

A

P(N = k) = e^(−λ) λ^k / k!, for k = 0, 1, 2, …
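A direct sketch of the PMF (with a hypothetical rate λ = 3), checking that it sums to 1 and has mean λ:

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for N ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3.0
# Truncated sums over k = 0..99 (the tail beyond is negligible for lam = 3)
total = sum(poisson_pmf(k, lam) for k in range(100))   # ≈ 1
mean = sum(k * poisson_pmf(k, lam) for k in range(100))  # ≈ lam
```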
15
Q

How to find the probability of needing at most n steps to reach an absorbing state?

A

We can just use (Pⁿ)₀,ₖ, the (0, k) entry of the n-step transition matrix Pⁿ. Note: since state k is absorbing, this already gives the probability of having been absorbed within n steps, so you do not have to add a sum.
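A sketch with a hypothetical 3-state chain in which state 2 is absorbing: (Pⁿ)₀,₂ gives absorption within n steps, and differencing consecutive powers gives absorption in exactly n steps.

```python
import numpy as np

# Hypothetical transition matrix; state 2 is absorbing (row [0, 0, 1])
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

n = 10
Pn = np.linalg.matrix_power(P, n)
p_within_n = Pn[0, 2]   # P(absorbed within n steps | start in state 0)

# Absorbed in exactly n steps: within n but not within n-1
p_exactly_n = Pn[0, 2] - np.linalg.matrix_power(P, n - 1)[0, 2]
```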

16
Q

What is often important to mention when using the Poisson distribution?

A

That it is memoryless.

17
Q

How to show that the stationary distribution exists in a Queueing system?

A
  1. State that all states are communicating
  2. Verify that λ < cμ (the arrival rate is less than the total service rate of the c servers)
18
Q

When stating the state space S, what is important to define for a continuous-time Markov chain, and how do you define it for a discrete-time Markov chain?

A

Continuous: {X(t), t ≥ 0} and the meaning of X(t),

discrete: Xn, n = 0, 1, 2, …

19
Q

In the case of multiple exponential processes, what is the transition rate?

A

nμ, where n is the number of exponential processes running in parallel (each with rate μ).

20
Q

How to reason all the transition rates qij?

A
21
Q

If we combine two exponential distributions, what kind of probability distribution do we get?

A

a hyperexponential distribution (a probabilistic mixture: with some probability p the sample comes from Exp(λ₁), otherwise from Exp(λ₂))

22
Q

What are the two Markov reward theorems? What are they based on?

A
  • Markov Instantaneous Reward Theorem
  • Markov Jump Reward Theorem

Both are based on the long-run (stationary) behaviour of the chain, 𝜋.
23
Q

What is the Markov Jump Reward formula?

A

The long-run average reward earned at jumps: Σi 𝜋i Σ(j≠i) q_ij r(i, j), i.e. each transition rate weighted by the stationary probability of its source state and the reward earned per jump.