Markov Flashcards

1
Q

What is the Markovian hypothesis?

A

The probability of the current state depends only on the previous state, not on the entire history.

2
Q

How is a homogeneous Markov chain parameterized?

A

By a starting distribution π and a transition matrix A.

3
Q

What does the transition matrix A represent in a Markov chain?

A

A[i,j] represents the probability of transitioning from state i to state j.

4
Q

How is the probability of a sequence calculated in a Markov chain?

A

P(x₁, …, xₙ) = π(x₁) · ∏ᵢ₌₂ⁿ A[xᵢ₋₁, xᵢ]

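A minimal sketch of this product in Python/NumPy (the two-state example chain and the integer state encoding are illustrative assumptions, not from the cards):

```python
import numpy as np

# Hypothetical 2-state chain (states 0 and 1) for illustration.
pi = np.array([0.5, 0.5])           # starting distribution π
A = np.array([[0.7, 0.3],           # A[i, j] = P(next = j | current = i)
              [0.4, 0.6]])

def sequence_prob(seq, pi, A):
    """P(x₁, …, xₙ) = π(x₁) · ∏ A[xᵢ₋₁, xᵢ]."""
    p = pi[seq[0]]
    for prev, cur in zip(seq[:-1], seq[1:]):
        p *= A[prev, cur]
    return p

print(sequence_prob([0, 0, 1, 1], pi, A))  # 0.5 · 0.7 · 0.3 · 0.6 = 0.063
```
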
5
Q

What does Aᵏ(i,j) represent in a Markov chain?

A

The probability of going from state i to state j in exactly k steps.

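Assuming the same hypothetical two-state chain as above, the k-step probabilities are a matrix power (a sketch):

```python
import numpy as np

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Probability of going from state 0 to state 1 in exactly 3 steps.
A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 1])  # ≈ 0.417
```
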
6
Q

What are two key properties of a “well-behaved” Markov chain?

A

Irreducibility and aperiodicity.

7
Q

What is an irreducible Markov chain?

A

A chain where it’s possible to get from any state to any other state in a finite number of steps.

8
Q

What is an aperiodic Markov chain?

A

A chain with no closed loops that can only be traversed at fixed intervals; equivalently, every state can be revisited at irregular times (period 1).

9
Q

What is the stationary distribution of a Markov chain?

A

A probability distribution μ that satisfies μ · A = μ.

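One simple way to find μ numerically is power iteration, sketched below with the same hypothetical chain (solving the eigenvector equation μ · A = μ directly would also work):

```python
import numpy as np

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Power iteration: keep applying the chain until the distribution stops changing.
mu = np.array([0.5, 0.5])
for _ in range(100):
    mu = mu @ A

print(mu)                       # ≈ [0.571, 0.429], i.e. [4/7, 3/7]
print(np.allclose(mu @ A, mu))  # True: μ · A = μ
```
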
10
Q

How quickly does a well-behaved Markov chain converge to its stationary distribution?

A

Exponentially fast.

11
Q

What is a key limitation of Markov chains?

A

They cannot model long-range effects due to their “short-term memory”.

12
Q

What is an order-k Markov model? What is a drawback of higher-order Markov models?

A

A model where the current state depends on the k previous states. The number of parameters grows exponentially with the order: with |S| symbols, the transition table needs |S|ᵏ rows (e.g., an order-3 model over the 4 DNA bases already has 4³ = 64 rows).

13
Q

How do you compute the probability of going from state A to state B over multiple (non-consecutive) steps in a Markov chain?

A

Sum the probabilities of all possible sequences that start in state A and end in state B; for a fixed number of steps k, this is exactly the entry Aᵏ(A, B).

14
Q

How are the parameters of a Markov chain typically estimated?

A

Using Maximum Likelihood Estimation based on observed state transitions.

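A minimal MLE sketch: count the observed transitions and normalize each row (integer state labels are an assumption for illustration; rows for states that never occur would need smoothing, which is omitted here):

```python
import numpy as np

def estimate_transitions(sequences, n_states):
    """MLE: A[i, j] = count(i -> j) / count(i -> anything)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for prev, cur in zip(seq[:-1], seq[1:]):
            counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

print(estimate_transitions([[0, 0, 1, 0, 1, 1]], n_states=2))
# [[1/3, 2/3],
#  [1/2, 1/2]]
```
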
15
Q

What is a Hidden Markov Model (HMM)?

A

A statistical model of systems that transition between hidden states while producing observable outputs (emissions). It is used to make inferences about sequential data with short-term dependencies and implicit constraints.

16
Q

What are the three main questions addressed with an HMM?

A

Evaluation (probability of a sequence), Decoding (most probable hidden state sequence), and Estimation (parameter estimation).

17
Q

What are the parameters of an HMM?

A

A starting distribution π, a transition matrix A, and emission probabilities.

18
Q

What is the purpose of the forward algorithm in HMMs?

A

To compute the probability of an observed sequence and to perform filtering efficiently: it sums over all hidden state sequences using recursion and dynamic programming to avoid redundant computation.
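
A minimal sketch of the forward recursion (the convention B[s, o] = P(emit o | state s) and the variable names are assumptions):

```python
import numpy as np

def forward(obs, pi, A, B):
    """P(obs), summing over all hidden state sequences.

    alpha[t, s] = P(obs[0..t], state_t = s), built up by dynamic programming.
    """
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha[-1].sum()
```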

19
Q

What is the Viterbi algorithm used for in HMMs?

A

To find the most likely hidden state sequence. It uses dynamic programming and recursion to keep track of the best path to each state at each time step, given the observed emissions. Compared to the forward algorithm, it takes a max instead of a sum in each recursive step and keeps back-pointers to recover the best hidden state path.
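
For contrast with the forward sketch above, a minimal Viterbi sketch under the same assumed conventions: max replaces sum, and back-pointers recover the best path:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence for the observations."""
    T, S = len(obs), len(pi)
    delta = np.zeros((T, S))       # best path probability ending in each state
    back = np.zeros((T, S), int)   # back-pointers to the best previous state
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A  # scores[i, j]: best path via i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]        # trace back from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```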

20
Q

What is the purpose of the forward-backward algorithm?

A

To compute the posterior probabilities of hidden states given the observed sequence.
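
A sketch of how the posteriors fall out of a forward pass plus a backward pass (same assumed conventions as the sketches above; normalizing each time step by P(obs) yields the posteriors):

```python
import numpy as np

def posteriors(obs, pi, A, B):
    """gamma[t, s] = P(state_t = s | obs), via forward-backward."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                        # proportional to P(state_t = s, obs)
    return gamma / gamma.sum(axis=1, keepdims=True)
```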

21
Q

What is the Baum-Welch algorithm?

A

It’s an implementation of the EM algorithm for estimating HMM parameters when hidden states are unknown.

22
Q

What is the main advantage of using HMMs over simple mixture models?

A

HMMs can model dependencies between adjacent positions in a sequence.

23
Q

In the context of HMMs, what is meant by “emission probabilities”?

A

The probabilities of observing specific outputs given a particular hidden state.

24
Q

How can HMMs be used to model transcription factor binding sites?

A

By using a background state for non-binding regions and specific states for each position in the binding motif.

25
Q

How does an HMM differ from a simple Markov chain?

A

An HMM has hidden states that are not directly observable, while all states in a Markov chain are observable.

26
Q

What is meant by “decoding” in the context of HMMs?

A

Inferring the most likely sequence of hidden states given the observed sequence.

27
Q

What is the main difference between the Viterbi algorithm and the forward-backward algorithm?

A

Viterbi finds the single most likely state sequence, while forward-backward computes posterior probabilities for each state at each position.