CHAPTER 1: Intro Flashcards
Markov property
given the present, the future is independent of the past:
P(Y_{k+1} = y_{k+1} | Y_k = y_k, ..., Y_1 = y_1) = P(Y_{k+1} = y_{k+1} | Y_k = y_k)
Discrete random variables Y_1, Y_2, ..., Y_n form a discrete-time Markov chain if
*P_{y_{k-1}, y_k} = P(Y_k = y_k | Y_{k-1} = y_{k-1})
the (y_{k-1}, y_k)-th element of the transition probability matrix (one-step transition probability)
- Y_k, Y_{k-1} are RVs taking values in the state space S: the states of the process at times k and k-1
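The definition above can be sketched in code: a minimal simulator that draws each state from the row of the transition matrix indexed by the current state (the function name `simulate_chain` and the example matrix are illustrative, not from the notes).

```python
import random

def simulate_chain(P, states, y0, n):
    """Simulate n steps of a discrete-time Markov chain.

    P[i][j] is the one-step probability of moving from states[i] to states[j];
    each row of P must sum to 1.
    """
    path = [y0]
    for _ in range(n - 1):
        i = states.index(path[-1])
        # The next state depends only on the current state (Markov property)
        path.append(random.choices(states, weights=P[i])[0])
    return path

# Illustrative two-state chain
P = [[0.9, 0.1],
     [0.2, 0.8]]
path = simulate_chain(P, states=[1, 2], y0=1, n=10)
```

Note that the whole history `path` is stored only for inspection; the sampler itself reads nothing but the last state.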
We can use observations y_1, ..., y_n of Y_1, ..., Y_n to learn about a *discrete-time MC with 2 states and unknown transition matrix
[ θ_1     1-θ_1 ]
[ 1-θ_2   θ_2   ]
with unknown parameter vector bold(θ) = (θ_1, θ_2)
parameters
θ_i is the probability of staying in state i at the next time step, given the current state is i
1-θ_2 is the probability of moving to state 1 given the current state is 2
1-θ_1 is the probability of moving to state 2 given the current state is 1
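The parameterisation above can be checked numerically: a small sketch (the helper name `transition_matrix` is illustrative) that builds the matrix from bold(θ) and confirms each row is a probability distribution.

```python
def transition_matrix(theta1, theta2):
    """Two-state transition matrix parameterised by theta = (theta1, theta2).

    Row i is the distribution of the next state given current state i+1;
    theta_i is the probability of staying in state i.
    """
    return [[theta1, 1 - theta1],
            [1 - theta2, theta2]]

# Illustrative values for (theta_1, theta_2)
P = transition_matrix(0.7, 0.4)
# P[0][1] = 1 - theta_1: probability of moving 1 -> 2
# P[1][0] = 1 - theta_2: probability of moving 2 -> 1
assert all(abs(sum(row) - 1) < 1e-12 for row in P)
```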
LIKELIHOOD
L(bold(θ); y_1, ..., y_n)
= P(Y_1 = y_1, ..., Y_n = y_n | bold(θ))
the probability of the particular observed sequence, viewed as a function of the parameters, to see how likely each parameter value is

LOG LIKELIHOOD
l(bold(θ)) = log L(bold(θ); y_1, ..., y_n)
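For the two-state chain, the likelihood factorises over one-step transitions by the Markov property, so the log-likelihood is a sum of log transition probabilities. A minimal sketch (the notes do not specify the initial distribution, so this conditions on the first observation, an assumption of this example):

```python
import math

def log_likelihood(theta1, theta2, ys):
    """Log-likelihood of a two-state sequence ys (states labelled 1 and 2),
    conditioning on the first observation (initial distribution omitted)."""
    P = [[theta1, 1 - theta1],
         [1 - theta2, theta2]]
    ll = 0.0
    for prev, cur in zip(ys, ys[1:]):
        # Add log P_{prev, cur}, the one-step transition probability
        ll += math.log(P[prev - 1][cur - 1])
    return ll

# Illustrative observed sequence and parameter values
ys = [1, 1, 2, 2, 2, 1]
ll = log_likelihood(0.7, 0.4, ys)
```

Working in logs turns the product of transition probabilities into a sum, which avoids numerical underflow for long sequences.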