Dynamic Programming Flashcards
What is the greedy algorithm and what’s its complexity
A greedy algorithm for finding a route from location A to location J moves, at each step, to the next location with the lowest cost for that particular move.
Principle: a locally optimal solution (for a particular stage) generated by the greedy algorithm does not necessarily lead to a globally optimal solution (for the overall problem).
Complexity: O(NK),
where N and K are the number of stages and the number of nodes
in each stage, respectively.
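As a minimal sketch of this greedy strategy, the hypothetical staged cost graph below (node names and edge costs are made up for the example) shows the locally cheapest move being taken at every step:

```python
# costs[node] maps each node to {successor: edge_cost}; values are made up.
costs = {
    "A": {"B": 2, "C": 4},
    "B": {"D": 7, "E": 4},
    "C": {"D": 3, "E": 2},
    "D": {"J": 1},
    "E": {"J": 5},
}

def greedy_route(start, goal):
    """At each step, move to the successor with the lowest edge cost."""
    route, total, node = [start], 0, start
    while node != goal:
        nxt = min(costs[node], key=costs[node].get)  # locally optimal move
        total += costs[node][nxt]
        route.append(nxt)
        node = nxt
    return route, total

print(greedy_route("A", "J"))  # → (['A', 'B', 'E', 'J'], 11)
```

On this instance greedy pays 11, while the cheapest route (A→C→D→J) costs 8 — illustrating that locally optimal moves need not give a globally optimal route.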
What’s the complexity of an exhaustive search (stagecoach problem)
An exhaustive search can guarantee an optimal solution,
but it is time consuming, with complexity O(K^N),
where N and K are the number of stages and the number of nodes
in each stage, respectively.
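A sketch of exhaustive search on a small hypothetical cost graph (names and costs are assumptions for the example) — every route from A to J is enumerated, which is why the route count, and hence the running time, grows exponentially in the number of stages:

```python
# Hypothetical staged cost graph; names and costs are made up.
costs = {
    "A": {"B": 2, "C": 4},
    "B": {"D": 7, "E": 4},
    "C": {"D": 3, "E": 2},
    "D": {"J": 1},
    "E": {"J": 5},
}

def all_routes(node, goal):
    """Yield (route, total_cost) for every path from node to goal."""
    if node == goal:
        yield [goal], 0
    else:
        for nxt, c in costs[node].items():
            for route, total in all_routes(nxt, goal):
                yield [node] + route, c + total

best_route, best_cost = min(all_routes("A", "J"), key=lambda rt: rt[1])
print(best_route, best_cost)  # → ['A', 'C', 'D', 'J'] 8
```

With K nodes per stage, roughly K choices are made at each of the N stages, giving on the order of K^N routes to examine.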
How do you apply the dynamic programming approach to the stagecoach problem
• Start from the last stage and work backward.
• At each node of a particular stage, find an optimal route to location J based on the optimal routes to location J from the child nodes of the current node.
• The algorithm terminates when the first stage is done.
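The backward recursion above can be sketched in Python on a hypothetical stagecoach instance (the stage layout, node names, and costs are assumptions for the example):

```python
# Backward dynamic programming on a small staged graph (made-up instance).
stages = [["A"], ["B", "C"], ["D", "E"], ["J"]]
costs = {
    "A": {"B": 2, "C": 4},
    "B": {"D": 7, "E": 4},
    "C": {"D": 3, "E": 2},
    "D": {"J": 1},
    "E": {"J": 5},
}

def dp_route(stages, costs):
    goal = stages[-1][0]
    # f[node] = (optimal cost from node to goal, best next node)
    f = {goal: (0, None)}
    for stage in reversed(stages[:-1]):   # start from the last stage, work backward
        for node in stage:
            # Optimal route from `node` built from child nodes' optimal routes.
            nxt = min(costs[node], key=lambda n: costs[node][n] + f[n][0])
            f[node] = (costs[node][nxt] + f[nxt][0], nxt)
    # The recursion stops once the first stage is done; read off the route.
    route, node = [], stages[0][0]
    while node is not None:
        route.append(node)
        node = f[node][1]
    return route, f[stages[0][0]][0]

print(dp_route(stages, costs))  # → (['A', 'C', 'D', 'J'], 8)
```

Each node is processed once, which is where the O(NK) cost of the dynamic programming approach comes from (each node's work is bounded by its number of children).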
What’s dynamic programming
Dynamic programming is an approach to problem solving that decomposes a large problem that
may be difficult to solve into a number of smaller and similar problems that are usually much
easier to solve.
What’s the characteristic property of Markov processes
It is a random process that has no memory
of where it has been in the past. In other words, only the current state of the system can influence
where it will go next.
In the case where the process can have only a finite (or countable) set of
states, it is referred to as a Markov chain.
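The memoryless property shows up directly in simulation: the next state is sampled from the current state's row of the transition matrix and nothing else. The two-state "weather" chain below is a made-up example:

```python
import random

# Hypothetical two-state chain; each row of transition probabilities sums to 1.
P = {"sunny": {"sunny": 0.8, "rainy": 0.2},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def step(state, rng):
    """Sample the next state; it depends only on the current state (no memory)."""
    u, cum = rng.random(), 0.0
    for nxt, p in P[state].items():
        cum += p
        if u < cum:
            return nxt
    return nxt  # guard against floating-point round-off

rng = random.Random(0)
path = ["sunny"]
for _ in range(10):
    path.append(step(path[-1], rng))
print(path)
```

Note that `step` never looks at the earlier part of `path` — only at its last entry.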
What forms classes in a Markov chain
Communicating states
State i communicates with j (i ↔ j) if both i → j and j → i.
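One way to see this definition in action is to compute the communicating classes from pairwise reachability; a sketch on a made-up three-state chain:

```python
# P maps each state to its positive-probability successors (made-up chain).
P = {0: {0: 0.5, 1: 0.5},
     1: {0: 0.3, 1: 0.7},
     2: {2: 1.0}}           # state 2 cannot reach 0 or 1

def reachable(P, start):
    """All states reachable from `start` via positive-probability transitions."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for nxt, p in P[s].items():
            if p > 0 and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def communicating_classes(P):
    """i and j share a class iff i -> j and j -> i (i.e. i <-> j)."""
    reach = {s: reachable(P, s) for s in P}
    classes = []
    for s in P:
        cls = frozenset(t for t in P if t in reach[s] and s in reach[t])
        if cls not in classes:
            classes.append(cls)
    return classes

print(communicating_classes(P))
```

Here states 0 and 1 communicate (each reaches the other), while state 2 forms a class of its own.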
What makes a class absorbing
A class is called absorbing if it is not possible to escape from it.
What makes a class accessible
A class A is said to be accessible from a class B if each state in A is accessible from each state in B.
What is the first passage time or hitting time
The hitting time or first
passage time from state i to state j is the number of transitions until the process, starting
from state i, hits state j for the first time.
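A first passage time can be estimated by simulation: run the chain from state i and count transitions until state j is first reached. The two-state chain below is a made-up example:

```python
import random

# Hypothetical two-state chain used to sample first passage times.
P = {"sunny": {"sunny": 0.8, "rainy": 0.2},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def first_passage_time(i, j, rng):
    """Transitions until the chain, started at i, hits j for the first time."""
    state, t = i, 0
    while True:
        u, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if u < cum:
                state = nxt
                break
        t += 1
        if state == j:
            return t

rng = random.Random(1)
samples = [first_passage_time("sunny", "rainy", rng) for _ in range(2000)]
print(sum(samples) / len(samples))  # should be near 1/0.2 = 5 transitions
```

For this chain the time is geometric with success probability 0.2, so the sample mean should be close to 5.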
What’s a recurrent state
A recurrent state is a state to which the process keeps coming back over and over again as n tends to infinity; starting from it, the chain returns to it with probability 1.
What’s a transient state
A transient state is a state that the process might visit only a finite number of times; starting from it, there is a nonzero probability of never returning.
What two traits of a Markov process mean that a steady-state probability matrix can be found
Irreducible and aperiodic
What does it mean if a matrix is irreducible
If all the states in the Markov Chain belong to one closed communicating class, then the chain is called an irreducible Markov chain.
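Irreducibility can be checked mechanically: the chain is irreducible exactly when every state can reach every other state. A sketch on two made-up chains:

```python
def reachable(P, start):
    """States reachable from `start` via positive-probability transitions."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for nxt, p in P[s].items():
            if p > 0 and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def is_irreducible(P):
    """One closed communicating class <=> every state reaches every state."""
    states = set(P)
    return all(reachable(P, s) == states for s in states)

# Made-up examples:
chain_a = {0: {1: 1.0}, 1: {0: 0.5, 1: 0.5}}   # every state reaches every state
chain_b = {0: {0: 1.0}, 1: {0: 0.5, 1: 0.5}}   # state 0 is absorbing
print(is_irreducible(chain_a), is_irreducible(chain_b))  # → True False
```

In `chain_b` the chain cannot leave state 0, so the states do not all belong to one communicating class.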
How to find the expected return times from the limiting distribution
Take the reciprocal of each term in the limiting distribution: the expected return time to state i is 1/π_i.
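For a two-state chain the limiting distribution has a simple closed form, so the reciprocal rule can be checked directly (the transition probabilities below are made up):

```python
# pi solves pi = pi P; for a two-state chain, pi_0 = p10/(p01+p10),
# pi_1 = p01/(p01+p10). Expected return time to state i is then 1/pi_i.
P = [[0.8, 0.2],
     [0.5, 0.5]]
p01, p10 = P[0][1], P[1][0]
pi = [p10 / (p01 + p10), p01 / (p01 + p10)]   # limiting distribution
expected_return = [1 / x for x in pi]          # reciprocals of the terms
print(pi, expected_return)  # pi ≈ [0.714, 0.286], returns near [1.4, 3.5]
```

State 0 is visited about 5/7 of the time, so the chain returns to it every 1.4 steps on average; state 1 is rarer and takes 3.5 steps on average.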