Dynamic Programming Flashcards

1
Q

What is the greedy algorithm and what’s its complexity

A

A greedy algorithm for finding a route from location A to location J moves, at each step, to the next-stage location with the lowest cost for that particular move.

Principle: a locally optimal solution (for a particular stage) generated by the greedy algorithm does not necessarily lead to a globally optimal solution (for the overall problem).

Complexity: O(NK), where N is the number of stages and K is the number of nodes in each stage.
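The greedy rule can be sketched in a few lines of Python. The staged graph below (node names and edge costs) is a made-up example, not from the original problem:

```python
# Greedy route through a staged graph: at each step, move to the
# cheapest reachable node in the next stage. The node names and
# costs here are illustrative only.
def greedy_route(stages, cost):
    """stages: list of lists of node names, one list per stage.
    cost: dict mapping (node, next_node) -> edge cost."""
    route = [stages[0][0]]          # start at the single source node
    total = 0
    for next_stage in stages[1:]:
        here = route[-1]
        # Pick the cheapest single move from the current node.
        nxt = min(next_stage, key=lambda n: cost[(here, n)])
        total += cost[(here, nxt)]
        route.append(nxt)
    return route, total

stages = [["A"], ["B", "C"], ["J"]]
cost = {("A", "B"): 2, ("A", "C"): 4, ("B", "J"): 7, ("C", "J"): 1}
route, total = greedy_route(stages, cost)
# Greedy picks A -> B -> J with total cost 9, even though
# A -> C -> J costs only 5: local optimality is not global.
```

This tiny example also demonstrates the principle above: the cheapest first move (A to B, cost 2) locks the route into a more expensive finish.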

2
Q

What’s the complexity of an exhaustive search (stagecoach problem)

A

An exhaustive search can guarantee an optimal solution, but it is time-consuming, with complexity O(K^N), where N is the number of stages and K is the number of nodes in each stage.

3
Q

How do you apply the dynamic programming approach to the stagecoach problem

A

• Start from the last stage and work backward.
• At each node of a particular stage, find an optimal route to location J based on the optimal routes to location J from the child nodes of the current node.
• The algorithm terminates when the first stage is done.
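The backward recursion above can be sketched in Python. The staged graph used here is a made-up example (names and costs are illustrative, not from the course):

```python
# Backward dynamic programming on a staged graph: work from the last
# stage toward the first, storing at every node the optimal cost to
# the destination and the best successor. Names/costs are illustrative.
def dp_route(stages, cost):
    best = {stages[-1][0]: (0, None)}   # destination: zero cost onward
    # Process stages from second-to-last back to the first.
    for stage, next_stage in zip(reversed(stages[:-1]), reversed(stages[1:])):
        for node in stage:
            # Optimal onward cost = min over child nodes of
            # (edge cost + child's already-computed optimal cost).
            c, nxt = min(
                (cost[(node, child)] + best[child][0], child)
                for child in next_stage
            )
            best[node] = (c, nxt)
    # Recover the optimal route by following the stored successors.
    route, node = [], stages[0][0]
    while node is not None:
        route.append(node)
        node = best[node][1]
    return route, best[stages[0][0]][0]

stages = [["A"], ["B", "C"], ["J"]]
cost = {("A", "B"): 2, ("A", "C"): 4, ("B", "J"): 7, ("C", "J"): 1}
route, total = dp_route(stages, cost)
# Finds the globally optimal route A -> C -> J with total cost 5.
```

Each node is examined once per stage, which is where the O(NK) behaviour of the staged DP comes from, versus O(K^N) for enumerating every route.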

4
Q

What’s dynamic programming

A

Dynamic programming is an approach to problem solving that decomposes a large problem, which may be difficult to solve, into a number of smaller, similar problems that are usually much easier to solve.

5
Q

What’s the characteristic property of Markov processes

A

A Markov process is a random process that has no memory of where it has been in the past. In other words, only the current state of the system can influence where it will go next.
In the case where the process can only have a finite (or countable) set of states, it is referred to as a Markov chain.

6
Q

What forms classes in a Markov chain

A

Communicating states

State i communicates with j (i ↔ j) if both i → j and j → i.
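The definition translates directly into code: i and j belong to the same class iff each is reachable from the other in the transition graph. The small three-state chain below is an illustrative example:

```python
# Communicating classes of a finite Markov chain: state i communicates
# with state j iff i -> j and j -> i (mutual reachability in the
# transition graph). The adjacency structure below is a made-up example.
def reachable(adj, start):
    """All states reachable from `start` (including itself)."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def communicating_classes(adj):
    states = list(adj)
    reach = {s: reachable(adj, s) for s in states}
    classes = []
    for s in states:
        if any(s in c for c in classes):
            continue
        # s communicates with t iff s -> t and t -> s.
        classes.append({t for t in states if t in reach[s] and s in reach[t]})
    return classes

# States 0 and 1 communicate; state 2 only loops back to itself.
adj = {0: [1], 1: [0, 2], 2: [2]}
classes = communicating_classes(adj)
# -> two classes: {0, 1} and {2}
```

Note that {2} here is also an absorbing class (next card): once the chain enters state 2, it cannot escape.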

7
Q

What makes a class absorbing

A

A class is called absorbing if it is not possible to escape from it.

8
Q

What makes a class accessible

A

A class A is said to be accessible from a class B if each state in A is accessible from each state in B.

9
Q

What is the first passage time or hitting time

A

The hitting time, or first passage time, from state i to state j is the number of transitions until the process, starting from state i, hits state j for the first time.
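One way to make this concrete is a Monte Carlo estimate of the expected first passage time. The two-state transition matrix below is an illustrative example (for it, the exact expected first passage time from state 0 to state 1 works out to 2):

```python
import random

# Monte Carlo estimate of the expected first passage time from state i
# to state j. P is a row-stochastic transition matrix; the two-state
# chain below is a made-up example.
def first_passage_time(P, i, j, rng):
    state, steps = i, 0
    while True:
        # Take one transition according to row `state` of P.
        state = rng.choices(range(len(P)), weights=P[state])[0]
        steps += 1
        if state == j:
            return steps

P = [[0.5, 0.5],
     [0.3, 0.7]]
rng = random.Random(0)
times = [first_passage_time(P, 0, 1, rng) for _ in range(20000)]
estimate = sum(times) / len(times)
# Exact value here: E satisfies E = 1 + 0.5 * E, so E = 2.
```

Averaging many simulated passage times converges on the expected first passage time; for this chain the estimate lands close to 2.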

10
Q

What’s a recurrent state

A

A recurrent state is a state that the process keeps returning to over and over again as n tends to infinity; starting from a recurrent state, the process returns to it with probability 1.

11
Q

What’s a transient state

A

A transient state is a state that the process might only visit a finite number of times; starting from a transient state, there is a nonzero probability of never returning to it.

12
Q

What two traits of a Markov process mean that a steady-state probability matrix can be found

A

Irreducible and aperiodic
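As a sketch of what these conditions buy: for an irreducible, aperiodic finite chain, repeatedly applying pi <- pi P converges to a unique steady-state distribution regardless of the starting distribution. The two-state transition matrix below is illustrative:

```python
# Power iteration to find the steady-state distribution pi of an
# irreducible, aperiodic finite Markov chain: pi <- pi P until
# convergence. The transition matrix is a made-up two-state example.
def steady_state(P, iters=1000):
    n = len(P)
    pi = [1.0 / n] * n          # start from the uniform distribution
    for _ in range(iters):
        # One step of pi <- pi P (vector-matrix product).
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.5, 0.5],
     [0.3, 0.7]]
pi = steady_state(P)
# For this chain, pi converges to [0.375, 0.625].
```

Irreducibility guarantees the steady state is unique; aperiodicity guarantees the iteration actually settles down instead of cycling.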

13
Q

What does it mean if a Markov chain is irreducible

A

If all the states in the Markov Chain belong to one closed communicating class, then the chain is called an irreducible Markov chain.

14
Q

How do you find the expected return times from the limiting distribution

A

Take the reciprocal of each term in the limiting distribution: the expected return time to state i is m_i = 1 / pi_i.
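The arithmetic is a one-liner. The limiting distribution used below is an illustrative two-state example:

```python
# Expected return times from a limiting distribution:
# m_i = 1 / pi_i. The distribution here is illustrative.
pi = [0.375, 0.625]
return_times = [1.0 / p for p in pi]
# -> state 0 is revisited every 1/0.375 ~= 2.67 steps on average,
#    state 1 every 1/0.625 = 1.6 steps.
```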
