Dynamic programming Flashcards
What is the basic idea behind dynamic programming?
- Dynamic programming solves problems by combining the solutions to subproblems.*
- Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.*
- A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table.*
- Dynamic programming is typically applied to optimization problems. There are many possible solutions, and we wish to find an optimal one.*
What are the 4 steps of dynamic programming?
- Characterize the structure of OPT*
- Recursively define the value of OPT*
- Compute the value of OPT in a bottom-up fashion*
- Construct OPT from computed information*
- Step 4 can be omitted if only the value of an optimal solution is required.*
- When we do perform step 4, we sometimes maintain additional information during step 3 to ease the construction of OPT.*
What shall we do if we find a working recursion which is not efficient? (because it computes the same subproblem over and over again)
- We arrange for each subproblem to be solved only once, saving its solution.*
- Dynamic programming uses additional memory to save computation time*
- A dynamic-programming approach runs in polynomial time when the number of distinct subproblems involved is polynomial in the input size.*
There are usually two equivalent ways to implement a dynamic-programming approach. What are they?
top-down with memoization
- we write the procedure recursively in a natural manner, but modify it to save the result of each subproblem (usually in an array).
- this method first checks whether it has previously solved the subproblem at hand. If so, it returns the saved value; otherwise it computes it.
bottom-up method
- this method is derived from the notion that solving any particular subproblem depends only on solving “smaller” subproblems.
- we sort the subproblems by size and solve them in size order, smallest first. When solving a particular subproblem we have already solved all of the smaller subproblems its solution depends upon, and we have saved their solutions.
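The two methods can be sketched in Python on a small illustrative example (Fibonacci numbers - not one of the card problems, but its naive recursion recomputes the same subproblems over and over):

```python
def fib_top_down(n, memo=None):
    """Top-down with memoization: the natural recursion, modified to
    save the result of each subproblem."""
    if memo is None:
        memo = {}
    if n in memo:                 # previously solved: return the saved value
        return memo[n]
    if n < 2:
        result = n                # base cases F(0) = 0, F(1) = 1
    else:
        result = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    memo[n] = result
    return result


def fib_bottom_up(n):
    """Bottom-up: solve subproblems in size order, smallest first."""
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        # both smaller subproblems are already solved and saved
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```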
Rod cutting problem
can you describe the recursive algorithm which runs in exponential time?
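A minimal Python sketch of that recursion, assuming `prices[i]` is the price of a rod of length i + 1 (the price table in the example is just a sample):

```python
def cut_rod(prices, n):
    """Naive recursion for rod cutting: exponential time, because the
    same rod lengths are solved over and over again."""
    if n == 0:
        return 0
    best = float("-inf")
    for i in range(1, n + 1):     # cut off a first piece of length i
        best = max(best, prices[i - 1] + cut_rod(prices, n - i))
    return best
```

For example, with `prices = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]`, `cut_rod(prices, 4)` returns 10 (cut the rod into two pieces of length 2).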

Rod cutting problem
Describe top-down with memoization algorithm
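One way the answer might look in Python (a sketch; the memo dictionary plays the role of the saved-results array):

```python
def memoized_cut_rod(prices, n, memo=None):
    """Top-down with memoization: the same recursion as the naive
    version, but each subproblem is solved only once and its answer
    is saved for reuse."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n == 0:
        best = 0
    else:
        best = max(prices[i - 1] + memoized_cut_rod(prices, n - i, memo)
                   for i in range(1, n + 1))
    memo[n] = best
    return best
```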

Rod cutting problem
describe the bottom-up algorithm
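A bottom-up sketch in Python: `r[j]` holds the optimal revenue for a rod of length j, filled in increasing order of j so every smaller subproblem is already solved when it is needed:

```python
def bottom_up_cut_rod(prices, n):
    """Bottom-up rod cutting: solve rod lengths in increasing order."""
    r = [0] * (n + 1)             # r[j] = best revenue for length j
    for j in range(1, n + 1):
        r[j] = max(prices[i - 1] + r[j - i] for i in range(1, j + 1))
    return r[n]
```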

Rod cutting problem
describe the extended bottom-up algorithm, where we wish to print the solution.
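A sketch of the extension: step 3 additionally records, for each length j, the size `s[j]` of the first piece in some optimal cut, which makes step 4 (reconstruction) a simple walk over the saved choices. Here the reconstruction returns the list of piece sizes rather than printing it:

```python
def extended_bottom_up_cut_rod(prices, n):
    """Bottom-up computation that also saves, for each length j,
    the size s[j] of the first piece in an optimal cut."""
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            if prices[i - 1] + r[j - i] > r[j]:
                r[j] = prices[i - 1] + r[j - i]
                s[j] = i          # remember the first-piece choice
    return r, s


def print_cut_rod_solution(prices, n):
    """Reconstruct an optimal cut by following the saved choices."""
    _, s = extended_bottom_up_cut_rod(prices, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return pieces
```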

What are the steps in solving a problem using dynamic programming?

Matrix-chain multiplication
give a formal definition of the problem
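Whatever the exact formal statement on the card, the definition rests on one cost fact: multiplying a (p × q) matrix by a (q × r) matrix takes p·q·r scalar multiplications, so the parenthesization of a chain matters. A small numeric illustration (the dimensions are just a sample):

```python
# Sample dimension sequence p = (10, 100, 5, 50):
# A1 is 10x100, A2 is 100x5, A3 is 5x50.

# ((A1 A2) A3): first A1*A2 (10x100 by 100x5), then the 10x5 result by A3.
cost_left = 10 * 100 * 5 + 10 * 5 * 50      # 5000 + 2500 = 7500

# (A1 (A2 A3)): first A2*A3 (100x5 by 5x50), then A1 by the 100x50 result.
cost_right = 100 * 5 * 50 + 10 * 100 * 50   # 25000 + 50000 = 75000
```

Same product, a tenfold difference in cost - hence the optimization problem.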

Matrix-chain multiplication
describe the subproblems and OPT

Matrix-chain multiplication
describe the formula

Matrix-chain problem
What are the base cases?
- The base cases are subproblems which have an immediate result.*
- In the Matrix-chain problem, every instance of the problem with only one matrix is a base case. There is only one solution, whose cost is 0: no multiplication is needed.*
Matrix-chain problem
what is the proof for the formula?
- notice how we start with OPT[i,j] and, using definitions (and a theorem), end up with a recursive definition, as in the formula.*

Matrix-Chain
describe the top-down solution
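A Python sketch of the top-down solution, assuming a dimension sequence p where matrix A_i is p[i-1] × p[i]:

```python
def memoized_matrix_chain(p):
    """Top-down with memoization for matrix-chain multiplication.
    Returns the minimum number of scalar multiplications for the
    whole chain A_1..A_n."""
    n = len(p) - 1
    memo = {}

    def lookup(i, j):
        if (i, j) in memo:
            return memo[(i, j)]
        if i == j:
            best = 0              # base case: a single matrix costs 0
        else:
            best = min(lookup(i, k) + lookup(k + 1, j)
                       + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))
        memo[(i, j)] = best
        return best

    return lookup(1, n)
```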

Matrix-chain
describe the bottom-up algorithm
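A bottom-up sketch: chains are solved in order of increasing length, so both halves of every split are already in the table when they are needed:

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain order. m[i][j] = minimum scalar
    multiplications for A_i..A_j; s[i][j] = the split point k
    achieving it."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s
```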

Matrix-chain
can you recall the table-sketch for the matrix-chain problem?

What do we need to prove in the bottom-up algorithm?
- prove that once our algorithm encounters a new subproblem, it has already computed all of the subsubproblems that subproblem depends on
- prove that the computation of a subproblem returns the value of OPT(subproblem) defined in the formula.
Matrix-Chain
describe the reconstruction algorithm
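A sketch of the reconstruction, assuming a split table s with s[i][j] = the best split point for the chain A_i..A_j; a compact bottom-up pass is included so the sketch runs on its own:

```python
def matrix_chain_splits(p):
    """Compact bottom-up pass, kept here for self-containment;
    returns only the split table s."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j], s[i][j] = min(
                (m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j], k)
                for k in range(i, j))
    return s


def print_optimal_parens(s, i, j):
    """Reconstruct the optimal parenthesization of A_i..A_j by
    following the saved split points."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return ("(" + print_optimal_parens(s, i, k)
            + print_optimal_parens(s, k + 1, j) + ")")
```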

The knapsack problem
describe the problem

The knapsack problem
define the set of solutions for a typical subproblem and their OPT

The knapsack problem
divide the set of solutions into subsets

The knapsack problem
describe the structure formula

The knapsack problem
- which two theorems do we use in order to prove the formula?*
- Tip: recall the formula; OPT is divided into cases. What is not trivial and needs to be explained?*

The knapsack problem
- describe the proof for the 2nd theorem.*
- recall: normally we prove by defining two sets and showing equality between them.*

The knapsack problem
describe the proof for the formula structure

The knapsack problem
describe the iterative algorithm
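A Python sketch of the iterative (bottom-up) algorithm for the 0/1 knapsack, assuming item i has value `values[i-1]` and weight `weights[i-1]`, with capacity W (the names are mine, not the card's):

```python
def knapsack(values, weights, W):
    """Bottom-up 0/1 knapsack. K[i][w] = best total value using
    items 1..i with capacity w. Each item is either skipped or,
    if it fits, taken."""
    n = len(values)
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            K[i][w] = K[i - 1][w]                    # skip item i
            if weights[i - 1] <= w:                  # or take item i
                K[i][w] = max(K[i][w],
                              K[i - 1][w - weights[i - 1]] + values[i - 1])
    return K
```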

The knapsack problem
correctness proof

The knapsack problem
The run-time of the iterative algorithm is quite bizarre. Describe it

The knapsack problem
describe the reconstruction algorithm
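A sketch of the reconstruction, bundled with the table computation so it runs on its own: item i belongs to an optimal solution exactly when K[i][w] differs from K[i-1][w] during a backward walk over the table.

```python
def knapsack_with_items(values, weights, W):
    """Bottom-up 0/1 knapsack table plus reconstruction of the
    chosen items (1-indexed)."""
    n = len(values)
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            K[i][w] = K[i - 1][w]
            if weights[i - 1] <= w:
                K[i][w] = max(K[i][w],
                              K[i - 1][w - weights[i - 1]] + values[i - 1])
    # reconstruction: walk the table backwards, following the choices
    chosen, w = [], W
    for i in range(n, 0, -1):
        if K[i][w] != K[i - 1][w]:   # item i was taken
            chosen.append(i)
            w -= weights[i - 1]
    chosen.reverse()
    return K[n][W], chosen
```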

The knapsack problem
what is the proof for the reconstruction algorithm?

Optimal substructure varies across problem domains in two ways, what are they?
- How many subproblems an optimal solution to the original problem uses
- How many choices we have in determining which subproblems to use in an optimal solution.
The running time of a dynamic-programming algorithm depends on the product of two factors:
- The number of subproblems overall
- How many choices we look at for each subproblem
If subproblems are not independent we cannot obtain optimal substructure. Give an example of dependence of subproblems in the unweighted longest simple path problem
Assume we decompose the longest simple path from s to v, P = {s, …, k, …, v}, into two paths:
- longest simple path from s to k
- longest simple path from k to v
- Now, it might happen that we pass through vertex t both on the first path and on the second - creating a cycle. The 2nd subproblem requires that its path use none of the vertices on the 1st subproblem's path. Thus the two subproblems are dependent.*
- With shortest paths this cannot happen.*
- Consider the path P = {s, …, k, …, v} and its two subpaths:*
- shortest path from s to k
- shortest path from k to v
- Assume both the shortest path from s to k and the shortest path from k to v pass through vertex t.*
- That means vertex “k” is not needed: we could move from s to t, and from t directly to v - which is shorter. Contradiction.*
Important point about “transforming” a dynamic-programming problem into one that can’t be solved dynamically.

The main point is that we turned the subproblems from being independent to being dependent.
