Dynamic Programming Flashcards
(Select all that apply.) Dynamic programming is used for problems with:
A) Optimal substructure
B) Non-overlapping subproblems
C) Overlapping subproblems
A & C
Without memoization, a recursive solution for Fibonacci Number would have the time complexity:
O(2^n)
(Each call to F(n) makes 2 additional calls, to F(n-1) and F(n-2). Those 2 calls generate 4 calls, and so forth.)
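A minimal sketch of the naive recursion in Python (the function name is illustrative, not from the cards); every call spawns two more, giving the exponential call tree:

def fib_naive(n):
    # Base cases: F(0) = 0, F(1) = 1
    if n <= 1:
        return n
    # Each call makes two more calls, so the call tree has roughly 2^n nodes
    return fib_naive(n - 1) + fib_naive(n - 2)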
T/F:
Usually, top-down DP algorithms are faster than bottom-up ones.
False
Usually, a bottom-up algorithm will be faster than the equivalent top-down algorithm as there is some overhead with recursive function calls. However, sometimes, a top-down dynamic programming approach will be the better option, such as when only a fraction of the subproblems need to be solved.
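A hedged sketch of both styles for Fibonacci in Python (names are illustrative); both run in O(n), but the bottom-up version avoids recursion overhead:

def fib_top_down(n, memo=None):
    # Top-down: recurse, but cache each F(i) so it is computed only once
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    # Bottom-up: iterate over the states in order, no recursive calls
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr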
___ approaches can be parallelized while ___ approaches cannot.
A) (Dynamic programming), (divide and conquer)
B) (Divide and conquer), (dynamic programming)
C) Neither can be parallelized.
B) (Divide and conquer), (dynamic programming)
What must be done when converting a top-down dynamic programming solution into a bottom-up dynamic programming solution?
Find the correct order in which to iterate over the states.
(bottom-up dynamic programming solutions iterate over all of the states (starting at the base case) such that all the results necessary to solve each subproblem for the current state have already been obtained before arriving at the current state. So, to convert a top-down solution into a bottom-up solution, we must first find a logical order in which to iterate over the states.)
Difference in recursive calls in DP vs Divide and Conquer
D&C recursive calls commit to a single way of dividing the input
DP considers multiple ways of defining subproblems and chooses the best one
Fibonacci Subproblem
“Let F(i) be the i-th Fibonacci number in the sequence.”
Fibonacci Recurrence
F(0) = 0, F(1) = 1
F(i) = F(i-1) + F(i-2)
where 2 <= i <= n
[LIS]
Subproblem
Let L(i) be the length of LIS of A[1 … i] which INCLUDES i
[LIS]
Recurrence
L(1) = 1
L(i) = max{1 + L(j) : 1 <= j < i and A[j] < A[i]}
     = 1 if no such j exists
where 2 <= i <= n
[LIS]
Implementation Analysis
1. Number of subproblems:
2. Runtime to fill the table:
3. How/where the final return is extracted:
4. Runtime to extract final return:
- Number of subproblems: O(n)
- Runtime to fill the table: O(n^2)
- How/where the final return is extracted: return max{L(*)}
- Runtime to extract final return: O(n)
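A minimal Python sketch of this recurrence (the array A is 0-indexed here, unlike the 1-indexed math above):

def lis_length(A):
    n = len(A)
    if n == 0:
        return 0
    # L[i] = length of the LIS of A[0..i] that includes A[i]
    L = [1] * n
    for i in range(1, n):
        for j in range(i):
            if A[j] < A[i]:
                L[i] = max(L[i], 1 + L[j])
    # Final answer: best over all ending positions
    return max(L)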
LCS Subproblem
Let L(i,j) be the length of the longest common subsequence of x[1 … i] and y[1 … j]
LCS Recurrence
L(i,0) = 0, 0 <= i <= n
L(0,j) = 0, 0 <= j <= m
L(i,j) = 1 + L(i-1, j-1) if x[i] = y[j]
       = max{L(i-1, j), L(i, j-1)} if x[i] != y[j]
where 1 <= i <= n, 1 <= j <= m
[LCS]
Implementation Analysis
1. Number of subproblems:
2. Runtime to fill the table:
3. How/where the final return is extracted:
4. Runtime to extract final return:
- Number of subproblems: O(nm)
- Runtime to fill the table: O(nm)
- How/where the final return is extracted: return L(n,m)
- Runtime to extract final return: O(1)
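A minimal Python sketch of the LCS table (strings are 0-indexed; L[i][j] matches L(i, j) above):

def lcs_length(x, y):
    n, m = len(x), len(y)
    # (n+1) x (m+1) table; row 0 and column 0 are the empty-prefix base cases
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Final answer sits at L(n, m)
    return L[n][m]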
[Knapsack 0-1]
Subproblem
Let K(i, b) be the max value achievable using a subset of objects x[1 … i] where total weight is less than or equal to b.
[Knapsack 0-1]
Recurrence
K(i,0) = 0, 0 <= i <= n
K(0,b) = 0, 0 <= b <= B
K(i,b) = max{v[i] + K(i-1, b-w[i]), K(i-1, b)} if w[i] <= b
       = K(i-1, b) otherwise
where 1 <= i <= n, 1 <= b <= B
[Knapsack 0-1]
Implementation Analysis
1. Number of subproblems:
2. Runtime to fill the table:
3. How/where the final return is extracted:
4. Runtime to extract final return:
Implementation Analysis
1. Number of subproblems: O(nB)
2. Runtime to fill the table: O(nB)
3. How/where the final return is extracted: return K(n, B)
4. Runtime to extract final return: O(1)
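A minimal Python sketch of the 0-1 recurrence (v and w are 0-indexed lists of values and weights; B is the capacity):

def knapsack_01(v, w, B):
    n = len(v)
    # K[i][b] = best value using a subset of the first i items with capacity b
    K = [[0] * (B + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(1, B + 1):
            K[i][b] = K[i - 1][b]  # skip item i
            if w[i - 1] <= b:      # take item i if it fits
                K[i][b] = max(K[i][b], v[i - 1] + K[i - 1][b - w[i - 1]])
    return K[n][B]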
[Knapsack repeat]
Subproblem
Let K(b) be the max value attainable using a multiset of the items A[1 … n] with total weight less than or equal to b.
[Knapsack repeat]
Recurrence
K(0) = 0
K(b) = max{v[i] + K(b - w[i]) : 1 <= i <= n and w[i] <= b}
     = 0 if no item fits
where 1 <= b <= B
[Knapsack Repeat (1D)]
Implementation Analysis
1. Number of subproblems:
2. Runtime to fill the table:
3. How/where the final return is extracted:
4. Runtime to extract final return:
Implementation Analysis
1. Number of subproblems: O(B)
2. Runtime to fill the table: O(nB)
3. How/where the final return is extracted: return K(B)
4. Runtime to extract final return: O(1)
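A minimal Python sketch of the 1D unbounded version (each item may be reused):

def knapsack_repeat(v, w, B):
    n = len(v)
    # K[b] = best value achievable with total weight at most b, items reusable
    K = [0] * (B + 1)
    for b in range(1, B + 1):
        for i in range(n):
            if w[i] <= b:
                K[b] = max(K[b], v[i] + K[b - w[i]])
    return K[B]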
[Knapsack 0-1]
Explain the greedy approach
- Sort objects by value per unit of weight, v[i] / w[i]
- Starting from the highest-ratio item, take each item if it fits within the remaining capacity; otherwise skip it
[Knapsack 0-1]
Explain why greedy approach fails
It can't make a locally suboptimal choice now to get a better solution in the future. For example, with capacity 10 and items (w=6, v=30), (w=5, v=20), (w=5, v=20), greedy by v/w takes only the first item for value 30, while the optimal solution takes the last two items for value 40.
[Knapsack]
Why is the runtime of Knapsack, O(nB), not polynomial?
B is simply a number, so representing it in the input takes only O(log B) bits. A runtime of O(nB) = O(n · 2^{log B}) is therefore exponential in the input size; the algorithm is pseudo-polynomial. To be truly polynomial, the runtime would have to be polynomial in n and log B.
[CMM]
How to represent problem in binary tree
Root = final result
Leaf = individual matrices
Subtree = parenthesization of the problem (how solving it is structured)
[CMM]
Subproblem
Let C(i,j) be the minimum cost for multiplying matrices A[i…j]
[CMM]
Recurrence
C(i,i) = 0, where 1 <= i <= n
C(i,j) = min{ C(i,l) + C(l+1, j) + m_{i-1} m_l m_j : i <= l < j }
where 1 <= i < j <= n
[CMM]
Implementation Analysis
1. Number of subproblems:
2. Runtime to fill the table:
3. How/where the final return is extracted:
4. Runtime to extract final return:
Implementation Analysis
1. Number of subproblems: O(n^2)
2. Runtime to fill the table: O(n^3)
3. How/where the final return is extracted: return C(1,n)
4. Runtime to extract final return: O(1)
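A minimal Python sketch of the recurrence; dims holds the dimensions m_0 … m_n, so matrix i is m_{i-1} x m_i (names are illustrative):

def cmm_min_cost(dims):
    # dims has length n+1; matrix i (1-indexed) is dims[i-1] x dims[i]
    n = len(dims) - 1
    INF = float("inf")
    # C[i][j] = min cost to multiply matrices i..j; C[i][i] = 0
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # width of the interval i..j
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = INF
            for l in range(i, j):             # split point
                cost = C[i][l] + C[l + 1][j] + dims[i - 1] * dims[l] * dims[j]
                C[i][j] = min(C[i][j], cost)
    return C[1][n]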
[CMM]
Why can’t we use prefixes as subproblems?
If we set our recurrence on prefixes, then when splitting the input into a tree we will hit substrings that are neither a prefix nor a suffix of the original input.
Our table would only hold solutions for prefixes, so we could not look up previously computed entries to solve such a substring.
[CMM]
Total cost of splitting the list of matrices A[i … j] at index l
(hint: 3 components)
Sum of the 3 components:
Root: m_{i-1} m_l m_j
Left: C(i,l)
Right: C(l+1, j)
[CMM]
What do the root, left, and right components of C(i,l) + C(l+1, j) + m_{i-1} m_l m_j represent?
left/right subtree = the subproblems C(i,l) and C(l+1, j)
root = m_{i-1} m_l m_j, the cost of the new product
[Contiguous Subsequence Max Sum]
Subproblem
Let T(i) be the max sum of a contiguous subsequence of A[1 … i] which INCLUDES i
[Contiguous Subsequence Max Sum]
Recurrence
T(0) = 0
T(i) = A[i] + max{0, T(i-1)}
where 1 <= i <= n
- The key lesson here is that when dealing with contiguous subsequences, we don’t need to look at all previous positions - we only need to look at the immediately preceding position since any valid subsequence must be connected.
When do algorithms “need” to include i
When the quantity being optimized depends on where the solution ends: defining T(i) so that it must include A[i] lets the recurrence extend solutions that end at earlier indices.
Ex. LIS, where L(i) always includes A[i]
[Contiguous Subsequence Max Sum]
Implementation analysis
1. Number of subproblems
2. Runtime to fill the table
3. How/where the final return is extracted from that table
4. Runtime to extract the final return
- Number of subproblems: O(n)
- Runtime to fill the table: O(n)
- How/where the final return is extracted from that table: return max{T(*)}
- Runtime to extract the final return: O(n)
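A minimal Python sketch of this recurrence (the running max replaces scanning the table at the end; the variable T tracks T(i) above):

def max_contiguous_sum(A):
    # T = best sum of a contiguous run ending at the current index
    best = T = 0
    for x in A:
        T = x + max(0, T)    # either extend the previous run or start fresh at x
        best = max(best, T)  # answer is the max over all ending positions
    return best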
[Edit Distance]
Input: x[1 … n], y[1 … m]
Subproblem
Subproblem:
Let D(i,j) be the minimum edit distance between x[1 … i] and y[1 … j].
[Edit Distance]
Input: x[1…n], y[1…m]
Recurrence
D(i, 0) = i, 0 <= i <= n // Cost of deleting i characters
D(0, j) = j, 0 <= j <= m // Cost of inserting j characters
D(i, j) = min{
D(i-1, j-1) + diff(x[i], y[j]) # Replace x[i] with y[j]; diff = 0 if x[i] = y[j], else 1
D(i-1, j) + 1 # Cost to delete x[i]
D(i, j-1) + 1 # Cost to insert y[j]
}
where 1 <= i <= n, 1 <= j <= m
[Edit Distance]
Implementation analysis
1. Number of subproblems
2. Runtime to fill the table
3. How/where the final return is extracted from that table
4. Runtime to extract the final return
- Number of subproblems: O(nm)
- Runtime to fill the table: O(nm)
- How/where the final return is extracted: return D(n,m)
- Runtime to extract final return: O(1)
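A minimal Python sketch of the table (strings are 0-indexed; D[i][j] matches D(i, j) above):

def edit_distance(x, y):
    n, m = len(x), len(y)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    # Base cases: deleting i characters, or inserting j characters
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = 0 if x[i - 1] == y[j - 1] else 1
            D[i][j] = min(D[i - 1][j - 1] + diff,  # replace (or match)
                          D[i - 1][j] + 1,         # delete x[i]
                          D[i][j - 1] + 1)         # insert y[j]
    return D[n][m]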
Algos which INCLUDE i
- LIS
- Contiguous Subsequence Max Sum
- Chain Matrix Multiply
[Master Theorem]
What is runtime of below with no merge work:
T(n) = T(n/2)
O(1)
With a = 1, b = 2, and no merge work:
T(n) = O(n^{log_2 1})
= O(n^0)
= O(1)