Dynamic Programming Flashcards
Is dynamic programming a recursive algorithm?
No.
In fact, solving the recurrence by plain recursion is a very bad idea: the resulting procedure would require exponential time!
Unlike divide-and-conquer, which reduces a problem to subproblems of substantially smaller size, a typical dynamic programming formulation reduces a problem to subproblems that are only slightly smaller; for instance, L(j) relies on L(j - 1). Thus the full recursion tree generally has polynomial depth but an exponential number of nodes.
However, most of these nodes are repeats: there are not many distinct subproblems among them. Efficiency is therefore obtained by explicitly enumerating the distinct subproblems and solving them in the right order.
pseudo-code for Longest Increasing Subsequences
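A possible Python sketch of the standard O(n^2) dynamic program (the function name and the choice of strictly increasing order are my own assumptions):

```python
def longest_increasing_subsequence(a):
    """Length of the longest strictly increasing subsequence, O(n^2) DP."""
    n = len(a)
    if n == 0:
        return 0
    # L[j] = length of the longest increasing subsequence ending at index j
    L = [1] * n
    for j in range(n):
        for i in range(j):
            if a[i] < a[j]:
                L[j] = max(L[j], L[i] + 1)
    return max(L)
```

The answer is max over all j, since the longest subsequence can end anywhere in the array.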
pseudo-code for Longest Common Subsequence
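One way to sketch the O(mn) table-filling algorithm in Python (function name assumed; C[i][j] is the LCS length of the first i characters of x and the first j of y):

```python
def longest_common_subsequence(x, y):
    """Length of the longest common subsequence of x and y, O(mn) DP."""
    m, n = len(x), len(y)
    # C[i][j] = LCS length of x[:i] and y[:j]; row/column 0 are base cases
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    return C[m][n]
```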
pseudo-code for knapsack without repeat
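A Python sketch of the two-dimensional recurrence K(w, j) = max{K(w - wj, j - 1) + vj, K(w, j - 1)} for knapsack without repetition (function and parameter names are my own):

```python
def knapsack_no_repeat(weights, values, W):
    """0/1 knapsack: each item used at most once; O(nW) time and space."""
    n = len(weights)
    # K[w][j] = best value using only items 1..j within capacity w
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for j in range(1, n + 1):
        for w in range(1, W + 1):
            K[w][j] = K[w][j - 1]              # skip item j
            if weights[j - 1] <= w:            # or take item j once
                K[w][j] = max(K[w][j],
                              K[w - weights[j - 1]][j - 1] + values[j - 1])
    return K[W][n]
```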
pseudo-code for knapsack with repeat
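The one-dimensional recurrence K(w) = max{K(w - wi) + vi : wi <= w} can be sketched in Python as follows (names assumed):

```python
def knapsack_with_repeat(weights, values, W):
    """Unbounded knapsack: items may be reused; O(nW) time, O(W) space."""
    # K[w] = best value achievable with capacity exactly up to w
    K = [0] * (W + 1)
    for w in range(1, W + 1):
        for wi, vi in zip(weights, values):
            if wi <= w:
                K[w] = max(K[w], K[w - wi] + vi)
    return K[W]
```

Because items may repeat, the subproblem only needs the remaining capacity, not which items have been used, so one dimension suffices.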
pseudo-code for finding the minimum cost of Chain matrix multiplication
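A Python sketch of the O(n^3) interval dynamic program (the dimension-list convention, where matrix Ai is m[i-1] x m[i], and the function name are my own assumptions):

```python
def matrix_chain_cost(m):
    """Minimum scalar multiplications to compute A1 * ... * An,
    where Ai has shape m[i-1] x m[i]; O(n^3) DP over subchains."""
    n = len(m) - 1
    # C[i][j] = min cost of multiplying Ai..Aj (1-indexed); C[i][i] = 0
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for s in range(1, n):                  # s = subchain length minus 1
        for i in range(1, n - s + 1):
            j = i + s
            # try every split point k: (Ai..Ak)(Ak+1..Aj)
            C[i][j] = min(C[i][k] + C[k + 1][j] + m[i - 1] * m[k] * m[j]
                          for k in range(i, j))
    return C[1][n]
```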
overall running time for finding the minimum cost of Chain matrix multiplication
O(n^3)
overall running time for Knapsack
O(nW), where W is the knapsack capacity
overall running time for Longest common subsequence
O(n^2)
overall running time for Longest increasing subsequences
O(n^2)
pseudo-code for Bellman-Ford: single-source shortest paths
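A Python sketch of Bellman-Ford (edge-list representation and names are my own; this version assumes no negative cycle is reachable from the source):

```python
def bellman_ford(n, edges, s):
    """Single-source shortest paths with negative edge weights allowed.
    n vertices 0..n-1, edges = [(u, v, weight)], source s; O(|V||E|)."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):          # |V| - 1 rounds of relaxing every edge
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

A further pass over the edges that still finds an improvement would indicate a negative cycle reachable from s.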
Big-O time for all-pairs shortest paths via repeated Bellman-Ford
O(|V|^2 |E|), from running the O(|V||E|) single-source algorithm once per vertex; Floyd-Warshall achieves O(|V|^3) instead.
Pseudo-code for Floyd-Warshall algorithm
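A Python sketch of Floyd-Warshall (edge-list input and function name assumed; vertices are 0..n-1):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest path distances in O(|V|^3); edges = [(u, v, weight)]."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):              # allow vertex k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The outer loop order matters: paths through intermediates {0..k} are finalized before k+1 is allowed.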
Algorithmic difference between knapsack with and without repetition
Knapsack with repetition can be reduced to one dimension.
for w = 1 to W:
K(w) = max{K(w - wi) + vi : all wi <= w}
This algorithm fills in a one-dimensional table of length W + 1, in left-to-right order. Each entry can take up to O(n) time to compute, so the overall running time is O(nW).
Knapsack without repetition needs two dimensions.
K(w, j) = max{K(w - wj, j - 1) + vj, K(w, j - 1)}
running time for Bellman-Ford: single-source shortest paths
O(mn), where n = |V| and m = |E|, i.e. O(|V||E|)