121 Week 19 - Dynamic Programming Flashcards
Computational problem
Have a domain of inputs In. Could be integers, reals, Booleans, binary strings, arrays etc.
Have a domain of outputs Out. Could be integers, reals, Booleans, binary strings, arrays etc.
A relation that specifies, for each input x, a set of valid outputs valid(x)
Formal definition for if an algorithm solves a problem
An algorithm A solves a problem P if, for every input x ∈ In of size n, A(x) gives one of the valid outputs for x. In other words, A(x) ∈ valid(x).
Exhaustive search
Also known as brute-force search or generate and test.
Exhaustive search solves a problem by, given an input x, testing all possible outputs y to see whether y is a valid output.
In other words, it tests every possible output to find the valid solutions for the input.
Brute force will always find a solution to a solvable problem; however, it is often very time-inefficient.
Exhaustive search example - GCD
Want to find the greatest common divisor (GCD) of a and b. E.g., GCD of 32 and 48 is 16.
Brute-force search solves GCD by iterating through all integers from 1 to min(a,b). It keeps track of the largest integer that divides both a and b and returns it after it has reached min(a,b).
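The brute-force GCD described above can be sketched as follows (a minimal illustration, not the efficient Euclidean algorithm):

```python
def gcd_brute_force(a, b):
    """Brute-force GCD: try every integer from 1 to min(a, b)."""
    best = 1
    for d in range(1, min(a, b) + 1):
        if a % d == 0 and b % d == 0:
            best = d  # remember the largest divisor of both seen so far
    return best

print(gcd_brute_force(32, 48))  # → 16
```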
Exhaustive search example - Travelling salesman problem
Generates all possible orderings of the other nodes from a source node, computes the total distance of each route, and keeps the route with the smallest distance.
Will always find the shortest distance however has a time complexity of O( (n-1)! ) - very bad.
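A sketch of this brute-force TSP, assuming the input is a symmetric distance matrix (the function name and return format are illustrative):

```python
from itertools import permutations

def tsp_brute_force(dist, source=0):
    """Try all (n-1)! orderings of the non-source nodes; keep the shortest tour."""
    n = len(dist)
    others = [v for v in range(n) if v != source]
    best_route, best_len = None, float("inf")
    for perm in permutations(others):
        route = (source,) + perm + (source,)  # tour returns to the source
        length = sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len
```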
Exhaustive search example - knapsack
Given a set of items 1 to n, each with a weight and a value, and a weight capacity W, determine which items to put in the knapsack so that their total weight is at most W and their total value is maximised.
Brute force tries every combination of items in the knapsack, then picks the combination with the highest value that meets the weight condition.
Has a time complexity of O(2^n)
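The O(2^n) brute force can be sketched by enumerating every subset of items (item indices and the return format are illustrative):

```python
from itertools import combinations

def knapsack_brute_force(weights, values, W):
    """Try all 2^n subsets of items; keep the most valuable one within capacity W."""
    n = len(weights)
    best_value, best_items = 0, ()
    for r in range(n + 1):                      # subsets of every size 0..n
        for items in combinations(range(n), r):
            weight = sum(weights[i] for i in items)
            value = sum(values[i] for i in items)
            if weight <= W and value > best_value:
                best_value, best_items = value, items
    return best_value, best_items
```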
Candidate space
The set of all candidates for solutions to a given problem.
Exhaustive search uses all candidates in the candidate space and checks if each is a solution to the problem.
Potential search tree
Root of the tree represents the generic candidate - a blank candidate with the correct format/structure/type.
As you move down the tree, candidates get slowly more defined.
At the leaves you have the candidates that make up the candidate space.
The potential search tree lets you discard many candidates at once: if a partial candidate further up the tree is found to be invalid, the whole subtree beneath it can be eliminated, increasing the efficiency of finding a solution.
Backtracking using a potential search tree
Using backtracking on a potential search tree allows you to eliminate many possible candidates: if a candidate further up the tree is found to be invalid, none of its descendants need to be examined.
This is done by performing a depth-first search on the tree; when the search reaches an invalid candidate, it backtracks to the candidate's parent and continues the search, excluding the invalid candidate's entire subtree.
Backtracking on a potential search tree is more efficient than an exhaustive search.
Increasing efficiency of a potential search tree
Some problems have additional constraints that let you eliminate a subtree of the search tree as early as possible, by using the constraints to predict whether a subtree can still contain a valid candidate.
E.g., for knapsack, if a partial candidate's total weight already exceeds W, you can immediately backtrack and discard every candidate in that subtree.
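This pruning rule for knapsack can be sketched as a recursive depth-first search (the function and variable names are illustrative):

```python
def knapsack_backtrack(weights, values, W):
    """DFS over the potential search tree; prune any subtree whose weight exceeds W."""
    n = len(weights)
    best = 0

    def search(i, weight, value):
        nonlocal best
        if weight > W:
            return              # constraint violated: abandon this whole subtree
        best = max(best, value)
        if i == n:
            return              # reached a leaf: a fully defined candidate
        search(i + 1, weight + weights[i], value + values[i])  # include item i
        search(i + 1, weight, value)                           # exclude item i

    search(0, 0, 0)
    return best
```

Unlike plain exhaustive search, no candidate below an overweight partial candidate is ever generated.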
Example uses of backtracking
Eight queens puzzle, crossword, sudoku, knapsack, parsing etc.
Solving problems without candidate space
Other ways of solving problems include splitting up the problem into simpler sub-problems.
E.g., recursion, divide and conquer.
Problem decomposition
Problem can be solved from a combination of optimal solutions to its subproblems.
If the subproblems are disjoint, you can solve them using divide and conquer.
Overlapping subproblems mean that the same subproblem will be solved multiple times. Because the same work is repeated, plain divide and conquer or naive recursion becomes inefficient.
Top-down dynamic programming
Top-down dynamic programming eliminates the inefficiency caused by overlapping subproblems by storing the result of each subproblem in memory. This means that instead of re-solving a subproblem, its solution can be retrieved from memory.
E.g., for solving Fibonacci of 6, store the values of F(5), F(4), …, F(1) as they are computed going down the subproblems, and reuse them to solve F(6).
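A minimal top-down sketch of this, using a dictionary as the memo (the parameter names are illustrative):

```python
def fib(n, memo=None):
    """Top-down Fibonacci: each F(k) is computed once, then read from the memo."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n                  # base cases F(0) = 0, F(1) = 1
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(6))  # → 8
```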
Memoization
Storing computed values to avoid recomputing them.
Memoization causes a computation vs memory trade-off, as it increases efficiency but also increases the amount of memory needed.
Bottom-up dynamic programming
Bottom-up dynamic programming eliminates the inefficiency that can be caused by overlapping subproblems by solving the smallest subproblems first and storing the results in memory. It then solves larger and larger subproblems using the stored results.
This can have better memory usage than top-down dynamic programming.
E.g., for solving Fibonacci, store currentFibonacci and prevFibonacci. Start with them being 0 and 1 and use them to find the next value. Once the next value has been found, update the values and continue.
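The two-variable scheme above can be sketched as follows; only the previous two values are kept, so memory usage is O(1) rather than O(n):

```python
def fib_bottom_up(n):
    """Bottom-up Fibonacci: keep only the previous two values and roll forward."""
    prev, curr = 0, 1             # F(0) and F(1)
    for _ in range(n):
        prev, curr = curr, prev + curr  # advance one step up the sequence
    return prev

print(fib_bottom_up(6))  # → 8
```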
Dynamic programming to solve knapsack
Problem: let m[j, w] be the maximum value achievable using the first j items with a weight capacity of w (w can be different from W).
Subproblems:
- If j = 0, m[0, w] = 0 (no items, no value).
- If the jth item's weight w_j > w, the item cannot fit, so m[j, w] = m[j-1, w].
- If w_j <= w, the max value is the larger of leaving item j out or adding it: m[j, w] = max(m[j-1, w], m[j-1, w - w_j] + v_j).
Use dynamic programming to compute m for all j from 0 to n and all w from 0 to W, storing the results in a table; the answer is m[n, W].
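A bottom-up sketch of this table (the example items and capacity are illustrative):

```python
def knapsack_dp(weights, values, W):
    """Bottom-up knapsack: m[j][w] = best value using the first j items, capacity w."""
    n = len(weights)
    m = [[0] * (W + 1) for _ in range(n + 1)]   # row 0: j = 0, so m[0][w] = 0
    for j in range(1, n + 1):
        for w in range(W + 1):
            if weights[j - 1] > w:               # item j does not fit
                m[j][w] = m[j - 1][w]
            else:                                # best of excluding or including item j
                m[j][w] = max(m[j - 1][w],
                              m[j - 1][w - weights[j - 1]] + values[j - 1])
    return m[n][W]

print(knapsack_dp([2, 3, 4], [3, 4, 5], 5))  # → 7
```

Filling the table takes O(nW) time, compared with O(2^n) for brute force.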