Recursion & DP Flashcards
Guidelines for writing a recursive function
- Find the base case
- Define the recurrence relation (what keeps happening); see the sketch after the tips below
Useful tips:
- Just try to solve a single case and let the recursion do the magic; stepping into the call flow usually ends up causing confusion
- Always try to draw a tree diagram of the recursive calls to determine complexity: the number of nodes bounds the time, and the depth bounds the space (since the calls are stored on a stack). Each node of the tree can represent the parameters passed to the recursive function
- There is always an iterative solution for a recursive function; it is just usually a little more difficult to write
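A minimal sketch of these guidelines (Python; the list-summing example and the name list_sum are illustrative, not from these notes):

    def list_sum(ls):
        if not ls:                           # base case: an empty list sums to 0
            return 0
        return ls[0] + list_sum(ls[1:])      # recurrence: head + sum of the rest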
What is memoization?
By caching and reusing intermediate results, memoization can greatly reduce the number of recursive calls, e.g., by creating a hashmap or array to store the frequently used values (sketched below).
- Memoization is often referred to as top-down dynamic programming (note: bottom-up is usually just referred to as dynamic programming)
- As with plain recursion, it is almost always a good idea to draw the call tree, with the cached states in a box, to determine the runtime complexity
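A minimal memoization sketch (Python, Fibonacci; the fib function and memo dict are illustrative, with the dict playing the role of the hashmap mentioned above):

    def fib(n, memo=None):
        if memo is None:
            memo = {}
        if n < 2:                            # base cases: fib(0) = 0, fib(1) = 1
            return n
        if n not in memo:                    # compute each subproblem only once
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
        return memo[n]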
Master Theorem for time complexity
It cannot be used when the subproblem sizes differ, as in Fibonacci (calls of size n-1 and n-2 rather than n/b); it is useful for recurrences that follow the divide-and-conquer pattern.
It is defined as follows:
T(n) = a * T(n/b) + f(n), where
a = number of recursive calls made (i.e., the number of subproblems)
b = factor by which the subproblem size shrinks, so each subproblem has size n/b
f(n) = time taken to split the problem and merge the results back, assumed to be O(n^d) for some d >= 0
The Master Theorem then provides three formulas based on the relationship between d and log_b(a) (i.e., log a / log b):
Case 1: if d < log_b(a)
then T(n) = O(n^(log_b(a)))
This indicates the time taken to merge results is less than the time for the recursion. Example: DFS/BFS
Case 2: if d = log_b(a)
then T(n) = O(n^d log n) = O(n^(log_b(a)) log n)
Example: Binary Search, Merge Sort
Case 3: if d > log_b(a)
then T(n) = O(n^d)
This indicates the time taken to merge results dominates the recursion. Example: Quickselect
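As a worked check (merge sort, matching Case 2): a = 2 recursive calls, b = 2 since each half has size n/2, and merging takes f(n) = O(n), so d = 1. Since log_2(2) = 1 = d, Case 2 gives T(n) = O(n^1 log n) = O(n log n).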
Backtracking algorithm
def backtrack(candidate):
    if find_solution(candidate):
        output(candidate)
        return
    # iterate all possible candidates.
    for next_candidate in list_of_candidates:
        if is_valid(next_candidate):
            # try this partial candidate solution
            place(next_candidate)
            # given the candidate, explore further.
            backtrack(next_candidate)
            # backtrack
            remove(next_candidate)
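A concrete sketch of this template (Python; permutations of a list of distinct values, with illustrative names that mirror the template's helpers):

    def permutations(nums):
        results = []
        def backtrack(current, remaining):
            if not remaining:                     # find_solution: nothing left to place
                results.append(list(current))     # output the complete candidate
                return
            for i in range(len(remaining)):       # iterate all possible candidates
                current.append(remaining[i])      # place(next_candidate)
                backtrack(current, remaining[:i] + remaining[i+1:])  # explore further
                current.pop()                     # backtrack: remove(next_candidate)
        backtrack([], nums)
        return results

For example, permutations([1, 2, 3]) returns all 6 orderings.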
Problems that could be solved using Recursion
- fibonacci (could also be done using dynamic programming)
- string permutations
- tree traversals (see the sketch after this list)
- dfs/bfs
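For the tree-traversal item above, a minimal sketch (Python; the Node class and inorder function are illustrative, not from these notes):

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def inorder(node):
        if node is None:                     # base case: empty subtree
            return []
        # recurrence: left subtree, then root, then right subtree
        return inorder(node.left) + [node.val] + inorder(node.right)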
Dynamic Programming
Is mostly just a matter of taking a recursive algorithm, finding the overlapping subproblems (the repeated calls), and then caching those results for future recursive calls
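To contrast with the top-down memoized version earlier, a minimal bottom-up sketch (Python, Fibonacci; keeping only the last two subproblem results is an optimization choice, not from these notes):

    def fib_bottom_up(n):
        if n < 2:                            # base cases: fib(0) = 0, fib(1) = 1
            return n
        prev, curr = 0, 1
        for _ in range(2, n + 1):            # fill subproblems from smallest to largest
            prev, curr = curr, prev + curr
        return curr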
What’s the time complexity of a recursive algorithm?
Given a recursive algorithm, its time complexity O(T) is typically the product of the number of recursive invocations (denoted R) and the time complexity of the calculation performed with each recursive call (denoted O(s)):
O(T) = R * O(s)
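As a worked example: naive recursive Fibonacci makes R = O(2^n) invocations, each doing O(1) work, so O(T) = O(2^n); with memoization, R drops to O(n) and O(T) becomes O(n).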
What are the two parts of calculating the space complexity of a recursive algorithm?
- recursion-related (space for the stack of calls): this space is freed as soon as the function call ends
- non-recursion-related (e.g., memoization): space (normally on the heap) allocated for global variables and caches
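As a worked example: memoized Fibonacci uses O(n) recursion-related space (the deepest chain of calls on the stack) plus O(n) non-recursion-related space for the memo table.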
Is the following snippet a tail recursion?
private static int helper_non_tail_recursion(int start, int [] ls) {
if (start >= ls.length) {
return 0;
}
return ls[start] + helper_non_tail_recursion(start+1, ls);
}
It is not a tail recursion because it does some computation after the recursive call returns.
// we can make it a tail recursion by changing it to this:
private static int helper_tail_recursion(int start, int[] ls, int acc) {
    if (start >= ls.length) {
        return acc;
    }
    // this is a tail recursion because the final instruction is the recursive call.
    return helper_tail_recursion(start + 1, ls, acc + ls[start]);
}
What is the benefit of tail recursion? [note: C#, Java, and Python do not support tail recursion optimization]
The benefit of tail recursion is that it can avoid the accumulation of stack frames during recursive calls, which at times helps us avoid stack overflow errors.
In a tail recursion, as soon as the recursive call returns, we immediately return as well, so we can skip the entire chain of recursive returns and go straight back to the original caller. Instead of allocating new space on the stack, the system could simply reuse the space allocated earlier for the next recursive call.