Complexity and Big O Flashcards
The execution tree of a recursive function forms an n-ary tree, where n is the branching factor, i.e. the number of times the recursive call appears in the recurrence relation.
When you have a recursive function that makes multiple calls, the runtime will often look like:
O(branches ^ depth)
As mentioned above, branches is the number of times the recursion appears in the recurrence relation, while the depth varies per problem. For instance, the Fibonacci recurrence [f(n) = f(n-1) + f(n-2)] yields a binary call tree that goes all the way down to the 0th element, so the depth is n.
This gives O(2^n) time complexity, where n is the input size and the 2 branches are f(n-1) and f(n-2).
Strictly speaking, deriving it purely mathematically gives a tighter bound of O(φ^n), where φ ≈ 1.618 is the golden ratio.
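A minimal sketch of the naive recursive Fibonacci (assuming the conventional base cases f(0) = 0 and f(1) = 1), where each call spawns 2 branches and the call tree is roughly n levels deep, giving O(branches^depth) = O(2^n):

```java
public class Fib {
    static long fib(int n) {
        if (n <= 1) return n;            // base cases end the recursion
        return fib(n - 1) + fib(n - 2);  // two branches per call
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```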
Sum of 1 through (N-1):
N(N-1)/2
Sum of a sequence of powers of two is roughly equal to ___ ?:
The next value in the sequence.
So, 2^0 + 2^1 + 2^2 + … + 2^N = 2^(N+1) - 1 ≈ 2^(N+1), i.e., roughly the next power of two in the sequence.
Number of times we can halve n until we get 1 is ___ ?:
log n
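A quick sketch that counts the halvings directly; for n = 64 the count comes out to 6 = log2(64):

```java
public class Halvings {
    public static void main(String[] args) {
        int n = 64, count = 0;
        while (n > 1) {
            n /= 2;   // halve the problem size
            count++;  // one more halving step
        }
        System.out.println(count); // 6, since log2(64) = 6
    }
}
```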
When you see a problem where the number of elements in the problem space gets halved each time, that will likely be a ___ runtime:
O(log N)
e.g.
1. Binary search on an N-element sorted array
2. Finding an element in a balanced binary search tree
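A minimal iterative binary search sketch; each iteration discards half the remaining range, hence the O(log N) bound:

```java
public class BinarySearch {
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow vs (lo + hi) / 2
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1; // discard the left half
            else hi = mid - 1;                 // discard the right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(a, 7)); // 3
    }
}
```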
The term linearithmic is used to describe programs whose running time for a problem of size N has order of growth of ___:
N log N
Binary Search Running Time:
O(log N)
Merge sort runtime complexity:
O(N log N)
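A compact merge sort sketch (hypothetical helper names), included to show where the bound comes from: the recursion is about log N levels deep, and each level does O(N) work merging:

```java
import java.util.Arrays;

public class MergeSort {
    // sorts a[lo..hi), hi exclusive
    static void sort(int[] a, int lo, int hi) {
        if (hi - lo <= 1) return;      // 0 or 1 elements: already sorted
        int mid = lo + (hi - lo) / 2;
        sort(a, lo, mid);              // sort the left half
        sort(a, mid, hi);              // sort the right half
        merge(a, lo, mid, hi);         // O(N) merge of the two halves
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] left = Arrays.copyOfRange(a, lo, mid);
        int[] right = Arrays.copyOfRange(a, mid, hi);
        int i = 0, j = 0, k = lo;
        while (i < left.length && j < right.length)
            a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) a[k++] = left[i++];
        while (j < right.length) a[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7};
        sort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 7, 9]
    }
}
```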
Quicksort runtime complexity:
O(N log N) on average; O(N^2) in the worst case
Quadratic runtime complexity examples:
Selection sort and Insertion sort
Selection sort runtime complexity:
O(N^2)
Insertion sort runtime complexity:
O(N^2)
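A short insertion sort sketch showing where the quadratic bound comes from: the outer loop runs N times and the inner shift can scan up to N elements in the worst case:

```java
import java.util.Arrays;

public class InsertionSort {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift larger elements one slot right
                j--;
            }
            a[j + 1] = key;      // drop key into its sorted position
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 1, 3, 2};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4]
    }
}
```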
How would you find the maximum number of digits needed to represent a number? E.g., how many digits can 999 have? This is good to know when you have to iterate through the digits of a given number:
digits ~ log Z (log base 10)
Say Z is a 2-digit number; the maximum 2-digit value is 99 = 10^2 - 1 ~ 10^2. Similarly, the maximum value of a d-digit number Z looks like:
Z ~ 10^d
Taking log (base 10) on each side: log Z = d log 10 = d
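A small sketch (hypothetical class name) checking the log-based count against a direct digit-stripping loop:

```java
public class DigitCount {
    public static void main(String[] args) {
        int z = 999;
        // digits = floor(log10 Z) + 1, valid for Z >= 1
        int viaLog = (int) Math.floor(Math.log10(z)) + 1;
        int viaLoop = 0;
        for (int t = z; t > 0; t /= 10) viaLoop++; // strip one digit per step
        System.out.println(viaLog + " " + viaLoop); // 3 3
    }
}
```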
What is the order of dominance for O(X log X), O(X^2), O(2^X), O(X), O(X!), O(log X)?:
O(X!) > O(2^X) > O(X^2) > O(X log X) > O(X) > O(log X)
Which is dominant, 2^N or N^2, when deciding the big O for an algorithm that has complexity O(2^N + N^2)?:
2^N is dominant, since an exponential grows faster than any polynomial, so the complexity boils down to O(2^N)
What will be the complexity of an algorithm that runs two separate for loops, for N and M iterations respectively?:
O(N + M); since there is no established relationship between N and M, we need to account for both
How would you determine the time complexity of two nested for loops running N and M iterations respectively?:
O(N * M)
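A brief sketch contrasting the two cards above: sequential loops add their costs, while nested loops multiply them (the summing functions are hypothetical, purely for illustration):

```java
public class LoopCosts {
    // O(N + M): the loops run one after the other
    static long sequential(int[] a, int[] b) {
        long sum = 0;
        for (int x : a) sum += x; // N iterations
        for (int y : b) sum += y; // M iterations
        return sum;
    }

    // O(N * M): the inner loop runs in full for every outer iteration
    static long nested(int[] a, int[] b) {
        long sum = 0;
        for (int x : a)      // N iterations
            for (int y : b)  // M iterations each
                sum += (long) x * y;
        return sum;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3}, b = {4, 5};
        System.out.println(sequential(a, b)); // 15
        System.out.println(nested(a, b));     // 54
    }
}
```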
How do we describe the runtime of dynamically adding elements to an ArrayList?:
Using the concept of amortized time.
For instance, adding an element to an array that is full costs O(N) for that one add, but the amortized runtime per add is O(1).
When the array is full, you create an array of double the size and copy over the elements, which is O(N). Every time we run out of space, the capacity doubles, so the copy costs are:
1 + 2 + 4 + 8 + … + X
or, reading right to left, X + X/2 + X/4 + X/8 + … + 1 ≈ 2X total copies across X adds, i.e. O(X) total time
So X adds take O(X) time, and O(X) / X ~ O(1) per add, i.e., each add operation takes constant time on average, which is how we describe amortized time.
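A toy doubling array (a hypothetical sketch, not the real ArrayList internals) that illustrates the amortized argument: a single add can cost O(X) for the copy, but X adds cost O(X) in total:

```java
import java.util.Arrays;

public class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;

    void add(int value) {
        if (size == data.length)
            data = Arrays.copyOf(data, data.length * 2); // O(size) copy, but rare
        data[size++] = value; // the common O(1) case
    }

    public static void main(String[] args) {
        DynamicArray arr = new DynamicArray();
        for (int i = 0; i < 10; i++) arr.add(i); // capacity doubles: 1, 2, 4, 8, 16
        System.out.println(arr.size); // 10
    }
}
```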