Algorithm and data structure time complexity Flashcards
Memorize the time complexities of common algorithms and data structures.
Insertion sort
best: Ω(N)
avg: Θ(N^2)
worst: O(N^2)
Space complexity: O(1)
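A minimal sketch of the standard in-place version (assumed here, not necessarily the exact classroom variant): on already-sorted input the inner loop body never executes, giving the Ω(N) best case, and shifting within the array gives the O(1) space.

```python
def insertion_sort(a):
    # Grow a sorted prefix; shift each new element left into place.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # never true on sorted input
            a[j + 1] = a[j]           # shift larger element right
            j -= 1
        a[j + 1] = key
    return a
```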
Merge sort
best/avg/worst: Θ(N log2 N)
space complexity: O(N)
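A top-down sketch (one common variant, assumed rather than taken from any specific course): log2 N levels of splitting with Θ(N) merge work per level, and the merge buffer is the O(N) extra space.

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0            # O(N) auxiliary buffer
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```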
Quick sort
best: Ω(N log2 N)
avg: Θ(N log2 N)
worst: O(N^2)
Space complexity: O(log N)
Quicksort worst case: partitioning removes only one element per call, which happens on sorted or reverse-sorted data and (with a naive partition) on data with many duplicates; this leads to incremental execution resembling insertion sort: N levels of recursion, with N - i elements to partition at level i
Worst case Θ(N^2)
Best case: partitioning is balanced, subproblems no larger than N/2
Θ(N log2 N)
Random pivot selection makes the worst case a matter of chance for all inputs
the probability of choosing the worst pivot in every call becomes vanishingly small as N increases
average case Θ(N log2 N) “expected”
Space complexity: O(log N)
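A sketch with random pivot selection (Lomuto partitioning is an assumption here; other partition schemes behave differently on duplicates):

```python
import random

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Random pivot: hitting the bad split becomes a matter of chance.
    r = random.randint(lo, hi)
    a[r], a[hi] = a[hi], a[r]
    pivot, i = a[hi], lo
    for j in range(lo, hi):            # Lomuto partition
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
```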
Heap sort
Best/avg/worst: Θ(Nlog2N)
Space complexity: O(1) possible (in-place variant), but I believe the method we learned in class was O(N)
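A sketch of the in-place O(1)-space variant (the O(N) classroom version presumably copied elements into a separate heap):

```python
def heapsort(a):
    n = len(a)

    def sift_down(i, end):
        # Restore the max-heap property below index i, within a[:end].
        while 2 * i + 1 < end:
            child = 2 * i + 1
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1
            if a[i] >= a[child]:
                break
            a[i], a[child] = a[child], a[i]
            i = child

    for i in range(n // 2 - 1, -1, -1):  # build max-heap, bottom-up
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # N extractions of the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```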
Array
Search/insert/delete: Θ(N)
access: Θ(1)
Space complexity: O(N)
Sorted array
Search: O(log N)
Insert/Delete: O(N)
Space complexity: O(N)
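The O(log N) search is binary search; a minimal sketch (insert/delete stay O(N) because elements must be shifted to keep the order):

```python
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the interval each step
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                     # not found
```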
Linked list
Search/access: O(N)
Insert/delete: O(1) (given a reference to the node; see the sketch below)
Space complexity: O(N)
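A singly-linked sketch showing why insert/delete is O(1) only once you already hold a reference to the right node; reaching that node in the first place is the O(N) search:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def insert_after(node, value):
    # O(1): rewire two pointers, no traversal.
    node.next = Node(value, node.next)

def delete_after(node):
    # O(1), assuming node.next exists.
    node.next = node.next.next
```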
Stack
Search/access: O(N)
Insert/delete: O(1)
(average and worst)
Space complexity: O(N)
Hash table
Average:
Search/insert/delete: Θ(1)
Worst case:
Search/insert/delete: Θ(N)
average Θ(1) holds if the number of buckets m is proportional to N (constant load factor)
chaining: get is amortized constant, put is constant; expected cost is Θ(1 + N/m) under SUHA (expected chain length N/m), worst case O(N)
uniform hashing: the expected number of keys compared when inserting an object depends on the load factor N/m
probing (open addressing): put and get are amortized constant under uniform hashing
Space complexity: O(N)
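A separate-chaining sketch with a fixed bucket count m (no resizing, an assumption to keep it short; real tables grow m so that N/m stays constant):

```python
class ChainedHashTable:
    def __init__(self, m=8):
        self.m = m
        self.buckets = [[] for _ in range(m)]   # one chain per bucket

    def put(self, key, value):
        bucket = self.buckets[hash(key) % self.m]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)        # update existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # Expected chain length is N/m under SUHA; worst case all N
        # keys collide into one bucket, which is the O(N) bound.
        for k, v in self.buckets[hash(key) % self.m]:
            if k == key:
                return v
        raise KeyError(key)
```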
Binary search tree
Avg search, insert, delete: Θ(log2 N)
Worst search, insert, delete: O(N)
Space complexity: O(N)
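An unbalanced-BST sketch: each comparison descends one level, so the cost is the tree height, Θ(log2 N) on average for random insertions but O(N) when the tree degenerates into a chain (e.g. keys inserted in sorted order):

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root                   # duplicates are ignored

def bst_search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None
```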
Red-Black tree
Search/access/insert/delete: O(log2 N)
(average and worst)
Space complexity: O(N)
BFS
O(|V| + |E|)
space complexity: O(|V|) (worst case: the queue holds all vertices)
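A sketch over an adjacency-list dict (an assumed representation): each vertex is enqueued at most once and each edge scanned once, giving O(|V| + |E|); the queue itself can grow to O(|V|).

```python
from collections import deque

def bfs(adj, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:          # each edge examined once
            if w not in visited:
                visited.add(w)
                queue.append(w)   # queue may hold O(|V|) vertices
    return order
```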
DFS
O(|V| + |E|)
space complexity: O(h), where h is the recursion depth (height of the DFS tree); with an explicit stack, O(|V|) in the worst case
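A recursive sketch: the call stack holds one frame per vertex on the current path, which is the O(h) space; an explicit-stack version can hold O(|V|) vertices in the worst case.

```python
def dfs(adj, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    for w in adj[start]:
        if w not in visited:
            dfs(adj, w, visited)  # recursion depth = current path length
    return visited
```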
Prim’s
O(E log2 V) assuming queue is implemented as a binary heap
queue operations determine the running time
all E edges may be added to the queue
worst case: all E edges are removed from the queue
E log2 E = O(E log2 V), since E ≤ V^2 implies log2 E ≤ 2 log2 V, as in Kruskal’s
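A sketch of the lazy binary-heap variant (edges in the queue rather than vertices; assumes an undirected connected graph given as vertex -> list of (weight, neighbour) pairs): up to E pushes and E pops at O(log E) = O(log V) each.

```python
import heapq

def prim(adj, start):
    in_tree, total = {start}, 0
    heap = list(adj[start])       # (weight, neighbour) pairs
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, v = heapq.heappop(heap)
        if v in in_tree:
            continue              # stale entry, skip
        in_tree.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in in_tree:
                heapq.heappush(heap, edge)
    return total                  # weight of the MST
```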
Bellman-Ford
O(V*E)
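A sketch in edge-list form (an assumed representation): |V| - 1 relaxation passes over all |E| edges gives the O(V*E) bound, with one extra pass to detect negative cycles.

```python
def bellman_ford(edges, n, source):
    # edges: list of (u, v, weight); vertices are 0 .. n-1.
    dist = [float("inf")] * n
    dist[source] = 0
    for _ in range(n - 1):            # |V| - 1 passes ...
        for u, v, w in edges:         # ... over all |E| edges
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:             # one more pass: negative cycle?
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist
```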