Mod 1 - 2 Flashcards
How does a Radix sort work?
A radix sort uses multiple passes of counting sort to sort numbers digit by digit (or d bits at a time), from the least significant to the most significant. The method relies on the fact that counting sort is stable, so each pass preserves the order established by earlier passes. It is efficient for sorting many large numbers because it never compares whole keys.
Radix sort has a time efficiency of O(n+2^d) per pass, where d is the number of bits sorted in each pass.
Hence, the total time efficiency is O((w/d)(n+2^d)) where w is the number of bits in a number.
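The passes above can be sketched in Python. This is a minimal sketch, assuming nonnegative integer keys; `radix_sort`, `d`, and `w` are names chosen here for illustration:

```python
def radix_sort(nums, d=8, w=32):
    """Sort nonnegative integers with repeated stable counting-sort passes,
    d bits at a time, from least to most significant (w total bits)."""
    mask = (1 << d) - 1
    for shift in range(0, w, d):
        # Stable counting sort on the d-bit digit at this shift.
        counts = [0] * (1 << d)
        for x in nums:
            counts[(x >> shift) & mask] += 1
        # Convert counts to running sums: counts[v] = index just past the last v.
        for v in range(1, len(counts)):
            counts[v] += counts[v - 1]
        out = [0] * len(nums)
        for x in reversed(nums):  # backward scan keeps each pass stable
            counts[(x >> shift) & mask] -= 1
            out[counts[(x >> shift) & mask]] = x
        nums = out
    return nums
```

Each pass touches all n elements plus a 2^d-entry counts array, matching the O(n+2^d) per-pass cost above.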
How does counting sort work?
Counting sort counts the occurrences of each value in a new array of counts. It then converts the counts into running sums (in place) by adding each count to the running sum before it, so that entry v holds the number of elements less than or equal to v — one past the last index where value v belongs. The sort then scans the initial array backward, using the running sums to copy each element into its position in a sorted output array; the backward scan is what keeps equal elements in their original order. If the elements are bare keys with no attached data, the sorted output can instead be regenerated directly from the counts.
Time efficiency = O(n+k)
Very efficient when the maximum value k in the initial list is not much larger than its length n.
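A minimal sketch of the steps above, assuming integer keys in the range [0, k); the function name is illustrative:

```python
def counting_sort(a, k):
    """Stable counting sort of integers in the range [0, k)."""
    counts = [0] * k
    for x in a:
        counts[x] += 1
    # Running sums: counts[v] becomes the number of elements <= v.
    for v in range(1, k):
        counts[v] += counts[v - 1]
    out = [0] * len(a)
    for x in reversed(a):  # backward scan preserves stability
        counts[x] -= 1
        out[counts[x]] = x
    return out
```

The count and copy loops are O(n) and the running-sum loop is O(k), giving the O(n+k) total.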
How would you identify an O(log(n)) function?
- look for a loop where the loop counter is divided by a constant
- think of binary search, where we eliminate half (so 2 is the base of our log) of the possibilities each time
- another example is traversing through data in a branching hierarchy
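A toy example of the first pattern — the loop variable is divided by a constant each iteration, so the loop runs about log2(n) times (the name is illustrative):

```python
def count_halvings(n):
    """O(log n): the loop counter is divided by a constant each iteration."""
    steps = 0
    while n > 1:
        n //= 2  # halving n means roughly log2(n) iterations total
        steps += 1
    return steps
```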
How to identify an O(sqrt(n)) function?
Look for a loop that counts up to the square root of n, with each iteration occurring in constant time.
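Trial division is a common instance: the loop below tests divisors only up to sqrt(n), doing constant work per iteration (a hypothetical example, not from the cards):

```python
def has_divisor(n):
    """O(sqrt(n)): trial division only needs to check up to sqrt(n)."""
    i = 2
    while i * i <= n:  # loop runs about sqrt(n) times
        if n % i == 0:
            return True
        i += 1
    return False
```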
How to identify an O(n) function?
A basic loop directly related to n.
"for i in range(n):"
How to identify a function in O(n*log(n))?
- an algorithm that does log n work at every element
- an algorithm with an outer loop based on log n and inner loop of n
- log(n!), which grows as n*log(n)
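Merge sort is the classic instance of the first pattern: there are about log n levels of splitting, and each level does O(n) merge work. A sketch for illustration:

```python
def merge_sort(a):
    """O(n log n): log n levels of halving, O(n) merging per level."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```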
How to identify a function in O(n^2)?
-Has nested O(n) loops, with innermost loop performing a constant time operation
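A typical shape of that pattern, using an illustrative pair-counting example — two nested loops over the input with constant work in the innermost body:

```python
def count_equal_pairs(a):
    """O(n^2): nested loops over the input, constant work innermost."""
    pairs = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if a[i] == a[j]:  # constant-time comparison
                pairs += 1
    return pairs
```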
How to identify a function in O(2^n)?
- a recursive function that decrements its argument by a constant and calls itself more than once
- the code makes two branching versions of itself at every step
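The naive Fibonacci recursion is the standard example of this pattern: it subtracts a constant from n and calls itself twice, so the call tree branches in two at every level:

```python
def fib(n):
    """Exponential time: two recursive calls per step, each decrementing
    n by a constant, so the call tree roughly doubles at each level."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```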
How to identify a function in O(1)?
- no loops based on input size
- not recursive
- no calls to other functions that break these conditions
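A trivial check of those conditions — the function below does a fixed number of steps no matter how long the list is (the name is illustrative):

```python
def middle_element(a):
    """O(1): no loops over the input and no recursion; indexing a list
    is constant time regardless of its length."""
    return a[len(a) // 2]
```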
What is Big-O notation?
- Big-O notation was created by Bachmann and Landau to characterize the time complexity of algorithms by their upper bounds
- it focuses on how algorithms grow in their worst case and disregards constants and lesser terms
- g(x) is an upper bound of f(x) if there exist constants c > 0 and x0 such that f(x) <= c*g(x) for all x > x0.
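The definition can be checked numerically over a sample range; `bounded` and the witnesses c and x0 below are illustrative, not part of the formal definition:

```python
def bounded(f, g, c, x0, upto=1000):
    """Check empirically that f(x) <= c*g(x) for every x0 < x <= upto."""
    return all(f(x) <= c * g(x) for x in range(x0 + 1, upto + 1))

# Example: f(x) = 3x + 10 is O(x); the witnesses c = 4, x0 = 10 work,
# since for x > 10 we have 3x + 10 < 3x + x = 4x.
```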