Week 1 Flashcards
What is an algorithm?
A step-by-step procedure for solving a problem, which can be implemented as a computer program.
What is a procedure?
A method for solving a problem.
Is a programming language a type of procedure?
No. A programming language is a way of expressing a procedure, not a procedure itself.
Why do we analyze algorithms?
To predict performance, provide guarantees, and understand the theoretical basis of an algorithm.
Syntax for a stopwatch in Java.
Stopwatch stopwatch = new Stopwatch();
// ... code to be timed ...
double time = stopwatch.elapsedTime();
StdOut.println("elapsed time: " + time);
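A complete runnable sketch, assuming the algs4 booksite library (edu.princeton.cs.algs4) is on the classpath; the loop is just a stand-in for whatever code is being timed.
import edu.princeton.cs.algs4.StdOut;
import edu.princeton.cs.algs4.Stopwatch;

public class TimingDemo {
    public static void main(String[] args) {
        Stopwatch stopwatch = new Stopwatch();  // starts timing on construction
        double sum = 0.0;
        for (int i = 1; i <= 10000000; i++)     // stand-in for the code being timed
            sum += Math.sqrt(i);
        double time = stopwatch.elapsedTime();  // wall-clock seconds since construction
        StdOut.println("elapsed time: " + time);
    }
}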
What are the limitations of run-time analysis?
- Implementing the algorithm may be difficult.
- Results cover only the inputs actually tested, not all possible inputs.
- Results are only valid for the same hardware and software environment.
How much time do primitive operations take?
Constant time.
If I perform a primitive operation N times, what is the time taken?
Proportional to N, where N is the number of repetitions.
Define O(1), O(log N), O(N), O(N log N)
O(1) = constant time regardless of input
O(log N) = logarithmic time, proportional to the logarithm of the input size.
O(N) = linear time, directly proportional to input size.
O(N log N) = log-linear time, proportional to the input size times its logarithm.
Why is Big O notation desirable?
It abstracts away implementation details: we expect an algorithm's running time to grow in proportion to its Big O bound on any hardware or operating system.
What is O of O(N) + O(N^2)?
O(N^2)
How can I find the value of n for which the Big-Oh notation holds? Use example 11n + 5 to explain.
Find constants c and n0 such that c * g(n) >= f(n) for all n >= n0. For f(n) = 11n + 5 with g(n) = n, choose c = 12: 12n >= 11n + 5 holds exactly when n >= 5, so the bound holds from n0 = 5 onwards.
What does big Ω do?
Tells us that the time taken grows at least as fast as this function (a lower bound).
What does Θ notation do?
Gives a tight bound: for large N the running time lies between k1 * f(N) and k2 * f(N) for some constants k1 and k2.
Summarize difference between Big Θ, Big Oh and Big Omega.
Θ(N^2): the dominant term must be exactly N^2.
O(N^2): the dominant term must be N^2 or smaller.
Ω(N^2): the dominant term must be N^2 or larger.
Explain briefly selection sort.
Iterate through the list from index i to find the smallest remaining entry and swap it with the entry at i. Then increment i and repeat until the end of the list.
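A minimal in-place Java sketch of this (int array, ascending order; the method name is illustrative):
// Selection sort: in iteration i, find the smallest remaining entry
// in a[i..n-1] and swap it into position i.
public static void selectionSort(int[] a) {
    int n = a.length;
    for (int i = 0; i < n; i++) {
        int min = i;                                  // index of smallest entry so far
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min]) min = j;
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;  // swap into position i
    }
}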
List sorting algorithms that are in-place.
Selection Sort
Insertion Sort
Shell Sort
Quick Sort
List sorting algorithms that are non-recursive.
Selection Sort
Insertion Sort
Shell Sort
List sorting algorithms that utilize comparisons.
Selection Sort
Insertion Sort
Shell Sort
Merge Sort
Quick Sort
List sorting algorithms that are internal.
Selection Sort
Insertion Sort
Shell Sort
Quick Sort
List sorting algorithms that are external.
Merge Sort
List sorting algorithms that are unstable.
Selection Sort
Shell Sort
Quick Sort
Explain stability in relation to sorting algorithms.
Stable sorting algorithms preserve the relative order of elements that have equal keys.
Explain internal vs external in relation to sorting algorithms.
- An internal sort is one where all the items being sorted fit in main memory (RAM).
- An external sort is one where the items do not fit in main memory and must use external storage (e.g., disk).
Explain insertion sort.
In iteration i, swap a[i] with each larger entry on its left until it reaches its correct position. Repeat for each i until every element has been inserted in its correct place.
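A minimal Java sketch of this (int array, ascending order; the method name is illustrative):
// Insertion sort: move a[i] left past every larger entry
// until it sits in its correct place among a[0..i].
public static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
            int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;  // swap with larger left neighbour
        }
    }
}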
List Sorting algorithms that are stable.
Insertion Sort
Merge Sort
List best case of all the sorting algorithms.
Selection Sort: O(N^2)
Insertion Sort: O(N)
Shell Sort (3x + 1): O(N log N)
Merge Sort: ~1/2 N lg N compares
Quick Sort: ~N lg N compares
List worst case of all the sorting algorithms.
Selection Sort: O(N^2)
Insertion Sort: O(N^2)
Shell Sort (3x + 1): O(N^(3/2))
Merge Sort: N lg N
Quick Sort: ~1/2 N^2 compares
List average case of all the sorting algorithms.
Selection Sort: O(N^2)
Insertion Sort: O(N^2)
Shell Sort (3x + 1): O(N^(3/2))
Merge Sort: N lg N
Quick Sort: ~2 N ln N (≈ 1.39 N lg N) compares
Explain shell sort.
Elements a fixed gap apart are compared and swapped if out of order. The comparison position advances through the list, and then the gap is reduced. This process repeats until the gap is 1, at which point a standard insertion sort finishes the nearly-sorted list.
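A Java sketch using the 3x + 1 gap sequence (1, 4, 13, 40, ...); the method name is illustrative:
// Shell sort: insertion sort with decreasing gaps; the final pass (h = 1)
// is a standard insertion sort over a nearly-sorted array.
public static void shellSort(int[] a) {
    int n = a.length;
    int h = 1;
    while (h < n / 3) h = 3 * h + 1;   // largest 3x+1 gap below n
    while (h >= 1) {
        for (int i = h; i < n; i++) {  // h-sort: insertion sort with stride h
            for (int j = i; j >= h && a[j] < a[j - h]; j -= h) {
                int tmp = a[j]; a[j] = a[j - h]; a[j - h] = tmp;
            }
        }
        h /= 3;                        // shrink the gap
    }
}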
Explain merge sort.
Divide array into two halves, recursively sort each half, merge two halves by comparing elements sequentially from each half and inserting into new temp array.
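A top-down Java sketch of this; call it as mergeSort(a, new int[a.length], 0, a.length - 1). Names are illustrative:
// Merge sort: recursively sort each half, then merge by repeatedly taking
// the smaller front element of the two halves via the temp array aux.
public static void mergeSort(int[] a, int[] aux, int lo, int hi) {
    if (hi <= lo) return;
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, aux, lo, mid);        // sort left half
    mergeSort(a, aux, mid + 1, hi);    // sort right half
    for (int k = lo; k <= hi; k++) aux[k] = a[k];   // copy into temp array
    int i = lo, j = mid + 1;
    for (int k = lo; k <= hi; k++) {   // merge back into a[lo..hi]
        if      (i > mid)         a[k] = aux[j++];  // left half exhausted
        else if (j > hi)          a[k] = aux[i++];  // right half exhausted
        else if (aux[j] < aux[i]) a[k] = aux[j++];  // right front is smaller
        else                      a[k] = aux[i++];  // left front smaller or equal (stable)
    }
}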
List all sorting algorithms that are recursive.
Merge Sort
Quick Sort
List all sorting algorithms that are out of place.
Merge Sort
Explain quick sort.
Take the first (or middle) element of the array as the pivot. Place all elements less than the pivot to its left and all elements greater than the pivot to its right, then quicksort the two subarrays.
In practice: the pivot is set to the first element, the left mark to the second element, and the right mark to the last element. Advance the left mark until it finds an element > pivot, then decrement the right mark until it finds an element < pivot. Swap these two elements and continue from the left mark. This repeats until the marks cross, and then we swap the pivot with the element at the right mark.
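A Java sketch of the partitioning just described (first element as pivot, left/right marks); names are illustrative:
// Quicksort: partition around a[lo], then sort the two subarrays.
public static void quickSort(int[] a, int lo, int hi) {
    if (hi <= lo) return;
    int pivot = a[lo];
    int i = lo, j = hi + 1;                          // left and right marks
    while (true) {
        while (a[++i] < pivot) if (i == hi) break;   // advance left mark
        while (a[--j] > pivot) if (j == lo) break;   // retreat right mark
        if (i >= j) break;                           // marks have crossed
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;     // swap the out-of-place pair
    }
    int tmp = a[lo]; a[lo] = a[j]; a[j] = tmp;       // put pivot into final position j
    quickSort(a, lo, j - 1);
    quickSort(a, j + 1, hi);
}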
Explain Key-Indexed Counting Sort.
We have an array of N integer keys. Create a count array and record the frequency of each key. Convert these counts into cumulative counts, so each entry gives the starting position of its key in the sorted order. Walk the original array, placing each element into an auxiliary array at the position given by its key's cumulative count, incrementing that count as we go. Finally, copy the auxiliary array back over the original.
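A Java sketch for integer keys in the range 0..R-1 (names illustrative):
// Key-indexed counting: count frequencies, make them cumulative,
// distribute into an auxiliary array, then copy back.
public static void keyIndexedSort(int[] a, int R) {
    int n = a.length;
    int[] count = new int[R + 1];
    int[] aux = new int[n];
    for (int i = 0; i < n; i++) count[a[i] + 1]++;         // frequency of each key, offset by one
    for (int r = 0; r < R; r++) count[r + 1] += count[r];  // cumulative: start index of each key
    for (int i = 0; i < n; i++) aux[count[a[i]]++] = a[i]; // distribute (stable)
    for (int i = 0; i < n; i++) a[i] = aux[i];             // overwrite the original
}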
Explain radix sort.
For sorting integers we use 10 buckets. Place each value into a bucket based on its least significant digit (LSD), then read the buckets out in order into a resulting array, preserving relative order (each pass must be stable). Repeat on the resulting array with the next digit, moving toward the most significant, until all digits have been processed.
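A Java sketch for non-negative integers with a known number of decimal digits; each pass is a stable key-indexed counting sort on one digit (names illustrative):
// LSD radix sort: one stable counting pass per digit, least significant first.
public static void lsdRadixSort(int[] a, int digits) {
    int n = a.length;
    int[] aux = new int[n];
    int divisor = 1;
    for (int d = 0; d < digits; d++) {
        int[] count = new int[10 + 1];  // 10 buckets, one per decimal digit
        for (int i = 0; i < n; i++) count[(a[i] / divisor) % 10 + 1]++;
        for (int r = 0; r < 10; r++) count[r + 1] += count[r];
        for (int i = 0; i < n; i++) aux[count[(a[i] / divisor) % 10]++] = a[i];  // stable
        System.arraycopy(aux, 0, a, 0, n);  // save relative order for the next pass
        divisor *= 10;
    }
}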
What is linear search?
Brute-force approach: look at every element in turn until you find the target.
What is the advantage of linear search?
It works on unsorted input.
What is the worst case time complexity of linear search?
O(N)
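A minimal Java sketch (name illustrative):
// Linear search: examine each element in turn; works on unsorted arrays.
// Returns the index of the target, or -1 if it is absent.  O(N) worst case.
public static int linearSearch(int[] a, int target) {
    for (int i = 0; i < a.length; i++)
        if (a[i] == target) return i;
    return -1;
}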
Explain binary search.
Select the middle element of a sorted array. If the target equals it, return; if the target is smaller, search the lower half of the array; if the target is larger, search the upper half. Repeat on the chosen half.
What is the worst case time complexity for binary search?
O(log N)
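A minimal iterative Java sketch over a sorted int array (name illustrative):
// Binary search: compare the target with the middle element and
// discard the half that cannot contain it.  O(log N) worst case.
public static int binarySearch(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;            // avoids overflow of lo + hi
        if      (target < a[mid]) hi = mid - 1;  // search lower half
        else if (target > a[mid]) lo = mid + 1;  // search upper half
        else return mid;                         // found
    }
    return -1;                                   // not present
}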
Explain Knuth-Morris-Pratt substring search algorithm.
Take the pattern string and build its failure (prefix) table. Start with the first character: it has no proper prefix that is also a suffix, so write 0. Then add one character at a time, and for each prefix of the pattern record the length of its longest proper prefix that is also a suffix, until the whole pattern has been evaluated. During the search, this table tells us how far the pattern can shift after a mismatch without re-examining matched text characters.
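A Java sketch of building that table (often called the failure or lps array); Sedgewick's presentation of KMP builds a DFA instead, but this variant records exactly the "longest repeated suffix" lengths described above. Names are illustrative:
// lps[i] = length of the longest proper prefix of pattern[0..i]
// that is also a suffix of pattern[0..i].
public static int[] prefixTable(String pattern) {
    int m = pattern.length();
    int[] lps = new int[m];      // lps[0] is 0: a single character has no proper prefix
    int len = 0;                 // length of the current matched prefix
    for (int i = 1; i < m; i++) {
        while (len > 0 && pattern.charAt(i) != pattern.charAt(len))
            len = lps[len - 1];  // fall back to the next shorter border
        if (pattern.charAt(i) == pattern.charAt(len)) len++;
        lps[i] = len;
    }
    return lps;
}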