Sorts Flashcards
Is quick sort in place? Stable?
Quick sort is in place. It is not stable by default, though it can be made stable with additional work (typically at some cost in speed or memory).
What is quick sort’s time complexity (consider all cases)?
Θ(n log n) in the best and average case. Θ(n^2) in the worst case.
What leads to a worst case performance of quick sort?
When the chosen pivot is consistently the smallest or largest element in the range, producing one empty partition and one of size n − 1 at every step.
How long does each of quicksort’s partitioning steps take?
Each partition step takes Θ(n) operations.
Name the two common quick sort partitioning schemes.
Hoare and Lomuto.
Describe the Hoare partitioning scheme.
Two scanning indices move toward each other from the ends of the range until they cross.
A swap occurs whenever both indices point to elements on the wrong side of the pivot (an inversion).
Describe the Lomuto partitioning scheme.
Two indices, but only one is scanning - out of place items are swapped to the beginning.
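A minimal Java sketch of the Lomuto scheme, assuming a last-element pivot (the pivot choice and class name are illustrative):

```java
// Lomuto partitioning: a single scanning index j moves left to right;
// elements smaller than the pivot are swapped to the front of the range.
// Returns the pivot's final position.
class Lomuto {
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];   // pivot choice is illustrative
        int i = lo;          // boundary of the "less than pivot" region
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;   // place the pivot
        return i;
    }
}
```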
Which partitioning scheme is more efficient?
Hoare.
Quicksort performance is primarily driven by what?
Pivot selection.
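One common pivot-selection heuristic is median-of-three; a sketch (names are illustrative):

```java
// Median-of-three pivot selection: take the median of the first, middle,
// and last elements, and move it to a[lo] so a first-element-pivot
// partition can use it unchanged.
class MedianOfThree {
    static void choosePivot(int[] a, int lo, int hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < a[lo]) swap(a, mid, lo);
        if (a[hi] < a[lo])  swap(a, hi, lo);
        if (a[hi] < a[mid]) swap(a, hi, mid);   // a[mid] now holds the median
        swap(a, lo, mid);                       // move the median to a[lo]
    }
    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```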
Besides pivot selection, how else can quicksort be made more efficient?
Diverting to insertion sort for smaller partitions is generally more efficient.
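A sketch of the cutoff idea (the CUTOFF value and the Lomuto-style partition are illustrative choices):

```java
// Quicksort with a small-partition cutoff: ranges at or below the cutoff
// are finished by insertion sort, which has low overhead on short,
// nearly sorted ranges.
class CutoffQuicksort {
    static final int CUTOFF = 10;   // illustrative value

    static void sort(int[] a, int lo, int hi) {
        if (hi - lo + 1 <= CUTOFF) { insertionSort(a, lo, hi); return; }
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; i++)
            for (int j = i; j > lo && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
            }
    }

    // Lomuto-style partition with a last-element pivot (any scheme works).
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }
}
```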
What are the divide and conquer sorting algorithms (that we are studying)?
Quick sort and merge sort.
Write pseudo code for quick sort (assume partitioning is handled by a separate function).
void sort(int[] a, int lo, int hi) {
    if (hi <= lo) return;
    int j = partition(a, lo, hi);
    sort(a, lo, j-1);
    sort(a, j+1, hi);
}
Write pseudo code for quick sort’s partitioning.
int partition(int[] a, int lo, int hi) {
    int i = lo, j = hi + 1;
    int v = a[lo];
    while (true) {
        while (a[++i] < v) if (i == hi) break;
        while (v < a[--j]) if (j == lo) break;
        if (i >= j) break;
        swap(a, i, j);
    }
    swap(a, lo, j);
    return j;
}
What is merge sort’s time complexity? (consider all cases)
Θ(n log n) in all cases.
Is merge sort in place? Stable?
Stable, but not in place.
What is merge sort’s space complexity?
Θ(n)
Can merge sort readily be an in-place algorithm using arrays?
No.
Can merge sort be implemented with lists?
Yes.
How does merge sort find the halfway point of a list in n time, rather than 2n time?
Using the “alternate and skip” method.
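A Java sketch of the idea (also known as slow/fast pointers), using an illustrative minimal Node class:

```java
// Finding a linked list's midpoint in one pass: fast advances two nodes
// per step while slow advances one, so slow sits at the middle when fast
// reaches the end -- n steps total instead of a length pass plus a half pass.
class Midpoint {
    static class Node { int val; Node next; Node(int v) { val = v; } }

    static Node middle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;        // one step
            fast = fast.next.next;   // two steps
        }
        return slow;   // first node of the second half
    }
}
```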
Can merge sort be made to work on a list of lists? (nested lists)
Yes, by merging depth first.
What’s the cost of finding the halfway point of an array in merge sort?
Θ(1)
What are the benefits of identifying already sorted runs in merge sort?
Fewer comparisons and swaps overall, and fewer merge operations.
Why is it possible to develop logarithmic time insert and remove the maximum operations within a heap?
Because the height of a complete binary tree of size n is floor(lg n).
What are the two approaches to heapifying?
Top-down (swim-based) or bottom-up (sink-based).
Bottom-up heapifying uses how many compares and how many exchanges to construct a heap from N items?
fewer than 2N compares, and fewer than N exchanges.
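A Java sketch of bottom-up construction, using 1-based indexing as in the heapsort pseudocode below (a[0] is unused so the children of node k sit at 2k and 2k+1):

```java
// Bottom-up (sink-based) max-heap construction: sink every internal node,
// from N/2 down to the root. Leaves (nodes above N/2) are already heaps.
class Heapify {
    static void buildMaxHeap(int[] a, int N) {
        for (int k = N / 2; k >= 1; k--)
            sink(a, k, N);
    }

    static void sink(int[] a, int k, int N) {
        while (2 * k <= N) {
            int j = 2 * k;
            if (j < N && a[j] < a[j + 1]) j++;   // pick the larger child
            if (a[k] >= a[j]) break;             // heap order restored
            int t = a[k]; a[k] = a[j]; a[j] = t;
            k = j;
        }
    }
}
```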
Write the pseudo code for merge sort (assume merging is handled by a separate function).
list mergesort(list a) {
    if (a's size <= 1) return a;
    l1 = a[0 to half];
    l2 = a[half to end];
    return merge(mergesort(l1), mergesort(l2));
}
Write a function that merges two sorted lists together.
list merge(list l1, list l2) {
    L = new List();
    while (neither l1/l2 is empty) {
        if (l1.get(0) <= l2.get(0)) L.add(l1.remove(0));
        else L.add(l2.remove(0));
    }
    if (l1.isEmpty()) L.addAll(l2);
    if (l2.isEmpty()) L.addAll(l1);
    return L;
}
Write the pseudo code for heapsort.
void heapsort(int[] a) {
    int N = a.length;
    for (int i = N/2; i >= 1; i--)
        sink(a, i, N);
    while (N > 1) {
        exch(a, 1, N--);
        sink(a, 1, N);
    }
}