Algorithms and Logic Flashcards
What is an algorithm?
A set of step-by-step instructions to solve a given problem or reach a specific goal.
Review these two functions, explain what they do and why they are useful.
private static boolean less(Comparable v, Comparable w)
{ return v.compareTo(w) < 0; }
private static void swap(Comparable[] a, int i, int j)
{
Comparable x = a[i];
a[i] = a[j];
a[j] = x;
}
The first is a comparison helper: it returns true when v is less than w, because compareTo returns a negative value when the receiver is smaller than its argument.
The second is a swap: it exchanges the two items of the array at the specified indexes, using a temporary variable to hold one of them during the exchange.
What is selection sort? What does it do?
It sorts an array of N elements by finding the minimum element in the unsorted part of the array and moving it to the front on each traversal.
After N - 1 such passes the whole array is sorted.
Review this implementation of selection sort, explain what it is doing.
public static void sort(Comparable[] a) {
int N = a.length;
for (int i = 0; i < N; i++) {
int min = i;
for (int j = i+1; j < N; j++)
if (less(a[j], a[min])) min = j;
swap(a, i, min);
}
}
The method takes the array a as a parameter and first declares N to be its length. The outer loop iterates i from 0 to N - 1, initially assuming the minimum is at index i. The inner loop then scans the rest of the array, comparing each element a[j] against the current minimum a[min]; if a[j] is smaller, min is updated to j.
Finally, the elements at indexes i and min are exchanged using the swap method, so the smallest remaining element ends up at position i.
What is a Loop invariant?
A loop invariant is a condition (true or false) that holds before and after every iteration of the loop.
The trivial condition true works as an invariant for any loop, but a useful loop invariant also holds before the loop starts and tells us something meaningful about the algorithm.
Thus a loop invariant helps us both to construct a new loop and to understand a given one.
Together with the loop's exit condition, an invariant can be used to argue the correctness of a loop.
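As a small illustration, here is a hedged sketch of a summing loop annotated with its invariant (the class name InvariantDemo and the example data are illustrative, not from the original):

```java
// The invariant "sum equals a[0] + ... + a[i-1]" holds before the loop,
// after every iteration, and at exit.
public class InvariantDemo {
    public static int sum(int[] a) {
        int sum = 0;
        int i = 0;
        // Invariant holds here: sum == 0 is the total of the empty prefix a[0..-1].
        while (i < a.length) {
            sum += a[i];
            i++;
            // Invariant still holds: sum is the total of a[0..i-1].
        }
        // At exit i == a.length, so the invariant says sum is the total
        // of the whole array, which is exactly what we wanted to compute.
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}
```

Note how combining the invariant with the exit condition (i == a.length) yields the correctness argument for the loop.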
How could you update selection sort to be recursive?
public static void sort(Comparable[] a) {
int N = a.length;
for (int i = 0; i < N; i++) {
int min = i;
for (int j = i+1; j < N; j++)
if (less(a[j], a[min])) min = j;
swap(a, i, min);
}
}
We keep the body of one pass (find the minimum of the unsorted part and swap it into place) and replace the outer loop with a recursive call on the rest of the array; each call sorts one more element into position.
public static void sort(Comparable[] a) {
sort(a, 0, a.length);
}
public static void sort(Comparable[] a, int i, int N) {
if (i < N) {
int min = i;
for (int j = i+1; j < N; j++)
if (less(a[j], a[min])) min = j;
swap(a, i, min);
sort(a, i+1, N);
}
}
Can all loops be replaced by recursion? Also give a definition for recursion in computer science
Yes, any loop can be replaced by recursion.
Recursion is a technique where a function calls itself. Without a correct base case it never terminates, and even a correct recursion consumes stack space on every call, so very deep recursion can overflow the stack; for this reason some languages set a recursion depth limit, and loops are often preferred.
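To make the loop-to-recursion correspondence concrete, here is a hedged sketch (the class name LoopToRecursion and the summing example are illustrative): the loop variable becomes a parameter and the loop's exit condition becomes the base case.

```java
public class LoopToRecursion {
    // Iterative version: a plain counting loop summing 1..n.
    public static int sumLoop(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    // Same computation with the loop replaced by recursion:
    // the loop condition becomes the base case, and the loop
    // variable is carried in the shrinking argument n.
    public static int sumRecursive(int n) {
        if (n == 0) return 0;          // base case: empty range
        return n + sumRecursive(n - 1); // one "iteration", then recurse
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(100));      // prints 5050
        System.out.println(sumRecursive(100)); // prints 5050
    }
}
```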
What is Insertion sort?
Insertion sort sorts an array of N elements from left to right: each element at index i is repeatedly swapped with its left neighbour while that neighbour is larger, until it sits in its correct position among the already-sorted elements to its left.
Review this implementation of Insertion sort, what is it doing, how does it work?
public static void sort(Comparable[] a) {
int N = a.length;
for (int i = 0; i < N; i++) {
int j = i;
while (j > 0 && less(a[j], a[j-1])) {
swap(a, j, j-1);
j--;
}
}
}
First, N is set to the length of the array. The outer loop iterates i through the array, and a cursor j is set to i. The inner while loop then checks whether element a[j] is less than its predecessor a[j-1]; if so, the two are swapped and j moves one step left, repeating until the element is in place or j reaches 0.
List some issues with Insertion sort
- Worst case: the array is in descending order, so every element must be swapped past all the elements before it. The number of swaps grows quadratically: N(N-1)/2 in total.
- Best case: the array is already sorted, so no swaps are needed, only N - 1 comparisons.
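The two cases above can be demonstrated by counting swaps directly. This is a hedged sketch on plain ints (the class name InsertionSwapCount and the sample arrays are illustrative):

```java
public class InsertionSwapCount {
    // Insertion sort on ints, returning the number of swaps performed.
    public static int sortCountingSwaps(int[] a) {
        int swaps = 0;
        for (int i = 0; i < a.length; i++) {
            int j = i;
            while (j > 0 && a[j] < a[j - 1]) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t; // swap
                j--;
                swaps++;
            }
        }
        return swaps;
    }

    public static void main(String[] args) {
        int[] best  = {1, 2, 3, 4, 5}; // already sorted: no swaps
        int[] worst = {5, 4, 3, 2, 1}; // descending: N(N-1)/2 = 10 swaps
        System.out.println(sortCountingSwaps(best));  // prints 0
        System.out.println(sortCountingSwaps(worst)); // prints 10
    }
}
```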
Define Mergesort
Mergesort takes an array of N elements and divides it in half, then recursively sorts each half, before merging the halves together.
Review this MergeSort method
private static void merge(Comparable[] a, Comparable[] aux, int lo, int mid, int hi) {
for (int k = lo; k <= hi; k++) {
aux[k] = a[k];
}
int i = lo;
int j = mid+1;
for (int k = lo; k <= hi; k++) {
if (i > mid) a[k] = aux[j++];
else if (j > hi) a[k] = aux[i++];
else if (less(aux[j], aux[i])) a[k] = aux[j++];
else a[k] = aux[i++];
}
}
private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi) {
if (hi <= lo) return;
int mid = lo + (hi - lo) / 2;
sort(a, aux, lo, mid);
sort(a, aux, mid+1, hi);
merge(a, aux, lo, mid, hi);
}
The merge method first copies the range lo..hi of the array into the auxiliary array.
Then i and j are set to lo and mid + 1, and a loop runs k from lo to hi, applying these checks:
- If i has passed mid, the left half is exhausted, so copy the remaining right-half elements.
- If j has passed hi, the right half is exhausted, so copy the remaining left-half elements.
- If the right element aux[j] is smaller, copy it to index k of the array and move j forward.
- Otherwise copy aux[i] to index k of the array and move i forward.
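To show how merge and the recursive sort fit together, here is a hedged, self-contained sketch with a public entry point that allocates the auxiliary array once (the class name MergesortDemo, the wrapper method, and the Integer[] test data are illustrative additions, not from the original):

```java
import java.util.Arrays;

public class MergesortDemo {
    private static boolean less(Comparable v, Comparable w) { return v.compareTo(w) < 0; }

    private static void merge(Comparable[] a, Comparable[] aux, int lo, int mid, int hi) {
        for (int k = lo; k <= hi; k++) aux[k] = a[k];       // copy the range
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if      (i > mid)              a[k] = aux[j++]; // left half exhausted
            else if (j > hi)               a[k] = aux[i++]; // right half exhausted
            else if (less(aux[j], aux[i])) a[k] = aux[j++]; // right element smaller
            else                           a[k] = aux[i++]; // left element smaller or equal
        }
    }

    private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi) {
        if (hi <= lo) return;
        int mid = lo + (hi - lo) / 2;
        sort(a, aux, lo, mid);      // sort left half
        sort(a, aux, mid + 1, hi);  // sort right half
        merge(a, aux, lo, mid, hi); // merge the two sorted halves
    }

    // Public entry point: allocates the auxiliary array once, not per merge.
    public static void sort(Comparable[] a) {
        sort(a, new Comparable[a.length], 0, a.length - 1);
    }

    public static void main(String[] args) {
        Integer[] a = {5, 2, 8, 1, 9, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 5, 8, 9]
    }
}
```

Allocating the auxiliary array once in the wrapper, rather than inside merge, avoids repeated allocations at every recursion level.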
What are pre and post conditions?
A precondition is a condition (true or false) that should hold upon entry into a method. It expresses a method’s expectation on its arguments and/or the state of objects that may be used by the method.
A postcondition is a condition (true or false) that should hold upon exit from a method. It expresses the conditions that a method should ensure for the return value and/or the state of objects that may be used by the method.
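As a hedged illustration of both definitions in code (the class name Conditions and the max example are illustrative, not from the original):

```java
public class Conditions {
    // Precondition: a is non-null and non-empty
    //   (the method's expectation on its argument).
    // Postcondition: the returned value is an element of a and is >= every
    //   element of a (what the method ensures for its return value).
    public static int max(int[] a) {
        if (a == null || a.length == 0) // enforce the precondition explicitly
            throw new IllegalArgumentException("precondition violated: empty input");
        int best = a[0];
        for (int i = 1; i < a.length; i++)
            if (a[i] > best) best = a[i];
        return best; // postcondition holds: best is the maximum of a
    }

    public static void main(String[] args) {
        System.out.println(max(new int[]{3, 9, 4})); // prints 9
    }
}
```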
What is the time complexity for mergesort?
- All comparisons are made in the merge method; the loop in merge runs from lo to hi, so at every recursion depth merge is called across all slices of the array, giving about N comparisons per level.
On every call the array is halved, so the depth is the binary logarithm of N:
2^depth = N, so depth = lg(N)
Hence mergesort uses about N lg(N) comparisons. For example, N = 8 gives a depth of 3 and roughly 8 × 3 = 24 comparisons.
What is Quicksort?
Quicksort is an algorithm that partitions an array so that for some j:
- entry a[j] is in the correct place
- no larger entry is to the left of j
- no smaller entry is to the right of j
Then sort each slice recursively
Review this partition method, what is it doing and how does it work?
private static int partition(Comparable[] a, int lo, int hi) {
int i = lo;
int j = hi+1;
Comparable pivot = a[lo];
while (true) {
while (less(a[++i], pivot)) if (i == hi) break;
while (less(pivot, a[--j])) if (j == lo) break;
if (i >= j) break;
swap(a, i, j);
}
swap(a, lo, j);
return j;
}
We first set i and j to lo and hi + 1, and take the pivot to be the element at the low index.
Then in the main loop the goal is to find an element on the left side that is too large and an element on the right side that is too small.
To achieve this, the left pointer i moves right (pre-incremented) while the element at i is smaller than the pivot, stopping if i reaches hi.
The right pointer j starts at hi + 1 and moves left (pre-decremented) while the element at j is greater than the pivot, stopping if j reaches lo.
If the pointers have not crossed (i < j), we swap the elements at i and j; once i >= j, we exit the loop.
We then swap the pivot a[lo] with a[j], so that a[j] is in its final sorted position.
Finally, j is returned as the partition point for the next recursive sorts.
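To see the partition contract in action, here is a hedged, runnable sketch (the class name PartitionDemo and the sample data are illustrative; partition is left package-visible here so it can be exercised directly):

```java
import java.util.Arrays;

public class PartitionDemo {
    private static boolean less(Comparable v, Comparable w) { return v.compareTo(w) < 0; }

    private static void swap(Comparable[] a, int i, int j) {
        Comparable x = a[i]; a[i] = a[j]; a[j] = x;
    }

    static int partition(Comparable[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        Comparable pivot = a[lo];
        while (true) {
            while (less(a[++i], pivot)) if (i == hi) break; // scan right
            while (less(pivot, a[--j])) if (j == lo) break; // scan left
            if (i >= j) break;                              // pointers crossed
            swap(a, i, j);
        }
        swap(a, lo, j); // put the pivot into its final position
        return j;
    }

    public static void main(String[] args) {
        Integer[] a = {5, 2, 8, 1, 9, 3};
        int j = partition(a, 0, a.length - 1);
        // The pivot 5 lands at index j; everything left of it is smaller,
        // everything right of it is larger (neither side is sorted yet).
        System.out.println(j + " " + Arrays.toString(a)); // prints 3 [1, 2, 3, 5, 9, 8]
    }
}
```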
What are the two approaches to measuring an algorithm?
- Empirical analysis: measure how much time / space an algorithm takes for a given input.
- Mathematical analysis: build a mathematical cost model and calculate time / space.
What are some abstractions that we make when analysing algorithms?
- We only consider the size of the input data, not its content.
- To make up for this information loss, we consider best, average and worst cases.
How would you conduct empirical analysis?
Run the algorithm with varying sizes of input data and record the results.
So if we have inputs of size 64, 128, 256, 512 (doubling each time), we would run the algorithm on each size and measure the running time.
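A minimal doubling harness might look like this (a hedged sketch; the class name DoublingTest, the use of Arrays.sort as the algorithm under test, and the size range are all illustrative choices):

```java
import java.util.Arrays;
import java.util.Random;

public class DoublingTest {
    // Time one run of Arrays.sort on n random ints, in milliseconds.
    static double timeSort(int n) {
        Random rnd = new Random(42); // fixed seed for repeatable input
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = rnd.nextInt();
        long start = System.nanoTime();
        Arrays.sort(a);
        return (System.nanoTime() - start) / 1e6;
    }

    public static void main(String[] args) {
        // Double the input size each round and record the running time;
        // the ratio between successive rows hints at the order of growth.
        for (int n = 1000; n <= 64000; n *= 2)
            System.out.printf("%6d  %8.3f ms%n", n, timeSort(n));
    }
}
```

In practice one would repeat each measurement several times and warm up the JVM first, since single runs are noisy.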
Define Quicksort, how does it work?
Quicksort works by partitioning an array so that, around some pivot position, all items to the left are smaller and all items to the right are larger. Each slice is then sorted recursively.
Explain this partition method
private static int partition(Comparable[] a, int lo, int hi) {
int i = lo, j = hi+1;
Comparable pivot = a[lo];
while (true) {
while (less(a[++i], pivot))
if (i == hi) break;
while (less(pivot, a[--j]))
if (j == lo) break;
if (i >= j) break;
swap(a, i, j);
}
swap(a, lo, j);
return j;
}
We begin by initialising i and j to the low and high + 1 positions, and take the pivot to be the element at the low index.
Then, scanning from the left, i moves right while its element is smaller than the pivot; likewise, scanning from the right, j moves left while its element is larger than the pivot.
When both scans stop, the elements at i and j are on the wrong sides, so they are swapped.
The last check ensures that once i meets or passes j, the loop exits; the pivot at lo is then swapped with a[j], which becomes the pivot's final position.
Review the full Quicksort method
private static void sort(Comparable[] a, int lo, int hi) {
if (hi <= lo) return;
int j = partition(a, lo, hi);
sort(a, lo, j-1);
sort(a, j+1, hi);
}
We first check the base case: if the high index is less than or equal to the low, the slice has at most one element and is already sorted, so we return.
Then we partition the array; the pivot's final index is assigned to j.
Then we recursively sort from lo to j - 1 and from j + 1 to hi.
This repeats until all elements are sorted.
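Putting partition and sort together, here is a hedged, runnable sketch with a public entry point (the class name QuicksortDemo and the initial shuffle are illustrative additions; the shuffle is a standard safeguard, not part of the original snippet):

```java
import java.util.Arrays;
import java.util.Collections;

public class QuicksortDemo {
    private static boolean less(Comparable v, Comparable w) { return v.compareTo(w) < 0; }

    private static void swap(Comparable[] a, int i, int j) {
        Comparable x = a[i]; a[i] = a[j]; a[j] = x;
    }

    private static int partition(Comparable[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        Comparable pivot = a[lo];
        while (true) {
            while (less(a[++i], pivot)) if (i == hi) break;
            while (less(pivot, a[--j])) if (j == lo) break;
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, lo, j);
        return j;
    }

    private static void sort(Comparable[] a, int lo, int hi) {
        if (hi <= lo) return;            // base case: slice of 0 or 1 elements
        int j = partition(a, lo, hi);    // a[j] is now in its final position
        sort(a, lo, j - 1);              // sort the left slice
        sort(a, j + 1, hi);              // sort the right slice
    }

    public static void sort(Comparable[] a) {
        // Shuffling first makes the O(n^2) worst case (already-ordered input
        // with this first-element pivot) very unlikely.
        Collections.shuffle(Arrays.asList(a));
        sort(a, 0, a.length - 1);
    }

    public static void main(String[] args) {
        Integer[] a = {5, 2, 8, 1, 9, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 3, 5, 8, 9]
    }
}
```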
How many comparisons are in Quicksort?
Each partition uses about hi - lo comparisons and at most that many swaps, so on average Quicksort is O(n log n). The worst case is O(n^2), which occurs with this first-element pivot when the input is already ordered; shuffling the input first makes this unlikely.
What are the main orders of growth?
- Constant - cost does not depend on n
- Logarithmic - cost grows with lg n
- Linear - cost grows in proportion to n
- Linearithmic - cost grows with n lg n
- Quadratic - cost grows with the square of n; typical of nested loops over the input
- Exponential - cost grows with 2^n
What would O(2N^2 + 8N) be?
O(N^2), as only the fastest-growing term matters; constant factors and lower-order terms are dropped.
What would be the runtime for this method?
public static void sort(Comparable[] a) {
int N = a.length;
for (int i = 0; i < N; i++) {
int min = i;
for (int j = i+1; j < N; j++)
if (less(a[j], a[min]))
min = j;
swap(a, i, min);
}
}
A quick way to estimate is to count the nested loops, which in this case are two, therefore it is O(N^2).
What is a divide and conquer algorithm?
A divide and conquer algorithm breaks down a problem into two or more sub-problems of the same or related type, until they are simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
For typical divide and conquer algorithms what is the time complexity?
It is usually O(n log n) if each problem is divided in half, making the recursion lg n deep, with O(n) work combined across each level.
How would you solve the LCS problem?
The longest common subsequence (LCS) problem: a subsequence of a string is obtained by deleting 0 or more of its characters.
If the string is “ABC”
then its non-empty subsequences are “A”, “B”, “C”, “AB”, “AC”, “BC”, “ABC”.
The LCS of two strings is the longest subsequence common to both; for example, the LCS of “ABC” and “ACD” is “AC”, with length 2.
How would you implement an LCS algorithm?
- Compare the last character of both strings (of lengths m and n).
- If the characters are equal, recursively find the LCS of the remaining prefixes of lengths m - 1 and n - 1 and add 1 to the result.
- If unequal, make two recursive LCS calls, one with m - 1 and one with n - 1, then take the maximum of the two results.
- Base case: if either string is empty, return 0.
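The steps above can be sketched as a plain recursive implementation (hedged: the class name LCS and test strings are illustrative; this naive version recomputes subproblems and is exponential, so real implementations memoise or use dynamic programming):

```java
public class LCS {
    // Recursive LCS length of s[0..m) and t[0..n), following the steps above.
    static int lcs(String s, String t, int m, int n) {
        if (m == 0 || n == 0) return 0;             // base case: empty string
        if (s.charAt(m - 1) == t.charAt(n - 1))     // last characters match
            return 1 + lcs(s, t, m - 1, n - 1);
        return Math.max(lcs(s, t, m - 1, n),        // drop last char of s
                        lcs(s, t, m, n - 1));       // drop last char of t
    }

    public static int lcs(String s, String t) {
        return lcs(s, t, s.length(), t.length());
    }

    public static void main(String[] args) {
        System.out.println(lcs("ABCBDAB", "BDCABA")); // prints 4
    }
}
```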