Algorithms Flashcards

1
Q

What are divide and conquer algorithms?

A

Divide and Conquer algorithms are a paradigm for solving problems in three basic steps. First, we divide the problem into smaller subproblems and solve each of them independently, usually recursively. Once we’ve solved all of the subproblems, we combine the resulting smaller solutions into a single solution to the original problem.
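
As a minimal sketch of the three steps in Python (the function name dc_max is just for this illustration; it assumes a non-empty list):

def dc_max(items):
    # Base case: a one-element list is trivially solved.
    if len(items) == 1:
        return items[0]
    mid = len(items) // 2
    left = dc_max(items[:mid])    # divide + conquer the left half
    right = dc_max(items[mid:])   # divide + conquer the right half
    return max(left, right)       # combine the two sub-solutions

print(dc_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9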

2
Q

What are some common examples of divide and conquer algorithms?

A

Some common examples of problems that lend themselves well to this approach are binary search, sorting algorithms (e.g., Merge Sort, Quicksort), and fast algorithms for computationally expensive mathematical operations (binary exponentiation, the FFT, Strassen’s matrix multiplication), among others.
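
A hedged sketch of the first example, binary search: each step halves the search space, which is what gives it O(log n) time.

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid           # found: return its index
        elif sorted_items[mid] < target:
            lo = mid + 1         # discard the left half
        else:
            hi = mid - 1         # discard the right half
    return -1                    # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3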

3
Q

How does insertion sort work?

A

Insertion sort takes the elements of the array sequentially, maintaining a sorted subarray to the left of the current position. For each element, it finds the element’s correct position within the sorted subarray and shifts all larger elements one position to the right, leaving a gap into which the element is inserted.
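
A minimal in-place sketch of that procedure in Python:

def insertion_sort(a):
    # Invariant: a[:i] is sorted before each outer iteration.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one position right to open a gap.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key  # drop the element into its correct slot
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]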

4
Q

How does heapsort work?

A

Heapsort starts by building a max heap. A binary max heap is a nearly complete binary tree in which each parent node is greater than or equal to its children. The heap is built in place, in the same memory that holds the original array elements. Once the heap is formed, we repeatedly swap the root (the maximum) with the last element of the heap, shrink the heap by one, and restore the heap property by sifting the new root down. Each pass places the current maximum at the end of the shrinking unsorted region, so when the heap is emptied out the smallest element is in first place and the following elements are sequentially larger.
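
A compact in-place sketch of those two phases (heap construction, then repeated extraction):

def heapsort(a):
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree at `root`,
        # considering only indices below `end`.
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            # Pick the larger of the two children.
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # Phase 1: build a max heap in place, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Phase 2: repeatedly move the max to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]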

5
Q

How does quicksort work?

A

Quicksort is performed by taking the first (leftmost) element of the array as a pivot. We then compare the pivot to each following element; whenever we find one that is smaller, we move it to the left. The move is done cheaply with two swaps: the smaller element is swapped with the first element after the pivot, and the pivot is then swapped with that same position, advancing the pivot one step. After going through the whole array, the pivot sits in its final position; we then call quicksort on the subarray to its left and on the subarray to its right. The recursion stops when we reach subarrays of zero or one elements, which are trivially sorted.
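
One common way to realize this kind of partition is a Lomuto-style sketch with the first element as the pivot (details vary between implementations; this is an illustration, not the only scheme):

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:                  # 0-1 elements: already sorted
        return a
    pivot = a[lo]
    border = lo                   # last index of the "smaller" region
    for i in range(lo + 1, hi + 1):
        if a[i] < pivot:
            border += 1
            a[border], a[i] = a[i], a[border]   # move it left
    a[lo], a[border] = a[border], a[lo]         # pivot to final spot
    quicksort(a, lo, border - 1)
    quicksort(a, border + 1, hi)
    return a

print(quicksort([3, 7, 1, 8, 2, 5]))  # [1, 2, 3, 5, 7, 8]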

6
Q

How does merge sort work?

A

Merge sort recursively halves the given array until the subarrays reach trivial length (zero or one elements), at which point merging begins. Merging repeatedly takes the smaller of the two front elements of a pair of adjacent sorted subarrays until all elements are consumed, producing a single sorted subarray. The process is repeated on pairs of adjacent subarrays until we arrive back at the starting array, now sorted.
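
A minimal top-down sketch in Python (this version returns a new list rather than sorting in place):

def merge_sort(a):
    if len(a) <= 1:              # trivial length: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # recursively halve
    right = merge_sort(a[mid:])
    # Merge: repeatedly take the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])      # one side may have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]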

7
Q

What’s the best, average, and worst runtime for insertion sort?

A

Insertion sort has an average and worst-case runtime of O(n^2), and a best-case runtime of O(n). It doesn’t need any extra buffer, so its space complexity is O(1). It is efficient at sorting very short arrays due to the very low constant factor in its running time, and it is extremely good at sorting arrays that are already “almost” sorted. A common use is re-sorting arrays that have had some small updates to their elements.
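
To see where the quadratic bound comes from: in the worst case (a reverse-sorted array), inserting the i-th element shifts i elements, so the total work is 1 + 2 + … + (n − 1) = n(n − 1)/2, which is O(n^2). In the best case (an already sorted array) each insertion shifts nothing, and the single pass over the elements costs O(n).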

8
Q

What’s the best, average, and worst runtime for quicksort?

A

Quicksort runs in O(n log n) time in the best and average cases, but it is sensitive to the data provided: without random pivot selection, it takes O(n^2) time to sort an already sorted array. By swapping a randomly chosen element into the pivot position before partitioning, the algorithm becomes far less sensitive to inputs that would otherwise cause worst-case behavior (e.g., already sorted arrays). Even though its complexity is no lower than that of Heapsort or Merge sort, it has a very low constant factor in its execution speed, which generally gives it a speed advantage when working with lots of random data.
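
A minimal sketch of that randomization, applied to the partition sketch from the previous card (the only new step is the random swap before partitioning):

import random

def randomized_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # Swap a random element into the pivot slot so adversarial
    # inputs (e.g., already sorted arrays) no longer force O(n^2).
    r = random.randint(lo, hi)
    a[lo], a[r] = a[r], a[lo]
    # Then partition and recurse exactly as in the plain version.
    pivot = a[lo]
    border = lo
    for i in range(lo + 1, hi + 1):
        if a[i] < pivot:
            border += 1
            a[border], a[i] = a[i], a[border]
    a[lo], a[border] = a[border], a[lo]
    randomized_quicksort(a, lo, border - 1)
    randomized_quicksort(a, border + 1, hi)
    return a

print(randomized_quicksort(list(range(10))))  # sorted input, no worst case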

9
Q

What’s the best, average, and worst runtime for heapsort?

A

Heapsort runs in O(n log n) time in the best, average, and worst cases, and it doesn’t require any extra buffer space (O(1) space). This reliable time complexity makes it useful in software that requires predictable speed over optimal average runtime, and/or has limited memory to operate with the data. Systems with real-time requirements and memory constraints benefit the most from this algorithm.

10
Q

What’s the best, average, and worst runtime for mergesort?

A

Merge sort also runs in O(n log n) time in the best, average, and worst cases. It has a much smaller constant factor than Heapsort, but requires O(n) buffer space to store intermediate data, which can be expensive. Its main selling point is that it is stable, unlike Heapsort. In addition, it parallelizes well.

11
Q

What’s a hash table?

A

A Hash Table is a data structure for mapping values to keys of arbitrary type. It consists of an array of buckets of a certain length. We assign each possible key to a bucket by using a hash function: a function that returns an integer (the index of a bucket) for any given key. Multiple keys can be assigned to the same bucket, so all the (key, value) pairs are stored in lists within their respective buckets.
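
A minimal separate-chaining sketch in Python, using the built-in hash() as the hash function (the class and method names are just for this illustration):

class HashTable:
    def __init__(self, num_buckets=16):
        # Each bucket holds a list of (key, value) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # hash() maps the key to an integer; modulo picks a bucket.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key exists: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))   # new key: append to the chain

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

t = HashTable()
t.put("answer", 42)
print(t.get("answer"))  # 42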

12
Q

What’s the average case and worst case time for each of its operations?

A

Choosing the right hash function can have a great impact on performance. A hash function that suits the dataset we want to store makes collisions (different keys hashing to the same bucket) a rare occurrence. Even though accessing, inserting and deleting have a worst-case time complexity of O(N) (where N is the number of elements in the Hash Table, all landing in one bucket), in practice the average time complexity is O(1).

13
Q

How would you describe dynamic programming?

A

Dynamic programming is a paradigm for solving optimization problems. It consists of finding solutions to intermediate subproblems, which can be stored and reused when solving the actual problem. Dynamic programming fits problems that become easy once we know the solutions to slightly smaller instances of the same problem (the intermediate subproblems). Dynamic programming solutions are best expressed as a recurrence relation, which then makes them straightforward to implement with memoization or a bottom-up table.
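
A small bottom-up sketch of the idea, using the classic coin-change optimization problem (best[a] is the fewest coins needed to make amount a, built from smaller subproblems):

def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # Recurrence: best[a] = min over coins c of best[a - c] + 1.
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else -1

print(min_coins([1, 3, 4], 6))  # 2 (3 + 3)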
