Algorithms Flashcards

1
Q

What is runtime

A

A function f: N -> N that maps the input size to the number of elementary operations performed

2
Q

How to compare asymptotic complexity of functions

A
3
Q

What is the racetrack principle

A
4
Q

How does Fast Peak Finding work

A

If the array slopes downwards at the midpoint, there must be a peak to the left (and vice versa); recurse into that half, performing a binary search, to find a peak
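As a sketch (not from the cards themselves), the idea can be written as a short Python routine; the name `peak` is illustrative:

```python
def peak(A):
    """Return an index i such that A[i] >= its neighbours.
    Binary search: if the array slopes upward at mid, a peak
    must exist to the right; otherwise one exists at mid or left."""
    lo, hi = 0, len(A) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if A[mid] < A[mid + 1]:
            lo = mid + 1   # sloping upward: a peak lies to the right
        else:
            hi = mid       # sloping downward (or flat): peak at mid or left
    return lo
```

Each step halves the search range, giving the O(log n) runtime.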

5
Q

Log laws

A

n = 2^log(n) (base 2)

6
Q

Derivative of log(n) (base 2)

A
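
The answer body didn't survive the export; for reference, the standard derivative is:

```latex
\frac{d}{dn} \log_2(n) = \frac{1}{n \ln 2}
```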
7
Q

Sum of arithmetic sequence formula

A
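
The answer body didn't survive the export; the standard formula is:

```latex
\sum_{i=1}^{n} a_i = \frac{n (a_1 + a_n)}{2}
\qquad \text{e.g. } 1 + 2 + \dots + n = \frac{n(n+1)}{2}
```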
8
Q

Sum of geometric sequence formula

A
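
The answer body didn't survive the export; the standard formula is:

```latex
\sum_{i=0}^{n} a r^i = a \cdot \frac{r^{n+1} - 1}{r - 1} \qquad (r \neq 1)
```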
9
Q

What is big O notation

A
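
The answer body didn't survive the export; the standard definition is:

```latex
f \in O(g) \iff \exists\, c > 0,\ n_0 \in \mathbb{N} :\ \forall n \geq n_0,\ f(n) \leq c \cdot g(n)
```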
10
Q

Runtime complexity hierarchy

A
11
Q

How to prove/disprove that a function is in O(n)

A
12
Q

Composing two functions effect on O notation

A
13
Q

Loop / multiplying functions effect on O notation

A
14
Q

What are the 5 steps of merge sort

A

1) If n = 1 return A
2) Recursively sort left sub-array
3) Recursively sort right sub-array
4) Merge together the two arrays
5) Return the sorted array
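The five steps above can be sketched in Python (a minimal version that returns a new array; names are illustrative):

```python
def merge_sort(A):
    # 1) If n = 1 (or empty) return A
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])    # 2) recursively sort left sub-array
    right = merge_sort(A[mid:])   # 3) recursively sort right sub-array
    # 4) merge the two sorted halves together
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result += left[i:] + right[j:]
    return result                 # 5) return the sorted array
```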

15
Q

What does the tree diagram of merge sort look like

A
16
Q

What are the steps of a divide and conquer algorithm

A

1) Divide into smaller subproblems
2) Conquer sub-problems
3) Merge back up the tree to find a general solution

17
Q

What does it mean for an algorithm to be in-place

A

It means that no auxiliary arrays are used: at most O(1) elements are stored outside the array

18
Q

What does it mean for an algorithm to be stable

A

Equal elements are not needlessly swapped, so they keep their relative order. Ex: If A[i] = A[i+1] they will not be swapped. Useful for satellite data

19
Q

Is insertion sort stable and in-place

A

Insertion sort is both stable and in-place

20
Q

Is merge sort stable and in-place

A

Merge sort is stable but not in-place, since it uses auxiliary arrays in the merge step

21
Q

How does the merge step work in mergesort (4 steps)

A

Require both halves of A are sorted
1) Copy left half into auxiliary array B
2) Copy right half into auxiliary array C
3) Merge B and C in sorted order
4) Return result
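A minimal Python sketch of these four steps, assuming A[lo:mid] and A[mid:hi] are already sorted (the index convention is an assumption, not the card's own pseudocode):

```python
def merge(A, lo, mid, hi):
    """Merge the sorted halves A[lo:mid] and A[mid:hi] in place."""
    B = A[lo:mid]        # 1) copy left half into auxiliary array B
    C = A[mid:hi]        # 2) copy right half into auxiliary array C
    i = j = 0
    k = lo
    # 3) merge B and C back into A in sorted order
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:           # <= keeps the merge stable
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    while i < len(B):              # copy any leftovers from B
        A[k] = B[i]; i += 1; k += 1
    while j < len(C):              # copy any leftovers from C
        A[k] = C[j]; j += 1; k += 1
    return A                       # 4) return result
```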

22
Q

How is a loop represented in sigma notation

A
23
Q

What type of binary tree is mergesort

A

Complete binary tree

24
Q

What is a full binary tree

A

Each element has either 0 or 2 children

25
Q

What is the runtime of mergesort

A

O(nlogn)

26
Q

What is the depth of the tree for a divide and conquer algorithm which splits in half at each step

A

log(n) (base 2)

27
Q

What is the general runtime for most divide and conquer algorithms

A

O(nlogn)

28
Q

How to determine the runtime of a divide and conquer algorithm

A

Look at how long each node takes to complete, the amount of nodes per level and the number of levels

29
Q

What is the maximum subarray problem

A
30
Q

What are the 3 cases for the maximum subarray problem

A

If the maximum subarray is contained in L, it can be found recursively; likewise for R. However, if it spans the midpoint, the crossing indices must be found directly.

31
Q

How to find the indices for i,j for when the maximum subarray crosses the midpoint

A

Can be broken into two subproblems, maximising A[i, n/2] and A[n/2, j]; loop through all values of i and j to find which give the highest sums. Takes O(n) time.

32
Q

What are the 5 steps for the maximum sub array problem d&c algorithm

A

1) If n=1 return A
2) Recursively compute the max subarr for L
3) Recursively compute max subarr for R
4) Compute the max subarr that crosses the midpoint
5) Compare all these arrays and return the greatest one
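The five steps above can be sketched in Python (a sum-only version; returning indices works the same way but is longer):

```python
def max_crossing(A, lo, mid, hi):
    """Best sum of a subarray that crosses the midpoint: maximise
    the left part ending at mid-1 and the right part starting at mid."""
    best_left, s = float('-inf'), 0
    for i in range(mid - 1, lo - 1, -1):
        s += A[i]
        best_left = max(best_left, s)
    best_right, s = float('-inf'), 0
    for j in range(mid, hi):
        s += A[j]
        best_right = max(best_right, s)
    return best_left + best_right

def max_subarray(A, lo=0, hi=None):
    if hi is None:
        hi = len(A)
    if hi - lo == 1:                       # 1) base case: n = 1
        return A[lo]
    mid = (lo + hi) // 2
    left = max_subarray(A, lo, mid)        # 2) max subarray in L
    right = max_subarray(A, mid, hi)       # 3) max subarray in R
    cross = max_crossing(A, lo, mid, hi)   # 4) max subarray crossing the midpoint
    return max(left, right, cross)         # 5) return the greatest
```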

33
Q

What is the definition of theta notation

A
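
The answer body didn't survive the export; the standard definition is:

```latex
f \in \Theta(g) \iff \exists\, c_1, c_2 > 0,\ n_0 :\ \forall n \geq n_0,\ c_1 g(n) \leq f(n) \leq c_2 g(n)
```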
34
Q

What is the symmetry of theta notation

A
35
Q

How does theta notation relate to big-o

A
36
Q

What is the definition of omega notation

A
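
The answer body didn't survive the export; the standard definition is:

```latex
f \in \Omega(g) \iff \exists\, c > 0,\ n_0 :\ \forall n \geq n_0,\ f(n) \geq c \cdot g(n)
```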
37
Q

How does omega notation relate to big-o

A
39
Q

How does the RAM model work (what can be done in a single time-step)

A
40
Q

How does the RAM model work (memory-wise)

A
41
Q

What are the two costs of an algorithm according to the RAM model

A
42
Q

What does the RAM model mean about the O() for each operation

A

Every operation of our algorithm can be implemented in O(1) elementary operations in the RAM model

43
Q

How are loops expressed as sums

A

Note: k would be the runtime of the inner loop
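
The card's main formula is missing from the export; the standard form, with k(i) the runtime of the loop body in iteration i, is:

```latex
T(n) = \sum_{i=1}^{n} k(i)
\qquad \text{e.g. two nested loops: } \sum_{i=1}^{n} \sum_{j=1}^{n} 1 = n^2
```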

45
Q

What is the formula for best case runtime

A
46
Q

What is the formula for worst case runtime

A
47
Q

What is the formula for average case runtime

A
48
Q

What is the algorithm for linear search

A
49
Q

What is the best and worst case runtime for linear search

A

Worst case: O(n)
Best case: O(1)

50
Q

What is the average case runtime for linear search on a binary array

A

Average case O(1)

51
Q
A

In O(1)

52
Q

What is the algorithm for binary search

A
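
The answer body is missing in this export; a standard iterative sketch (an assumption, not the card's own pseudocode):

```python
def binary_search(A, x):
    """Return an index of x in sorted array A, or -1 if absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == x:
            return mid
        elif A[mid] < x:
            lo = mid + 1   # x can only be in the right half
        else:
            hi = mid - 1   # x can only be in the left half
    return -1
```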
53
Q

Worst case runtime of binary search

A

O(logn)

54
Q

What are the steps of proof by induction

A
55
Q

What is strong induction

A
56
Q

What is a loop invariant

A

A loop invariant is a property P that, if it holds before iteration i, also holds before iteration i + 1

57
Q

What are the steps to prove a loop invariant

A

Note: being true before iteration i + 1 is the same as being true after iteration i; assume it is true before iteration i.

58
Q

What is the difference between a formal and informal loop invariant proof with 2 nested for loops

A

Informal: can just say what the nested loop is doing
Formal: need a separate invariant for the nested for loop

59
Q

What is the algorithm for insertion sort

A

Note: works by inserting each element into its correct position in a growing ‘safe area’
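A sketch of that idea in Python (the card's own pseudocode is not included in this export):

```python
def insertion_sort(A):
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        # shift larger elements of the sorted "safe area" A[0:j] right
        while i >= 0 and A[i] > key:   # strict > keeps the sort stable
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                 # insert key into its correct position
    return A
```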

60
Q

What is the main loop invariant for insertion sort

A

The sub-array safe area A[0, j-1] is sorted before iteration j

61
Q

What is the best and worst case runtime for insertion sort and why

A

Best case: O(n) -> already sorted, goes through the array once
Worst case: O(n^2) -> reverse-sorted, each element is shifted past all previous elements

62
Q

What is the definition of a tree

A
63
Q

What is the definition of a rooted tree

A
64
Q

What is the definition of a leaf and an internal node?

A
65
Q

What is a parent node of v (a node in a tree)

A
  • The parent of a node v is the closest node on a path from v to the root.
  • The root does not have a parent
66
Q

What is a child node of v (a node in a tree)

A

The children of a node v are v’s neighbours except its parent.

67
Q

What is the height of a tree

A

The height of a tree is the length of a longest root-to-leaf
path (number of edges).

68
Q

What is the degree of a node? And what is the total degree of every node in the tree?

A
69
Q

What is the level of a vertex (node) v in a tree?

A

The level of a vertex v is the length of the unique path from the root to v plus 1.

70
Q

Is it possible for a tree to have fewer than two leaves

A

No

71
Q

What is a full, complete and perfect k-ary tree?

A
72
Q

What is the number of nodes in a k-ary tree based on its height?

A

Height = # levels -1

73
Q

What is the height of a perfect k-ary tree based its number of nodes

A

Reminder: height = # levels -1.

In addition: A complete k-ary tree also has height of O(log_k(n))

74
Q

Is it possible to sort faster than O(nlogn)

A

Yes, with knowledge of the data (i.e. only integers); however, it is not possible to sort faster than O(nlogn) using comparison-based sorting (which can sort anything).

75
Q

How can you sort a binary array in O(n)

A
76
Q

What is the premise of comparison based sorting? And what are some example algorithms?

A
77
Q

What is the lower bound for comparison based sorting

A

Ω(nlogn)

78
Q

How many possible orderings are there for an array of size n

A

n!

79
Q

What is a comparison sorting decision tree? How many leaves and what is the height?

A

n! leaves (one for each permutation).
To accommodate n! leaves, 2^h >= n!, so the height is h >= log(n!) = Ω(nlogn).
The height is the number of decisions (comparisons) needed.

80
Q

What is Stirling’s approximation?

A
81
Q

Is heap sort in place and/or stable?

A

Heap sort is in-place but not stable (due to the way elements are rearranged in the heap, and the root being swapped with the last element)

82
Q

What are the two functions of a priority queue structure?

A
83
Q

How is an array interpreted as a tree (and what is the location of a parent, l-r child of any node)

A
84
Q

What is the heap property

A
Every parent is >= its children (for a max-heap). In particular the max is at the root, which is important for extract-max.
85
Q

How is a heap constructed from an array (build(.))

A
Call Heapify() on each node in right-to-left array order, since the right end of the array corresponds to the bottom right of the heap (leaves first, root last)
86
Q

What does the function Heapify() do

A

Traverses from the current node towards the bottom of the tree (along a single path), swapping elements when they don’t fulfill the heap property.
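A minimal iterative sketch of that sift-down, assuming a 0-indexed max-heap stored in A[0:n] (indexing convention is an assumption):

```python
def heapify(A, i, n):
    """Sift A[i] down in a max-heap of size n stored in A[0:n].
    Children of node i sit at indices 2i+1 and 2i+2."""
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and A[left] > A[largest]:
            largest = left
        if right < n and A[right] > A[largest]:
            largest = right
        if largest == i:
            return                           # heap property holds: stop
        A[i], A[largest] = A[largest], A[i]  # swap with the bigger child
        i = largest                          # continue down a single path
```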

87
Q

What are the steps of Heapify()

A
88
Q

What is the runtime of Heapify() and why

A
O(logn): at most it follows a single path from the node down to the bottom of the tree, swapping parent with child
89
Q

What is the runtime for constructing a heap?

A

You would think O(nlogn), since n nodes * the O(logn) runtime of Heapify(); however, because trees are bottom-heavy (most nodes sit near the leaves, where Heapify() is cheap), computing the sum actually gives O(n)

90
Q

What are the steps of heapsort

A
91
Q

What is the runtime of heapsort

A
92
Q

What is the worst case + average case runtime of QuickSort and why is it used?

A

Worst case: O(n^2) - repeatedly choosing a bad pivot
Average case: O(nlogn)
It is used because in practice it is much faster than most other sorting algorithms (the hidden constants are small)

93
Q

Is QuickSort stable and in-place?

A

QuickSort is in-place but is not stable, due to the way it swaps elements (an element initially on the left can be swapped past an equal element on the right)

94
Q

What are the divide, conquer and combine steps of QuickSort

A
95
Q

How does the partition function of QuickSort work and what is its runtime

A

Runtime O(n)

Assuming the pivot has been swapped to the end of the array
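A sketch of one common scheme consistent with that assumption (Lomuto partition, which keeps the pivot at the end; the card's own pseudocode is not in the export):

```python
def partition(A, lo, hi):
    """Partition A[lo:hi+1] around the pivot A[hi]: elements <= pivot
    end up on the left, the pivot lands at its final sorted index,
    which is returned. Runs in O(n)."""
    pivot = A[hi]
    i = lo - 1                         # boundary of the <= pivot region
    for j in range(lo, hi):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]    # grow the <= pivot region
    A[i + 1], A[hi] = A[hi], A[i + 1]  # put the pivot in its place
    return i + 1
```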
96
Q

How does quicksort work (visualised)

A
97
Q

What is the main loop invariant for quicksort?

A

Assuming x is the pivot

98
Q

What is the psuedocode for QuickSort?

A
99
Q

How is the quicksort runtime calculated

A

Partitioning takes O(n), meaning a total of O(n) per level (as the node sizes sum to n)

Runtime = n * L (number of levels) -> = O(nlogn) if ‘good’ splits are selected

100
Q

What is the general runtime recurrence for QuickSort

A
101
Q

What is the best and worst case splits for QuickSort

A

Worst case: pivot is the largest or smallest element
Best case: pivot is the median

102
Q

What does the worst case QuickSort recursion tree look like

A
103
Q

What does the best-case QuickSort recursion tree look like

A
105
Q

What distinguishes a good and bad QuickSort split

A
106
Q

What is the recursion tree for QuickSort with only good splits

A

Recurses until n = 1, so there are log(n) + 1 levels

107
Q

What does the recursion tree for QuickSort look like with alternating good/bad splits

A
108
Q

How is pivot selection typically done in QuickSort

A

While it is possible to use an O(n) algorithm to find the median, in practice it is better to use a random pivot, since the algorithm will make enough progress in expectation, leading to O(nlogn)

109
Q

What type of algorithms are recurrences especially useful for

A

Divide and conquer algorithms

110
Q

What is the general format for a recurrence

A

Base case: T(1) = 1
General case: T(n) = 2T(n/2) + O(n) – or can just write n
The number of recursive calls along with their magnitude (in terms of the n elements passed), plus the time taken for this recursive step, O(n).

111
Q

What 3 methods can be used for solving recurrences

A
112
Q

What are the 3 steps of the substitution method for solving recurrences

A

1) Guess a good upper bound
2) Substitute the guess into the recurrence and solve, showing the bound still holds (the substituted value is at most the guess)
3) Verify the base case (the base-case constant can be increased if needed)

113
Q

How do you solve the following problem

A

Add a minus one at the end

114
Q

What is the premise of drawing a recursion tree for the recursion tree method

A
  • Each node represents the local cost of a sub-problem (the non-recursive term of the recurrence with the current value of n)
  • Recursive invocations become children of a node
  • Compute the sum per level (and the # of levels until n = 1), then add up
  • This gives a good guess for the substitution method
115
Q

How to deal with upper and lower bounds in recursion tree method

A

Discard them - only need a vague guess

116
Q

Sum of infinite geometric series formula

A
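
The answer body didn't survive the export; the standard formula is:

```latex
\sum_{i=0}^{\infty} a r^i = \frac{a}{1 - r} \qquad (|r| < 1)
```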
117
Q

How to find the number of levels of a recursion tree

A

Determine at what rate the size n is getting smaller, set the sub-problem size equal to one, and solve for the number of levels

118
Q

Is it possible to sort integers faster than O(nlog(n))

A

Yes using counting sort or radix-sort

119
Q

What is the pseudocode for counting sort

A
120
Q

What are the steps of counting sort

A

1) Using an aux-array, count how often each element appears
2) Count how many smaller-or-equal elements appear (prefix sums)
3) Go through the initial array (right-to-left) and insert each element at its counted position (-1), then decrement the count
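Those steps can be sketched in Python (assuming non-negative integers up to k; names are illustrative):

```python
def counting_sort(A, k):
    """Stably sort an array of integers in the range 0..k in O(n + k)."""
    count = [0] * (k + 1)
    for x in A:                      # 1) count occurrences of each value
        count[x] += 1
    for v in range(1, k + 1):        # 2) prefix sums: # of elements <= v
        count[v] += count[v - 1]
    out = [0] * len(A)
    for x in reversed(A):            # 3) right-to-left keeps the sort stable
        count[x] -= 1                #    counted position - 1, then decrement
        out[count[x]] = x
    return out
```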

121
Q

What is the runtime of counting sort

A

O(n+k) where k is the highest integer in the array

122
Q

Is counting sort stable and in-place?

A

It is stable but not in-place, since it uses aux arrays

123
Q

What is the pseudo-code for Radix sort

A
Usually uses counting sort as the stable sorting algorithm within it

Sorts digits from least significant to most significant (lowest to highest)

124
Q

What is the runtime of radixsort

A

O(d(n+b)) – b is base and d is number of digits

125
Q

How do you compute the Fibonacci series using a dynamic programming algorithm

A

Either store all values, or just last two
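A sketch of the second variant (storing only the last two values, which gives O(n) time and O(1) space):

```python
def fib(n):
    """Bottom-up dynamic programming for the n-th Fibonacci number."""
    a, b = 0, 1          # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the two-value window forward
    return a
```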

126
Q

What is a method for working out solution to pole-cutting

A

For each length, starting with the lowest, work out the optimal revenue. Scan through the array, combining the price of a single first cut with the optimal revenue of the remaining length (from previous entries), to find the optimal cut for each entry.

Assumption: R(i) = P(i_1) + R(i_2) for i_1 + i_2 = i

127
Q

What is the optimal substructure property

A

Optimal substructure is a key property for dynamic programming that asserts that the optimal solution of a problem can be constructed efficiently from the optimal solutions of its subproblems. In other words, if a problem can be broken down into smaller subproblems, which can be independently solved optimally, then these optimal solutions can be used to solve the original problem optimally as well.

128
Q

What is the optimal substructure of pole cutting

A

In the rod cutting problem, the optimal solution for a rod of length ‘n’ is achieved by maximizing profit from all possible cut lengths ‘i’. If ‘P[i]’ represents the maximum profit from a rod of length ‘i’, the optimal substructure can be summarized as:

P[n] = max(p_i + P[n-i]) for 1 <= i <= n

This equation means that the best profit comes from the best choice of initial cut plus the best profit from the remainder.

The first piece is a single cut with no recursive sub-cuts; further cuts of the remainder are accounted for in the 2nd term

129
Q

What is bottom up psuedo-code for the dynamic programming approach to polecutting

A
Runtime O(n^2)
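The card's pseudocode is not included in the export; a bottom-up sketch of the recurrence P[n] = max(p_i + P[n-i]), assuming p[i] is the price of a piece of length i with p[0] = 0:

```python
def cut_rod(p, n):
    """Bottom-up pole cutting: R[j] is the optimal revenue for
    length j. Two nested loops give the O(n^2) runtime."""
    R = [0] * (n + 1)
    for j in range(1, n + 1):        # solve lengths 1..n in order
        best = 0
        for i in range(1, j + 1):    # try every first cut of length i
            best = max(best, p[i] + R[j - i])
        R[j] = best
    return R[n]
```

The example prices below are illustrative, not from the cards.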
130
Q

What is the difference between top-down and bottom up dynamic programming

A

**Top-down (Memoization):** Begins with the original problem, breaks it into subproblems, and stores the results of each subproblem to avoid duplicate work. It uses recursion and starts solving the problem from the ‘top’, using the results of smaller subproblems as needed.

Bottom-up (Tabulation): Starts from the simplest subproblem and iteratively solves larger subproblems using the results. It typically involves filling up an n-dimensional table in a manner that avoids redundant calculations. It’s more efficient in terms of function calls and stack space as it’s iterative, not recursive.