1324 Algorithmics Flashcards

1
Q

What is an algorithm?

A

A finite sequence of computational steps that transforms an input into an output.

2
Q

How do we evaluate algorithms?

A

Is it correct?
Is it efficient?

3
Q

What are data structures?

A

A standardised way to store and organise data for ease of access and modification.

4
Q

What is the complexity of searching a binary search tree?

A

theta(log n) when the tree is balanced (O(n) in the worst case for an unbalanced tree)

5
Q

What happens if we use a key to look up items?

A

Search, insert and delete become O(1)

6
Q

Why is hashing wasteful?

A

Nearly all memory locations end up empty.

7
Q

What is a hash function?

A

A function that takes an object x and returns a positive integer, used as an index into the table.

8
Q

When the table size is 2^n, what can we use for efficiency?

A

A bitmask: index = h(x) AND (2^n - 1), replacing the slower modulo operation.

9
Q

What are the features of a desirable hash function?

A

Fast to compute
Keys must be distributed as uniformly as possible
Consistency with equality testing functions

10
Q

What are the two methods of collision resolution?

A

Separate Chaining
Open addressing

11
Q

What is separate chaining?

A

At each table entry we have a singly linked list. When there is a collision, we add the new element to that entry's list. This means that when we search we have to iterate through the list.
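As a sketch of the idea, here is a minimal separate-chaining table in Python. The class name and methods are hypothetical, chosen for illustration, and Python lists stand in for the linked lists.

```python
class ChainedHashTable:
    """Minimal separate-chaining hash table (illustrative sketch)."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]  # one chain per table entry

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                   # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))        # collision or new key: extend the chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:  # iterate through the chain
            if k == key:
                return v
        return None
```

A tiny table (size 2) forces collisions, showing the chains absorbing them.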

12
Q

What is the complexity of Separate Chaining?

A

O(n) when traversing the list
O(1) when the bucket holds a single element

13
Q

What is open addressing?

A

When a collision occurs we find the nearest open spot in the table and add the item there.

Over time, entries tend to cluster, increasing the number of probes needed to locate an item.

14
Q

What is the loading factor?

A

The proportion of occupied entries in the table.
lambda = 0.9 means that nine in ten locations are full (one in ten is free).

15
Q

How can we avoid clustering?

A

Quadratic Probing
Double Hashing

16
Q

What is Quadratic Probing?

A

We try locations h(x) + d_i where d_i = i^2, so we probe at offsets 1, 4, 9, 16, …
This prevents primary clustering, but we might not be able to add an element even if the table isn't full.

17
Q

What is double hashing?

A

We try locations h(x) + d_i, where d_i = i * h_2(x).
h_2(x) is a second hash function that depends on the key, e.g. h_2(x) = R - (x MOD R) where R is a prime smaller than the table size. h_2(x) cannot be a divisor of the table size, so the table size must be prime.

18
Q

How do we search with open addressing?

A

Compute the array index; if the location is empty, the key is not present.
If it contains the key, the search succeeds.
Otherwise, compute the next address in the probe sequence and repeat.

19
Q

What is the problem with deleting an element in open addressing?

A

If we delete an entry between the first address and the actual value address, the actual value becomes inaccessible.

We can solve this by inserting a dummy value when we delete.

20
Q

How do we resize a hash table?

A

Create a hash table, twice the size
Iterate through the old hash table, adding each element to the new table.
This requires recomputing all the hash codes.

21
Q

What does the UNION command do?

A

Connects two objects.

22
Q

What does isConnected do?

A

Determine if two objects are connected via a path

23
Q

What does find do?

A

Check which class a given element belongs to

24
Q

How can we implement Union, Find, and isConnected?

A

An array id[] of size N. p and q are connected if id[p] == id[q].
To merge, change every entry equal to id[p] to id[q].

25
What is the complexity of Union-Find?
Initialisation: O(N). Find: O(1). Union: O(N).
26
What is the difference between Union-Find and Quick-Union?
Components are stored as trees, making the whole structure a forest. The id of an element is its parent (unless it is a root).
27
What is the complexity of Quick-Union?
Initialisation: O(N). Find: O(N). Union: O(N).
28
What is weighted Quick-Union?
Keep the tree balanced: maintain a size[] array to track the number of items in each tree, and merge the smaller tree into the larger one (link the root of the smaller tree directly to the root of the larger one). The union method performs this merge.
29
What is the Path Compression?
When we perform a find operation in Quick-Union, we actually look for a path from element p to its root. Each time a root is computed, set the id of the items on this path to the root, flattening the tree.
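Combining the two improvements above, a weighted quick-union with path compression can be sketched in Python as follows (the class name is hypothetical; the course's reference code may differ):

```python
class UnionFind:
    """Weighted quick-union with path compression (sketch)."""

    def __init__(self, n):
        self.id = list(range(n))   # each element starts as its own root
        self.size = [1] * n        # number of items in each tree

    def find(self, p):
        root = p
        while root != self.id[root]:
            root = self.id[root]
        while p != root:           # path compression: point nodes at the root
            nxt = self.id[p]
            self.id[p] = root
            p = nxt
        return root

    def union(self, p, q):
        rp, rq = self.find(p), self.find(q)
        if rp == rq:
            return
        if self.size[rp] < self.size[rq]:  # weighting: smaller tree under larger
            rp, rq = rq, rp
        self.id[rq] = rp
        self.size[rp] += self.size[rq]

    def connected(self, p, q):
        return self.find(p) == self.find(q)
```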
30
What is the complexity of the Quick-Union improvements (weighting and path compression)?
Initialisation: O(N). Find: O(log* N). Union: O(log* N).
31
What is log* N?
Iterated logarithm: the number of times you must apply the logarithm before the result is less than or equal to one.
32
What is insertion sort?
Keeps a sorted subsequence of elements on the left. The subsequence grows by inserting the next element into its correct position in the sorted subsequence; with each iteration the sorted region extends one to the right. This is stable and in-place.
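The description above can be sketched directly in Python:

```python
def insertion_sort(a):
    # Stable, in-place: grow a sorted prefix by inserting a[i] into position.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop the element into its slot
    return a
```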
33
What is the complexity of Insertion sort?
Worst: theta(n^2). Average: theta(n^2). Best: theta(n).
34
What is selection sort?
Search for the smallest element, then put it in the right place, and so on for the second. This is in-place but not stable.
35
What is the complexity of selection sort?
theta(n^2)
36
What is a min-heap?
A binary tree where every level above the lowest is fully occupied, and the nodes on the lowest level are all to the left. Each child has a value greater than or equal to its parent.
37
How do we add an element to a min heap?
Add the element to the next available space, then percolate it up the tree to maintain the ordering.
38
How to remove an element from a min-heap?
The minimum element is the root of the tree. Remove it, replace it with the last element in the heap, then percolate that element down the heap, swapping with the minimum child at each level.
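Python's standard `heapq` module implements exactly these two operations on a list, so the add/remove behaviour can be demonstrated without hand-rolling the percolation:

```python
import heapq  # heapq maintains a binary min-heap inside a plain list

h = []
for x in [5, 1, 4, 2, 3]:
    heapq.heappush(h, x)   # add at the end, then percolate up

# heappop removes the root (the minimum), moves the last element to the
# root, and percolates it down by swapping with the smaller child.
smallest = heapq.heappop(h)
drained = [smallest] + [heapq.heappop(h) for _ in range(len(h))]
```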
39
What is the complexity of a min-heap?
Percolating one level is theta(1); percolating the full height is theta(log n). The remaining operations are theta(1).
40
What is a priority queue?
A queue where we assign a priority to each element, the element with the highest priority is the head of the queue.
41
What is a heap sort?
We simply add all elements to the heap and then remove them in order. This is not in-place and uses theta(n) additional memory. The worst-case complexity is O(n log n).
42
What is a B-tree?
Balanced Multiway trees for fast search, finding successors and predecessors, insert, delete, maximum and minimum. They are designed to keep data close together on disk, to reduce retrieval time.
43
What are Multiway Trees?
Trees where each non-leaf node has M children. The access time becomes log_M(n) = log_2(n)/log_2(M).
44
What are B-Trees?
All internal nodes and leaf nodes contain data pointers along with keys. Sequential access of nodes is not possible, searching is slower, and insertion and deletion are slower.
45
What are B+ Trees?
Only leaf nodes contain data pointers; internal nodes contain keys only. Sequential access is possible, like a linked list, as leaf nodes are linked to each other. Searching is faster, and insertion and deletion are faster and easier.
46
What are the rules of B+ Trees?
The data items are stored only at the leaves. The non-leaf nodes store up to M-1 keys to guide the search: key i represents the smallest key in subtree i+1. The root is either a leaf or has between 2 and M children. All non-leaf nodes except the root have between M/2 and M children. All leaves are at the same depth and (except the root) have between L/2 and L data entries.
47
How do we choose M and L in B+ trees?
This depends on the block size (amount that can be read from the disk in one go) or just L = M -1
48
What is a trie?
A multiway tree often used for storing large sets of words: a tree with a possible branch for every letter of an alphabet, where every word ends with a special terminator symbol ($).
49
How can tries be used? And what are their disadvantages?
They are another way of implementing sets, providing quick insertion, deletion, and find. However, they waste a lot of memory.
50
What are Binary Tries?
An extreme solution to the memory issue is to build a bit-level trie, so the resulting data structure is a binary tree. It differs from a regular binary tree in that the decision at each node depends on the current bit. However, you lose the multiway tree's advantage of reduced depth.
51
What do algorithms need to be?
Correct, efficient and easy to implement. In that order.
52
When is a property a loop invariant?
When we can show: it holds true prior to the first iteration; if it holds before an iteration of the loop, it remains true after that iteration; and when the loop terminates, the property is useful in showing the algorithm is correct.
53
What is a loop invariant?
A property of a loop that helps us understand why an algorithm is correct
54
What is a lower bound on complexity?
A lower bound of f(n) is a guarantee that no algorithm can use fewer than f(n) operations.
55
What is a decision tree?
A method for visualising the lower bound on complexity. Worst case: depth of the deepest leaf. Average: average depth. Best: depth of the shallowest leaf.
56
What are the correctness requirements for a decision tree?
It must have a leaf for every possible way of sorting the list, considering all permutations; these correspond to different paths in the tree. Each leaf gives a possible ordering. With n! permutations, log_2(n!) will be the minimum height of the tree.
57
How do we merge two sorted lists?
Compare the front elements of a and b, choose the smaller, put it in c, and repeat.
58
Is merge sort stable and in-place?
It is stable but not in-place
59
Give some pseudocode for MergeSort
MERGESORT(a, start, end):
if start < end THEN
- mid = floor((start+end)/2)
- MERGESORT(a, start, mid)
- MERGESORT(a, mid+1, end)
- MERGE(a, start, mid, end)
else
- return
60
What is the complexity of merge sort?
O(n log n); the recurrence is T(n) = 2T(n/2) + O(n).
61
How can we optimise merge sort?
If the sub array falls below a certain size we can use insertion sort instead.
62
Give some pseudocode for Quicksort
QUICKSORT(a, start, end):
if start < end THEN
- pivot = CHOOSEPIVOT(a, start, end)
- part = PARTITION(a, pivot, start, end)
- QUICKSORT(a, start, part-1)
- QUICKSORT(a, part+1, end)
else
- return
63
What is the worst case scenario for QUICKSORT?
When the pivot is always the smallest or largest element. We therefore need to choose the pivot carefully.
64
How do we choose an efficient QUICKSORT pivot?
Choose the median of the first, middle and last elements; choose the pivot randomly; or use a selection algorithm.
65
What is the complexity of Quicksort?
Partitioning is theta(n). Worst case: O(n^2), when the pivot is always the smallest or largest element. Best and average case: O(n log n). In practice it makes about 39% more comparisons than merge sort, but moves less data, so it is usually faster.
66
List the complexities (worst, best, average), in-place, and stability of Insertion, Selection and bubble.
Insertion: worst theta(n^2), best theta(n), average theta(n^2); in-place: YES; stable: YES.
Selection: worst theta(n^2), best theta(n^2), average theta(n^2); in-place: YES; stable: NO.
Bubble: worst theta(n^2), best theta(n), average theta(n^2); in-place: YES; stable: YES.
67
How can we make quicksort more efficient?
When the array gets significantly small we can use insertion sort.
68
What is the Lomuto partition method? And its complexity?
A theta(n) algorithm for computing the partition point. If a[j] < p, we move i to the right and swap a[j] with a[i].
69
Give pseudocode for the Lomuto partition method
p = a[right]
i = left - 1
for j = left to right - 1 DO
- if a[j] < p THEN
- - i = i + 1
- - swap(a[i], a[j])
swap(a[i+1], a[right])
return i + 1
70
Give the pseudocode for the Hoare partition method
p = a[left]
i = left - 1
j = right + 1
while True DO
- do
- - j = j - 1
- while a[j] > p
- do
- - i = i + 1
- while a[i] < p
- if i < j THEN
- - swap(a[i], a[j])
- else
- - return j
71
How can the partition methods be improved?
Choose the median of the first, middle and last elements; randomly select the pivot; or randomly shuffle the input at the beginning.
72
What is Knuth/Fisher-Yates shuffling?
for i = a.length downto 2 DO
- j = random integer in range (1, i)
- swap(a[i], a[j])
This produces a uniformly random permutation of the array, runs in theta(n), and is in-place.
73
What is multi-pivoting?
Where we choose two or more pivots to improve efficiency. The number of required comparisons remains the same, as does the number of swaps; however, the number of cache misses decreases.
74
What is Radix Sort?
An array a of size n, where each item is a d-digit number and each digit can take at most k values.
for i = 1 to d DO
- use a stable sort to sort the items on digit i (least significant digit first)
75
What is counting sort? And its complexity?
A stable but not in-place algorithm. The complexity is theta(n+k). If n >> k, this is a linear sorting algorithm; if k >> n, there is no point in using counting sort.
76
What is the purpose of combining radix and counting?
theta(d(n+k)), where d is the number of digits and k is the base. If d is constant, radix sort runs in linear time. Radix sort is stable, but not in-place and not as general as comparison sorts, so it is unpopular.
77
Give the pseudocode for counting sort
let C[0,...,k] be a new array
for i = 0 to k DO
- C[i] = 0
for i = 1 to A.length DO
- C[A[i]] = C[A[i]] + 1
for i = 1 to k DO
- C[i] = C[i] + C[i-1]
for j = A.length downto 1 DO
- B[C[A[j]]] = A[j]
- C[A[j]] = C[A[j]] - 1
78
What does the running time depend on?
The size of the input
79
What is the difference between big-Omega and big-O?
Omega is a lower bound for the rate of growth, while O is the upper bound
80
What is big theta?
A tight bound: f(n) is theta(g(n)) when g(n) is both an upper bound (big-O) and a lower bound (big-Omega) on the rate of growth.
81
What is an Abstract Data Type?
Specifies the operations through which a certain data structure can be accessed. Allowing you to declare your intentions.
82
What is a Stack
LIFO (last in, first out). Easily implemented using arrays.
83
What are the standard operations of a stack?
Push Pop Peek isEmpty isFull
84
What are the standard operations of a Queue?
Enqueue Dequeue Peek IsEmpty
85
Why do we use stacks?
They have a simple interface and reduce access to memory.
86
How is a queue implemented?
Arrays or linked lists
87
What are the standard operations of a Priority Queue?
Insert findMin deleteMin
88
How is a priority queue implemented?
A linked list or a binary tree (a binary heap is most efficient).
89
What are the standard operations of a List?
Add Remove Set Get
90
What is a List?
A collection of elements where the order of the elements is important
91
What is a set?
A mathematical set with no ordering or repetition
92
What are the standard operations of a set?
Add Remove Contains Size isEmpty
93
How can a set be implemented?
HashSet -> Fastest access TreeSet -> For comparisons
94
What is a map?
A content-addressable store of key-value pairs, accessed using a key.
95
What are the standard operations of a map?
Put Get Remove size
96
How can a map be implemented?
Binary Trees Hash tables
97
What is the complexity of an array?
Access: theta(1) Insert: O(n) Delete: O(n)
98
What is the problem with using arrays?
If they become full they have to be resized, which is slow.
99
Why is resizing an array slow?
Because you have to create a new array and re-add everything to the new array.
100
What is a linked list?
Each element is a node, where a node contains a reference to a value and a reference to the next node.
101
What is a singly linked list?
Each node has only a reference to the one ahead
102
What is a doubly linked list?
Each node has a reference to the next and previous node.
103
What are the standard operations of a linked list?
Size isEmpty add remove_head Contains get_head get
104
How do you add an element to a linked list?
Create a new node and set the next of the last node to this new node; in a doubly linked list, also set the previous of the new node to the old last node.
105
What is the complexity of a singly linked list stack?
All operations are theta(1)
106
What is the complexity of a doubly linked list queue?
Find: O(n). Add or remove from tail: theta(1). Insert or delete at a given position: theta(1).
107
What is a dummy node?
A node containing no data; it marks where the head and tail are, simplifying the implementation.
108
What are skip lists?
Hierarchies of linked lists that support a binary-search-like strategy, providing theta(log n) search and similar complexity to a binary tree.
109
What is the recursion strategy?
We reduce solving a problem to solving one or more smaller problems of the same type. We repeat this until we reach a trivial case we can solve by other means.
110
What is a recursive definition?
Base case: Where solving the problem is trivial Recursive clause: which is a self-referential part driving the problem towards the base case.
111
When shouldn't we use recursion?
When the subproblems overlap, as in the naive Fibonacci recursion, where fib(n-1) and fib(n-2) recompute the same values.
112
What is a tree?
A connected acyclic undirected graph.
113
What is a rooted tree?
A tree with one root node, so each node has one parent except the root itself.
114
How can a tree be thought of recursively?
A root plus its subtrees, each of which is itself a tree.
115
How can the height of a tree be calculated?
Max level + 1
116
What is a binary tree?
A tree where each node has either 0, 1 or 2 children.
117
What is the total number of possible nodes at height h in a binary tree?
2^h - 1
118
When implementing a binary tree, what attributes does a node have?
T element <- Actual Value Node left Node right Node parent
119
What are the rules of a binary search tree?
Each element in the left subtree is smaller than the root Each element in the right subtree is bigger than the root. Both the left and right subtrees are binary search trees.
120
How do we search a binary search tree?
Start at the root. Go left if the key is smaller, right if greater. If equal, the key is found; if we reach a null child, it is not in the tree.
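The search loop above, sketched in Python with a minimal node class (the attribute names follow the node attributes listed in the earlier card; parent is omitted as it is not needed for search):

```python
class Node:
    def __init__(self, element, left=None, right=None):
        self.element, self.left, self.right = element, left, right


def bst_search(node, key):
    # Iterative BST search: left if smaller, right if greater.
    while node is not None:
        if key < node.element:
            node = node.left
        elif key > node.element:
            node = node.right
        else:
            return node   # found
    return None           # reached a null child: key not in the tree
```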
121
What is the complexity of a binary search tree?
O(log n) for a balanced tree; O(n) in the worst case.
122
What is an inorder tree walk?
An algorithm to print the elements of a binary search tree in sorted order.
INORDER(e):
if e != null THEN
- INORDER(e.left)
- print(e.element)
- INORDER(e.right)
123
What is the complexity of inorder tree walk algorithm for BST?
theta(n)
124
How can we find the successor of a node?
Follow two rules: if the right child exists, move right once and then go as far left as possible. Otherwise, go up while the current node is a right child; the first ancestor reached from a left child is the successor.
125
How can we delete from a binary search tree?
The case is trivial if the node has one child or no children; if it has two children, we replace it with its successor and then repeat this on the successor.
126
What time complexity is removing from a binary search tree?
O(h), i.e. O(log n) for a balanced tree, since we may need to find the successor; the pointer manipulation itself is O(1).
127
How can we calculate the maximum depth of a tree?
l = log_2(n+1), which is theta(log n).
128
What is an unbalanced tree?
A tree with a height difference between leaf nodes > 1
129
What are the 4 types of rotations?
Left Right Left - right double Right - left double
130
What is the difference between a single and double rotation?
Single is for when the unbalanced subtree is on the outside Double is for when the unbalanced subtree is on the inside.
131
What are the rules of an AVL tree?
The heights of the left and right subtree differ by at most 1. The left and right subtrees are AVL trees, guaranteeing logarithmic depth.
132
What is the complexity of the height of an AVL tree?
h is O(log(n))
133
What can the formula for the height of an AVL tree be compared with?
The Fibonacci sequence
134
What is the balance factor?
Height of the left subtree - the height of the right subtree.
135
What do the values of the balance factor mean?
0: equal heights. -1: right is deeper than left. 1: left is deeper than right. < -1 or > 1: we need to rotate. If there is a change of sign between balance factor values along the path, a double rotation is needed.
136
What needs to be done after each add and remove in an AVL tree?
Rebalancing
137
What is a graph?
A set of vertices and a set of edges.
138
What is a path?
A sequence of vertices in which consecutive vertices are joined by edges.
139
What is a simple path?
If all vertices except the first and last are distinct.
140
What is a complete graph?
A graph in which there is an edge between every pair of vertices.
141
What is the degree of a vertex?
The number of edges incident on it.
142
When is an adjacency list better than a matrix?
When the graph is sparse
143
What is the space complexity of an adjacency list? Compared to a matrix
List: O(|E|+|V|). Matrix: O(|V|^2).
144
Explain the BFS algorithm
Start from a source node, discover all nodes adjacent to the current node, then move on to the next discovered node in FIFO order, marking nodes as visited so none is processed twice.
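As a sketch, BFS over an adjacency list with distances and predecessors (the dict-of-lists representation is an assumption for illustration):

```python
from collections import deque


def bfs(adj, s):
    # adj: dict mapping vertex -> list of neighbours (adjacency list).
    dist = {s: 0}
    pred = {s: None}          # predecessors let us recover shortest paths
    q = deque([s])
    while q:
        u = q.popleft()       # FIFO order: closest undiscovered nodes first
        for v in adj[u]:
            if v not in dist: # discover v for the first time
                dist[v] = dist[u] + 1
                pred[v] = u
                q.append(v)
    return dist, pred
```

Vertices absent from `dist` are unreachable from `s`, which is why the complexity bound is O rather than theta.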
145
What is the complexity of BFS?
O(|V|+|E|)
146
Why is the BFS complexity O not theta?
As some vertices may not be reachable from s.
147
Why do we store the predecessor node in BFS?
To calculate the shortest path
148
What is a bipartite graph?
A graph whose vertices can be partitioned into two sets A and B such that every edge (u,v) has u in A and v in B.
149
Explain the DFS algorithm
Edges are explored out of the most recently discovered node; we backtrack to the predecessor when a node has no undiscovered neighbours.
150
T/F DFS will visit all nodes
True
151
What is the complexity of DFS?
O(|V|+|E|)
152
What is topological sorting?
A linear ordering of the nodes of a directed acyclic graph such that for each edge from u to v, u comes before v.
153
How can topological sorting be implemented?
Call DFS on the graph. Every time the processing of a node is finished, add it to the front of a linked list. Once finished, the resulting list is the topological sorting.
154
What is a strongly connected graph?
A directed graph where there is a path from every vertex to every other vertex.
155
What is Kosaraju's algorithm?
Call DFS to compute finishing times for all nodes. Create G^T (the transpose). Call DFS on G^T, considering nodes in order of decreasing finishing time; each resulting DFS tree is a strongly connected component.
156
What is a minimum spanning tree?
A tree T (a subset of E) that spans G and has the least weight. T is: a tree; spanning; of least weight.
157
Explain Kruskal's algorithm
We build an MST by adding edges one at a time: iterate over the edges of E sorted by weight, using the UNION-FIND data structure to check whether adding an edge would create a cycle.
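A compact sketch of Kruskal's algorithm, with a tiny inline union-find (using path halving) to detect cycles; the `(weight, u, v)` edge format is an assumption for illustration:

```python
def kruskal(n, edges):
    # n vertices labelled 0..n-1; edges is a list of (weight, u, v).
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # iterate edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((u, v))
            total += w
    return mst, total
```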
158
Explain Prim's algorithm
Maintain sets S and V-S. Find x in S and y in V-S such that w(x,y) <= w(u,v) for all u in S, v in V-S. Remove y from V-S, add it to S, and add the edge (x,y) to T. Repeat until V-S is empty.
159
What is the complexity of Kruskal's?
O(m log n), as each UNION is O(log n) and occurs m times.
160
How can we save time on Prim's
We only need to consider the previous node's neighbours.
161
What is the complexity of Prim's?
O(m log n)
162
Explain the general MST algorithm.
Maintain a set of edges A that is a subset of some MST. At each step, add an edge such that A is still a subset of an MST.
163
What is the function for the shortest path?
delta(u,v) = min { w(p) : p is a path from u to v }, or infinity if no such path exists.
164
Can a path with negative weights be defined?
Yes
165
When is a path with negatives weights undefined?
When it contains a negative weight cycle
166
What is the value of v.delta at the end of a shortest path calculation?
The total cost of the shortest path from s to v
167
What is relaxing an edge?
Testing if we can improve the shortest path cost of v by using that edge
168
What is the difference between Bellman-Ford and Dijkstra's?
Bellman-Ford works with negative edge weights and can detect negative cycles; Dijkstra's requires non-negative weights.
169
Explain the Bellman-Ford algorithm
Relax is applied to all edges in the graph |V| - 1 times
170
What is the complexity of Bellman-Ford?
O(|V| x |E|) Slower than Dijkstra's
171
Explain Dijkstra's algorithm.
Maintains a set S of nodes whose shortest paths have been determined. All other nodes are kept in a priority queue to keep track of the next node to process. Predecessors are stored for recovery of the shortest path.
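A sketch of Dijkstra's algorithm using `heapq` as the priority queue. Since `heapq` has no decrease-key operation, stale queue entries are skipped instead, a common practical variant:

```python
import heapq


def dijkstra(adj, s):
    # adj: dict u -> list of (v, weight); weights assumed non-negative.
    dist = {s: 0}
    pq = [(0, s)]                  # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue               # stale entry: u was already settled
        for v, w in adj[u]:
            nd = d + w             # relax edge (u, v)
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```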
172
What is the complexity of Dijkstra's algorithm?
O((|V| + |E|) log |V|). Relax is O(log |V|) due to having to percolate the heap when values change (decrease-key).
173
What is a greedy algorithm?
Make locally optimal solutions at each step with the hope that such choices will lead to a globally optimal solution, keyword: hope.
174
Why are greedy algorithms used so commonly?
They are efficient
175
List some examples of greedy algorithms.
Dijkstra's, Prim's and Kruskal's
176
What is the job scheduling problem?
We have a set of n jobs, each with a weight w, and length l. They must be run sequentially
177
With Job scheduling, what formula are we minimising?
f = sum from k = 1 to n of w_k * c_k, where c_k is the completion time of job k.
178
What are the special cases of job scheduling?
Same weighting -> shortest length first Same length -> largest weight first
179
In job scheduling, what metric do we use to compare jobs?
The ratio between the weight and the length
180
What is the interval scheduling problem?
We have a set of n intervals (s,e) where s and e are the start and ending time. We need to choose a non-overlapping subset of those intervals such that the total number of selected intervals is maximum.
181
In the interval scheduling problem, what metric do we take for the optimal solution?
Use the earliest finishing time
182
Explain the solution to the interval scheduling problem.
We build a min-heap keyed on finishing times, then iterate until the heap is empty: extract the minimum node, and if its start time is greater than or equal to the last selected finishing time, add it to the solution.
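The same greedy choice can be sketched with a sort in place of the min-heap (extracting in increasing finishing-time order either way):

```python
def interval_scheduling(intervals):
    # Greedy: process intervals by earliest finishing time; keep an interval
    # if it starts no earlier than the last selected one finished.
    chosen = []
    last_end = float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen
```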
183
What is the complexity of the greedy solution to interval scheduling?
O(n log n)
184
If we add weights to the interval scheduling problem (e.g each task has a cost) can we find a greedy solution?
No. No correct greedy solution is known; the weighted version is instead solved with dynamic programming.
185
What is the fractional knapsack problem?
Given n items with values v_1,...,v_n and weights w_1,...,w_n, and a knapsack of capacity C, maximise the sum from i=1 to n of x_i * v_i subject to the sum of x_i * w_i being <= C, where 0 <= x_i <= 1.
186
How do we solve the greedy fractional knapsack problem?
Add the item with the biggest value-to-weight ratio until the next item does not fit in the knapsack; then add a fraction of it. We can either sort items by ratio or add them to a max-heap with the ratio as key.
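A sketch of the greedy choice above, using a sort by ratio (a max-heap would work equally well):

```python
def fractional_knapsack(values, weights, capacity):
    # Greedy: take items by decreasing value/weight ratio; take a fraction
    # of the first item that no longer fits whole.
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if capacity >= w:
            total += v            # item fits entirely
            capacity -= w
        else:
            total += v * capacity / w  # take the fraction that still fits
            break
    return total
```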
187
Does the greedy solution work if we cannot add fractions of items?
No
188
What is the Huffman coding problem?
We have an alphabet S = {s_1,...,s_k} and need to encode a message M consisting of n symbols from S, using fewer than n * log_2(k) bits to store the message.
189
How can the Huffman coding problem be solved?
By using fewer bits to represent more common characters; however, the codes now have variable size.
190
How can the variable size of Huffman coding be solved?
Using prefix codes, where no code is a prefix of another: if a = 0 then b != 01, but b = 10 is allowed. There is a 1:1 mapping between prefix codes and binary trees.
191
How many bits are used to represent layer 3 in Huffman coding binary trees?
3; it is just the depth in the tree.
192
In Prefix Trees, what is the formula for the average number of bits used?
A = sum from i = 1 to k of f_i * d(s_i), where f_i is the frequency of s_i and d(s_i) is the depth of the leaf corresponding to s_i.
193
Is there only one optimal Prefix tree?
No, there are many
194
What is the complexity of Huffman coding - searching a prefix tree?
O(n)
195
What is dynamic programming?
Writing a solution of a problem in terms of the solutions to its subproblems.
196
When can a problem be solved by dynamic programming?
Optimal substructure: a recurrence relation between the optimal solutions to the problem and its subproblems. Overlapping subproblems: otherwise it is divide-and-conquer.
197
What is the problem with dynamic programming solutions?
Since subproblems overlap we end up computing the same subproblem multiple times.
198
What is the solution to overlapping subproblems?
Memoisation or bottom-up solutions.
199
What is the linear independent set problem?
Given a graph G=(V,E), an independent set S is a subset of vertices such that for all x, y in S, (x,y) is not in E. We want to maximise |S|.
200
Explain how the linear independent set problem is solved dynamically
Let O be the optimal solution with value opt(n). Consider the last node: either it is in the solution or it is not. If it is, the node prior is not, so we take the optimal for the node two prior plus the last node. Otherwise, the optimal solution is the same as the one for the node prior.
201
How do we get the solution from the DP of independent linear sets?
We walk backwards
202
What is the difference between the greedy and DP version of interval scheduling?
DP can handle intervals with weights
203
Explain the DP version of interval scheduling
Let O_n be the optimal solution for the first n intervals and opt(O_n) its value. Consider the last interval: should we include it? It depends on whether opt becomes bigger by adding the interval or not.
204
What is the complexity of DP version of interval scheduling?
O(n log n)
205
What is the longest common subsequence problem?
If we have two strings X and Y, we have to find the longest common subsequence (characters do not have to be consecutive)
206
Explain the table solution to longest common subsequence.
Start from the bottom-right of the table: if the characters are not the same, move up or left, whichever cell holds the larger value. If they are the same, move both up and left (diagonally) and record the letter. Continue until a zero is reached.
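Building the table itself can be sketched as a bottom-up DP in Python (this computes the LCS length; the traceback above recovers the actual subsequence from the same table):

```python
def lcs_length(x, y):
    # dp[i][j] = length of the LCS of x[:i] and y[:j].
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]
```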
207
What is the complexity of longest common subsequence?
O(n * m)
208
Explain the solution to 0-1 Knapsack
Let S_n be the optimal solution. Either an item is or isn't in the sack. If it isn't, there are n-1 remaining items and no change to capacity. If it is, there are n-1 remaining items but a decreased capacity. Since we don't know in advance whether adding the item is the correct option, we compute both and take the larger.
209
What is the rod cutting problem?
A company buys rods and cuts them into smaller pieces. The company then sells them. Given a rod of length n, the company would like to cut it to maximise profits.
210
Explain the solution to rod cutting.
Either we cut the rod at position i or we don't. This simplifies into the recurrence r(n) = max over k = 1..n of ( p_k + r(n-k) ).
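The recurrence computed bottom-up, as a Python sketch:

```python
def rod_cut(prices, n):
    # prices[k] = price of a piece of length k (prices[0] unused, 0).
    # r[length] = best revenue obtainable from a rod of that length.
    r = [0] * (n + 1)
    for length in range(1, n + 1):
        # Try every first-cut length k, then solve the rest optimally.
        r[length] = max(prices[k] + r[length - k]
                        for k in range(1, length + 1))
    return r[n]
```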
211
What is the complexity of rod cutting?
O(2^n) without memoisation; O(n^2) with dynamic programming.
212
What is the maximum subarray sum problem?
Given an array of numbers with at least one negative, find the contiguous subarray whose sum of elements is maximum.
213
What is the subset sum problem?
We are given an array and a target: is there a subset whose sum equals the target?
214
What is the solution to the subset sum problem?
Start with the last element: either it is in the solution or it is not. If it is, we look for a subset of the remaining n-1 elements whose sum is the target minus its value. Otherwise, we look for a subset of the n-1 remaining elements whose sum is the target. If the target s = 0 we return true; if n = 0 (and s != 0) we return false.
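A bottom-up sketch of the same recurrence in Python, tracking the set of achievable sums instead of recursing (each element is either skipped or taken, mirroring the two branches above):

```python
def subset_sum(nums, target):
    # reachable holds every sum achievable with a subset of the items seen so far.
    reachable = {0}                # the empty subset sums to 0
    for x in nums:
        # For each known sum s, we may also take x, reaching s + x.
        reachable |= {s + x for s in reachable}
    return target in reachable
```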
215
Explain the dynamic solution to Bellman-Ford.
Let d[v,i] be the shortest-path estimate for v using at most i edges. To improve d[v,i+1] we can either keep d[v,i], or go via an in-neighbour u and then the edge (u,v): d[v,i+1] = min( d[v,i], min over in-neighbours u of d[u,i] + w(u,v) ). To compute the in-neighbours we create an array of linked lists of length n: for each u in V, for each v in adj[u], push u onto inNeighbours[v].
216
What is the complexity of the dynamic Bellman-Ford?
O(nm) for the main algorithm O(n^2) for the indegree computation
217
What does the Floyd-Warshall algorithm solve?
It finds the shortest paths between all pairs of vertices in a weighted graph.
218
Explain the Floyd-Warshall algorithm.
Given a simple path p = (v_1, ..., v_l), an intermediate vertex is any vertex of p other than v_1 and v_l. Let PI^k_ij be the set of paths from v_i to v_j whose intermediate vertices are drawn from {v_1, ..., v_k}. For the shortest such path pi, either v_k is not on pi, in which case pi is in PI^{k-1}_ij, or v_k is on pi, in which case pi decomposes into two shortest paths: pi_1 in PI^{k-1}_ik and pi_2 in PI^{k-1}_kj.
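The case analysis above collapses into a triple loop over an adjacency matrix; a minimal Python sketch (updating the distance matrix in place):

```python
def floyd_warshall(dist):
    # dist: n x n matrix with dist[i][j] = w(i, j), 0 on the diagonal,
    # and float('inf') where no edge exists. Updated in place.
    n = len(dist)
    for k in range(n):             # allow v_k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                via_k = dist[i][k] + dist[k][j]
                if via_k < dist[i][j]:
                    dist[i][j] = via_k
    return dist
```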
219
What is a linear programming problem?
The problem of maximising or minimising a linear function subject to a finite number of linear inequalities
220
How can we solve a linear programming problem?
Simplex or the interior point method
221
What is a standard form?
Maximisation of a linear function subject to linear inequalities
222
What does each simplex iteration aim to do?
Obtain a new basic feasible solution which has a higher value of z.