Quizzes Flashcards
We apply insertion sort to the input-array A = [7,3,5,1,6,2,4]. What is A after two iterations of the outer for-loop?
A) A = [1,3,5,7,6,2,4]
B) A = [3,7,5,1,6,2,4]
C) A = [1,2,3,7,5,6,4]
D) A = [3,5,7,1,6,2,4]
D) A = [3,5,7,1,6,2,4]
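As a sanity check, a minimal Python sketch (function name and 0-based indexing are my own) that replays only the first two passes of the outer loop:

```python
def insertion_sort_steps(a, iterations):
    """Run only the first `iterations` passes of insertion sort's
    outer for-loop and return the (partially sorted) array."""
    a = list(a)
    for j in range(1, iterations + 1):
        key = a[j]
        i = j - 1
        # shift larger elements of the sorted prefix to the right
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_steps([7, 3, 5, 1, 6, 2, 4], 2))
# [3, 5, 7, 1, 6, 2, 4]  -- matches option D
```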
What is the worst-case running time of insertion sort, for inputs of size n?
A) Theta (n log(n))
B) O (n log(n))
C) Theta (n^2)
D) O (log n)
C) Theta (n^2)
What is the best-case running time of insertion sort, for inputs of size n?
A) Theta (n)
B) Theta (n log(n))
C) Theta (n^2)
D) Theta (log(n))
A) Theta (n)
We consider insertion sort. What happens if we swap the order of the two tests in the while-loop condition in line 5?
A) We may then possibly compare A[0] with key, but A[0] does not exist.
B) The algorithm then sorts in reverse order.
C) It does not matter.
D) The program then does not terminate.
A) We may then possibly compare A[0] with key, but A[0] does not exist.
We annotate the following code fragment with the amount of times a line is executed in the worst case. What do we get?
for j:=2 to n do
i := j-1
while i>0 do
i:= i-1
A) For the first line n, for the second line n-1, for the third line the summation from j=2 to n of j, for the fourth line the summation from j=2 to n of (j-1).
B) For the first line n-1, for the second line n-1, for the third line the summation from j=1 to n of j, for the fourth line the summation from j=1 to n of (j-1).
C) For the first line n, for the second line n-1 for the third line j-1, for the fourth line n.
D) For the first line n-1, for the second line n-1, for the third line j-1, for the fourth line n.
A) For the first line n, for the second line n-1, for the third line the summation from j=2 to n of j, for the fourth line the summation from j=2 to n of (j-1).
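The counts in option A can be verified mechanically; a small instrumented Python sketch (counter names are mine) that counts loop *tests* and body executions separately:

```python
def count_executions(n):
    """Count, for the four annotated lines, how often each executes
    (loop-header lines count their tests, including the failing one)."""
    c_for = c_assign = c_while = c_body = 0
    j = 2
    while True:
        c_for += 1                 # the for-loop test "j <= n"
        if j > n:
            break
        i = j - 1
        c_assign += 1              # "i := j-1"
        while True:
            c_while += 1           # the while test "i > 0"
            if not i > 0:
                break
            i -= 1
            c_body += 1            # "i := i-1"
        j += 1
    return c_for, c_assign, c_while, c_body

n = 6
c_for, c_assign, c_while, c_body = count_executions(n)
assert c_for == n
assert c_assign == n - 1
assert c_while == sum(j for j in range(2, n + 1))        # sum_{j=2}^{n} j
assert c_body == sum(j - 1 for j in range(2, n + 1))     # sum_{j=2}^{n} (j-1)
```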
What is the result of applying the first two iterations of the algorithm for selection sort to the input array [7, 2, 9, 6, 1, 3, 5, 4]?
A) 1,2,9,6,7,3,5,4
B) 1,3,9,6,7,2,5,4
C) 1,2,9,6,7,3,4,5
D) 1,3,9,6,7,2,4,5
A) 1,2,9,6,7,3,5,4
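A short Python sketch (my own helper, 0-based) replaying the first two selection-sort passes; note the second pass finds 2 already in place, so the array does not change:

```python
def selection_sort_steps(a, iterations):
    """Perform the first `iterations` selection-sort passes: each pass
    swaps the minimum of the unsorted suffix into position k."""
    a = list(a)
    for k in range(iterations):
        m = min(range(k, len(a)), key=a.__getitem__)  # index of the minimum
        a[k], a[m] = a[m], a[k]
    return a

print(selection_sort_steps([7, 2, 9, 6, 1, 3, 5, 4], 2))
# [1, 2, 9, 6, 7, 3, 5, 4]  -- matches option A
```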
What is the best-case running time of selection sort, for inputs of size n?
A) O(1) because we find immediately the smallest elements of the unordered part.
B) O(n) because we just check whether the array is ordered.
C) O(n log(n)) because that is the best-case for comparison-based sorting.
D) O(n^2) because we always have to completely traverse the unordered part.
D) O(n^2) because we always have to completely traverse the unordered part.
What is the worst-case running time of selection sort, for inputs of size n?
A) Theta(n)
B) Theta(n^2)
C) Theta(n log(n))
D) Theta(log(n))
B) Theta(n^2)
In the recurrence tree for merge sort
A) the height is log(n) and the work per layer is the linear work to split.
B) the height is log(n) and the work per layer is the linear work to merge.
C) the height is n and the work per layer is the linear work to split.
D) the height is n and the work per layer is the linear work to merge.
B) the height is log(n) and the work per layer is the linear work to merge.
Which of the following is(are) correct?
A) n^2 in O(n^3)
B) n^2 in O(n^2)
C) n^2 in Theta (n^2)
D) n^2 in Theta (n^3)
E) n^3 in Theta (n^2)
F) n^3 in O(n^2)
A) n^2 in O(n^3)
B) n^2 in O(n^2)
C) n^2 in Theta (n^2)
An algorithm with worst-case time complexity Theta (n log(n)) is always, for any input, faster than an algorithm with worst-case time complexity in Theta (n^2). Is this statement true or false, and why?
A) True, because n log(n) has a lower rate of growth than n^2
B) True, because Theta gives a strict bound
C) False, because for a small input or for a special input an algorithm with worst-case time complexity in Theta (n^2) may perform better than an algorithm with worst-case time complexity in Theta (n log(n))
D) False, because n log(n) and n^2 have the same rate of growth anyway.
C) False, because for a small input or for a special input an algorithm with worst-case time complexity in Theta (n^2) may perform better than an algorithm with worst-case time complexity in Theta (n log(n))
We consider two statements.
Statement A is: 2^(n+1) is in O(2^n).
Statement B is: 2^(2n) is in O(2^n).
Which of the following holds?
A) A and B are both false
B) A and B are both true
C) A is true but B is false
D) B is true but A is false
C) A is true but B is false
Which of the following orderings of functions is(are) in increasing order of rate of growth?
A) 2^7, n log(n), n^2, n!
B) 3n, 2^3, n log(n), n^3
C) n^2, n^3, n log(n), 2^n
D) n log(n), n^2, n^7, 2^n
E) n log(n), n^2, 2^7, n!
A) 2^7, n log(n), n^2, n!
D) n log(n), n^2, n^7, 2^n
Which of the following orderings of functions is in increasing order of rate of growth?
A) 2^{10}, n (log(n))^2, n^2 log(n), 2^{log(n)}
B) n (log(n))^2, n^2 log(n), 2^{log(n)}, 2^{10}
C) n (log(n))^2, n^2 log(n), 2^{10}, 2^{log(n)}
D) 2^{10}, 2^{log(n)}, n (log(n))^2, n^2 log(n)
D) 2^{10}, 2^{log(n)}, n (log(n))^2, n^2 log(n)
We have T(n) in O(n^2) with T a function for the worst-case time complexity of algorithm A. Do we have T(n) in Theta(n^2)?
A) Yes, because an upper bound for the worst-case gives a lower bound.
B) No, T is not necessarily bound from below by n^2.
B) No, T is not necessarily bound from below by n^2.
We consider an array A that contains n keys. What is the asymptotic worst-case running time of the best possible algorithm that checks whether the keys are in increasingly sorted order?
A) Theta (1)
B) Theta (log (n))
C) Theta (n)
D) Theta (n log (n))
C) Theta (n)
When is a sorting algorithm said to be stable?
A) If the algorithm is linear time on an array that is already sorted.
B) If sorting the array twice in a row does not affect the result.
C) If it does not need extra memory next to the input array.
D) If it respects the order of equal keys in the array.
D) If it respects the order of equal keys in the array.
Consider the following pseudo-code:
Algorithm Loop(n):
s := 0
for i := 1 to n^2 do
for j := 1 to i do
s := s + i
What is the worst-case running time of Loop?
A) O(n).
B) O(n^2).
C) O(n^3).
D) O(n^4).
D) O(n^4).
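The body executes sum_{i=1}^{n^2} i = n^2(n^2 + 1)/2 times, which is Theta(n^4). A small Python sketch (counter function is mine) confirming the closed form:

```python
def loop_body_count(n):
    """Count how often `s := s + i` executes in Loop(n)."""
    count = 0
    for i in range(1, n * n + 1):
        for j in range(1, i + 1):
            count += 1
    return count

# sum_{i=1}^{n^2} i = n^2 (n^2 + 1) / 2, i.e. Theta(n^4)
for n in (2, 3, 5):
    assert loop_body_count(n) == n * n * (n * n + 1) // 2
```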
We consider pseudo-code for finding a maximum element in an array of integers. What is an invariant useful for proving correctness?
FindMax(A)
max := A[1]
for i := 2 to A.length do
if A[i] > max then
max := A[i]
A) max is the largest value in A[1..i]
B) max is the largest value in A[1..(i-1)]
C) max is the largest value in A[1..(i+1)]
D) max is the largest value in A
B) max is the largest value in A[1..(i-1)]
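The invariant in option B can be checked at run time; a Python sketch of FindMax (0-based, so "a[1..(i-1)]" becomes the prefix a[:i]) with the invariant as an assertion at the top of each iteration:

```python
def find_max(a):
    """Maximum of a non-empty list, asserting the loop invariant:
    at the start of each iteration, max_so_far == max of the prefix a[:i]."""
    max_so_far = a[0]
    for i in range(1, len(a)):
        assert max_so_far == max(a[:i])   # invariant from option B
        if a[i] > max_so_far:
            max_so_far = a[i]
    return max_so_far

assert find_max([3, 9, 1, 7]) == 9
```

At loop exit i equals the length of A, so the invariant specializes to option D, which is the postcondition, not the invariant.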
We consider an algorithm that computes for input n >= 1 the sum 1+…+n. What is an invariant that can be used to prove correctness?
ComputeSum(n)
sum := 0
for i := 1 to n
sum := sum + i
return sum
A) sum is 1 + … + (i-1)
B) sum is 1 + … + i
C) sum is i + … + n
D) sum is 1 + … + n
A) sum is 1 + … + (i-1)
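The same idea works here: a Python sketch of ComputeSum asserting the option-A invariant (using the closed form (i-1)i/2 for 1 + ... + (i-1)):

```python
def compute_sum(n):
    """Sum 1 + ... + n, asserting the invariant from option A:
    at the start of iteration i, the running total is 1 + ... + (i-1)."""
    total = 0
    for i in range(1, n + 1):
        assert total == (i - 1) * i // 2   # 1 + ... + (i-1)
        total += i
    return total

assert compute_sum(10) == 55
```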
Merge sort uses the procedure merge as subroutine. What is the running time of merge on inputs of total size n?
A) It is in Theta (1).
B) It is in Theta (n^2).
C) It is in Theta (n log(n)).
D) It is in Theta (n).
D) It is in Theta (n).
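Merge is linear because each comparison moves one element to the output and every element is moved exactly once. A minimal Python sketch (merging into a fresh list rather than in place, for brevity):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list; each element is
    appended exactly once, so the running time is Theta(n) for total size n."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])    # at most one of these two is non-empty
    out.extend(right[j:])
    return out

assert merge([1, 3, 5], [2, 4, 6]) == [1, 2, 3, 4, 5, 6]
```

Taking `left[i] <= right[j]` (rather than `<`) keeps the merge stable.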
We apply merge sort to already sorted input arrays of size n. Then the running time is
A) in Theta (n) because we just need to check that the array is sorted.
B) in Theta (n log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure has linear running time.
C) in Theta (log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure is trivial.
D) in Theta (n log(n)) because the height of the recursion tree is n log(n) + 1 and the merge procedure is trivial.
B) in Theta (n log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure has linear running time.
What is one of the main drawbacks of merge sort?
A) It is a recursive algorithm.
B) It uses additional memory, proportional to the input size.
C) It uses an additional subroutine, merge.
D) Its recursion tree has height approximately the log of the input size.
B) It uses additional memory, proportional to the input size.
What is a correct clause for the case n>1 for a recurrence equation describing the worst-case running time of merge sort, for inputs of size n?
A) T(n) = T(n-1) +cn, with c a constant
B) T(n) = T(n/2) + c, with c a constant
C) T(n) = 2T(n/2) + c, with c a constant
D) T(n) = 2T(n/2) + cn, with c a constant
D) T(n) = 2T(n/2) + cn, with c a constant
Consider the following recurrence equation: T(0) = 1, and T(n) = T(n-1) + n for n >= 1. What is the big-O of T?
A) T(n) in O(n^2)
B) T(n) in O(n log(n))
C) T(n) in O(n)
D) T(n) in O(2^n)
A) T(n) in O(n^2)
What is a correct clause for the case n>1 for a recurrence equation describing the worst-case running time of max-heapify, for inputs of size n?
A) T(n) <= T((2n)/3) + c, with c a constant
B) T(n) <= T(n-1) + c, with c a constant
C) T(n) <= 2T(n/2) + cn, with c a constant
D) T(n) <= T(n/2) + c, with c a constant
A) T(n) <= T((2n)/3) + c, with c a constant
Which one of the following arrays is(are) a max-heap?
A) [7,6,3,5,4,2,1]
B) [7,6,2,5,4,3,1]
C) [1,2,3,4,5,6,7]
D) [7,6,4,1,2,5,3]
E) [7,4,6,3,1,5,2]
A) [7,6,3,5,4,2,1]
E) [7,4,6,3,1,5,2]
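The max-heap property can be checked directly from the array encoding. A Python sketch (0-based, so the children of index i sit at 2i+1 and 2i+2) verifying all five options:

```python
def is_max_heap(a):
    """Check the max-heap property: no child is larger than its parent."""
    n = len(a)
    for i in range(n):
        for c in (2 * i + 1, 2 * i + 2):   # left and right child, 0-based
            if c < n and a[c] > a[i]:
                return False
    return True

assert is_max_heap([7, 6, 3, 5, 4, 2, 1])        # option A
assert is_max_heap([7, 4, 6, 3, 1, 5, 2])        # option E
assert not is_max_heap([7, 6, 2, 5, 4, 3, 1])    # option B: child 3 > parent 2
assert not is_max_heap([1, 2, 3, 4, 5, 6, 7])    # option C
assert not is_max_heap([7, 6, 4, 1, 2, 5, 3])    # option D: child 5 > parent 4
```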
We consider a max-heap containing n different keys. Why do we have that the height of such a max-heap is approximately log(n)?
A) Because the keys in a max-heap are partially ordered.
B) Because a max-heap is a binary tree.
C) Because a max-heap is an almost-complete binary tree.
D) Because all elements are different
C) Because a max-heap is an almost-complete binary tree.
How many different max-heaps with keys 1, 2, 3, 4 exist?
A) 1
B) 2
C) 3
D) 4
C) 3
What is the least and the largest number of elements that an almost complete binary tree of height h can have?
A) At least 2^h and at most 2^(h+1).
B) At least 2^(h-1) and at most 2^(h+1).
C) At least 2^(h-1) and at most 2^(h+1) - 1.
D) At least 2^h and at most 2^(h+1) - 1.
D) At least 2^h and at most 2^(h+1) - 1.
We represent a heap of size n as an array. At what indices of the array are the leaves of the heap? (We use floor(x) for the largest integer that is less than or equal to x, and ceiling(x) for the smallest integer that is larger than or equal to x.)
A) At indices floor(n/2)+1, floor(n/2)+2, … , n.
B) At indices floor(n/2), floor(n/2)+1, …, n.
C) At indices 1, 2, …, floor(n/2).
D) At indices 1, 2, …, floor(n/2)-1.
A) At indices floor(n/2)+1, floor(n/2)+2, … , n.
We consider executions of MaxHeapify on an input array A of length n and an index i that is at least n/2. What is, intuitively, the running time of such executions?
A) Elementary time.
B) Approximately log(n) time, with n the length of the array A.
C) Approximately n time, with n the length of the array A.
D) Approximately n/2 time with n the length of the array A.
A) Elementary time.
We consider the worst-case running time of MaxHeapify. Which of the following (one or more) options are correct?
A) T(h) is in O(h) for inputs with height h.
B) T(h) is in O (log(h)) for inputs with height h.
C) T(n) is in O(n) for inputs with n keys.
D) T(n) is in O(log(n)) for inputs with n keys.
E) T(n) is in O(n/2) for inputs with n keys.
A) T(h) is in O(h) for inputs with height h.
C) T(n) is in O(n) for inputs with n keys.
D) T(n) is in O(log(n)) for inputs with n keys.
E) T(n) is in O(n/2) for inputs with n keys.
What is the result of performing the first iteration of the bottom-up procedure buildMaxHeap to [1, 2, 3, 4, 5, 6, 7, 8]?
A) [1, 2, 7, 4, 5, 6, 3, 8]
B) [1, 2, 7, 8, 5, 6, 3, 4]
C) [1, 2, 3, 8, 5, 6, 7, 4]
D) [1, 5, 3, 8, 2, 6, 7, 4]
C) [1, 2, 3, 8, 5, 6, 7, 4]
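The first iteration of BuildMaxHeap calls MaxHeapify on index floor(n/2) = 4 (1-based), i.e. node 4 with its only child 8. A Python sketch (0-based indices, so that call is on index 3):

```python
def max_heapify(a, i, heapsize):
    """Sift a[i] down (0-based) until the subtree rooted at i
    satisfies the max-heap property."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heapsize and a[l] > a[largest]:
        largest = l
    if r < heapsize and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heapsize)

a = [1, 2, 3, 4, 5, 6, 7, 8]
max_heapify(a, 3, len(a))          # first BuildMaxHeap iteration
assert a == [1, 2, 3, 8, 5, 6, 7, 4]   # option C
```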
Which of the following is a description of heapsort?
A) We sort the array such that in the corresponding tree on every path from the root to a leaf the labels are increasing.
B) First we turn the input-array into a max-heap. Then we repeatedly swap the first element with the last element from the max-heap, disconnect the last element, and reconstruct the heap.
C) First we turn the input-array into a max-heap. Then we repeatedly perform the operation remove-max and store the so found max at the end of the output-array.
D) We split the input-array into two more or less equal parts. The second half is already a max-heap. We sort the first half and merge it with the second one to obtain a max-heap.
B) First we turn the input-array into a max-heap. Then we repeatedly swap the first element with the last element from the max-heap, disconnect the last element, and reconstruct the heap.
We apply heapsort to the input-array [5, 8, 1, 3, 7, 2, 4, 6]. What is the result of the first part (apply BuildMaxHeap) and what is the result of performing the first iteration of the second part?
A) [8, 7, 4, 6, 5, 2, 1, 3] and [7, 5, 6, 4, 3, 2, 1, 8]
B) [8, 7, 6, 4, 5, 2, 1, 3] and [7, 5, 6, 4, 3, 2, 1, 8]
C) [8, 7, 6, 4, 5, 2, 1, 3] and [7, 6, 4, 3, 5, 2, 1, 8]
D) [8, 7, 4, 6, 5, 2, 1, 3] and [7, 6, 4, 3, 5, 2, 1, 8]
D) [8, 7, 4, 6, 5, 2, 1, 3] and [7, 6, 4, 3, 5, 2, 1, 8]
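Both phases can be replayed in a few lines of Python (standard textbook heapsort, 0-based indices; helper names are mine):

```python
def max_heapify(a, i, heapsize):
    """Sift a[i] down within a[0:heapsize] (0-based indices)."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heapsize and a[l] > a[largest]:
        largest = l
    if r < heapsize and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heapsize)

def build_max_heap(a):
    """Bottom-up heap construction: heapify internal nodes right to left."""
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))

a = [5, 8, 1, 3, 7, 2, 4, 6]
build_max_heap(a)
assert a == [8, 7, 4, 6, 5, 2, 1, 3]       # first part of option D

# first iteration of the second phase: swap root with last element,
# shrink the heap, and restore the heap property at the root
heapsize = len(a) - 1
a[0], a[heapsize] = a[heapsize], a[0]
max_heapify(a, 0, heapsize)
assert a == [7, 6, 4, 3, 5, 2, 1, 8]       # second part of option D
```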
What is the running time of heapsort if the input is an array that is already sorted (in increasing order)?
A) The running time is then in O(n log(n)).
B) The running time is then in O(n).
C) The running time is then in O(n^2).
D) The running time is then in O(log(n)).
A) The running time is then in O(n log(n)).
We consider the algorithm for bottom-up max-heap construction. What happens if we let the loop index increase from 1 up to floor(A.length/2) instead?
A) Then the algorithm is no longer correct.
B) It works equally well.
C) Then we need to take the ceiling of A.length / 2 instead of the floor in order for the algorithm to be correct.
D) Then we should do the recursive call of MaxHeapify on A and i-1 instead of on A and i in order for the algorithm to be correct.
A) Then the algorithm is no longer correct.
We apply ExtractMax to a max-heap; the result is output 10 and the max-heap [6,3,5,2,1]. Which of the following could have been the input?
A) [10,6,5,3,2,1]
B) [10,3,6,2,1,5]
C) [10,6,5,2,3,1]
D) [10,5,6,2,1,3]
E) [10,5,1,2,3,6]
B) [10,3,6,2,1,5]
C) [10,6,5,2,3,1]
What is the effect of applying the ExtractMax procedure on the max-heap [8,7,4,6,1,2,3,5] ?
A) [7,6,4,5,1,2,3,5] with heapsize 7
B) [7,6,4,5,1,2,3,5] with heapsize 8
C) [7,5,4,6,1,2,3,5] with heapsize 7
D) [7,6,5,4,3,2,1,5] with heapsize 7
A) [7,6,4,5,1,2,3,5] with heapsize 7
What is the result of adding the key 10 to the max-heap [7,6,5,4,3,2,1,0,0,0] with initially heapsize 7?
A) [10,7,6,5,4,3,2,1,0,0] with heapsize 8.
B) [10,7,6,5,2,3,4,1,0,0] with heapsize 10.
C) [10,7,5,6,3,2,1,4,0,0] with heapsize 8.
D) [7,5,6,1,2,3,4,10,0,0] with heapsize 8.
C) [10,7,5,6,3,2,1,4,0,0] with heapsize 8.
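Insertion places the new key in the first free slot and bubbles it up past smaller parents. A Python sketch (0-based, so the parent of index i is (i-1)//2; function name is mine):

```python
def heap_insert(a, heapsize, key):
    """Insert `key` into the max-heap stored in a[0:heapsize] (0-based,
    with spare capacity at the end); return the new heapsize."""
    i = heapsize
    a[i] = key
    # bubble the new key up while it exceeds its parent
    while i > 0 and a[(i - 1) // 2] < a[i]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2
    return heapsize + 1

a = [7, 6, 5, 4, 3, 2, 1, 0, 0, 0]
heapsize = heap_insert(a, 7, 10)
assert heapsize == 8
assert a == [10, 7, 5, 6, 3, 2, 1, 4, 0, 0]   # option C
```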
What is the result of executing the first four iterations (j = 1,2,3,4) of the algorithm partition to the input array A = [3,7,6,1,8,5,2,4] and indices p=1 and r=8? Note that the pivot is A[r].
A) A = [3,1,6,7,8,5,2,4]
B) A = [3,7,6,4,8,5,2,1]
C) A = [3,1,7,6,8,5,2,4]
D) A = [1,3,6,7,8,5,2,4]
A) A = [3,1,6,7,8,5,2,4]
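A Python sketch of the Lomuto partition loop, stopped after the first four passes (the helper takes the question's 1-based p and r but works on 0-based lists internally):

```python
def partition_steps(a, p, r, iterations):
    """Run the first `iterations` passes of Lomuto partition on a[p..r]
    (1-based indices as in the question), with pivot a[r]."""
    a = list(a)
    pivot = a[r - 1]
    i = p - 1              # 0-based boundary of the "<= pivot" region
    for j in range(p - 1, p - 1 + iterations):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    return a

print(partition_steps([3, 7, 6, 1, 8, 5, 2, 4], 1, 8, 4))
# [3, 1, 6, 7, 8, 5, 2, 4]  -- matches option A
```

Only j = 1 (key 3) and j = 4 (key 1) are at most the pivot 4, so only those two passes change the array.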
We (completely) perform the procedure partition which is used in quicksort. The resulting array is [2, 3, 1, 4, 5, 8, 7, 9, 10]. Which are all keys that could have been the pivot?
A) 5, 9, 10
B) 4, 5, 8, 10
C) 4, 5, 9, 10
D) 4, 5, 8, 9, 10
C) 4, 5, 9, 10
What is the worst-case running time of the algorithm partition used in quicksort, applied to an array A of n elements, and indices p and r in A?
A) The worst-case running time is in Theta(n).
B) The worst-case running time is in Theta(log(n)).
C) The worst-case running time is in Theta(n^2).
D) The worst-case running time is in Theta(n log(n)).
A) The worst-case running time is in Theta(n).