Lecture 18-19 - Divide-And-Conquer Flashcards

1
Q

Define the divide-and-conquer paradigm.

A

Recursive in structure:
– Divide the problem into sub-problems that are similar to the original but smaller in size.
– Conquer the sub-problems by solving them recursively; if they are small enough, just solve them in a straightforward manner.
– Combine the solutions of the sub-problems to create a solution to the original problem.

2
Q

Describe in words how merge sort is accomplished using divide and conquer.

A

Sorting problem: sort a sequence of n elements into non-decreasing order.
• Divide: divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
• Conquer: sort the two subsequences recursively using merge sort.
• Combine: merge the two sorted subsequences to produce the sorted answer.
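
A minimal Python sketch of this scheme, for illustration only (the names merge_sort, left, and right are not from the lecture):

def merge_sort(a):
    # Base case: a sequence of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    # Divide: split the sequence into two halves of about n/2 elements each.
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # Conquer: sort each half recursively.
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted sequence.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

For example, merge_sort([5, 2, 4, 7, 1, 3, 2, 6]) returns [1, 2, 2, 3, 4, 5, 6, 7].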

3
Q
Prove the correctness of Merge.

Merge(A, p, q, r)
1.  n1 ← q – p + 1
2.  n2 ← r – q
3.  for i ← 1 to n1
4.      do L[i] ← A[p + i – 1]
5.  for j ← 1 to n2
6.      do R[j] ← A[q + j]
7.  L[n1 + 1] ← ∞
8.  R[n2 + 1] ← ∞
9.  i ← 1
10. j ← 1
11. for k ← p to r
12.     do if L[i] <= R[j]
13.         then A[k] ← L[i]
14.              i ← i + 1
15.         else A[k] ← R[j]
16.              j ← j + 1
A

Loop invariant (main for loop):
• At the start of each iteration of the for loop, subarray A[p..k – 1] contains the k – p smallest elements of L and R, in sorted order.
• L[i] and R[j] are the smallest elements of L and R that have not been copied back into A.

Initialization:
Before the first iteration:
• A[p..k – 1] is empty.
• i = j = 1.
• L[1] and R[1] are the smallest elements of L and R not copied to A.

Maintenance:
Case 1: L[i] ≤ R[j]
• By the LI, A contains the k – p smallest elements of L and R in sorted order.
• By the LI, L[i] and R[j] are the smallest elements of L and R not yet copied into A.
• Line 13 results in A containing the k – p + 1 smallest elements (again in sorted order). Incrementing i and k reestablishes the LI for the next iteration.
Case 2: Similar argument with L[i] > R[j].

Termination:
• On termination, k = r + 1.
• By the LI, A[p..r] contains the r – p + 1 smallest elements of L and R in sorted order.
• L and R together contain r – p + 3 elements, including the two sentinels. So all elements other than the sentinels have been copied back into A, in sorted order.
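
A runnable Python transcription of the pseudocode above, assuming 0-based indexing and float('inf') as the sentinel value (the pseudocode uses 1-based indexing):

def merge(A, p, q, r):
    # Copy A[p..q] into L and A[q+1..r] into R, then append a sentinel to each.
    L = A[p:q + 1] + [float('inf')]
    R = A[q + 1:r + 1] + [float('inf')]
    i = j = 0
    # Invariant: A[p..k-1] holds the k - p smallest elements of L and R, in sorted order.
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

For example, with A = [2, 4, 5, 7, 1, 2, 3, 6], calling merge(A, 0, 3, 7) leaves A = [1, 2, 2, 3, 4, 5, 6, 7].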

4
Q

What is the running time of merge sort? Express it in terms of the Divide, Conquer, and Combine steps.

A

• Divide: computing the middle takes Θ(1).
• Conquer: solving the 2 subproblems takes 2T(n/2).
• Combine: merging n elements takes Θ(n).
• Total:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1
⇒ T(n) = Θ(n lg n)

5
Q

Assuming n is a power of 2, prove that merge sort satisfies T(n) = n log2 n using a recursion tree.

A

Draw the recursion tree for T(n) = 2T(n/2) + n. At depth i there are 2^i subproblems, each contributing n/2^i work, so every level costs n in total. There are log2 n levels of non-trivial work (the leaves cost 0), hence T(n) = n log2 n.
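
Written out as a level-by-level sum (a sketch assuming T(1) = 0, the same base case as the induction proof on the next card):

\[
T(n) \;=\; \sum_{i=0}^{\log_2 n - 1} 2^i \cdot \frac{n}{2^i}
      \;=\; \sum_{i=0}^{\log_2 n - 1} n
      \;=\; n \log_2 n .
\]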

6
Q

Assuming n is a power of 2, prove that merge sort satisfies T(n) = n log2 n using induction on n.

A

・Base case: when n = 1, T(1) = 0.
・Inductive hypothesis: assume T(n) = n log2 n.
・Goal: show that T(2n) = 2n log2 (2n).

T(2n) = 2 T(n) + 2n
= 2 n log2 n + 2n
= 2 n (log2 (2n) – 1) + 2n
= 2 n log2 (2n). ▪

7
Q

Prove that T(n) ≤ n ⌈log2 n⌉
for T(n) = 0 if n = 1,
T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + n otherwise.
Use strong induction.

A

・Base case: n = 1, T(1) = 0 = 1 · ⌈log2 1⌉.
・Define n1 = ⌊n/2⌋ and n2 = ⌈n/2⌉, so n1 ≤ n2 and n1 + n2 = n.
・Induction step: assume the claim holds for 1, 2, … , n – 1.
T(n) ≤ T(n1) + T(n2) + n
     ≤ n1 ⌈log2 n1⌉ + n2 ⌈log2 n2⌉ + n
     ≤ n1 ⌈log2 n2⌉ + n2 ⌈log2 n2⌉ + n
     = n ⌈log2 n2⌉ + n (see @)
     ≤ n (⌈log2 n⌉ – 1) + n
     = n ⌈log2 n⌉

@ Key fact:
n2 = ⌈n/2⌉
   ≤ ⌈2^⌈log2 n⌉ / 2⌉
   = 2^⌈log2 n⌉ / 2
so log2 n2 ≤ ⌈log2 n⌉ – 1, hence ⌈log2 n2⌉ ≤ ⌈log2 n⌉ – 1.

8
Q

Describe how to multiply two integers x and y using divide and conquer.

A

To multiply two n-bit integers x and y:
・Divide x and y into low- and high-order bits.
・Multiply four (n/2)-bit integers, recursively.
・Add and shift to obtain the result.

(2^m a + b)(2^m c + d) = 2^(2m) ac + 2^m (bc + ad) + bd
where
m = n/2
a = ⌊x / 2^m⌋
b = x mod 2^m
c = ⌊y / 2^m⌋
d = y mod 2^m
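
A Python sketch of this four-multiplication scheme (the function name multiply, the n parameter, and the use of shifts and masks to split the bits are illustrative choices, not from the lecture):

def multiply(x, y, n):
    # Multiply two n-bit non-negative integers x and y.
    if n <= 1:
        return x * y                      # base case: single-bit product
    m = n // 2                            # split point, about n/2 bits
    a, b = x >> m, x & ((1 << m) - 1)     # a = high bits of x, b = low bits
    c, d = y >> m, y & ((1 << m) - 1)     # c = high bits of y, d = low bits
    ac = multiply(a, c, n - m)
    bd = multiply(b, d, m)
    bc = multiply(b, c, n - m)
    ad = multiply(a, d, n - m)
    # Combine: shift the partial products back into place.
    return (ac << (2 * m)) + ((bc + ad) << m) + bd

For example, multiply(13, 11, 4) returns 143.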
9
Q

Prove the following proposition:

The divide-and-conquer multiplication algorithm requires Θ(n^2) bit operations to multiply two n-bit integers.

A

Apply case 1 of the master theorem to the recurrence:
T(n) = 4T(n/2) [recursive calls] + Θ(n) [add and shift]
⇒ T(n) = Θ(n^2)
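
Spelled out, the case-1 check behind this (standard master-theorem notation; not on the card itself):

\[
n^{\log_b a} = n^{\log_2 4} = n^2, \qquad
f(n) = \Theta(n) = O\!\left(n^{2-\varepsilon}\right) \text{ for } \varepsilon = 1
\;\Rightarrow\; T(n) = \Theta(n^2).
\]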

10
Q

Define the Karatsuba trick. How does it help divide-and-conquer multiplication?

A

bc + ad = ac + bd – (a – b)(c – d)

Only three multiplications of (n/2)-bit integers are needed: ac, bd, and (a – b)(c – d).
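
A Python sketch built on this identity (the name karatsuba is illustrative; the third product recurses on |a – b| and |c – d|, with the sign tracked separately):

def karatsuba(x, y, n):
    # Multiply two n-bit non-negative integers using three recursive products.
    if n <= 1:
        return x * y
    m = n // 2
    a, b = x >> m, x & ((1 << m) - 1)      # high / low halves of x
    c, d = y >> m, y & ((1 << m) - 1)      # high / low halves of y
    ac = karatsuba(a, c, n - m)
    bd = karatsuba(b, d, m)
    # Third product (a - b)(c - d): recurse on magnitudes, track the sign.
    s = -1 if (a < b) != (c < d) else 1
    p = karatsuba(abs(a - b), abs(c - d), n - m)
    middle = ac + bd - s * p               # equals bc + ad by the identity
    return (ac << (2 * m)) + (middle << m) + bd

For example, karatsuba(13, 11, 4) also returns 143, but it uses three recursive products per level instead of four.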

11
Q

Prove that the Karatsuba algorithm requires O(n^1.585) bit operations to multiply two n-bit integers.

A

T(n) = 3 T(n / 2) + Θ(n) ⇒ T(n) = Θ(n^lg3) = O(n^1.585)

12
Q

How do you determine the running time of a divide-and-conquer algorithm? Explain each term.

A

The Master Theorem, which applies to recurrences of the form
T(n) = a * T(n/b) + f(n)

where
・a ≥ 1 is the number of subproblems.
・b > 1 is the factor by which the subproblem size decreases.
・f(n) = work to divide and combine the subproblems.

Recursion tree:
・k = logb(n) levels.
・a^i = number of subproblems at level i.
・n / b^i = size of each subproblem at level i.
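
For reference, the three cases in their usual CLRS-style statement (not written out on this card, but this is what cases 1, 2, and 3 refer to on the cards below):

\[
\textbf{Case 1: } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right)
\]
\[
\textbf{Case 2: } f(n) = \Theta\!\left(n^{\log_b a}\right)
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\log n\right)
\]
\[
\textbf{Case 3: } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ for some } \varepsilon > 0,
  \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1
  \;\Rightarrow\; T(n) = \Theta\!\left(f(n)\right)
\]

The answers below that say "case 2 with p = 1" use the extended form of case 2: if f(n) = Θ(n^(logb a) · log^p n) for p ≥ 0, then T(n) = Θ(n^(logb a) · log^(p+1) n).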

13
Q

What’s the T(n) of:

  1. Merge Sort
  2. Binary Search
  3. Karatsuba
A
  1. Merge sort: T(n) = 2 * T(n/2) + n
  2. Binary search: T(n) = T(n/2) + 1
  3. Karatsuba: T(n) = 3 * T(n/2) + n
14
Q

Prove that if T (n) satisfies T (n) = 3 T (n / 2) + n, with T (1) = 1, then T (n) = Θ(n^lg 3).

A
The level costs in the recursion tree form a geometric series with ratio r = 3/2:
T(n) = (1 + r + r^2 + ... + r^log2(n)) * n = 3 n^log2(3) – 2n = Θ(n^lg 3).
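
The geometric sum evaluated step by step (a sketch; it uses 3^(log2 n) = n^(log2 3), the identity on the next card):

\[
T(n) = n \sum_{i=0}^{\log_2 n} \left(\tfrac{3}{2}\right)^{i}
     = n \cdot \frac{(3/2)^{\log_2 n + 1} - 1}{3/2 - 1}
     = 3n \left(\tfrac{3}{2}\right)^{\log_2 n} - 2n
     = 3 \cdot 3^{\log_2 n} - 2n
     = 3\, n^{\log_2 3} - 2n .
\]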
15
Q

Write 3^(log2n) in a different way.

A

n ^ log2(3)
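
One way to see the identity, writing 3 as 2^(log2 3):

\[
3^{\log_2 n} = \left(2^{\log_2 3}\right)^{\log_2 n}
             = 2^{\log_2 3 \,\cdot\, \log_2 n}
             = \left(2^{\log_2 n}\right)^{\log_2 3}
             = n^{\log_2 3}.
\]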

16
Q

Prove: If T (n) satisfies T(n) = 2 T (n / 2) + n, with T (1) = 1, then T (n) = Θ(n log n).

A
r = 1
T(n) = (1 + r + r^2 + ... + r^log2(n)) * n = n (log2(n) + 1) = Θ(n log n)
17
Q

Prove: If T (n) satisfies T (n) = 3 T (n / 4) + n^5, with T (1) = 1, then T (n) = Θ(n^5).

A

r = 3/4^5 = 3/1024

n^5 ≤ T(n) ≤ (1 + r + r^2 + …) * n^5 ≤ (1/(1 – r)) * n^5, and 1/(1 – r) is a constant, so T(n) = Θ(n^5).

18
Q

Use the Master theorem to find the running time of T (n) = 3 T(n / 2) + n

A

Using case 1 with ε = log2(3) – 1, T(n) = Θ(n^lg 3).

19
Q

Use the Master theorem to find the running time of T (n) = 2 T(n / 2) + Θ(n log n).

A

Using the extended version of case 2 with p = 1,

T(n) = Θ(n log^2 n).

20
Q

Use the Master theorem to find the running time of T (n) = 3 T(n / 4) + n^5.

A

Using case 3 with ε = 1 – log4(3) and c = 3/4,

T (n) = Θ(n^5).

21
Q

Use the Master theorem to find the running time of:

  1. T (n) = 3 T(n / 2) + n^2.
  2. T (n) = T(n / 2) + 2^n.
  3. T (n) = 16 T(n / 4) + n.
  4. T (n) = 2 T(n / 2) + nlogn.
  5. T (n) = 2^n * T(n / 4) + n^n.
A
  1. Θ(n^2) (case 3)
  2. Θ(2^n) (case 3)
  3. Θ(n^2) (case 1)
  4. Θ(n log^2 n) (case 2, extended version)
  5. The master theorem does not apply because a = 2^n is not a constant.
22
Q
Use the Master theorem to find the running time of:
1. T (n) = 8 T(n / 2) + Θ(n^2)
(standard matrix multiplication)
2. T (n) = 7 T(n / 2) + Θ(n^2)
(Strassen's algorithm)
A
  1. Θ(n^3) (case 1)
  2. Θ(n^lg 7) = O(n^2.81) (case 1)
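
For completeness, the case-1 computations behind these answers (not spelled out on the card):

\[
a = 8,\ b = 2:\quad n^{\log_2 8} = n^3,\ \ \Theta(n^2) = O\!\left(n^{3-\varepsilon}\right)
  \;\Rightarrow\; T(n) = \Theta(n^3).
\]
\[
a = 7,\ b = 2:\quad n^{\log_2 7} \approx n^{2.81},\ \ \Theta(n^2) = O\!\left(n^{\log_2 7-\varepsilon}\right)
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_2 7}\right) = O(n^{2.81}).
\]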