LN6 Problems Flashcards
Median finding, Sorting
What is the general form of the recurrence relation for analyzing divide-and-conquer algorithms?
T(n) = aT(n/b) + cn^k, where “a” is the number of subproblems, “b” is the factor by which the problem size shrinks at each level, and “cn^k” is the non-recursive work per call (splitting the input and combining the subproblem results).
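For example (standard instances, added for illustration): binary search satisfies T(n) = T(n/2) + c, so a = 1, b = 2, k = 0; Mergesort satisfies T(n) = 2T(n/2) + cn, so a = 2, b = 2, k = 1.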
What are the three cases in the Master Theorem for analyzing recurrences?
- a > b^k, leading to T(n) = O(n^log_b(a))
- a = b^k, leading to T(n) = O(n^k * log(n))
- a < b^k, leading to T(n) = O(n^k).
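Quick worked examples of each case (standard textbook recurrences, not tied to a specific problem set):
- T(n) = 4T(n/2) + n: a = 4 > b^k = 2, so T(n) = O(n^log_2(4)) = O(n^2).
- T(n) = 2T(n/2) + n: a = 2 = b^k = 2, so T(n) = O(n log n).
- T(n) = T(n/2) + n: a = 1 < b^k = 2, so T(n) = O(n).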
How does the Master Theorem determine the time complexity based on “a” and “b^k”?
By comparing “a” to “b^k”:
If “a” is greater than “b^k”, the recursive subproblems (the leaves of the recursion tree) dominate, giving O(n^log_b(a)).
If “a” equals “b^k”, every level of the recursion contributes equally, introducing a log n factor: O(n^k log n).
If “a” is less than “b^k”, the top-level work cn^k dominates, giving O(n^k).
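A minimal Python sketch of this comparison (the function name and output format are my own, not from the notes):

```python
import math

def master_theorem(a: int, b: int, k: int) -> str:
    """Classify T(n) = a*T(n/b) + c*n^k by comparing a against b^k."""
    if a > b ** k:
        return f"O(n^{math.log(a, b):.3g})"   # leaves dominate: n^(log_b a)
    elif a == b ** k:
        return f"O(n^{k} log n)"              # all levels contribute equally
    else:
        return f"O(n^{k})"                    # top-level work dominates

print(master_theorem(2, 2, 1))  # Mergesort recurrence -> O(n^1 log n)
print(master_theorem(4, 2, 1))  # -> O(n^2)
print(master_theorem(1, 2, 1))  # -> O(n^1)
```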
In sorting, what is the time complexity of Bubble Sort, and why is it inefficient for large datasets?
Bubble Sort has a time complexity of O(n^2): it makes up to n passes over the array, and each pass only swaps adjacent pairs, so an out-of-place element moves at most one position per pass. When many elements are far from their final positions, nearly all n passes of O(n) work are needed.
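A minimal Bubble Sort sketch in Python (illustrative only; the early-exit flag is an optional optimization):

```python
def bubble_sort(arr):
    """Repeatedly sweep the array, swapping adjacent out-of-order pairs.
    Each pass moves an element at most one position toward its final place,
    which is why the worst case is O(n^2)."""
    a = list(arr)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # no swaps in this pass: already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```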
Why does Insertion Sort achieve O(n^2) complexity despite O(n log n) comparisons?
Even if binary search is used to locate each insertion point (only O(log n) comparisons per element, O(n log n) in total), each insertion may still require shifting up to O(n) elements to make room, so the overall running time is O(n^2).
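A minimal Insertion Sort sketch in Python, showing the shifting loop that dominates the cost:

```python
def insertion_sort(arr):
    """Grow a sorted prefix, shifting larger elements right to make room.
    The shifting alone can cost O(n) per element, so the total is O(n^2)."""
    a = list(arr)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift elements greater than key
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```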
Describe how Mergesort works and its time complexity.
Mergesort divides the array into two halves, recursively sorts them, and merges the sorted halves. It has a time complexity of O(n log n) due to the T(n) = 2T(n/2) + O(n) recurrence relation.
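A minimal Mergesort sketch in Python (returns a new list rather than sorting in place):

```python
def merge_sort(arr):
    """Split in half, sort each half recursively, then merge.
    The merge step is O(n), giving T(n) = 2T(n/2) + O(n) = O(n log n)."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```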
What is the primary disadvantage of Mergesort?
Mergesort is not in-place: the merge step needs O(n) auxiliary memory, which can be a drawback when space is limited.
How does Quicksort achieve O(n log n) on average?
By choosing a pivot, partitioning the elements around it in O(n) time, and recursively sorting the two sides. When the pivot splits the array into roughly equal parts, the recursion depth is O(log n), giving O(n log n) overall.
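A minimal Quicksort sketch in Python with a random pivot (written with extra lists for readability; the usual in-place partitioning achieves the same O(n) cost per level without the copies):

```python
import random

def quicksort(arr):
    """Partition around a random pivot, then recurse on the two sides.
    Each partition costs O(n); with reasonably balanced pivots the
    expected total is O(n log n)."""
    if len(arr) <= 1:
        return list(arr)
    pivot = random.choice(arr)
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```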
What is the worst-case time complexity of Quicksort, and how can it occur?
The worst-case time complexity is O(n^2), which occurs when the pivot is consistently the smallest or largest element, leading to unbalanced partitions.
Why is choosing the median of three random elements beneficial in Quicksort?
It increases the likelihood of a balanced partition, reducing the chances of the worst-case O(n^2) time complexity and improving average performance.
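A minimal sketch of the median-of-three pivot rule in Python (the helper name is mine; it would replace random.choice in the Quicksort sketch above):

```python
import random

def median_of_three_pivot(arr):
    """Pick three random elements and return their median. Using this as
    the pivot makes a badly unbalanced split much less likely than a
    single random pivot."""
    samples = random.sample(arr, 3) if len(arr) >= 3 else list(arr)
    return sorted(samples)[len(samples) // 2]   # middle of the samples
```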
What problem does the Selection Algorithm solve, and what is its optimal time complexity?
The Selection problem asks for the kth smallest element of an unordered set; its optimal time complexity is O(n), achievable in expectation using random pivots (Quickselect).
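A minimal Quickselect sketch in Python (1-based k; names are mine, and it assumes 1 <= k <= len(arr)):

```python
import random

def quickselect(arr, k):
    """Find the kth smallest element. Partition around a random pivot and
    recurse into only one side, giving expected O(n) time."""
    pivot = random.choice(arr)
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    if k <= len(less):
        return quickselect(less, k)
    if k <= len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))

print(quickselect([5, 1, 4, 2, 8], 2))  # 2 (the 2nd smallest)
```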
Why is the Selection problem less complex than Sorting?
Selection requires finding a single element rather than ordering all elements, which allows for O(n) time complexity without full sorting.
How does the Master Theorem apply to the Mergesort recurrence T(n) = 2T(n/2) + O(n)?
With a = 2, b = 2, and k = 1, we get a = b^k, leading to T(n) = O(n log n), which is the time complexity of Mergesort.
Describe the “Counting Inversions” problem.
Given a sequence a_1, ..., a_n, an inversion is a pair of indices i < j with a_i > a_j. The number of inversions measures how far the sequence is from being sorted, and can be used to quantify how similar one ranking is to another.
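A minimal sketch of counting inversions with a modified Mergesort in Python, running in O(n log n):

```python
def count_inversions(arr):
    """Mergesort that also counts inversions: whenever an element from the
    right half is merged before remaining left-half elements, each of those
    remaining elements forms an inversion with it."""
    if len(arr) <= 1:
        return list(arr), 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])
    merged, i, j, inv = [], 0, 0, inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i    # right[j] beats all remaining left elements
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv

print(count_inversions([2, 4, 1, 3, 5])[1])  # 3 inversions: (2,1), (4,1), (4,3)
```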
What is the significance of log(n!) in the context of sorting lower bounds?
Any comparison-based sorting algorithm can be modeled as a decision tree with at least n! leaves (one per permutation of the input), so its worst-case number of comparisons is at least log2(n!) = Θ(n log n). This gives the Ω(n log n) lower bound for comparison-based sorting.
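A small numerical sanity check in Python (illustrative only): it prints log2(n!) alongside n*log2(n), showing the two grow at the same rate.

```python
import math

def comparison_lower_bound(n):
    """The decision tree of a comparison sort on n elements has at least n!
    leaves, so its height (worst-case comparisons) is at least log2(n!)."""
    lb = math.log2(math.factorial(n))
    print(f"n={n}: log2(n!) ~ {lb:.1f},  n*log2(n) ~ {n * math.log2(n):.1f}")

for n in (8, 16, 64):
    comparison_lower_bound(n)
```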