LN7 Master Theorem, Counting Inversions, Fast Multiplication Flashcards
What is the goal of the Counting Inversions problem?
The goal is to count the number of inversions in a sequence, where an inversion is defined as a pair of indices (i, j) such that i < j and a[i] > a[j].
What is the naive solution for Counting Inversions, and what is its time complexity?
The naive solution is to use a nested loop to count all inversions, which has a time complexity of O(n^2).
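A minimal Python sketch of the naive counter (illustrative; the function name is mine, not from the notes):

    def count_inversions_naive(a):
        # Check every pair (i, j) with i < j: O(n^2) comparisons.
        count = 0
        n = len(a)
        for i in range(n):
            for j in range(i + 1, n):
                if a[i] > a[j]:
                    count += 1
        return count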
What is the main idea behind using divide-and-conquer for Counting Inversions?
The main idea is to split the sequence into two halves, count inversions within each half recursively, and count cross-inversions between the halves during the merge step, exactly as in mergesort; this gives O(n log n) overall.
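A sketch of the merge-based counter in Python (one possible implementation, assuming we may return a sorted copy alongside the count):

    def count_inversions(a):
        # Returns (sorted copy of a, number of inversions); O(n log n).
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, inv_left = count_inversions(a[:mid])
        right, inv_right = count_inversions(a[mid:])
        merged, cross = [], 0
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                # left[i] > right[j]: right[j] is smaller than every
                # remaining left element, so each one forms a cross-inversion.
                merged.append(right[j])
                j += 1
                cross += len(left) - i
        merged += left[i:] + right[j:]
        return merged, inv_left + inv_right + cross

Usage: _, inversions = count_inversions(a).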
What is the significance of log(n!) in the context of sorting lower bounds?
log(n!) is the information-theoretic minimum number of comparisons needed to sort n elements; since log(n!) = Θ(n log n) (by Stirling's approximation), comparison-based sorting has a lower bound of Ω(n log n).
Explain the lower bound concept for sorting and searching algorithms.
The lower bound follows from an information-theoretic (decision-tree) argument: each comparison yields only a binary answer, so b comparisons can distinguish at most 2^b outcomes, while sorting must distinguish all n! permutations; hence b ≥ log2(n!) = Ω(n log n).
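The same counting argument, written out in LaTeX (standard decision-tree reasoning):

    \[
      2^{b} \ge n! \;\Longrightarrow\;
      b \;\ge\; \log_2 n! \;=\; \sum_{k=1}^{n} \log_2 k
        \;\ge\; \frac{n}{2} \log_2 \frac{n}{2} \;=\; \Omega(n \log n).
    \]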
How does Bucketsort achieve O(m + n) time complexity, and in what cases is it optimal?
Bucketsort runs in O(m + n) when the keys come from a known range of size m: allocate m buckets, distribute the n elements, and concatenate the buckets. It is optimal (linear) when m = O(n), e.g., for small integer keys or values uniformly distributed over a fixed range.
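A minimal sketch in Python, assuming the keys are integers in range(m):

    def bucketsort(a, m):
        # O(m + n) time and space for integer keys in range(m).
        buckets = [[] for _ in range(m)]
        for x in a:           # distribute: O(n)
            buckets[x].append(x)
        out = []
        for b in buckets:     # concatenate: O(m + n)
            out.extend(b)
        return out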
What is the optimal time complexity for finding the median in the “Center of a Point Set on the Line” problem?
The point minimizing the sum of distances to n given points on a line is their median, which can be found in O(n) time with a selection algorithm (e.g., quickselect or median-of-medians), without fully sorting the input.
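An expected-O(n) quickselect sketch in Python (one way to do the selection; not necessarily the variant from the notes):

    import random

    def quickselect(a, k):
        # Returns the k-th smallest element of a (0-indexed); expected O(n).
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        hi = [x for x in a if x > pivot]
        if k < len(lo):
            return quickselect(lo, k)
        if k < len(lo) + len(eq):
            return pivot
        return quickselect(hi, k - len(lo) - len(eq))

    # The 1-D center is the median:
    # center = quickselect(points, len(points) // 2)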
What are the main differences between in-place and non-in-place algorithms?
In-place algorithms (e.g., Quicksort) rearrange the input within the original array using only O(1) extra memory (beyond the recursion stack), whereas non-in-place algorithms (e.g., Mergesort) need Θ(n) additional memory for an auxiliary array.
How does using random pivots in Quicksort help achieve expected O(n log n) complexity?
Random pivots make extremely unbalanced splits unlikely on every input: no fixed adversarial input can consistently trigger bad partitions, so the splits are balanced in expectation and the expected running time is O(n log n) rather than the worst-case O(n^2).
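A minimal in-place randomized Quicksort sketch in Python (Lomuto partition; an illustrative choice, not the only one):

    import random

    def quicksort(a, lo=0, hi=None):
        # Sorts a[lo..hi] in place with a uniformly random pivot;
        # expected O(n log n) on any input.
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        p = random.randint(lo, hi)        # random pivot index
        a[p], a[hi] = a[hi], a[p]         # move pivot to the end
        pivot, i = a[hi], lo
        for j in range(lo, hi):           # Lomuto partition
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]         # pivot into its final position
        quicksort(a, lo, i - 1)
        quicksort(a, i + 1, hi)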