LN3: Dynamic Programming and WIS Flashcards
Why does the Earliest End First (EEF) algorithm fail for the Weighted Interval Scheduling problem?
Because intervals have different weights, choosing the interval with the earliest end time may not lead to the maximum total weight. An interval with an early end time could have a small weight and conflict with more valuable intervals, making EEF suboptimal for the weighted case.
What is the key challenge in solving the Weighted Interval Scheduling problem compared to the unweighted version?
The challenge is that intervals have different weights, adding complexity because we must consider both the interval’s weight and its overlap with other intervals. This means a simple greedy approach based on end times does not guarantee an optimal solution.
What observation allows us to limit the combinatorial explosion in the Weighted Interval Scheduling problem?
We observe that for any interval [s_j, f_j], among all partial solutions ending at f_j, it suffices to keep only one partial solution X_j with the maximum total weight. This means we can ignore other partial solutions with the same end time but lesser total weights.
How is the function OPT(j) defined in the context of Weighted Interval Scheduling?
OPT(j) is defined as the maximum total weight that can be achieved by selecting a subset of disjoint intervals from the first j intervals sorted by end time f₁ < f₂ < … < f_j.
What is the recursive formula used to compute OPT(j) in the Weighted Interval Scheduling problem?
The recursive formula is OPT(j) = max{OPT(j – 1), OPT(p(j)) + v_j}, where p(j) is the largest index i such that f_i < s_j (the last interval that does not conflict with interval j), and v_j is the weight of interval j. The base case is OPT(0) = 0, and we set p(j) = 0 when no compatible earlier interval exists.
How is p(j) defined and computed in the Weighted Interval Scheduling problem?
p(j) is the largest index i < j such that f_i < s_j, meaning the last interval that ends before the start of interval j, ensuring no overlap. p(j) can be computed efficiently by sorting intervals and using a pointer or binary search to find the correct index.
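As a small illustrative sketch (not the only way to do it), assuming intervals are given as (start, finish) pairs already sorted by finish time, each p(j) can be found with one binary search, for O(n log n) total:

```python
import bisect

def compute_p(intervals):
    """intervals: list of (start, finish) pairs sorted by finish time.
    Returns a 1-indexed list p, where p[j] is the largest index i
    with f_i < s_j, or 0 if no such interval exists (p[0] is unused)."""
    finishes = [f for _, f in intervals]
    p = [0]
    for s, _ in intervals:
        # bisect_left counts how many finish times are strictly less than s;
        # since intervals are sorted by finish, that count IS the largest valid index i.
        p.append(bisect.bisect_left(finishes, s))
    return p
```

For example, for intervals (1,3), (2,5), (4,6), (6,8), the third interval starts at 4, and only the first interval finishes before 4, so p(3) = 1.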
Why should we avoid implementing the recursive formula for OPT(j) in a recursive fashion?
Implementing OPT(j) recursively would lead to exponential time complexity because it would recompute OPT values multiple times. Instead, we should store computed OPT(j) values in an array and compute them iteratively to ensure polynomial time.
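A minimal sketch of the iterative version, assuming intervals are (start, finish) pairs sorted by finish time with weights given in a parallel list (the function name and input format are illustrative choices, not from the source):

```python
import bisect

def wis_opt(intervals, values):
    """Fill the OPT array bottom-up in O(n log n).
    OPT[j] = max total weight achievable using the first j intervals."""
    n = len(intervals)
    finishes = [f for _, f in intervals]
    OPT = [0] * (n + 1)  # base case: OPT[0] = 0
    for j in range(1, n + 1):
        s, _ = intervals[j - 1]
        p_j = bisect.bisect_left(finishes, s)  # last interval compatible with j
        # Either skip interval j, or take it together with the best
        # solution among the intervals ending before it starts.
        OPT[j] = max(OPT[j - 1], OPT[p_j] + values[j - 1])
    return OPT
```

Each OPT value is computed exactly once and stored, which is precisely what the naive recursion fails to do.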
How do we reconstruct the optimal set of intervals after computing OPT(n) in the Weighted Interval Scheduling problem?
We use backtracing: starting from j = n, we determine which branch of the recurrence achieved OPT(j). If OPT(p(j)) + v_j ≥ OPT(j – 1), interval j belongs to an optimal solution, so we record it and continue from p(j); otherwise we continue from j – 1. We repeat until j = 0, which recovers the optimal set in O(n) additional time.
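Putting the pieces together, here is one possible self-contained sketch (naming and input format are assumptions) that computes OPT and then traces back through it to recover the chosen intervals:

```python
import bisect

def wis_solve(intervals, values):
    """intervals: (start, finish) pairs sorted by finish time.
    Returns (max total weight, list of chosen interval indices, 1-indexed)."""
    n = len(intervals)
    finishes = [f for _, f in intervals]
    p = [0] + [bisect.bisect_left(finishes, s) for s, _ in intervals]
    OPT = [0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = max(OPT[j - 1], OPT[p[j]] + values[j - 1])
    # Backtrace: at each j, check which option of the recurrence was taken.
    chosen, j = [], n
    while j > 0:
        if OPT[p[j]] + values[j - 1] >= OPT[j - 1]:
            chosen.append(j)  # interval j is in the optimal solution
            j = p[j]
        else:
            j -= 1
    return OPT[n], chosen[::-1]
```

An alternative is to store an explicit "which option won" pointer for each j during the forward pass; recomputing the comparison during the backtrace, as above, avoids the extra array.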
What is dynamic programming and how does it differ from greedy algorithms?
Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems and solving each subproblem just once, storing the solutions. Unlike greedy algorithms, which make locally optimal choices, dynamic programming considers multiple options to find the globally optimal solution.
What are the key characteristics of problems suitable for dynamic programming?
Problems suitable for dynamic programming have optimal substructure (the optimal solution can be constructed from optimal solutions to subproblems) and overlapping subproblems (the same subproblems are solved multiple times).
Why is it important to define the objective value when explaining a dynamic programming algorithm?
Defining the objective value clarifies what each computed value represents, ensuring that the recursive relations and computations are correctly understood and can be verified for correctness.
In dynamic programming algorithms, why is it crucial to define all newly introduced mathematical symbols before using them?
Because without clear definitions, the meaning and role of these symbols in computations and recursive formulas may not be understood, leading to confusion and potential errors in algorithm implementation and analysis.
Describe the Knapsack problem.
The Knapsack problem involves selecting a subset of items with given weights w_i and values v_i to maximize total value without exceeding the knapsack’s capacity W.
How is the Subset Sum problem related to the Knapsack problem?
The Subset Sum problem is a special case of the Knapsack problem where each item’s value v_i equals its weight w_i. The goal is to find a subset of weights that sum up as close as possible to, but not exceeding, a target sum W.
What is the recursive relation used in the dynamic programming solution of the Knapsack problem?
For capacity w and the first i items, the recursive relation is OPT(i, w) = max{OPT(i – 1, w), OPT(i – 1, w – w_i) + v_i} if w_i ≤ w; else OPT(i, w) = OPT(i – 1, w). The base case is OPT(0, w) = 0 for all w.
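The recurrence translates directly into a table-filling sketch (assuming integer weights and a capacity W; the function name is an illustrative choice):

```python
def knapsack(weights, values, W):
    """0/1 knapsack: max total value of a subset of items with total weight <= W."""
    n = len(weights)
    # OPT[i][w] = best value using the first i items with capacity w.
    OPT = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 is the base case
    for i in range(1, n + 1):
        for w in range(W + 1):
            OPT[i][w] = OPT[i - 1][w]  # option 1: skip item i
            if weights[i - 1] <= w:    # option 2: take item i, if it fits
                OPT[i][w] = max(OPT[i][w],
                                OPT[i - 1][w - weights[i - 1]] + values[i - 1])
    return OPT[n][W]
```

The same function solves Subset Sum by passing the weights as the values (v_i = w_i), so the returned value is the largest achievable sum not exceeding W. The running time is O(nW), which is pseudo-polynomial since it depends on the numeric value of W.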