LN3 Dynamic Programming and Weighted Interval Scheduling (WIS) Flashcards

1
Q

Why does the Earliest End First (EEF) algorithm fail for the Weighted Interval Scheduling problem?

A

Because intervals have different weights, choosing the interval with the earliest end time may not lead to the maximum total weight. An interval with an early end time could have a small weight and conflict with more valuable intervals, making EEF suboptimal for the weighted case.

2
Q

What is the key challenge in solving the Weighted Interval Scheduling problem compared to the unweighted version?

A

The challenge is that intervals have different weights, adding complexity because we must consider both the interval’s weight and its overlap with other intervals. This means a simple greedy approach based on end times does not guarantee an optimal solution.

3
Q

What observation allows us to limit the combinatorial explosion in the Weighted Interval Scheduling problem?

A

We observe that for any interval [s_j, f_j], among all partial solutions ending at f_j, it suffices to keep only one partial solution X_j with the maximum total weight. This means we can ignore other partial solutions with the same end time but lesser total weights.

4
Q

How is the function OPT(j) defined in the context of Weighted Interval Scheduling?

A

OPT(j) is defined as the maximum total weight that can be achieved by selecting a subset of disjoint intervals from the first j intervals, sorted by end time f_1 < f_2 < … < f_j.

5
Q

What is the recursive formula used to compute OPT(j) in the Weighted Interval Scheduling problem?

A

The recursive formula is OPT(j) = max{OPT(j – 1), OPT(p(j)) + v_j}, where p(j) is the largest index i such that f_i < s_j (the last interval that does not conflict with interval j), and v_j is the weight of interval j.
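This recurrence can be filled in bottom-up. A minimal sketch (assuming intervals are 1-indexed, already sorted by end time, and that the predecessor indices p(j) have been precomputed; index 0 is a dummy entry):

```python
# Bottom-up computation of OPT(j) = max{OPT(j-1), OPT(p(j)) + v_j}.
# v[1..n] are the interval weights, p[1..n] the predecessor indices;
# index 0 is an unused sentinel so OPT(0) = 0 falls out naturally.

def wis_opt(v, p):
    n = len(v) - 1
    opt = [0] * (n + 1)          # opt[0] = 0: no intervals, total weight 0
    for j in range(1, n + 1):
        # Either skip interval j, or take it together with the best
        # solution among the intervals that end before it starts.
        opt[j] = max(opt[j - 1], opt[p[j]] + v[j])
    return opt
```

For three intervals with weights 2, 4, 4 where only the first and third are compatible (p = [0, 0, 0, 1]), the table ends at OPT(3) = 6.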

6
Q

How is p(j) defined and computed in the Weighted Interval Scheduling problem?

A

p(j) is the largest index i < j such that f_i < s_j, meaning the last interval that ends before the start of interval j, ensuring no overlap. p(j) can be computed efficiently by sorting intervals and using a pointer or binary search to find the correct index.
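The binary-search variant can be sketched as follows (a sketch, assuming 1-indexed start/finish arrays sorted by finish time, with an unused sentinel at index 0):

```python
import bisect

def compute_p(start, finish):
    """start[1..n], finish[1..n]: intervals sorted by finish time;
    index 0 is an unused sentinel. Returns p[0..n] with p[j] the
    largest i < j such that finish[i] < start[j], or 0 if none."""
    n = len(start) - 1
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        # bisect_left finds the first index i in [1, j) with
        # finish[i] >= start[j]; the index just before it is the
        # last interval ending strictly before interval j starts.
        i = bisect.bisect_left(finish, start[j], 1, j)
        p[j] = i - 1
    return p
```

Each p(j) costs O(log n), so all predecessor indices are found in O(n log n) overall, matching the cost of the initial sort.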

7
Q

Why should we avoid implementing the recursive formula for OPT(j) in a recursive fashion?

A

Implementing OPT(j) recursively would lead to exponential time complexity because it would recompute OPT values multiple times. Instead, we should store computed OPT(j) values in an array and compute them iteratively to ensure polynomial time.

8
Q

How do we reconstruct the optimal set of intervals after computing OPT(n) in the Weighted Interval Scheduling problem?

A

We trace back through the DP table: starting from j = n, if OPT(p(j)) + v_j ≥ OPT(j − 1) then interval j belongs to an optimal solution and we continue from p(j); otherwise interval j is skipped and we continue from j − 1. Equivalently, we can store at each j a pointer recording which option the max chose and follow these pointers back.
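A traceback of this kind can be sketched as follows (a sketch: `opt` is the filled DP array, `v` the weights, `p` the predecessor indices, all 1-indexed with a dummy entry at index 0):

```python
# Walk back from j = n: interval j is in an optimal solution exactly
# when taking it (opt[p[j]] + v[j]) is at least as good as skipping it.

def wis_solution(v, p, opt):
    chosen = []
    j = len(v) - 1
    while j > 0:
        if opt[p[j]] + v[j] >= opt[j - 1]:
            chosen.append(j)
            j = p[j]      # jump past every interval conflicting with j
        else:
            j -= 1        # interval j is not used
    chosen.reverse()
    return chosen
```

The traceback touches each index at most once, so reconstruction costs only O(n) on top of the DP.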

9
Q

What is dynamic programming and how does it differ from greedy algorithms?

A

Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems and solving each subproblem just once, storing the solutions. Unlike greedy algorithms, which make locally optimal choices, dynamic programming considers multiple options to find the globally optimal solution.

10
Q

What are the key characteristics of problems suitable for dynamic programming?

A

Problems suitable for dynamic programming have optimal substructure (the optimal solution can be constructed from optimal solutions to subproblems) and overlapping subproblems (the same subproblems are solved multiple times).

11
Q

Why is it important to define the objective value when explaining a dynamic programming algorithm?

A

Defining the objective value clarifies what each computed value represents, ensuring that the recursive relations and computations are correctly understood and can be verified for correctness.

12
Q

In dynamic programming algorithms, why is it crucial to define all newly introduced mathematical symbols before using them?

A

Because without clear definitions, the meaning and role of these symbols in computations and recursive formulas may not be understood, leading to confusion and potential errors in algorithm implementation and analysis.

13
Q

Describe the Knapsack problem.

A

The Knapsack problem involves selecting a subset of items with given weights w_i and values v_i to maximize total value without exceeding the knapsack’s capacity W.

14
Q

How is the Subset Sum problem related to the Knapsack problem?

A

The Subset Sum problem is a special case of the Knapsack problem where each item’s value v_i equals its weight w_i. The goal is to find a subset of weights that sum up as close as possible to, but not exceeding, a target sum W.
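The special case can be sketched as a boolean reachability DP (a sketch using the card's symbols, not any particular lecture code: `weights` is the list of item weights and `W` the target sum):

```python
# Subset Sum as a boolean DP: reach[s] is True iff some subset of the
# items seen so far sums to exactly s. This is the Knapsack recurrence
# with v_i = w_i, collapsed to one row.

def subset_sum(weights, W):
    reach = [False] * (W + 1)
    reach[0] = True               # the empty subset sums to 0
    for w in weights:
        for s in range(W, w - 1, -1):   # descending: use each item once
            if reach[s - w]:
                reach[s] = True
    # Largest achievable sum not exceeding the target W.
    return max(s for s in range(W + 1) if reach[s])
```

For example, with weights {3, 34, 4, 12, 5, 2} and target 9, the subset {4, 5} reaches the target exactly.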

15
Q

What is the recursive relation used in the dynamic programming solution of the Knapsack problem?

A

For capacity w and first i items, the recursive relation is OPT(i, w) = max{OPT(i – 1, w), OPT(i – 1, w – w_i) + v_i} if w_i ≤ w; else OPT(i, w) = OPT(i – 1, w).
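This recurrence fills an (n + 1) × (W + 1) table bottom-up. A minimal sketch (items 1-indexed, with a dummy entry at index 0):

```python
# Bottom-up Knapsack DP. w[1..n]: weights, v[1..n]: values, W: capacity.
# opt[i][cap] = best value using the first i items within capacity cap.

def knapsack(w, v, W):
    n = len(w) - 1
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            opt[i][cap] = opt[i - 1][cap]          # skip item i
            if w[i] <= cap:                        # or take it, if it fits
                opt[i][cap] = max(opt[i][cap],
                                  opt[i - 1][cap - w[i]] + v[i])
    return opt[n][W]
```

For instance, with weights (1, 2, 3), values (6, 10, 12), and capacity 5, the best choice is the second and third items, for total value 22.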

16
Q

In the Knapsack problem’s dynamic programming solution, what do the parameters i and w represent in OPT(i, w)?

A

i represents the first i items under consideration, and w represents the current capacity of the knapsack. OPT(i, w) is the maximum value achievable with these constraints.

17
Q

How does the time complexity of the dynamic programming solution for the Knapsack problem depend on the input?

A

The time complexity is O(nW), where n is the number of items and W is the capacity of the knapsack, because we fill a table of size n × W. Note that this is only pseudo-polynomial: W is encoded in O(log W) bits, so O(nW) can be exponential in the input size.

18
Q

What is the primary difference between the Knapsack problem and the Subset Sum problem in terms of their goals?

A

The Knapsack problem seeks to maximize total value within capacity, whereas the Subset Sum problem aims to find a subset of numbers whose sum is as close as possible to a target sum without exceeding it.

19
Q

Why is it important to consider efficient computation of p(j) in the Weighted Interval Scheduling problem?

A

Efficient computation of p(j) is essential to maintain the overall efficiency of the algorithm, preventing an increase in time complexity due to this step.

20
Q

What is the significance of the observation that we can limit the number of partial solutions considered in dynamic programming for Weighted Interval Scheduling?

A

By recognizing that only one optimal partial solution ending at each f_j needs to be kept, we avoid exponential growth in the number of partial solutions, enabling a polynomial-time algorithm.

21
Q

How does the concept of “overlapping subproblems” manifest in the dynamic programming solution to Weighted Interval Scheduling?

A

Overlapping subproblems occur because the optimal solution for interval j depends on the optimal solutions of previous intervals, which are reused multiple times in computing OPT(j) for different j.

22
Q

In the context of algorithm design, what does “optimal substructure” mean?

A

Optimal substructure means that the optimal solution to a problem contains within it optimal solutions to subproblems.

23
Q

Explain why the Knapsack problem is considered NP-hard.

A

The Knapsack problem is NP-hard because every problem in NP can be reduced to it in polynomial time (its decision version is NP-complete), and no polynomial-time algorithm for it is known. The O(nW) dynamic program does not contradict this: it is pseudo-polynomial, since W is encoded in O(log W) bits, so its running time can be exponential in the input size.

24
Q

What practical applications do the Knapsack and Subset Sum problems have?

A

They are used in resource allocation, budget management, manufacturing, cryptography, and other fields requiring optimization under constraints.

25
Q

How does dynamic programming provide an advantage over exhaustive search?

A

Dynamic programming reduces the time complexity by storing solutions to subproblems and reusing them, avoiding redundant computations inherent in exhaustive search.

26
Q

How does the Weighted Interval Scheduling problem illustrate the use of an exchange argument in algorithm design?

A

Although the problem is solved using dynamic programming, the initial observation about partial solutions and replacing less optimal ones with better ones is akin to the exchange argument used in greedy algorithms.