Dynamic programming Flashcards

1
Q

General approach in dynamic programming

A

“These facts form the first crucial component on which a dynamic programming solution is based: a recurrence equation that expresses the optimal solution (or its value) in terms of the optimal solutions to smaller subproblems.

One needs a collection of subproblems derived from the original problem that satisfies a few basic properties:

(i) There are only a polynomial number of subproblems.
(ii) The solution to the original problem can be easily computed from the solutions to the subproblems. (For example, the original problem may actually be one of the subproblems.)
(iii) There is a natural ordering on subproblems from “smallest” to “largest,” together with an easy-to-compute recurrence (as in (6.1) and (6.2)) that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.”
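A toy illustration of the three properties (my own example, not from the text), using Fibonacci numbers in Python:

```python
from functools import lru_cache

# Subproblems: F(0), ..., F(n) -- only n + 1 of them (property (i)).
# The original problem F(n) is itself one of the subproblems (property (ii)).
# Subproblems are ordered by index, with the easy recurrence
# F(j) = F(j - 1) + F(j - 2) (property (iii)); memoization makes this linear time.
@lru_cache(maxsize=None)
def fib(j):
    if j <= 1:
        return j
    return fib(j - 1) + fib(j - 2)

print(fib(40))  # 102334155
```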

2
Q

Weighted Interval Scheduling (recursive)

A

“A version of the Interval Scheduling Problem in which each interval has a certain value (or weight), and we want to accept a set of maximum value.”
Problem: the plain recursive Compute-Opt algorithm recomputes the same subproblems over and over, exploring an exponential number of branches. That is why we use memoization:

“We could store the value of Compute-Opt in a globally accessible place the first time we compute it and then simply use this precomputed value in place of all future recursive calls.

(6.4) The running time of M-Compute-Opt(n) is O(n) (assuming the input intervals are sorted by their finish times).”
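A minimal Python sketch of the memoized recursion. The function names and the bisect-based helper for p(j) are mine, not the book's pseudocode; intervals are (start, finish, value) triples:

```python
import bisect

def weighted_interval_memo(intervals):
    """Memoized recursion for Weighted Interval Scheduling.

    intervals: list of (start, finish, value) triples.
    """
    intervals = sorted(intervals, key=lambda t: t[1])      # sort by finish time
    finishes = [f for _, f, _ in intervals]

    def p(j):
        # p(j): the largest index i < j whose interval is compatible with
        # interval j, found by binary search over the sorted finish times
        # (0 if no such interval exists).
        return bisect.bisect_right(finishes, intervals[j - 1][0], 0, j - 1)

    memo = {0: 0}                                          # OPT(0) = 0
    def m_compute_opt(j):                                  # M-Compute-Opt(j)
        if j not in memo:
            include = intervals[j - 1][2] + m_compute_opt(p(j))
            exclude = m_compute_opt(j - 1)
            memo[j] = max(include, exclude)                # recurrence (6.1)
        return memo[j]

    return m_compute_opt(len(intervals))

# e.g. weighted_interval_memo([(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]) == 11
```

Each OPT(j) is computed once and reused, matching (6.4); this sketch spends an extra O(log n) per p(j) lookup on top of the initial sort.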

3
Q

Weighted Interval Scheduling (iteration over subproblems)

A

The key to the efficient algorithm is really the array M. It encodes the notion that we are using the value of optimal solutions to the subproblems on intervals {1, 2, …, j} for each j, and it uses (6.1) to define the value of M[j] based on values that come earlier in the array. Once we have the array M, the problem is solved: M[n] contains the value of the optimal solution on the full instance.
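A sketch of the iterative version in Python, including the traceback (Find-Solution) that recovers an actual optimal set from M. I assume values[1..n] and a precomputed p[1..n] array, with index 0 as a dummy entry:

```python
def iterative_weighted_interval(values, p):
    """Iterative-Compute-Opt plus Find-Solution.

    values[j] and p[j] are defined for j = 1..n (index 0 is a dummy entry).
    """
    n = len(values) - 1
    M = [0] * (n + 1)
    for j in range(1, n + 1):                        # fill M in order of increasing j
        M[j] = max(values[j] + M[p[j]], M[j - 1])    # recurrence (6.1)

    # Trace back through M to recover an actual optimal set of intervals.
    solution, j = [], n
    while j > 0:
        if values[j] + M[p[j]] >= M[j - 1]:          # interval j belongs to some optimum
            solution.append(j)
            j = p[j]
        else:
            j -= 1
    return M[n], sorted(solution)

# e.g. iterative_weighted_interval([0, 2, 4, 4, 7], [0, 0, 0, 1, 2]) == (11, [2, 4])
```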

4
Q

Subset sums

A

“We have a single machine that can process jobs, and we have a set of requests {1, 2, . . . , n}. We are only able to use this resource for the period between time 0 and time W, for some number W. Each request corresponds to a job that requires time wi to process. To find out the value for OPT(n) we not only need the value of OPT(n − 1), but we also need to know the best solution we can get using a subset of the first n − 1 items and total allowed weight W − wn.

(6.9) The Subset-Sum(n, W) Algorithm correctly computes the optimal value of the problem, and runs in O(nW) time.”
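A Python sketch of the O(nW) table-filling algorithm: the goal is a subset of jobs of maximum total processing time not exceeding W, and OPT[i][w] is the best total achievable using only the first i jobs within budget w (variable names are mine):

```python
def subset_sum(weights, W):
    """O(nW) Subset-Sum: OPT[i][w] = best total weight <= w using the first i jobs."""
    n = len(weights)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]      # OPT[0][w] = 0: no jobs selected
    for i in range(1, n + 1):
        wi = weights[i - 1]
        for w in range(W + 1):
            if wi > w:                               # job i does not fit in budget w
                OPT[i][w] = OPT[i - 1][w]
            else:                                    # better of skipping or taking job i
                OPT[i][w] = max(OPT[i - 1][w], wi + OPT[i - 1][w - wi])
    return OPT[n][W]

# e.g. subset_sum([2, 2, 3], 5) == 5
```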

5
Q

The knapsack problem

A

“This Subset Sum Problem is a natural special case of a more general problem called the Knapsack Problem, where each request i has both a value vi and a weight wi.
The goal in this more general problem is to select a subset of maximum total value, subject to the restriction that its total weight not exceed W.

(6.12) The Knapsack Problem can be solved in O(nW) time.”
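The same table works for Knapsack; only the quantity being maximized changes. A sketch, with items given as (value, weight) pairs:

```python
def knapsack(items, W):
    """O(nW) Knapsack: items is a list of (value, weight) pairs."""
    n = len(items)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        vi, wi = items[i - 1]
        for w in range(W + 1):
            if wi > w:                               # item i does not fit in budget w
                OPT[i][w] = OPT[i - 1][w]
            else:                                    # skip item i, or take it and gain vi
                OPT[i][w] = max(OPT[i - 1][w], vi + OPT[i - 1][w - wi])
    return OPT[n][W]

# e.g. knapsack([(3, 2), (4, 3), (5, 4)], 5) == 7  (items 1 and 2)
```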

6
Q

Sequence alignment

A

“How should we define similarity between two words or strings?
We want a model in which similarity is determined roughly by the number of gaps and mismatches we incur when we line up the two words.

Our definition of similarity will be based on finding the optimal alignment between X and Y, according to the following criteria. Suppose M is a given alignment between X and Y.
(6.14) Let M be any alignment of X and Y. If (m, n) ∉ M, then either the mth position of X or the nth position of Y is not matched in M.

The running time is O(mn), since the array A has O(mn) entries, and at worst we spend constant time on each.”
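A Python sketch of the O(mn) table filling. Here delta is the gap penalty, alpha(a, b) is the mismatch cost, and A[i][j] holds the minimum cost of aligning the first i letters of X with the first j letters of Y, taking the best of the three cases (match xi with yj, leave xi unmatched, leave yj unmatched); names are mine:

```python
def alignment_cost(X, Y, delta, alpha):
    """Minimum alignment cost of strings X and Y.

    delta: gap penalty; alpha(a, b): cost of aligning letter a with letter b.
    """
    m, n = len(X), len(Y)
    A = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        A[i][0] = i * delta                  # align x_1..x_i against the empty string
    for j in range(n + 1):
        A[0][j] = j * delta
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            A[i][j] = min(
                alpha(X[i - 1], Y[j - 1]) + A[i - 1][j - 1],   # match x_i with y_j
                delta + A[i - 1][j],                           # x_i left unmatched (gap)
                delta + A[i][j - 1],                           # y_j left unmatched (gap)
            )
    return A[m][n]

# e.g. alignment_cost("mean", "name", delta=2, alpha=lambda a, b: 0 if a == b else 1)
```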

7
Q

Sequence alignment graph representation

A

The alignment problem can be pictured as a shortest-path problem in a grid-shaped graph GXY: there is a node (i, j) for each pair of prefixes of X and Y; the edges from (i − 1, j) and (i, j − 1) into (i, j) have cost δ (they correspond to leaving a position of X or Y unmatched, i.e., a gap), and the diagonal edge from (i − 1, j − 1) into (i, j) has the mismatch cost α of matching xi with yj. The minimum cost of a path from (0, 0) to (m, n) in GXY equals the cost of the optimal alignment of X and Y, so the alignment recurrence is just a shortest-path computation on this directed acyclic grid.
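A sketch that computes the same value explicitly as a path-cost computation over the grid DAG, filling the nodes in a topological order (names are mine):

```python
def alignment_as_shortest_path(X, Y, delta, alpha):
    """Min-cost (0,0)-(m,n) path in the grid graph G_XY, filled in topological order."""
    INF = float("inf")
    m, n = len(X), len(Y)
    f = [[INF] * (n + 1) for _ in range(m + 1)]
    f[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if i > 0:                                    # edge from (i-1, j): x_i left unmatched
                f[i][j] = min(f[i][j], f[i - 1][j] + delta)
            if j > 0:                                    # edge from (i, j-1): y_j left unmatched
                f[i][j] = min(f[i][j], f[i][j - 1] + delta)
            if i > 0 and j > 0:                          # diagonal edge: match x_i with y_j
                f[i][j] = min(f[i][j], f[i - 1][j - 1] + alpha(X[i - 1], Y[j - 1]))
    return f[m][n]
```

Here f(i, j) coincides with OPT(i, j), so this is the same computation as the alignment recurrence, just read off the graph.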

8
Q

Shortest paths (dynamic programming)

A

“Here we consider the more complex problem in which we seek shortest paths when costs may be negative.

Two related problems:

Given a graph G with weights, as described above, decide if G has a negative cycle, that is, a directed cycle C whose edge costs sum to less than 0.
If the graph has no negative cycles, find a path P from an origin node s to a destination node t with minimum total cost.

It makes sense to consider the minimum-cost s-t path problem under the assumption that there are no negative cycles.

Use Bellman-Ford!”
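A Bellman-Ford-style check for the first (decision) problem, as a Python sketch; this is a standard variant, not the book's pseudocode. Initializing every node's distance to 0 acts like a virtual source wired to all nodes with 0-cost edges, so any negative cycle keeps producing improvements:

```python
def has_negative_cycle(n, edges):
    """Decide whether the directed graph on nodes 0..n-1 contains a negative cycle.

    edges: list of (u, v, cost). If a full round of relaxations changes nothing,
    the distances are a fixed point, which is impossible when a negative cycle
    exists; if improvements survive n full rounds, some cycle must be negative.
    """
    dist = [0.0] * n
    for _ in range(n):
        updated = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                updated = True
        if not updated:
            return False            # converged: no negative cycle
    return True                     # still improving after n rounds: negative cycle
```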

9
Q

Bellman-Ford

A

“Solution to shortest path problem with negative weights (and no negative cycles).

(6.22) If G has no negative cycles, then there is a shortest path from s to t that is simple (i.e., does not repeat nodes), and hence has at most n − 1 edges.

Let’s use OPT(i, v) to denote the minimum cost of a v-t path using at most i edges.
We now need a simple way to express OPT(i, v) using smaller subproblems.

We can bound the running time as follows. The table M has n^2 entries; and each entry can take O(n) time to compute, as there are at most n nodes w ∈ V we have to consider.
(6.24) The Shortest-Path method correctly computes the minimum cost of an s-t path in any graph that has no negative cycles, and runs in O(n^3) time.

If we are a little more careful in the analysis of the method above, we can improve the running-time bound to O(mn) without significantly changing the algorithm itself.
(6.25) The Shortest-Path method can be implemented in O(mn) time.”
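A space-efficient Python sketch in the spirit of (6.25): a single distance array indexed by node replaces the full table M, and relaxing all m edges for n − 1 rounds gives O(mn) time. It follows the book's orientation (costs of v-t paths); the node and edge representations are my own choices:

```python
def bellman_ford_to_t(nodes, edges, t):
    """Minimum cost of a v-t path for every node v, assuming no negative cycles.

    nodes: iterable of node names; edges: list of (u, v, cost) directed edges.
    A single distance array replaces the n-by-n table M, giving O(mn) time
    and O(n) extra space.
    """
    INF = float("inf")
    M = {v: INF for v in nodes}
    M[t] = 0.0
    for _ in range(len(nodes) - 1):            # shortest paths use at most n - 1 edges
        for u, v, cost in edges:               # relax: a u-t path may go u -> v -> ... -> t
            if M[v] + cost < M[u]:
                M[u] = M[v] + cost
    return M

# e.g. bellman_ford_to_t(["s", "a", "b", "t"],
#                        [("s", "a", 6), ("s", "b", 7), ("a", "t", -2), ("b", "t", 5)],
#                        "t")["s"] == 4
```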
