10 | Tricks from LP with Applications Flashcards

1
Q

Before lecture 10 we encountered linear and quadratic programs modeling metabolism such as FBA, FVA, ROOM, MOMA, etc.
Why consider more complex objectives or constraints?

A
  • Better capture biological reality
  • Model behaviors like enzyme efficiency or regulatory logic
2
Q

What is the tradeoff of using complex objective functions?

A
  • Increased computational complexity
  • Need for transformations to LP or MILP
3
Q

What is an example of transforming a non-linear problem to LP?

A
  • Flux Coupling Analysis (a linear-fractional program)
  • Transformed to an LP using the Charnes–Cooper transformation
4
Q

What is the goal of pFBA?

A

Aims to determine the subset of genes and proteins that may contribute to the most efficient metabolic topology under the given growth conditions
* Identify minimal set of active enzymes, most efficient subset of pathways
* Model efficient enzyme usage under growth constraints
* Mimics minimal enzymatic cost for maximal growth

5
Q

What modifications are made to reactions in pFBA?

A
  • Reversible reactions are split into two irreversible reactions
  • (but we learned a trick for handling this as an LP without splitting)
6
Q

What are the two steps of pFBA?

A
  • Step 1: Solve FBA to determine the optimal biomass flux vbio,opt
  • Step 2: Minimize the total flux through gene-associated reactions under steady state, with growth fixed at that optimum
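The two pFBA steps can be sketched with scipy's `linprog` on a toy network. The network, bounds, and numbers below are illustrative assumptions, not from the lecture: a direct route (v2) and a two-step route (v3, v4) give the same biomass yield, so FBA has alternative optima and the step-2 minimization selects the shorter route.

```python
# Toy pFBA sketch (assumed toy network):
# metabolites A, B, P; reactions
#   v1: -> A (uptake, <= 10), v2: A -> P (direct),
#   v3: A -> B, v4: B -> P, v5: P -> (biomass)
import numpy as np
from scipy.optimize import linprog

N = np.array([
    [1, -1, -1,  0,  0],   # A
    [0,  0,  1, -1,  0],   # B
    [0,  1,  0,  1, -1],   # P
])
bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]

# Step 1: FBA - maximize biomass v5 (linprog minimizes, so negate)
fba = linprog(c=[0, 0, 0, 0, -1], A_eq=N, b_eq=np.zeros(3), bounds=bounds)
v_bio_opt = -fba.fun                         # 10, limited by uptake

# Step 2: minimize total flux with biomass fixed at its optimum
N_fix = np.vstack([N, [0, 0, 0, 0, 1]])      # extra row fixes v5
b_fix = np.append(np.zeros(3), v_bio_opt)
pfba = linprog(c=[1, 1, 1, 1, 1], A_eq=N_fix, b_eq=b_fix, bounds=bounds)

print(v_bio_opt)   # 10.0
print(pfba.fun)    # 30.0: v1 = v2 = v5 = 10, longer route v3 = v4 = 0
```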
7
Q

How can genes be classified based on pFBA?

A
  • Essential
  • pFBA optima
  • Enzymatically Less Efficient (ELE)
  • Metabolically Less Efficient (MLE)
  • pFBA no-flux
8
Q

pFBA classifications - what does ELE mean?

A
  • Enzymatically Less Efficient
  • there are alternative pathways that meet the same need with fewer steps
9
Q

pFBA classifications - what does MLE mean?

A
  • Metabolically less efficient
  • their use leads to reduced growth
10
Q

pFBA classifications - what does pFBA no-flux mean?

A

No flux in the experimental conditions

11
Q

What is the pFBA optimization problem (with split reversible reactions)?

A
  • min ∑ vi (for gene-associated reactions)
  • s.t. Nv = 0
  • vbio = vbio,lb
  • 0 ≤ v ≤ vmax
12
Q

How can a flux minimization problem be handled without reaction splitting?

A
  • Replace vi with vi+ - vi-
  • Enforce vi+, vi- ≥ 0
13
Q

Consider the pFBA LP without splitting reversible reactions.
minv ∑ |vi|
s.t.
Nv = 0
vbio = vbio,lb
vmin ≤ v ≤ vmax
Why can’t LP be directly applied to minimize ∑|vi|?

A
  • Because vi can be negative
  • such an absolute-value function is neither additive nor homogeneous → nonlinear!
14
Q

In pFBA, how can we transform vi to make it LP-compatible?

A

Introduce the following substitution with two new non-negative variables:
vi = vi+ − vi−
→ |vi| = vi+ + vi−
Such that: vi+, vi− ≥ 0
∑|vi| = ∑(vi+ + vi−) → objective is now linear
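A minimal sketch of this substitution with scipy's `linprog`, on an assumed toy problem (minimize |v1| + |v2| subject to v1 − v2 = 3, with v1, v2 free):

```python
# Each free variable vi is replaced by vi_plus - vi_minus (both >= 0),
# so the objective sum(|vi|) becomes the linear sum of all four new variables.
from scipy.optimize import linprog

# variable order: [v1+, v1-, v2+, v2-]
c = [1, 1, 1, 1]                     # minimize sum(vi+ + vi-)
A_eq = [[1, -1, -1, 1]]              # (v1+ - v1-) - (v2+ - v2-) = 3
b_eq = [3]
res = linprog(c=c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)

v1 = res.x[0] - res.x[1]
v2 = res.x[2] - res.x[3]
print(res.fun)   # 3.0, e.g. v1 = 3, v2 = 0
```

At optimality, at most one of each pair is positive, exactly as the next card argues.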

15
Q

When using the trick for minimizing all reaction fluxes without having to split reversible reactions, i.e. introducing vi = vi+ − vi−:

Why can we assume at most one of vi+ or vi− is positive at optimality?

A

Because the substitution vi = vi+ − vi− allows us to reduce both by min(vi+, vi−) and keep vi unchanged.
→ both being positive reduces the objective → contradicts optimality
→ optimal solution forces one of the two to be zero

Very elegant idea:
If both are positive, you can subtract the smaller of the two from both (i.e., move both closer to zero) → reduces the objective value

So at optimality, one of the two is always zero

In other words:
You never need to explicitly add a constraint forcing them to not be positive simultaneously
It emerges naturally from the LP optimality condition


from slide:
Suppose that both vi+ and vi− are non-zero (positive) and let the smaller be of value δ. Subtracting δ from both does not affect vi = vi+ − vi−.
However, it reduces the value of the objective vi+ + vi− by 2δ.
→ contradicts the optimality!

16
Q

What is the form of a minmax problem?

A
  • minx maxk∈K ∑j ckj xj
  • k ∈ K indexes multiple cost functions - for each k there's a different linear cost
  • The objective is to find the x that makes the worst (largest) of these values as small as possible
  • i.e. we are trying to minimize the maximum over several linear expressions!
17
Q

Consider the minmax problem:
minx maxk∈K ∑ ckj xj
st:
Ax ≤ b
x ≥ 0
How can the minmax problem be turned into LP?

A
  • Introduce an additional decision variable, e.g. z
  • Add constraints: ∑j ckj xj ≤ z for all k ∈ K
  • These ensure z is at least as large as each of the k sums; minimizing z then makes it equal to their maximum at optimality
  • New objective: minimize z over x, z
18
Q

What is an everyday example of a min–max problem?

Why would you use min–max in this bakery example?

A

A bakery delivers to 3 cafés

Each café wants a different amount of baked goods (e.g., 3x₁ + 2x₂)
You can’t perfectly satisfy all

Goal: Choose x₁ and x₂ to minimize the maximum dissatisfaction
Turns into:
 min z
 s.t. each café’s dissatisfaction ≤ z

Why would you use min–max in this bakery example?
* To distribute resources fairly
* Prevent any one café from being too disappointed
* Smooth out extremes
* Objective: smallest worst-case cost

In detail:
🍞 Scenario: You run a bakery
You bake:
* Baguettes (x₁)
* Croissants (x₂)
You want to deliver these to 3 cafes, and each one wants a different amount:
* Café A - 3x₁ + 2x₂
* Café B - 2x₁ + 5x₂
* Café C - 4x₁ + 1x₂
But due to delivery delays, you might not satisfy every café perfectly.
You don’t want any one café to be too disappointed.
🎯 Goal:
Minimize the maximum dissatisfaction of any café.
This is a min–max problem:
minx₁,x₂ max {
 3x₁ + 2x₂, // Café A
 2x₁ + 5x₂, // Café B
 4x₁ + 1x₂  // Café C
}
You’re trying to find x₁ and x₂ (how many baguettes and croissants to bake) that make the worst café the least unhappy.
💡 Why use min–max?
This is a fairness strategy:
* You’re not trying to make one café super happy at the cost of others
* You’re trying to balance deliveries, so no café is hit the hardest
* It’s like smoothing out the extremes.
✅ LP transformation
Introduce a variable z = “maximum dissatisfaction”
Add constraints:
* 3x₁ + 2x₂ ≤ z
* 2x₁ + 5x₂ ≤ z
* 4x₁ + 1x₂ ≤ z
Now you can minimize z subject to those constraints (and whatever other constraints you have, like time or flour limits), and you have turned it into an LP.
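The bakery min–max can be solved directly with scipy's `linprog`. One assumed side constraint is added (a fixed total batch x₁ + x₂ = 10, standing in for "whatever other constraints you have"), since without any such constraint the trivial answer is to bake nothing:

```python
# Min-max as LP: variables [x1, x2, z], minimize z with each
# cafe's dissatisfaction bounded by z.
from scipy.optimize import linprog

c = [0, 0, 1]           # minimize z
A_ub = [
    [3, 2, -1],         # cafe A: 3x1 + 2x2 <= z
    [2, 5, -1],         # cafe B: 2x1 + 5x2 <= z
    [4, 1, -1],         # cafe C: 4x1 + 1x2 <= z
]
b_ub = [0, 0, 0]
A_eq = [[1, 1, 0]]      # assumed extra constraint: x1 + x2 = 10
b_eq = [10]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
print(res.fun)          # 30.0, at x1 = 20/3, x2 = 10/3
```

At the optimum, cafés B and C are equally (and maximally) dissatisfied, which is typical of min–max solutions: the worst cases are balanced against each other.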

19
Q

What is an example of min–max in metabolic modeling?

A
  • You want a mutant to stay close to wild-type behavior
  • Instead of minimizing total deviation (MOMA), minimize largest individual rxn change
  • avoids big deviations in any one reaction
  • Useful when one large change would be biologically unrealistic
20
Q

What is an example of a discontinuous variable and what is the trick to incorporate these into LPs?

A
  • x = 0 or l ≤ x ≤ u
  • Linear programming only allows variables that take continuous values
  • → Introduce binary variable eg y ∈ {0,1}
  • Constraint: ly ≤ x ≤ uy
    We then get:
  • y = 0 ⇒ x = 0
  • y = 1 ⇒ l ≤ x ≤ u

eg optReg etc! → MILP
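This trick can be sketched with scipy's MILP solver. The numbers are assumed for illustration: x must be 0 or lie in [l, u] = [2, 5]; an extra requirement x ≥ 1 rules out x = 0, and we minimize x:

```python
# Semi-continuous variable via a binary indicator y:
#   l*y <= x <= u*y  with  y in {0, 1}
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# variables: [x, y]
c = [1, 0]                                   # minimize x
A = [[1, -2],                                # x - 2y >= 0  (x >= l*y, l = 2)
     [1, -5],                                # x - 5y <= 0  (x <= u*y, u = 5)
     [1,  0]]                                # x >= 1
con = LinearConstraint(A, [0, -np.inf, 1], [np.inf, 0, np.inf])
res = milp(c=c, constraints=con, integrality=[0, 1],
           bounds=Bounds([0, 0], [5, 1]))
print(res.x)   # [2. 1.]: y = 0 would force x <= 0, so y = 1 and x lands in [2, 5]
```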

21
Q

When do fixed costs arise in metabolic modelling?

A
  • e.g. for newly synthesized enzymes - Modeling set-up costs
  • Modeling of two mutually exclusive situations
21
Q

What is the LP and cost function for a fixed cost setup? Why is this not solvable as is with LP?

A

Max over x of C(x)
s.t.:
ai x + ∑j aij wj ≤ bi
x, wj ≥ 0
Where:
C(x) = 0 if x = 0
C(x) = k + c·x if x > 0
Not linear and not continuous → not an LP

22
Q

Consider the fixed cost setup:
Max over x of C(x)
s.t.:
ai x + ∑j aij wj ≤ bi
x, wj ≥ 0
Where:
C(x) = 0 if x = 0
C(x) = k + c·x if x > 0
How can this be modeled in LP?

A

* Introduce a binary indicator variable y:
x = 0 → y = 0
x > 0 → y = 1
* Link y with x using the constraint: εy ≤ x ≤ My
* The cost function becomes: C(x, y) = ky + c·x → linear

Final MILP:
maxx,y,w ky + c·x
s.t.:
ai x + ∑j aij wj ≤ bi
εy ≤ x ≤ My
x, wj ≥ 0
y ∈ {0,1}
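A runnable sketch with scipy's MILP solver. The numbers are assumed, and the example uses the (more common) minimization direction of a fixed-charge cost rather than the card's maximization: produce x ∈ [0, 10] with demand x ≥ 3, where the cost is 0 if x = 0 and 5 + 2x otherwise:

```python
# Fixed-charge trick: binary y with x <= M*y makes the jump linear,
# so the cost becomes k*y + c*x = 5y + 2x.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

M = 10
# variables: [x, y]
c = [2, 5]                                   # minimize 2x + 5y
A = [[1, -M],                                # x - M*y <= 0 (x > 0 forces y = 1)
     [1,  0]]                                # x >= 3 (demand)
con = LinearConstraint(A, [-np.inf, 3], [0, np.inf])
res = milp(c=c, constraints=con, integrality=[0, 1],
           bounds=Bounds([0, 0], [M, 1]))
print(res.fun)   # 11.0: x = 3, y = 1, cost = 5 + 2*3
```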

23
Q

What is an either-or constraint? Give an example. Is this an LP?

A

Exactly one of two (or more) constraints must be satisfied
maxx ∑j cj xj
s.t.
(1) ∑ a1j xj ≤ b1
(2) ∑ a2j xj ≤ b2
Where exactly one of the conditions (1) or (2) must hold
Not an LP - in linear programming, all constraints must be satisfied.

24
Q

How can a problem with either-or constraints be transformed to become an LP?

A

One selects sufficiently large bounds M1 and M2 (upper bounds on the activity of the constraints, chosen as tight as possible) such that the left-hand sides of the constraints are always smaller than b1 + M1 and b2 + M2, respectively.
* Introduce a binary variable y
* Constraint 1: ∑ a1j xj ≤ b1 + M1·y
* Constraint 2: ∑ a2j xj ≤ b2 + M2·(1 − y)
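A concrete sketch with scipy's MILP solver, using assumed toy numbers: maximize x1 + x2 with x1, x2 ∈ [0, 3], where exactly one of (1) x1 + 2x2 ≤ 4 or (2) 3x1 + x2 ≤ 6 is enforced. With these bounds the LHS of (1) is at most 9 and of (2) at most 12, so M1 = 5 and M2 = 6 are tight big-M values:

```python
# Either-or via big-M: y = 0 enforces (1), y = 1 enforces (2).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# variables: [x1, x2, y]
c = [-1, -1, 0]                              # maximize x1 + x2
A = [[1, 2, -5],                             # x1 + 2x2 <= 4 + 5y
     [3, 1,  6]]                             # 3x1 + x2 <= 6 + 6(1-y), i.e. + 6y <= 12
con = LinearConstraint(A, -np.inf, [4, 12])
res = milp(c=c, constraints=con, integrality=[0, 0, 1],
           bounds=Bounds([0, 0, 0], [3, 3, 1]))
print(-res.fun)  # 4.0: y = 1 enforces (2), optimum at x1 = 1, x2 = 3
```

With y = 0 the best value would only be 3.5, so the solver relaxes constraint (1) instead.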

25
Q

How does the binary variable y work in the transformation for either-or constraints seen below?

Introduce binary y
Constraint 1: ∑ a1j xj ≤ b1 + M1·y
Constraint 2: ∑ a2j xj ≤ b2 + M2·(1 − y)

A
  • y = 0 ⇒ Constraint 1 enforced, Constraint 2 relaxed: (1) is imposed, while (2) is weakened and non-binding
  • y = 1 ⇒ Constraint 2 enforced, Constraint 1 relaxed: (1) is non-binding, while (2) is imposed
26
Q

What is an OR constraint?

A

At least one of multiple constraints must be satisfied, e.g.:
∑ a1j xj ≤ b1
∑ a2j xj ≤ b2

27
Q

Consider a problem with OR constraints - i.e at least one of multiple constraints must be satisfied. eg:
∑ a1j xj ≤ b1
∑ a2j xj ≤ b2
How can these OR constraints be modeled in an LP?

A

One selects sufficiently large bounds M1 and M2
* Introduce binary variables y, z
* Relax one or both constraints based on the values of y and z
Add the following constraints:
* ∑ a1j xj ≤ b1 + M1·y
* ∑ a1j xj ≤ b1 + M1·z
* ∑ a2j xj ≤ b2 + M2·(1 − y)
* ∑ a2j xj ≤ b2 + M2·(1 − z)
(y = z = 0: only constraint 1 is enforced; y = z = 1: only constraint 2 is enforced; y ≠ z: both are enforced → at least one constraint always holds)

28
Q

What is a conditional constraint? Give an example

A
  • p: ∑ a1j xj ≤ b1
  • q: ∑ a2j xj ≤ b2
  • Where if p is satisfied, q must also be satisfied.
  • Written logically: p ⇒ q is equivalent to ~p ∨ q
  • Leads to the statement: ∑ a1j xj > b1 is satisfied or ∑ a2j xj ≤ b2 is satisfied
29
Q

With a conditional constraint, eg:
If condition p is satisfied, then q must also be satisfied
Written logically: p ⇒ q is equivalent to ~p ∨ q
We can reframe this to the statement: ∑ a1j xj > b1 is satisfied OR ∑ a2j xj ≤ b2 is satisfied
What is the problem here? How can this be modeled in LP?

A
  • The constraint with > describes an open set - it is not closed, and LP constraints must be closed (≤, ≥, =)
  • We resolve this by introducing a small ε > 0 and replacing the strict inequality, so the constraints become:
    • ∑ a1j xj ≥ b1 + ε
    • ∑ a2j xj ≤ b2 (unchanged)
  • We can then proceed as discussed for OR logic (binary y variable, big-M bounds)
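A runnable sketch with scipy's MILP solver, on an assumed toy condition "if x1 ≤ 4 then x2 ≤ 3", i.e. (x1 ≥ 4 + ε) OR (x2 ≤ 3), while maximizing x1 + x2 with x1, x2 ∈ [0, 10]:

```python
# Conditional constraint p => q modeled as (not p) OR q via epsilon + big-M.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

eps, M = 0.01, 10
# variables: [x1, x2, y]
c = [-1, -1, 0]                              # maximize x1 + x2
A = [[-1, 0, -M],                            # x1 >= 4 + eps - M*y
     [ 0, 1,  M]]                            # x2 <= 3 + M*(1-y), i.e. x2 + M*y <= 3 + M
con = LinearConstraint(A, -np.inf, [-(4 + eps), 3 + M])
res = milp(c=c, constraints=con, integrality=[0, 0, 1],
           bounds=Bounds([0, 0, 0], [10, 10, 1]))
print(-res.fun)  # 20.0: y = 0, x1 = 10 (so p fails), and x2 is free to be 10
```

Choosing y = 1 instead would enforce x2 ≤ 3 and only reach 13, so the solver escapes the implication by violating p.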
31
Q

Why use piecewise linear approximations?

A

To approximate non-linear objectives so they can be handled within LP/MILP

32
Q

What is a piecewise linear function?

A

* A function defined by multiple linear segments
* Each segment applies to a different range of x
* Example: f(x) = a₁x + b₁ if x ∈ [x₁, x₂]; a₂x + b₂ if x ∈ [x₂, x₃]
* Can approximate a non-linear function

33
Q

How can you encode a piecewise linear function in MILP? How are functions approximated?

A

  • Split x into segments
  • Use binary variables to select active segment
  • Enforce constraints for each segment
  • Only one segment is active at a time
    How are functions approximated?
  • Use breakpoints
  • Represent other points as convex combinations of breakpoints
34
Q

In piecewise linear formulations, what is an example function with breakpoints?

A

* f1(x) = 0.5x for 0 ≤ x ≤ 1
* f2(x) = 1.5x - 1 for 1 ≤ x ≤ 2
* f3(x) = 3x - 4 for 2 ≤ x ≤ 4
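The breakpoint (convex-combination) idea can be sketched with scipy for exactly this function, whose breakpoints are x = 0, 1, 2, 4 with f = 0, 0.5, 2, 8. Since the slopes increase, f is convex, so minimizing it needs no binary variables: represent x and f(x) as convex combinations of the breakpoints and solve an LP. The task (minimize f(x) subject to x ≥ 3) is an assumed toy example:

```python
# Lambda formulation: x = sum(l_i * x_i), f = sum(l_i * f_i),
# with l_i >= 0 and sum(l_i) = 1.
import numpy as np
from scipy.optimize import linprog

xs = np.array([0, 1, 2, 4])
fs = np.array([0, 0.5, 2, 8])
# variables: lambda_0 .. lambda_3
A_eq = [np.ones(4)]                  # sum(lambda) = 1
b_eq = [1]
A_ub = [-xs]                         # x = sum(lambda_i * x_i) >= 3
b_ub = [-3]
res = linprog(c=fs, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.fun)   # 5.0: x = 3 interpolated between breakpoints (2, 2) and (4, 8)
```

For a non-convex piecewise function, the same λ variables would additionally need binaries ensuring that only two adjacent breakpoints get nonzero weight.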

35
Q

In piecewise linear formulations, how is a minimum of multiple f(x) segments modeled?

A

* For a convex piecewise linear function, f(x) = maxi fi(x) over its segments
* Introduce a variable t
* Add constraints: fi(x) ≤ t for all i
* Minimize t

36
Q

In piecewise linear formulations, how do you model x = min(x1, …, xn)?

A

* xi ≥ x for all i
* Use binary yi to pick minimum
* ∑ yi = 1
* x ≥ xi - M(1 - yi)
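The full construction can be checked end-to-end with scipy's MILP solver. The values min(3, 5, 2) and M = 10 are assumed for illustration; the constraints alone force x to equal the minimum:

```python
# Min trick: x <= x_i for all i, x >= x_i - M*(1 - y_i), sum(y_i) = 1.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

vals = [3, 5, 2]
M = 10.0
n = len(vals)
# variables: [x, y1, ..., yn]
c = np.r_[1.0, np.zeros(n)]                  # minimize x (any objective would do)

upper = [np.r_[1.0, np.zeros(n)] for _ in vals]           # x <= x_i
lower = [np.r_[-1.0, M * np.eye(n)[i]] for i in range(n)]  # M*y_i - x <= M - x_i
pick = [np.r_[0.0, np.ones(n)]]                            # sum(y_i) = 1
A = np.array(upper + lower + pick)
lb = np.r_[[-np.inf] * (2 * n), 1.0]
ub = np.r_[vals, [M - v for v in vals], 1.0]
con = LinearConstraint(A, lb, ub)
res = milp(c=c, constraints=con, integrality=np.r_[0, np.ones(n)],
           bounds=Bounds(np.r_[-M, np.zeros(n)], np.r_[M, np.ones(n)]))
print(res.x[0])  # 2.0 = min(3, 5, 2)
```

Picking the binary of any non-minimal value would demand x ≥ that value while x ≤ 2 still holds, which is infeasible, so the solver must select the argmin, exactly as the following cards explain.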

37
Q

What is the goal of the minimum trick in LP?

A
  • To model the function: x = min {x1, …, xn}
  • Using only linear constraints and binary variables
  • Needed because min() is nonlinear
38
Q

In the min trick for the objective x = min {x1, x2, …, xn}
How do you enforce that x ≤ xi for all i?

A

Add constraint: xi ≥ x for all i

39
Q

In the min trick for the objective x = min {x1, x2, …, xn}
How do you make x equal to one specific xi?

A

* Introduce binary yi ∈ {0,1}
* Add: x ≥ xi − M·(1 − yi) for all i
* Also add: ∑ yi = 1

40
Q

In the min trick for the objective x = min {x1, x2, …, xn}
What does the constraint ∑ yi = 1 do?

A

* Ensures that exactly one yi equals 1, i.e. exactly one xi is selected
* So that x becomes equal to the selected xi

41
Q

In the min trick for the objective x = min {x1, x2, …, xn}
Why is the big-M term used in this minimum trick?

A

* To deactivate the lower-bound constraints for the unselected i (yi = 0): x ≥ xi − M then holds trivially for large M
* And to force x ≥ xi when yi = 1, which together with x ≤ xi gives x = xi