Lecture 10 | Tricks from LP with applications: Flashcards
Before lecture 10 we encountered linear and quadratic programs modeling metabolism such as FBA, FVA, ROOM, MOMA, etc.
Why consider more complex objectives or constraints?
- Better capture biological reality
- Model behaviors like enzyme efficiency or regulatory logic
What is the tradeoff of using complex objective functions?
- Increased computational complexity
- Need for transformations to LP or MILP
What is an example of transforming a non-linear problem to LP?
- Flux Coupling Analysis (linear fractional program)
- Transformed using Charnes-Cooper transformation
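The Charnes-Cooper transformation can be sketched as follows (a generic linear-fractional program, not the specific FCA formulation; symbols c, d, α, β are generic placeholders):

```latex
% Charnes-Cooper: linear-fractional program -> LP (sketch)
% Original (denominator assumed positive on the feasible set):
%   max (c^T x + alpha) / (d^T x + beta)  s.t.  A x <= b,  x >= 0
% Substitute t = 1 / (d^T x + beta) > 0 and y = t x:
\begin{align*}
\max_{y,\,t}\quad & c^{\top} y + \alpha\, t \\
\text{s.t.}\quad  & A y \le b\, t, \\
                  & d^{\top} y + \beta\, t = 1, \\
                  & y \ge 0,\; t \ge \varepsilon .
\end{align*}
% Recover x = y / t from the LP optimum.
```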
What is the goal of pFBA?
Aims to determine the subset of genes and proteins that may contribute to the most efficient metabolic topology under the given growth conditions
* Identify minimal set of active enzymes, most efficient subset of pathways
* Model efficient enzyme usage under growth constraints
* Mimics minimal enzymatic cost for maximal growth
What modifications are made to reactions in pFBA?
- Reversible reactions are split into two irreversible reactions
- (though there is a trick, covered below, for handling the LP without splitting)
What are the two steps of pFBA?
- Step 1: Solve FBA to determine the optimal biomass flux vbio,opt
- Step 2: Minimize total flux of gene-associated reactions under steady state, with growth fixed at vbio,opt
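The two steps can be sketched on a toy network (hypothetical stoichiometry, all reactions irreversible so no splitting is needed), using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): metabolites A, B; reactions
#   v1: -> A,   v2: -> B,   v3: B -> A,   vbio: A -> (biomass)
# The route v2 + v3 duplicates in two steps what v1 does in one.
N = np.array([[1, 0, 1, -1],    # mass balance for A
              [0, 1, -1, 0]])   # mass balance for B
bounds = [(0, 10)] * 4          # 0 <= v <= vmax

# Step 1: FBA - maximize biomass flux (linprog minimizes, so negate)
fba = linprog(c=[0, 0, 0, -1], A_eq=N, b_eq=[0, 0], bounds=bounds)
v_bio_opt = fba.x[3]

# Step 2: pFBA - fix growth at its optimum, minimize total flux
N2 = np.vstack([N, [0, 0, 0, 1]])           # extra row: vbio = v_bio_opt
pfba = linprog(c=[1, 1, 1, 1], A_eq=N2, b_eq=[0, 0, v_bio_opt],
               bounds=bounds)
print(v_bio_opt, pfba.fun, pfba.x)  # the one-step route v1 carries the flux
```

At the pFBA optimum the two-step route (v2, v3) carries no flux, which is exactly the "enzymatically less efficient" classification above.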
How can genes be classified based on pFBA?
- Essential
- pFBA optima
- Enzymatically Less Efficient (ELE)
- Metabolically Less Efficient (MLE)
- pFBA no-flux
pFBA classifications - what does ELE mean?
- Enzymatically Less Efficient
- there are alternative pathways that meet the same need in fewer steps
pFBA classifications - what does MLE mean?
- Metabolically less efficient
- their use would lead to reduced growth
pFBA classifications - what does pFBA no-flux mean?
No flux in the experimental conditions
What is the pFBA optimization problem (with split reversible reactions)?
- min ∑ vi (for gene-associated reactions)
- s.t. Nv = 0
- vbio = vbio,lb
- 0 ≤ v ≤ vmax
How can a flux minimization problem be handled without reaction splitting?
- Replace vi with vi+ - vi-
- Enforce vi+, vi- ≥ 0
Consider the pFBA LP without splitting reversible reactions.
minv ∑ |vi|
s.t.
Nv = 0
vbio = vbio,lb
vmin ≤ v ≤ vmax
Why can’t LP be directly applied to minimize ∑|vi|?
- Because vi can be negative
- the absolute value function is neither additive nor homogeneous → nonlinear!
In pFBA, how can we transform vi to make it LP-compatible?
Introduce the following substitution with two new non-negative variables:
vi = vi+ − vi−
→ |vi| = vi+ + vi−
Such that: vi+, vi− ≥ 0
∑|vi| = ∑(vi+ + vi−) → objective is now linear
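A minimal sketch of this substitution (with a single toy constraint v1 − v2 = 3 standing in for the metabolic constraints):

```python
from scipy.optimize import linprog

# Variables: [v1p, v1m, v2p, v2m], all >= 0
# Objective: |v1| + |v2| = v1p + v1m + v2p + v2m
c = [1, 1, 1, 1]
# Toy constraint (for illustration): (v1p - v1m) - (v2p - v2m) = 3
A_eq = [[1, -1, -1, 1]]
res = linprog(c, A_eq=A_eq, b_eq=[3], bounds=[(0, None)] * 4)
v1 = res.x[0] - res.x[1]
v2 = res.x[2] - res.x[3]
print(res.fun, v1, v2)   # v1 = 3, v2 = 0; at most one of vi+, vi- is positive
```

Note that no constraint forbids vi+ and vi− from being positive simultaneously; as the next card argues, optimality alone rules that out.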
When using the trick for making problems minimizing all reaction fluxes without having to split reversible reactions by introducing vi = vi+ − vi−…
Why can we assume at most one of vi+ or vi− is positive at optimality?
Because the substitution vi = vi+ − vi− lets us subtract min(vi+, vi−) from both while keeping vi unchanged.
→ if both were positive, the objective could still be reduced → contradicts optimality
→ the optimal solution therefore forces at least one of the two to be zero
—
Very elegant idea:
If both are positive, you can subtract the smaller of the two from both (i.e., move v closer to zero) → reduces the objective value
So at optimality, one of the two is always zero
In other words:
You never need to explicitly add a constraint forcing them to not be positive simultaneously
It emerges naturally from the LP optimality condition
–
from slide:
Suppose that both vi+ and vi− are non-zero (positive) and let the smaller be of value δ. Subtracting δ from both does not affect vi = vi+ − vi−.
However, it reduces the value of the objective vi+ + vi− by 2δ.
contradicts the optimality!
What is the form of a minmax problem?
- minx maxk∈K ∑ ckj xj
- k∈K indexes multiple cost functions; for each k there is a different linear cost.
- The objective is to find the x that makes the worst of these values as small as possible.
- ie trying to minimize the maximum over several linear expressions!
Consider the minmax problem:
minx maxk∈K ∑ ckj xj
st:
Ax ≤ b
x ≥ 0
How can the minmax problem be turned into LP?
- Introduce an additional decision variable, eg z
- Add constraints: ∑ ckj xj ≤ z for all k ∈ K
- These constraints force z to be at least the maximum over all k; minimizing z makes it exactly that maximum
- New objective: minimize z over x,z
What is an everyday example of a min–max problem?
A bakery delivers to 3 cafés
Each café wants a different amount of baked goods (e.g., 3x₁ + 2x₂)
You can’t perfectly satisfy all
Goal: Choose x₁ and x₂ to minimize the maximum dissatisfaction
Turns into:
min z
s.t. each café’s dissatisfaction ≤ z
Why would you use min–max in this bakery example?
* To distribute resources fairly
* Prevent any one café from being too disappointed
* Smooth out extremes
* Objective: smallest worst-case cost
In detail:
🍞 Scenario: You run a bakery
You bake:
* Baguettes (x₁)
* Croissants (x₂)
You want to deliver these to 3 cafes, and each one wants a different amount:
* Café A - 3x₁ + 2x₂
* Café B - 2x₁ + 5x₂
* Café C - 4x₁ + 1x₂
But due to delivery delays, you might not satisfy every café perfectly.
You don’t want any one café to be too disappointed.
🎯 Goal:
Minimize the maximum dissatisfaction of any café.
This is a min–max problem:
minx₁,x₂ max {
3x₁ + 2x₂, // Café A
2x₁ + 5x₂, // Café B
4x₁ + 1x₂ // Café C
}
You’re trying to find x₁ and x₂ (how many baguettes and croissants to bake) that make the worst café the least unhappy.
💡 Why use min–max?
This is a fairness strategy:
* You’re not trying to make one café super happy at the cost of others
* You’re trying to balance deliveries, so no café is hit the hardest
* It’s like smoothing out the extremes.
✅ LP transformation
Introduce a variable z = “maximum dissatisfaction”
Add constraints:
* 3x₁ + 2x₂ ≤ z
* 2x₁ + 5x₂ ≤ z
* 4x₁ + 1x₂ ≤ z
Now you can minimize z subject to those constraints (and whatever other constraints you have, like time or flour limits), and you’ve turned it into an LP.
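The bakery LP can be sketched directly with scipy; the minimum production requirement x₁ + x₂ ≥ 10 is a hypothetical addition (standing in for "other constraints, like time or flour limits") so the problem is not trivially solved by baking nothing:

```python
from scipy.optimize import linprog

# Variables: [x1, x2, z]; minimize z (the worst cafe's dissatisfaction)
c = [0, 0, 1]
A_ub = [
    [3, 2, -1],    # cafe A: 3*x1 + 2*x2 <= z
    [2, 5, -1],    # cafe B: 2*x1 + 5*x2 <= z
    [4, 1, -1],    # cafe C: 4*x1 + 1*x2 <= z
    [-1, -1, 0],   # hypothetical requirement: x1 + x2 >= 10
]
b_ub = [0, 0, 0, -10]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # worst-case dissatisfaction z = 30
```

At the optimum two cafés tie at the worst value, which is typical of min-max solutions: the maximum is pushed down until several expressions meet.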
What is an example of min–max in metabolic modeling?
- You want a mutant to stay close to wild-type behavior
- Instead of minimizing total deviation (MOMA), minimize largest individual rxn change
- avoids big deviations in any one reaction
- Useful when one large change would be biologically unrealistic
What is an example of a discontinuous variable and what is the trick to incorporate these into LPs?
- x = 0 or l ≤ x ≤ u
- Linear programming only allows variables that take continuous values
- → Introduce binary variable eg y ∈ {0,1}
- Constraint: ly ≤ x ≤ uy
We then get:
- y = 0 ⇒ x = 0
- y = 1 ⇒ l ≤ x ≤ u
e.g. OptReg → the problem becomes a MILP
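A sketch of the semi-continuous trick with hypothetical numbers (x = 0 or 2 ≤ x ≤ 5, maximized under a made-up resource cap x ≤ 4.5), using scipy's MILP solver:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Semi-continuous x: x = 0 or l <= x <= u, via binary y and l*y <= x <= u*y.
# Toy objective (hypothetical): maximize x subject to a resource cap x <= 4.5.
l, u = 2.0, 5.0
c = np.array([-1.0, 0.0])                  # variables [x, y]; milp minimizes
con = LinearConstraint(
    np.array([[1.0, -u],                   # x - u*y <= 0   (x <= u*y)
              [-1.0, l],                   # -x + l*y <= 0  (l*y <= x)
              [1.0, 0.0]]),                # resource cap: x <= 4.5
    -np.inf, np.array([0.0, 0.0, 4.5]))
res = milp(c, constraints=con, integrality=np.array([0, 1]),
           bounds=Bounds([0, 0], [u, 1]))
print(res.x)   # x = 4.5, y = 1 (the [l, u] branch is active)
```

Tightening the cap below l = 2 would force the other branch, y = 0 and x = 0.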
When do fixed costs arise in metabolic modelling?
- Modeling set-up costs, e.g. for newly synthesized enzymes
- Modeling of two mutually exclusive situations
What is the LP and cost function for a fixed cost setup? Why is this not solvable as is with LP?
Max over x of C(x)
St:
ai x + ∑j aij wj ≤ bi
x, wj ≥ 0
Where:
C(x) = 0 if x = 0
C(x) = k + c*x if x > 0
not linear and not continuous → not an LP
Consider the fixed cost setup:
Max over x of C(x)
St:
ai x + ∑j aij wj ≤ bi
x, wj ≥ 0
Where:
C(x) = 0 if x = 0
C(x) = k + c*x if x > 0
How can this be modeled in LP?
* Introduce binary indicator variable y
x = 0 → y = 0
x > 0 → y =1
* Link with x using the constraint: εy ≤ x ≤ My
* Cost function becomes: C(x,y) = ky + cx
Final MILP:
max over x, y, w of C(x,y) = ky + cx
s.t.
ai x + ∑j aij wj ≤ bi
εy ≤ x ≤ My
x, wj ≥ 0
y ∈ {0,1}
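A sketch of the fixed-charge linking constraints with hypothetical numbers (k = 10, c = 2, and a made-up demand x ≥ 3; shown as a cost *minimization* so the trick binds and y is not set gratuitously):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Fixed-charge cost: C(x) = 0 if x = 0, else k + c*x.
# MILP version with binary y: minimize k*y + c*x, with eps*y <= x <= M*y.
k, cost, M, eps = 10.0, 2.0, 100.0, 1e-3
c = np.array([cost, k])                      # variables [x, y]
con = LinearConstraint(
    np.array([[1.0, -M],                     # x <= M*y  (x > 0 forces y = 1)
              [-1.0, eps],                   # eps*y <= x
              [-1.0, 0.0]]),                 # demand (hypothetical): x >= 3
    -np.inf, np.array([0.0, 0.0, -3.0]))
res = milp(c, constraints=con, integrality=np.array([0, 1]),
           bounds=Bounds([0, 0], [np.inf, 1]))
print(res.x, res.fun)   # x = 3, y = 1, cost = 10 + 2*3 = 16
```

The constraint x ≤ My is what does the work: any positive x forces y = 1 and hence incurs the fixed charge k.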
What is an either-or constraint? Give an example. Is this an LP?
Exactly one of two (or more) constraints must be satisfied
maxx cjxj
s.t.
∑ a1jxj ≤ 𝑏1
∑ a2jxj ≤ 𝑏2
Where exactly one of the conditions (1) or (2) must hold
Not an LP - in linear programming, all constraints must be satisfied.
How can a problem with either-or constraints be transformed to become an LP?
One selects sufficiently large bounds M1 and M2 (upper bounds on how much each constraint can be violated), chosen as tight as possible, such that the left-hand sides of the constraints are always smaller than b1 + M1 and b2 + M2, respectively.
* Introduce binary y
* Constraint 1: ∑ a1j xj ≤ b1 + M1y
* Constraint 2: ∑ a2j xj ≤ b2 + M2(1 - y)
How does the binary variable y work in the transformation for either-or constraints seen below?
Introduce binary y
Constraint 1: ∑ a1j xj ≤ b1 + M1y
Constraint 2: ∑ a2j xj ≤ b2 + M2(1 - y)
- y = 0 ⇒ Constraint 1 enforced, Constraint 2 relaxed (non-binding)
- y = 1 ⇒ Constraint 2 enforced, Constraint 1 relaxed (non-binding)
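A sketch of the either-or transformation on a made-up instance (maximize x1 + x2 with 0 ≤ x1, x2 ≤ 3, where exactly one of x1 + x2 ≤ 4 or 2x1 + x2 ≤ 9 must hold):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

M1 = M2 = 10.0                               # big-M bounds (chosen generously)
c = np.array([-1.0, -1.0, 0.0])              # variables [x1, x2, y]; milp minimizes
con = LinearConstraint(
    np.array([[1.0, 1.0, -M1],               # x1 + x2 <= 4 + M1*y
              [2.0, 1.0, M2]]),              # 2*x1 + x2 <= 9 + M2*(1 - y)
    -np.inf, np.array([4.0, 9.0 + M2]))
res = milp(c, constraints=con, integrality=np.array([0, 0, 1]),
           bounds=Bounds([0, 0, 0], [3, 3, 1]))
print(res.x, -res.fun)   # y = 1: constraint (2) is enforced, x1 = x2 = 3
```

The solver picks y = 1, since enforcing constraint (2) allows the larger objective value 6, whereas enforcing constraint (1) caps it at 4.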
What is an OR constraint?
eg:
∑ a1j xj ≤ b1
∑ a2j xj ≤ b2
Where at least one of multiple constraints must be satisfied
Consider a problem with OR constraints - i.e at least one of multiple constraints must be satisfied. eg:
∑ a1j xj ≤ b1
∑ a2j xj ≤ b2
How can these OR constraints be modeled in an LP?
One selects sufficiently large bounds M1 and M2, as for either-or constraints
* Introduce binary variables y, z
* Relax at most one of the constraints, depending on the y, z values
Add following constraints
* ∑ a1j xj ≤ b1 + M1y
* ∑ a1j xj ≤ b1 + M1z
* ∑ a2j xj ≤ b2 + M2(1 - y)
* ∑ a2j xj ≤ b2 + M2(1 - z)
What is a conditional constraint? Give an example
- p: ∑ a1j xj ≤ b1
- q ∑ a2j xj ≤ b2
- Where if p is satisfied, q must also be satisfied.
- Written logically: p ⇒ q is equivalent to ~p ∨ q
- Leads to the statement: ∑ a1j xj > b1 is satisfied or ∑ a2j xj ≤ b2 is satisfied
With a conditional constraint, eg:
If condition p is satisfied, then q must also be satisfied
Written logically: p ⇒ q is equivalent to ~p ∨ q
We can reframe this to the statement: ∑ a1j xj > b1 is satisfied OR ∑ a2j xj ≤ b2 is satisfied
What is the problem here? How can this be modeled in LP?
- The constraint with > is not closed (strict inequalities are not allowed in LP)
- We resolve this by introducing a small ε and closing the inequality:
- ∑ a1j xj ≥ b1 + ε
- ∑ a2j xj ≤ b2
- We can then proceed as discussed for OR logic (binary y variable, big-M bounds)
Why use piecewise linear approximations?
To model non-linear objectives in LP
What is a piecewise linear function?
* A function defined by multiple linear segments
* Each segment applies to a different range of x
* Example: f(x) = a₁x + b₁ if x ∈ [x₁, x₂]; a₂x + b₂ if x ∈ [x₂, x₃]
* Can approximate a non-linear function
How can you encode a piecewise linear function in MILP? How are functions approximated?
- Split x into segments
- Use binary variables to select active segment
- Enforce constraints for each segment
- Only one segment is active at a time
How are functions approximated?
- Use breakpoints
- Represent other points as convex combinations of neighboring breakpoints
In piecewise linear formulations, what is an example function with breakpoints?
* f1(x) = 0.5x for 0 ≤ x ≤ 1
* f2(x) = 1.5x - 1 for 1 ≤ x ≤ 2
* f3(x) = 3x - 4 for 2 ≤ x ≤ 4
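The breakpoint/convex-combination encoding of exactly this example function can be sketched as follows (its breakpoints are x = 0, 1, 2, 4 with values f = 0, 0.5, 2, 8; evaluating at x = 2.5 is an added illustration):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Breakpoints of the example function and their f-values
xb = np.array([0.0, 1.0, 2.0, 4.0])
fb = np.array([0.0, 0.5, 2.0, 8.0])
x_target = 2.5                      # point at which to evaluate f

# Variables: [lam0..lam3, y0..y2]; lam = convex-combination weights,
# y = binary segment selectors (y_s = 1 means x lies in segment s).
c = np.concatenate([fb, np.zeros(3)])              # objective: sum lam_i * f_i
A_eq = np.array([
    np.concatenate([np.ones(4), np.zeros(3)]),     # sum lam_i = 1
    np.concatenate([xb, np.zeros(3)]),             # sum lam_i * x_i = x_target
    np.concatenate([np.zeros(4), np.ones(3)]),     # exactly one active segment
])
# Adjacency: lam_i may be positive only if an adjacent segment is selected.
A_adj = np.array([
    [1, 0, 0, 0, -1, 0, 0],        # lam0 <= y0
    [0, 1, 0, 0, -1, -1, 0],       # lam1 <= y0 + y1
    [0, 0, 1, 0, 0, -1, -1],       # lam2 <= y1 + y2
    [0, 0, 0, 1, 0, 0, -1],        # lam3 <= y2
], dtype=float)
cons = [LinearConstraint(A_eq, [1, x_target, 1], [1, x_target, 1]),
        LinearConstraint(A_adj, -np.inf, np.zeros(4))]
res = milp(c, constraints=cons, integrality=np.array([0] * 4 + [1] * 3),
           bounds=Bounds(np.zeros(7), np.ones(7)))
print(res.fun)   # f(2.5) = 3*2.5 - 4 = 3.5
```

The binaries select segment [2, 4], and the weights λ₂ = 0.75, λ₃ = 0.25 interpolate f between the breakpoints (2, 2) and (4, 8).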
In piecewise linear formulations, how is a convex piecewise linear function (the pointwise maximum of its segments fi) minimized?
* Introduce variable t
* Add constraints: fi(x) ≤ t for all i
* Minimize t
In piecewise linear formulations, how do you model x = min(x1, …, xn)?
* xi ≥ x for all i
* Use binary yi to pick minimum
* ∑ yi = 1
* x ≥ xi - M(1 - yi)
What is the goal of the minimum trick in LP?
- To model the function: x = min {x1, …, xn}
- Using only linear constraints and binary variables
- Needed because min() is nonlinear
In the min trick for the objective x = min {x1, x2, …, xn}
How do you enforce that x ≤ xi for all i?
Add constraint: xi ≥ x for all i
In the min trick for the objective x = min {x1, x2, …, xn}
How do you make x equal to one specific xi?
* Introduce binary yi ∈ {0,1}
* Add: x ≥ xi − M·(1 − yi) for all i
* Also add: ∑ yi = 1
In the min trick for the objective x = min {x1, x2, …, xn}
What does the constraint ∑ yi = 1 do?
* Ensures that exactly one xi is selected
* x then becomes equal to that selected xi
In the min trick for the objective x = min {x1, x2, …, xn}
Why is the big-M term used in this minimum trick?
* To deactivate the lower-bound constraint x ≥ xi whenever yi = 0
* And to force x ≥ xi (hence x = xi, together with x ≤ xi) when yi = 1
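The whole min trick can be checked on a tiny instance (hypothetical fixed values x1 = 3, x2 = 5; we *minimize* x, so without the big-M lower bounds x would collapse to 0 instead of the minimum):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Model x = min(x1, x2) for fixed (hypothetical) x1 = 3, x2 = 5.
# Variables: [x, y1, y2].
x1, x2, M = 3.0, 5.0, 100.0
c = np.array([1.0, 0.0, 0.0])        # minimize x
con = LinearConstraint(
    np.array([[1.0, 0.0, 0.0],       # x <= x1
              [1.0, 0.0, 0.0],       # x <= x2
              [-1.0, M, 0.0],        # x >= x1 - M*(1 - y1)
              [-1.0, 0.0, M],        # x >= x2 - M*(1 - y2)
              [0.0, 1.0, 1.0]]),     # y1 + y2 = 1
    np.array([-np.inf, -np.inf, -np.inf, -np.inf, 1.0]),
    np.array([x1, x2, M - x1, M - x2, 1.0]))
res = milp(c, constraints=con, integrality=np.array([0, 1, 1]),
           bounds=Bounds([0, 0, 0], [np.inf, 1, 1]))
print(res.x)   # x = 3; y1 = 1 selects x1 as the minimum
```

Selecting y2 = 1 would force x ≥ 5, which contradicts x ≤ 3, so the solver must pick y1 = 1 and x is pinned to min(x1, x2) = 3.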