Exam Flashcards
What is a convex set? [1-INTRO]
A set C is convex if for any x1, x2 ∈ C and any lambda ∈ [0, 1], it holds that lambda * x1 + (1 - lambda) * x2 ∈ C
What is a convex function? [1-INTRO]
A function f is convex if dom f is convex and for any x, y ∈ dom f and lambda ∈ [0, 1] it holds that
f(lambda * x + (1 - lambda) * y) <= lambda * f(x) + (1 - lambda) * f(y)
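A quick numeric illustration (a sketch I'm adding, with f(x) = x^2 as the example): checking the defining inequality at sampled lambdas.
```python
import numpy as np

# Convexity check for f(x) = x^2: f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y)
f = lambda x: x ** 2
x, y = -1.0, 3.0
for lam in np.linspace(0.0, 1.0, 5):
    lhs = f(lam * x + (1 - lam) * y)
    rhs = lam * f(x) + (1 - lam) * f(y)
    assert lhs <= rhs + 1e-12   # holds for every lam in [0, 1]
print("convexity inequality holds at sampled points")
```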
What are steps to standardise LP? [2-GEOMETRY]
1 Maximization → Minimization
2 ≤ inequalities → equalities (slack variables)
3 ≥ inequalities → equalities (surplus variables)
4 Negative right hand sides → multiply by −1
5 Eliminate free variables
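A minimal numpy sketch of steps 1-3 on illustrative data (the names A_ub, b_ub are my assumptions): converting max c^Tx s.t. Ax ≤ b, x ≥ 0 into standard form.
```python
import numpy as np

# max c^T x  s.t.  A_ub @ x <= b_ub, x >= 0   (illustrative data)
A_ub = np.array([[1.0, 2.0],
                 [3.0, 1.0]])
b_ub = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

m, n = A_ub.shape
c_std = np.concatenate([-c, np.zeros(m)])   # step 1: max -> min (negate c)
A_std = np.hstack([A_ub, np.eye(m)])        # step 2: add slack variables s >= 0
b_std = b_ub.copy()                         # b already >= 0, so step 4 is not needed
# standard form: min c_std^T z  s.t.  A_std @ z = b_std, z = [x; s] >= 0
```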
What is a polyhedron? [2-GEOMETRY]
Convex set that is the intersection of a finite number of hyperplanes and halfspaces
What is an extreme point of polyhedron P? [2-GEOMETRY]
A point x ∈ P is an extreme point if we cannot find y, z ∈ P, both different from x, and lambda ∈ [0, 1], such that x = lambda * y + (1 - lambda) * z
[Theorem: Suppose P has at least one extreme point. Then, either the optimal cost is −∞ or there exists an extreme point that is optimal.]
What is basic solution and BFS? [2-GEOMETRY]
Let P be a polyhedron. The vector x ∈ Rn is a basic solution if
1) All equality constraints are active at x, and
2) Out of the constraints that are active, n of them are linearly independent
BFS - A basic feasible solution (BFS) is a basic solution that satisfies all of the constraints
What is a vertex of polyhedron P? [2-GEOMETRY]
Point x is a vertex of P if there exists c ∈ Rn such that x is the unique optimal solution of the problem
min c^T*y subject to y ∈ P
What is the connection between Extreme Pt, Vertex and BFS? [2-GEOMETRY]
Let P be a non-empty polyhedron and let x ∈ P. Then the following are equivalent: (i) x is an extreme point; (ii) x is a vertex; (iii) x is a BFS
[Property of being a BFS is representation independent] [P has a BFS <=> P does not contain a line (Th)]
When is x a basic solution of the polyhedron P = {x : Ax = b, x ≥ 0}? Let A = [A1, A2, …, An] (Ai: ith column of A) [2-GEOMETRY]
Theorem: Assume that rank(A) = m. A vector x ∈ Rn is a basic solution if and only if Ax = b, and there exist indices B(1), . . . , B(m) such that:
(a) The columns AB(1), . . . , AB(m) are linearly independent
(b) x_i = 0 for all i ∉ {B(1), …, B(m)}
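A numpy sketch of the theorem on illustrative data: choose the basis indices, zero the non-basic variables, and solve B x_B = b.
```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])   # rank(A) = m = 2
b = np.array([4.0, 6.0])
basis = [2, 3]                          # indices B(1), B(2): independent columns

B = A[:, basis]
x = np.zeros(A.shape[1])
x[basis] = np.linalg.solve(B, b)        # x_B = B^(-1) b, all other x_i = 0
print(x, "-> BFS" if (x >= 0).all() else "-> basic but infeasible")
```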
When a basic solution is called degenerate? [2-GEOMETRY]
A basic solution x is degenerate if the cardinality of the set {i : a_i^T * x = b_i } is greater than n (more than n constraints are active)
[ Degeneracy may depend on the particular representation of a polyhedron!]
In the simplex method, θ* can equal 0 → new BFS = current BFS → finite termination is not guaranteed: cycling is possible.
What is the standard form of an LP? [3 - Simplex 1]
minimize c^T*x
subject to Ax = b, x≥0
with data A ∈ Rm×n, c∈Rn, b∈Rm (b≥0)
We assume:
* rows of A are linearly independent (otherwise, the constraints are redundant or inconsistent)
* this requires # of variables = n ≥ m = # of equations
=> rank(A) = m
What are basic variables in terms of A matrix? [3 - Simplex 1]
The basic variables are those corresponding to m columns of A that form an invertible m×m basis matrix B. The remaining columns of A correspond to the non-basic variables.
Note - By construction, the non-basic variables are always zero, but the basic variables can be zero or non-zero.
Write the Basic Representation [3 - Simplex 1]
Split x = (x_B, x_N) and A = [B, N]. Then Ax = b gives
x_B = B^(-1)b − B^(-1)N x_N
and the objective becomes
c^Tx = c_B^T B^(-1)b + (c_N − N^T B^(-T) c_B)^T x_N
(the coefficient vector of x_N is exactly the reduced-cost vector defined below)
How do we define the reduced costs vector? [3 - Simplex 1]
Definition: Let x be a basic solution, let B be an associated basis matrix, and let c_B be the vector of costs of the basic variables.
For each j, we define the reduced cost rj of the variable xj according to
r_j = c_j − (c_B)^T B^(−1) A_j.
when r ≥ 0 we have found the optimal solution! If not, we increase a variable with negative reduced cost!
note - for basic variables the reduced costs equal 0:
r_B = c_B − B^T (B^(−1))^T c_B = 0
for NBVs:
r_N = c_N − N^T (B^(−1))^T c_B
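A sketch computing the reduced-cost vector from these formulas (data illustrative; p^T = c_B^T B^(-1)):
```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])
basis = [2, 3]

B = A[:, basis]
p = np.linalg.solve(B.T, c[basis])   # solves B^T p = c_B, i.e. p^T = c_B^T B^(-1)
r = c - A.T @ p                      # r_j = c_j - c_B^T B^(-1) A_j
print(r)                             # r[basis] == 0; negative entries can improve
```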
Theorem: Consider a basic feasible solution x associated with a basis matrix B, and let r be the corresponding vector of reduced costs. [3 - Simplex 1]
1. If r ≥ 0, then x is optimal.
2. If x is optimal and non-degenerate, then r ≥ 0.
What is the expression for the basic direction? [3 - Simplex 1]
It is the direction along which we move to reduce the cost (we move as far as feasibility allows). Feasibility requires
A(x + θd) = b
Since θ > 0, we require Ad = 0, i.e.,
0 = Ad = B d_B + A_j
where we set d_j = 1 and d_i = 0 for every other non-basic i. The unique solution is d_B = −B^(−1) A_j.
The vector d = (d_B, d_N) is called the j-th basic direction.
What are key steps of Simplex Method? [3 - Simplex 1]
Identify Basis, BFS
Calculate Reduced Costs
Determine Direction, Feasibility
Adjust Basis, Recalculate
for adjusting the basis - the smallest-subscript (Bland's) pivoting rule prevents cycling; for the step length we select the minimum ratio θ* = min{ −x_B(i)/d_B(i) : d_B(i) < 0 } (see the sketch below)
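A sketch of one full iteration on illustrative data (the entering rule here is "most negative reduced cost" rather than Bland's, for brevity):
```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])
basis = [2, 3]                           # start from the slack basis

B = A[:, basis]
x_B = np.linalg.solve(B, b)              # current BFS
p = np.linalg.solve(B.T, c[basis])
r = c - A.T @ p                          # reduced costs
j = int(np.argmin(r))                    # entering variable
d_B = -np.linalg.solve(B, A[:, j])       # j-th basic direction
ratios = np.where(d_B < 0, -x_B / d_B, np.inf)
l = int(np.argmin(ratios))               # leaving position (min ratio test)
theta = ratios[l]                        # step length theta*
basis[l] = j                             # adjust basis, then recalculate
print("entering:", j, "theta*:", theta, "new basis:", basis)
```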
How does the full tableau look like? [4 - Simplex 2]
Row 0: −c_B^T B^(-1)b (negative of the current cost) | reduced costs c^T − c_B^T B^(-1)A
Rows 1..m: B^(-1)b (values of the basic variables) | B^(-1)A
How to work with tableau? [4 - Simplex 2]
Initialize Tableau: Begin with the tableau for the current basis and BFS.
Evaluate Optimality: If all reduced costs are non-negative, the solution is optimal; stop.
Select Pivot: Choose a column with a negative reduced cost for improvement. If that column has no positive entry, the optimal cost is −∞ (unbounded).
Determine Pivot Row: Use the minimum ratio test to find which variable leaves the basis.
Update Tableau: Perform row operations to pivot, making the new basis element canonical.
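A sketch of the row operations in the last step (the tableau layout and function name are my assumptions):
```python
import numpy as np

def pivot(T, row, col):
    """Make column `col` the unit vector e_row by elementary row operations."""
    T = T.astype(float)
    T[row] /= T[row, col]                # scale the pivot row
    for i in range(T.shape[0]):
        if i != row:
            T[i] -= T[i, col] * T[row]   # eliminate `col` from every other row
    return T
```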
How to find an initial BFS? Also share the observations [4 - Simplex 2]
Introduce a vector of artificial variables y ∈ Rm and use the simplex algorithm to solve the auxiliary problem
minimize e^Ty (e: the vector of ones)
subject to Ax+y=b (AP)
x,y ≥ 0
[Note: Initialization is easy for (AP) - take (x, y) = (0, b).]
Observations from the auxiliary problem:
* If x is feasible for (LP), then (x,0) is feasible for (AP) with objective value 0.
* Thus, if optimal objective of (AP) is > 0, we can conclude that (LP) is infeasible.
Suppose optimal objective is zero (must have y = 0):
- if no artificial variables are in the basis - we have found an initial BFS for (LP).
- if some are still in the basis - we must drive them out of it
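A sketch of Phase I on illustrative data, solving (AP) directly with scipy (note b ≥ 0, so (x, y) = (0, b) is feasible):
```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 3.0],
              [-1.0, 2.0, 6.0]])
b = np.array([3.0, 2.0])

m, n = A.shape
c_aux = np.concatenate([np.zeros(n), np.ones(m)])   # minimize e^T y
A_aux = np.hstack([A, np.eye(m)])                   # A x + I y = b
res = linprog(c_aux, A_eq=A_aux, b_eq=b, bounds=(0, None))
if res.fun > 1e-9:
    print("the original LP is infeasible")
else:
    print("feasible starting point for (LP):", res.x[:n])
```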
What is the motivation of duality? [5 - Duality]
Relax the constraints Ax = b by introducing a penalty
so, we get
g(p) = minimize c^Tx + p^T(b − Ax) subject to x ≥ 0
Lower bound property: g(p) ≤ c^Tx* (the optimal cost of the LP) for every p, so the dual problem searches for the tightest such bound
How does dual problem look like? [5 - Duality]
maximize b^Tp
subject to A^⊤p ≤ c
By strong duality, the optimal cost in the dual problem is equal to the optimal cost c^Tx* in the primal problem
What is weak duality theorem? [5 - Duality]
Theorem: If x is primal feasible and p is dual feasible then b^Tp ≤ c^Tx.
Corollary: If x is primal feasible, p is dual feasible, and b^Tp = c^Tx, then x is optimal in the primal and p is optimal in the dual.
What is strong duality theorem? [5 - Duality]
If a linear programming problem has an optimal solution, so does the dual, and the respective optimal costs are equal.
What is economic interpretation of dual? [5 - Duality]
Each component pi of the optimal dual vector can be interpreted as the marginal cost (or shadow price) per unit increase of the ith requirement bi.
Complementary Slackness theorem [5 - Duality]
Let x primal feasible and p dual feasible. Then, x and p are optimal for the respective problems if and only if
p_i ((a_i)^T x − b_i) = 0, ∀i
x_j (c_j − p^T A_j) = 0, ∀j
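A numeric sanity check of strong duality and complementary slackness on an illustrative standard-form LP (the condition p_i((a_i)^T x − b_i) = 0 is automatic for equality constraints):
```python
import numpy as np
from scipy.optimize import linprog

# Primal: min c^T x  s.t.  A x = b, x >= 0;  Dual: max b^T p  s.t.  A^T p <= c
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 1.0, 1.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None))   # p is free

x, p = primal.x, dual.x
print(np.isclose(c @ x, b @ p))            # strong duality: c^T x* = b^T p*
print(np.allclose(x * (c - A.T @ p), 0))   # complementary slackness in j
```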
How does Problem Reformulation look for large-scale optimisation? [6 - Large Scale]
it is on the list == revise it!!!
Which algorithm helps to solve large scale problems and what are key steps? [6 - Large Scale]
Answer - Benders Decomposition
Initialize Master Problem: Solve relaxed master problem.
Solve Dual Subproblems: For each scenario, solve the dual.
Check Optimality: Assess feasibility and optimality of solutions.
Add Cuts: Introduce optimality and feasibility cuts as needed.
Iterate: Refine master problem, iterate until convergence.
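A minimal sketch of this loop on a toy two-stage LP (the problem data and the single-cut scheme are illustrative assumptions, not the course's exact formulation):
```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: min x + 2y  s.t.  y >= 3 - x,  x, y >= 0
# Master keeps x and a scalar theta that under-estimates the subproblem cost.
cuts_A, cuts_b = [], []                       # optimality cuts collected so far
x_bar, theta = 0.0, 0.0                       # relaxed master solution
for _ in range(20):
    # Dual subproblem: max p * (3 - x_bar)  s.t.  0 <= p <= 2
    dual = linprog([-(3.0 - x_bar)], bounds=[(0.0, 2.0)])
    p, sub_val = dual.x[0], -dual.fun
    if sub_val <= theta + 1e-9:               # theta already tight: optimal
        break
    cuts_A.append([-p, -1.0])                 # add cut: theta >= p * (3 - x)
    cuts_b.append(-3.0 * p)
    master = linprog([1.0, 1.0], A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                     bounds=[(0, None), (0, None)])   # refine master, iterate
    x_bar, theta = master.x
print("optimal x:", x_bar, "total cost:", x_bar + sub_val)
```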
What is LSA and General Framework for it? [7 - Sensitivity]
We are given an optimal basis B and optimal solution x*
- Assume some entry of A, b or c changes, or that a new variable or constraint is added.
* Under what conditions is the current basis still optimal?
* If these conditions are violated, how to find a new optimal solution without solving the problem from scratch?
Since B is optimal in original problem (P), we have:
Feasibility: B^(-1)b >= 0
Optimality: c^T - c_B^(T)B^(-1)*A >= 0^T
LSA - Local Changes in b [7 - Sensitivity]
The change: Ax = b + δ*e_i
Changes in b affect feasibility
Optimal range for δ: the basis remains feasible (hence optimal) iff x_B(δ) = B^(-1)b + δB^(-1)e_i ≥ 0, which gives an interval of allowed δ (see the sketch below)
If δ is outside allowed range:
* Current solution is primal infeasible, but satisfies optimality conditions.
* Can apply dual simplex method starting from the current basis.
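A sketch computing the allowed interval for δ from B^(-1)b + δB^(-1)e_i ≥ 0 (data illustrative):
```python
import numpy as np

B = np.array([[1.0, 1.0],
              [1.0, 2.0]])    # optimal basis matrix (illustrative)
b = np.array([4.0, 6.0])
i = 0                         # the component of b that changes

x_B = np.linalg.solve(B, b)                   # B^(-1) b
g = np.linalg.solve(B, np.eye(len(b))[i])     # B^(-1) e_i
# Need x_B + delta * g >= 0 componentwise:
lo = max((-x_B[k] / g[k] for k in range(len(g)) if g[k] > 0), default=-np.inf)
hi = min((-x_B[k] / g[k] for k in range(len(g)) if g[k] < 0), default=np.inf)
print(f"basis stays optimal for delta in [{lo}, {hi}]")
```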
LSA - Local Changes in c [7 - Sensitivity]
Suppose cj changes to cj + δ
-> examine the optimality condition c̃^T − c̃_B^T B^(-1)A ≥ 0^T, where c̃ is the new cost vector
Changes in c affect optimality conditions
Case 1: xj nonbasic
c_B: unaffected
Only inequality affected is the one for the reduced cost of xj ; we need:
c_j + δ ≥ c_B^T B^(-1)A_j, i.e., δ ≥ −r_j
- If this condition holds, current basis remains optimal.
- Otherwise, can apply primal simplex algorithm, starting at current BFS.
Case 2: xj basic, say j = B(l) - now c_B changes, so all reduced costs are affected; the basis stays optimal iff r_i ≥ δ(B^(-1)A_i)_l for every non-basic i (otherwise, continue with the primal simplex method)
LSA - A New Variable is Added [7 - Sensitivity]
new LP
min c^Tx + c_(n+1)*x_(n+1)
Ax + A_(n+1)*x_(n+1) = b
x >= 0
Need only examine optimality conditions:
B is optimal <=> r_(n+1) = c_(n+1) − c_B^T B^(-1)A_(n+1) >= 0
if r_(n+1) < 0 - add the new column to the simplex tableau and use the primal simplex algorithm, starting from the current basis B.
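A tiny sketch of this check (data illustrative):
```python
import numpy as np

B = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # current optimal basis matrix
c_B = np.array([3.0, 1.0])
A_new = np.array([1.0, 1.0])        # column of the new variable
c_new = 2.0

p = np.linalg.solve(B.T, c_B)       # p^T = c_B^T B^(-1)
r_new = c_new - p @ A_new
print("basis still optimal" if r_new >= 0 else "run primal simplex")
```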
LSA - A New Constraint is Added [7 - Sensitivity]
The LP stays the same; just one constraint is added:
(a_(m+1))^⊤*x = b_(m+1)
If the current solution satisfies the new constraint, it remains feasible and optimal
- Otherwise, apply dual simplex (obtaining a dual BFS may be difficult).
LSA - Local Changes in A [7 - Sensitivity]
aij ← aij + δ
Assume A_j is not among the basis columns (x_j is non-basic)
Feasibility conditions - unaffected
Optimality conditions:
c_i − c_B^T B^(-1)A_i ≥ 0 for i != j (unaffected)
c_j − p^T(A_j + δe_i) ≥ 0 => r_j − δp_i ≥ 0, where p^T = c_B^T B^(-1)
If condition is violated, continue with primal simplex
How to model the requirement that event E2 can only occur if event E1 occurs? [8 - Discrete]
yj = 1 if event Ej occurs, else yj = 0
E2 occurs only if E1 occurs - use the constraint y2 ≤ y1
The capacity of a facility can only be expanded if the facility has been built - how to express this? [8 - Discrete]
x ≤ M*y, where x is the capacity expansion, y ∈ {0,1} indicates whether the facility is built, and M is an upper bound on the capacity
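A hedged sketch with scipy's MILP interface (scipy.optimize.milp, available in SciPy ≥ 1.9; all numbers are illustrative):
```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# min 5*y - 2*x  (build cost 5, unit profit 2)   variables: [x, y]
# s.t. x <= M*y,  0 <= x <= 10,  y in {0, 1}
M = 10.0
c = np.array([-2.0, 5.0])
link = LinearConstraint(np.array([[1.0, -M]]), -np.inf, 0.0)  # x - M*y <= 0
res = milp(c, constraints=[link], bounds=Bounds([0, 0], [10, 1]),
           integrality=np.array([0, 1]))       # x continuous, y binary
print(res.x)   # expect x = 10, y = 1: profit outweighs the build cost
```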
What is the key idea of Cutting Plane Algorithm? [9 - BB and Cutting Planes]
1) Solve the LP relaxation. Let x* be an optimal solution.
2) If x* is integer, STOP; x* is an optimal solution to IP.
3) If not, add a linear inequality constraint to LP relaxation that all integer solutions satisfy, but x* does not; go to Step 1.
Note: Assumes that original LP relaxation is not unbounded
Describe Gomory Cuts shortly [9 - BB and Cutting Planes]
Take a row of the optimal tableau in which a basic variable has a fractional value: x_B(i) + Σ_j ā_ij x_j = b̄_i (sum over non-basic j) with b̄_i fractional. The Gomory cut is Σ_j (ā_ij − ⌊ā_ij⌋) x_j ≥ b̄_i − ⌊b̄_i⌋: every integer feasible solution satisfies it, but the current fractional BFS violates it.
Describe BB algorithm shortly [9 - BB and Cutting Planes]
Set U = +∞ (the current best upper bound)
1 Branching. Select an active subproblem Fi
2 Pruning. If the subproblem is infeasible, delete it and go to 1.
3 Bounding. Otherwise, compute a lower bound b(Fi ) for the subproblem.
4 Pruning. If b(Fi) ≥ U, delete the subproblem and go to 1.
5 Partitioning. If b(Fi) < U, either obtain an optimal solution to the subproblem (and update U if its value is < U), or break the subproblem into further subproblems and add them to the list of active subproblems (e.g., branch on a fractional component x_j* via x_j ≤ ⌊x_j*⌋ and x_j ≥ ⌈x_j*⌉). Go back to step 1.
The algorithm stops (with optimal value U) when there are no active subproblems left.
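A compact sketch of the algorithm using LP relaxations via scipy (branching on the first fractional variable; the instance is illustrative):
```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub):
    """Minimize c^T x  s.t.  A_ub x <= b_ub, x >= 0 and integer."""
    best, U = None, np.inf
    active = [[(0, None)] * len(c)]             # per-variable (lo, hi) bounds
    while active:
        bounds = active.pop()                   # 1. select an active subproblem
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success:                     # 2. prune: infeasible
            continue
        if res.fun >= U:                        # 4. prune: lower bound >= U
            continue
        frac = [j for j, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                            # integer solution: update U
            best, U = np.round(res.x), res.fun
            continue
        j, v = frac[0], res.x[frac[0]]          # 5. partition on x_j
        lo, hi = bounds[j]
        left, right = list(bounds), list(bounds)
        left[j] = (lo, np.floor(v))             # x_j <= floor(v)
        right[j] = (np.ceil(v), hi)             # x_j >= ceil(v)
        active += [left, right]                 # back to step 1
    return best, U

# Illustrative instance: max x1 + x2  ->  min -(x1 + x2)
print(branch_and_bound(np.array([-1.0, -1.0]),
                       np.array([[2.0, 1.0], [1.0, 3.0]]),
                       np.array([4.0, 5.0])))   # expect optimal cost -2
```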
What is the connection between convexity and minima? [10 - Nonlinear optimisation]
Suppose that F is a convex set, f : F → R is a convex function, and x* is a local minimum of f over F. Then x* is a global minimum of f over F. So, in convex optimization, any local minimum is a global minimum
Gradient & Hessian [10 - Nonlinear optimisation]
Proposition. Suppose that f is twice differentiable on the open convex set S. Then, f is convex on S if and only if the Hessian H(x) = ∇²f(x) is symmetric positive semi-definite (PSD) for all x ∈ S
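A quick numeric check for a quadratic f(x) = x^T Q x, whose Hessian is the constant matrix 2Q (Q illustrative):
```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])
H = 2 * Q                                    # Hessian of f(x) = x^T Q x
print((np.linalg.eigvalsh(H) >= 0).all())    # True: H is PSD, so f is convex
```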
Key propositions about the convex functions [10 - Nonlinear optimisation]
1) If f1 (x ) and f2 (x ) are convex functions, and a,b ≥ 0, then f(x) := af1(x) + bf2(x) is a convex function
2) If f(x) is a convex function and x = Ay + b, then
g(y) := f(Ay + b) is a convex function (composition with an affine map)
What are necessary conditions for a local minimum? [11 -Optimality Conditions]
Theorem. Let f : Rn → R be twice continuously differentiable and let x⋆ ∈ Rn be a local minimum of f. Then
1. 1st-order necessary condition: ∇f(x⋆) = 0, and
2. 2nd-order necessary condition: ∇²f(x⋆) is PSD
What are sufficient conditions for a local minimum? [11 - Optimality Conditions]
Theorem. Let f : Rn → R be twice continuously differentiable. Suppose that x⋆ satisfies
1. 1st-order condition: ∇f(x⋆) = 0, and
2. 2nd-order condition: ∇²f(x⋆) is PD
Then x⋆ is a local minimum.
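A numeric check of both conditions for an illustrative quadratic:
```python
import numpy as np

# f(x, y) = (x - 1)^2 + 2*(y + 3)^2, candidate point (x, y) = (1, -3)
grad = np.array([2 * (1 - 1), 4 * (-3 + 3)])   # gradient at the point: (0, 0)
H = np.array([[2.0, 0.0],
              [0.0, 4.0]])                      # constant Hessian
is_min = (grad == 0).all() and (np.linalg.eigvalsh(H) > 0).all()
print(is_min)   # True: sufficient conditions hold, so (1, -3) is a local min
```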