Sensitivity analysis Flashcards
Why do we care about sensitivity analysis?
It is very common for problem data to change after the original problem has been formulated. To quickly assess cases where minor things about the problem change, we use sensitivity analysis rather than solving the entire problem again.
Also, sensitivity analysis gives us GREAT insight into what the optimal solution looks like: which constraints are most important, etc.
Finally, there is often great UNCERTAINTY in the parameters we use as input, for instance in demand constraints. By using sensitivity analysis, we can gain an understanding of how robust our solution is.
There are some “questions” we typically consider as part of sensitivity analysis. Elaborate
1) Changes made to an objective function coefficient
2) Changes made to a RHS coefficient
3) Change in constraint coefficient
4) Add or remove a constraint
5) Add or remove a variable
Given these questions, what are we actually interested in figuring out? That is, what response to those changes are we looking for?
1) How is the objective function value affected by the change?
2) Is the current solution still optimal after the change? Is it still feasible?
3) At what change will another basic solution be optimal?
4) At what change will the current basic solution become infeasible?
What is relaxation?
Relaxation refers to making the feasible region larger
Restriction
Making the feasible region smaller
If a problem is relaxed, what can we say about the objective function value?
It will become better or remain the same. It will never be worse
If we change a RHS value, is this relaxation or restriction?
Depends on the direction and the constraint.
If we increase the value of the RHS coefficient in a <= constraint, we relax the problem; if we decrease it, we restrict the problem. For a >= constraint it is the other way around.
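A minimal numeric sketch of this (the LP and its numbers are made up for illustration): increasing the RHS of a <= constraint enlarges the feasible region, so the re-solved optimum can only improve or stay the same. scipy.optimize.linprog minimizes, so the max objective is passed in negated.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])                            # maximize 3*x1 + 5*x2
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])                     # RHS of the <= constraints

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print("original optimum:", -res.fun)                # 36.0

b_relaxed = b.copy()
b_relaxed[2] += 2.0                                 # increase the RHS of constraint 3
res2 = linprog(-c, A_ub=A, b_ub=b_relaxed, bounds=[(0, None)] * 2, method="highs")
print("after relaxing:  ", -res2.fun)               # 38.0, never worse
```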
Elaborate on changing constraint coefficients in terms of relaxation and restriction
Again, it depends on the direction and the type of constraint. In general, changing such a coefficient is analogous to changing the RHS value of the same constraint. If we make the LHS larger and the constraint is <=, then we are restricting the problem.
Define shadow price
The shadow price for a constraint is given by the change in objective function value when making a marginal increase in the RHS.
So, a shadow price is connected to a specific constraint, and is determined by how a change in its RHS (b coefficient) will affect the objective function value.
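A small sketch of the definition, reusing the same hypothetical LP as above: bump the RHS of each constraint by one unit, re-solve, and read off the change in the optimal value.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])                            # maximize 3*x1 + 5*x2
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

def optimum(rhs):
    res = linprog(-c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2, method="highs")
    return -res.fun

z = optimum(b)
for i in range(len(b)):
    bumped = b.copy()
    bumped[i] += 1.0                                # marginal increase of one RHS
    print(f"shadow price of constraint {i + 1}: {optimum(bumped) - z:.2f}")
# -> 0.00, 1.50, 1.00
```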
Other names for shadow price
Dual price, dual value, marginal price
What is the relationship between the dual and primal problem
For every LP problem, there is a dual problem. There is a dual variable for every constraint in the primal problem. The shadow price is the optimal value of this dual variable in the optimal solution.
What can we say about shadow prices and constraints in regards to changing the optimal objective function value?
Only constraints that are active/binding in the optimal solution can affect the objective function value. This means that only these constraints can have non-zero shadow prices.
One has to understand that a shadow price tells us how the objective function is affected by changing the RHS of a constraint, either relaxing it or restricting it. If the constraint is not binding, then this change makes no difference to the objective function value (for small marginal steps).
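The same prices can be read off as dual variable values. A sketch, assuming the hypothetical LP above and its optimal basis (basic variables x1, x2 and the slack of the first constraint): the shadow prices are y^T = c_B^T B^(-1), and the non-binding constraint gets a zero.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
c = np.array([3.0, 5.0])                            # max-problem objective coefficients

# columns of the optimal basis in [A | I]: x1, x2 and the slack of constraint 1
B = np.column_stack([A[:, 0], A[:, 1], [1.0, 0.0, 0.0]])
c_B = np.array([c[0], c[1], 0.0])                   # slack has zero objective coefficient

y = c_B @ np.linalg.inv(B)                          # y^T = c_B^T B^(-1): the shadow prices
print(y.round(2))                                   # ~ [0.0, 1.5, 1.0]
# constraint 1 (x1 <= 4) has slack at the optimum, so its shadow price is 0
```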
Given a shadow price, what stops us from multiplying it by the size of the change to find the total change in the objective function value?
The shadow price is only constant in a certain neighborhood. We need to find the interval where the shadow price remains constant.
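A quick numeric sketch of why, again with the hypothetical LP from above: sweeping one RHS coefficient shows the optimal value is piecewise linear, so the shadow price (its slope) only holds within an interval.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

def optimum(b3):
    res = linprog(-c, A_ub=A, b_ub=[b[0], b[1], b3],
                  bounds=[(0, None)] * 2, method="highs")
    return -res.fun

values = np.arange(10.0, 27.0, 2.0)                 # sweep the RHS of constraint 3
objs = [optimum(v) for v in values]
for lo, hi, z_lo, z_hi in zip(values, values[1:], objs, objs[1:]):
    print(f"b3 in [{lo}, {hi}]: slope = {(z_hi - z_lo) / (hi - lo):.2f}")
# slope (= shadow price) is 2.50 below b3 = 12, 1.00 between 12 and 24, 0.00 above 24
```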
Recall the feasibility requirement for x_B in matrix notation AND its optimality condition
x_B = B^(-1) b >= 0
It is optimal if the reduced costs are greater than or equal to zero (min problem) or less than or equal to zero (max problem). Recall how the simplex tableau adds another flipped sign on top of this.
c_N-bar = c_N - c_B B^(-1) N >= 0 (min problem)
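A sketch checking both conditions with plain numpy, using the hypothetical LP from above rewritten as a min problem (min -3x1 - 5x2) so that the optimality test matches the formula just stated. The basis B and non-basic columns N are the ones identified earlier.

```python
import numpy as np

b = np.array([4.0, 12.0, 18.0])

# optimal basis: columns of x1, x2 and slack s1 in [A | I]
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [3.0, 2.0, 0.0]])
c_B = np.array([-3.0, -5.0, 0.0])                   # min-problem costs of x1, x2, s1

# non-basic variables: slacks s2, s3 (their columns in [A | I], zero cost)
N = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
c_N = np.array([0.0, 0.0])

B_inv = np.linalg.inv(B)
x_B = B_inv @ b                                     # feasibility: x_B = B^(-1) b >= 0
r_N = c_N - c_B @ B_inv @ N                         # optimality: reduced costs >= 0 (min)
print("x_B =", x_B, "feasible:", bool(np.all(x_B >= -1e-9)))
print("reduced costs =", r_N, "optimal:", bool(np.all(r_N >= -1e-9)))
```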
Why is the matrix notation for the reduced costs in the optimal tableau so important?
It tells us how the solution is expressed directly in terms of the input data.
We have the b vector, c_N, c_B, N, and the inverse of B.
So, we actually need two identities:
1) Feasibility: x_B = B^(-1) b >= 0
2) Optimality: the reduced costs c_N-bar = c_N - c_B B^(-1) N >= 0 (min problem)
Given these two, we can evaluate how changes to the input parameters affect the optimal solution.
For these two, the question is basically whether the current basic solution still remains optimal, and whether it still remains feasible.
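As a sketch of that evaluation (same hypothetical LP and basis): how far can the RHS of the third constraint move before x_B = B^(-1)b turns negative, i.e., before the current basis becomes infeasible? Within that interval the basis, and therefore its shadow price, stays valid, which also answers the earlier question about where a shadow price remains constant.

```python
import numpy as np

b = np.array([4.0, 12.0, 18.0])
B = np.array([[1.0, 0.0, 1.0],                      # optimal basis, as above
              [0.0, 2.0, 0.0],
              [3.0, 2.0, 0.0]])
B_inv = np.linalg.inv(B)

x_B = B_inv @ b                                     # current basic solution: [2, 6, 2]
d = B_inv[:, 2]                                     # change in x_B per unit increase of b3

# keep x_B + delta * d >= 0: a ratio test on the components that actually move
delta_max = min(-x_B[i] / d[i] for i in range(len(d)) if d[i] < -1e-9)
delta_min = max(-x_B[i] / d[i] for i in range(len(d)) if d[i] > 1e-9)
print(f"basis stays feasible for b3 in [{b[2] + delta_min:.1f}, {b[2] + delta_max:.1f}]")
# -> [12.0, 24.0]; outside this interval another basis (and shadow price) takes over
```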