Chapter 4 - Simplex method Flashcards

1
Q

When, and by whom, was the simplex method developed?

A

1947 by George Dantzig

2
Q

The simplex method is widely used. Why?

A

Simplex is a solver for LP problems, and LP problems arise in a large number of applications, so there is wide demand for it.

Simplex is used specifically because it is very efficient. Given the standard-form model, simplex progresses the search in a manner that explores only a very small subset of the total number of possible solutions, namely the corner-point feasible solutions.

3
Q

What is the simplex method?

A

An algebraic procedure, but with underlying geometric concepts.

4
Q

What is the “optimality test”?

A

The optimality test goes like this:
Consider any LP problem that possesses at least one optimal solution. If a CPF solution has no adjacent CPF solutions that are BETTER, then this current CPF solution is optimal.

ex:

Z = 30 + 3x_1 - 5/2 x_4

We see here that increasing x_1 improves Z, while increasing x_4 worsens it (its rate of change is negative), so the choice is trivial: x_1 becomes the "entering basic variable".

Geometrically, we say that the current solution is not optimal because moving in the x_1 direction increases Z while leaving x_4 at zero. In geometric terms, we are "moving along the edge between the current corner point and an adjacent one".

In conclusion, the optimality test says: if no adjacent BFS is better than the current BFS, then the current BFS is optimal.
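The test above can be sketched in code. A minimal sketch: the coefficients are taken from the example objective row Z = 30 + 3x_1 - 5/2 x_4, while the dict-based representation and the function name are my own illustrative choices.

```python
def is_optimal(obj_coeffs):
    """Optimality test for a maximization problem: the current BFS is
    optimal when no nonbasic variable has a positive rate of improvement,
    i.e. no positive coefficient remains in the current objective row."""
    return all(c <= 0 for c in obj_coeffs.values())

# Current objective row from the example: Z = 30 + 3*x1 - 5/2*x4
row = {"x1": 3.0, "x4": -2.5}
print(is_optimal(row))                        # False: x1 can still improve Z
print(is_optimal({"x1": -1.0, "x4": -2.5}))   # True: no improving direction left
```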

5
Q

There are 6 solution concepts, name them

A

1) The simplex method focuses solely on CPF solutions. For any problem that has an optimal solution, we only have to find the "best" CPF solution. This is much of the power of simplex: we consider only an extremely small subset of all possible points.

2) It is an iterative process.

3) Whenever possible, initialization should choose the origin as the starting point, meaning all decision variables equal to zero.

4) Given a CPF solution, it is MUCH QUICKER to find information about its adjacent CPF solutions than about any other CPF solution. Therefore, we always move to an adjacent one. We don't "search" for the points; we calculate them using the shared constraint boundaries.

5) At a CPF solution, simplex considers the rate of improvement along each possible adjacent direction, then picks the best one.

6) The optimality test at a CPF solution consists of checking whether ANY adjacent edge/direction gives Z a positive rate of change.

6
Q

Given an LP problem in “our standard form”, what is the first step?

A

The first step is to convert the model into a system of linear equations by introducing slack variables. This is simple:

Say we have: x_1 <= 10

Then we add a slack variable, x_2, so that the inequality becomes an equality:

x_1 + x_2 = 10

We must also add a non-negativity constraint for the slack variable: x_2 >= 0.
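The conversion above can be sketched as a small helper. A minimal sketch, assuming the constraints are stored as plain coefficient lists (the names `A`, `b`, and `augment` are mine):

```python
def augment(A, b):
    """Turn A x <= b into equalities by appending one slack variable per
    constraint: the coefficient matrix becomes [A | I], with all
    variables (decision and slack) required to be >= 0."""
    m = len(A)
    return ([row + [1.0 if i == j else 0.0 for j in range(m)]
             for i, row in enumerate(A)], b)

# x1 <= 10 becomes x1 + x2 = 10 (x2 is the slack variable)
A_eq, b = augment([[1.0]], [10.0])
print(A_eq)  # [[1.0, 1.0]]
```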

7
Q

When we have included slack variables, what do we call our model?

A

Augmented (i.e., extended).

8
Q

If a slack variable equals 0 in the current solution, what does this mean?

A

It means that the solution lies directly on the constraint boundary for the corresponding functional constraint.

9
Q

If a slack variable is greater than 0, what does it mean?

A

It means that the solution lies on the feasible side of the corresponding constraint boundary, with slack to spare (it is not on that boundary).

10
Q

What is an augmented solution?

A

An augmented solution is any solution that is augmented to include the slack variables as well.

11
Q

What is a basic solution?

A

A basic solution is a corner-point solution that is augmented to include the slack variables.

12
Q

What is a BFS?

A

BFS = Basic Feasible Solution: an augmented corner-point feasible solution, i.e., a CPF solution extended with slack variables.

13
Q

Elaborate on degrees of freedom

A

Degrees of freedom refer to the number of variables that exceed the number of equations in our model.

They are called degrees of freedom because they are free to take any value and the system of equations can still be solved. However, we are not interested in just any solution to the system; we want the best solution.

Although we could select any value for the free variables, we force them to be zero. This is because we want to consider corner points.

The number of free variables equals the number of nonbasic variables. Which variables we choose as the free variables is in principle arbitrary; however, some choices of zeroed variables produce systems that cannot be solved. In such cases, we must try another combination of free variables.

The variables chosen to be "free variables" are called nonbasic variables. Simplex sets them to 0.

14
Q

What are the properties of basic solutions?

A

1) Each variable in a basic solution will be either basic or non-basic.
2) The number of basic variables equals the number of functional constraints (now equations).
3) The nonbasic variables are set equal to zero.
4) The values of the basic variables are obtained as the simultaneous solution of the system of equations.
5) If the basic variables satisfy non-negativity, the basic solution is a BFS (Basic Feasible Solution).

15
Q

How to tell if two BFS are adjacent?

A

Two basic feasible solutions are adjacent if their sets of nonbasic variables are identical except for one variable.

This is because moving to an adjacent BFS changes one, and only one, nonbasic variable. If more than one differs, they are not adjacent.
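The adjacency check can be written directly in terms of nonbasic sets. A minimal illustration; the variable names are hypothetical:

```python
def adjacent(nonbasic_a, nonbasic_b):
    """Two BFS are adjacent when their nonbasic sets agree in all but one
    member: the symmetric difference has size 2 (one variable swapped in,
    one swapped out)."""
    return len(set(nonbasic_a) ^ set(nonbasic_b)) == 2

print(adjacent({"x1", "x2"}, {"x1", "x4"}))  # True: only x2/x4 differ
print(adjacent({"x1", "x2"}, {"x3", "x4"}))  # False: both members differ
```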

16
Q

What happens when we move from one BFS to another?

A

One of the non-basic variables will be turned into a basic variable, and one of the basic variables will be turned into a non-basic variable.

17
Q

Give the outline of the simplex method

A

1) Initialization: identify an initial BFS.
2) Perform the optimality test to determine if this initial solution is optimal. If YES, STOP; if NO, CONTINUE.
3) Step 1 of iteration: determine which direction to move.
4) Step 2 of iteration: determine where to stop to reach the next solution.
5) Step 3 of iteration: solve the system of equations at this new solution.
6) Return to the optimality test.

18
Q

What is the difference between the original form and the augmented form of a model?

A

The augmented form includes slack variables. This means the augmented model contains equalities rather than inequalities.

19
Q

Elaborate on the initialization step of the algebraic simplex method

A

Recall this:
A problem/model with n variables and r constraints in our standard form introduces one slack variable per constraint. Therefore, the augmented model has (n+r) variables, while the number of constraints (now turned into equations) remains r.
The number of nonbasic variables is then (n+r) - r = n. This means the set of nonbasic variables always has the same size as the set of decision variables from the original problem, which allows us to pick exactly the decision variables as our initial nonbasic set, provided no constraint forbids it.

When we pick this initial point as the origin (all decision variables set to 0), we then solve the remaining system to get the values of the slack variables.

On the degrees of freedom: recall that we get n degrees of freedom because there are n more variables than equations. Therefore, we can solve the system after choosing ANY value for these free variables.
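The initialization above can be sketched as follows. A minimal illustration; the right-hand sides `[4, 12, 18]` are example values in the style of the Wyndor problem referenced later in these cards, and the naming scheme is my own:

```python
def initial_bfs(n, b):
    """Initial BFS for a standard-form model: the n decision variables are
    nonbasic (set to 0) and each slack variable is basic, immediately taking
    the value of its constraint's right-hand side b_i."""
    decision = {f"x{j + 1}": 0.0 for j in range(n)}
    slack = {f"x{n + i + 1}": b[i] for i in range(len(b))}
    return {**decision, **slack}

print(initial_bfs(2, [4.0, 12.0, 18.0]))
# {'x1': 0.0, 'x2': 0.0, 'x3': 4.0, 'x4': 12.0, 'x5': 18.0}
```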

20
Q

when we solve the augmented model using Gauss, what are we actually doing/finding?

A

We are finding how much gap there is, given by the slack variables, between the point defined by the decision-variable values and each constraint boundary.

At first, we want to pick (0,0,…) as the nonbasic variable set. Since there are n decision variables, we get n nonbasic variables. We are thus finding how much slack each constraint has at this point.

21
Q

Elaborate on the optimality test

A

Say we have been given the objective function:

Z = 3x_1 + 5x_2

If we are at the point (0,0), it is easy to see that Z is 0.
Since increasing either variable gives a positive contribution, we see that Z is currently not optimal.

22
Q

Elaborate on determining the direction of movement (Step 1 of iteration)

A

We consider the current form of the objective function; the objective row changes along with the procedure.
We look at the different terms of the objective function and find the variable with the largest positive coefficient. This represents the direction with the greatest rate of improvement.

The important part is that we only ever change one nonbasic variable when moving to an adjacent point. This means that we control the movement. If we have 2 decision variables that are nonbasic, moving in either of those directions changes only one of them. Likewise, if the nonbasic set is a mix of slack variables and decision variables, moving to an adjacent point still changes only one nonbasic variable. This is key. We choose one nonbasic variable to be "entering basic" and force the others to remain at zero.

IMP: We choose one NONBASIC variable to increase/move along. It is called the "entering basic variable" because, once we have moved as far as we can in this direction, it has become a basic variable.

23
Q

Elaborate on how we determine how "far" to move in a specific direction, given that we have chosen a variable to move along

A

Recall that we have chosen a nonbasic variable to increase. We want to move until we reach a constraint boundary; the idea is to move until one of the basic variables becomes 0. At that point, the variable values indicate a BFS (an augmented CPF solution), and it will be better than the point we previously explored, according to the objective function.

Given the chosen nonbasic variable, we want to check how far it can be increased. Therefore, we find all equations involving the chosen nonbasic variable and see how far it can be increased without violating the non-negativity constraints.
Say the equation is as follows, and we have chosen x_2 as the nonbasic variable to increase:
2x_2 + x_4 = 12
Then we use the non-negativity constraint x_4 >= 0.

Thus we get:
x_4 = 12 - 2x_2 >= 0
This gives us:
12 >= 2x_2
6 >= x_2

Therefore, we know that x_2 must be smaller than or equal to 6.
We do this for every equation involving x_2.

Then we pick the smallest bound and use it as the maximum amount we can move/increase the variable.

We also see that choosing x_2 = 6 makes x_4 = 0. Therefore, we pick x_4 as the leaving basic variable.

Recall that x_2 is then no longer a nonbasic variable.

IMPORTANT: When we check the equations to figure out how far we can move, we must always use the most up-to-date equations produced by Gaussian elimination.
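The procedure above is the minimum ratio test. A minimal sketch, assuming each relevant equation is represented by its coefficient on the chosen variable and its right-hand side (the function name and list layout are mine):

```python
def minimum_ratio_test(column, rhs):
    """For each equation with a positive coefficient on the entering
    variable, compute rhs / coefficient. The smallest ratio is how far the
    entering variable can grow; its row holds the leaving basic variable."""
    ratios = [(rhs[i] / a, i) for i, a in enumerate(column) if a > 0]
    if not ratios:
        return None, None  # no bound at all: Z is unbounded
    bound, row = min(ratios)
    return bound, row

# Entering x2 with equations 2*x2 + x4 = 12 and 2*x2 + x5 = 18:
bound, row = minimum_ratio_test([2.0, 2.0], [12.0, 18.0])
print(bound, row)  # 6.0 0 -> x2 <= 6, so the basic variable of row 0 (x4) leaves
```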

24
Q

Consider the second step of each iteration in the simplex method. Recall what it is.

Elaborate on how to perform it

A

The second step of each iteration in the simplex method is about figuring out the value of the entering basic variable.

We perform this step by considering every equation that includes the entering basic variable and utilizing the fact that all variables must be non-negative.

IMP: Since we have solved the system beforehand, we have a linear system of equations where every basic variable is expressed as a function of the nonbasic variables. We take each equation containing the entering basic variable and apply the fact that the basic variable in that equation must be non-negative. This gives us the largest allowed value for the entering basic variable from that specific equation.

25
Q

Give the procedure of simplex by simplex tableau

A

The simplex tableau consists of 6 column parts:
1) Basic variable column
2) Equation number
3) Z column
4) One column per variable
5) Right-hand side of the equations
6) Ratio column

The simplex tableau has one row per basic variable, the Z row included.

First we use the optimality test. This just checks whether Z can be improved.

If this is not an optimal solution, we start an iteration. We determine an "entering basic variable": the variable in the objective row with the highest rate of improvement. Then we put a "box" around the tableau column below this coefficient; we call this the "pivot column".

Now we determine the leaving basic variable by the minimum ratio test. For each entry in the pivot column that is greater than 0, we use this coefficient as a divisor for its corresponding "b" (right-hand-side) value. So, if the right-hand-side value is 12 and the pivot-column entry in that row is 2, the ratio is 6. Hence the name "minimum ratio test".
The row with the smallest ratio is called the "pivot row". We draw a box around this as well. There is a single number that now lies in both boxes; this number is called the "pivot number".

We use this pivot number in a single iteration. First we make it equal to 1 by dividing the pivot row by the pivot number. Then we use this row to eliminate the coefficients in the pivot column in all other rows, including the objective-function row.

Then we perform the optimality test again.
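The pivot operation described above (scale the pivot row so the pivot number becomes 1, then eliminate the pivot column elsewhere) can be sketched as a standalone function. The tableau layout (a list of coefficient rows) and the example numbers are illustrative assumptions:

```python
def pivot(tableau, prow, pcol):
    """One Gaussian pivot: scale the pivot row so the pivot number becomes
    1, then subtract multiples of it from every other row (objective row
    included) so the pivot column is zero everywhere else."""
    piv = tableau[prow][pcol]
    tableau[prow] = [v / piv for v in tableau[prow]]
    for r in range(len(tableau)):
        if r != prow and tableau[r][pcol] != 0:
            f = tableau[r][pcol]
            tableau[r] = [v - f * p for v, p in zip(tableau[r], tableau[prow])]
    return tableau

t = pivot([[1.0, 2.0, 3.0], [2.0, 4.0, 10.0]], prow=1, pcol=0)
print(t)  # [[0.0, 0.0, -2.0], [1.0, 2.0, 5.0]]
```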

26
Q

What is the purpose of the pivot column?

A

The pivot column is the column below the entering basic variable's coefficient in the first row (the objective-function row).

The purpose of the pivot column is to identify every equation in which this variable appears, so that we can check each boundary and not accidentally miss one in the MRT.

27
Q

What do we do if there is a tie for the entering basic variable?

A

The entering basic variable is chosen by inspection of the objective row.
There is no way to tell which choice is better when two coefficients are equal.
Therefore, we pick one arbitrarily.

28
Q

IMPORTANT:

Elaborate deeply on the simplex tableau method

A

We use a schematic approach that helps us make the correct calculations; the tableau relies on patterns rather than requiring you to re-derive what you are doing at each step.

NB: It is easy to see how relying on the tableau mechanically can make you lose track of what you are actually doing.

Anyways, here is the approach:

We need to transform the model into a system of linear equations. To do this, we use slack variables. Recall how n variables and r constraints will turn to become (n+r) variables and r constraints/equations.

First we need to set up the tableau itself. This involves creating a table consisting of the following columns:
1) Basic variables
2) Equation number (used to explain gauss-ops)
3) Z-column
4) One column per variable we have that is not Z
5) The right hand side value of the equations (b-vector)
6) Ratio column, to display the results of the minimum ratio test

Then we select basic variables and non-basic variables. We automatically choose the decision variables to be our nonbasic variables.

Then we fill the values we have initially into the system.

Then we perform the first optimality test: check whether Z = … has positive coefficients so that Z can be increased. If yes, we start an iteration. If no, the current set of decision-variable values is the optimal solution.

Then we find the "entering basic variable". It is the variable with the largest objective-row coefficient, so that increasing it leads to the biggest possible increase in Z.
We use this entering basic variable to mark a column of the simplex tableau, called the "pivot column". The pivot column consists of all entries in the column BELOW the variable we have picked as the entering basic variable.

Then we find the "leaving basic variable". We need this variable to determine the "pivot row". We find it by applying the minimum ratio test to the strictly positive entries of the pivot column we just found.

When we mark the pivot row, the already-marked pivot column singles out a single coefficient: the "pivot number". We transform this number to 1 by dividing the pivot row by it. Then we use this "1" to eliminate the entries everywhere else in the pivot column. The result is that the pivot row is the only row with a nonzero value in the pivot-column position.

Along with the system-solving, we update the set of basic variables by exchanging the leaving basic variable for the entering basic variable.

Now that we have solved the system, we can apply the optimality test again, and the same process repeats.
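All the steps above can be combined into one compact sketch. This is not the full textbook tableau (there is no explicit Z column or ratio column; the objective row stores Z - c.x = 0, so the entering variable is the most negative row-0 coefficient, which corresponds to the largest positive coefficient in the Z expression). The example data follows the Wyndor-style numbers used elsewhere in these cards; everything else is an illustrative assumption:

```python
def simplex(c, A, b):
    """Minimal tableau simplex for: maximize c.x s.t. A x <= b, x >= 0,
    with b >= 0 so that the origin is a valid initial BFS.
    Row 0 is the objective row [-c | 0...0 | 0]; rows 1..m are [A | I | b]."""
    m, n = len(A), len(c)
    rows = [[-cj for cj in c] + [0.0] * m + [0.0]]
    for i in range(m):
        rows.append(A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]])
    basis = [n + i for i in range(m)]          # slack variables start basic
    while True:
        # Optimality test: done when row 0 has no negative coefficient
        pcol = min(range(n + m), key=lambda j: rows[0][j])
        if rows[0][pcol] >= 0:
            break
        # Minimum ratio test picks the pivot row / leaving basic variable
        ratios = [(rows[i][-1] / rows[i][pcol], i)
                  for i in range(1, m + 1) if rows[i][pcol] > 1e-12]
        if not ratios:
            raise ValueError("Z is unbounded")
        _, prow = min(ratios)
        basis[prow - 1] = pcol                 # entering variable joins the basis
        # Pivot: scale pivot row, eliminate pivot column everywhere else
        piv = rows[prow][pcol]
        rows[prow] = [v / piv for v in rows[prow]]
        for r in range(m + 1):
            if r != prow and rows[r][pcol]:
                f = rows[r][pcol]
                rows[r] = [v - f * p for v, p in zip(rows[r], rows[prow])]
    x = [0.0] * (n + m)
    for i, var in enumerate(basis):
        x[var] = rows[i + 1][-1]
    return rows[0][-1], x[:n]

# Wyndor-style example: max 3*x1 + 5*x2 s.t. x1<=4, 2*x2<=12, 3*x1+2*x2<=18
z, x = simplex([3.0, 5.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]], [4.0, 12.0, 18.0])
print(z, x)  # 36.0 [2.0, 6.0]
```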

29
Q

Elaborate on tie-breaking related to “entering basic variables”

A

There is no way of predicting the better choice. Therefore, we make an arbitrary choice.

29
Q

What happens if the minimum ratio test only gives negative values or tries to divide by 0?

A

Basically, it means that no variable qualifies as the leaving basic variable.
Recall that we pick the leaving basic variable as the one that reaches its boundary first. If no variable reaches a boundary, it geometrically means that we can keep extending further and further into the feasible region.

Of course, this means that Z is unbounded.

Typically, the model is wrong. We very rarely find cases where the actual real-life scenario is unsolvable due to an unbounded feasible region.

Think about what unbounded Z actually means: when maximizing, it implies that we have unlimited resources or something similar, which rarely makes sense.

29
Q

Explain CPS vs CPFS

A

A corner-point solution is simply a point where constraint boundaries intersect. This point can lie inside or outside the feasible region.

A corner-point feasible solution is also an intersection of constraint boundaries, but it is a valid solution, as it does not violate any constraints.

30
Q

Recall the definition of adjacency of CPF solutions

A

There are two definitions that are essentially the same:

1) If we have n decision variables, then any two CPF solutions that share at least (n-1) constraint boundaries are adjacent.

2) Two BFS/CPF solutions are adjacent if their sets of nonbasic (or equivalently basic) variables are identical except for one variable.

31
Q

Why can we be certain that we always have n nonbasic variables with value 0?

A

Because we are only interested in corner points. Due to the nature of corners (intersections) and slack variables, we can be certain that this is the case.

Say we have 3 decision variables and 3 constraints, where each constraint places an upper limit on one decision variable. In space, this creates a prism. Considering the outermost point of this prism, we see that it lies on the boundary of all 3 constraints. Therefore, at this point, all slack variables equal 0, and our augmented value set looks like this: {x, y, z, 0, 0, 0}.
Moving there requires 3 iterations, as there are 3 edges to move along:
{0,0,0,a,b,c}
{x,0,0,0,b,c}
{x,y,0,0,0,c}
{x,y,z,0,0,0}
It looks something like this.

32
Q

IMP: When we do the minimum ratio test, what are we really doing?

A

We have at this point already found our "entering basic variable": the variable with the greatest rate of improvement of Z.

Let us say our entering basic variable is x_2. We essentially want to move from the current BFS to an adjacent BFS. However, we don't know how far to move.

So, we do this:
We use the slack variables to figure out what happens as we keep increasing the entering basic variable. At some point, increasing it will make our "cursor" hit the adjacent BFS. Since that point is a BFS, one or more boundaries are at their limit there, so their corresponding slack variables are 0. The question is: how far out is this point?

Say we have these two relevant equations:
x_4 = 12 - 2x_2
x_5 = 18 - 2x_2

We utilize the fact that x_4 and x_5 must be non-negative.

x_4 = 12 - 2x_2 >= 0, which leads to 2x_2 <= 12
This gives us: x_2 <= 6
The same for the other equation gives: x_2 <= 9

These numbers say the following:
"As we increase a nonbasic variable, we reach a point where a slack variable is 0. If we keep increasing the variable, the slack variable becomes negative. So, essentially, if we want x_4 to stay non-negative, we must keep x_2 smaller than or equal to 6."
Since x_5 only bounds x_2 at 9, we naturally use the tightest bound.

IT IS VERY IMPORTANT to understand the connection between the MRT and the variable placing the tightest constraint. In this example, x_4 placed the tightest bound; therefore, we use x_4 as the leaving basic variable. Why is it the leaving basic variable? Because it is the first variable to reach the value 0 as x_2 is increased, and we don't want to increase beyond this point.

33
Q

How do we find the values of the basic variables?

A

We solve the system of equations.

The set of basic variables is referred to as the "basis".

34
Q

What is canonical form ("kanonisk form"), i.e., the proper form from Gaussian elimination?

A

It refers to the case where every equation contains exactly one basic variable, and that variable has a coefficient of 1.

35
Q

How many constraints will be involved in each BFS?

A

If there are n decision variables, each BFS involves at least n constraint boundaries. The maximum can equal the number of functional constraints plus the number of decision variables.

36
Q

At a general level, what could the slack variable represent (in real life)?

A

It could be the amount of a resource left unused.

37
Q

What is a binding constraint?

A

A binding constraint is a constraint that holds with equality. This is what we get by using slack variables, so that we go from non-binding to binding.

38
Q

What is the formal definition of adjacency for CPF solutions?

A

For any LP problem with n decision variables, two CPF solutions are adjacent if they share (n-1) constraint boundaries. The line segment connecting two adjacent nodes is called an “edge”.

39
Q

why are we interested in adjacency?

A

We are interested in adjacency because of its relation to optimal solutions. The general property is: for any LP problem that possesses at least one optimal solution, if a CPF solution has no adjacent CPF solutions that are better (measured by Z), then it MUST be an optimal solution. This is basically the optimality test we perform.

40
Q

Geometrically, given a 2D problem, how do we solve it?

A

Initialization: We pick (0,0) as starting point. This is our initial CPF solution that we examine. Because of the non-negativity constraints (Wyndor) we know that (0,0) is a CPF.

Optimality test: We conclude that there are adjacent solutions that are better. Therefore, we move.

Iteration 1) Move to a better adjacent CPF, by performing the following 3 steps:
1) Consider the two edges that emanate from the current CPF.
Choose to move along the edge that improve Z the fastest.

2) Stop at the first new constraint boundary. If we were to move further, we would leave the feasible region and no longer have a corner-point feasible solution to examine.

3) Solve for the intersection of the new set of constraint boundaries. We are at a point where we have a CPF solution lying on the boundaries of some of the constraints. We solve for the decision variables.
ex: x_1 = 0, 2x_2 = 12
Gives: x_1 = 0, x_2 = 6
Thus, our new point is (0,6)

Optimality test: Conclude that (0,6) is not optimal. Continue new iteration.

This is what we do, geometrically:

We pick an initial point that is a CPF solution. Then we progress by picking the “best” edge (according to Z), stopping at the first constraint boundary on this edge (further movement moves us outside of the feasible region), solve the system of equations (relevant constraints) to find the point we’re at. Then we perform an optimality test to see if we need to iterate more or not.

41
Q

How many edges will emanate from a CPF solution?

A

A maximum of n; there can be fewer.

42
Q

What is required for the problem to possess CPF solution(s)?

A

The feasible region MUST be bounded. This means the region is finite, so we cannot keep moving further and further out toward infinity.

43
Q

Why do we need algebraic form?

A

Multiple reasons. Firstly, computers can't work graphically, so we need a language they can process.

Secondly, the scale of real problems quickly makes them extremely difficult to solve graphically. Therefore, we need a systematic language (algebra) to solve such complex cases.

The algebraic language of the simplex method is designed to make this manageable.

44
Q

Elaborate on the algebraic procedure

A

The algebraic procedure is essentially structured around solving systems of equations. This is why we take the problem (in standard form) and convert it to its corresponding equality form, using slack variables to represent the inequality slack.

The transformed standard-form problem is now called the "augmented form". To augment means to extend.

It is important to understand that the augmented form and the standard form represent exactly the same problem. However, the augmented form is much easier to handle due to our knowledge of solving linear systems of equations.

45
Q

What happens if a slack-variable is 0 on some current solution?

A

This simply means that the solution lies on the boundary of the constraint in which this slack variable appears. Initially, every slack variable belongs to exactly one constraint.

Also, a slack-variable value greater than 0 means that the solution is on the feasible side of that constraint boundary. Less than 0 means the solution is infeasible, as it lies on the infeasible side of the constraint boundary corresponding to the slack variable of interest.

46
Q

Recall “our standard form”

A

There are 3 main requirements:
1) Maximize the objective function.
2) All functional constraints are of the form <=, with a nonnegative right-hand-side constant.
3) All variables have non-negativity constraints.

47
Q

What is the “main problem” regarding the other forms (not our standard model)?

A

It is the problem surrounding the choice of “initial BF solution”.

Why?

Recall what we would do if we have a problem in our standard form:
We have functional constraints of the form:

LHS <= RHS. Recall that RHS must be nonnegative.

We introduce the slack variable, SV:

LHS + SV = RHS

To choose the initial BFS, we pick the slack variables as the basic variables, with values equal to their corresponding RHS. Thus, the initial BFS looks like, for instance, (0, 0, RHS1, RHS2, RHS3).

However, what happens if we have constraints of the form:

LHS = RHS

Then there is no slack variable.

The same is the case with constraints of the form:

LHS >= RHS.

Remember that we want to pick some initial point that is actually a BFS (Augmented CPFS).

48
Q

How do we deal with problems that are NOT in our standard form?

(Short answer, one sentence)

A

We use the artificial-variable technique.

49
Q

Elaborate on the artificial-variable technique.

Long answer

A

We construct an artificial problem that is more convenient for us. We do this by introducing a dummy variable (an artificial variable) into each constraint that needs an initial basic variable for the resulting equation.

In essence, we ENLARGE the FEASIBLE REGION by the addition of the artificial variables.
Only when ALL artificial variables are set to zero do we get the regular feasible region back.

The main point is this: we create an enlarged feasible region, in which we find the initial BFS. The first goal of the simplex method is then to convert the region back. When this is done, the method proceeds as before.

So, how does the method actually play out?

Say we have a constraint like this:

a_i1x_1 + a_i2x_2 + … + a_inx_n = b_i (row i)

This EQUALITY is exactly the same as applying TWO inequalities:

a_i1x_1 + a_i2x_2 + … + a_inx_n <= b_i
a_i1x_1 + a_i2x_2 + … + a_inx_n >= b_i

They are the same because together they force equality. This is an important concept.

Consider what happens to the feasible region when we have equality instead of inequality. In a 2D example, the feasible region goes from being an area to being a line segment.

What we do is construct an artificial problem that enlarges the feasible region but has the same objective function as the real problem, by making one modification: we add the nonnegative artificial variable, x-bar, and use it just like a normal slack variable. However, the final solution HAS to set this variable to 0.

IMPORTANT: The artificial variable essentially changes the equality back to an inequality; it assumes the role of the slack variable. Therefore, we can treat the artificial problem as a regular standard-form problem with inequalities. Of course, we need to somehow force the artificial variable to equal 0 in the final solution.

We use the Big M method or the two-phase method to do this.

50
Q

Elaborate on what we do with negative RHS values

A

We multiply both sides of the inequality by (-1). This allows simplex to start, since the slack variables become the initial basic variables and immediately take on the RHS values.

51
Q

Elaborate on what we do with inequalities that are in the “wrong” form: LHS >= RHS

A

We first use a SURPLUS VARIABLE. This follows the same principle as a slack variable, but it extracts the surplus from the greater-than side and converts the inequality to an equality:

0.6x_1 + 0.4x_2 >= 6

0.6x_1 + 0.4x_2 - x_5 = 6 (x_5 is the surplus variable, negative since it extracts the surplus)

Then we use an artificial variable, just like with other regular equalities:

0.6x_1 + 0.4x_2 - x_5 + bar(x_6) = 6

The bar(x_6) allows the remaining part of the equation to work as a smaller-or-equal-to inequality. However, we must be careful with the artificial variable. It must equal 0 in the optimal solution.

What happens is that we get this result:

0.6x_1 + 0.4x_2 = 6 + x_5 - bar(x_6)

Look at the part (x_5 - bar(x_6)). This difference can be any number, since the two variables are only restricted by nonnegativity. This means the LHS can equal ANY number, which in effect REMOVES this constraint from play in the artificial problem. The feasible region is larger because this constraint no longer plays a role (artificial problem). Thus, we can choose the origin as the initial BFS.

52
Q

how do we deal with minimization?

A

NOTE that one could just swap the roles of the positive and negative coefficients in row 0 for both the optimality test and step 1 of an iteration. However, this can be confusing, so we instead transform the problem.

Min Z = ∑(c_j x_j)[j=1, n]

… is equivalent to:

Max -Z = ∑((-c_j)x_j)[j=1, n]

This works because the smaller Z is, the larger (-Z) will be.
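As a quick numeric check of this equivalence, here is a small Python sketch (the corner points and the 0.4/0.5 objective are illustrative, and brute-force enumeration of candidate corner points stands in for simplex):

```python
# Minimizing Z is the same as maximizing -Z: negate every objective
# coefficient, maximize, then negate the optimal value back.
# Hypothetical corner points of some feasible region:
corners = [(0, 6), (6, 6), (7.5, 4.5), (8, 3)]

def z(p):       # original minimization objective: Z = 0.4*x1 + 0.5*x2
    x1, x2 = p
    return 0.4 * x1 + 0.5 * x2

def neg_z(p):   # negated objective for the maximization form
    return -z(p)

best_min = min(corners, key=z)       # solve the min problem directly
best_max = max(corners, key=neg_z)   # solve the transformed max problem

assert best_min == best_max          # same optimal point either way
assert z(best_min) == -neg_z(best_max)
```

Both formulations pick the same corner, and the optimal values differ only in sign.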

53
Q

Consider negative variables. What could this represent?

A

Negative variables could represent a decrease in the production of some product that is already being produced.

In most cases, negative variables don't make a lot of sense, but they do in zero-sum settings: increasing one variable means we must decrease some other variable.

54
Q

How do we deal with negative variables?

A

As we know, determining “leaving basic variable” using the minimum ratio test requires the algebraic foundation of nonnegativity constraints.

x_4 = 12 - 2x_2 >= 0 could represent such a case. We get that x_2 cannot grow beyond 6 as a direct result of x_4 being required to be nonnegative. However, if x_4 is not required to be nonnegative, what happens then?

We solve this problem with one of two cases.
One is for variables with a bound on the negative values allowed.
The other is for variables with NO bound on the negative values allowed.

CASE: Negative bound.

The constraint will look like this:

x_j >= L_j, where L_j is a negative constant.
We can convert this constraint to a nonnegative constraint by making a change of variables:

x’_j = x_j - L_j, so x’_j >= 0

In the model, we therefore replace x_j with x’_j + L_j (direct substitution). Now we have a constant, which we move to the RHS, and x’_j, which is nonnegative.

CASE: No negative bound on values allowed

We replace x_j with the difference “x_j^(+) - x_j^(-)”.
Both are nonnegative. Their difference can be any number.
At the solution, they have the following property:
x_j^(+) = x_j if x_j >= 0, 0 otherwise
x_j^(-) = abs(x_j) if x_j <= 0, 0 otherwise
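The unbounded case can be sketched in a few lines of Python (`split` is a hypothetical helper name, not textbook notation):

```python
# Sketch of the unbounded-variable substitution x_j = x_j_plus - x_j_minus,
# where both new variables are nonnegative. Any real value of x_j can be
# represented; at a simplex solution at most one of the pair is nonzero.
def split(x):
    """Return (x_plus, x_minus) with x == x_plus - x_minus, both >= 0."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

for x in (-3.5, 0.0, 7.0):
    plus, minus = split(x)
    assert plus >= 0 and minus >= 0
    assert plus - minus == x
    assert plus == 0 or minus == 0   # never both positive at a solution
```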

55
Q

How do we perform Big M method?

A

First of all, we need to convert problems that are not in standard form to artificial problems. We do this by including artificial variables that enlarge the feasible region, which allows us to pick the origin as the starting point.

However, we cannot use solutions from the artificially enlarged feasible region. Therefore, we don't want artificial variables with values other than 0 in our final solution. As a result, we introduce M to represent a very large value, multiply it with the artificial variables, and subtract the result from the objective function. The M terms thus work as a penalty for using the artificial variables. For instance, in a maximization problem:

Z = 3x_1 + 5x_2 - Mx_5, where x_5 is the artificial variable. We subtract it from Z because it is a punishment to use x_5.

The problem now is that this equation is not in proper form from Gaussian elimination, as it includes basic variables. The proper form from Gaussian elimination is supposed to include only nonbasic variables. Therefore, we must perform some preliminary row operations (using the other constraints) to eliminate these basic variables. This will (most likely) alter the RHS value of the objective function as well.

We now perform the simplex method until all artificial variables are nonbasic (zero) and we have an optimal solution as well.

The reason big-M works is the large penalty on using nonzero artificial variables in the optimal solution: it will never happen. Therefore, we can be sure that the final solution is indeed both feasible and optimal.
NB: Only feasible as long as the artificial variables are equal to 0.
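A minimal sketch of the preliminary row operation described above, with illustrative numbers (a single constraint row 3x_1 + 2x_2 + x5 = 18 containing the artificial variable x5):

```python
# Big-M preliminary step: after adding -M*x5 to the objective, the
# artificial variable x5 is basic but appears in row 0, so we must
# eliminate it by subtracting M times its constraint row from row 0.
M = 1e6  # the "big" penalty value

# Row 0 of max Z = 3*x1 + 5*x2 - M*x5, written as Z - 3x1 - 5x2 + M*x5 = 0.
# Columns: [x1, x2, x5, RHS]
row0 = [-3.0, -5.0, M, 0.0]
# Constraint containing the artificial variable: 3x1 + 2x2 + x5 = 18
row_art = [3.0, 2.0, 1.0, 18.0]

# Proper form from Gaussian elimination: row 0 must have a 0 in the
# column of every basic variable, so subtract M * row_art from row 0.
row0 = [a - M * b for a, b in zip(row0, row_art)]
assert row0[2] == 0.0   # x5 eliminated from row 0
print(row0)             # note the altered RHS, as predicted above
```

As the text predicts, the RHS of row 0 changes too (here to -18M).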

56
Q

How do we tell whether the real problem contains feasible solutions or not, using Big M or two-phase?

A

If the real problem has no feasible solutions, these methods will yield final solutions where at least one of the artificial variables has a value greater than 0.

57
Q

What happens when we use artificial variables?

A

We enlarge the feasible region.

The artificial variables are used on equalities, which essentially converts each equality into a regular inequality with a slack variable. Therefore, when the artificial variable is used like a slack variable, the enlarged feasible region is basically the inequality region.

58
Q

How do we deal with equalities in our model? As constraints

A

Say we work in 2D. An equality is a line segment. As a constraint, this means that our feasible solutions are restricted to this line segment. It is therefore difficult to start the simplex method, as we don't really know where to begin (the origin is not feasible).

The whole point is that when we have equality constraints, we don't know where to start the simplex method. Consider the first image: where do we even start?

However, when we use the artificial variable (the constant ‘a’ in the second picture), we can essentially extend the feasible region.
By changing the artificial variable, we change the feasible region so that it includes the origin.

59
Q

What is the first step in dealing with >= inequalities?

A

Using a surplus variable that removes the surplus, creating an equality. We place a nonnegativity constraint on the surplus variable (like on all other variables) and therefore subtract this variable from the constraint.

ex:

0.6x_1 + 0.4x_2 >= 6

Add the surplus variable, x_5

0.6x_1 + 0.4x_2 - x_5 = 6

Now that we have an equality, we use an artificial variable to make it feasible to start simplex:

0.6x_1 + 0.4x_2 - x_5 +bar(x_6) = 6

60
Q

Recall the rule for min vs max

A

Minimizing an expression is the same as maximizing the negative of the expression.

61
Q

When will some artificial problem return to be a real problem?

A

It becomes real whenever the artificial variables are equal to 0.

62
Q

Say there are more nonbasic variables than decision variables. How should we choose?

A

We just don't want artificial variables as nonbasic, because if they equal 0 at the start, the feasible region is not enlarged, and we don't have an initial BFS.

63
Q

Advantage of two-phase?

A

We don't need to carry the symbolic M value through the computations.

64
Q

Elaborate on two-phase

A

Two-phase uses 2 different objective functions. The first uses the artificial variables only.
The second uses the original objective function.

ex
Min Z = bar(x_4) + bar(x_6)

Min Z = 0.4x_1 + 0.5x_2

The two objective functions are used in their corresponding phases.

PHASE 1:
The objective of this first phase is to find some BFS for the real problem by using the artificial problem.

PHASE 2:
The objective of the second phase is to run simplex on the real problem. We can drop the artificial variables, as we have found a BFS of the real problem.

Consider the radiation therapy example. We have 2 artificial variables. The entire point of phase 1 is to find a BFS where these artificial variables have value zero. We find this by running simplex. We then find something like this:
(6, 6, 0.3, 0, 0, 0)

Then we drop the artificial variables, and we get:

(6, 6, 0.3, 0)

IMPORTANT: The phase-2 objective function, which in the radiation example is min Z = 0.4x_1 + 0.5x_2, is obviously not in proper form from Gaussian elimination, because it contains basic variables. We must therefore use the constraints, as modified by phase 1 (with the artificial-variable columns removed), to make the phase-2 objective function contain only nonbasic variables.

After this is done, we run simplex as usual.

A note on NO FEASIBLE SOLUTIONS:
If we are done with the first phase, meaning that it yields some “optimal” solution, we need to check whether the artificial variables are basic or nonbasic. If at least one artificial variable is basic with a positive value, the real problem has no feasible solution.
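The phase-1-to-phase-2 hand-off can be sketched like this; `to_phase2` is a hypothetical helper, and the tuples mirror the radiation therapy numbers above:

```python
# Phase-1 -> phase-2 hand-off: once phase 1 drives the artificial
# variables (assumed here to sit in the last positions of the solution
# tuple) to zero, drop them and keep the rest as the starting BFS for
# phase 2. If any artificial variable is still positive, there is no
# feasible solution to the real problem.
def to_phase2(solution, n_artificial):
    artificials = solution[-n_artificial:]
    assert all(v == 0 for v in artificials), "no feasible solution"
    return solution[:-n_artificial]

phase1_opt = (6, 6, 0.3, 0, 0, 0)          # phase-1 optimum, 2 artificials
assert to_phase2(phase1_opt, 2) == (6, 6, 0.3, 0)
```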

65
Q

Why can postoptimality analysis be particularly important for LP problems, compared with other types of problems?

A

One reason is that for LP problems, the effects of small changes in the parameters are easy to predict.

66
Q

Elaborate on shadow prices

A

The shadow price of a resource i (denoted y*_i) measures the marginal value of this resource. The marginal value of a resource is the additional gain in the objective we get from increasing this resource by a single unit. This means we are considering changes in the b-values (RHS). For instance, what happens to Z if we increase b_1 by one unit? This is what we call the shadow price.

The simplex method identifies each shadow price y*_i by looking at the coefficient of the i-th slack variable in row 0 of the final simplex tableau.

Economically, the shadow price also indicates the largest cost we would accept for the extra unit of the resource. As long as the cost for the resource is lower than the shadow price, we buy.

NB: Marginal value is marginal. Marginal values are point-level (local) quantities. So if we find that the shadow price at some point is 10 per added unit of resource b_2, adding 2 units of b_2 will not necessarily give a gain of 20.
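A hedged numeric illustration, using Wyndor-style data (max Z = 3x_1 + 5x_2 with x_1 <= 4, 2x_2 <= 12, 3x_1 + 2x_2 <= 18): corner-point enumeration stands in for reading the shadow price off the final tableau, and the shadow price of the second resource is recovered as the change in Z from one extra unit of b_2:

```python
from itertools import combinations

def solve(b):
    """Maximize 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= b, 3x1 + 2x2 <= 18,
    x1, x2 >= 0, by enumerating intersections of constraint boundaries."""
    cons = [(1, 0, 4), (0, 2, b), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]
    best = None
    for (a1, a2, r1), (c1, c2, r2) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:
            continue  # parallel boundaries: no corner point
        x1 = (r1 * c2 - a2 * r2) / det   # Cramer's rule
        x2 = (a1 * r2 - c1 * r1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in cons):
            z = 3 * x1 + 5 * x2
            if best is None or z > best:
                best = z
    return best

# Shadow price of resource 2: extra Z per one extra unit of b2.
shadow_price = solve(13) - solve(12)
```

With this data the optimal Z at b_2 = 12 is 36, and the shadow price comes out as 3/2, illustrating the one-unit-change definition above.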

67
Q

What are “free goods”?

A

“Free goods” is a term for goods/resources with a shadow price of 0. This happens when the constraint is not binding.

For instance, if the constraint x_1 <= 4 is not the tightest constraint, increasing its RHS will not actually do anything. Therefore, we call the resource a free good: there is a surplus of it.

The opposite of free goods is “scarce goods”. Scarce goods have positive shadow prices. They are connected to binding constraints: they are at the limit, and increasing the limit will benefit the objective function. Recall that “binding constraints” are constraints that hold with equality at the optimal solution.

68
Q

What is the purpose of sensitivity analysis?

A

Sensitivity analysis is about identifying sensitive parameters. Sensitive parameters are parameters that cannot be changed without changing the optimal solution. These parameters will have to be treated with special care and precise estimation.

69
Q

What are the sensitive parameters? Or, what parameters could be sensitive?

A

Sensitive parameters are defined as “parameters that cannot be changed without changing the optimal solution”.

In the model:

max Z = ∑(c_jx_j)[j=1, n]

s.t. ∑(a_ij x_j)[j=1, n] <= b_i, for i=1, …, I

So, we have:
c_j
b_i
a_ij

All these parameters can influence the optimal result in certain situations.

70
Q

What could happen to an optimal solution if we change some parameters?

A

We could remain at the optimal solution.

We could get a new optimal solution.

We could get an infeasible solution.

We could get a suboptimal solution.

71
Q

Say we change some parameters in a model that we have previously solved for optimality. What is the “obvious” way of finding out the result?

A

Solving the entire thing again, but with the new parameter values. This can take a lot of time. We would rather use sensitivity analysis without solving the entire problem again.

72
Q

What is the “sensitivity range for c_j”?

A

It is the range of values over which we can change c_j while the optimal solution remains the same. It is therefore a range of values, an interval, for c_j.

ex [-infinity, 2.5]

How do we find it?

Graphically, we look at our solution, the objective function, and the constraints. The objective function passes through the optimal point.
The task is to identify which constraint will be hit first as a consequence of changing c_j.

Remember parallelism and shifting the objective function as far out as possible. When the objective function is parallel with one of the constraints, we cannot change the c-value any further.

73
Q

What do we do if we change an a_ij value?

A

We must solve the whole system from scratch.

74
Q

Why do we want the objective function to contain only the nonbasic variables?

A

Because this is required in order to have a form that lets us read values straight out of the system. If some variables in it are basic, we would have to set up a vector parameterization, which is messy. Therefore, we go for the proper form from Gaussian elimination and include only one nonzero basic variable in every row. For the objective function row, this is the Z-value.

Because the nonbasic variables all equal 0, their coefficients give the exact contribution to Z per unit we increase them by.

Z = 30 + 3x_1 - 5/2 x_4

x_1 and x_4 are 0. Therefore it is easy to use them to find out which direction to work on next.
Recall that when we choose an entering basic variable, we solve the system of equations so that the other variables adjust.

The final simplex tableau holds an objective function row that contains the shadow prices. It is the final tableau because moving to an adjacent BFS would decrease Z. At the optimal BFS, the variables that appear in the objective function row are all equal to 0, meaning there is no slack there. Therefore, the coefficient of each such variable indicates the (marginal) increase in Z should we be able to relax the corresponding constraint a little.

75
Q

What is “allowable increase” and “allowable decrease”?

A

They take some objective coefficient as a reference point, and then specify the allowable range of values for this coefficient that still maintains the current optimal solution.

So, as long as the objective coefficient c_j is within the “allowable range”, the optimal solution is the very same point.

76
Q

What is sensitivity analysis?

A

Sensitivity analysis means looking at what happens to our objective function if we change parameter values.

Parameter values: b_i, c_j, a_ij

When we change the parameter values, the following can happen:
The solution can remain optimal
We can get an infeasible solution
The solution can remain feasible, but become suboptimal

77
Q

Very superficially, what happens if we change parameters in the objective function?

A

We can change which solution is optimal.

78
Q

With respect to changing a c_j value, what can we say about the sensitivity range?

A

We compare the slope of the objective function with the slope of the constraint.

-c_1/5 = -1/2
c_1 = 5/2

When the slopes are equal (c_1 = 5/2), the objective function is parallel with that specific constraint.
We know that the objective function previously passed through the optimal point. When the lines are parallel, we are at a boundary of the sensitivity range.

In this case, it means we can INCREASE c_1 up to 5/2 and the same point will still be the optimal point.

79
Q

What does it mean to change a b_j value?

A

To shift a constraint in parallel.

80
Q

How is the sensitivity range for b_j defined?

A

The range of values for b_j over which the variable values can change, but the variables in the basis remain the same.

This really just means that we push a corner point around. The same slack variables have value 0 before and after.

It means that as long as we are in this situation, we can increase Z at the same rate by increasing b_j. We shift the constraint in parallel by some number of units, and Z increases by that number of units multiplied by some coefficient (the shadow price).

81
Q

What is the connection between the shadow price and the sensitivity range for b_i?

IMP

A

The sensitivity range for b_i is the range over which the shadow price for b_i is valid.

82
Q

What is the economic, decision-maker-relevant meaning of the term “shadow price”?

A

The shadow price is the maximum price a decision maker is willing to pay for one extra unit of the resource.

83
Q

What is the connection between “reduced value” and “shadow price”?

Can use the following ex:
Z = 6.5 - 1/10 s_1 - 1/4s_2
5x_1+5x_2+s_1 = 25
2x_1+4x_2+s_2 = 16

A

Reduced value/cost and shadow price are basically the same thing. The reduced value is what we lose from Z by tightening a constraint.

Reducing the RHS of constraint 1 by one unit (from 25 to 24) has the same effect on the basic variables and the objective function as setting s_1 = 1.

This reduces the objective function by 1/10. This is called the reduced value.

At the same time, if reducing the RHS by one unit decreases Z by 1/10, then increasing the RHS by one unit increases Z by 1/10. The shadow price for b_1, y*_1, is therefore 1/10.

NB: This only applies to the optimal tableau. If the tableau is not optimal, it makes no sense to read off a shadow price.
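The same arithmetic in a couple of Python lines, using the row-0 equation from the example (`z` is a hypothetical helper evaluating Z as a function of the slacks):

```python
# Reading the reduced value and shadow price off row 0 of the final
# tableau: Z = 6.5 - (1/10)*s1 - (1/4)*s2. Setting a slack to 1 mimics
# tightening the matching RHS by one unit; the drop in Z is the
# "reduced value", and the shadow price is its mirror image.
def z(s1, s2):
    return 6.5 - s1 / 10.0 - s2 / 4.0

reduced_value_1 = z(0, 0) - z(1, 0)   # tighten constraint 1 by one unit
shadow_price_1 = reduced_value_1      # relaxing it gains the same amount

assert abs(reduced_value_1 - 0.1) < 1e-12
assert abs(shadow_price_1 - 0.1) < 1e-12
```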

84
Q

An increased RHS in a <= constraint is…

A

easier to satisfy. We call this a relaxation.

85
Q

An increased RHS in a >= constraint is…

A

harder to satisfy. We call this a tightening.

86
Q

A relaxation gives…

A

a larger feasible region and an equally good or better Z

87
Q

A tightening gives…

A

a smaller feasible region and an equally good or worse Z

88
Q

Explain the connection between parallelism and sensitivity ranges

A

This concerns c_j. By definition, “the sensitivity range for c_j is the range of values this coefficient can take that preserves the current optimal solution, under the assumption that no other coefficients change”.

What is actually happening here?

Say we have Z = c_1x_1 + c_2x_2
That gives us: x_2 = Z/c_2 - (c_1/c_2) x_1
The only thing we care about is the slope. This is because when the slope equals the slope of another constraint, the objective function lies parallel with that constraint. REMEMBER that we reach the optimal solution by shifting the objective function as far out into the feasible region as we can (maximization problem), so this automatically adjusts the objective function. When the slopes are equal, the lines therefore lie on top of each other. If we increase c_1 a little more, the slope becomes steeper than the constraint, which gives us a new optimal point. In fact, the point moves all the way to the next BFS.

The point is to understand how parallelism works with the objective function.

89
Q

Explain how we can use delta values in the simplex tableau to find the sensitivity range for objective function coefficients

A

What do we want to find out?
We want to find out how much c_j can change while the same optimal solution remains optimal.

If we have an objective function, we can illustrate it like this:

z = c_1x_1 + c_2x_2

z = (c_1+∆)x_1 + c_2x_2

What we do is introduce a kind of “new” c_1 that includes a delta value standing for the change we can make to c_1 while preserving the optimal solution.

One could plug this straight into the tableau and solve as usual. But this is time-consuming, and we have simpler methods.

We go all the way down to the last, “final” simplex tableau. We can figure out which delta values to include by intuition.
First, we only need delta values in the objective function row, and only in columns that actually have nonzero coefficients. The question then becomes: what should the delta coefficients contain?

Looking at the tableau, we can reason as follows: say we have a coefficient in the objective row for s_1. If s_1 is increased by one unit, the following happens:
1) We see directly what happens to the objective function value.
2) We know that by changing s_1 and solving the system, the other variables adjust. We can then see directly from the tableau how x_1, x_2, etc. will be affected.
Using the objective function, we can confirm this:
ex: Z = c_1x_1 + c_2x_2
∆Z = c_1(∆x_1) + c_2(∆x_2)
This gives a value of ∆Z matching the one that can be read directly from the tableau.

BUT we used another delta earlier, as part of the new c_1. What happens to it? We must include it.
∆Z = (c_1+∆)(∆x_1) + c_2(∆x_2)
∆Z = c_1∆x_1 + ∆∆x_1 + c_2∆x_2

STOP: Which part of the expression above is important?
Answer: ∆∆x_1
This is exactly what tells us which delta value to include in the coefficient of s_1.

The coefficient of s_1 in the objective row should contain the whole sum ∆Z = c_1∆x_1 + ∆∆x_1 + c_2∆x_2

91
Q

Solve the Wyndor case. Also give the sensitivity analysis for c_j and b_i, for all j and i.

A

Not completed yet.

c_1: 0<= c_1 <= 7.5

92
Q

Why is it NECESSARY to only have nonbasic variables in the objective function?

A

Because when we pick the “entering basic variable”, we need to be able to pick one purely on the basis of which variable improves Z the most. If some basic variable appears in this function, we know that it will change when we change one of the nonbasic ones, which muddles the comparison.

92
Q

Using artificial variables: why do we really need them?

A

We need them to have a basic variable for the constraint that is missing one. Recall that we use the slack-variables for this purpose. HOWEVER, when we have equality constraints, we cannot use slack variables. Therefore, there are no basic variables in these constraints. Because of this, we introduce the artificial variables (along with the huge penalty in the objective function for having the artificial variable > 0) so that we get a basic variable to start the algorithm.

93
Q

How can we know if there are no feasible solutions? Artificial variables hide this fact

A

In the final solution, if all artificial variables equal 0, we have feasible solution(s). However, if even one of the artificial variables is greater than 0, then there are no feasible solutions.

We can see this either at the end of BIG M, or at the end of the first phase of the two-phase method.

94
Q

IMP: Why can we assume that non-basic variables remain zero after one iteration? We do have constraints involving these variables that would seem to indicate a varying relationship, so how can we “ignore” them from the results?

A

The answer to this lies in adjacency.

We only consider adjacent nodes. Adjacent nodes share (n-1) constraint boundaries in problems with n decision variables. This means that if two CPF solutions are adjacent, they share n-1 constraints, so the variables representing the slack on those shared constraint boundaries REMAIN zero.

Also, we know that adjacent nodes share all but one of their non-basic variables. This means that as we move from adjacent node to adjacent node, the set of non-basic variables will change with one variable at a time.

So, because we know that moving to an adjacent node will only change one variable from zero to some value, we know that once we have found our entering basic variable, we can assume that all other non-basic variables remain zero.

95
Q

What do we “sometimes” call a_ij coefficients?

A

Technological coefficients. Because they are usually a result of the current employed technology.

96
Q

EXTIMP: Explain the indirect effect on Z when changing a non-basic variable that is equal to zero in the final tableau.

Second part: Explain how the indirect effect helps us find the sensitivity area of objective function coefficients.

Understanding this card is enough to understand everything about finding the sensitivity area of objective function coefficients algebraically.

A

The original objective function does not contain this variable. However, changing the variable's value from 0 to 1 will still impact the objective function by a value equal to the coefficient of the variable we just changed.

What happens here is the indirect effect. When we change, say, x3 from 0 to 1, we also impact the variables that are defined in terms of that just-varied variable. For instance, if x1 equals 10 - 3x3, then before we change x3 we have x1 = 10. But when we increase x3 to 1, we also change x1 to 10 - 3 = 7.
This effect happens to every variable whose ‘definition’/constraint uses the x3-variable.
When we add together all the indirect effects (remembering to multiply by the original coefficients of the objective function), we should (if calculated correctly) get a number exactly equal to the coefficient of the x3-variable in the objective function row of the final simplex tableau. The indirect effect is thus camouflaged inside the tableau.

END OF INDIRECT – START OF DELTA

We can use the indirect effect as a starting point for quickly determining the delta values, so that we can calculate the sensitivity range of the objective function coefficients.

EXAMPLE: Say we have the case like the image shows.

The indirect effect applies like it always does. However, when we are trying to find the sensitivity range of c_j, we usually approach it with a “new” cj equal to cj + ∆. We replace ALL old occurrences of the coefficient from the beginning of the simplex method with this new one, solve as normal, and watch the ∆-values propagate downward into their final values, which we can use directly. However, we can avoid solving the iterations AGAIN if we have already done them once with the original values.

The indirect effect from changing slack 1, s1, comes in the form of BOTH x1 and x2. x1 will be changed by 2/5, multiplied by 1 because of the objective function coefficient. x2 will be changed by -1/5, multiplied by 1.5 since 1.5 is the c2 coefficient. The sum 2/5 - 1.5/5 = 0.1, which is what the tableau shows.
BUT: We did not include the ∆'s when we multiplied by the coefficient weights of the objective function. What we really should do now is this:

∆Z = ∆x1 c1new + ∆x2 c2 (we only change one c-coefficient at a time)

∆Z = ∆x1 (c1old+∆c1) + ∆x2c2

∆Z = 2/5 (1 + ∆) - 1/5 (1.5)

∆Z = 2/5 + 2/5∆ - 3/10

∆Z = 1/10 + 2/5∆

There, we have found the coefficient of the delta value without redoing all the iterations.

So, when finding deltas for objective function coefficients, we are just looking at the indirect effect of what WOULD happen if we changed the nonbasic variables from 0 to 1. IMPORTANT: The reason only the coefficients from a single row are included in the final delta coefficient is that we only associate a single delta with the objective function at a time. If we look at c1, we give this one a delta. Therefore, it does not make sense to talk about a delta for x2.

Given these deltas, we find the sensitivity range by requiring every delta expression to be greater than or equal to 0. There are 3 delta expressions, but we don't need the RHS one, so we get 2 inequalities to solve:
1) 1/10 + 2/5 ∆ >= 0
2) 1/4 - 1/2 ∆ >= 0

We get:

∆ >= -1/4
∆ <= 1/2

Then we include the c1 value, which is 1:

0.75 <= c1 <= 1.5

This is the sensitivity area for c1.

Same procedure for every other objective coefficient as well.

We don't need the RHS value for the sensitivity analysis, but the pattern is still the same. The intuition goes like this:
(c1+∆)x1 = c1x1 + ∆x1
Since x1 = 2, we get: 2 + 2∆
From c2x2, we get: 1.5 times 3 = 4.5
Therefore, the sum is: 4.5 + 2 + 2∆ = 6.5 + 2∆
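As a cross-check on the range 0.75 <= c1 <= 1.5, one can enumerate the corner points of the assumed problem (max Z = c_1x_1 + 1.5x_2 s.t. 5x_1 + 5x_2 <= 25, 2x_1 + 4x_2 <= 16, x >= 0) and test which values of c_1 keep (2, 3) optimal:

```python
# Corner points of the feasible region above (found by intersecting
# constraint boundaries; (2, 3) is the current optimum).
corners = [(0.0, 0.0), (5.0, 0.0), (2.0, 3.0), (0.0, 4.0)]

def optimal_corner(c1, c2=1.5):
    return max(corners, key=lambda p: c1 * p[0] + c2 * p[1])

# Inside the allowable range [0.75, 1.5], (2, 3) stays optimal:
for c1 in (0.75, 1.0, 1.25, 1.5):
    z_opt = c1 * 2 + 1.5 * 3
    assert all(c1 * x1 + 1.5 * x2 <= z_opt + 1e-9 for x1, x2 in corners)

# Just outside the range, a different corner takes over:
assert optimal_corner(1.6) == (5.0, 0.0)   # slope too steep
assert optimal_corner(0.7) == (0.0, 4.0)   # slope too flat
```

The brute-force answer agrees with the delta derivation: (2, 3) survives exactly on [0.75, 1.5].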

97
Q

How do we determine the sensitivity range for nonbasic variables?

A

Recall the definition of the allowable range for objective function coefficients: how much we can change them while retaining the current optimal solution.

So we are asking: how much can we change the coefficient of a nonbasic variable before the optimal solution changes?
These nonbasic variables are equal to 0.

We can decrease the coefficient to minus infinity, and increase it by the variable's coefficient in row 0 of the final tableau (its reduced cost).

98
Q

EXTIMP: Explain how to find the allowable range of b_i values by using the delta-method

A

The hard way: we insert RHS + ∆ and run the entire simplex with this value. However, this is wasteful, as we can run simplex regularly and then find the sensitivity (almost) by inspection.

The key here is the term “shadow price”.

When we look at the final iteration of the simplex tableau and want to find the deltas for the RHS column, we ask this:
“How will this RHS-value be impacted by a change in the slack-variable corresponding to the constraint we are either relaxing or tightening?”.

Then we use the principle “relaxing slack variable s1 has the same effect as increasing the RHS by the same amount”.
We can see from the tableau that if we change s1 by one unit, its column contains, for every row, a coefficient saying how much the corresponding RHS value is affected by this change. This is essentially the principle behind shadow prices. Therefore, a change in s1 (or any other slack variable) will impact the RHS column via some coefficient. BUT: Since we are looking at the effect of changing by ∆, and not necessarily by a single unit, we replace the ‘1’ with ∆ in the calculations. So if the coefficient is 2/5, we get (2/5)∆ instead of 2/5, to accommodate changing by ∆ rather than by one unit.
If the coefficient of s1 in the objective function row (final simplex tableau) is 1/10, then we know that changing s1 by ∆ will change that RHS value by ∆ * 1/10. These are the values indicated by the image.

Then the question is: “How do we use these deltas to find the allowable range?”

From the example, we have 3 delta-cases:
1) 6.5 + 1/10 ∆
2) 2 + 2/5 ∆
3) 3 - 1/5 ∆

We actually don't need the first one, as it is the objective function row.
We care about the others, because they essentially represent x1 and x2, which are variables with nonnegativity constraints. Therefore, we get the following:

2 + 2/5 ∆ >= 0
3 - 1/5∆ >= 0

Which gives us…

∆ >= -5
∆ <= 15

-5 <= ∆ <= 15

Then we insert 25 (original RHS value) to find the allowable range:

20 <= RHS b1 <= 40
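A numeric cross-check of the range 20 <= b_1 <= 40, assuming both constraints (5x_1 + 5x_2 = b_1 and 2x_1 + 4x_2 = 16) stay binding at the optimum: solve the two binding equations for each b_1 and verify that the basic variables stay nonnegative exactly on that range:

```python
# Solve the two binding constraints 5x1 + 5x2 = b1 and 2x1 + 4x2 = 16
# for the basic variables. The basis stays valid while x1, x2 >= 0.
def basic_solution(b1):
    x2 = 8.0 - b1 / 5.0          # eliminate x1 between the two equations
    x1 = b1 / 5.0 - x2
    return x1, x2

for b1 in (20.0, 25.0, 40.0):    # inside the range: same basis holds
    x1, x2 = basic_solution(b1)
    assert x1 >= -1e-9 and x2 >= -1e-9

x1, _ = basic_solution(19.0)     # below 20: x1 goes negative
assert x1 < 0
_, x2 = basic_solution(41.0)     # above 40: x2 goes negative
assert x2 < 0
```

Outside [20, 40] a basic variable turns negative, so the basis (and the shadow price of 1/10) stops being valid, matching the delta result.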

99
Q

Define adjacency

A

For any LP problem with n decision variables, two corner-point feasible solutions are said to be ADJACENT if they share (n-1) constraint boundaries.

For the 2D case, this means that two CPF solutions are adjacent if they are placed so that they are both lying on the same constraint boundary, for instance both are on x=3 for the constraint given by x<=3.

In 3D, we must have 2 common constraint boundaries. Consider a cube. The corner at the top, furthest out, lies on the x, y and z boundaries. Each of its adjacent CPFs shares 2 of these: either x and y, x and z, or y and z.

100
Q

What is the requirement for a problem to possess CPF solution(s)?

A

The problem must be bounded. If the feasible region is bounded, we can guarantee CPF solutions.

101
Q

Why are the slack variables not originally included in the objective function?

A

Their coefficients are 0 in this function. This is because we are not rewarded for having slack; the reward is simply zero.

102
Q

In general, the number of free variables is equal to [number of variables] - [number of equations].

What could oppose this? When does this not work/apply?

A

It does not apply if there are redundant equations.

103
Q

How do we treat the objective function in simplex?

A

We treat it as if it were part of the functional constraints. Therefore, we write it in the form

Z - c_1x_1 - c_2x_2 - … = 0

Along with the other functional constraints.

104
Q

Elaborate on tie for leaving basic variable

A

Here is the case:

There is a potential problem regarding ties for the leaving basic variable.

If there is a tie, two or more basic variables reach 0 at the same time. In that case, we pick one of them as the leaving basic variable, and the other(s) remain basic with value 0.
If a variable not picked stays at 0 into the next iteration (which it will, given how simplex works) and we then pick this specific variable as the new leaving basic variable, we can get a perpetual loop: we are picking a variable that is already 0 as the leaving basic variable, so when we solve the system, we get no change.

We can avoid perpetual loops by doing some randomization on how we pick leaving basic variable.
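As a sketch (not from the book), the minimum ratio test with a deterministic tie-break could look like this; breaking ties on the smallest variable index is the leaving-variable half of Bland's anti-cycling rule:

```python
def leaving_row(rhs, pivot_col, basis):
    """Minimum ratio test. rhs[i] is the current RHS of row i, pivot_col[i] the
    entering variable's coefficient in row i, and basis[i] the index of row i's
    basic variable. Returns the pivot row, or None if Z is unbounded."""
    best = None
    for i, (b, a) in enumerate(zip(rhs, pivot_col)):
        if a > 1e-12:  # only positive coefficients limit how far we can move
            # Tuple comparison breaks ratio ties on the basic variable's index.
            cand = (b / a, basis[i], i)
            if best is None or cand < best:
                best = cand
    return None if best is None else best[2]

# Tie: both rows reach 0 at ratio 2; the row whose basic variable has the
# smaller index (x3 in row 0) is chosen to leave.
print(leaving_row(rhs=[4, 6], pivot_col=[2, 3], basis=[3, 4]))  # 0
```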

105
Q

Elaborate on the usage of the term “degenerate”

A

Degenerate is used in two ways:
1) For a basic variable with a value of 0.
2) For a BF solution in which at least one basic variable has value 0.

So both usages are based on the same thing, but the "object" they refer to can be either the BF solution or the specific variable.

106
Q

What is the only serious problem regarding forms that are not in our standard form?

A

Finding that initial BFS.

We need to use the artificial variable technique to deal with this problem.

107
Q

What is always complementary to artificial variables?

A

An overwhelming penalty for letting any of the introduced artificial variables take a value greater than 0.

108
Q

Is the artificial variable basic or non-basic?

A

It starts out as a basic variable: we use the artificial variable to enlarge the feasible region, essentially turning the equality into an inequality. If the artificial variable equals 0, then we are actually AT the equality constraint. If the artificial variable is greater than 0, then we are somewhere in the artificial (enlarged) feasible region.

109
Q

how do we define an artificial variable?
Say we have 3x + 5y = 18

Introduce the artificial variable for this constraint

A

x-art = 18 - 3x - 5y

Thus:

3x + 5y + x-art = 18

110
Q

When we use Big-M, what do we need to do after defining some artificial variable?

A

We add the penalty to the objective function.

HOWEVER: Now the objective function contains basic variables as well as non-basic ones. As we know, the objective function should only contain non-basic variables. Therefore, we need to convert it (along with the system of equations) into proper form using Gaussian elimination. In other words, we algebraically eliminate the basic-variable coefficients from the objective function.
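A hypothetical sketch of this elimination, with each row-0 coefficient stored as a (constant, M-part) pair so that the value c + mM is the pair (c, m). The numbers (objective Z = 3x1 + 5x2, artificial constraint 3x1 + 2x2 + xbar = 18) are made up for illustration:

```python
# Row 0 of  Z - 3x1 - 5x2 + M*xbar = 0, before restoring proper form.
# Each coefficient is (constant, M-part): its value is constant + M * M-part.
row0 = {"x1": (-3, 0), "x2": (-5, 0), "xbar": (0, 1), "rhs": (0, 0)}

# Constraint row containing the artificial variable: 3x1 + 2x2 + xbar = 18.
art_row = {"x1": 3, "x2": 2, "xbar": 1, "rhs": 18}

# Gaussian elimination: row0 := row0 - M * art_row,
# which drives xbar's coefficient in row 0 to zero.
for var, a in art_row.items():
    c, m = row0[var]
    row0[var] = (c, m - a)

print(row0)
# x1: (-3, -3), x2: (-5, -2), xbar: (0, 0), rhs: (0, -18)
# i.e. Z - (3 + 3M)x1 - (5 + 2M)x2 = -18M, with only non-basic variables left.
```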

111
Q

If we minimize Z, what should be the sign of the M coefficients when using Big-M?

A

Positive. This is the opposite of the maximization problem.

112
Q

Consider two-phase. Do we minimize or maximize the phase1 objective function?

A

We always minimize it. This is because this objective function consists only of artificial variables. In the BFS from which we want to start simplex on the original problem, ALL artificial variables must equal 0. Therefore, we minimize their sum.

113
Q

Suppose we change the Wyndor case like this:

Product 1 is already in production. The variable x1 represents the change in production of product 1.
Variable x2 represents the production of product 2.

What kind of scenario do we have here? What do we need to do?

A

This is a scenario where we should allow x1 to be negative: a negative x1 would represent the amount by which production of product 1 is reduced.
To solve this kind of problem, we have to deal with negative variable values, because our standard model does not allow them — it assumes every variable has a non-negativity constraint.

There is also a problem with the leaving basic variable. As we know, the procedure for choosing the leaving basic variable relies on the fact that all variables have non-negativity constraints. This is crucial for finding the distance we are allowed to travel.

We deal with negative variables by performing a substitution. The substitution depends on whether the variable has a (negative) lower bound or not.

WITH BOUND
Consider xj >= Lj, where Lj is some negative constant. In this case, we move Lj to the other side, xj - Lj >= 0, and define x'j = xj - Lj. x'j is non-negative.

The important part here is how we enter this substitution into the model. Every place where we have xj, we substitute xj = x'j + Lj. This means that in the constraints, the RHS values will change.

WITHOUT BOUND
When there is no bound, we need to make a substitution with two variables.
If xj is unbounded below, we replace it with xj^(+) - xj^(-).
This difference, where both variables are non-negative, can take on any value.
Why are they called positive and negative?
If xj is greater than 0, then xj^(+) = xj (and xj^(-) = 0).
If xj is smaller than 0, then xj^(-) = abs(xj) (and xj^(+) = 0).
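Both substitutions can be sketched in code — a hypothetical illustration of the bookkeeping, not book material:

```python
# WITH BOUND: xj >= Lj (Lj negative). Substitute xj = x'j + Lj, with x'j >= 0.
def shift_lower_bound(coeffs, rhs, j, L):
    """In sum(coeffs[k] * x[k]) <= rhs, replacing x[j] by x'[j] + L moves
    coeffs[j] * L to the right-hand side; only the RHS changes."""
    return coeffs, rhs - coeffs[j] * L

# e.g. 2x1 + x2 <= 10 with x1 >= -4 becomes 2x'1 + x2 <= 18.
print(shift_lower_bound([2, 1], 10, j=0, L=-4))

# WITHOUT BOUND: xj free. Substitute xj = xj_plus - xj_minus, both >= 0.
def split_free(x):
    return (x, 0) if x >= 0 else (0, -x)

print(split_free(7))   # (7, 0): xj+ = 7, xj- = 0
print(split_free(-3))  # (0, 3): xj+ = 0, xj- = 3
```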

114
Q

Regarding artificial variable: Do we get inequality or equality?

A

The constraint starts as an equality and stays an equality — we NEVER explicitly write an inequality. The point is only that, with the artificial variable added, the feasible region acts like one.

115
Q
A