Linear programming Flashcards

1
Q

In a linear programming problem, what does each constraint do in terms of the graphical solution?

A

Each constraint defines a halfspace. A constraint cuts the space in two parts, and since the constraint typically has <=, it selects one of those parts (one new halfspace).
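A small sketch with a hypothetical constraint 2*x1 + 3*x2 <= 12: each point either satisfies it (lies in the selected halfspace) or not, so the constraint cuts the plane in two and keeps one half.

```python
# Hypothetical constraint 2*x1 + 3*x2 <= 12 (coefficients made up for illustration).
def in_halfspace(x1, x2):
    # The <= side of the line 2*x1 + 3*x2 = 12 is the selected halfspace.
    return 2 * x1 + 3 * x2 <= 12

print(in_halfspace(0, 0))   # the origin satisfies the constraint
print(in_halfspace(6, 6))   # this point lies on the other side of the line
```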

2
Q

give a definition of a half space

A

A set of solutions that lie on the feasible side of the marked line, where the marked line is the linear constraint we have added to the space.

3
Q

What is a level curve?

A

A level curve is obtained by fixing the objective function to a specific value (z = k). When we use 2 variables, doing this allows us to plot the curve graphically (which we call a level curve, since it is fixed at a level).

So, this plotted line holds all the different combinations of the two variables that give the specific value we fixed.
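A small sketch, assuming a hypothetical objective z = 3*x1 + 2*x2 fixed at the level k = 12: every point on the level curve gives the same objective value.

```python
# Hypothetical objective z = 3*x1 + 2*x2, fixed at the level k = 12.
k = 12.0
# Solve the level-curve equation 3*x1 + 2*x2 = k for x2 at a few x1 values.
points_on_curve = [(x1, (k - 3 * x1) / 2) for x1 in [0.0, 1.0, 2.0, 4.0]]
for x1, x2 in points_on_curve:
    # Every combination on the curve yields exactly the fixed value k.
    assert abs(3 * x1 + 2 * x2 - k) < 1e-9
```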

4
Q

elaborate on the level curve and the solution to an LP problem

A

The level curve has a fixed slope. Our objective is to find the largest possible value for the function/level curve. We find this point by shifting the level curve as far out in the feasible region as possible.

5
Q

What is a binding constraint?

A

A constraint is said to be binding if its equality holds exactly at the optimal solution.

6
Q

Another word for a binding constraint

A

Active constraint

7
Q

If we have a problem that has 2 optimal solutions (meaning 2 different corner points are optimal) what can we say?

A

The line between them also holds optimal value. This is due to linearity.

This means that a problem has either 1 unique optimal solution, or infinitely many optimal solutions.
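The linearity argument can be checked numerically. The corner points and the objective z = x1 + x2 below are hypothetical.

```python
# Hypothetical max problem with objective z = x1 + x2.
def z(p):
    return p[0] + p[1]

a, b = (4.0, 0.0), (0.0, 4.0)   # two optimal corner points, both give z = 4
# By linearity, every convex combination lam*a + (1-lam)*b is also optimal.
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    p = (lam * a[0] + (1 - lam) * b[0], lam * a[1] + (1 - lam) * b[1])
    assert abs(z(p) - 4.0) < 1e-9   # same optimal value along the whole segment
```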

8
Q

Say we find a problem that has an unbounded solution. What does this mean?

A

It means that there is no limit on how much we can gain. Usually, this means that there is something wrong with either the input data or the model formulation.

9
Q

Elaborate on the relationship between feasibility and objective function

A

There is no relationship.

Feasibility depends only on the constraints.

10
Q

Define the neighborhood of points for continuous problems

A

N(x^(k)) = {x | ||x^(k) - x|| < e, e>0}

In other words, we define the neighborhood of a point to be all points that are within a certain distance to the point.
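The definition can be sketched as a small predicate; the choice eps = 0.5 below is arbitrary.

```python
import math

# Neighborhood of x_k: all x with ||x_k - x|| < eps (eps = 0.5 is an arbitrary choice).
def in_neighborhood(x_k, x, eps=0.5):
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_k, x)))
    return dist < eps

print(in_neighborhood((1.0, 1.0), (1.1, 1.0)))   # nearby point -> in the neighborhood
print(in_neighborhood((1.0, 1.0), (3.0, 1.0)))   # distant point -> not in it
```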

11
Q

Why are local optima easier to find than global?

A

For a global optimum, we need to exhaustively search the space, which is basically not feasible for large spaces.

12
Q

Main challenge with search methods?

A

Search methods rely on local optimum conditions. This means that there are cases where the solution is not guaranteed to be global optimum.

13
Q

When is a problem regarded as easy for a search method?

A

A search method regards a problem as easy when it can guarantee local optimum => global optimum.

14
Q

Define a convex problem

A

A problem

min f(x) subject to x in X

is a convex problem if f(x) is convex and X is a convex set.

A problem is also considered convex if it is a maximization problem and the objective function is concave, while the set X (feasible region) is convex.

15
Q

why are convex problems nice to work with?

A

Local optima ==> global optima

16
Q

what is a concave problem?

A

No such thing.

Problems are either convex or non-convex

17
Q

Are LP problems convex or non-convex?

A

Convex

18
Q

elaborate generally on the broad working of a search method

A

A search method starts from a feasible point, which we call a feasible solution.

Then the search method will iteratively move to new points, where the new points have better (or equal) objective function value than the current point.

The search method stops once there are no better points (among those it considers).

19
Q

What can we say is “produced” by a search method?

A

A search method produces a sequence of feasible solutions, whose objective function values become successively better. Note that we may have consecutive points with the same value.

20
Q

elaborate detailed on the search method when used in continuous problems

A

The sequence of points generated by the search method is defined as follows:

x^(k+1) = x^(k) + t^(k) d^(k)

x refers to a point.
t refers to a step size, or step length, that defines the magnitude of the move.
d refers to a vector that gives the direction of movement. We call d “the search direction”.

The interesting part is that this formula is general for search methods. Different kinds of search methods are distinguished by how they pick the search direction and the step length.
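The update formula x^(k+1) = x^(k) + t^(k) d^(k) can be sketched directly; all numbers below are hypothetical.

```python
# One generic search step: x_new = x + t*d (hypothetical point, direction, step length).
def search_step(x, t, d):
    return [xi + t * di for xi, di in zip(x, d)]

x_k = [2.0, 0.0]    # current point
d_k = [1.0, 0.5]    # search direction
t_k = 2.0           # step length
x_next = search_step(x_k, t_k, d_k)
print(x_next)   # [4.0, 1.0]
```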

21
Q

What should be common for all search methods (in continuous problems) regarding the search direction and the step length

A

The search direction should be feasible. This means that it should be possible to take an arbitrarily small step in the direction of the search direction and still have a feasible solution.

The search direction should also be improving. Meaning, taking a small step in its direction should yield a better value for the objective function value.

The step length is generally chosen so that, given a search direction, we move until the solution doesn't improve anymore.

22
Q

There is a step by step process in regards to search methods for continuous problems. What are the steps?

A

Step 0)
Start from a feasible point, x^(0). Set k=0

Step 1)
Determine a search direction, d^(k).

Step 2)
Check the convergence criterion. If we cannot find any feasible and improving search directions, we can conclude that we have found a local optimum. Since linear problems are convex, we'd have a global optimum now.

Recall the difference and importance of both the “improving” and “feasible” search direction. Improving refers to how a small step will improve the objective value. Feasible refers to how a small step will still be legal.

Step 3)
Decide a step length, t^(k)

Step 4)
Determine the new point using the formula

Step 5)
Update k=k+1

Go to step 1)

23
Q

Regarding the step process of search methods, what are some comments worth noticing?

A

1)
Selection of starting point. We can use phase 1 in LP.

2)
Choosing the search direction. We typically use some information from the gradient and the Hessian matrix of the objective function at the specific point we're at.

3)

24
Q

Define what we mean by primal problems in regards to the search method.

How about dual?

A

Primal refers to the principle that each iterate is a feasible solution, which we continue to improve until we reach a local/global optimum.

Dual refers to moving between infeasible solutions in a systematic way until we reach a feasible solution. With some tricks, we can make sure that once we find a feasible solution (using a dual method), we have the local (and possibly global) optimum.

25
Q

When are two vertices adjacent?

A

Two vertices are adjacent when the sets of active constraints in the two points differ in only one element.

26
Q

When are two basic solutions adjacent?

A

Two basic solutions are adjacent if they have m-1 basic variables in common.

27
Q

what do we mean by “change of basis”?

A

Change of basis refers to moving between adjacent basic solutions by exchanging one basic variable for a non-basic one.

Recall that one variable is the leaving basic variable, and the other is the entering basic variable.

The leaving basic variable is set to 0, while the entering basic variable has previously had the value 0, and will now most likely get a new value.

In terms of change of basis, we have various feasible search directions. At a given point, we can find these feasible search directions by relaxing the currently active constraints. In other words, at some point, we look at the constraints that are binding/active, and let the corresponding variable (which is currently 0) increase its value. The search direction is given by the line obtained when we allow it to increase.

28
Q

how many possible feasible search directions will we have from each point?

A

n-m, one for each non-basic variable.

29
Q

considering all the possible feasible search directions using simplex, do we consider all of them?

A

sort of. We are only interested in those with a positive contribution, which means the improving feasible search directions.

Here, the term “reduced cost” comes into play.

30
Q

Elaborate on reduced cost

A

Reduced cost is specific to each point x^(k) and a certain direction.

The reduced cost in point x^(k) when we move in a direction d, is defined as:

c-bar = gradient f(x^(k))^T d = c^T d

So, the reduced cost is directly tied to how the objective function value will change as a function of moving in the specific direction from the specific point. Note that reduced cost is a per-unit quantity; it does not account for the step length.
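A minimal sketch of the formula c-bar = c^T d, with a made-up cost vector and direction:

```python
# Reduced cost of moving in direction d, for a linear objective with
# (hypothetical) cost vector c: c_bar = c^T d.
def reduced_cost(c, d):
    return sum(ci * di for ci, di in zip(c, d))

c = [3.0, 2.0, 0.0]        # objective coefficients (the slack variable has cost 0)
d = [1.0, 0.0, -1.0]       # raise x1 by one unit; its slack drops by one
print(reduced_cost(c, d))  # z changes by 3 per unit step in this direction
```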

31
Q

When is the reduced cost in ascent direction?

A

when the reduced cost is greater than 0

32
Q

Assume xj is the entering basic variable. What does the reduced cost tell us?

A

The reduced cost tells us how the objective function value will change if we let the entering basic variable xj increase by one unit.

33
Q

What happens if a component of the search direction d is negative for some entering basic variable?

A

This just means that the corresponding basic variable will decrease if we increase the current entering basic variable, which is fine, provided that we keep it inside the legal interval. The legal interval is defined by the constraints that the variable is involved in.

Actually, if the search direction has no negative component, it means that we can improve the objective function value without bound, which means that the problem is unbounded and most likely wrong.

The components of the search direction that can be negative are those of the basic variables. Recall that basic variables can be a mix of slack, surplus, and regular variables. Slack variables and decision variables represent natural counterparts. Say we have the constraint x1 <= 10, which in standard form is x1 + s1 = 10. In order to increase x1, s1 must decrease. The search direction would be (1, -1).

34
Q

What determines the step length in the simplex method?

A

so, we have found the search direction d that gives the greatest reduced cost from a certain point. In other words, in our current basic feasible solution, we have found the entering basic variable that gives the greatest increase in the objective function value.

Now, we need to understand how far we should move in this direction. Because of linearity, and since we know that the optimal solution must lie in a vertex, we are interested in how far we can move before one of the constraints becomes violated.
This can be non-negativity constraints, or it can be functional constraints.

35
Q

Why do we want to perform row operations?

A

It is about writing the system of equations in a suitable form.

The suitable form represents all the basic variables and z as functions of only the non-basic variables. The benefit of this is that the reduced costs are easily read off, and it is easy to choose the best search direction and step length.

36
Q

Elaborate on the step by step process for search method, but now with the simplex (in terms of how simplex perform the steps)

A

Step 0) Start from basic feasible solution x^(0). Let k=0.

Step 1)
Determine possible search directions and the reduced costs c-bar_j for all non-basic variables by performing suitable reformulations of the system of equations.

Step 2)
Check the convergence criterion: The point x^(k) is optimal IF:
reduced cost greater than or equal to 0 for all variables j (minimization problem),
reduced cost smaller than or equal to 0 for all variables j (maximization problem).
(basically, does the reduced cost say that we can improve z or no)

Step 3)
Determine the entering basic variable according to the criterion:
c-bar_p = min j {c-bar_j | c-bar_j < 0} (min problem)

c-bar_p = max j {c-bar_j | c-bar_j > 0} (max problem)

(recall how simplex tableau swap the signs around).

This gives us the entering basic variable and a search direction.

Step 4)
Determine the step length by:

t^(k) = (x_r^(k))/(-d_r^(k)) = min j {(x_j^(k))/(-d_j^(k)) | d_j^(k) < 0}

which means that x_r becomes the leaving basic variable. The calculation is the same for min and max problems.

Step 5)
The new point is given by the regular search method equation. Update k := k+1, and go to step 1.
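Step 4's minimum-ratio test can be sketched as follows; the numbers are hypothetical.

```python
# Minimum-ratio test: among basic variables with d_j < 0, the step length is
# min x_j / (-d_j); the variable achieving the minimum leaves the basis.
def ratio_test(x_basic, d_basic):
    ratios = [(x / -d, j) for j, (x, d) in enumerate(zip(x_basic, d_basic)) if d < 0]
    t, leaving = min(ratios)   # smallest ratio limits the move
    return t, leaving

x_b = [6.0, 4.0, 3.0]      # current values of the basic variables
d_b = [-2.0, -1.0, 0.5]    # their components of the search direction
t, r = ratio_test(x_b, d_b)
print(t, r)   # step length 3.0; basic variable 0 hits zero first and leaves
```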

37
Q

given the LP problem written in standard-form matrix notation. First, recall how this looks.

Second, how is a basic solution defined?

A

max z = c^Tx

st.
Ax=b

x>=0

A basic solution is defined by selecting m variables as basic and letting the remaining n-m variables be non-basic.

We can partition the set of variables into the different parts (basic and non basic)

x = (xB xN)

We can also partition the matrix A so that it fits the description in terms of columns:

A = (B | N)

Finally we partition the objective function as well:

c = (cB cN)

We end up with a system of equations that is partitioned into basic and non-basic parts.

max z = cb xb + cn xn

s.t.

Bxb + Nxn = b

xb, xn >= 0

From this, we can see that we can solve the equation for xb by multiplying by the inverse of B from the left

xb + B^(-1)Nxn = B^(-1)b

xb = B^(-1)b - B^(-1)Nxn

Now, since xn represent the values of the non-basic variables, they are naturally set to 0. This gives us:

xb = B^(-1)b

The only requirement is that xb = B^(-1)b >= 0

To express the requirement on xb to be optimal basic solution, we need to modify the objective function.

z = cb xb + cn xn

z = cb (B^(-1)b - B^(-1)Nxn) + cn xn

z = cb B^(-1)b - cb B^(-1)Nxn + cn xn

z = cb B^(-1)b + (cn - cb B^(-1) N) xn

Notice how we no longer have any xb part. The equation is purely a function of a constant (first part) and an xn part.

In other words, cb B^(-1)b represents the current objective function value corresponding to the current basic solution, while (cn - cb B^(-1)N) represents the reduced costs of selecting any of those non-basic variables to be the entering basic variable.

we can use the vector of reduced costs to see if the solution is optimal. We know that, in a max problem, the reduced costs must be smaller than or equal to 0 in order for the solution to be optimal.
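The formulas xb = B^(-1)b and (cn - cb B^(-1)N) can be checked on a tiny hypothetical LP (max z = 3*x1 + 2*x2 with x1 + x2 + s1 = 4 and x1 + s2 = 3, starting basis {s1, s2}):

```python
# Tiny hypothetical LP in standard form:
#   max z = 3*x1 + 2*x2
#   x1 + x2 + s1 = 4
#   x1      + s2 = 3
# Basis {s1, s2}: B is the identity; N holds the x1 and x2 columns.
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

B = [[1.0, 0.0], [0.0, 1.0]]
N = [[1.0, 1.0], [1.0, 0.0]]
b = [4.0, 3.0]
c_B, c_N = [0.0, 0.0], [3.0, 2.0]

B_inv = inv2(B)
x_B = matvec(B_inv, b)                              # basic solution xb = B^-1 b
B_inv_N = [matvec(B_inv, col) for col in zip(*N)]   # columns of B^-1 N
red = [cn - sum(cb * a for cb, a in zip(c_B, col))
       for cn, col in zip(c_N, B_inv_N)]            # cn - cb B^-1 N
print(x_B, red)   # positive reduced costs -> not optimal for a max problem
```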

38
Q

elaborate on the thing with the initial and final simplex tableau

A

We set up the initial using the matrix notation.

Then we select the subset of variables to be basic and non basic.

Then we convert each row so that it represents each basic variable as a function of only non-basic variables.

39
Q

If some original constraint is a >= constraint, what happens in the matrix notation tableau?

A

The corresponding column in B^(-1) will have flipped signs.

Recall that a constraint corresponds to a column, specifically the column that has the surplus/slack variable.

40
Q

What happens if some original constraint is equality constraint, in regards to the matrix notation tableau?

A

The corresponding column will not exist because we never add a slack/surplus variable. The column in B^(-1) is simply missing.

41
Q

What is the aim of the two phase method?

A

the aim of the two-phase method is to find a starting point for the simplex method.

when we talk about a starting point, we actually refer to “any” basic feasible solution.

42
Q

What question does the phase 1 problem solve/answer

A

Is there a feasible solution to the given set of constraints?

43
Q

elaborate on the general topics of two phase (specifically phase 1)

A

for any constraint that is not <= (meaning either = or >=) we need to add artificial variables.

We add the artificial variables, and use them as basic variables.
Recall that if we don't do this, then we have surplus variables as basic variables. This is not good practice for the initial point, as we will typically get the origin, which (in the presence of >= constraints) is a basic solution, but not a basic feasible solution.

The goal of phase 1 is to find a basis that has no artificial variables in it. Such a basis would mean that we can set all the artificial variables to 0, which is the same as if they were not included. The only difference is that with this configuration, we also know a legal basis, which we can use as the initial point in simplex.

Since we want to find a basis without artificial variables, the objective is to minimize the sum of artificial variables, with the goal of achieving the sum / obj value of z=0.

min z = a1 + a2 + …

It should be completely clear why the objective function is always to minimize this sum, regardless of phase 2 objective.
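A minimal numeric sketch of the phase-1 idea for a single hypothetical >= constraint: x1 >= 2 becomes x1 - s1 + a1 = 2, and phase 1 minimizes a1.

```python
# Hypothetical constraint x1 >= 2, in standard form: x1 - s1 + a1 = 2.
def artificial_value(x1, s1, rhs=2.0):
    # a1 is chosen so the equality constraint holds for the given x1, s1.
    return rhs - x1 + s1

# Starting basis uses the artificial: x1 = s1 = 0 gives a1 = 2 (origin is infeasible).
print(artificial_value(0.0, 0.0))
# Phase 1 drives a1 to 0: with x1 = 2, s1 = 0, the artificial vanishes,
# so the original constraint set has a feasible solution.
print(artificial_value(2.0, 0.0))
```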

44
Q

What is extremely important to do when doing phase 1

A

Given the specific objective function, we need to make sure that it is written in canonical form, which means it is a function of only non-basic variables. Recall that since all of the artificial variables are basic, we need to perform row operations that remove them and bring in the non-basic coefficients instead.

When we find either the optimal solution or no feasible solution, we are done.

Say we find the optimal solution. Now we drop the columns of the artificial variables.
Then we add the original objective function instead, but we need to convert it to canonical form (a function of only z and non-basic variables).

Now we can continue where we left off with the new problem, which is the phase 2 problem.

45
Q

Define a degenerated solution

A

A degenerated solution is one where a basic variable has value 0.

The issue is that the reduced cost of a search direction indicates positive movement, but the method finds that the step length is 0. However, in most cases, the simplex method never returns to earlier solutions. This means that although we can have some zero-improvement iterations, we will eventually break out of them.

There are specific cases that can cause a perpetual loop (cycling), but there are also ways to avoid it. For instance, perturbing the values slightly, or modifying the selection criterion.

46
Q

how many iterations are typically needed for convergence

A

roughly 3 times the number of equations.

47
Q
A