Simplex, LP, duality, extensions Flashcards
Elaborate on search methods
A search method iteratively finds better and better solutions to a problem.
They typically start from a feasible solution to the problem.
When no better point can be found, a local optimum is determined.
The objective function value will be better than or equal to the value from the previous iteration.
Elaborate on search methods for continuous problems
The key point about continuous problems is that the search sequence of points is given by the following formula:
x^(k+1) = x^(k) + t^(k)d^(k)
t is the step length.
d is the vector that describes the direction we move in. In other words, we want to find the vector d that says “we move in this direction”. In LP and simplex, this basically means that we consider increasing a variable by one unit and look at how the other variables respond.
In general, different choices of the search direction d and the step length define different search methods.
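A minimal sketch of this generic iteration x^(k+1) = x^(k) + t^(k)d^(k); the quadratic objective, the steepest-descent choice of d, and the fixed step length t are illustrative assumptions, not taken from the flashcards:

```python
import numpy as np

# Sketch of the generic search iteration x^(k+1) = x^(k) + t*d.
# Assumptions (illustrative only): objective f(x) = ||x - target||^2,
# direction d = -gradient (steepest descent), fixed step length t.
target = np.array([3.0, -1.0])

def f(x):
    return np.sum((x - target) ** 2)

def grad(x):
    return 2.0 * (x - target)

x = np.zeros(2)          # start from some feasible point
t = 0.1                  # step length t^(k), held constant here
for k in range(100):
    d = -grad(x)         # improving (descent) direction
    if np.linalg.norm(d) < 1e-8:
        break            # no improving direction left -> local optimum
    x = x + t * d        # the update x^(k+1) = x^(k) + t^(k) d^(k)

print(x, f(x))           # x approaches the minimizer; f decreases every iteration
```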
What is common for almost all search methods with regard to the search direction?
The search direction should be feasible: it must be possible to take a small step in that direction (every step length between 0 and some s > 0) while keeping the solution feasible.
The search direction should also improve the value of the objective function.
Elaborate on search methods for integer programming problems
Here we cannot use the concepts of step length and search direction. Instead, we talk about neighboring solutions: we search the set of neighboring solutions until no better neighbor exists, i.e., until we reach a local optimum.
It is absolutely key to remember that IP problems are non-convex. This means that a local optimum is not guaranteed to be a global optimum; therefore, these problems are more difficult to solve than LP problems.
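A hedged sketch of such a neighborhood search on a small 0-1 knapsack-style instance; the flip-one-variable neighborhood, the data, and the stopping rule are illustrative assumptions, not a prescribed method. Note that the local optimum it finds is in fact not the global one, which illustrates the point above:

```python
import numpy as np

# Local search over neighboring solutions for a 0-1 knapsack-style problem.
# Assumptions (illustrative only): neighbors differ in one flipped variable,
# and we stop when no feasible neighbor is better (a local optimum, which for
# this non-convex problem need not be a global optimum).
values = np.array([6.0, 5.0, 4.0, 3.0])
weights = np.array([4.0, 3.0, 2.0, 1.0])
capacity = 6.0

def feasible(x):
    return weights @ x <= capacity

def neighbors(x):
    for j in range(len(x)):
        y = x.copy()
        y[j] = 1 - y[j]          # flip one variable -> a neighboring solution
        yield y

x = np.zeros(4, dtype=int)       # start from a feasible solution
improved = True
while improved:
    improved = False
    for y in neighbors(x):
        if feasible(y) and values @ y > values @ x:
            x, improved = y, True
            break                # move to the better neighbor and repeat

print(x, values @ x)             # local optimum value 10; the global optimum is 12
```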
What can we say about the feasible region of an LP problem?
The feasible region of an LP problem is a convex set.
PROOF:
We consider two arbitrary points x^(1) and x^(2) that are feasible with respect to constraint i:
∑aij xj^(1) <= bi
∑aij xj^(2) <= bi
Then we consider a third point x that lies somewhere on the line segment between x^(1) and x^(2), expressed with a parameter λ in [0, 1]:
x = (1-λ)x^(1) + λx^(2)
Now we want to see whether x also satisfies constraint i.
Multiply the first inequality by (1-λ) >= 0 and the second by λ >= 0, and add them:
(1-λ)∑aij xj^(1) + λ∑aij xj^(2) <= (1-λ)bi + λbi = bi
Collecting the left-hand side under one sum gives
∑aij ((1-λ)xj^(1) + λxj^(2)) <= bi,
which is exactly ∑aij xj <= bi.
This proves that the arbitrary point x between x^(1) and x^(2) also satisfies constraint i.
We also make use of the theorem that the intersection of convex sets is itself a convex set (the feasible region is the intersection of the sets defined by the individual constraints). Therefore, we can conclude that the feasible region of an LP problem is convex.
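As a small numeric illustration of the result (not part of the proof), one can check with made-up data that every convex combination of two feasible points stays feasible:

```python
import numpy as np

# Numeric illustration of the convexity argument above, with made-up data:
# if x1 and x2 satisfy A x <= b, so does every point (1-lam)*x1 + lam*x2.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([8.0, 9.0])
x1 = np.array([2.0, 2.0])        # A @ x1 = [6, 8] <= b
x2 = np.array([0.0, 3.0])        # A @ x2 = [6, 3] <= b

for lam in np.linspace(0.0, 1.0, 11):
    x = (1 - lam) * x1 + lam * x2
    assert np.all(A @ x <= b + 1e-12)   # the combination stays feasible
print("all convex combinations are feasible")
```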
When is a basic solution to the system of equations Ax = b obtained?
A basic solution is obtained if n-m variables are set to 0 and the remaining variables get their unique values when we solve the remaining m×m system.
The variables that we set to 0 are called non-basic variables.
The others are called basic variables.
What is the requirement for there to exist a basic solution in the remaining m×m system?
The columns must be linearly independent.
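A minimal sketch, with made-up A and b, of how a basic solution is obtained: choose m basic columns, check that they are linearly independent, solve the m×m system, and set the remaining n-m variables to 0:

```python
import numpy as np

# Sketch of computing a basic solution of Ax = b (made-up data, m=2, n=4):
# choose m basic columns, verify they are linearly independent (nonsingular
# basis matrix B), solve B x_B = b, and set the n-m non-basic variables to 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
m, n = A.shape

basic = [0, 1]                       # indices of the chosen basic variables
B = A[:, basic]                      # the m x m basis matrix
if np.linalg.matrix_rank(B) == m:    # are the columns linearly independent?
    x = np.zeros(n)                  # non-basic variables set to 0
    x[basic] = np.linalg.solve(B, b) # unique values of the basic variables
    print(x)                         # here: [2. 2. 0. 0.], a basic solution
else:
    print("chosen columns are linearly dependent; no basic solution for this choice")
```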
How many constraints need to be active to define a vertex?
n-m of the nonnegativity constraints xj >= 0 must be active, i.e., n-m variables are set to 0 (in addition to the m equality constraints Ax = b, which are always active).
What is required to uniquely define a vertex?
Every basic feasible solution corresponds to a unique vertex. Therefore, we can uniquely define a vertex by setting the non-basic variables equal to 0.
It is worth noting that a vertex can correspond to SEVERAL basic solutions (degeneracy), but a basic solution can only represent a single vertex.
Two vertices are adjacent if …
the sets of active constraints at the two points differ in one element only.
Two basic solutions are adjacent if …
They have m-1 basic variables in common
From some basic solution, how do we find the set of search directions?
The set of search directions is defined by the directions obtained when, for each of the active constraints, we release the condition that this constraint must be active and let the corresponding non-basic variable increase its value from 0.
How do we find the reduced cost of a search direction?
The reduced cost of a search direction d is c^(T)d, where c is the vector of objective coefficients.
If c^(T)d > 0, the search direction d is an ascent direction.
If c^(T)d < 0, the search direction d is a descent direction.
What does reduced cost tell us?
The reduced cost tells us how much a one-unit increase of some variable changes the objective function value.
What happens if no component of d is negative?
Then no variable decreases as we move along d, so no nonnegativity constraint limits the step length; if d is also an improving direction, the problem has an unbounded solution.
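A minimal sketch tying the last few cards together, with made-up data: from a basic feasible solution we build the search direction d for an entering non-basic variable, compute its reduced cost c^(T)d, and check whether any variable limits the step length (if none does, the direction shows the problem is unbounded). The particular basis, entering index, and problem data are assumptions for illustration:

```python
import numpy as np

# Sketch (made-up data): for min c^T x s.t. Ax = b, x >= 0, start from a basic
# feasible solution, let one non-basic variable enter (increase from 0), form
# the search direction d, compute the reduced cost c^T d, and run the step-
# length check.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])
m, n = A.shape                         # m constraints, n variables

basic = [2, 3]                         # current basis (the slack-like columns)
B = A[:, basic]
x = np.zeros(n)
x[basic] = np.linalg.solve(B, b)       # current basic feasible solution

enter = 0                              # non-basic variable chosen to increase
d = np.zeros(n)
d[enter] = 1.0                         # entering variable goes up by one unit
d[basic] = -np.linalg.solve(B, A[:, enter])   # how the basic variables respond

reduced_cost = c @ d                   # c^T d: objective change per unit step
print("d =", d, "reduced cost =", reduced_cost)   # here -3 < 0: descent direction

if np.all(d >= 0):
    # no variable decreases along d, so the step length is unlimited
    print("unbounded in this direction (if d is improving)")
else:
    # ratio test: largest step that keeps the decreasing variables >= 0
    steps = [-x[j] / d[j] for j in range(n) if d[j] < 0]
    print("maximum feasible step length t =", min(steps))   # here t = 3
```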