Introduction, LP Flashcards

1
Q

What do the constraints restrict?

A

Constraints are restrictions on the DECISION VARIABLES.

2
Q

What is the purpose of the objective function?

A

To describe the objective of the problem, which is typically economic or technical (e.g., maximize profit or minimize cost).

3
Q

What is a prerequisite for using optimization models?

A

The objective and the constraints must be describable quantitatively, using mathematical functions and relations.

4
Q

What is an assumption regarding optimization models in real life?

A

We assume that the number of variables and constraints is LARGE, so we need computing power and efficient algorithms to actually find the optimal solution.

5
Q

Define programming

A

It means planning, and refers to creating a plan for how to carry out some activity.

6
Q

What is the history of OR?

A

Operations research originates from WW2. The methodological breakthrough came in 1947 with George Dantzig's simplex method.

7
Q

What do we call short-term planning?

A

Operative, or operational

8
Q

What do we call long-term planning?

A

Strategic, tactical.

Lecturer says that strategic allows for all kinds of infrastructure changes.
Tactical allows for minor tweaks.
Operational allows for no changes.

9
Q

Define the optimization process

A

We begin with a “real problem”. It can be extremely complex, and there will likely be aspects of it that we do not take further, most likely because of how difficult they would be to add to the model.
The first phase is therefore about identifying the real problem and finding the relevant components.

NB: An essential part of this step, one that is often neglected, is to determine whether it is necessary to set up an optimization model at all. Sometimes there are alternative approaches. A key question is whether everything relevant is quantifiable.

The result of this filtering process is that we now have the simplified problem.

We now take the simplified problem and transform it into a mathematical description: decision variables, parameters, constraints, and an objective function. In this step, we might have to make further simplifications to the problem; for instance, the problem size can cause trouble in real life.

Now we have the optimization model.

The next step is to take the model, and solve it using some method.

When solved, we need to check whether the solution is correct/test the model for potential errors. This is called model validation.

The next step is to perform sensitivity analysis to better understand the solution.

Only now, we have the result.

10
Q

How do we denote an optimal solution?

A

x*

11
Q

What is the most general way of describing the optimization problem mathematically?

A

min f(x)

s.t. x in X

where X is the set of feasible solutions.
f is the objective function.
x is the vector of decision variables.

We usually, however, describe the constraints a little differently:

s.t. g_i(x) <= b_i, i = 1, …, m

to highlight the fact that the model has constraints of this shape, and not just an abstract set of feasible values. However, the first, general form is equally correct.
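
As a small illustration (a sketch of my own, not from the deck: the data and the use of scipy's linprog are assumptions), this is how such a model can be stated and solved in code:

# A tiny instance of min cx s.t. Ax <= b, x >= 0, solved with scipy.
# The numbers are invented for the example.
from scipy.optimize import linprog

c = [1, 2]                  # objective: min x1 + 2 x2
A_ub = [[-1, -1]]           # -x1 - x2 <= -3 encodes x1 + x2 >= 3
b_ub = [-3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)       # x* and z*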

12
Q

There are two types of optimization methods…

A

Exact methods

Heuristics.

Exact methods find the optimal solution.
Heuristics find solutions that are good enough, without any guarantee of optimality.

13
Q

How do we denote the optimal objective function value?

A

z*

14
Q

Define an optimal solution

A

An optimal solution is a vector of values x in X that minimizes (or maximizes) f(x).

15
Q

How do we convert between max and min?

A

Maximizing an objective is the same as minimizing its negative.

For instance, consider:

max x+y

This problem gets a better (larger) value when x and y grow large.

min -(x+y)

This problem gets a better (smaller) value when x and y grow large.

The two problems have the same optimal solutions x*, and the optimal objective values differ only in sign: z*_max = -z*_min.

16
Q

What do we mean by a halfspace?

A

A halfspace is one of the two parts into which a hyperplane divides the space. We usually use the term for the feasible side of a constraint.

17
Q

The intersection of all halfspaces is called…?

A

The feasible region, which we usually call X.

18
Q

What is noteworthy about infeasibility?

A

It is never caused by the objective function, but always by the constraints.

19
Q

A constraint that does not affect the feasible region is called …?

A

Redundant.

We can add that if a constraint affects the feasible region but does not affect the optimal solution, we call it “redundant in optimum”.

20
Q

The constraints that are satisfied with equality in the optimal solution are called…?

A

Active, or binding constraints.

21
Q

Elaborate on global optimum vs local optimum, and define the neighborhood for continuous problems

A

We say that a feasible point x^(k) is a global optimum if f(x^(k)) >= f(x) for all x in X (max problem, opposite if min).

The same applies to local optima, but we need to define the area/neighborhood in which the feasible point is a local optimum.
We do this by requiring “for all x in X with x in N(x^(k))”, where N is a function that defines a certain neighborhood.

For continuous problems, a neighborhood can be defined as

N(x^(k)) = {x : ||x - x^(k)|| < epsilon}, for some epsilon > 0
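
A small example of the distinction (invented for illustration): take f(x) = x^3 - 3x on X = [-3, 3]. The stationary point x = 1 is a local minimum, since f(1) = -2 is the best value in a small neighborhood around it; but the global minimum lies at the boundary point x = -3, where f(-3) = -18.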

22
Q

Can a set be concave?

A

No. A set is either convex or non-convex

23
Q

What is the use of a convex problem?

A

A convex problem has the property that a local optimum is also global.

24
Q

What is a search method?

A

A search method uses information about the problem or the functions in a close neighborhood of the current point to search towards an optimum. It is the classical way of finding a local optimum.

25
Q

What is a concave problem?

A

There is no such thing. A problem is either convex or non-convex.

26
Q

When is it easy for search methods to find global optimum?

A

When we can guarantee that the local optimum is in fact a global optimum as well.

27
Q

Define a convex problem

A

The problem

min f(x)

s.t. x in X

is convex if f(x) is a convex function and X is a convex set. A problem is also convex if it is a maximization problem and the function f(x) is concave and X is a convex set.
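
A small sanity-check example (invented for illustration): min f(x) = x^2 subject to x >= 1. Here f is a convex function and X = {x | x >= 1} is a convex set, so the problem is convex, and its only local minimum x* = 1 (z* = 1) is therefore also the global minimum.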

28
Q

Elaborate on the general procedure of a search method

A

A search method starts from a known feasible point, which we call a solution to the problem. It is a solution, but not necessarily the optimal solution.

The procedure then iteratively moves to new points with improving objective function values. When no better point can be found (according to the objective function), a local optimum has been found.

So, given the general problem

min f(x)
s.t. x in X,

the search method will generate a sequence of feasible points whose objective function values become successively better.

In some search methods, the objective function value may be the same between two consecutive iterations.

It is an iterative process, and to highlight that fact, we can specify the next point recursively:

x^(k+1) = x^(k) + t^(k) d^(k)

t^(k) is the step length/size.
d^(k) is the search direction.
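
As a concrete sketch (my own illustration, not the course's specific algorithm), here is a minimal search method in Python: plain steepest descent with a fixed step length, following the recursion above for an unconstrained one-dimensional min problem:

# Minimal search method: x^(k+1) = x^(k) + t^(k) d^(k),
# with d^(k) = -f'(x^(k)) (steepest descent) and a fixed step length t.
def search(f_grad, x0, t=0.1, tol=1e-6, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        d = -f_grad(x)       # descent direction
        if abs(d) < tol:     # no improving direction: local optimum found
            return x
        x = x + t * d        # take the step
    return x

# Example: min f(x) = (x - 2)^2 with f'(x) = 2(x - 2); converges to x* = 2.
print(search(lambda x: 2 * (x - 2), x0=0.0))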

29
Q

Discuss whether LP, IP, or non-linear problems are convex

A

LP is always convex: the objective function is linear, and linear functions are always both convex and concave.
Its constraints follow the same logic, so the feasible region is a convex set.

IP is always non-convex, since a set of integer points is not a convex set.

Non-linear problems can be either.

30
Q

What is the search criterion?

A

The search criterion is the search direction, d^(k).

31
Q

Give the formal definition of a convex function

A

A function f(x) is convex on the feasible region X if for all choices of points x^(1) and x^(2) in X, and 0 <= lambda <= 1, we have:

f(lambda x^(1) + (1-lambda) x^(2)) <= lambda f(x^(1)) + (1-lambda) f(x^(2))

Basically, this says that between the two selected points, the graph of f must lie on or below the straight line (chord) joining the two function values.
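
A quick numerical illustration of the definition (a sketch; random sampling can refute convexity but never prove it):

# Sample the inequality f(l x1 + (1-l) x2) <= l f(x1) + (1-l) f(x2)
# for random points and lambdas; any violation shows f is not convex.
import random

def seems_convex(f, lo, hi, trials=10000):
    for _ in range(trials):
        x1, x2 = random.uniform(lo, hi), random.uniform(lo, hi)
        lam = random.random()
        lhs = f(lam * x1 + (1 - lam) * x2)
        rhs = lam * f(x1) + (1 - lam) * f(x2)
        if lhs > rhs + 1e-9:
            return False
    return True

print(seems_convex(lambda x: x * x, -10, 10))   # True: x^2 is convex
print(seems_convex(lambda x: x ** 3, -10, 10))  # False: x^3 is not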

32
Q

Define a convex set

A

A set X in R^n is a convex set if for any pair of points x^(1) and x^(2) in X and 0 <= lambda <= 1, we have:

x = lambda x^(1) + (1-lambda) x^(2) in X

Basically, it means that the line segment between any two points in X must lie completely inside X as well.

33
Q

Discuss primal vs dual methods

A

Primal methods are based on the principle that in each iteration we have a feasible solution, and that we iteratively generate solutions with better and better objective function values until we stop in a local (possibly global) optimum.

Dual methods don't work like that. Their principle is to move between infeasible points in a systematic way and only reach a feasible solution in the last iteration. When they do, we know that this solution is a local (possibly global) optimum.

34
Q

Define a polyhedron

A

A polyhedron is a shape that has straight edges and sharp corners.

In LP, we typically use polyhedrons to describe feasible region.

The intersection of halfspaces (bounded by hyperplanes) is called a polyhedron. We can mathematically define a polyhedron like this:

P = {x | ∑_j a_ij x_j <= b_i, i = 1, 2, …, m}

(one inequality per constraint, so i runs over the m constraints).

35
Q

Define a hyperplane

A

A hyperplane is a set H that consists of all points x in R^n that satisfy an equation ∑_j a_j x_j = k. In R^2 we call a hyperplane a line. In R^3 we call it a plane.

We further say that a hyperplane divides the space into halfspaces.
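
For example (an invented instance): in R^2, the hyperplane 2x1 + 3x2 = 6 is a line, and it divides the plane into the two halfspaces 2x1 + 3x2 <= 6 and 2x1 + 3x2 >= 6.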

36
Q

Define a convex combination of a set of points

A

A convex combination of the points x^(1), …, x^(n) is a new point x such that

x = ∑_j lambda_j x^(j), j = 1, …, n,
where ∑_j lambda_j = 1,
and lambda_j >= 0.

In other words, a convex combination generalizes the two-point definition used for convex sets to several points. Each point pulls the result in its direction with a weight.

The set of all convex combinations of the points is itself a convex set.
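
A minimal numerical sketch (numpy assumed; the points and weights are invented):

# A convex combination of three points in R^2: the weights are
# non-negative and sum to 1, so x lies inside the triangle they span.
import numpy as np

points = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
lam = np.array([0.5, 0.25, 0.25])   # lam_j >= 0, sum(lam) == 1
x = lam @ points                    # x = sum_j lam_j x^(j)
print(x)                            # [1. 1.] -- inside the triangle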

37
Q

Define an extreme point in a set X

A

The point x^(k) is an extreme point of X if it is not possible to express x^(k) as a strict (0 < lambda < 1) convex combination of two distinct points in X.

38
Q

What is the fundamental theorem of linear programming?

A

Assume that the feasible region X of an LP problem is bounded and non-empty. Then the minimum value of the objective function cx is obtained in (at least) one extreme point.

PROOF:

Suppose that the opposite is true: there is a feasible point x^(hat) that is strictly better (smaller) than every extreme point x^(k).
Since x^(hat) is a convex combination of the extreme points of the feasible region, we can write:

x^(hat) = ∑_k lambda_k x^(k), where ∑_k lambda_k = 1 and lambda_k >= 0.

Then, since cx^(k) > cx^(hat) for every k, we have:

cx^(hat) = c ∑_k lambda_k x^(k) = ∑_k lambda_k cx^(k) > ∑_k lambda_k cx^(hat) = cx^(hat) ∑_k lambda_k = cx^(hat)

which is a contradiction.

39
Q

What could be the interpretation of a positive slack variable?

A

The amount of the RHS resource that is not yet utilized.

40
Q

A negative slack variable means…?

A

The current point is not feasible (the constraint is violated).

41
Q

Say we have a constraint like this:

x <= 0.

How do we get it to standard form?

A

We add a new variable, y, and say that

y = -x

with y >= 0, and substitute -y for x throughout the model.
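
For instance (a made-up mini-example): a model containing x with x <= 0 is rewritten by substituting x = -y everywhere, giving a model in y with y >= 0; a term 3x in the objective becomes -3y.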

42
Q

Why do we care about standard form?

A

It is advantageous to have standard form when doing mathematics on the problem; for instance, the simplex method works on equality constraints with non-negative variables.

43
Q

How do we create an equality constraint from a >= constraint?

A

We subtract a non-negative surplus variable.
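
For example: 2x1 + x2 >= 4 becomes 2x1 + x2 - s = 4 with the surplus variable s >= 0.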

44
Q

A vertex is characterized by lying on an intersection of constraints. But how many constraints are required to define a vertex uniquely?

A

n-m.

n is the number of variables in standard form.

m is the number of constraints.

That is, we must have n-m variables equal to 0 (nonbasic) to uniquely define a vertex.

45
Q

Define a basic solution to the system of equations Ax = b

A

A basic solution to the system of equations Ax=b is obtained if n-m variables are set to 0 and the remaining variables get their unique values when we solve the remaining (mxm) system of equations.

The variables that are set to 0 are called nonbasic.

The variables that are among the remaining m variables are called basic variables.
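
A small sketch of this procedure in Python (numpy assumed; the system below is invented for illustration):

# Enumerate the basic solutions of Ax = b with m = 2 equations and
# n = 4 variables: set n - m variables to 0, solve the m x m system,
# and skip bases whose columns are not linearly independent.
import itertools
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
m, n = A.shape

for basic in itertools.combinations(range(n), m):
    B = A[:, list(basic)]                   # columns of the basic variables
    if abs(np.linalg.det(B)) < 1e-12:
        continue                            # singular: no basic solution
    x = np.zeros(n)
    x[list(basic)] = np.linalg.solve(B, b)  # nonbasic variables stay 0
    print(basic, x)                         # feasible if all entries >= 0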

46
Q

When does a basic solution exist?

A

A basic solution exists if the columns in the remaining (mxm) matrix are linearly independent.

47
Q

Every basic feasible solution corresponds to a …

A

… unique vertex to the feasible region.

However, a single vertex can correspond to multiple basic feasible solutions.

48
Q

What is the relation between BFS and vertices?

A

Each BFS corresponds to one and only one vertex, while a vertex can correspond to multiple BFSs (in the degenerate case).

49
Q

What do we call a basic solution in which one or more basic variables have the value zero?

A

A degenerate basic solution.

50
Q

For a general LP problem, how many basic solutions are there?

A

This is a combinatorial question. The number of ways to choose which m variables are basic, and hence an upper bound on the number of basic solutions, is:

nCm = n! / (m! (n-m)!)
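
For example (invented numbers): with n = 5 variables in standard form and m = 3 constraints, there are at most 5! / (3! 2!) = 10 basic solutions.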

51
Q

The simplex method searches for …

A

the basis that gives the best value according to the given objective function.

52
Q

Define adjacency of basic solutions

A

Two basic solutions are adjacent if they have m-1 basic variables in common.

Except for degeneracy, where things are a little different.

53
Q

What do we mean by a change of basis?

A

A change of basis refers to swapping one basic variable with a non-basic variable, exchanging their statuses.

The new basic variable is called the entering basic variable, and the old basic variable is called the leaving basic variable.

54
Q

From a given BFS, how many search directions are there?

A

We have n-m non-basic variables. These represent search directions. Therefore there are n-m search directions.

55
Q

Define reduced cost mathematically

A

We can calculate the reduced cost at the point x^(k) when we move in direction d as:

c_bar = grad f(x^(k)) d = cd

(for an LP, f(x) = cx, so the gradient is simply c).

If c_bar > 0, the search direction d is an ascent direction.

If c_bar < 0, the search direction d is a descent direction.

The reduced cost tells us how much the objective function changes per unit increase in the entering variable.

56
Q

What is the downside of using fewer indices to describe (mathematically) our decision variables?

A

Although the number of decision variables would remain the same, it becomes more difficult to formulate the constraints.

More indices on the decision variables also make the model more generalizable.
