Theory-Lesson 4 Flashcards

1
Q

Sequential Linear Programming

A

Its main idea is to use linear approximations of the non-linear functions and apply standard linear programming techniques (a minimal sketch follows below).
-The process is repeated successively as the optimization proceeds.
-Better results can be obtained by carrying the Taylor expansion up to second order.
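A minimal Python sketch of the SLP idea using SciPy's linprog; the objective f, constraint g, move limit, and starting point are illustrative assumptions, not from the card.

import numpy as np
from scipy.optimize import linprog

def f(x):      return x[0]**2 + x[1]**2            # example non-linear objective (assumed)
def grad_f(x): return np.array([2*x[0], 2*x[1]])
def g(x):      return 2.0 - x[0] - x[1]             # example constraint g(x) <= 0 (assumed)
def grad_g(x): return np.array([-1.0, -1.0])

x = np.array([2.0, 2.0])   # current iterate
trust = 0.5                # move limit keeping the linear approximation valid

for _ in range(20):
    # Linearize f and g around x (first-order Taylor expansion) and solve the LP
    # in the step d:  minimize grad_f(x)^T d   s.t.  g(x) + grad_g(x)^T d <= 0,  |d_i| <= trust
    res = linprog(c=grad_f(x),
                  A_ub=grad_g(x).reshape(1, -1),
                  b_ub=np.array([-g(x)]),
                  bounds=[(-trust, trust)] * len(x))
    if not res.success or np.linalg.norm(res.x) < 1e-8:
        break
    x = x + res.x          # accept the step and repeat the process
    trust *= 0.5           # shrink the move limit so successive linearizations stay accurate
print(x, f(x))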

2
Q

Advantage of primal methods

A

1) Since the points generated in the search are feasible, if the process is terminated before the end, the current point is still feasible.
2) It can often be guaranteed that if they generate a convergent sequence, the limit point of that sequence is at least a local constrained minimum.
3) Most primal methods do not rely on a special problem structure, so they are applicable to general non-linear functions.
4) Their convergence rates are competitive with other methods, and for linear constraints in particular they are often the most efficient.

3
Q

Disadvantages of Primal methods

A

1) They require a procedure to obtain an initial feasible point.
2) They struggle with non-linear constraints and have difficulty remaining inside the feasible domain.
It may not be possible to identify a feasible moving direction because of the curvature of a non-linear inequality constraint.
3) Problems with inequality constraints may fail to converge.

4
Q

Lagrange multipliers

A

They aim to transform a constrained problem into an unconstrained minimization problem, so that the procedures for unconstrained optimization can be applied.
L(x, λ) = f(x) - λ·g(x)
We can compute the first- and second-order derivatives of L in order to find λ (the second-order conditions are important to avoid saddle points); a worked sketch follows below.

-The method can be used for any number of equality constraints.

-Inequality constraints must first be transformed into equalities.
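A minimal worked sketch in Python/SymPy, assuming the illustrative problem: minimize f(x, y) = x^2 + y^2 subject to the equality constraint g(x, y) = x + y - 2 = 0 (the problem itself is an assumption, not from the card).

import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x**2 + y**2            # example objective (assumed)
g = x + y - 2              # example equality constraint g = 0 (assumed)

# Lagrangian L(x, lambda) = f(x) - lambda * g(x)
L = f - lam * g

# First-order conditions: every partial derivative of L must vanish
solutions = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(solutions)           # [{x: 1, y: 1, lambda: 2}]

# A second-order check (e.g. the bordered Hessian) would confirm this
# stationary point is a minimum rather than a saddle point.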

5
Q

Sequential Quadratic Programming

A

-Considered among the most effective processes for non-linear programming; great efficiency and accuracy.
-It mimics Newton's method for constrained optimization, just as is done for unconstrained optimization.
-At each iteration, an approximation of the Hessian of the Lagrangian function, H, is made using a quasi-Newton updating method.
-H is used to generate a QP subproblem whose solution forms a search direction for a line search procedure.
-The principal idea is the formulation of a QP subproblem based on a quadratic approximation of the Lagrangian function.

IN OTHER WORDS, WE TRANSFORM THE NON-LINEAR PROBLEM INTO A QUADRATIC PROBLEM BY APPROXIMATING THE HESSIAN AT EACH ITERATION AND CREATING A NEW QP AT EACH ITERATION IN ORDER TO FIND THE NEW SEARCH DIRECTION.

-fmincon in MATLAB (a usage sketch follows below)
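A minimal usage sketch in Python rather than MATLAB, using SciPy's SLSQP solver (an SQP implementation) in place of fmincon; the objective, constraints, and starting point are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2        # example non-linear objective (assumed)

constraints = [
    {'type': 'ineq', 'fun': lambda x: x[0] - 2*x[1] + 2},   # SciPy convention: fun(x) >= 0
    {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 3},     # example equality constraint
]

# The solver builds and solves a QP subproblem at each iteration, using a
# quasi-Newton approximation of the Hessian of the Lagrangian to get the search direction.
result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method='SLSQP', constraints=constraints)
print(result.x, result.fun)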
