Theory-Lesson 2,3 Flashcards

1
Q

Search procedure

A

1) Selection of an initial design in an n-dimensional space of DVs.
2) Evaluation of the OF at the given design point.
3) Comparison of this value of the OF with those of the previous designs.
4) Selection, in a rational way, of a new trajectory, and examination of the value of the OF at a point along this trajectory.

X_k = X_{k-1} + a * S_k

This is the formula that describes the procedure above: a is the amplitude (step size) along the new trajectory at which a point is selected (line search) and the value of the OF is computed, and S_k is the direction of the trajectory.
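
To make the loop concrete, here is a minimal sketch in Python; the quadratic test OF, the fixed amplitude a, and the gradient-based direction rule are assumptions chosen for illustration, not part of the lesson.

```python
import numpy as np

def search(f, direction, x0, a=0.1, max_iter=100, tol=1e-8):
    """Generic search procedure: X_k = X_{k-1} + a * S_k."""
    x = np.asarray(x0, dtype=float)
    f_prev = f(x)                        # evaluate the OF at the initial design
    for _ in range(max_iter):
        s = direction(x)                 # S_k: rule that selects the new trajectory
        x_new = x + a * s                # point along the trajectory (amplitude a)
        f_new = f(x_new)                 # evaluate the OF there
        if abs(f_prev - f_new) < tol:    # compare with the previous design
            break
        x, f_prev = x_new, f_new
    return x

# Example: minimize f(x) = ||x||^2 using the negative gradient as S_k
f = lambda x: float(x @ x)
direction = lambda x: -2.0 * x
print(search(f, direction, x0=[3.0, -4.0]))  # converges toward (0, 0)
```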

2
Q

What are the characteristics of a good algorithm?

A

1) Robust: must be able to converge no matter the starting point.
2) General: should not impose restrictions on the model's constraints and OFs.
3) Accurate: must be able to converge with accuracy to the exact optimal point.
4) Easy to use: should not have problem-dependent tuning parameters.
5) Efficient: keeps the number of analyses to a minimum (faster iterations or fewer iterations).

3
Q

Zero-, first- and second-order derivatives

A

Sometimes we must choose between methods that require the calculation of first or second derivatives. The methods that use second derivatives are often more precise, but the gain does not always compensate for the additional computational power required. In some cases we may not be able to calculate the derivatives at all.
Approximating the second-order derivatives instead of computing them exactly is a classic way to avoid the extra computational power while still aiming for accurate results.
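
When analytic derivatives are unavailable, they can be approximated by finite differences. A minimal sketch, where the central-difference formula and the step h are standard choices but the specific helper is illustrative:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x with central differences."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)  # central difference in DV i
    return g

# Example: the gradient of f(x) = x0^2 + 3*x1 at (1, 2) is (2, 3)
print(fd_gradient(lambda x: x[0]**2 + 3*x[1], [1.0, 2.0]))
```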

4
Q

Simplex method

A

- It's one of the simplest search methods, but not a very efficient one.
- No derivatives are needed.
- In n dimensions, the simplex consists of n+1 initial points.
- We evaluate the OF at each point of the simplex. We reject the worst one (for minimization, the point with the largest OF value) and move in the direction opposite to the rejected point. We repeat this process until we converge to a minimum.
- The minimum number of trials is n+1.
- We should never return to a rejected point.
- fminsearch in MATLAB.
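
SciPy's Nelder-Mead method plays the same role as MATLAB's fminsearch. A minimal sketch, with the Rosenbrock function assumed as a test OF:

```python
from scipy.optimize import minimize

# Rosenbrock test function; the minimum is at (1, 1)
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Nelder-Mead: derivative-free simplex search, the SciPy analogue of fminsearch
res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead")
print(res.x)  # approximately [1. 1.]
```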

5
Q

Basic Descent Methods

A

-Used for solving unconstrained minimization problems.
-They offer the simplest and most direct alternatives for obtaining solutions.
Procedure:
1) Choose an initial point.
2) Determine, according to a fixed rule, the direction of movement.
3) Move in that direction to a minimum of the OF on that line.
4) At the new point, determine a new direction and repeat the process.

- THE BASIC DIFFERENCE BETWEEN THESE METHODS (steepest descent, Newton's method) is the rule by which successive directions of movement are selected.
- The process of determining the minimum point on a line is called line search.
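
Step 3 is the line search. A minimal sketch of one common variant (backtracking with the Armijo sufficient-decrease condition); the constants rho = 0.5 and c = 1e-4 are conventional choices, not from the lesson:

```python
import numpy as np

def backtracking_line_search(f, grad, x, s, a=1.0, rho=0.5, c=1e-4):
    """Shrink the step a until f decreases enough along direction s."""
    fx, gx = f(x), grad(x)
    while f(x + a * s) > fx + c * a * (gx @ s):  # sufficient-decrease test
        a *= rho                                 # shrink the step
    return a

# Example on f(x) = ||x||^2, searching along the negative gradient
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
s = -grad(x)
print(backtracking_line_search(f, grad, x, s))  # 0.5, the exact line minimum here
```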

6
Q

Steepest Descent method

A

- Simple but not very efficient (also slow) method which uses the first-order derivatives of the OF.
- Aims to minimize a function of several variables.

X_{k+1} = X_k - a_k * ∇f_k

a_k -> amplitude (step size) of the line
-∇f_k -> direction of the new line
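
A minimal sketch of steepest descent in Python; for simplicity the amplitude a_k is held fixed (in practice it comes from a line search), and the quadratic test OF is an assumption:

```python
import numpy as np

def steepest_descent(grad, x0, a=0.1, max_iter=1000, tol=1e-8):
    """X_{k+1} = X_k - a_k * grad_f(X_k), with a fixed amplitude a_k = a."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop when the gradient vanishes
            break
        x = x - a * g                # move against the gradient
    return x

# Example: the minimum of f(x) = x0^2 + 5*x1^2 is at the origin
grad = lambda x: np.array([2.0 * x[0], 10.0 * x[1]])
print(steepest_descent(grad, [4.0, 1.0]))
```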

7
Q

Newton’s method

A

Newton's method uses the Taylor expansion up to the second derivative. It requires the calculation of the gradient (Jacobian) and the Hessian matrix of the OF.

X_{k+1} = X_k - H^{-1} * ∇f

H^{-1} sets the amplitude of the search line
-∇f is the direction of the line.

It converges well if started near the optimal solution.

It requires a lot of computational power to calculate the Hessian matrix (second-order derivatives), and this cost is usually not compensated.

A modification of this method is the quasi-Newton method, which uses an approximation of the Hessian matrix. Its efficiency is close to Newton's, with a great reduction in computational power.

X_{k+1} = X_k - a * H^{-1} * ∇f

a -> search parameter
The quasi-Newton method builds up curvature information by observing the behavior of the OF and its first-order gradient.
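
A minimal sketch of a damped Newton step in Python; the quadratic test OF and the use of numpy.linalg.solve instead of forming H^{-1} explicitly are assumptions for the example:

```python
import numpy as np

def newton(grad, hess, x0, a=1.0, max_iter=50, tol=1e-10):
    """X_{k+1} = X_k - a * H^{-1} * grad_f(X_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - a * np.linalg.solve(hess(x), g)  # Newton step via a linear solve
    return x

# Example: on the quadratic f(x) = x0^2 + 5*x1^2 a full step (a = 1) converges at once
grad = lambda x: np.array([2.0 * x[0], 10.0 * x[1]])
hess = lambda x: np.diag([2.0, 10.0])
print(newton(grad, hess, [4.0, 1.0]))
```

For the quasi-Newton variant, scipy.optimize.minimize(..., method="BFGS") builds up the Hessian approximation automatically from successive gradients.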

8
Q

Penalty methods

A

Used for constrained problems.
The main idea is to add a penalty term to the OF when it violates a constraint. That way, the solution is forced towards the feasible domain.

T(x) = f(x) + r_k * P(x)

T(x) -> pseudo OF
r_k -> scalar penalty
P(x) -> function that imposes the penalties.

Two methods for imposing penalties:
1) Sequential penalty transformations
2) Exact penalty transformations

The first category can be further divided into exterior penalties
Φ(x, r) = f(x) + r * Σ (g_j)^z
r -> sequentially increasing penalty term, z -> non-negative constant
(for both inequality and equality constraints; for inequalities only the violated part is penalized)

and interior penalties
Φ(x, r) = f(x) - r * Σ log(g_j)
(only for inequality constraints, because otherwise the log cannot be defined)

In this method, the scalar penalty term is sequentially increased as long as the function tends to violate the constraints.

When there are many constraints, this method is not very accurate.
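
A minimal sketch of the sequential exterior penalty idea; the test problem (minimize x0^2 + x1^2 subject to x0 + x1 = 1), the schedule r_k = 10^k, and Nelder-Mead as the inner unconstrained solver are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2     # objective
h = lambda x: x[0] + x[1] - 1.0     # equality constraint h(x) = 0

x = np.array([0.0, 0.0])
for k in range(6):
    r = 10.0**k                                   # sequentially increasing penalty
    pseudo = lambda x, r=r: f(x) + r * h(x)**2    # T(x) = f(x) + r_k * P(x)
    x = minimize(pseudo, x, method="Nelder-Mead").x
print(x)  # approaches the constrained optimum (0.5, 0.5) as r grows
```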

9
Q

Number of equality constraints.

A

The number of independent equality constraints must be less than, or at most equal to, the number of design variables (p <= n).

When p > n we have an overdetermined system (some constraints must be redundant, i.e. linearly dependent on each other, for a solution to exist).

10
Q

Number of inequality constraints.

A

No restriction on the number of inequality constraints.

An active constraint is one which, if removed, would alter the location of the optimum.

11
Q

Scaling

A

- If the OF is scaled by multiplying it with a positive constant, the optimum design does not change. However, the optimum OF value does change.
- Any constant can be added to the cost function without affecting the optimum design.
- The inequality constraints can be scaled by any positive constant and the equalities by any constant.
- All the foregoing transformations may affect the performance of the numerical algorithms used to solve the optimization problem.
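
A small numeric check of the first two invariances, assuming a one-dimensional quadratic OF:

```python
from scipy.optimize import minimize_scalar

f = lambda x: (x - 3.0)**2 + 1.0

for g in (f,
          lambda x: 10.0 * f(x),      # positive scaling: same minimizer, scaled value
          lambda x: f(x) + 100.0):    # added constant: same minimizer, shifted value
    res = minimize_scalar(g)
    print(res.x, res.fun)             # x stays at 3.0; only the OF value changes
```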

12
Q

What happens when the number of DVs is the same as the equality constraints?

A

In this case, we don't need to run an optimization analysis: solving the system of equality constraints gives the only feasible point(s), which are therefore the optimal points of the problem.
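
A minimal sketch, assuming two equality constraints in two DVs; scipy.optimize.fsolve finds the feasible (hence optimal) point directly:

```python
from scipy.optimize import fsolve

# Two equality constraints in two DVs: the feasible set is just their root
def constraints(x):
    return [x[0] + x[1] - 3.0,   # h1(x) = 0
            x[0] - x[1] - 1.0]   # h2(x) = 0

print(fsolve(constraints, x0=[0.0, 0.0]))  # [2. 1.] - no optimization run needed
```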
