Mueller 21/22 Flashcards
Give a brief outline of the SIMP method
SIMP (Solid Isotropic Material with Penalisation) introduces a material density design variable for each element, varying continuously from 0 to 1, where 0 is void (no material) and 1 is solid. It calculates the derivative (sensitivity) of the objective function (e.g. weight) w.r.t. each density, then reduces the density of the elements with the lowest derivative.
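As an illustration only (not a full SIMP implementation: there is no finite-element solve or penalisation exponent shown, and all numbers are made up), a sketch of the density-reduction idea described above:

```python
import numpy as np

def simp_density_update(rho, sensitivities, frac=0.05, rho_min=1e-3):
    """Illustrative SIMP-style update: reduce the density of the elements
    whose objective sensitivity dF/drho is lowest (least useful material)."""
    n_remove = max(1, int(frac * rho.size))
    idx = np.argsort(sensitivities)[:n_remove]       # elements with lowest dF/drho
    rho = rho.copy()
    rho[idx] = np.maximum(rho[idx] * 0.5, rho_min)   # push them towards void
    return rho

# Toy example: 10 elements, all solid, with made-up sensitivities
rho = np.ones(10)
dF_drho = np.array([0.9, 0.1, 0.8, 0.05, 0.7, 0.6, 0.02, 0.5, 0.4, 0.3])
print(simp_density_update(rho, dF_drho, frac=0.3))
```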
What is an Adjoint Solution?
One which directly expresses the sensitivities of a single cost function w.r.t. many design variables. (Think of the car which showed where to push in/push out for better aerodynamic performance.)
State the Optimality conditions for optimization
If F'(x) = 0 and F''(x) > 0 are satisfied at x = x*, then F(x) > F(x*) for all other x near x*,
so x* is a (local) minimum.
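A quick worked check of these conditions on an assumed example, F(x) = (x - 2)^2:

```latex
F'(x) = 2(x-2) = 0 \;\Rightarrow\; x^* = 2, \qquad
F''(x^*) = 2 > 0 \;\Rightarrow\; x^* = 2 \text{ is a minimum.}
```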
State the steps for the Bisection Method
For an interval [a, b] known to contain the min:
- Evaluate F at the midpoint of the bracket and at a trial point in each half.
- Discard the half of the bracket that cannot contain the min (the side with the larger function values).
- Repeat, halving the bracket each iteration, until its width is below the tolerance eta.
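A minimal gradient-free interval-halving sketch of the above, assuming F is unimodal on [a, b]; the exact placement of the trial points may differ from the lecture version:

```python
def interval_halving_min(F, a, b, eta=1e-6):
    """Gradient-free bisection (interval halving) for a unimodal F on [a, b]."""
    xm = 0.5 * (a + b)
    Fm = F(xm)
    while (b - a) > eta:
        x1 = a + 0.25 * (b - a)   # trial point in the left half
        x2 = b - 0.25 * (b - a)   # trial point in the right half
        F1, F2 = F(x1), F(x2)
        if F1 < Fm:               # min lies in [a, xm]
            b, xm, Fm = xm, x1, F1
        elif F2 < Fm:             # min lies in [xm, b]
            a, xm, Fm = xm, x2, F2
        else:                     # min lies in [x1, x2]
            a, b = x1, x2
    return xm

# Example: minimum of (x - 2)^2 on [0, 5]
print(interval_halving_min(lambda x: (x - 2.0)**2, 0.0, 5.0))
```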
List the key properties of the Bisection Method
- User-defined interval which must contain the min
- Will find a min, but not necessarily the global one
- Convergence is slow and depends on the widths of the initial and desired brackets
- N = [log(xb - xa) - log(eta)] / log(2) iterations to reach bracket width eta (worked example after this list)
- Gradient free
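A worked example of the iteration-count formula above, using assumed numbers (initial bracket width 1, target bracket eta = 10^-3):

```latex
N = \frac{\log(x_b - x_a) - \log(\eta)}{\log 2}
  = \frac{\log 1 - \log 10^{-3}}{\log 2} \approx 9.97
  \;\Rightarrow\; \text{about 10 halvings.}
```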
Describe the Secant Method and list the key steps
A gradient-based optimization technique which linearly interpolates F'(x) between the two points of a bracket (although extrapolation can be used under certain conditions), using values of x and F'(x), to estimate where F'(x) = 0.
- Set x1 = a and x2 = b (Bracket)
- Compute F'1 = F'(x1) and F'2 = F'(x2)
- Set k=2
- Use the linear interpolation formula to find x_k+1: x_k+1 = x_k - F'_k (x_k - x_k-1) / (F'_k - F'_k-1)
- Compute F'_k+1 = F'(x_k+1)
- Set k=k+1
- Repeat until convergence
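A minimal sketch of these steps in code, assuming F'(x) is available and using the simple chronological update (the example function and starting bracket are made up):

```python
def secant_min(dF, x1, x2, tol=1e-8, max_iter=50):
    """Secant method on F'(x): drives F' to zero to find a stationary point."""
    F1, F2 = dF(x1), dF(x2)
    for _ in range(max_iter):
        # Linear interpolation of F' between the two most recent points
        x3 = x2 - F2 * (x2 - x1) / (F2 - F1)
        if abs(x3 - x2) < tol:
            return x3
        # Chronological update: discard the oldest point
        x1, F1 = x2, F2
        x2, F2 = x3, dF(x3)
    return x2

# Example: F(x) = (x - 2)^2, so F'(x) = 2(x - 2); stationary point at x = 2
print(secant_min(lambda x: 2.0 * (x - 2.0), 0.0, 5.0))
```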
State the conditions for extrapolation for the Secant Method
If both gradients have the same sign, e.g. F'1 > F'2 > 0, the zero of F' lies outside the bracket [x1, x2], so the linear fit must be extrapolated beyond the bracket rather than interpolated within it.
State the key properties of the Secant Method
- Needs gradients
- Only needs first derivatives
- May converge to a max
- Faster than Bisection Method
- Flexibility in choosing xk
- Can be generalised to multi-variate problems.
State the three methods of choosing xk in the Secant method and briefly describe them
Chronological - xk becomes xk-1, xk+1 becomes xk. Simple and quick.
Smallest gradient - The two points whose F'(x) have the smallest absolute values are used as the bracketing points, as they are, in theory, closest to the min. Faster convergence.
Bracketed - The above may converge to a max; by choosing two points whose F'(x) have opposite signs as the bracket, we make sure a min is found. Slower, but no risk of finding a max.
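A small sketch of the bracketed choice described above, assuming each point is stored with its gradient; the helper name is illustrative, not from the notes:

```python
def bracketed_choice(xa, ga, xb, gb, x_new, g_new):
    """'Bracketed' point selection: keep the pair whose gradients have opposite
    signs, so the zero of F' (a minimum) stays inside the bracket."""
    if ga * g_new < 0:        # new point and xa still bracket the zero of F'
        return xa, ga, x_new, g_new
    else:                     # otherwise the zero lies between x_new and xb
        return x_new, g_new, xb, gb
```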
List the key differences in origin of the Bisection, Secant, and Newton Methods
Bisection - Direct comparison of function values at sampled points, repeatedly narrowing the interval towards the smallest values
Secant - Uses linear interpolation of the gradient to find the zero of the gradient, i.e. the min
Newton - Uses a Taylor expansion of the function to approximate the zero of the gradient, using first and second derivatives.
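For comparison with the other two methods, a minimal sketch of Newton's method for 1-D minimisation, x_k+1 = x_k - F'(x_k)/F''(x_k), on an assumed example function:

```python
def newton_min(dF, d2F, x0, tol=1e-10, max_iter=50):
    """Newton's method for 1-D minimisation: x_{k+1} = x_k - F'(x_k)/F''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = -dF(x) / d2F(x)
        x += step
        if abs(step) < tol:
            break
    return x

# Example: F(x) = x^4 - 3x^2, F'(x) = 4x^3 - 6x, F''(x) = 12x^2 - 6
print(newton_min(lambda x: 4*x**3 - 6*x, lambda x: 12*x**2 - 6, x0=1.0))
```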
What is the process for safeguarding the Bisection Method? Explain.
From the initial point, determine the downhill (negative gradient) direction and march along it in steps s until the function value starts to increase. This selects an interval containing a min, and if the function is unimodal within it, there is only one min.
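A rough sketch of this safeguarding idea, assuming a scalar F and a fixed step s; here the downhill direction is picked by comparing function values rather than evaluating the gradient:

```python
def bracket_minimum(F, x0, s=0.1):
    """March downhill from x0 in steps s until F starts to increase,
    returning an interval [lo, hi] that contains a minimum."""
    direction = 1.0 if F(x0 + s) < F(x0) else -1.0   # downhill direction
    a, x = x0, x0 + direction * s
    while F(x + direction * s) < F(x):               # keep going while F decreases
        a, x = x, x + direction * s
    lo, hi = sorted((a, x + direction * s))          # min lies between these
    return lo, hi

# Example: F(x) = (x - 3)^2 starting from x0 = 0
print(bracket_minimum(lambda x: (x - 3.0)**2, 0.0, s=0.5))
```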
What is the process for safeguarding Newton’s Method? Explain.
If F''(x) < 0 then we will tend to a max.
If F''(x) = 0 then we will divide by zero.
If either of the above holds, use deltax = -F'(xk) instead (a steepest-descent step).
To ensure the next step s = alpha*deltax stays within the interval [a, b]:
If deltax < 0, alpha = min{1, (a - xk)/deltax}
If deltax > 0, alpha = min{1, (b - xk)/deltax}
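A sketch of one safeguarded Newton step combining the rules above, assuming deltax = -F'(xk) is the intended fallback when F''(xk) <= 0; the example function and interval are made up:

```python
def safeguarded_newton_step(dF, d2F, xk, a, b):
    """One safeguarded Newton step restricted to the interval [a, b]."""
    g, h = dF(xk), d2F(xk)
    if h > 0:
        dx = -g / h          # standard Newton step towards a minimum
    else:
        dx = -g              # F'' <= 0: fall back to a negative-gradient step
    # Scale the step so the new point stays inside [a, b]
    if dx < 0:
        alpha = min(1.0, (a - xk) / dx)
    elif dx > 0:
        alpha = min(1.0, (b - xk) / dx)
    else:
        alpha = 1.0          # dx == 0: already at a stationary point
    return xk + alpha * dx

# Example: F(x) = x^4 - 3x^2 on [0.5, 2], where F''(0.6) < 0
x = 0.6
for _ in range(10):
    x = safeguarded_newton_step(lambda t: 4*t**3 - 6*t,
                                lambda t: 12*t**2 - 6, x, 0.5, 2.0)
print(x)   # converges towards the minimum near x = 1.2247
```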
What is the process for safeguarding the Secant Method? Explain.
Use a bracketed interval. This ensures a local max is not found.
State a Taylor Expansion for two variables
F(x+deltax, y+deltay) ≈ F + p^T g + (1/2) p^T H p   (to second order)
where H is the Hessian (matrix of second derivatives)
g = gradient vector = [Fx Fy]^T
p = step vector = [deltax, deltay]^T
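A small numerical check of this expansion, using an assumed quadratic F(x, y) = x^2 + 2y^2 (for which the second-order expansion is exact):

```python
import numpy as np

def quadratic_model(F0, g, H, p):
    """Two-variable Taylor expansion: F(x+p) ~= F + p^T g + 0.5 p^T H p."""
    return F0 + p @ g + 0.5 * p @ H @ p

# F(x, y) = x^2 + 2y^2 at (1, 1): F = 3, g = [2x, 4y], H = [[2, 0], [0, 4]]
g = np.array([2.0, 4.0])
H = np.array([[2.0, 0.0], [0.0, 4.0]])
p = np.array([0.1, -0.1])
print(quadratic_model(3.0, g, H, p))   # 2.83, matching F(1.1, 0.9) exactly
```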
State the multivariate optimality conditions
For F(x+deltax, y+deltay) ≈ F + p^T g + (1/2) p^T H p, the point is a minimum if:
g = 0 (so p^T g = 0 for every step p)
H is positive definite (p^T H p > 0 for all p ≠ 0)
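A minimal sketch of checking these conditions numerically at a candidate point, with an assumed gradient and Hessian:

```python
import numpy as np

# Hypothetical values of the gradient and Hessian evaluated at a candidate x*
g = np.array([0.0, 0.0])                  # gradient must vanish
H = np.array([[2.0, 0.0], [0.0, 4.0]])    # Hessian must be positive definite
is_min = np.allclose(g, 0.0) and np.all(np.linalg.eigvalsh(H) > 0)
print(is_min)   # True: this x* satisfies the multivariate optimality conditions
```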