Toropov Flashcards

1
Q

List the main steps in topology optimization

A

1) Define the design space
2) Apply the loads
3) Define the fixed points
4) Perform the optimization
5) Interpret results

2
Q

Briefly describe the Method of Feasible Directions (MFD)

A

This gradient-based method first identifies two zones at the starting point: the feasible zone and the usable zone. The feasible zone is the region that satisfies the constraints; the usable zone is the region in which the objective function decreases. A direction vector, S, is created in the usable-feasible direction, steps are taken along it, and the process is repeated.

3
Q

Briefly describe the Sequential Quadratic Programming method (SQP)

A

1) Create a quadratic approximation of the Lagrangian Function
2) Create linear approximations to the constraints
3) Solve the quadratic problem to find the search direction
4) Perform 1D search avoiding the constraints

4
Q

What are the advantages of SQP

A

1) Strong theoretical basis in KKT optimality conditions
2) Considered best by theoreticians
3) It has been further developed for specific application to engineering problems
4) If the gradients are accurate, the solution will be very accurate and very fast
5) As the number of design variables increases, the number of iterations does not dramatically increase

5
Q

What are the disadvantages of SQP

A

1) Converges only to the nearest local optimum, so it may need restarting from a different point to find a better optimum
2) Deals with continuous problems only
3) If the gradients are bad, the solution will be inaccurate
4) It is a sequential technique and parallelization can only be used for getting the gradients

6
Q

What is Checkerboard Control and why is it used?

A

Based on the density method, Checkerboard Control uses local averaging over neighbouring elements to control the number of elements with a non-integer density. It is used because elements with non-integer densities cannot be manufactured from constant-density materials.

7
Q

What is Member Size Control

A

This method groups smaller bars into larger ones based on a preset preference of bar size. This helps produce more manufacturable designs by reducing the number of trusses, thus simplifying the design

8
Q

What is the SIMP Method

A

The SIMP (Solid Isotropic Material with Penalization) Method originates from the Density Method: it penalizes elements with non-integer densities, skewing each element's density towards more manufacturable voids and solids.

9
Q

Which part of the structure does Material Optimization optimize?

A

The Microstructure

10
Q

What is the Ground Structure Approach

A

A method in topology optimization which first generates a grid, then connects every node to every other node, before iteratively removing the least effective connections.

11
Q

What is Level Set Based Topology Optimization?

A

A level set function, phi, is defined over the design domain as a 3D surface; its zero-level contour implicitly defines the boundary of the design. As the function is raised or lowered the boundary moves — similar to an island with tide pools filling or emptying.

12
Q

What are the advantages and disadvantages of Level Set Based Topology Optimization?

A
Pros: 
- Smooth boundary 
- No element density issues
- Can apply almost any constraint imaginable
Cons:
- 3D Approach to a 2D design
13
Q

What are the advantages and disadvantages of topology optimization?

A

Pros:

  • Gives a good idea of optimal material distribution
  • If the mesh is good, the design may be almost ready for manufacture
  • May produce novel designs

Cons:

  • Doesn’t know the difference between tension and compression
  • Can’t utilize buckling constraints
14
Q

What is design optimization?

A

It is a systematic method of repeatedly improving a given design.

15
Q

Define explicit and implicit analysis.

A

Explicit - A clear, defined function is available; the volume of a mug, for example.

Implicit - No formula is available, so analysis software is required, as this is much harder.

16
Q

Briefly describe the four types of optimization.

A

Sizing - This is when you have an established design and just need to optimize the size of its constituents, such as truss cross-sections.

Shape - More freedom of design than sizing. Focuses on the shape and thickness of the design.

Topology - More freedom than shape and sizing. Starts from just the design space outline and the load locations, and gives the overall layout.

Material - Selection of material and structure, such as ply angle, as well as optimization of the substructure of artificial materials.

17
Q

What is an objective function?

A

The key quantity that the engineer wishes to minimize.

18
Q

What are design variables?

A

A set of variables which accurately and uniquely describe the problem.

19
Q

What is a constraint function?

A

Variables which have design requirements that must be satisfied for the design to be viable.

20
Q

What is a side constraint?

A

Limits placed directly on the design variables themselves — typically geometrical constraints in the form of an outline limit.

21
Q

Define constrained, unconstrained, continuous, and discrete optimization.

A

Constrained - Subject to constraints

Unconstrained - Not subject to constraints but some techniques do turn constrained problems into unconstrained ones.

Continuous - Variables can take on any value between two limits.

Discrete - Variables only able to take on specific values between limits. For example, number of buttons.

22
Q

Define a hessian with two variables

A

H = [ d^2f/dx1^2    d^2f/dx1dx2 ]
    [ d^2f/dx2dx1   d^2f/dx2^2  ]
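As a sketch, the Hessian can also be approximated numerically by central differences. The function below is an illustrative choice (not from the card); its analytic Hessian is [[2, 1], [1, 4]]:

```python
# Finite-difference approximation of a 2x2 Hessian (illustrative function).

def f(x1, x2):
    return x1**2 + x1*x2 + 2*x2**2

def hessian(f, x1, x2, h=1e-5):
    """Approximate the 2x2 Hessian by central differences."""
    d2f_dx1x1 = (f(x1 + h, x2) - 2*f(x1, x2) + f(x1 - h, x2)) / h**2
    d2f_dx2x2 = (f(x1, x2 + h) - 2*f(x1, x2) + f(x1, x2 - h)) / h**2
    d2f_dx1x2 = (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
                 - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4*h**2)
    return [[d2f_dx1x1, d2f_dx1x2],
            [d2f_dx1x2, d2f_dx2x2]]

H = hessian(f, 1.0, 1.0)   # analytic Hessian here is [[2, 1], [1, 4]]
```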

23
Q

What is the positive definite criterion

A

This is where the Hessian matrix of the objective function with respect to the design variables is symmetric and has all positive eigenvalues. This shows that a given stationary point is a local minimum.

It can also be said that if p^T*H*p > 0 for any non-zero vector p, then H is positive definite.

24
Q

What is a Feasible Design

A

One which satisfies all requirements.

25
Q

Define the Lagrangian function for constrained optimization techniques and how it is used to find optima.

A

The Lagrangian function links the objective function to the constraint functions through scalar values called Lagrange multipliers (v):

L(x, v) = F(x) + SUM(vj*hj(x))

Optima are found at the stationary points of L, where its derivatives with respect to both x and v are zero.
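A minimal worked example (problem chosen purely for illustration): minimize F(x) = x^2 subject to h(x) = x - 1 = 0. The Lagrangian is L(x, v) = x^2 + v*(x - 1), and the stationarity conditions give the optimum directly:

```python
# Stationarity of L(x, v) = x**2 + v*(x - 1):
#   dL/dx = 2x + v = 0  and  dL/dv = x - 1 = 0.

x_opt = 1.0            # from dL/dv = 0
v_opt = -2 * x_opt     # from dL/dx = 0

# Check that the gradient of L vanishes at (x_opt, v_opt).
dL_dx = 2 * x_opt + v_opt
dL_dv = x_opt - 1
```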

26
Q

What is a pareto optimum set?

A

The set of designs which cannot be improved with respect to all criteria at the same time.

27
Q

State 3 basic approaches to multi-objective optimization problems.

A

1) Pick the most important criterion, treat the problem as a single-objective problem, and turn the other criteria into constraints.
2) Linear Combination of Normalized Criteria - First, find each F_k_best by optimizing that criterion on its own. Define F_0 as:

F_0 = a1*F'1 + a2*F'2 + … + ak*F'k

where SUM(a) = 1 and the a's are weightings that determine how important each function is, and

F'k = Fk/Fk_best

and then min(F_0).

3) Minmax Method - Define F''k as Fk/F*, where F* is the desired value or “desirement”, and then minimize

F0 = max{F''1, F''2, …, F''k}

28
Q

What is Bisection? What are the advantages and disadvantages?

A

It is a method which uses the derivative of the function to locate a minimum. For a given function f(x) over an interval [a, b] with derivative f’(x), the midpoint is:

c = (a+b)/2

If f’(c) > 0 the function is increasing at c, so the new interval is [a, c]; if not, it is [c, b].

This process is iterated until the interval is sufficiently small.

Adv:

  • Only needs the sign of the derivative, not its magnitude
  • Simple

Disadv:

  • Iterative so can be slow and expensive
  • Will only find one minimum of potentially many.
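A minimal sketch of this update rule, assuming f is unimodal on [a, b] (the test function f(x) = (x - 2)^2 and the interval are illustrative choices):

```python
# Derivative-sign bisection for a 1-D minimum of a unimodal function.

def bisection_min(dfdx, a, b, tol=1e-8):
    """Shrink [a, b] using the sign of f'(c) at the midpoint."""
    while b - a > tol:
        c = (a + b) / 2
        if dfdx(c) > 0:   # f increasing at c -> minimum lies to the left
            b = c
        else:             # f decreasing at c -> minimum lies to the right
            a = c
    return (a + b) / 2

# f(x) = (x - 2)**2 has f'(x) = 2*(x - 2); minimum at x = 2.
x_star = bisection_min(lambda x: 2 * (x - 2), 0.0, 5.0)
```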
29
Q

What is the Golden Search Method and what are its benefits?

A

The Golden Search Method is a gradient-free interval-reducing method. The interior points are placed at positions related to the Golden Ratio - over an interval [a,c]:

ab/bc = bc/ac

Where b is a point between a and c, which determines the proportions of the sub-intervals.

Benefits

  • Robust
  • Economical
  • Reduces the number of function evaluations compared to bisection
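A minimal golden-section sketch for a unimodal f on [a, b] (the test function is illustrative). Note how one interior point is reused each iteration, so only one new function evaluation is needed — this is the "economical" benefit:

```python
import math

def golden_section(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2            # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:        # minimum lies in [a, d]: reuse c as the new d
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:              # minimum lies in [c, b]: reuse d as the new c
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

x_star = golden_section(lambda x: (x - 1.5)**2 + 1, 0.0, 4.0)
```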
30
Q

What is the Hooke-Jeeves Method?

A

This approach is a gradient-free technique with steps as follows:

1) Take a step in the direction of axis 1. If this is an improvement in F(x), move to next step. If not, take a step in the opposite direction. If still unsatisfactory, reduce step size and repeat
2) Repeat step 1 but in axis 2
3) Create a vector between the two points and move in the direction of said vector until failure. Then shorten the steps
4) Repeat steps 1-3 until results converge.
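The steps above can be sketched as follows. This is a simplified pattern-search sketch (the pattern move here just evaluates the extrapolated point directly, and the quadratic test function is an illustrative assumption):

```python
# Compact Hooke-Jeeves pattern search (gradient-free).

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    x = list(x0)
    fx = f(x)
    while step > tol:
        # Exploratory moves along each axis, in both directions.
        base, fbase = list(x), fx
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(base)
                trial[i] = base[i] + delta
                ft = f(trial)
                if ft < fbase:
                    base, fbase = trial, ft
                    break
        if fbase < fx:
            # Pattern move: extrapolate along the improving vector.
            pattern = [2 * b - xi for b, xi in zip(base, x)]
            fp = f(pattern)
            if fp < fbase:
                x, fx = pattern, fp
            else:
                x, fx = base, fbase
        else:
            step *= shrink   # no improvement: reduce the step size
    return x, fx

x_star, f_star = hooke_jeeves(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                              [0.0, 0.0])
```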

31
Q

What is Gradient Free Optimization?

A

Optimization that does not involve the calculation of gradients.

32
Q

What are the KKT Optimality Conditions?

A

For a local minimum point, there exist unique Lagrange multipliers, vj, such that:

df/dxi + SUM(vj*[dGj(x)/dxi]) = 0

where Gj(x) are the inequality constraints, summing from j=1 to j=P.

Find the derivatives and solve for the vj; if all vj > 0 then the point x is a constrained minimum.

This defines the necessary condition for optimality.
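A tiny numerical check of these conditions (problem chosen for illustration): minimize f = x1^2 + x2^2 subject to G(x) = 1 - x1 - x2 <= 0, which is active at the optimum x = (0.5, 0.5):

```python
# KKT check at the candidate point x = (0.5, 0.5).

x1, x2 = 0.5, 0.5
grad_f = (2 * x1, 2 * x2)     # gradient of f = x1**2 + x2**2
grad_G = (-1.0, -1.0)         # gradient of G = 1 - x1 - x2

# Solve df/dx1 + v * dG/dx1 = 0 for the multiplier v.
v = -grad_f[0] / grad_G[0]    # v = 1 > 0 -> constrained minimum

# The stationarity residual should vanish in every component.
residual = tuple(gf + v * gG for gf, gG in zip(grad_f, grad_G))
```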

33
Q

What is the alternate form of the KKT optimality condition? What is the physical meaning of the Lagrangian multiplier in this case?

A

-Grad(F(x)) = SUM(vj*Grad(Gj(x))) and v>0

The greater the v, the greater the gain from relaxing the corresponding constraint, and vice versa.

34
Q

What is the Nelder-Mead Method?

A

A gradient-free method which uses a simplex to find the optimum.

1) From an arbitrary starting position, build a simplex of N+1 points, where N is the number of design variables
2) Reflect the point with the highest value through the centroid of the remaining points.
3) If successful, expand the reflection; if the expansion is unsuccessful, keep the reflected point
4) If 3 doesn’t yield an improvement, pick two points either side of the reflection line and evaluate. If one of these is better, contract the simplex to this point.
5) If neither of the points outperforms the second-best point, shrink the original simplex toward the best-performing point.
6) Repeat until convergence.
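The steps above can be sketched compactly in pure Python. This is a simplified variant (standard reflection/expansion coefficients, inside contraction only; the quadratic test function is an illustrative assumption):

```python
# Compact Nelder-Mead simplex search for n design variables.

def nelder_mead(f, x0, step=1.0, tol=1e-10, max_iter=2000):
    n = len(x0)
    # Build the initial simplex of n+1 points around x0.
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)                      # best first, worst last
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):
            # Reflection is the new best: try expanding further.
            expd = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # Contract toward the centroid; shrink everything if that fails.
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                simplex = [[b + 0.5 * (p[i] - b) for i, b in enumerate(best)]
                           for p in simplex]
    simplex.sort(key=f)
    return simplex[0]

x_star = nelder_mead(lambda x: (x[0] - 3)**2 + (x[1] + 1)**2, [0.0, 0.0])
```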

35
Q

What is a Penalty Function? Why are they used?

A

A function added to the objective function, scaled by a penalty multiplier, which skews the objective function against violating the constraints. In this way, unconstrained optimization techniques may be used on constrained problems.

36
Q

What is an Exterior Penalty Function? What are the advantages and disadvantages?

A

A function added to the objective function which penalizes solutions outside the constraints. The new objective function is:

n(x) = f(x) + (1/r)*P(x)

where P(x)= SUM(c_i(x)^2)

and r is a penalty multiplier which gradually decreases.

Adv:

  • Able to use unconstrained techniques
  • Starting point doesn’t have to be feasible

Disadv:

  • Solution will always violate the constraints ever so slightly, so the constraints may need tightening at the outset to compensate.
  • Iterative, so can be expensive.
  • Choice of r is not simple: too small and the problem becomes ill-conditioned - for r to be small, c must be almost 0. Too large and the constraint function may be reduced in importance too much.
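A minimal sketch of the exterior-penalty loop (the 1-D problem, the crude ternary-search inner minimizer, and the r schedule are all illustrative assumptions). Note the final solution sits just inside the infeasible region, which is exactly the first disadvantage above:

```python
# Exterior penalty: minimize f(x) = x**2 subject to c(x) = 1 - x <= 0
# (i.e. x >= 1); the constrained optimum is x = 1.

def minimize_1d(g, a, b, iters=200):
    """Crude ternary search for a unimodal g on [a, b]."""
    for _ in range(iters):
        m1 = a + (b - a) / 3
        m2 = b - (b - a) / 3
        if g(m1) < g(m2):
            b = m2
        else:
            a = m1
    return (a + b) / 2

def penalized(x, r):
    viol = max(0.0, 1.0 - x)            # constraint violation
    return x**2 + (1.0 / r) * viol**2   # n(x) = f(x) + (1/r)*P(x)

x = 0.0
for r in [1.0, 0.1, 0.01, 0.001, 1e-5]:   # r gradually decreases
    x = minimize_1d(lambda t: penalized(t, r), -2.0, 3.0)

# x approaches 1 from the infeasible side (x slightly less than 1).
```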
37
Q

What is an Interior Penalty Function? What are the advantages and disadvantages?

A

A function added to the objective function which penalizes solutions as they approach the constraint boundaries from inside the feasible region. The new objective function is:

n(x) = f(x) + rP(x)

Where P(x) = SUM[1/(1-Gj(x))]

and r is a penalty multiplier which gradually decreases, weakening the barrier so the solution can approach the constraint boundary.

Adv:

  • Able to use unconstrained techniques
  • Constraints are never violated

Disadv:

  • Starting point must be feasible
  • Iterations so can be costly
  • Singularities can occur
38
Q

What is Sensitivity Analysis?

A

Concerned with Sensitivities, which are the derivatives of the objective function w.r.t. design variables. They find how much the objective function will change given a change in a design variable. These calculations allow for faster gradient-based optimization techniques.

39
Q

With respect to sensitivity analysis, what is the forward difference differentiation method? What are the main forms of error introduced?

A

A method of sensitivity analysis which approximates derivatives:

df/dxi = [f(xi + delta xi) - f(xi)]/delta xi

  • Based on a truncated Taylor expansion, so truncation error is introduced
  • Condition Error - The difference between the computer's maximum numerical accuracy and the actual number of digits in the computed terms.
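A minimal sketch of the formula (the function and perturbation size are illustrative; too large a delta gives truncation error, too small gives condition error):

```python
# Forward-difference sensitivity of f with respect to design variable i.

def forward_diff(f, x, i, delta=1e-6):
    """Approximate df/dxi at the point x (a list of design variables)."""
    xp = list(x)
    xp[i] += delta
    return (f(xp) - f(x)) / delta

f = lambda x: x[0]**2 + 3 * x[0] * x[1]
s = forward_diff(f, [2.0, 1.0], 0)   # analytic value: 2*x0 + 3*x1 = 7
```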
40
Q

What is a deterministic method of optimization?

A

A method which performs an exhaustive search over the design space to find the global optimum

41
Q

What is a stochastic method of optimization?

A

A method which utilizes sample points to build an approximation to then determine a global optima. Computationally less expensive than deterministic methods.

42
Q

List the typical steps in local stochastic optimization.

A

1) Find design sensitivities

2) Use Taylor expansion to approximate the local function and find optima

43
Q

What are Designs of Experiments?

A

They are methods in stochastic optimization concerned with finding the optimum positions of sample points at which to perform analysis in order to build an approximation with the best representation at the lowest computational cost.

44
Q

What is a Latin Hypercube?

A

A DoE which minimizes SUM(1/L^2) over all pairs of sample points (the Audze-Eglais Optimality Criterion), where L is the distance between a pair of points. The most common DoE.
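A basic (unoptimized) Latin hypercube sampler can be sketched as follows — this only enforces the one-sample-per-band property; an Audze-Eglais design would additionally optimize the spacing between the points:

```python
import random

# Basic Latin hypercube: each variable gets exactly one sample per band.

def latin_hypercube(n_points, n_vars, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(n_vars):
        levels = list(range(n_points))
        rng.shuffle(levels)
        # Centre each sample in its band of [0, 1].
        cols.append([(lv + 0.5) / n_points for lv in levels])
    return [tuple(col[i] for col in cols) for i in range(n_points)]

pts = latin_hypercube(5, 2)
```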

45
Q

What is the Moving Least Squares Method?

A

An approximation method in which a point cloud of evaluated sample points is established over the design space. To build the approximation at a chosen point, the surrounding sample points are assigned weights based on their distance, r, from that point, using a Gaussian weight decay function:

wi = e^(-theta*ri^2)

where theta controls the closeness of fit. Increasing the number of points and theta results in increased accuracy; however, if theta is too small, it may cause over-smoothing.
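A sketch of the weight function (theta and the distances are illustrative): nearby points receive weights near 1, distant points are discounted to almost nothing, and a larger theta makes the fit tighter to nearby samples.

```python
import math

# Gaussian weight decay used in Moving Least Squares.

def mls_weight(r, theta):
    return math.exp(-theta * r**2)

w_near = mls_weight(0.1, theta=5.0)   # close sample: weight near 1
w_far  = mls_weight(2.0, theta=5.0)   # distant sample: weight near 0
```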

46
Q

What are the advantages and disadvantages of global approximation techniques in stochastic optimization?

A

Adv:

  • Better chance of finding global optimum
  • Provides description of entire design space
  • Handles noise well
  • Can utilize parallel computing
  • Can reformulate the problem once the approximation is built, which is important if the number of design variables is much larger than about 10
  • More reliable than local techniques

Disadv:

  • Can’t handle a lot of design variables
  • Depends on the accuracy of the response surface so approximation must be very good.
47
Q

What are the advantages and disadvantages of local approximation techniques in stochastic optimization?

A

Adv:

  • More efficient than global techniques
  • Can handle a large number of variables

Disadv:

  • Requires design sensitivities
  • Difficult to parallelize
  • Cannot handle numerical noise well
48
Q

What is a Genetic Algorithm in Optimization? What are the basic steps?

A

Genetic algorithm based optimization finds the best design by mimicking Natural Selection.

1) Create an initial population with binary representations of design variable values strung together like a chromosome
2) Evaluate the “fitness” of each member in the population by evaluating the objective function of each design.
3) Select the best designs and create the next generation of designs accordingly.
4) Replace the old designs with the new
5) Repeat until convergence or stopping criteria met
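The steps above can be sketched as a minimal binary GA (all parameters — population size, mutation rate, tournament selection, the 8-bit encoding and the test objective — are illustrative assumptions):

```python
import random

# Minimal binary GA: minimize f(x) = (x - 10)**2, x decoded from 8 bits.

def decode(bits):
    return int("".join(map(str, bits)), 2)   # integer in [0, 255]

def fitness(bits):
    return -(decode(bits) - 10)**2           # higher is better

def ga(pop_size=30, n_bits=8, n_gen=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        new_pop = pop[:2]                    # elitist strategy: keep best two
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)   # 1-point crossover
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with small probability.
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return decode(max(pop, key=fitness))

best = ga()
```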

49
Q

In genetic algorithms, what are uniform and point crossovers?

A

They are methods of mimicking reproduction.

Point crossover - In each parent, cuts are made at certain points of the binary code representing design variable values and the sections connected to create an offspring design. For example, a 2-point crossover selects two random cuts, and swaps the genetic material between the sections with the other parent’s material.

Uniform crossover - Each gene (bit) of the offspring is taken from either parent with equal probability, rather than swapping contiguous sections.
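Both operators can be sketched on bit-string parents (the all-zeros/all-ones parents are chosen purely so the swapped material is visible):

```python
import random

# 2-point and uniform crossover on bit-string chromosomes.

def two_point_crossover(p1, p2, rng):
    i, j = sorted(rng.sample(range(1, len(p1)), 2))
    # Swap the middle section between the two cut points.
    return p1[:i] + p2[i:j] + p1[j:]

def uniform_crossover(p1, p2, rng):
    # Each gene comes from either parent with equal probability.
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

rng = random.Random(0)
mum = [0] * 8
dad = [1] * 8
child_2pt = two_point_crossover(mum, dad, rng)
child_uni = uniform_crossover(mum, dad, rng)
```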

50
Q

What is the Elitist strategy in genetic algorithms?

A

During the reproductive stage, several of the best designs are directly copied into the next generation.

51
Q

In GA, what is a Mutation?

A

A method of encouraging diversity of designs in the “gene pool”. Randomly selected bits in the binary strings are flipped from 1 to 0 or vice versa.

52
Q

What are the advantages and disadvantages of GA optimization?

A

Advantages:

  • Increased chance of finding non-local solutions
  • Can handle discrete and mixed problems
  • Can handle noise and occasional failure to compute
  • Simple - no complex maths
  • Can be parallelized

Disadvantages:

  • High number of iterations
  • Lower accuracy - can even be considered an improver instead of an optimizer
  • No indication as to whether the solution is close to the optimum
  • Some parameters need to be defined that affect the solution process