Optimization and Search Flashcards

1
Q

Optimization

We need:

A

Optimization

We need:

  • A numerical representation x of every possible solution to the problem
  • A function f(x) that tells us how good a solution is
  • A way of finding the max or min of f(x)
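
A tiny Python sketch of these three ingredients (the 0/1 representation and the item values are made-up illustrations, not from the card):

    from itertools import product

    # Representation: a solution x is a tuple of 0/1 choices.
    # Objective: f(x) tells us how good a solution x is.
    def f(x):
        values = [4, 2, 7]  # hypothetical item values
        return sum(v for v, chosen in zip(values, x) if chosen)

    # Finding max f(x): here, trivially, by testing every possible x.
    best = max(product([0, 1], repeat=3), key=f)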
2
Q

Exhaustive search

A

Exhaustive search

  • Test all possible solutions, pick the best
  • Guaranteed to find the optimal solution

Only works for simple discrete problems, but can be approximated in continuous problems:

  • Sample the space at regular intervals (grid search)
  • Sample the space randomly N times

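A minimal Python sketch of both approximations, using a hypothetical 1-D objective on [0, 1]:

    import random

    def f(x):  # hypothetical objective; any f(x) would do
        return -(x - 0.3) ** 2

    # Grid search: sample the space at regular intervals.
    grid = [i / 100 for i in range(101)]
    best_grid = max(grid, key=f)

    # Random search: sample the space N times.
    N = 1000
    best_random = max((random.uniform(0, 1) for _ in range(N)), key=f)
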
3
Q

Greedy search

A

Greedy search

  • Pick a solution as the current best
  • Compare to all neighboring solutions
    • If no neighbor is better, then terminate
    • Otherwise, replace the current best with the best of the neighbors
    • Repeat
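
A generic Python sketch of this loop, assuming a neighbors(x) helper that enumerates all neighboring solutions (both names are placeholders, not from the card):

    def greedy_search(x0, neighbors, f):
        """Move to the best neighbor until no neighbor improves f."""
        current = x0
        while True:
            candidates = list(neighbors(current))
            if not candidates:
                return current
            best_neighbor = max(candidates, key=f)
            if f(best_neighbor) <= f(current):
                return current  # no neighbor is better: terminate
            current = best_neighbor  # otherwise replace the current best, repeat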
4
Q

Hill climbing

A
  • Pick a solution as the current best
  • Compare to a random neighbor
    • If the neighbor is better, replace the current best
    • Repeat
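
The same idea in Python, assuming a random_neighbor(x) helper; the fixed step budget is an added assumption, since the card's loop has no explicit stopping rule:

    def hill_climb(x0, random_neighbor, f, steps=10_000):
        """Compare the current best to one random neighbor per step."""
        current = x0
        for _ in range(steps):
            candidate = random_neighbor(current)
            if f(candidate) > f(current):
                current = candidate  # the neighbor is better: replace
        return current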
5
Q

Examples of continuous optimization

A

Examples of continuous optimization

  • Mechanics
    • Optimized design of mechanical shapes, etc.
  • Economics
    • Portfolio selection, pricing options, risk management, etc.
  • Control engineering
    • Process engineering, robotics, etc.
6
Q

Gradient ascent / descent

A

In continuous optimization, we may be able to calculate the gradient of f(x).

The gradient tells us in which direction f(x) increases the most.

Starting from x(0), we can iteratively find higher f(x(k+1)) by adding a value proportional to the gradient to x(k):

x(k+1) = x(k) + γ∇f(x(k))

where γ is the step size.
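
A minimal 1-D sketch in Python; the step size γ and the iteration count are illustrative assumptions:

    def gradient_ascent(x0, grad_f, gamma=0.1, steps=100):
        """Iterate x(k+1) = x(k) + gamma * grad_f(x(k))."""
        x = x0
        for _ in range(steps):
            x = x + gamma * grad_f(x)
        return x

    # Example: maximize f(x) = -(x - 2)^2, whose gradient is -2(x - 2).
    x_star = gradient_ascent(0.0, lambda x: -2 * (x - 2))  # -> approx. 2.0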

7
Q

Algorithms like _____ can only find local optima:

A

Algorithms like greedy search, hill climbing and gradient ascent/descent can only find local optima:

  • They will only move through a strictly improving chain of neighbors
  • Once they find a solution with no better neighbors they stop
8
Q

Exploitation

A

Exploitation

Trying to improve the current best solution by local search.
9
Q

Exploitation and Exploration

A

Search methods should combine:

  • Trying completely new solutions (like in exhaustive search) => Exploration
  • Trying to improve the current best solution by local search => Exploitation
10
Q

Mixed solution (exploration and exploitation)

A

A mix of exploration and exploitation works better than either alone, but only if there are few local optima.

11
Q

Simulated annealing

A
  • Set an initial temperature T
  • Pick an initial solution
  • Repeat:
    • Pick a solution neighboring the current solution
    • If the new one is better, keep it
    • Otherwise, keep the new one with probability
      P(Δf, T) = e^(−Δf/T)
    • Decrease T
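
A Python sketch of this loop for maximization, assuming a random_neighbor(x) helper; the geometric cooling schedule and the step budget are added assumptions:

    import math
    import random

    def simulated_annealing(x0, random_neighbor, f,
                            T=1.0, cooling=0.995, steps=10_000):
        """Accept worse neighbors with probability e^(-Δf/T), then cool T."""
        current = x0
        for _ in range(steps):
            candidate = random_neighbor(current)
            delta = f(current) - f(candidate)  # Δf > 0: candidate is worse
            if delta <= 0 or random.random() < math.exp(-delta / T):
                current = candidate
            T *= cooling  # decrease T
        return current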