Optimization and Search Flashcards
Optimization
We need:
- A numerical representation x of each possible solution to the problem
- A function f(x) that tells us how good a solution is
- A way of finding the maximum or minimum of f(x)
Exhaustive search
- Test all possible solutions, pick the best
- Guaranteed to find the optimal solution
Only works for simple discrete problems, but can be approximated in continuous problems (see the sketch after this list):
- Sample the space at regular intervals (grid search)
- Sample the space randomly N times
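A minimal Python sketch of both approximations, assuming we maximize a function f over a box given by per-dimension (lo, hi) bounds; the names grid_search and random_search are illustrative, not from the card:

```python
import itertools
import random

def grid_search(f, bounds, steps=100):
    # Sample each dimension at regular intervals and test every combination.
    axes = [[lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
            for lo, hi in bounds]
    return max(itertools.product(*axes), key=f)

def random_search(f, bounds, n=10_000):
    # Sample the space randomly n times and keep the best sample.
    samples = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
               for _ in range(n)]
    return max(samples, key=f)

# Example: both should return a point near (1,) for this 1-D function.
print(grid_search(lambda x: -(x[0] - 1) ** 2, [(-5, 5)]))
```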
Greedy search
- Pick a solution as the current best
- Compare to all neighboring solutions
- If no neighbor is better, then terminate
- Otherwise, replace the current best with the best of the neighbors
- Repeat
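A sketch of these steps in Python, assuming maximization and a problem-specific neighbors(x) function that enumerates a solution's neighbors (both names are assumptions):

```python
def greedy_search(f, x0, neighbors):
    current = x0
    while True:
        # Compare the current best to all neighboring solutions.
        best_neighbor = max(neighbors(current), key=f)
        if f(best_neighbor) <= f(current):
            return current  # no neighbor is better: terminate
        current = best_neighbor  # replace current best with the best neighbor
```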
Hill climbing
- Pick a solution as the current best
- Compare to a random neighbor
- If the neighbor is better, replace the current best
- Repeat
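The same idea with one random neighbor per step. Since the card's loop never terminates on its own, this sketch assumes a fixed iteration budget; random_neighbor is an assumed problem-specific function:

```python
def hill_climb(f, x0, random_neighbor, iterations=10_000):
    current = x0
    for _ in range(iterations):
        candidate = random_neighbor(current)  # compare to a random neighbor
        if f(candidate) > f(current):
            current = candidate  # the neighbor is better: replace current best
    return current
```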
Examples of continuous optimization
- Mechanics
- Optimized design of mechanical shapes, etc.
- Economics
- Portfolio selection, pricing options, risk management, etc.
- Control engineering
- Process engineering, robotics, etc.
Gradient ascent / descent
In continuous optimization, we may be able to calculate the gradient of f(x).
The gradient tells us in what direction f(x) increases the most.
Starting from x^(0), we can iteratively find a higher f(x^(k+1)) by adding a value proportional to the gradient to x^(k):
x^(k+1) = x^(k) + γ∇f(x^(k))
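A small sketch of this update rule, assuming we are handed the gradient ∇f as a function; the step size γ = 0.01 and the iteration budget are illustrative choices:

```python
def gradient_ascent(grad_f, x0, gamma=0.01, iterations=1_000):
    x = x0
    for _ in range(iterations):
        # x^(k+1) = x^(k) + gamma * grad f(x^(k))
        x = [xi + gamma * gi for xi, gi in zip(x, grad_f(x))]
    return x

# Example: f(x) = -(x - 3)^2 has gradient -2(x - 3),
# so ascent converges toward the maximum at x = 3.
print(gradient_ascent(lambda x: [-2 * (x[0] - 3)], [0.0]))
```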
Algorithms like _____ can only find local optima:
Algorithms like greedy search, hill climbing and gradient ascent/descent can only find local optima:
- They will only move through a strictly improving chain of neighbors
- Once they find a solution with no better neighbors they stop
Exploitation and Exploration
Search methods should combine:
- Trying completely new solutions (like in exhaustive search) => Exploration
- Trying to improve the current best solution by local search => Exploitation
Mixed solution (exploration and exploitation)
Works better, but only if there are few local optima.
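One common mixed strategy is random-restart hill climbing: fresh random starting points provide the exploration, and local search from each start provides the exploitation. A sketch, with illustrative function names:

```python
def random_restart_hill_climb(f, random_solution, random_neighbor,
                              restarts=20, iterations=1_000):
    best = None
    for _ in range(restarts):
        x = random_solution()        # exploration: a completely new solution
        for _ in range(iterations):  # exploitation: improve it by local search
            candidate = random_neighbor(x)
            if f(candidate) > f(x):
                x = candidate
        if best is None or f(x) > f(best):
            best = x
    return best
```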
Simulated annealing
- Set an initial temperature T
- Pick an initial solution
- Repeat:
  - Pick a solution neighboring the current solution
  - If the new one is better, keep it
  - Otherwise, keep the new one with probability P(Δf, T) = e^(-Δf/T)
  - Decrease T
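A sketch assuming maximization, so Δf = f(current) − f(candidate) is positive exactly when the candidate is worse; the geometric cooling factor and iteration budget are assumptions, since the card leaves the cooling schedule unspecified:

```python
import math
import random

def simulated_annealing(f, x0, random_neighbor,
                        T=1.0, cooling=0.995, iterations=10_000):
    current = x0
    for _ in range(iterations):
        candidate = random_neighbor(current)
        delta_f = f(current) - f(candidate)  # > 0 when the candidate is worse
        # Better candidates are always kept; worse ones with P = e^(-delta_f / T).
        if delta_f <= 0 or random.random() < math.exp(-delta_f / T):
            current = candidate
        T *= cooling  # decrease the temperature
    return current
```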