Optimization training Flashcards
Hill climbing
- Marginally more complex than greedy random training.
- Instead of throwing out a random vector for a better one, it refines the current one.
- This is why the name is actually very descriptive.
- In hill climbing, the process terminates at the nearest high point, which is called a local maximum.
- With more dimensions to search, you will very likely never find the global maximum or minimum.
- Sometimes you have to randomize and start over if you find your training is "stuck".
Hill climbing implementation
- The hill climbing algorithm is implemented in two separate functions.
- The first function performs initialization and the second performs iteration, as in the sketch below.
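A minimal sketch of that two-function structure in Python, assuming we are maximizing a caller-supplied score function over a real-valued vector; the function names, step size, and example score are illustrative, not from the source:

```python
import random

def hill_climb_init(dimensions, low=-1.0, high=1.0):
    """Initialization: start from a random candidate vector."""
    return [random.uniform(low, high) for _ in range(dimensions)]

def hill_climb_iterate(candidate, score, step=0.1):
    """Iteration: examine neighbors differing by +/-step in one
    dimension; return the best of them (or the candidate itself
    if no neighbor scores higher)."""
    best, best_score = candidate, score(candidate)
    for i in range(len(candidate)):
        for delta in (-step, step):
            neighbor = list(candidate)
            neighbor[i] += delta
            s = score(neighbor)
            if s > best_score:
                best, best_score = neighbor, s
    return best, best_score

# Example: climb toward the single peak of a simple score function.
def score(v):
    return -((v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2)

current = hill_climb_init(2)
while True:
    best, best_score = hill_climb_iterate(current, score)
    if best is current:  # no neighbor improved: stuck at a local maximum
        break
    current = best
print(current, best_score)
```

When the loop exits because no neighbor improved, that is exactly the "stuck at a local maximum" situation described above, and the random-restart remedy is simply to call `hill_climb_init` again.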
Simulated Annealing
Simulated annealing is a method for finding a good (not necessarily perfect) solution to an optimization problem. If you’re in a situation where you want to maximize or minimize something, your problem can likely be tackled with simulated annealing.
The traveling salesman problem is a good example: the salesman is looking to visit a set of cities in the order that minimizes the total number of miles he travels. As the number of cities gets large, it becomes too computationally intensive to check every possible itinerary. At that point, you need an algorithm.
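To make "cost" and "neighbor" concrete for the traveling salesman case, here is a minimal sketch; swapping two cities is one common way to generate a neighboring itinerary, and the coordinates and helper names are hypothetical:

```python
import math
import random

def tour_cost(tour, coords):
    """Total round-trip distance of visiting the cities in `tour` order."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def random_neighbor(tour):
    """A neighboring itinerary: swap two randomly chosen cities."""
    i, j = random.sample(range(len(tour)), 2)
    neighbor = list(tour)
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

# Five cities at hypothetical (x, y) coordinates.
coords = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5)]
tour = list(range(len(coords)))
print(tour_cost(tour, coords), tour_cost(random_neighbor(tour), coords))
```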
Why choose simulated annealing?
There are many optimization algorithms, including hill climbing, genetic algorithms, gradient descent, and more. Simulated annealing’s strength is that it avoids getting caught at local maxima - solutions that are better than any others nearby but aren’t the very best.
What is simulated annealing's basic structure?
Basic algorithm:
1. Generate a random solution.
2. Calculate its cost using some cost function you've defined.
3. Generate a random neighboring solution.
4. Calculate the new solution's cost.
5. Compare them:
   - If c_new < c_old: move to the new solution.
   - If c_new > c_old: maybe move to the new solution.
6. Repeat steps 3-5 until an acceptable solution is found or you reach some maximum number of iterations.
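The "maybe" in step 5 is what keeps simulated annealing from getting trapped the way plain hill climbing does. A standard choice (not spelled out above) is the Metropolis rule: accept a worse solution with probability exp((c_old - c_new) / T), where the temperature T gradually cools so that risky moves become rarer over time. A minimal generic sketch, assuming caller-supplied cost and neighbor functions; the parameter names and cooling schedule are illustrative:

```python
import math
import random

def simulated_annealing(initial, cost, neighbor,
                        t_start=10.0, t_min=1e-3, cooling=0.995,
                        max_iters=100_000):
    """Steps 1-6 above, with the Metropolis rule for 'maybe move'."""
    current, c_old = initial, cost(initial)            # steps 1-2
    best, best_cost = current, c_old
    t = t_start
    for _ in range(max_iters):
        candidate = neighbor(current)                  # step 3
        c_new = cost(candidate)                        # step 4
        # Step 5: always accept an improvement; accept a worse
        # solution with probability exp((c_old - c_new) / t),
        # which shrinks as the temperature t cools.
        if c_new < c_old or random.random() < math.exp((c_old - c_new) / t):
            current, c_old = candidate, c_new
            if c_old < best_cost:
                best, best_cost = current, c_old
        t *= cooling                                   # cool down
        if t < t_min:                                  # fully cooled: stop
            break
    return best, best_cost

# Example: minimize (x - 3)^2 over a single real parameter.
solution, final_cost = simulated_annealing(
    [0.0],
    cost=lambda v: (v[0] - 3.0) ** 2,
    neighbor=lambda v: [v[0] + random.uniform(-0.5, 0.5)],
)
print(solution, final_cost)
```

For the traveling salesman example, the same loop works unchanged: pass the tour as `initial`, `tour_cost` as the cost function, and `random_neighbor` as the neighbor function.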