Chapter 11: Optimization & Metaheuristics Flashcards
What is the objective of optimization?
What defines what ‘improving’, ‘better’, or ‘best’ means; usually expressed as a function to be minimized or maximized
What are variables in the context of optimization?
The controllable quantities that affect the objective
What is a solution in the context of optimization?
A set of values for the optimization variables
What is a constraint in terms of optimization?
A requirement on the solutions, e.g. g(x, y) ≥ 0
What types of optimization problems are there?
- Constrained vs. unconstrained
- Discrete vs. continuous: Determines the nature of the math involved
- Single- vs. multi-objective: Having multiple notions of ‘better’ is complicated, and solutions can be incomparable; it is usually better to combine the multiple objectives into a single one by making a trade-off
- Global vs. local: Simplify the problem by cutting it up into smaller, easier pieces, then optimize the pieces sequentially
- Convex vs. non-convex: Convexity in the context of optimization refers to the property of a function that its local minima are also global minima
- Optimizing AI agent vs. AI agent optimization: The agent can be used to optimize something, or the agent itself is optimized
List multiple conceptual optimization techniques.
Enumeration: Listing all feasible solutions and their objective values. It might be impossible to generate all solutions, or it might be too costly to calculate the objective values. Usually useful when the set of candidates is small.
Iteration: Start with a (partial) solution and improve it. It is local, as you need to look in the vicinity of the existing solution, but it is also flexible, as the complexity is manageable.
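As a minimal sketch of enumeration (the objective f and the candidate grid are made-up examples, not from the chapter):

```python
# Enumeration: score every feasible candidate and keep the best one.
def f(x, y):
    return (x - 1) ** 2 + (y + 2) ** 2  # toy objective to minimize

# Small discrete candidate set, so full enumeration is affordable.
candidates = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
best = min(candidates, key=lambda s: f(*s))
print(best, f(*best))  # (1, -2) 0
```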
Which steps would you take in iterative search for optimization?
Initialization step: Create some initial solution
Repetition step: 1. Generate some neighbors and calculate their objective values. 2. Pick the best-performing neighbor. 3. Repeat until time runs out or the solution is good enough.
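A minimal sketch of this loop in Python; the toy one-dimensional objective, the Gaussian neighbor moves, and the fixed budget are illustrative assumptions:

```python
import random

def f(x):
    return (x - 3.0) ** 2  # toy 1-D objective to minimize

x = random.uniform(-10, 10)                # initialization step
for _ in range(1000):                      # repetition step, fixed budget
    # 1. Generate some neighbors and score them.
    neighbors = [x + random.gauss(0, 0.5) for _ in range(10)]
    # 2. Pick the best-performing neighbor; move only if it improves.
    best = min(neighbors, key=f)
    if f(best) < f(x):
        x = best
print(x, f(x))
```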
Name three iterative search optimization techniques and how they differ from each other.
Random search: No stopping criterion, neighbors are generated randomly, neighbors are selected randomly.
Greedy search: Search stops when a local optimum has been reached; moves are generated randomly, and the search keeps either the current solution or the best improving neighbor.
Gradient-following: Search stops when a local optimum has been reached (up to some tolerance ε); the unique neighbor is determined by the (analytical or numerical) gradient.
All of these have parameters that determine how many neighbors are generated, which sampling distributions are used, and which step sizes the search takes. There are also adaptive techniques, for which these parameters can depend on the search history.
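A minimal gradient-following sketch, assuming a toy objective, a numerical (finite-difference) gradient, and illustrative values for the step size and tolerance ε:

```python
def f(x):
    return (x - 3.0) ** 2  # toy objective

def num_grad(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # central finite difference

x, step, eps = 0.0, 0.1, 1e-8
g = num_grad(f, x)
while abs(g) > eps:      # stop near a local optimum, up to tolerance eps
    x -= step * g        # the gradient determines the unique next point
    g = num_grad(f, x)
print(x)  # converges to ~3.0
```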
What are metaheuristics and when would they be a suitable choice?
Metaheuristics: a class of optimization algorithms that find approximate solutions to difficult or complex optimization problems using heuristics rather than gradients or Hessians. They can handle large search spaces and constraints effectively, but may not find the global optimum.
Metaheuristics are suitable when the problem has many local optima and you want to find a very good local optimum (ideally the global one).
Give three examples of metaheuristics.
Simulated Annealing: Metaheuristic algorithm that mimics the process of annealing in metallurgy; it uses random changes and a temperature parameter, occasionally accepting worsening moves to escape local optima and encourage convergence to the global optimum.
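A minimal simulated-annealing sketch; the toy multimodal objective, the Gaussian moves, and the geometric cooling schedule are all illustrative assumptions:

```python
import math, random

def f(x):
    return x ** 2 + 10 * math.sin(x)  # toy multimodal objective

x, T = random.uniform(-10, 10), 10.0
best = x
for _ in range(5000):
    cand = x + random.gauss(0, 1.0)   # random change (move)
    delta = f(cand) - f(x)
    # Accept improving moves always; accept worsening moves with
    # probability exp(-delta / T), which shrinks as T cools.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    if f(x) < f(best):
        best = x
    T = max(T * 0.999, 1e-3)          # geometric cooling schedule
print(best, f(best))
```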
Evolutionary Algorithms (EA): Metaheuristic optimization algorithm inspired by the process of evolution in nature. Uses genetic operators such as crossover and mutation to improve population of solutions.
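A minimal evolutionary-algorithm sketch with truncation selection, one-point crossover, and Gaussian mutation; the operators, rates, and toy objective are illustrative assumptions:

```python
import random

def f(x):
    return sum(v * v for v in x)  # toy objective to minimize

DIM, POP, GENS = 5, 30, 200
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=f)
    parents = pop[: POP // 2]              # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, DIM)     # one-point crossover
        child = a[:cut] + b[cut:]
        child = [v + random.gauss(0, 0.1) for v in child]  # mutation
        children.append(child)
    pop = parents + children
best = min(pop, key=f)
print(best, f(best))
```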
Particle Swarm Optimization (PSO): Metaheuristic algorithm inspired by the behavior of social animals. Maintains a population of particles that move through the search space, influenced by each particle's personal best and the swarm's global best, trying to converge to the global optimum.
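A minimal PSO sketch; the inertia and attraction coefficients (w, c1, c2) are common textbook values used here as illustrative assumptions, as is the toy objective:

```python
import random

def f(x):
    return sum(v * v for v in x)  # toy objective to minimize

DIM, N, STEPS = 5, 20, 300
w, c1, c2 = 0.7, 1.5, 1.5
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]            # personal bests
gbest = min(pbest, key=f)              # global best
for _ in range(STEPS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity: inertia + pull toward personal and global bests.
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
print(gbest, f(gbest))
```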
What is the main difference between metaheuristics and regular optimization methods?
Metaheuristics give no guarantee of finding the global optimum or even a good local one, unlike gradient-based methods such as gradient descent, which usually come with some theoretical convergence guarantees.