Lec 3 | Optimization Flashcards
It is choosing the best option from a set of possible options
Optimization
It is a search algorithm that maintains a single node and searches by moving to a neighboring node. It is interested in finding the best answer to a question.
Local Search
Will local search give an optimal answer?
Local search will often bring us to an answer that is not optimal but “good enough”.
Although local search algorithms don’t always give the best possible solution, they can often give a good enough solution in situations where considering every possible state is computationally infeasible.
It is a function that we use to maximize the value of the solution.
Objective Function
It is a function that we use to minimize the cost of the solution
Cost Function
It is the state that is currently being considered by the function.
Current State
It is a state that the current state can transition to.
Neighbor State
How do local search algorithms work?
A local search algorithm works by considering one node in a current state, and then moving the node to one of the current state’s neighbors.
It is one type of local search algorithm, in which neighbor states are compared to the current state, and if any of them is better, we change the current node from the current state to that neighbor state.
Hill Climbing
What is the pseudocode for Hill Climbing?
function Hill-Climb(problem):
    current = initial state of problem
    repeat:
        neighbor = best valued neighbor of current
        if neighbor not better than current:
            return current
        current = neighbor
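The pseudocode above can be sketched in Python. The `value` and `neighbors` helpers below are illustrative assumptions, not part of the original: here a state is just an integer, and we maximize a one-peaked objective.

```python
def value(state):
    # Toy objective with a single peak at state = 7.
    return -(state - 7) ** 2

def neighbors(state):
    # Each state can transition to the states one step away.
    return [state - 1, state + 1]

def hill_climb(start):
    current = start
    while True:
        # Pick the best-valued neighbor of the current state.
        neighbor = max(neighbors(current), key=value)
        # If no neighbor is better, stop: we are at a (possibly local) maximum.
        if value(neighbor) <= value(current):
            return current
        current = neighbor

print(hill_climb(0))  # climbs 0 → 1 → … → 7
```

Because this toy landscape has a single peak, hill climbing always finds it; on a landscape with several peaks, the same loop would stop at whichever local maximum is nearest the start.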
It is short-sighted, often settling for solutions that are better than some others, but not necessarily the best of all possible solutions.
hill climbing algorithm
It is a state that has a higher value than its neighboring states
Local Maximum/Maxima
It is a state that has the highest value of all states in the state-space.
Global Maximum
It is a state that has a lower value than its neighboring states.
Local Minimum/Minima
It is a state that has the lowest value of all states in the state-space.
Global Minimum
It is where multiple states of equal value are adjacent, forming a plateau whose neighbors have a worse value.
Flat local maximum/minimum
It is where multiple states of equal value are adjacent, forming a plateau whose neighbors can be both better and worse.
shoulder
What is the problem when using the hill climbing algorithm?
The problem with hill climbing algorithms is that they may end up in local minima and maxima. What all variations of the algorithm have in common is that, no matter the strategy, each one still has the potential of ending up in local minima and maxima and no means to continue optimizing.
What are the Hill Climbing Variants?
Steepest-ascent, stochastic, first-choice, random-restart, and local beam search
Hill Climbing Variant
It chooses the highest-valued neighbor. It is the standard variation.
Steepest-ascent
Hill Climbing Variant
It chooses randomly from higher-valued neighbors.
Stochastic
Hill Climbing Variant
It chooses the first higher-valued neighbor.
First-choice
Hill climbing Variant
It conducts hill climbing multiple times, each time starting from a random state. It then compares the maxima from every trial and chooses the highest among them.
Random-restart
Hill climbing variant
It chooses the k highest-valued neighbors. It uses multiple nodes for the search, and not just one
Local Beam Search
It allows the algorithm to “dislodge” itself if it gets stuck in a local maximum.
Simulated Annealing
It is the process of heating metal and allowing it to cool slowly
Annealing
What is the pseudocode for simulated annealing?
function Simulated-Annealing(problem, max):
    current = initial state of problem
    for t = 1 to max:
        T = Temperature(t)
        neighbor = random neighbor of current
        ΔE = how much better neighbor is than current
        if ΔE > 0:
            current = neighbor
        with probability e^(ΔE/T) set current = neighbor
    return current
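A minimal Python sketch of the pseudocode above. The objective and the temperature schedule are illustrative assumptions: states are integers, we maximize `value`, and temperature decays linearly toward a small floor.

```python
import math
import random

def value(state):
    # Toy objective with a single peak at state = 7.
    return -(state - 7) ** 2

def temperature(t, max_steps):
    # Linear cooling schedule with a small floor (an assumption; any
    # decreasing schedule fits the pseudocode).
    return max(0.01, 1 - t / max_steps)

def simulated_annealing(start, max_steps=10_000):
    current = start
    for t in range(1, max_steps + 1):
        T = temperature(t, max_steps)
        neighbor = current + random.choice([-1, 1])
        delta_e = value(neighbor) - value(current)  # how much better the neighbor is
        # Always accept an improvement; accept a worse neighbor with
        # probability e^(ΔE/T), which shrinks as the temperature cools.
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = neighbor
    return current

random.seed(0)
print(simulated_annealing(0))
```

Early on, when T is high, the algorithm freely accepts worse states and can escape local maxima; late in the run, e^(ΔE/T) is near zero for any worse move, so it behaves like ordinary hill climbing.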
The task is to connect all points while choosing the shortest possible distance. In this case, a neighbor state might be seen as a state where two arrows swap places. Calculating every possible combination makes this problem computationally demanding. By using the simulated annealing algorithm, a good solution can be found for a lower computational cost
Travelling Salesman Problem
It is a family of problems that optimize a linear equation (an equation of the form y = ax₁ + bx₂ + …).
Linear Programming
What are the components of Linear Programming?
- A cost function that we want to minimize: c₁x₁ + c₂x₂ + … + cₙxₙ. Here, each xᵢ is a variable, and it is associated with some cost cᵢ.
- A constraint that’s represented as a sum of variables that is either less than or equal to a value (a₁x₁ + a₂x₂ + … + aₙxₙ ≤ b) or precisely equal to this value (a₁x₁ + a₂x₂ + … + aₙxₙ = b). In this case, xᵢ is a variable, aᵢ is some resource associated with it, and b is how many resources we can dedicate to this problem.
- Individual bounds on variables (for example, that a variable can’t be negative) of the form lᵢ ≤ xᵢ ≤ uᵢ.
What algorithms can we use in Linear Programming?
Simplex and Interior-Point.
These are a class of problems where variables need to be assigned values while satisfying some conditions
Constraint Satisfaction problems
What are the properties of constraint satisfaction problems?
- Set of variables (x₁, x₂, …, xₙ)
- Set of domains for each variable {D₁, D₂, …, Dₙ}
- Set of constraints C
terms worth knowing about constraint satisfaction problems:
It is a constraint that must be satisfied in a correct solution.
Hard Constraint
A few more terms worth knowing about constraint satisfaction problems:
It is a constraint that expresses which solution is preferred over others
Soft Constraint
A few more terms worth knowing about constraint satisfaction problems:
It is a constraint that involves only one variable. An example of this constraint would be saying that course A can’t have an exam on Monday {A ≠ Monday}.
Unary Constraint
A few more terms worth knowing about constraint satisfaction problems:
It is a constraint that involves two variables. This is the type of constraint that we used in the example above, saying that some two courses can’t have the same value {A ≠ B}.
Binary Constraint
It is when all the values in a variable’s domain satisfy the variable’s unary constraints.
Node Consistency
It is when all the values in a variable’s domain satisfy the variable’s binary constraints (note that we are now using “arc” to refer to what we previously referred to as “edge”).
Arc Consistency
What is the pseudocode that makes a variable arc-consistent with respect to some other variable?
function Revise(csp, X, Y):
    revised = false
    for x in X.domain:
        if no y in Y.domain satisfies constraint for (X, Y):
            delete x from X.domain
            revised = true
    return revised
What is the pseudocode for the algorithm called AC-3, which uses Revise:
function AC-3(csp):
    queue = all arcs in csp
    while queue non-empty:
        (X, Y) = Dequeue(queue)
        if Revise(csp, X, Y):
            if size of X.domain == 0:
                return false
            for each Z in X.neighbors - {Y}:
                Enqueue(queue, (Z, X))
    return true
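Revise and AC-3 can be sketched together in Python. The CSP representation is an assumption for illustration: `domains` maps each variable to a set of values, and `constraints` maps each arc (X, Y) to a predicate that is true when x and y are compatible.

```python
from collections import deque

def revise(domains, constraints, X, Y):
    """Make X arc-consistent with respect to Y; return True if X's domain shrank."""
    revised = False
    for x in set(domains[X]):  # iterate over a copy so we can delete safely
        # Delete x if no y in Y's domain satisfies the (X, Y) constraint.
        if not any(constraints[(X, Y)](x, y) for y in domains[Y]):
            domains[X].remove(x)
            revised = True
    return revised

def ac3(domains, constraints):
    queue = deque(constraints)  # start with every arc in the CSP
    while queue:
        X, Y = queue.popleft()
        if revise(domains, constraints, X, Y):
            if not domains[X]:
                return False  # an empty domain means no solution exists
            # X's domain changed, so re-check every arc pointing at X
            # (except the one coming from Y).
            for (Z, W) in constraints:
                if W == X and Z != Y:
                    queue.append((Z, X))
    return True

# Example: A and B must differ; B and C must be equal.
domains = {"A": {1}, "B": {1, 2}, "C": {1, 2}}
constraints = {
    ("A", "B"): lambda a, b: a != b, ("B", "A"): lambda b, a: b != a,
    ("B", "C"): lambda b, c: b == c, ("C", "B"): lambda c, b: c == b,
}
ac3(domains, constraints)
print(domains)  # → {'A': {1}, 'B': {2}, 'C': {2}}
```

In the example, enforcing (B, A) removes 1 from B's domain, which then propagates along (C, B) and removes 1 from C's domain as well.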
A constraint satisfaction problem can be seen as a search problem:
- Initial state: empty assignment (all variables don’t have any values assigned to them).
- Actions: add a {variable = value} to assignment; that is, give some variable a value.
- Transition model: shows how adding an assignment changes the assignment. There is not much depth to this: the transition model returns the state that includes the assignment following the latest action.
- Goal test: check if all variables are assigned a value and all constraints are satisfied.
- Path cost function: all paths have the same cost. As we mentioned earlier, as opposed to typical search problems, optimization problems care about the solution and not the route to the solution.
It is a type of a search algorithm that takes into account the structure of a constraint satisfaction search problem. It is a recursive function that attempts to continue assigning values as long as they satisfy the constraints.
Backtracking Search
What is the pseudocode for backtracking search?
function Backtrack(assignment, csp):
    if assignment complete:
        return assignment
    var = Select-Unassigned-Var(assignment, csp)
    for value in Domain-Values(var, assignment, csp):
        if value consistent with assignment:
            add {var = value} to assignment
            result = Backtrack(assignment, csp)
            if result ≠ failure:
                return result
            remove {var = value} from assignment
    return failure
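A small Python sketch of backtracking search. The representation is an assumption: `variables` maps each variable to its domain, and `constraints` maps binary arcs (X, Y) to predicates.

```python
def consistent(var, value, assignment, constraints):
    # A value is consistent if it violates no constraint with an
    # already-assigned variable.
    for (X, Y), ok in constraints.items():
        if X == var and Y in assignment and not ok(value, assignment[Y]):
            return False
        if Y == var and X in assignment and not ok(assignment[X], value):
            return False
    return True

def backtrack(assignment, variables, constraints):
    if len(assignment) == len(variables):
        return assignment  # every variable has a value: solved
    # Select any unassigned variable (no ordering heuristic in this sketch).
    var = next(v for v in variables if v not in assignment)
    for value in variables[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(assignment, variables, constraints)
            if result is not None:
                return result
            del assignment[var]  # undo the assignment and try the next value
    return None  # failure: no value works for this variable

# Example: schedule three exams where A ≠ B and B ≠ C,
# and C can only be on Monday.
variables = {"A": ["Mon", "Tue"], "B": ["Mon", "Tue"], "C": ["Mon"]}
constraints = {
    ("A", "B"): lambda a, b: a != b,
    ("B", "C"): lambda b, c: b != c,
}
print(backtrack({}, variables, constraints))  # → {'A': 'Mon', 'B': 'Tue', 'C': 'Mon'}
```

Note how B = Mon is tried first, found inconsistent with A = Mon, and abandoned; that rejection-and-retry is the "backtracking".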
This algorithm will enforce arc-consistency after every new assignment of the backtracking search.
Maintaining Arc-Consistency algorithm.
What is the revised pseudocode for backtracking search if it maintains arc-consistency?
function Backtrack(assignment, csp):
    if assignment complete:
        return assignment
    var = Select-Unassigned-Var(assignment, csp)
    for value in Domain-Values(var, assignment, csp):
        if value consistent with assignment:
            add {var = value} to assignment
            inferences = Inference(assignment, csp)
            if inferences ≠ failure:
                add inferences to assignment
            result = Backtrack(assignment, csp)
            if result ≠ failure:
                return result
            remove {var = value} and inferences from assignment
    return failure
Heuristics
It is one such heuristic, where we select the variable with the fewest remaining values in its domain. The idea here is that if a variable’s domain was constricted by inference, and now it has only one value left (or even just two), then by making this assignment we will reduce the number of backtracks we might need to do later.
Minimum Remaining Values (MRV)
Heuristics
It relies on the degrees of variables, where a __________ is how many arcs connect a variable to other variables.
Degree
Heuristics
It is where we select the value that will constrain the fewest other variables.
Least Constraining Values heuristic
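The three heuristics above can be sketched as small selection functions. The CSP representation is an assumption: `domains` maps variables to candidate values, `neighbors` maps each variable to the variables it shares a constraint with, and the least-constraining-value count assumes "≠"-style constraints for simplicity.

```python
# Hypothetical example data, not from the original: exam days per course.
domains = {"A": ["Mon"], "B": ["Mon", "Tue"], "C": ["Mon", "Wed"]}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

def select_unassigned_variable(assignment):
    unassigned = [v for v in domains if v not in assignment]
    # MRV: fewest remaining values; the degree heuristic (most arcs)
    # breaks ties, hence the negated neighbor count.
    return min(unassigned, key=lambda v: (len(domains[v]), -len(neighbors[v])))

def order_domain_values(var, assignment):
    # Least-constraining value: prefer values that rule out the fewest
    # options in unassigned neighbors' domains (assuming "≠" constraints).
    def ruled_out(value):
        return sum(value in domains[n] for n in neighbors[var] if n not in assignment)
    return sorted(domains[var], key=ruled_out)

print(select_unassigned_variable({}))          # → "A" (only one value left)
print(order_domain_values("B", {"A": "Mon"}))  # → ['Tue', 'Mon']
```

"Tue" comes first for B because it appears in none of C's remaining values, so choosing it constrains C the least.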
CS50 QUIZ
For which of the following will you always find the same solution, even if you re-run the algorithm multiple times?
Assume a problem where the goal is to minimize a cost function, and every state in the state space has a different cost.
- Steepest-ascent hill-climbing, each time starting from a different starting state
- Steepest-ascent hill-climbing, each time starting from the same starting state
- Stochastic hill-climbing, each time starting from a different starting state
- Stochastic hill-climbing, each time starting from the same starting state
- Both steepest-ascent and stochastic hill climbing, so long as you always start from the same starting state
- Both steepest-ascent and stochastic hill climbing, each time starting from a different starting state
- No version of hill-climbing will guarantee the same solution every time
Steepest-ascent hill-climbing, each time starting from the same starting state
CS50 QUIZ
Consider this optimization problem:
A farmer is trying to plant two crops, Crop 1 and Crop 2, and wants to maximize his profits. The farmer will make $500 in profit from each acre of Crop 1 planted, and will make $400 in profit from each acre of Crop 2 planted.
However, the farmer needs to do all of his planting today, during the 12 hours between 7am and 7pm. Planting an acre of Crop 1 takes 3 hours, and planting an acre of Crop 2 takes 2 hours.
The farmer is also limited in terms of supplies: he has enough supplies to plant 10 acres of Crop 1 and enough supplies to plant 4 acres of Crop 2.
Assume the variable C1 represents the number of acres of Crop 1 to plant, and the variable C2 represents the number of acres of Crop 2 to plant.
What would be a valid objective function for this problem?
- 500 * C1 + 400 * C2
- 500 * 10 * C1 + 400 * 4 * C2
- 10 * C1 + 4 * C2
- -3 * C1 - 2 * C2
- C1 + C2
500 * C1 + 400 * C2
CS50 QUIZ
Consider the same optimization problem as in Question 2. What are the constraints for this problem?
- 3 * C1 + 2 * C2 <= 12; C1 <= 10; C2 <= 4
- 3 * C1 + 2 * C2 <= 12; C1 + C2 <= 14
- 3 * C1 <= 10; 2 * C2 <= 4
- C1 + C2 <= 12; C1 + C2 <= 14
3 * C1 + 2 * C2 <= 12; C1 <= 10; C2 <= 4
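As a sanity check of this objective and these constraints, a brute-force search over whole-acre plantings (an illustrative simplification; linear programming also allows fractional acres, which can do slightly better here):

```python
# Enumerate integer plantings of the farmer problem and keep the best.
best = None
for c1 in range(0, 5):        # at most 4 whole acres of Crop 1 fit in 12 hours
    for c2 in range(0, 5):    # supplies cap Crop 2 at 4 acres
        # The constraints: planting time, Crop 1 supplies, Crop 2 supplies.
        if 3 * c1 + 2 * c2 <= 12 and c1 <= 10 and c2 <= 4:
            profit = 500 * c1 + 400 * c2   # the objective function
            if best is None or profit > best[0]:
                best = (profit, c1, c2)

print(best)  # → (2200, 2, 3): plant 2 acres of Crop 1 and 3 of Crop 2
```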
CS50 QUIZ
I can’t put this here:
https://cs50.harvard.edu/ai/2024/quizzes/3/