Lec 3 | Optimization Flashcards

1
Q

It is choosing the best option from a set of possible options.

A

Optimization

2
Q

It is a search algorithm that maintains a single node and searches by moving to a neighboring node. It is interested in finding the best answer to a question.

A

Local Search

3
Q

Will local search give an optimal answer?

A

Local search often arrives at an answer that is not optimal but “good enough”.

Although local search algorithms don’t always give the best possible solution, they can often give a good enough solution in situations where considering every possible state is computationally infeasible.

4
Q

It is a function that we use to maximize the value of the solution.

A

Objective Function

5
Q

It is a function that we use to minimize the cost of the solution.

A

Cost Function

6
Q

It is the state that is currently being considered by the function.

A

Current State

7
Q

It is a state that the current state can transition to.

A

Neighbor State

8
Q

How do local search algorithms work?

A

Local search algorithms work by considering one node in a current state, and then moving the node to one of the current state’s neighbors.

9
Q

It is one type of local search algorithm, in which neighbor states are compared to the current state, and if any of them is better, we move from the current state to that neighbor state.

A

Hill Climbing

10
Q

What is the pseudocode for Hill Climbing?

A
function Hill-Climb(problem):
    current = initial state of problem
    repeat:
        neighbor = best valued neighbor of current
        if neighbor not better than current:
            return current
        current = neighbor
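
For reference, a minimal runnable Python sketch of the same loop; the problem interface used here (initial_state, neighbors, value) is a hypothetical assumption, not the lecture’s own API:

def hill_climb(problem):
    # Start at the problem's initial state.
    current = problem.initial_state
    while True:
        # Find the best-valued neighbor of the current state.
        neighbor = max(problem.neighbors(current), key=problem.value)
        # If no neighbor is better, stop at this (possibly local) optimum.
        if problem.value(neighbor) <= problem.value(current):
            return current
        current = neighbor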
11
Q

It is short-sighted, often settling for solutions that are better than some others, but not necessarily the best of all possible solutions.

A

Hill Climbing algorithm

12
Q

It is a state that has a higher value than its neighboring states.

A

Local Maximum/Maxima

13
Q

It is a state that has the highest value of all states in the state-space.

A

Global Maximum

14
Q

It is a state that has a lower value than its neighboring states.

A

Local Minimum/Minima

15
Q

It is a state that has the lowest value of all states in the state-space.

A

Global Minimum

16
Q

It is where multiple states of equal value are adjacent, forming a plateau whose neighbors all have a worse value.

A

Flat local maximum/minimum

17
Q

It is where multiple states of equal value are adjacent and the neighbors of the plateau can be both better and worse.

A

Shoulder

18
Q

What is the problem when using the hill climbing algorithm?

A

The problem with hill climbing algorithms is that they may end up at a local minimum or maximum rather than the global one. All variations of the algorithm share this weakness: no matter the strategy, each one can still get stuck at a local optimum with no means to continue optimizing.

19
Q

What are the Hill Climbing Variants?

A

Steepest-ascent, Stochastic, First-choice, Random-restart, and Local Beam Search.

20
Q

Hill Climbing Variant

It chooses the highest-valued neighbor. It is the standard variation.

A

Steepest-ascent

21
Q

Hill Climbing Variant

It chooses randomly from higher-valued neighbors.

A

Stochastic

22
Q

Hill Climbing Variant

It chooses the first higher-valued neighbor.

A

First-choice

23
Q

Hill climbing Variant

It conducts hill climbing multiple times, each time starting from a random state. It then compares the maxima from every trial and chooses the highest among them.

A

Random-restart

24
Q

Hill climbing variant

It chooses the k highest-valued neighbors. It uses multiple nodes for the search, not just one.

A

Local Beam Search

25
Q

It allows the algorithm to “dislodge” itself if it gets stuck in a local maximum.

A

Simulated Annealing

26
Q

It is the process of heating metal and allowing it to cool slowly.

A

Annealing

27
Q

What is the pseudocode for simulated annealing?

A
function Simulated-Annealing(problem, max):
    current = initial state of problem
    for t = 1 to max:
        T = Temperature(t)
        neighbor = random neighbor of current
        ΔE = how much better neighbor is than current
        if ΔE > 0:
            current = neighbor
        else with probability e^(ΔE/T):
            set current = neighbor
    return current
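
A minimal Python sketch of the same procedure; the problem interface (initial_state, random_neighbor, value) and the simple linear cooling schedule are assumptions, not the lecture’s exact choices:

import math
import random

def simulated_annealing(problem, max_steps):
    current = problem.initial_state
    for t in range(1, max_steps + 1):
        # Temperature falls toward zero as t grows; kept above zero
        # so the acceptance probability stays well defined.
        T = max(1e-3, 1 - t / max_steps)
        neighbor = problem.random_neighbor(current)
        # Positive delta_e means the neighbor is better.
        delta_e = problem.value(neighbor) - problem.value(current)
        if delta_e > 0:
            current = neighbor
        elif random.random() < math.exp(delta_e / T):
            # Occasionally accept a worse neighbor to escape local maxima.
            current = neighbor
    return current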
28
Q

The task is to connect all points while keeping the total distance travelled as short as possible. In this case, a neighbor state might be seen as a state where two connections (edges) swap places. Calculating every possible combination makes this problem computationally demanding; by using the simulated annealing algorithm, a good solution can be found at a lower computational cost.

A

Travelling Salesman Problem
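
As a concrete (hypothetical) encoding of the neighbor relation, a tour can be a list of cities and a neighbor a tour with two positions swapped:

import random

def random_neighbor(tour):
    # Swap two randomly chosen cities to get a neighboring tour.
    i, j = random.sample(range(len(tour)), 2)
    neighbor = list(tour)
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor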

29
Q

It is a family of problems that optimize a linear equation (an equation of the form y = ax₁ + bx₂ + …).

A

Linear Programming

30
Q

What are the components of Linear Programming?

A
  • A cost function that we want to minimize: c₁x₁ + c₂x₂ + … + cₙxₙ. Here, each xᵢ is a variable, and it is associated with some cost cᵢ.
  • A constraint that’s represented as a sum of variables that is either less than or equal to a value (a₁x₁ + a₂x₂ + … + aₙxₙ ≤ b) or precisely equal to this value (a₁x₁ + a₂x₂ + … + aₙxₙ = b). In this case, xᵢ is a variable, aᵢ is some resource associated with it, and b is how much of the resource we can dedicate to this problem.
  • Individual bounds on variables (for example, that a variable can’t be negative) of the form lᵢ ≤ xᵢ ≤ uᵢ.
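
These three components map directly onto an off-the-shelf solver. A hedged example with scipy.optimize.linprog, where the coefficients are made up for illustration:

from scipy.optimize import linprog

# Cost function to minimize: 50*x1 + 80*x2
# Constraint: 5*x1 + 2*x2 <= 20
# Constraint: 10*x1 + 12*x2 >= 90, rewritten as -10*x1 - 12*x2 <= -90
# Individual bounds: x1 >= 0 and x2 >= 0
result = linprog(
    c=[50, 80],
    A_ub=[[5, 2], [-10, -12]],
    b_ub=[20, -90],
    bounds=[(0, None), (0, None)],
)
if result.success:
    print(result.x)  # optimal values of x1 and x2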
31
Q

What algorithms can we use in Linear Programming?

A

Simplex and Interior-Point.

32
Q

These are a class of problems where variables need to be assigned values while satisfying some conditions.

A

Constraint Satisfaction Problems

33
Q

What are the properties of constraint satisfaction problems?

A
  • Set of variables (x₁, x₂, …, xₙ)
  • Set of domains for each variable {D₁, D₂, …, Dₙ}
  • Set of constraints C
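
One hypothetical way to hold these three properties in Python (this encoding is also assumed by the sketches on the following cards):

# Variables, a domain for each, and binary "must differ" constraints.
variables = ["A", "B", "C"]
domains = {v: {"Mon", "Tue", "Wed"} for v in variables}
constraints = [("A", "B"), ("B", "C")]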
34
Q

A few terms worth knowing about constraint satisfaction problems:

It is a constraint that must be satisfied in a correct solution.

A

Hard Constraint

35
Q

A few more terms worth knowing about constraint satisfaction problems:

It is a constraint that expresses which solution is preferred over others

A

Soft Constraint

36
Q

A few more terms worth knowing about constraint satisfaction problems:

It is a constraint that involves only one variable. An example of this constraint would be saying that course A can’t have an exam on Monday {A ≠ Monday}.

A

Unary Constraint

37
Q

A few more terms worth knowing about constraint satisfaction problems:

It is a constraint that involves two variables. An example of this constraint would be saying that some two courses can’t have the same value {A ≠ B}.

A

Binary Constraint

38
Q

It is when all the values in a variable’s domain satisfy the variable’s unary constraints.

A

Node Consistency

39
Q

It is when all the values in a variable’s domain satisfy the variable’s binary constraints (note that we are now using “arc” to refer to what we previously referred to as “edge”).

A

Arc Consistency

40
Q

What is the pseudocode that makes a variable arc-consistent with respect to some other variable?

A
function Revise(csp, X, Y):
    revised = false
    for x in X.domain:
        if no y in Y.domain satisfies constraint for (X, Y):
            delete x from X.domain
            revised = true
    return revised
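
A Python transcription of Revise, assuming the dict-of-sets domains from the earlier sketch and a binary “must differ” constraint between X and Y:

def revise(domains, x, y):
    # Remove any value of x that no value of y is compatible with.
    revised = False
    for vx in set(domains[x]):
        if not any(vx != vy for vy in domains[y]):
            domains[x].remove(vx)
            revised = True
    return revised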
41
Q

What is the pseudocode for the algorithm called AC-3, which uses Revise?

A
function AC-3(csp):
    queue = all arcs in csp
    while queue non-empty:
        (X, Y) = Dequeue(queue)
        if Revise(csp, X, Y):
            if size of X.domain == 0:
                return false
            for each Z in X.neighbors - {Y}:
                Enqueue(queue, (Z, X))
    return true
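
A matching Python sketch of AC-3, reusing the revise function above; arcs is assumed to be a list of (X, Y) variable pairs that share a constraint:

from collections import deque

def ac3(domains, arcs):
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            # If x has no values left, the CSP is unsolvable.
            if not domains[x]:
                return False
            # x changed, so re-check every arc pointing at x (except from y).
            for z, w in arcs:
                if w == x and z != y:
                    queue.append((z, x))
    return True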
42
Q

A constraint satisfaction problem can be seen as a search problem:

A
  • Initial state: empty assignment (all variables don’t have any values assigned to them).
  • Actions: add a {variable = value} to assignment; that is, give some variable a value.
  • Transition model: shows how adding the assignment changes the assignment. There is not much depth to this: the transition model returns the state that includes the assignment following the latest action.
  • Goal test: check if all variables are assigned a value and all constraints are satisfied.
  • Path cost function: all paths have the same cost. As we mentioned earlier, as opposed to typical search problems, optimization problems care about the solution and not the route to the solution.
43
Q

It is a type of search algorithm that takes into account the structure of a constraint satisfaction search problem. It is a recursive function that attempts to continue assigning values as long as they satisfy the constraints.

A

Backtracking Search

44
Q

What is the pseudocode for backtracking search?

A
function Backtrack(assignment, csp):
    if assignment complete:
        return assignment
    var = Select-Unassigned-Var(assignment, csp)
    for value in Domain-Values(var, assignment, csp):
        if value consistent with assignment:
            add {var = value} to assignment
            result = Backtrack(assignment, csp)
            if result ≠ failure:
                return result
            remove {var = value} from assignment
    return failure
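
A minimal Python version under the same hypothetical CSP encoding (binary “must differ” constraints); no variable- or value-ordering heuristics are applied:

def consistent(var, value, assignment, constraints):
    # A value is consistent if no already-assigned neighbor shares it.
    for a, b in constraints:
        if a == var and assignment.get(b) == value:
            return False
        if b == var and assignment.get(a) == value:
            return False
    return True

def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):
        return assignment
    # Pick any unassigned variable (a heuristic could go here).
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
            del assignment[var]
    return None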
45
Q

This algorithm will enforce arc-consistency after every new assignment of the backtracking search.

A

Maintaining Arc-Consistency algorithm.

46
Q

What is the revised pseudocode for backtracking search if it maintains arc-consistency?

A
function Backtrack(assignment, csp):
    if assignment complete:
        return assignment
    var = Select-Unassigned-Var(assignment, csp)
    for value in Domain-Values(var, assignment, csp):
        if value consistent with assignment:
            add {var = value} to assignment
            inferences = Inference(assignment, csp)
            if inferences ≠ failure:
                add inferences to assignment
            result = Backtrack(assignment, csp)
            if result ≠ failure:
                return result
            remove {var = value} and inferences from assignment
    return failure
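
One hedged way to realize the Inference step in Python is to run the ac3 sketch from earlier on a copy of the domains after each assignment, then read off any domain reduced to a single value:

def inference(domains, arcs, var, value):
    # Clamp var to the chosen value, then enforce arc consistency.
    local = {v: set(d) for v, d in domains.items()}
    local[var] = {value}
    if not ac3(local, arcs):
        return None  # failure: some domain was emptied
    # Domains that shrank to one value are inferred assignments.
    return {v: next(iter(d)) for v, d in local.items() if len(d) == 1}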
47
Q

Heuristics

It is a variable-ordering heuristic. The idea is that if a variable’s domain was constricted by inference and it now has only one value left (or even just two), then by making this assignment first we will reduce the number of backtracks we might need to do later.

A

Minimum Remaining Values (MRV)
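
A small Python sketch of MRV as the variable-ordering rule inside backtracking search (the names are illustrative):

def select_unassigned_var(variables, domains, assignment):
    # MRV: choose the unassigned variable with the fewest remaining values.
    return min(
        (v for v in variables if v not in assignment),
        key=lambda v: len(domains[v]),
    )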

48
Q

Heuristics

It relies on the degrees of variables, where a __________ is how many arcs connect a variable to other variables.

A

Degree

49
Q

Heuristics

It is where we select the value that will constrain the other variables the least.

A

Least Constraining Values heuristic