Theory Flashcards

1
Q

What is Linear Programming (LP)?

A

Linear Programming is a mathematical method for solving optimization problems where the objective function and constraints are linear. The goal is to find the maximum or minimum of the objective function, subject to given constraints.

2
Q

What is the general form of an LP problem? (As used in the tableaux)

A

The general form of an LP problem is:
Minimize or maximize c^T x
subject to Ax ≤ b
and x ≥ 0
where A is a matrix of coefficients,
b is a vector of constraint bounds (right-hand sides),
c is a vector of costs or coefficients, and
x is a vector of decision variables

3
Q

What does the Fundamental Theorem of LP state?

A

The Fundamental Theorem of Linear Programming states that if an optimal solution exists, it will occur at an extreme point of the feasible region, which is the convex set defined by the constraints.
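The theorem suggests a naive but instructive solution method: enumerate the intersections of constraint boundaries (candidate extreme points), keep the feasible ones, and take the best. A minimal two-variable sketch; the problem data is a made-up example:

```python
from itertools import combinations

def solve_lp_by_vertices(A, b, c):
    """Maximize c^T x subject to A x <= b by brute-force vertex enumeration.
    Two variables only; illustrates the Fundamental Theorem of LP."""
    best_x, best_val = None, None
    # Each candidate vertex is the intersection of two constraint boundaries
    for (i, j) in combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if det == 0:          # parallel boundaries, no intersection point
            continue
        x = (b[i] * a4 - a2 * b[j]) / det
        y = (a1 * b[j] - b[i] * a3) / det
        # Keep only the intersection points that satisfy all constraints
        if all(r[0] * x + r[1] * y <= bk + 1e-9 for r, bk in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best_val is None or val > best_val:
                best_x, best_val = (x, y), val
    return best_x, best_val

# maximize 3x + 2y  s.t.  x <= 4, y <= 3, x + y <= 5, x >= 0, y >= 0
A = [[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]]
b = [4, 3, 5, 0, 0]
x, val = solve_lp_by_vertices(A, b, [3, 2])
print(x, val)  # optimum at the extreme point (4, 1) with value 14
```

This is exponential in the number of constraints; the Simplex method (later cards) visits extreme points selectively instead.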

4
Q

What is a convex set?

A

A set is convex if, for any two points within the set, the line segment connecting them lies entirely within the set.
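A sketch that demonstrates the definition numerically: sampling the segment between member points can disprove convexity. A single interval is convex; a union of two disjoint intervals (a made-up example) is not:

```python
def is_convex_by_sampling(contains, points, steps=10):
    """Heuristic convexity check: for every pair of given member points,
    sample the connecting segment and test membership. A failed sample
    proves non-convexity; passing is only evidence, not proof."""
    for p in points:
        for q in points:
            for k in range(steps + 1):
                t = k / steps
                mid = p + t * (q - p)   # a point on the segment p--q
                if not contains(mid):
                    return False
    return True

interval = lambda x: 0 <= x <= 3                 # a single interval: convex
union = lambda x: 0 <= x <= 1 or 2 <= x <= 3     # union of intervals: not convex

print(is_convex_by_sampling(interval, [0, 1.5, 3]))  # True
print(is_convex_by_sampling(union, [0.5, 2.5]))      # False: 1.5 lies outside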

5
Q

What is the Simplex Algorithm?

A

The Simplex Algorithm is a method for solving linear programming problems by moving along the edges of the feasible region to find the optimal extreme point (vertex) that maximizes or minimizes the objective function.

6
Q

What are slack variables in LP?

A

Slack variables are added to convert inequalities into equalities in a linear programming problem. They represent the difference between the left-hand side and the right-hand side of the inequality constraints.

7
Q

What is the dual problem in LP?

A

The dual problem in LP is derived from the primal problem and involves maximizing a function subject to constraints derived from the original problem. The optimal value of the dual problem is equal to the optimal value of the primal problem.

8
Q

What is an infeasible LP problem?

A

An LP problem is infeasible when no solution satisfies all of the constraints, meaning the feasible set is empty.

9
Q

What is an unbounded LP problem?

A

An LP problem is unbounded if the objective function can be made arbitrarily large (or small in minimization problems) without violating any of the constraints.

10
Q

What is computational complexity in the context of optimization?

A

Computational complexity refers to the difficulty of solving an optimization problem in terms of the time and resources required, often classified into categories such as P (polynomial time) and NP-hard (non-deterministic polynomial-time hard).

11
Q

What does it mean for a solution to be sound in optimization problems?

A

A solution is sound if every solution found by the algorithm is a valid solution, meaning it satisfies all the constraints of the problem.

12
Q

What does it mean for a solution to be complete in optimization problems?

A

A solution is complete if the algorithm can find a solution whenever one exists. In other words, it is guaranteed to find a solution if the problem is solvable.

13
Q

Can an algorithm be sound but not complete?

A

Yes, an algorithm can be sound but not complete. This means it only finds valid solutions, but it might not always find a solution even if one exists.

14
Q

Can an algorithm be complete but not sound?

A

No

If an algorithm is complete but not sound, it would mean it finds all possible solutions but also includes incorrect ones. In this case, it would not truly “find” the solutions in a meaningful way because it would lack confidence in the correctness of what it outputs. Essentially, its results would be unreliable, making it invalid as an algorithm for solving the problem, as algorithms are typically required to give correct answers.

In summary, an algorithm can be:

  • Sound but not complete (always correct but might miss some solutions),
  • Both sound and complete (correctly finds all solutions), or
  • Neither sound nor complete (may give incorrect answers and miss solutions).
15
Q

Is it possible for a solution to be neither sound nor complete?

A

Yes, it is possible for an algorithm to be neither sound nor complete, meaning it might produce invalid solutions and fail to find all valid solutions when they exist.

16
Q

LP: What does it mean when there are no negative values in the bottom row?

A

There are two cases when the c^T row has no negative values:
1. The origin is not in the feasible set, which happens if there are surplus variables
that take negative values at the origin; or
2. all variables have non-negative values at the current basic solution.
In the first case we need to run phase 1 to get into the feasible set. In the second case we cannot
improve the objective function value, and so we have found the optimum.

17
Q

When solving LP problems, we typically (but not necessarily) select the pivot column by the largest negative value in the bottom row. Why?

A

When solving, we typically let the pivot column be decided by the currently largest negative
value in the c^T row, as this variable affects the objective function value the most, and has a good
chance of finding the optimum while examining few extreme points.

18
Q

For the A* algorithm, we learned that the heuristic should be admissible and monotone, explain what those properties mean and why they are important

A

A heuristic is admissible if it never overestimates the true cost to reach the goal from any node in the search space. In other words, it always provides a cost estimate that is equal to or less than the actual lowest cost from the current node to the goal.

A heuristic is monotone (consistent) if h(a) <= Cost(a -> b) + h(b) for every edge (a, b); compare the triangle inequality.
Consistency ensures that as A* progresses, the total estimated cost along any path will only increase or stay the same, preventing the need to revisit nodes and making A* more efficient.

19
Q

If the heuristic cost at the goal node is 0, then one of those properties implies the other, which one and why?

A

Monotonicity implies admissibility (assuming that h(goal)=0)

monotonicity –> admissible
not admissible –> not monotone

Note: h can be admissible without being monotone
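A small sketch of the note above, using a made-up three-node graph: h1 is admissible and monotone, while h2 is admissible but not monotone, because it violates h(a) <= cost(a, b) + h(b) on one edge:

```python
def is_monotone(edges, h):
    """Check consistency: h(u) <= cost(u, v) + h(v) for every directed edge."""
    return all(h[u] <= c + h[v] for u, v, c in edges)

# Tiny example graph: a -> b (cost 1), b -> goal (cost 1), a -> goal (cost 3)
edges = [("a", "b", 1), ("b", "goal", 1), ("a", "goal", 3)]
# True costs to the goal: a = 2, b = 1

h1 = {"a": 2, "b": 1, "goal": 0}    # admissible and monotone
h2 = {"a": 2, "b": 0.5, "goal": 0}  # admissible, but not monotone:
                                    # h(a) = 2 > cost(a, b) + h(b) = 1.5
print(is_monotone(edges, h1))  # True
print(is_monotone(edges, h2))  # False
```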

20
Q

A* is said to be optimal, what does that really mean?

A

That A* is optimal means that no other algorithm using the same information can search fewer nodes than A* does.

21
Q

Describe the P vs NP problem

A

NP represents the class of problems for which given a solution, we can in polynomial (with respect to the size of the input) time verify that this is indeed a correct solution.

P represents the class of problems for which we can compute a solution in polynomial time. We know that P is included in NP, but is P = NP?
This we do not know. No one expects the equality to hold; if it did, our world would be a bleaker place, but no one knows for sure.

22
Q

There is something special with the set of problems that we call NP-complete. What is so special about them?

A

NP-complete problems are a special kind of NP problem that are also very hard (we call this NP-hard).

Here’s what’s interesting about them:

  • All NP-complete problems are connected: You can convert one NP-complete problem into another quickly (in a reasonable amount of time). This means that, in a way, all NP-complete problems are different versions of the same tough problem.
  • Why they matter for P vs. NP: Every problem in NP can be reduced to an NP-complete problem (though not every NP problem is itself NP-complete). So, if we could find a fast (polynomial-time) way to solve even one NP-complete problem, we would suddenly be able to solve all NP problems quickly. That would prove that P = NP.

NP problems are ones where, if you have a solution, you can quickly check it’s correct — but finding that solution from scratch can be very hard.

23
Q

What is the key difference between Mixed Integer Linear Programming (MILP) and Constraint Programming (CP)?

A

MILP focuses on optimizing a linear objective function with constraints that may include both continuous and integer variables, while CP focuses on satisfying constraints, typically used in combinatorial problems, without necessarily optimizing a linear objective function.

24
Q

How can you convert a model from MILP to CP?

A

To convert from MILP to CP, you replace the linear objective function with constraints and remove the requirement for an objective function. You may need to reformulate constraints to fit into CP’s structure of variable domains and satisfaction of constraints rather than optimization.

25
Q

In what situations is Mixed Integer Linear Programming (MILP) preferable over Constraint Programming (CP)?

A

MILP is preferable when the problem involves linear relationships and the goal is to optimize an objective function, especially in scenarios where precision is required, such as resource allocation, financial optimization, and logistics.

26
Q

In what situations is Constraint Programming (CP) preferable over Mixed Integer Linear Programming (MILP)?

A

CP is better suited for combinatorial problems, scheduling, and problems with complex constraints that are hard to express linearly, or where feasibility is the primary concern over optimization.

27
Q

What is the A* algorithm, and where is it typically used?

A

The A* algorithm is a heuristic search algorithm used to find the shortest path in graph-based problems. It combines the actual cost from the start with an estimated cost to the goal, making it efficient for pathfinding and routing.

28
Q

What is Dijkstra’s algorithm, and how does it differ from A*?

A

Dijkstra’s algorithm is a graph search algorithm that finds the shortest path from a start node to all other nodes by expanding the least costly paths first. Unlike A*, it doesn’t use a heuristic and explores nodes based purely on path cost.

29
Q

What are heuristic algorithms, and why are they important in optimization?

A

Heuristic algorithms are problem-solving methods that use practical approaches to finding good-enough solutions efficiently, especially when exact methods are computationally expensive. They are important for solving large, complex problems quickly.

30
Q

What does it mean if a linear programming problem is infeasible?

A

A problem is infeasible if no satisfiable assignment exists, meaning the feasible set spanned by the constraints and domains is empty due to conflicting constraints.

31
Q

Give an example of an infeasible linear programming problem.

A

Objective: Maximize 𝑧 = 𝑥 + 𝑦
Subject to the constraints:
𝑥 + 𝑦 ≤ 1
𝑥 + 𝑦 ≥ 3
𝑥 ≥ 0
𝑦 ≥ 0

32
Q

What does it mean if a linear programming problem is unbounded?

A

A problem is unbounded when there is no bounded solution, meaning the objective function can grow arbitrarily large or small because the constraints are not tight enough to limit the solution space.

33
Q

Give an example of an unbounded linear programming problem.

A

An example of an unbounded problem is:
maximize x1 + 3*x2
subject to x1+x2 >= 4,
x1, x2 >= 0
Here, there is no upper limit on the values x1 and x2 can take, making the objective unbounded.

34
Q

What does it mean for a solution to be feasible in linear programming?

A

A solution is feasible if it satisfies all the constraints, meaning the solution lies within the feasible region defined by the constraints.

35
Q

What is a unique feasible solution in linear programming?

A

A unique feasible solution occurs when there is only one optimal solution, which appears at an extreme point where constraints intersect.

36
Q

What is a non-unique feasible solution in linear programming?

A

A non-unique feasible solution occurs when there are infinitely many optimal solutions along a constraint boundary, typically when the objective function is parallel to one of the constraints.

37
Q

Can a non-unique solution still include extreme points?

A

Yes, even in a non-unique solution, at least one solution will occur at an extreme point of the feasible region.

38
Q

When is a feasible set convex?

A

A feasible set is convex if all constraints C of the optimization problem are linear (each linear constraint defines a half-space, and an intersection of half-spaces is convex)

39
Q

When is a function unimodal?

A

A function f(x) is unimodal (over an interval) if there exists an m (the mode) such that f(x) strictly (monotonically) increases for x < m and strictly decreases for x > m (over the interval).

40
Q

When is every local optimum also a global optimum?

A

If for an optimization problem
* the objective function is unimodal, and
* the constraints define a convex feasible set

41
Q

What is the Simplex method in linear programming?

A

The Simplex method is an iterative algorithm used to find the optimal solution to a linear programming problem by moving along the edges of the feasible region to reach an extreme point that optimizes the objective function.

42
Q

Where does the Simplex method start when solving a problem?

A

The Simplex method starts at a feasible extreme point, typically the origin, where all decision variables are zero (if it satisfies the constraints).

43
Q

How does the Simplex method move between solutions?

A

The Simplex method moves from one extreme point (vertex) of the feasible region to another by increasing or decreasing decision variables, following the direction that improves the objective function the most.

44
Q

How does the Simplex method decide which variable to increase or decrease?

A

The method selects the non-basic variable with the most negative coefficient in the objective function (for maximization problems) to increase, because increasing it will improve the objective function the fastest.

45
Q

How does the Simplex method determine when it has found the optimal solution?

A

The Simplex method stops when there are no negative coefficients left in the row of the objective function (for maximization problems), indicating that increasing any variable will no longer improve the objective value.
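Cards 41–45 can be condensed into a small tableau implementation. A minimal sketch for maximization problems already in the form max c^T x, Ax ≤ b, x ≥ 0 with b ≥ 0 (so the origin is a feasible starting point and no phase 1 is needed); the problem data at the bottom is a made-up example:

```python
def simplex(c, A, b):
    """Tableau Simplex for: maximize c^T x s.t. A x <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # Tableau rows: [A | slack identity | b], plus the objective row (-c)
    T = [A[i] + [1 if j == i else 0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-ci for ci in c] + [0] * m + [0])
    basis = list(range(n, n + m))            # slack variables start in the basis
    while True:
        # Entering variable: most negative reduced cost in the bottom row
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= 0:
            break                            # no negative values left: optimal
        # Leaving variable: minimum ratio test keeps the solution feasible
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m)
                  if T[i][piv_col] > 1e-12]
        if not ratios:
            raise ValueError("unbounded problem")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for r in range(m + 1):               # eliminate the column elsewhere
            if r != piv_row and abs(T[r][piv_col]) > 1e-12:
                f = T[r][piv_col]
                T[r] = [a - f * bv for a, bv in zip(T[r], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# maximize 3x + 2y  s.t.  x + y <= 4, x + 3y <= 6
x, val = simplex([3, 2], [[1, 1], [1, 3]], [4, 6])
print(x, val)  # optimum x = (4, 0) with value 12
```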

46
Q

Can the Simplex method handle both bounded and unbounded problems?

A

The Simplex method can handle bounded problems efficiently by finding the optimal solution at an extreme point, but for unbounded problems, the method will detect that the solution can be increased indefinitely without violating constraints.

47
Q

What is a pivot in the Simplex method?

A

In the Simplex method, a pivot is the step where we adjust the current solution to improve it and move closer to the optimal solution of a linear programming problem.

Here’s what a pivot involves:

  1. Choosing an entering variable: We select a variable that can increase or decrease to improve the objective function (e.g., profit or cost). This variable enters the basis (the set of variables we’re currently focusing on).
  2. Choosing a leaving variable: To keep the solution feasible (meeting all constraints), we need to adjust another variable to exit the basis. This is often chosen based on a rule that prevents any constraint from being violated.
  3. Swapping the variables: The entering variable takes the place of the leaving variable, and we recalculate the values of all variables based on this new setup.
48
Q

What good is relaxation?

A
  • shows infeasibility (if the relaxed problem doesn’t have a solution, then the original problem won’t either)
  • gives bounds on the true optimal solution (lower bounds for min, upper bounds for max)
  • rounded solutions may be good enough
  • LP relaxation disregards the discrete domain constraints and treats discrete variables as real values
  • if the relaxed optimum is also feasible for the original problem, then it is the optimum of the original problem
48
Q

A unimodular matrix is:

A
  • Square
  • Integer
  • Determinant ±1

Note: Square and non-zero determinant means that the matrix is invertible
Note: Integer and determinant ±1 means that the inverse matrix is also integer

49
Q

Totally unimodular matrix

A
  • not necessarily square, but
  • every square, non-singular sub-matrix is unimodular

A sufficient condition: A is totally unimodular if
  1. Each element of A is either −1, 0, or 1
  2. There are at most two non-zero elements in each column (row)
  3. The columns (rows) can be partitioned into M1, M2, such that:
(a) for two non-zero elements of equal sign in the same column (row), one is in M1, the other in M2
(b) for two non-zero elements of different signs in the same column (row), both are in M1, or both in M2

If A is totally unimodular, then the optimal solution is integer, since then for every basic solution B, x = B^(−1) b is integer (whenever b is integer).
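Total unimodularity can be checked by brute force on small matrices, directly from the definition. A minimal sketch (exponential in matrix size, so only for hand-sized examples like those on these cards):

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion; fine for tiny matrices."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    """Check that every square sub-matrix has determinant -1, 0, or 1."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular([[1, 1, 0], [0, 1, 1]]))   # True (interval matrix)
print(is_totally_unimodular([[1, 1], [-1, 1]]))        # False: determinant 2
```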

50
Q

IP summarized

A

The equation Ax = b, with A non-square, has many solutions (typically).

Such solutions are given by
invertible m × m sub-matrices of A.
The optimization problem now involves finding one solution that minimizes the objective function cT x.

If the relaxed (LP) problem happened to give a solution where also x ∈ N+, then this is a solution to the IP problem.

If A is totally unimodular, then every invertible sub-matrix is unimodular, so its inverse exists and is integer. Thus, if A is totally unimodular and b is integer, then all the basic solutions will be integer.

51
Q

Formulating LP/IP/MILP problems

A
  1. Define what seems like the necessary variables
  2. Define a set of constraints so the feasible set is closed (and preferably convex)
  3. Define the objective function
    If difficulties arise, define additional or alternative sets of variables and iterate.

Be picky!
Remember, we are formulating the problem to be solved by a computer. There is no hand-wavy “nudge, nudge, know what I mean, eh?”

52
Q

When is Less More? (Variables in a model)

A

An important notable exception, where we want fewer rather than more variables, is highly symmetric problems, where the symmetry is best removed. See Example (10.2)

53
Q

A CSP can be (states)

A
  • consistent, a solution (or more than one) exists
  • inconsistent, no solution exists
  • solved, the domains are singletons and together a solution to the CSP
  • failed, one (or more) empty domains, or only singleton domains that are together not a solution to the CSP

A solution is a domain.
A failed CSP is inconsistent.

54
Q

Solving CSPs revolves around different notions of consistency:

A
  • Node consistency (aka domain consistency)
  • Arc consistency (aka binary consistency)
  • Directed arc consistency (fwd, bwd)
  • Hyper-arc consistency
  • Bounds consistency

The overall aim is to tighten the bounds on the problem, until only a solution is left.
Most of the procedures are rather straightforward, yet applying them iteratively leads to powerful solution procedures.

55
Q

Notes on linear inequality constraints domain reduction:

A
  • The reduction needs to be iterated until they “stabilize” (a fix-point is reached)
  • The calculated domain bounds may or may not result in domain reduction
    If the calculated bounds are less constraining than the original bounds, then no reduction.
56
Q

Constraint propagation

A

Constraint propagation is a technique used in constraint programming to reduce the search space by deducing variable constraints and eliminating values that cannot be part of any valid solution, thereby simplifying the problem before or during the search process.

Variables depend on each other by being part of the same constraint(s): if one variable’s domain changes, then the others’ domains may change too

(there is a good example on page 136)

57
Q

Types of constraints

A

Unary Constraints: Restrictions applied to a single variable (e.g., x > 5).

Binary Constraints: Constraints involving two variables (e.g., x + y ≤ 10).

Global Constraints: Involve multiple variables and capture common patterns, such as AllDifferent, which requires all variables to have different values.

Linear Constraints: Expressed as a linear equation or inequality (e.g., 2x + 3y ≤ 15).

Nonlinear Constraints: Involve nonlinear relationships between variables (e.g., x^2 + y^2 ≤ 25).

Logical Constraints: Use logical operators (AND, OR, NOT) to link constraints (e.g., x > 5 AND y < 3).

Time/Temporal Constraints: Apply to scheduling problems, where variables are bound by timing constraints (e.g., start time, end time).

Resource Constraints: In resource allocation, these limit how much of a resource can be used (e.g., limited machine hours in production).

Mutual Exclusion Constraints: Specify that certain variables or values cannot coexist (e.g., Investment A and Investment B cannot both be selected).

58
Q

Define the dual problem (equation)

A

max b^T y, subject to A^T y ≤ c, y >= 0
(the dual of the primal min c^T x, subject to Ax ≥ b, x ≥ 0)
The dual swaps the roles of constraints and decision variables:
Original: m constraints, n decision variables –> Dual: n cons., m dec. var.
Has the same optimal value as the original problem (strong duality)

59
Q

Costs in discrete optimization can be:

A

uniform, all costs equal
non-uniform

60
Q

Mixed Integer Linear Programming

A
  • Involves both discrete & continuous variables (linear constraints)
  • Real x ∈ R^m, discrete z ∈ Z^n
61
Q

Big M

A
  • Used if we want either one constraint or another to be true
  • M is a large enough number
  • Relax one constraint with M·γ and the other with M·(1−γ), where γ is a binary variable (1 or 0); whichever constraint gets the M term is effectively switched off
62
Q

Graph search algorithms are:

A

Branch - Take a decision (e.g., go left or right in the search tree)

Bound - Compare the current cost with the best seen so far; branches can be cut off if we know costs are monotonically increasing

63
Q

Breadth first

A
  • Uninformed
  • Explores possible paths level by level
  • The first time we see the goal, it’s via the shortest path in number of edges (not guaranteed if depth-first search is used)
64
Q

Informed algorithms

A

Dijkstra’s and A*

65
Q

What information do Dijkstra’s and A* use?

A

Dijkstra’s:
- current cost (nodes are picked based on this)

A*:
- Current cost
- Estimate to reach the goal (heuristic)

66
Q

Describe Dijkstra’s algorithm

A
  1. Start at beginning
  2. Assign beginning = 0 and all other nodes = inf
  3. Visit the nearest unvisited node & check its neighboring nodes
    If you can reach a neighbor with a shorter distance than previously known –> update its distance
  4. Mark current node as visited
  5. Move to unvisited node with shortest path
  6. Repeat until all nodes are visited or goal is reached
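The steps above can be sketched with a priority queue; the comments map back to the step numbers, and the graph is a made-up example:

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> list of (neighbor, cost). Returns the shortest cost."""
    dist = {start: 0}                   # step 2: start = 0, others implicitly inf
    heap = [(0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)   # step 5: unvisited node with shortest path
        if node in visited:
            continue
        if node == goal:                # step 6: stop when the goal is reached
            return d
        visited.add(node)               # step 4: mark current node as visited
        for nb, cost in graph.get(node, []):   # step 3: relax the neighbors
            nd = d + cost
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return float("inf")

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("g", 6)], "b": [("g", 1)]}
print(dijkstra(graph, "s", "g"))  # 4: s -> a -> b -> g
```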
67
Q

Describe A*

A
  1. Start at the beginning
  2. Visit the node with lowest total cost (distance so far + estimate)
  3. For this node, check its neighbors; update a neighbor if the new path is shorter (less costly) than previously known, and also estimate its remaining distance
  4. Keep track of which node came from which (parent and children)
  5. Repeat visiting the node with the lowest total cost, continuing until the goal is reached

A* is faster since it’s using a heuristic to guess which path is best

It’s a difference between finding the optimum and being optimal - A* is optimal!
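The steps above can be sketched on a small grid, using the Manhattan distance as the heuristic (admissible and monotone for unit-cost grid moves); the grid size and walls are made-up examples:

```python
import heapq

def astar(start, goal, walls, size):
    """A* on a size x size grid, 4-neighborhood, unit step cost."""
    def h(p):                           # heuristic: Manhattan distance to goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    heap = [(h(start), 0, start)]       # (total estimate, cost so far, node)
    best = {start: 0}
    while heap:
        f, g, node = heapq.heappop(heap)    # visit lowest total cost first
        if node == goal:
            return g
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < size and 0 <= nb[1] < size and nb not in walls:
                ng = g + 1
                if ng < best.get(nb, float("inf")):   # shorter than known before
                    best[nb] = ng
                    heapq.heappush(heap, (ng + h(nb), ng, nb))
    return None

walls = {(1, 0), (1, 1)}    # a small wall forcing a detour
print(astar((0, 0), (2, 0), walls, size=3))  # 6
```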

68
Q

Heuristic

A

E.g., the straight-line distance between the current point & the goal

69
Q

CP model formulation

A

<X, D, C>
X - set of decision variables
D - set of domains, one domain Di for each variable xi
C - set of global constraints

70
Q

When is X bound, inconsistent and free?

A

bound:
|Di| = 1 (the domain contains a single value)

inconsistent:
|Di| = 0 (the domain is empty)

free:
|Di| >= 2 (more than one value)

71
Q

CP
Constraints are defined by:

A

intension:
defining the constraint as an expression over the variables (e.g., x != y)

extension:
defining the constraint by listing the valid tuples

72
Q

Several feasible solutions is called

A

The solution set

73
Q

CSP feasible solution

A

(p.110)
Given a CSP ⟨X, D, C⟩, a solution δ ⊆ D is feasible if
⋀_{c ∈ C} c(δ) is true (i.e., every constraint in C is satisfied by δ)

74
Q

Inequality constraints

A

Σ_{i ∈ POS} a_i·x_i − Σ_{i ∈ NEG} a_i·x_i ≤ b

  • ai positive integer for i ∈ POS U NEG
  • xi and xj are different variables for i,j ∈ POS U NEG and i != j
  • b is integer

POS is the set of indices for the variables with positive contribution to the constraint

NEG is the set of indices for the variables with negative contribution to the constraint

The constraint is then said to be normalized. Assume the domains are integer intervals
xi ∈ min(xi)..max(xi)
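A minimal sketch of bound reduction for one normalized constraint over integer interval domains; the constraint 3x − 2y ≤ 5 and the starting domains are made-up examples:

```python
def propagate_bounds(pos, neg, b, lo, hi):
    """Bound reduction for a normalized constraint
        sum_{i in POS} a_i*x_i - sum_{i in NEG} a_i*x_i <= b
    over integer interval domains lo[i]..hi[i].
    pos/neg map each variable to its positive coefficient a_i."""
    changed = True
    while changed:                  # iterate until a fix-point is reached
        changed = False
        for j, aj in pos.items():
            # smallest possible contribution of all the other terms
            rest = (sum(a * lo[i] for i, a in pos.items() if i != j)
                    - sum(a * hi[i] for i, a in neg.items()))
            new_hi = (b - rest) // aj               # floor division
            if new_hi < hi[j]:
                hi[j], changed = new_hi, True
        for j, aj in neg.items():
            rest = (sum(a * lo[i] for i, a in pos.items())
                    - sum(a * hi[i] for i, a in neg.items() if i != j))
            new_lo = -((b - rest) // aj)            # ceiling of (rest - b)/aj
            if new_lo > lo[j]:
                lo[j], changed = new_lo, True
    return lo, hi

# Example: 3x - 2y <= 5 with x in 0..10, y in 0..4
lo, hi = propagate_bounds({"x": 3}, {"y": 2}, 5,
                          {"x": 0, "y": 0}, {"x": 10, "y": 4})
print(lo, hi)   # x is reduced to 0..4; y's bounds are less constraining, no change
```

Note how y's calculated lower bound (−2) is less constraining than its original bound (0), so no reduction happens there, exactly as the card above says.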

75
Q

How to handle disequalities

A

Given: S ≠ T, where S and T are expressions over variables with domains D.

Step 1: Introduce a new variable x, rewriting as:
x ≠ T
x = S, where x ∈ {min(S)..max(S)}

Why this helps: It simplifies the disequality by splitting it into two constraints, making it easier to process.

Step 2: Apply the same transformation again to x ≠ T, introducing y:
x ≠ y, where y ∈ {min(T)..max(T)}
y = T

Result: We obtain two equalities (x = S and y = T) and a simpler disequality x ≠ y

76
Q

Boolean CSP Basics

A

Domain: Boolean domains,
x∈{0,1}
Boolean Connectives:
¬: Negation (“not”)
∧: Conjunction (“and”)
∨: Disjunction (“or”)
Equality Relation: =

77
Q

Simple Boolean Constraints

A

Given variables x,y,z:
x=y: Equality constraint
¬x=y: NOT constraint
x∧y=z: AND constraint
x∨y=z: OR constraint

78
Q

Transforming Compound Boolean Constraints to simple - how?

A

Non-simple (compound) Boolean constraints can be transformed into simple constraints by introducing new variables.

79
Q

Transformation rules for boolean CSP (make it more simple)

A

Let S := S1 ∨ (S2 ∧ S3), with Si variables
Then ¬S = t is transformed into ¬x = t, S = x, introducing a new variable x

SEE EXAMPLE IN BOOK p.132

80
Q

Constraint Propagation (definition and how it works)

A

Definition: A process in constraint satisfaction problems where variables depend on each other through shared constraints.

How it works: When one variable is changed (e.g., assigned a value or has its domain reduced), other dependent variables may also be affected.

Propagation: These changes spread through the constraints, affecting related variables.

Purpose: Helps to reduce possible values for variables, making the problem easier to solve.

81
Q

Constraint graph

A

A graph with:
- The variables associated to the nodes
- The constraints associated to the edges
- Straightforward for binary relations

82
Q

Dual graph

A
  • Has the constraints as the nodes
  • The edges representing the variable connections
83
Q

Node consistency

A

A CSP is node consistent if for every unary constraint over the variable x, the constraint is valid over Dx (the whole domain)
A CSP without unary constraints is node consistent

Note:
Node consistency does not imply consistency
Consistency does not imply node consistency

84
Q

Arc consistency

A

A CSP is arc consistent if, for every binary constraint over variables X and Y, every value in Dx can be paired with some value in Dy so that the constraint holds (and vice versa)
(p.146)
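The classic procedure for enforcing arc consistency is AC-3. A minimal sketch, where `constraints` is a hypothetical dict mapping each directed arc (x, y) to a predicate on (value of x, value of y); both directions of each binary constraint must be listed:

```python
from collections import deque

def revise(domains, x, y, constraint):
    """Remove values of x that have no supporting value in y's domain."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; returns False if a domain becomes empty."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraints[(x, y)]):
            if not domains[x]:
                return False
            # x's domain shrank: re-check every arc pointing into x
            for (a, b) in constraints:
                if b == x and a != y:
                    queue.append((a, b))
    return True

# Example: x < y with x, y in {0, 1, 2}
domains = {"x": {0, 1, 2}, "y": {0, 1, 2}}
constraints = {("x", "y"): lambda a, b: a < b,
               ("y", "x"): lambda a, b: a > b}
print(ac3(domains, constraints), domains)  # True {'x': {0, 1}, 'y': {1, 2}}
```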

85
Q

Hyper-arc-consistency

A

A CSP is hyper-arc-consistent if for every constraint C, for every involved domain, each element participates in a solution to C

Ex:
C: x<y<z
x E {0,1}
y E {1,2}
z E {2,3}

86
Q

Path consistency

A

Simple explanation:
- Think of a path from xi to xj where xk is in the middle

Path consistency tightens the binary constraints using implicit constraints inferred from looking at triplets of variables

More complex:
- A two-variable set {xi, xj} is path consistent with a third variable xk, if for every assignment xi & xj consistent with the constraints on {xi, xj} there is an assignment to xk that satisfies the constraints on {xi, xk} and {xk, xj}

87
Q

Bound propagation (consistency)

A
  • For large domains, it is not practical to represent all values in the domain explicitly
  • Instead, the domain is represented by its bounds
  • Thus, it is not possible to use the local consistency rules to gradually reduce the domain
    (Something is called local if it only considers subsets of variables/constraints/domains, never the whole global problem)
88
Q

Minimum Remaining Values (MRV)

A
  • Selects the most constrained variable in a “fail first” attempt
  • The degree heuristic attempts to reduce the branching factor of future choices by selecting the variable involved in the largest number of constraints with other unassigned variables

The MRV heuristic chooses the variable that has the fewest possible values left in its domain — in other words, the variable with the most constraints or restrictions. The idea is to tackle the hardest (most constrained) variables first, to see if a solution is even possible. If we hit a dead end, we know sooner rather than later, which can save time.

89
Q

Order domain values (MRV)

A
  • The least constraining value heuristic is often a good choice
    It prefers the value that rules out the fewest choices for the neighboring variables in the constraint graph
  • Tries to leave the maximum flexibility for subsequent variable assignments

By choosing the least constraining value each time, we reduce the chance of getting stuck in a situation where there are no valid options left for other variables. This strategy can make it easier and faster to find a solution.

90
Q

Maintaining arc consistency (MAC)

A
  • Aka full look ahead; does what FC does and additionally propagates constraints recursively when changes are made to the domains

When you assign a value to a variable, MAC checks all constraints to see if the remaining possible values for other variables are consistent with the new assignment. It does this by enforcing something called arc consistency:

Arc consistency means that for every possible value of one variable, there must be at least one compatible value for every other variable it’s connected to by a constraint.
If assigning a value to a variable causes any other variable to lose all possible values (because it no longer has any options that are consistent with the new assignment), MAC will trigger backtracking immediately.

MAC is powerful because it ensures the entire problem remains consistent with every assignment, which reduces the chance of getting deep into the search only to discover a problem later. This can make the search process much more efficient by cutting off invalid paths sooner.

91
Q

Forward Checking (FC)

A
  • Checks the effect of a tentative assignment for every variable, one at a time

When you assign a value to a variable, forward checking immediately looks ahead to the neighboring variables (those directly connected to it by constraints) to see if the chosen value would prevent any of them from having valid options left. If assigning this value would eliminate all possible values for any neighboring variable, the algorithm stops and “backtracks” — meaning it undoes this choice and tries a different value.

Forward checking prevents the algorithm from going too far down a path that won’t work. By catching conflicts early, it avoids wasting time on assignments that will lead to failure later, making it more efficient in finding solutions.
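Forward checking and MRV (card 88) combine naturally in a backtracking solver. A minimal sketch for binary disequality constraints ("neighbours must differ"), using the classic Australia map-colouring example:

```python
def solve(domains, neighbors, assignment=None):
    """Backtracking search with MRV variable ordering and forward checking,
    for disequality constraints between neighboring variables."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    # MRV: pick the unassigned variable with the fewest remaining values
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        assignment[var] = value
        pruned, ok = [], True
        # Forward checking: remove the value from unassigned neighbors
        for n in neighbors[var]:
            if n not in assignment and value in domains[n]:
                domains[n].remove(value)
                pruned.append(n)
                if not domains[n]:      # domain wipe-out: backtrack early
                    ok = False
                    break
        if ok:
            result = solve(domains, neighbors, assignment)
            if result is not None:
                return result
        for n in pruned:                # undo pruning before the next value
            domains[n].add(value)
        del assignment[var]
    return None

# Map colouring of Australia with 3 colours (classic CSP example)
neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}
sol = solve({v: {"r", "g", "b"} for v in neighbors}, neighbors)
print(sol is not None)  # True: a valid colouring exists
```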

92
Q

Partial look ahead

A
  • Enforces single step directional arc consistency in the “forward” direction for xi, xj with i < j <= n (needs some idea of ordering)
93
Q

Full look ahead

A
  • Enforces arc consistency between every unassigned pair xi, xj (only single step)

Whenever a variable xi is assigned, full lookahead checks all pairs of variables ⟨xj, xk⟩ in both the forward and backward directions.

94
Q

Arc consistency

A
  • Iteratively checks arc consistency for all pairs until stabilizing
95
Q

Why do propagation during search instead of just simple forward checking and evaluating the tree until a solution or inconsistency is found?

A
  • Find inconsistencies early by doing local examination of neighboring values
  • Shrink the domains of the variables so that the search tree is narrowed “underneath” the current partial assignment
  • Avoid searching identical subtrees underneath different assignments