4 | Reasoning with constraints Flashcards
What is a possible world?
A possible world is a possible way the world (the real world or some imaginary world) could be.
What types of variables do we have?
An algebraic variable is a symbol used to denote features of possible worlds.
A discrete variable is one whose domain is finite or countably infinite.
A binary variable is a discrete variable with two values in its domain.
One particular case of a binary variable is a Boolean variable, which is a variable with domain {true, false}.
We can also have variables that are not discrete; for example, a variable whose domain corresponds to the real line or an interval of the real line is a continuous variable.
What is an assignment?
Given a set of variables, an assignment on the set of variables is a function from the variables into the domains of the variables. We write an assignment on {X1,X2,…,Xk} as {X1=v1,X2=v2,…,Xk=vk}, where vi is in dom(Xi). This assignment specifies that, for each i, variable Xi is assigned value vi. A variable can only be assigned one value in an assignment.
What is a total assignment?
A total assignment assigns all of the variables.
What is a possible world?
A possible world is a possible way the world could be. A possible world is defined to be a total assignment.
What is a hard constraint?
A hard constraint, or simply constraint, specifies legal combinations of assignments of values to some of the variables.
What is a scope and a relation?
A scope is a set of variables. A relation on a scope S is a function from assignments on S to {true,false}.
What is a constraint?
A constraint c is a scope S and a relation on S. A constraint is said to involve each of the variables in its scope.
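As a minimal sketch in Python (the names Constraint, scope, condition, and holds are illustrative assumptions, not anything from the text), a constraint can be represented as a scope together with a relation, where the relation is a predicate over assignments represented as dictionaries:

```python
# Sketch only: the class and method names are illustrative.
class Constraint:
    def __init__(self, scope, condition):
        self.scope = scope          # tuple of variable names
        self.condition = condition  # relation: values (in scope order) -> True/False

    def holds(self, assignment):
        """True if the assignment (a dict of variable -> value) satisfies this constraint."""
        return self.condition(*(assignment[var] for var in self.scope))

# Example: the constraint X < Y.
lt = Constraint(("X", "Y"), lambda x, y: x < y)
print(lt.holds({"X": 1, "Y": 3}))  # True
print(lt.holds({"X": 4, "Y": 3}))  # False
```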
What does a constraint satisfaction problem consist of?
a set of variables,
a domain for each variable, and
a set of constraints.
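Continuing the sketch above (still with assumed names), a CSP can be bundled as its variables, a domain for each variable, and a set of constraints, with a consistency check for partial assignments:

```python
# Sketch only: CSP and consistent are illustrative names.
class CSP:
    def __init__(self, domains, constraints):
        self.variables = list(domains)   # the set of variables
        self.domains = domains           # dict: variable -> collection of values
        self.constraints = constraints   # list of Constraint objects

    def consistent(self, assignment):
        """True if the (possibly partial) assignment violates no constraint
        whose scope is fully assigned."""
        return all(c.holds(assignment)
                   for c in self.constraints
                   if all(v in assignment for v in c.scope))
```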
Given a CSP, what tasks are useful?
Determine whether or not there is a model.
Find a model.
Count the number of models.
Enumerate all of the models.
Find a best model, given a measure of how good models are.
Determine whether some statement holds in all the models.
What are generate-and-test algorithms?
The generate-and-test algorithm to find one model is as follows: check each total assignment in turn; if an assignment is found that satisfies all of the constraints, return that assignment. A generate-and-test algorithm to find all models is the same except, instead of returning the first model found, it saves all of the models found.
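A possible sketch of generate-and-test over the CSP representation above (not the book's code): enumerate every total assignment and yield those that satisfy all of the constraints.

```python
from itertools import product

def generate_and_test(csp):
    """Yield every model of the CSP by checking each total assignment in turn."""
    variables = csp.variables
    for values in product(*(csp.domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c.holds(assignment) for c in csp.constraints):
            yield assignment

# To find one model, take the first assignment yielded; to find all, exhaust the generator.
```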
How do you convert a CSP to a graph?
The nodes are assignments of values to some subset of the variables.
The neighbors of a node n are obtained by selecting a variable Y that is not assigned in n and creating one neighbor for each assignment of a value to Y that does not violate any constraint.
Suppose node n is the assignment {X1=v1,…,Xk=vk}. To find the neighbors of n, select a variable Y that is not in the set {X1,…,Xk}. For each value yi∈dom(Y) such that X1=v1,…,Xk=vk,Y=yi is consistent with all of the constraints, the node {X1=v1,…,Xk=vk,Y=yi} is a neighbor of n.
The start node is the empty assignment that does not assign a value to any variables.
A goal node is a node that assigns a value to every variable. Note that a goal node only exists if some total assignment is consistent with all of the constraints, that is, if the CSP has a model.
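One way this graph could be realized, reusing the CSP sketch above (helper names are assumptions): the start node is the empty assignment, neighbors extend a node with one consistent value for an unassigned variable, and a goal node is any total assignment.

```python
def neighbors(csp, node):
    """node is a partial assignment (dict). Select one unassigned variable Y and
    return each extension of node with a value for Y that is consistent."""
    unassigned = [v for v in csp.variables if v not in node]
    if not unassigned:
        return []                                 # total assignment: no neighbors
    Y = unassigned[0]
    return [dict(node, **{Y: val})
            for val in csp.domains[Y]
            if csp.consistent(dict(node, **{Y: val}))]

def is_goal(csp, node):
    """A goal node assigns a value to every variable."""
    return len(node) == len(csp.variables)

start = {}   # the empty assignment
```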
What does a constraint network consist of?
The consistency algorithms are best thought of as operating over the network of constraints formed by the CSP:
There is a node for each variable. These nodes are drawn as circles or ovals.
There is a node for each constraint. These nodes are drawn as rectangles.
Associated with each variable, X, is a set DX of possible values. This set of values is initially the domain of the variable.
For every constraint c, and for every variable X in the scope of c, there is an arc ⟨X,c⟩.
What is the generalized arc consistency (GAC) algorithm?
It makes the entire network arc consistent by considering a set to_do of potentially inconsistent arcs, the to-do arcs. The set to_do initially consists of all the arcs in the graph. While the set is not empty, an arc ⟨X,c⟩ is removed from the set and considered. If the arc is not arc consistent, it is made arc consistent by pruning the domain of variable X. All of the previously consistent arcs that could, as a result of pruning X, have become inconsistent are placed back into the set to_do. These are the arcs ⟨Z,c′⟩, where c′ is a constraint different from c that involves X, and Z is a variable involved in c′ other than X.
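A sketch of GAC over the same representation (the pruning test enumerates assignments to the other variables in the constraint's scope; helper names are assumptions, and nothing here is optimized):

```python
from itertools import product

def gac(csp):
    """Make the network arc consistent by pruning domains.
    Returns the pruned domains; an empty domain means the CSP has no model."""
    domains = {v: set(csp.domains[v]) for v in csp.variables}
    to_do = {(X, c) for c in csp.constraints for X in c.scope}   # all arcs <X, c>
    while to_do:
        X, c = to_do.pop()
        others = [v for v in c.scope if v != X]
        # keep a value x for X only if some assignment to the other variables satisfies c
        new_dom = {x for x in domains[X]
                   if any(c.holds(dict(zip(others, vals), **{X: x}))
                          for vals in product(*(domains[v] for v in others)))}
        if new_dom != domains[X]:
            domains[X] = new_dom
            # arcs <Z, c'> that could have become inconsistent because X was pruned
            to_do |= {(Z, c2) for c2 in csp.constraints
                      if c2 is not c and X in c2.scope
                      for Z in c2.scope if Z != X}
    return domains
```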
What is domain splitting?
The idea is to split a problem into a number of disjoint cases and solve each case separately. The set of all solutions to the initial problem is the union of the solutions to each case.
In the simplest case, suppose there is a binary variable X with domain {t,f}. All of the solutions either have X=t or X=f. One way to find all of the solutions is to set X=t, find all of the solutions with this assignment, and then assign X=f and find all solutions with this assignment. Assigning a value to a variable gives a smaller reduced problem to solve. If we only want to find one solution, we can look for the solutions with X=t, and if we do not find any, we can look for the solutions with X=f.
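A rough sketch of finding one solution by domain splitting (names are illustrative; a fuller version would also prune with GAC between splits):

```python
def solve_by_splitting(csp, domains=None):
    """Return one model, or None, by recursively splitting domains into disjoint cases."""
    if domains is None:
        domains = {v: set(csp.domains[v]) for v in csp.variables}
    if any(len(d) == 0 for d in domains.values()):
        return None                                    # a case with no solutions
    if all(len(d) == 1 for d in domains.values()):
        assignment = {v: next(iter(d)) for v, d in domains.items()}
        return assignment if all(c.holds(assignment) for c in csp.constraints) else None
    # split the domain of some variable with more than one value into two disjoint cases
    X = next(v for v, d in domains.items() if len(d) > 1)
    values = list(domains[X])
    half = len(values) // 2
    for part in (values[:half], values[half:]):
        solution = solve_by_splitting(csp, {**domains, X: set(part)})
        if solution is not None:
            return solution
    return None
```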
What is variable elimination (VE)?
Variable elimination (VE) simplifies the network by removing variables.
The idea of VE is to remove the variables one by one. When removing a variable X, VE constructs a new constraint on some of the remaining variables reflecting the effects of X on all of the other variables. This new constraint replaces all of the constraints that involve X, forming a reduced network that does not involve X. The new constraint is constructed so that any solution to the reduced CSP can be extended to a solution of the CSP that contains X. In addition to creating the new constraint, VE provides a way to construct a solution to the CSP that contains X from a solution to the reduced CSP.
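A sketch of eliminating one variable, using the Constraint sketch above (the new constraint is stored as the set of assignments to the remaining variables that can be extended with some value for X; the names are assumptions):

```python
from itertools import product

def eliminate_variable(constraints, domains, X):
    """Return a new constraint list in which X has been eliminated: the constraints
    involving X are replaced by one constraint on the remaining variables in their scopes."""
    involving = [c for c in constraints if X in c.scope]
    others = [c for c in constraints if X not in c.scope]
    new_scope = tuple(sorted({v for c in involving for v in c.scope} - {X}))
    allowed = set()
    for vals in product(*(domains[v] for v in new_scope)):
        assignment = dict(zip(new_scope, vals))
        # keep this assignment if it extends to some value of X satisfying every constraint on X
        if any(all(c.holds({**assignment, X: x}) for c in involving)
               for x in domains[X]):
            allowed.add(vals)
    new_constraint = Constraint(new_scope, lambda *vals: vals in allowed)
    return others + [new_constraint]
```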
What is local search?
Local search methods start with a total assignment of a value to each variable and try to improve this assignment iteratively by taking improving steps, by taking random steps, or by restarting with another total assignment. Many different local search techniques have been proposed. Understanding when these techniques work for different problems forms the focus of a number of research communities, in both operations research and AI.
What is iterative best improvement?
Iterative best improvement is a local search algorithm that selects a successor of the current assignment that most improves some evaluation function. If there are several possible successors that most improve the evaluation function, one is chosen at random. When the aim is to minimize, this algorithm is called greedy descent. When the aim is to maximize a function, this is called hill climbing or greedy ascent. We only consider minimization; to maximize a quantity, you can minimize its negation.
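A sketch of greedy descent where the evaluation function is the number of violated constraints and a successor changes the value of one variable (the names and the step limit are assumptions):

```python
import random

def conflicts(csp, assignment):
    """Evaluation function: number of constraints violated by a total assignment."""
    return sum(not c.holds(assignment) for c in csp.constraints)

def greedy_descent(csp, max_steps=1000):
    """Iterative best improvement: repeatedly move to a successor that most reduces conflicts."""
    current = {v: random.choice(list(csp.domains[v])) for v in csp.variables}
    for _ in range(max_steps):
        if conflicts(csp, current) == 0:
            return current                         # a model has been found
        successors = [dict(current, **{v: val})
                      for v in csp.variables
                      for val in csp.domains[v] if val != current[v]]
        if not successors:
            return current
        best = min(conflicts(csp, s) for s in successors)
        if best >= conflicts(csp, current):
            return current                         # local optimum: no successor improves
        current = random.choice([s for s in successors if conflicts(csp, s) == best])
    return current
```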
What is a local optimum?
A local optimum is an assignment such that no possible successor improves the evaluation function. This is also called a local minimum in greedy descent, or a local maximum in greedy ascent.
What is a global optimum?
A global optimum is an assignment that has the best value out of all assignments. All global optima are local optima, but there can be many local optima that are not a global optimum.
What are two random ways to escape local minima that are not global minima?
random restart, in which values for all variables are chosen at random. This lets the search start from a completely different part of the search space.
random walk, in which some random steps are taken interleaved with the optimizing steps. With greedy descent, these random steps allow upward moves that may enable the search to escape a local minimum that is not a global minimum.
A random walk is a local random move, whereas a random restart is a global random move. For problems involving a large number of variables, a random restart can be quite expensive.
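Both escapes can be layered on the greedy descent sketch above. The following sketch (with assumed parameter names and probabilities, reusing conflicts from above, and assuming each variable has more than one possible value) interleaves random walk steps with improving steps and wraps the whole search in random restarts:

```python
import random

def stochastic_search(csp, restarts=10, steps=1000, walk_prob=0.2):
    """Greedy descent with random walk steps, restarted from random total assignments."""
    best = None
    for _ in range(restarts):                      # random restart: a fresh total assignment
        current = {v: random.choice(list(csp.domains[v])) for v in csp.variables}
        for _ in range(steps):
            if conflicts(csp, current) == 0:
                return current
            if random.random() < walk_prob:        # random walk: change one variable at random
                v = random.choice(csp.variables)
                current = dict(current, **{v: random.choice(list(csp.domains[v]))})
            else:                                  # improving step, as in greedy descent
                successors = [dict(current, **{v: val})
                              for v in csp.variables
                              for val in csp.domains[v] if val != current[v]]
                current = min(successors, key=lambda s: conflicts(csp, s))
        if best is None or conflicts(csp, current) < conflicts(csp, best):
            best = current
    return best
```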
What is a stochastic local search?
A mix of iterative best improvement with random moves is an instance of a class of algorithms known as stochastic local search.
What is the most improving step method?
The most improving step method always selects a variable–value pair that makes the best improvement. If there are many such pairs, one is chosen at random.
What is the two-stage choice method?
The two-stage choice algorithm maintains a priority queue of variables, where the weight of a variable is the number of conflicts in which it participates. At each step, the algorithm selects a variable that participates in the maximum number of conflicts. Once a variable has been chosen, it can be assigned either a value that minimizes the number of conflicts or a value at random. For each constraint that becomes satisfied or violated as a result of this new assignment, the other variables participating in that constraint must have their weights reevaluated.
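A simplified sketch of one two-stage step, reusing conflicts from the greedy descent sketch (it recomputes conflict counts each time rather than maintaining a priority queue incrementally, and the names are assumptions):

```python
import random

def two_stage_step(csp, current):
    """Stage 1: pick a variable involved in the most violated constraints.
    Stage 2: give it a value that minimizes the total number of conflicts."""
    def weight(v):
        return sum(not c.holds(current) for c in csp.constraints if v in c.scope)
    worst = max(weight(v) for v in csp.variables)
    X = random.choice([v for v in csp.variables if weight(v) == worst])
    best_val = min(csp.domains[X],
                   key=lambda val: conflicts(csp, dict(current, **{X: val})))
    return dict(current, **{X: best_val})
```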