Week 2 Flashcards
State model
S: finite, discrete state space
s0 ∈ S: initial state
SG ⊆ S: goal state(s)
A(s) ⊆ A: actions applicable in each s ∈ S
s’ = f(a, s): deterministic transition function for a ∈ A(s)
c(a, s): positive action costs
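The ingredients above can be sketched as plain Python data. The 4-state graph below is a made-up example, not from the notes; only the roles of S, s0, SG, A(s), f and c are taken from the definitions.

```python
# Hypothetical example domain: 4 states, deterministic moves.
S = {"s0", "s1", "s2", "s3"}    # finite, discrete state space
initial = "s0"                  # initial state s0
goals = {"s3"}                  # goal states SG ⊆ S

# A(s): actions applicable in each state s
applicable = {
    "s0": ["right", "down"],
    "s1": ["down"],
    "s2": ["right"],
    "s3": [],                   # goal state, nothing applicable
}

# f(a, s): deterministic transition function
transition = {
    ("right", "s0"): "s1",
    ("down", "s0"): "s2",
    ("down", "s1"): "s3",
    ("right", "s2"): "s3",
}

# c(a, s): positive action costs
cost = {
    ("right", "s0"): 1,
    ("down", "s0"): 2,
    ("down", "s1"): 4,
    ("right", "s2"): 1,
}
```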
Solution
Sequence of actions mapping s0 to SG
Optimal
A solution that minimises the sum of action costs
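The two definitions above can be checked mechanically: a plan is a solution iff every action is applicable where it is used and the final state is a goal, and an optimal plan minimises the summed costs. A minimal sketch, reusing a made-up 4-state domain (the domain and helper names are my own, not from the notes):

```python
# Hypothetical example domain (same roles as S, SG, A(s), f, c).
goals = {"s3"}
applicable = {"s0": ["right", "down"], "s1": ["down"],
              "s2": ["right"], "s3": []}
transition = {("right", "s0"): "s1", ("down", "s0"): "s2",
              ("down", "s1"): "s3", ("right", "s2"): "s3"}
cost = {("right", "s0"): 1, ("down", "s0"): 2,
        ("down", "s1"): 4, ("right", "s2"): 1}

def is_solution(plan, s, f, A, goals):
    """True iff `plan` is applicable step by step from s and ends in a goal."""
    for a in plan:
        if a not in A[s]:
            return False
        s = f[(a, s)]
    return s in goals

def plan_cost(plan, s, f, c):
    """Sum of action costs c(a, s) along the plan, starting from s."""
    total = 0
    for a in plan:
        total += c[(a, s)]
        s = f[(a, s)]
    return total
```

Here both `["right", "down"]` (cost 1 + 4 = 5) and `["down", "right"]` (cost 2 + 1 = 3) are solutions, but only the second is optimal.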
Blind search
Use basic ingredients for general search algorithms
Heuristic search
Use heuristic functions which estimate the distance to the goal.
Much more effective for satisficing planning; the advantage for optimal planning is smaller
Systematic search
Consider a large number of search nodes simultaneously
Local search algorithms
Work with at most a few candidate solutions at a time.
Not suitable for optimal planning, but has been successful for satisficing planning.
Satisficing planning
“Good enough” planning
Search node n
state reached by the search plus information about how it was reached (e.g. cost, parent node)
Path cost g(n)
Cost of the path reaching a search node, n
Optimal cost g*
Cost of an optimal solution path. For a state s, g*(s) is the cost of the cheapest path reaching s
Node expansion
Generating all successors of a node by applying all actions applicable to the node’s state s
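Node expansion ties the last few cards together: a node carries a state, its parent, the action used, and g(n); expanding it applies every action in A(s). A sketch under assumed names (the `Node` class and 3-state domain are illustrative, not from the notes):

```python
class Node:
    """Search node: state reached plus how it was reached."""
    def __init__(self, state, parent=None, action=None, g=0):
        self.state = state      # state reached by the search
        self.parent = parent    # parent node (None for the root)
        self.action = action    # action that led here
        self.g = g              # path cost g(n)

# Hypothetical 3-state domain.
applicable = {"s0": ["a", "b"], "s1": [], "s2": []}
transition = {("a", "s0"): "s1", ("b", "s0"): "s2"}
cost = {("a", "s0"): 3, ("b", "s0"): 1}

def expand(node):
    """Generate all successors by applying every action applicable in node.state."""
    for a in applicable[node.state]:
        s2 = transition[(a, node.state)]
        yield Node(s2, parent=node, action=a,
                   g=node.g + cost[(a, node.state)])

children = list(expand(Node("s0")))
```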
Search strategy
Method for deciding which node to expand next
Open list/Frontier
Set of all candidate nodes for expansion
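The frontier and the search strategy fit together in one loop: the strategy decides which frontier node to expand next. A minimal sketch of one concrete blind strategy, uniform-cost search (always expand the cheapest node by g(n)), using a priority queue as the open list; the 4-state domain is a made-up example:

```python
import heapq

# Hypothetical example domain.
goals = {"s3"}
applicable = {"s0": ["right", "down"], "s1": ["down"],
              "s2": ["right"], "s3": []}
transition = {("right", "s0"): "s1", ("down", "s0"): "s2",
              ("down", "s1"): "s3", ("right", "s2"): "s3"}
cost = {("right", "s0"): 1, ("down", "s0"): 2,
        ("down", "s1"): 4, ("right", "s2"): 1}

def uniform_cost_search(s0, goals, applicable, transition, cost):
    """Return (optimal cost, plan) from s0 to a goal, or None."""
    frontier = [(0, s0, [])]       # open list of (g(n), state, plan so far)
    best_g = {}                    # cheapest g found per expanded state
    while frontier:
        g, s, plan = heapq.heappop(frontier)   # strategy: smallest g first
        if s in goals:
            return g, plan
        if s in best_g and best_g[s] <= g:
            continue               # already expanded more cheaply
        best_g[s] = g
        for a in applicable[s]:    # node expansion
            s2 = transition[(a, s)]
            heapq.heappush(frontier, (g + cost[(a, s)], s2, plan + [a]))
    return None
```

Swapping the priority queue for a FIFO or LIFO open list would turn the same loop into breadth-first or depth-first search, which is why the cards separate the frontier from the strategy.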