Artificial Intelligence: Tree Search Problems and Searches Flashcards
Problem with reaching a goal
There are multiple possible actions, but they must be carried out in the right order to reach the goal.
Problem Formulation
The process of describing the goal, the relevant states of the world, the possible actions and the optimality criterion.
The Search Graph
First determine the set S of possible states of the problem. The search graph for the problem is then given by the set of states S, the start state s_start, the goal states S_goal, the set of actions A that can be performed and that take the agent from one state in S to another state in S, and a cost function.
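A minimal Python sketch of such a problem formulation; the class name, states, and step costs below are illustrative assumptions, not from the flashcards.

```python
# Hypothetical sketch of a search problem: states S, start state s_start,
# goal states S_goal, actions/successors, and a cost function.
from typing import Dict, List, Tuple

class SearchProblem:
    def __init__(self, start: str, goals: set,
                 successors: Dict[str, List[Tuple[str, float]]]):
        self.start = start                    # s_start
        self.goals = goals                    # S_goal
        self.successors = successors          # actions as (next state, step cost) pairs

    def is_goal(self, state: str) -> bool:
        return state in self.goals

    def expand(self, state: str) -> List[Tuple[str, float]]:
        return self.successors.get(state, [])

# tiny illustrative instance
problem = SearchProblem("S", {"G"},
                        {"S": [("A", 1), ("B", 1)],
                         "A": [("G", 1)],
                         "B": [("G", 2)]})
```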
Abstract State
Set of real states
Abstract Action
Complex combination of real actions
Abstract Solution
Set of real paths that are solutions in the real world
Successor States
States that can be reached from a given state by applying a single action.
Search
Systematic exploration of the search tree.
Imaginary
The search tree cannot be stored explicitly; it is generated on the fly as the search proceeds.
Don’t care non-determinism
When several choices are equally good, any one of them can be picked arbitrarily, since there is no better option.
Breadth First
FIFO / Levels
Expands states level by level, e.g. in the order [S0, S1, S2, S1.1, S1.2, S2.2].
1 + b + b^2 + … + b^d, with b the branching factor and d the depth of the shallowest goal state (the length of the shortest path to the goal).
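A minimal sketch of breadth-first search, assuming a small made-up graph; the states and edges are purely illustrative.

```python
from collections import deque

# Hypothetical toy graph; successor lists stand in for the actions A.
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": ["G"], "E": ["G"], "G": []}

def breadth_first_search(start, goal):
    """Expand nodes level by level using a FIFO frontier."""
    frontier = deque([[start]])          # queue of paths
    while frontier:
        path = frontier.popleft()        # FIFO: oldest path first
        state = path[-1]
        if state == goal:
            return path
        for succ in GRAPH[state]:
            frontier.append(path + [succ])
    return None

print(breadth_first_search("S", "G"))    # ['S', 'A', 'D', 'G']
```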
Depth First
LIFO / Left to Right
1 + b + b^2 + … + b^m, with m the maximum depth of the tree (possibly infinite).
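A depth-first sketch under the same assumptions (hypothetical toy graph, left-to-right expansion via a LIFO stack).

```python
# Hypothetical toy graph (same assumption as the breadth-first sketch).
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": ["G"], "E": ["G"], "G": []}

def depth_first_search(start, goal):
    """Expand the most recently generated node first (LIFO frontier)."""
    frontier = [[start]]                 # stack of paths
    while frontier:
        path = frontier.pop()            # LIFO: newest path first
        state = path[-1]
        if state == goal:
            return path
        # push successors right-to-left so the leftmost child is expanded first
        for succ in reversed(GRAPH[state]):
            if succ not in path:         # avoid cycles along the current path
                frontier.append(path + [succ])
    return None

print(depth_first_search("S", "G"))      # ['S', 'A', 'D', 'G']
```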
Depth Limited Search
Depth-first search that does not expand paths beyond a fixed depth limit (especially useful if you know the length of the solution).
1 + b + b^2 + …+ b^l with l being the depth limit
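A recursive depth-limited sketch, again on a hypothetical toy graph; the limit argument caps how many edges deep any path may grow.

```python
# Hypothetical toy graph (same assumption as the sketches above).
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": ["G"], "E": ["G"], "G": []}

def depth_limited_search(state, goal, limit, path=None):
    """Recursive DFS that refuses to go deeper than `limit` edges."""
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return None                      # cut off: do not expand further
    for succ in GRAPH[state]:
        result = depth_limited_search(succ, goal, limit - 1, path)
        if result is not None:
            return result
    return None

print(depth_limited_search("S", "G", limit=3))   # ['S', 'A', 'D', 'G']
print(depth_limited_search("S", "G", limit=2))   # None (goal lies deeper)
```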
Iterative Deepening
Run DLS with depth limit n = 0; if a solution is found, return it, otherwise increase the limit to n = n + 1 and repeat. This is the complete version of DLS.
IDS is complete and the solution is guaranteed to be the shortest.
1 + (1 + b) + (1 + b + b^2) + … + (1 + b + b^2 + … + b^d), with d being the length of the shortest path to the goal state.
Always puts results at the back.
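A sketch of iterative deepening that simply reruns the depth-limited search above with growing limits; the graph and the DLS helper are repeated here so the snippet is self-contained.

```python
# Hypothetical toy graph (same assumption as the sketches above).
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": ["G"], "E": ["G"], "G": []}

def depth_limited_search(state, goal, limit, path=None):
    # same recursive DLS as in the previous sketch
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return None
    for succ in GRAPH[state]:
        result = depth_limited_search(succ, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening_search(start, goal, max_depth=20):
    """Run DLS with limit 0, 1, 2, ... until a solution is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, limit)
        if result is not None:
            return result                # first hit is a shortest path
    return None

print(iterative_deepening_search("S", "G"))   # ['S', 'A', 'D', 'G']
```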
Bidirectional Search
Search backwards from the goal state as well as forwards from the initial state.
Two searches run at once; each only needs to go up to depth d/2, where the frontiers meet.
2 x (1 + b + b^2 + … + b^(d/2)).
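A rough sketch of bidirectional search, assuming a hypothetical undirected toy graph so that backward expansion from the goal is possible; when the two frontiers meet, the half-paths are joined.

```python
from collections import deque

# Hypothetical undirected toy graph; backward search needs predecessors,
# so edges are listed in both directions here.
GRAPH = {"S": ["A", "B"], "A": ["S", "C", "D"], "B": ["S", "E"],
         "C": ["A"], "D": ["A", "G"], "E": ["B", "G"], "G": ["D", "E"]}

def bidirectional_search(start, goal):
    """Two breadth-first searches, one from each end, until they meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}               # state -> path from start
    bwd = {goal: [goal]}                 # state -> path from goal
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])
    while fwd_frontier and bwd_frontier:
        # expand one node forwards
        state = fwd_frontier.popleft()
        for succ in GRAPH[state]:
            if succ not in fwd:
                fwd[succ] = fwd[state] + [succ]
                if succ in bwd:          # frontiers meet: join the two halves
                    return fwd[succ] + bwd[succ][-2::-1]
                fwd_frontier.append(succ)
        # expand one node backwards
        state = bwd_frontier.popleft()
        for succ in GRAPH[state]:
            if succ not in bwd:
                bwd[succ] = bwd[state] + [succ]
                if succ in fwd:
                    return fwd[succ] + bwd[succ][-2::-1]
                bwd_frontier.append(succ)
    return None

print(bidirectional_search("S", "G"))    # ['S', 'A', 'D', 'G']
```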
Uniform Cost Search
Gives a cost to each path. g = paths -> real numbers.
Guaranteed to find the cheapest solution, assuming the path cost grows monotonically along every path (i.e. step costs are non-negative).
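A uniform cost search sketch using a priority queue ordered by g; the graph and step costs below are hypothetical.

```python
import heapq

# Hypothetical toy graph with step costs: state -> [(successor, cost), ...].
GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("C", 2), ("D", 5)],
         "B": [("E", 1)], "C": [], "D": [("G", 1)], "E": [("G", 3)], "G": []}

def uniform_cost_search(start, goal):
    """Always expand the frontier path with the lowest total cost g."""
    frontier = [(0, [start])]            # priority queue ordered by g(path)
    explored = set()
    while frontier:
        g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path               # cheapest path, given non-negative costs
        if state in explored:
            continue
        explored.add(state)
        for succ, cost in GRAPH[state]:
            heapq.heappush(frontier, (g + cost, path + [succ]))
    return None

print(uniform_cost_search("S", "G"))     # (7, ['S', 'A', 'D', 'G'])
```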
Heuristics
Problem-specific knowledge (gained e.g. by trial and error) that estimates how good a certain state is. h = states -> real numbers.
h(s) = 0 if s is a goal state.
A heuristic is admissible if it does not overestimate the cost to the goal state (it is optimistic): h(s) <= h*(s), where h*(s) is the true cost from s to the goal; it never assumes unnecessary steps.
Weighting the heuristic more heavily can find solutions faster (instead of g + h you could use g + 1.5h), at the possible cost of optimality.
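A tiny sketch of the two evaluation functions just mentioned; the function names are illustrative.

```python
# Hypothetical evaluation functions: g is the path cost so far,
# h an (ideally admissible) estimate of the remaining cost to the goal.
def f_standard(g: float, h: float) -> float:
    return g + h              # standard g + h evaluation

def f_weighted(g: float, h: float, w: float = 1.5) -> float:
    return g + w * h          # weighted heuristic: often faster, may lose optimality

print(f_standard(4, 3), f_weighted(4, 3))   # 7 vs 8.5
```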
Greedy Search
Always expands the node that appears to be closest to the goal (lowest heuristic value h).
Not guaranteed to find the shortest path: it follows the cheapest-looking node, not the cheapest overall path.
Uses heuristics.
STILL RETURNS THE PATH COST (even though the path cost is not used to choose which node to expand).
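A greedy best-first sketch that orders the frontier by the heuristic value h alone; the graph and h-values below are hypothetical.

```python
import heapq

# Hypothetical toy graph and heuristic values h(s).
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": ["G"], "E": ["G"], "G": []}
H = {"S": 4, "A": 2, "B": 3, "C": 3, "D": 1, "E": 1, "G": 0}

def greedy_best_first_search(start, goal):
    """Always expand the node that looks closest to the goal, i.e. lowest h."""
    frontier = [(H[start], [start])]     # ordered by h only, not by path cost
    explored = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for succ in GRAPH[state]:
            heapq.heappush(frontier, (H[succ], path + [succ]))
    return None

print(greedy_best_first_search("S", "G"))   # ['S', 'A', 'D', 'G']
```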
A* Search
Combines greedy search with uniform cost search.
f(O -> K) = g(O -> K) + h(K), i.e. the cost of the path from the origin O to node K plus the heuristic estimate from K to the goal.
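An A* sketch combining the path cost g with the heuristic h; the graph, step costs, and (admissible) heuristic values below are hypothetical.

```python
import heapq

# Hypothetical toy graph with step costs, plus heuristic estimates h(s).
GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("C", 2), ("D", 5)],
         "B": [("E", 1)], "C": [], "D": [("G", 1)], "E": [("G", 3)], "G": []}
H = {"S": 5, "A": 4, "B": 3, "C": 4, "D": 1, "E": 2, "G": 0}

def a_star_search(start, goal):
    """Expand the path with the lowest f = g (cost so far) + h (estimate to goal)."""
    frontier = [(H[start], 0, [start])]  # (f, g, path)
    explored = set()
    while frontier:
        f, g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for succ, cost in GRAPH[state]:
            new_g = g + cost
            heapq.heappush(frontier, (new_g + H[succ], new_g, path + [succ]))
    return None

print(a_star_search("S", "G"))           # (7, ['S', 'A', 'D', 'G'])
```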
Manhattan Distance
The number of horizontal plus vertical steps from the current location to the desired location.
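A small Manhattan distance helper, assuming states are (x, y) grid positions.

```python
# Manhattan distance: horizontal plus vertical steps between two grid positions.
def manhattan_distance(pos: tuple, goal: tuple) -> int:
    (x1, y1), (x2, y2) = pos, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan_distance((1, 2), (4, 6)))   # 3 + 4 = 7
```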