Search Flashcards
Agent
An entity that perceives its environment and acts upon that environment
State
A configuration of the agent and its environment
Initial State
The state in which the AI begins
Actions
Choices that can be made in a state
Actions(s)
A function that returns the set of actions that can be executed in state s
Transition Model
A description of what state results from performing any applicable action in any state
Results(s, a)
Returns the state resulting from performing action a in state s
State Space
The set of all states reachable from the initial state by any sequence of actions
- Each node represents a state, and each edge represents an action that takes us from one state to another
Goal Test
A way to determine whether a given state is a goal state
Path Cost
Numerical cost associated with a given path
Nodes for Search
A node in a search needs to keep track of:
1. State
2. Parent
3. Action
4. Path cost
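The four fields above can be sketched as a small Python class; a minimal sketch, assuming illustrative names (`Node`, `solution`) that are not from the source:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """One node in the search tree (field names match the four items above)."""
    state: Any                  # the configuration this node represents
    parent: Optional["Node"]    # node that generated this one (None for the root)
    action: Optional[Any]       # action applied to the parent to reach this state
    path_cost: float            # total cost from the initial state to this node

def solution(node: Node) -> list:
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```

The parent pointer is what lets us reconstruct the full path once a goal node is found, without storing whole paths in every node.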
Frontier
The data structure (a stack or a queue) holding the nodes that are candidates for expansion
Depth First Search
Search algorithm that always expands the deepest node in the frontier
Breadth First Search
Search algorithm that always expands the shallowest node in the frontier
Uninformed Search
Search strategies that use no problem-specific knowledge
DFS vs BFS
DFS uses a stack (last-in, first-out)
BFS uses a queue (first-in, first-out)
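The stack-vs-queue difference is just which end of the frontier we remove from; a minimal sketch using Python's `collections.deque` (which supports both):

```python
from collections import deque

nodes = ["n1", "n2", "n3"]   # added to the frontier in this order

stack = deque(nodes)
queue = deque(nodes)

# DFS frontier is a stack: remove the most recently added (deepest) node
dfs_next = stack.pop()       # "n3"

# BFS frontier is a queue: remove the earliest added (shallowest) node
bfs_next = queue.popleft()   # "n1"
```

Everything else in the search loop stays identical; only this one removal operation changes the algorithm from DFS to BFS.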
Informed Search
Search strategy that uses problem-specific knowledge to find solutions more efficiently
What could go wrong with searching nodes?
Imagine two nodes, each with a path to the other: without tracking which states have been explored, you can loop between them forever and never find your goal
Approach to Searching
- Start with a frontier that contains the initial state (a stack for DFS, a queue for BFS)
- Start with an empty explored set
- Repeat:
- If the frontier is empty, then there is no solution (exit with failure)
- Else, remove a node from the frontier
- If the node contains the goal state, then return the solution (exit with success)
- Else, add the node to the explored set
- Expand the node, and add the resulting nodes to the frontier if they are not already in the frontier or the explored set
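The steps above can be sketched as one generic function; a minimal sketch, assuming illustrative names (`graph_search`, `successors`) not given in the source:

```python
from collections import deque

def graph_search(initial, goal_test, successors, use_stack=False):
    """Generic search loop. `successors(state)` yields (action, next_state)
    pairs; use_stack=True gives DFS behaviour, False gives BFS."""
    frontier = deque([(initial, [])])          # (state, actions taken so far)
    explored = set()
    while frontier:                            # frontier empty -> no solution
        state, path = frontier.pop() if use_stack else frontier.popleft()
        if goal_test(state):
            return path                        # exit with success
        explored.add(state)
        for action, nxt in successors(state):
            # only add states not already explored or waiting in the frontier
            if nxt not in explored and all(nxt != s for s, _ in frontier):
                frontier.append((nxt, path + [action]))
    return None                                # exit with failure
```

Usage on a toy problem where states 0..4 lie on a line and 4 is the goal:

```python
succ = lambda s: [("step", s + 1)] if s < 4 else []
graph_search(0, lambda s: s == 4, succ)   # ["step", "step", "step", "step"]
```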
Greedy Best-First Search
Search algorithm that expands the node that is closest to the goal, as estimated by a heuristic function h(n)
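A common choice of heuristic for a grid maze (an illustrative example, not specified in the source) is the Manhattan distance, which estimates how close a state is to the goal while ignoring any walls in between:

```python
def manhattan(state, goal):
    """h(n) for a grid maze: straight grid distance, ignoring walls.
    States are (row, col) pairs (an illustrative assumption)."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

# A wall between (0, 0) and (0, 3) does not change this estimate:
manhattan((0, 0), (0, 3))   # 3, even if the real path around the wall is longer
```

This is exactly why greedy best-first can be misled: the estimate stays small even when the true path is long.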
Why is greedy best-first search not always ideal?
A node might appear closer to the goal, but there could be an obstacle in the way: the heuristic estimates distance to the goal without accounting for walls or dead ends between them.
Thus you might not find the optimal path
A* Search
Search algorithm that expands node with lowest value of g(n) + h(n)
g(n) = cost to reach node n from the initial state (accumulates along the path)
h(n) estimated cost to goal
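Combining g(n) and h(n) gives the following sketch; names (`a_star`, `successors` yielding step costs) are illustrative assumptions, not from the source:

```python
import heapq

def a_star(initial, goal, successors, h):
    """A* sketch: always expand the node with the smallest f(n) = g(n) + h(n).
    `successors(state)` yields (action, next_state, step_cost) triples."""
    frontier = [(h(initial), 0, initial, [])]   # (f, g, state, actions)
    best_g = {initial: 0}                       # cheapest known g per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for action, nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [action]))
    return None
```

On a small weighted graph where A→C costs 4 directly but only 2 via B, A* finds the cheaper route even though it has more steps.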
When is A* Search optimal?
A* is optimal if:
- h(n) is admissible (never overestimates the true cost to the goal)
- h(n) is consistent (for every node n and successor n' with step cost c, h(n) <= h(n') + c)
What are the drawbacks of A* Search?
It uses a lot of memory, since it keeps every generated node in the frontier or the explored set