Uninformed Search Flashcards
What is a state-based model?
Model task as a graph of all possible states.
- A state captures all the relevant information about the past in order to act (optimally) in the future
- Actions correspond to transitions from one state to another
- Solutions are defined as a sequence of steps/actions (path)
What is the general goal of a search problem?
To find a sequence of actions (a path) that leads from an initial state to a state satisfying the goal test
Basic Search Task Assumptions (not including games)
Fully observable; deterministic; static; discrete; single agent; solution = sequence of actions
What knowledge does the agent need?
Information sufficient to describe all relevant aspects for reaching the goal
Adequate to describe the world state/situation
Fully observable (aka closed world assumption)
All necessary information about a problem domain is accessible so that each state is a complete description of the world; there is no missing info at any point
problem state
representation of all necessary information about the environment
state space
all possible valid configurations of the environment
goal test
defines what it means to have achieved the goal (goal can be task to be accomplished, state to be reached, a set of properties to be satisfied)
What do the actions of the agent need?
Given an action and a description of the current state, the action specification tells whether the action can be applied and what the state of the world will be after it is performed (no history is needed to compute the successor)
Actions need to be
Discrete, individual steps that can be treated as instantaneous and are sufficient to describe all necessary changes
A state space
directed graph with set of nodes and arcs
each node in state space graph
contains a state description, maybe link to parent node, name of action that generated node
each arc in state space graph
has a fixed, positive cost; when the arc’s action is applied to the source node’s state, the destination node is the resulting state
successor nodes
correspond to all of the legal actions that can be applied to the source node’s state
expanding a node means:
generate all successor nodes
add them and associated arcs to the state-space search tree
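A minimal Python sketch of a node and of expanding it; the Node class and the successors(state) signature here are illustrative assumptions, not a fixed interface:

```python
# Sketch of a search-tree node and node expansion (names are illustrative).
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # state description
        self.parent = parent        # link to the parent node
        self.action = action        # name of the action that generated this node
        self.path_cost = path_cost  # g(n): cost of the path from the start node

def expand(node, successors):
    # successors(state) is assumed to yield (action, next_state, step_cost) triples,
    # one per legal action applicable to the node's state.
    return [Node(next_state, parent=node, action=action,
                 path_cost=node.path_cost + step_cost)
            for action, next_state, step_cost in successors(node.state)]
```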
state space graph needs
start node(s), goal test, solution (sequence of actions), total cost
state space, initial states, goal states, action function, cost function
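One possible way to bundle those ingredients in code; all names here are illustrative assumptions:

```python
# Sketch of a search problem definition (initial state, goal test, actions, costs).
class SearchProblem:
    def __init__(self, initial_state, goal_states, successors, cost):
        self.initial_state = initial_state   # start node / initial state
        self.goal_states = set(goal_states)  # could also be a predicate over states
        self.successors = successors         # state -> list of (action, next_state)
        self.cost = cost                     # (state, action, next_state) -> arc cost

    def goal_test(self, state):
        # Goal achieved when the state is one of the goal states
        return state in self.goal_states
```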
Search summary
Model the task as a graph of all possible states and actions; a solution is an ordered sequence of primitive actions (a path); a state captures all the relevant info about the past
frontier
generated but not yet expanded states
tree search algorithm
Loop:
- if the frontier is empty, return false
- pick a node n from the frontier
- if n is a goal node, return the solution
- generate n's successors and add them to the frontier
- remove n from the frontier
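A minimal Python sketch of that loop, assuming a successors(state) function that returns (action, next_state) pairs; how the next node is picked from the frontier is exactly what distinguishes the strategies below:

```python
def tree_search(start, goal_test, successors):
    # Frontier of (state, path) pairs: generated but not yet expanded nodes.
    # Picking with pop(0) gives FIFO (BFS) order; pop() would give DFS order.
    frontier = [(start, [])]
    while frontier:                    # if the frontier is empty, return failure
        state, path = frontier.pop(0)  # pick a node n and remove it from the frontier
        if goal_test(state):           # goal detected at expansion, not at generation
            return path                # return the solution (sequence of actions)
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))  # generate n's successors
    return False
```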
tree search does not:
detect goal when node is generated
does not detect loops (repeated states) in state space
Uninformed Search: we know
the goal test, the successors() function, but we do not know which non-goal states are better
in state -space search tree, what is a leaf?
unexpanded node in frontier, dead end, goal node
completeness
if a solution exists, will it be found
optimality/admissibility
if a solution is found, is it guaranteed to be optimal; an admissible algorithm will find a solution with the minimum cost
time complexity
measured in worst case, measured by counting number of nodes expanded
space complexity
max size of frontier during search
Is BFS complete?
Yes
Is BFS optimal?
Yes, if all arcs have the same cost, or costs are positive and non-decreasing with depth; otherwise not optimal, but it will still find the shortest-length solution
BFS time complexity
O(b^d), where d is the depth of the solution and b is the branching factor. Usually very slow at finding solutions with a large number of steps because it must first look at all shorter paths
BFS space complexity
O(b^d) where b is branching factor and d is depth
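A minimal BFS sketch under the same successors(state) assumption; the FIFO frontier is what makes the shallowest unexpanded node get expanded first:

```python
from collections import deque

def bfs(start, goal_test, successors):
    # FIFO frontier: shallowest generated-but-unexpanded node is expanded first,
    # which is why BFS finds a shortest-length solution.
    frontier = deque([(start, [])])
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))
    return False
```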
describe DFS
Expand the deepest node first: select a direction and go deep to the end, then slightly change the end, then change the end again, etc.
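A minimal DFS sketch under the same assumptions, using a LIFO frontier (a stack) and an optional depth cutoff:

```python
def dfs(start, goal_test, successors, depth_limit=None):
    # LIFO frontier: the most recently generated (deepest) node is expanded first.
    frontier = [(start, [])]
    while frontier:
        state, path = frontier.pop()   # pop from the end -> go deep first
        if goal_test(state):
            return path
        if depth_limit is not None and len(path) >= depth_limit:
            continue                   # hit the cutoff: back up (backtrack)
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))
    return False
```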
Cycle checking
Don’t add node to frontier if its state already occurs on path back to root
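A small sketch of that check as a hypothetical helper, assuming path_states holds the states on the path back to the root:

```python
def successors_without_cycles(state, path_states, successors):
    # Cycle check: drop any successor whose state already occurs on the
    # path back to the root, so the same path never revisits a state.
    return [(action, next_state)
            for action, next_state in successors(state)
            if next_state not in path_states]
```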
Is DFS complete?
No; it may not be complete, with or without cycle checking or a depth cutoff
Is DFS optimal/admissible?
No
DFS time complexity
O(b^m), where b is the branching factor and m is the maximum depth of the search tree (which can be much larger than the solution depth d)
DFS chronological backtracking
when search hits a dead end, back up one level at a time; problematic if the mistake occurs because of a bad action choice near the top of search tree
What is Uniform Cost Search (UCS)
Use a priority queue to order the nodes on the frontier, sorted by path cost: if g(n) is the cost of the path from the start node to node n, UCS expands nodes in order of increasing g(n)
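A minimal UCS sketch, assuming successors(state) yields (action, next_state, step_cost) triples; the heap keeps the frontier ordered by g(n):

```python
import heapq
from itertools import count

def uniform_cost_search(start, goal_test, successors):
    # Frontier is a priority queue ordered by g(n), the path cost from the start.
    tie = count()                             # tie-breaker so states never get compared
    frontier = [(0, next(tie), start, [])]    # (g, tie, state, path)
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):                  # goal test when the node is removed,
            return g, path                    # not when it is generated
        for action, next_state, step_cost in successors(state):
            heapq.heappush(frontier,
                           (g + step_cost, next(tie), next_state, path + [action]))
    return False
```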
Is UCS complete?
Yes
Is UCS optimal/admissible?
Yes, given positive arc costs, but it requires that the goal test be done when a node is removed from the frontier instead of when it is generated by its parent
UCS time complexity
Roughly O(b^d); more precisely O(b^(C*/e)), where C* is the cost of the best (optimal) goal path and e is the minimum arc cost
What is iterative deepening search?
Do DFS to a depth bound of 1, then repeat with an increasing depth bound until a solution is found
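A minimal IDS sketch under the same successors(state) assumption, using a recursive depth-limited DFS inside the deepening loop:

```python
def iterative_deepening_search(start, goal_test, successors):
    # Repeated depth-limited DFS with an increasing depth bound; assumes a
    # solution exists, otherwise the outer loop never terminates.
    def depth_limited(state, path, limit):
        if goal_test(state):
            return path
        if limit == 0:
            return None
        for action, next_state in successors(state):
            result = depth_limited(next_state, path + [action], limit - 1)
            if result is not None:
                return result
        return None

    depth_bound = 1
    while True:
        result = depth_limited(start, [], depth_bound)
        if result is not None:
            return result
        depth_bound += 1        # no solution within this bound: deepen and redo DFS
```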
Advantages of iterative deepening search
complete, optimal if costs are the same, memory efficient, finds paths quickly in practice
is IDS complete?
Yes
is IDS optimal?
Yes, if costs are constant, or positive and non-decreasing with depth
Time complexity of IDS?
Slightly worse than BFS or DFS because nodes near the top of the tree are generated multiple times, but still O(b^d) since most nodes are near the bottom of the tree
space complexity of IDS
O(bd)
What is IDS good for?
Trades time for a large reduction in space; a good anytime algorithm because it can return a valid solution before it finishes (more time = better solution)
Bidirectional Search -
Search forward from the start node and backward from the goal node; stop when the frontiers meet. Generates O(b^(d/2)) nodes, but it takes work to compare states to detect when the frontiers meet
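A rough bidirectional sketch, assuming both a successors(state) and a predecessors(state) function are available; it only reports the meeting state, since reconstructing the full path is part of the extra bookkeeping:

```python
from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    # Two BFS frontiers, one from each end; stop when a state appears in both.
    front_f, front_b = deque([start]), deque([goal])
    seen_f, seen_b = {start}, {goal}
    while front_f and front_b:
        # Expand one node from each side per iteration.
        for frontier, seen, other_seen, step in (
                (front_f, seen_f, seen_b, successors),
                (front_b, seen_b, seen_f, predecessors)):
            state = frontier.popleft()
            if state in other_seen:          # the two searches have met
                return state
            for next_state in step(state):
                if next_state not in seen:
                    seen.add(next_state)
                    frontier.append(next_state)
    return False
```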
graph search algorithm
must remember already-expanded states
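A minimal graph-search sketch under the same assumptions, differing from tree search only in the set of already-expanded states:

```python
from collections import deque

def graph_search(start, goal_test, successors):
    # Tree search plus a set of already-expanded states, so repeated states
    # are expanded at most once (FIFO frontier here, i.e. breadth-first order).
    frontier = deque([(start, [])])
    expanded = set()
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        if state in expanded:
            continue                   # already expanded this state: skip it
        expanded.add(state)
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))
    return False
```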