Searching Flashcards

1
Q

Describe the simple problem-solving agent providing both the pseudocode and an informal description.

A

A particular type of goal-based agent: it acts to achieve its goals.

Representation: atomic (the states have no internal structure visible to the problem-solving algorithm)

Environment:

  • observable
  • known
  • deterministic

Solution: fixed sequence of actions

Search: process of looking for such a sequence

Search algorithm:

  • input: problem
  • output: an action sequence

The output sequence can then be executed without monitoring the environment during execution.

The agent repeatedly:

  • Formulates a goal and a problem.
  • Searches for a sequence of actions that would solve the problem.
  • Executes the actions one at a time.

Pseudocode: (image in the original card; a minimal sketch follows below)
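
The original card shows the pseudocode as an image. As a stand-in, here is a minimal Python sketch of the same agent loop; the helper functions update_state, formulate_goal, formulate_problem, and search are assumptions supplied by the caller, not part of the original card.

```python
class SimpleProblemSolvingAgent:
    """Goal-based agent: formulate a goal and a problem, search for a plan,
    then execute the plan one action at a time."""

    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        # All four helpers are assumptions supplied by the caller.
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search
        self.state = None
        self.seq = []                               # remaining actions of the current plan

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                            # no plan left: formulate and search
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem) or []   # search may fail (empty plan)
            if not self.seq:
                return None                         # null action
        return self.seq.pop(0)                      # execute the plan one action at a time
```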

2
Q

Provide the formal definition of search problem (describe its components and transition model), solutions, and optimal solution in a search problem.

A

A problem can be defined formally by:

  • Initial state: the state that the agent starts in
  • Actions available to the agent
    • Given a state s, ACTIONS(s) returns the set of actions that can be executed in s
    • We say that each of these actions is applicable in s
  • Transition model: description of what each action does.
    • RESULT(s,a) returns the state obtained from doing action a in state s
    • Successor: any state reachable from a given state by a single action
  • Goal test: checks whether a given state is a goal state.
  • Path cost: numeric value associated to each path reflecting the desired performance measure. We assume path costs to be additive: sum of step costs.

Step cost c(x, a, y): the cost of going from state x to state y by performing action a; assumed to be ≥ 0.

Solution: a sequence of actions (path) leading from the initial state to a goal state.

Optimal solution: a solution with minimal path-cost.
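
As an illustration that is not part of the original card, here is a minimal Python sketch of these components on a toy route-finding graph; the Problem class, its method names, and the example states and costs are all assumptions made for the example.

```python
class Problem:
    """Toy search problem: states are graph nodes, an action means
    'move to that neighbour', and step costs label the edges."""

    def __init__(self, initial, goal, graph):
        self.initial = initial                 # initial state
        self.goal = goal
        self.graph = graph                     # {state: {neighbour: step cost}}

    def actions(self, s):                      # ACTIONS(s): actions applicable in s
        return list(self.graph[s])

    def result(self, s, a):                    # RESULT(s, a): transition model
        return a                               # here the action *is* the successor

    def goal_test(self, s):                    # goal test
        return s == self.goal

    def step_cost(self, s, a, s2):             # c(x, a, y), assumed >= 0
        return self.graph[s][s2]


def path_cost(problem, path):
    """Additive path cost: the sum of the step costs along the path."""
    return sum(problem.step_cost(x, y, y) for x, y in zip(path, path[1:]))


graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
p = Problem("A", "C", graph)
print(p.actions("A"), path_cost(p, ["A", "B", "C"]))   # ['B', 'C'] 3
```

In this toy instance, ["A", "B", "C"] is a solution with path cost 3, and it is also the optimal solution since the direct edge from A to C costs 4.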

3
Q

Describe Breadth-first search and Depth-first search. What are the main differences between them?

A

Breadth-first search

Expand shallowest unexpanded node.

Implementation: the frontier is a FIFO queue, i.e., new successors go at the end.

Goal test is applied to each node when it is generated rather than when it is selected for expansion.

Properties:

  • Completeness: yes, provided the branching factor b is finite
  • Optimality: only if all step costs are equal (e.g., cost = 1 per step)
  • Time: O(b^d), where d is the depth of the shallowest goal (O(b^(d+1)) if the goal test were applied at expansion instead of at generation)
  • Space: O(b^d), keeps every generated node in memory

Depth-first search

Expand deepest unexpanded node.

Implementation: the frontier is a LIFO queue (a stack), i.e., successors go at the front.

Properties:

  • Completeness: only in finite state spaces
  • Optimality: no
  • Time: O(b^m), where m is the maximum depth of the state space; terrible if m is much larger than d
  • Space: O(bm), i.e., linear in the depth

Main differences: BFS expands the shallowest node (FIFO frontier), DFS the deepest (LIFO frontier). BFS is complete (for finite b) and optimal for unit step costs but keeps every generated node in memory, whereas DFS needs only linear memory but is neither complete in infinite state spaces nor optimal.
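
The two strategies can share one skeleton and differ only in how the frontier is managed: a FIFO queue for BFS versus a LIFO stack for DFS. The sketch below is an illustration (not from the card) and assumes the toy Problem interface from the previous answer (initial, actions, result, goal_test).

```python
from collections import deque

def breadth_first_search(problem):
    """Frontier is a FIFO queue; the goal test is applied when a node is generated."""
    if problem.goal_test(problem.initial):
        return [problem.initial]
    frontier = deque([[problem.initial]])         # queue of paths
    explored = {problem.initial}
    while frontier:
        path = frontier.popleft()                 # FIFO: shallowest path first
        for a in problem.actions(path[-1]):
            child = problem.result(path[-1], a)
            if child not in explored:
                if problem.goal_test(child):      # test at generation time
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])   # new successors go at the end
    return None                                   # no solution

def depth_first_search(problem):
    """Same skeleton, but the frontier is a LIFO stack: successors go at the front."""
    frontier = [[problem.initial]]
    explored = set()
    while frontier:
        path = frontier.pop()                     # LIFO: deepest path first
        s = path[-1]
        if problem.goal_test(s):
            return path
        explored.add(s)
        for a in problem.actions(s):
            child = problem.result(s, a)
            if child not in explored and all(p[-1] != child for p in frontier):
                frontier.append(path + [child])
    return None
```
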
4
Q

Describe greedy best-First search.

A

Informed search strategies use problem-specific knowledge to speed up the search process.

Best-first search: a node is chosen for expansion based on an evaluation function f(n), which is a cost estimate; the node with the lowest f(n) is expanded first.

A component of f is the heuristic function h(n): the estimated cost of the cheapest path from the state of node n to a goal state.

  • depends only on the state associated with n
  • h(n) is non-negative
  • h(n) = 0 at every goal state

Special cases of best-first search: greedy best-first search and A*.

Greedy best first search

Expands the node that appears to be closest to the goal: f(n) = h(n).

Properties:

  • Completeness: No, can be trapped in dead ends.
  • Optimality: No
  • Time: O(b^m), but a good heuristic can give dramatic improvement
  • Space: O(b^m), keeps all nodes in memory
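
A minimal sketch of greedy best-first search, assuming the same toy Problem interface as in the earlier cards and a user-supplied heuristic h; the priority queue is ordered by h(n) alone.

```python
import heapq
import itertools

def greedy_best_first_search(problem, h):
    """Best-first search with f(n) = h(n): expand the node that looks closest to a goal."""
    counter = itertools.count()        # tie-breaker so the heap never compares paths
    frontier = [(h(problem.initial), next(counter), [problem.initial])]
    explored = set()
    while frontier:
        _, _, path = heapq.heappop(frontier)   # node with the lowest h(n)
        s = path[-1]
        if problem.goal_test(s):
            return path
        if s in explored:
            continue
        explored.add(s)
        for a in problem.actions(s):
            child = problem.result(s, a)
            if child not in explored:
                heapq.heappush(frontier, (h(child), next(counter), path + [child]))
    return None
```
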
5
Q

Describe algorithm A*.

15/06/2021

A

Informed search strategies use problem-specific knowledge to speed up the search process.

Best-first search: a node is chosen for expansion based on an evaluation function f(n), which is a cost estimate; the node with the lowest f(n) is expanded first.

A component of f is the heuristic function h(n): the estimated cost of the cheapest path from the state of node n to a goal state.

  • depends only on the state associated with n
  • h(n) is non-negative
  • h(n) = 0 at every goal state

Special cases of best-first search: greedy best-first search and A*.

A* search

Avoid expanding paths that are already expensive.

f(n) = g(n) + h(n)

  • g(n) = path cost from the start node to node n
  • h(n) = estimated cost of the cheapest path from n to goal
  • f(n) = estimated cost of the cheapest solution through n

We expand the node with lowest f(n)

Properties:

  • Completeness: yes, unless there are infinitely many nodes with f(n) ≤ f(G), where G is a goal
  • Optimality: yes, provided h is admissible (consistent, for the graph-search version)
  • Time: O(b^m) in the worst case
  • Space: O(b^m), keeps all nodes in memory
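
A minimal sketch of A*, again assuming the toy Problem interface (including a step_cost method) and a user-supplied heuristic h; nodes are popped from a priority queue ordered by f(n) = g(n) + h(n).

```python
import heapq
import itertools

def a_star_search(problem, h):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    counter = itertools.count()                          # tie-breaker for the heap
    start = problem.initial
    frontier = [(h(start), next(counter), 0, [start])]   # entries: (f, tie, g, path)
    best_g = {start: 0}                                  # cheapest g found so far per state
    while frontier:
        f, _, g, path = heapq.heappop(frontier)
        s = path[-1]
        if problem.goal_test(s):
            return path, g                               # solution and its path cost
        if g > best_g.get(s, float("inf")):
            continue                                     # stale queue entry, skip it
        for a in problem.actions(s):
            child = problem.result(s, a)
            g2 = g + problem.step_cost(s, a, child)      # g(n') = g(n) + c(n, a, n')
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), next(counter), g2, path + [child]))
    return None, float("inf")
```
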
6
Q

Give the definitions of consistent heuristic and admissible heuristic.

15/06/2021

A

Admissible heuristic: h is admissible if for every node n: h(n) ≤ h*(n),

where h*(n) is the true cost of the cheapest path from n to a goal.

It never overestimates the cost to reach the goal.

Consistent heuristic: h is consistent if for every node n and every successor n' of n generated by an action a: h(n) ≤ c(n, a, n') + h(n') (a form of triangle inequality). A consistent heuristic with h = 0 at goal states is also admissible.
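
As a small illustration that is not part of the card, both definitions can be checked mechanically on a finite problem; the edge list, the dictionary of true costs h_star, and the function names below are assumptions.

```python
def is_admissible(h, h_star, states):
    """h(n) <= h*(n) for every state, where h_star[n] is the true cost to the goal."""
    return all(h(n) <= h_star[n] for n in states)

def is_consistent(h, edges):
    """h(n) <= c(n, a, n') + h(n') for every edge (n, n', c)."""
    return all(h(n) <= c + h(n2) for n, n2, c in edges)

# toy example: a chain of states 0-1-2 with goal 2 and unit step costs
edges = [(0, 1, 1), (1, 2, 1)]
h_star = {0: 2, 1: 1, 2: 0}
h = lambda n: max(0, 2 - n)            # equal to h*, hence admissible and consistent
print(is_admissible(h, h_star, h_star), is_consistent(h, edges))   # True True
```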

7
Q

Describe hill climbing search providing both the pseudocode and an informal description.

Explain why hill climbing often gets stuck.

A

Local Search algorithms

In many optimization problems the path from the start node to the goal is irrelevant; we care only about the goal state itself.

They keep a single node and move to adjacent nodes.

Useful for solving optimization problems, where the goal is to find the best state according to an objective function.

State space = set of “complete” configurations

Find configuration satisfying constraints.

Local search algorithms explore the state-space landscape.

State-space landscape: it has a location (defined by the state) and an elevation (defined by the value of heuristic cost function or objective function)

The aim is to find: a global minimum (lowest valley) if elevation corresponds to cost or a global maximum (highest peak ) if elevation corresponds to objective function.

Hill-climbing search

Assume the elevation corresponds to the objective function.

Modifies the current state to try to improve it.

At each step: picks a neighbor with the highest value (usually chosen at random among the neighbors with the maximum value).

Terminates when it reaches a “peak” where no neighbor has a higher value.

Pseudocode: (image in the original card; a minimal sketch follows below)
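
The original card shows the pseudocode as an image. Below is a minimal Python sketch of steepest-ascent hill climbing; neighbors(s) and value(s) are assumptions supplied by the caller.

```python
import random

def hill_climbing(initial, neighbors, value):
    """Repeatedly move to a best neighbour; stop at a peak (no neighbour is better)."""
    current = initial
    while True:
        succ = neighbors(current)
        if not succ:
            return current
        best = max(value(s) for s in succ)
        if best <= value(current):       # peak or flat local maximum: no uphill move left
            return current
        # pick at random among the neighbours with the maximum value
        current = random.choice([s for s in succ if value(s) == best])
```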

Problem: often gets stuck in

  • Local maxima: peaks that are higher than each of their neighboring states but lower than the global maximum.
  • Plateaux: flat areas of the state-space landscape; a plateau can be a flat local maximum, from which no uphill progress is possible, or a shoulder, from which progress is possible.
8
Q

Describe the local beam search algorithm.

15/06/2021

A

Keep track of k states rather than just one.

  • Start with k randomly generated states.
  • At each iteration all the successors of all k states are generated
  • If any one of them is a goal state the algorithm stops; otherwise it selects the k best successors from the complete list and repeats (they could all be successors of the same state).
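
A minimal sketch under the following assumptions (none of which are in the card): random_state(), successors(s), value(s), and goal_test(s) are user-supplied helpers, and max_iters is an added safety bound.

```python
def local_beam_search(k, random_state, successors, value, goal_test, max_iters=1000):
    """Keep the k best states; stop as soon as any generated successor is a goal."""
    states = [random_state() for _ in range(k)]            # k random start states
    for _ in range(max_iters):
        pool = [c for s in states for c in successors(s)]  # all successors of all k states
        for c in pool:
            if goal_test(c):
                return c                                   # a goal was generated: stop
        if not pool:
            break
        states = sorted(pool, key=value, reverse=True)[:k]  # k best (may share one parent)
    return max(states, key=value)                          # best state found so far
```
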
9
Q

Describe the simulated annealing algorithm and provide its pseudocode.

(Invented question, not in past exams)

A

Version of stochastic hill climbing where some downhill moves are allowed.

If the move improves the situation it is always accepted; otherwise it is accepted with some probability less than 1 that decreases exponentially over time (as the temperature T decreases).

Idea: escape local maxima by allowing some “bad” moves but gradually decrease their frequency.

One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1.

Pseudocode: (image in the original card; a minimal sketch follows below)
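
The original card shows the pseudocode as an image. Below is a minimal Python sketch of the annealing loop; neighbors(s), value(s), and the cooling schedule schedule(t) are assumptions supplied by the caller.

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule):
    """Stochastic hill climbing that accepts some downhill moves,
    with a probability that shrinks as the temperature T decreases."""
    current = initial
    t = 1
    while True:
        T = schedule(t)                              # temperature at time t
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))      # a random successor
        delta = value(nxt) - value(current)          # > 0 means an uphill (good) move
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt                            # bad moves accepted with prob e^(delta/T)
        t += 1

# example cooling schedule (an assumption, not taken from the course slides):
# schedule = lambda t: 0 if t > 10_000 else 0.995 ** t
```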
