Lecture 2 - Searches Flashcards

1
Q

Search

A

The process of looking for a sequence of actions that reaches the goal

  • Input: problem
  • Output: solution in the form of an action sequence
2
Q

[Basic | uninformed | blind] search

A

Search strategies that have no additional information about states beyond that provided in the problem definition.

3
Q

Basic search:

Depth-first search (DFS)

A

At every node visited, pick one of the children and work forward.

Other alternatives are ignored as long as there is a chance of reaching the goal.

When a dead end is reached, the algorithm backtracks to the previous choice point.
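
A minimal Python sketch of this behaviour; the example graph, node names, and goal test are assumptions made for illustration, not from the lecture:

```python
# Depth-first search with backtracking on an explicit graph.
def dfs(graph, start, goal, path=None, visited=None):
    path = [start] if path is None else path + [start]
    visited = set() if visited is None else visited
    visited.add(start)
    if start == goal:
        return path                      # complete path found
    for child in graph.get(start, []):   # pick one child and work forward
        if child not in visited:
            result = dfs(graph, child, goal, path, visited)
            if result is not None:       # still a chance of reaching the goal
                return result
    return None                          # dead end: backtrack to the previous choice point

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["G"]}
print(dfs(graph, "S", "G"))              # ['S', 'B', 'D', 'G']
```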

4
Q

Breadth-first search* (BFS)

A

Check all paths of a given length before moving on to the next level.
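
A minimal Python sketch along the same lines; graph and goal are again assumptions for illustration:

```python
from collections import deque

# Breadth-first search: all paths of length k are examined before any
# path of length k + 1, by keeping partial paths in a FIFO queue.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # oldest (shortest) partial path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": [], "G": []}
print(bfs(graph, "S", "G"))              # ['S', 'A', 'G']
```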

5
Q

Depth-first search when …

A

When you are confident that complete paths or dead ends are found after a reasonable number of steps.

6
Q

Breadth-first search when …
- example

A

When you are working with very deep trees,

BUT not when you think that all paths reach the goal at about the same depth, because:

using breadth-first search on very deep search trees whose paths reach the goal at similar depths is inefficient: it requires storing and exploring a large number of nodes at the shallow levels before reaching the goal,

-> leading to excessive memory usage.

Example:
in a chess game where each level represents a move, exhaustively exploring all possible moves at one level before going deeper may be impractical because of the vast number of moves and board configurations.
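
A rough back-of-the-envelope sketch of that memory growth; the branching factors and depths below are illustrative assumptions, not figures from the lecture:

```python
# With branching factor b, the BFS frontier around depth d holds on the
# order of b**d partial paths, all of which must be stored.
for b, d in [(2, 10), (10, 6), (16, 10)]:
    print(f"b = {b}, d = {d}: about {b**d:,} nodes at that level")
```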

7
Q

Non-deterministic search when…

A

If you don’t know much about the problem set

8
Q

If you have more information and the problem set is heuristically informed (2)

A

-> Hill climbing*

-> Beam search*

9
Q

Non-deterministic search algorithm
- definition
- Example

A

It explores different possibilities without knowing the exact outcome, allowing for a more flexible exploration of the search space.

Example:

imagine a robot trying to navigate through a maze. In a non-deterministic search, at each intersection, the robot may randomly choose one of the available paths without knowing which one is the correct route. This randomness allows the algorithm to explore multiple paths simultaneously, potentially finding a solution more efficiently, especially in scenarios with uncertain or changing environments.
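
A minimal Python sketch in the spirit of the maze-robot example; the graph and the step limit are assumptions for illustration:

```python
import random

# Non-deterministic search: at each node one unvisited child is picked at
# random, so repeated runs may explore different paths.
def random_search(graph, start, goal, limit=100):
    node, path = start, [start]
    for _ in range(limit):
        if node == goal:
            return path
        children = [c for c in graph.get(node, []) if c not in path]
        if not children:
            return None                  # dead end
        node = random.choice(children)   # outcome of each step is not fixed
        path.append(node)
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(random_search(graph, "S", "G"))    # e.g. ['S', 'B', 'G']
```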

10
Q

Hill climbing algorithm
- definition
- example

A

It chooses the neighbor with the highest improvement, “climbing” up the metaphorical hill, until it reaches a peak where no better solution can be found locally.

Example:

imagine a traveller trying to reach the highest point on a mountain.

Starting from any location, the traveller takes steps in the steepest uphill direction.

At each step, they move to the adjacent point with the highest elevation.

The process continues until they reach a peak where no higher point can be reached with a single step, representing a locally optimal solution.

-> it is the same as DFS, but with a preference: the most promising child (according to the heuristic) is explored first

-> it is a greedy algorithm*
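
A minimal Python sketch of the traveller version on an assumed 1-D elevation function; the lecture's tree-search framing would instead order the children of a DFS node by their heuristic values:

```python
# Hill climbing: greedily step to the best neighbour until no neighbour
# improves on the current position (a local peak).
def hill_climb(elevation, x, step=1):
    while True:
        neighbours = [x - step, x + step]
        best = max(neighbours, key=elevation)   # greedy local choice
        if elevation(best) <= elevation(x):
            return x                            # local peak reached
        x = best

elevation = lambda x: -(x - 7) ** 2             # single peak at x = 7
print(hill_climb(elevation, 0))                 # 7
```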

11
Q

Beam search algorithm
- Definition
- Example

A

Modification of BFS, in the same manner as DFS is modified in Hill climbing

Beam search performs breadth-first search,

-> moving downward only through the best w nodes.

Example:

in natural language processing, beam search is commonly employed in machine translation; if translating a sentence, the algorithm considers only a fixed number of best possible translations at each step, leading to a more efficient and practical search process.
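
A minimal Python sketch with an assumed graph, heuristic table, and beam width w:

```python
# Beam search: level-by-level expansion that keeps only the w best
# partial paths at each level, ranked by a heuristic on their last node.
def beam_search(graph, start, goal, heuristic, w=2):
    beam = [[start]]
    while beam:
        candidates = []
        for path in beam:
            if path[-1] == goal:
                return path
            for child in graph.get(path[-1], []):
                candidates.append(path + [child])
        beam = sorted(candidates, key=lambda p: heuristic(p[-1]))[:w]
    return None

graph = {"S": ["A", "B", "C"], "A": ["G"], "B": ["G"], "C": []}
h = {"S": 3, "A": 1, "B": 1, "C": 5, "G": 0}
print(beam_search(graph, "S", "G", h.get))      # ['S', 'A', 'G']
```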

12
Q

Optimal search algorithm

A

Finding just any path from start to goal is not always enough.

For example, I do not want to plan just any route from work to a bar; I want to find the shortest possible path.

But we don’t want to evaluate all possible paths.

-> Branch-and-Bound*

13
Q

Branch-and-bound algorithm
- definition
- example

A

It works by expanding nodes ordered by traveled distance.

-> Once it has found a solution

-> Ignores paths with distances that are equal to or greater than the distance for that solution

Example:

Consider a traveling salesperson trying to find the shortest route to visit a set of cities. The algorithm would start with an initial route, then systematically explore different routes by considering all possible combinations of cities. As it explores, it keeps track of the best route found so far and uses this information to eliminate paths that cannot lead to a better solution. This process continues until the optimal route is discovered.
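
A minimal Python sketch with assumed edge costs (not a full travelling-salesperson solver):

```python
import heapq

# Branch-and-bound: expand partial paths in order of accumulated
# travelled distance; once a solution is known, prune paths that are
# already at least as long as it.
def branch_and_bound(graph, start, goal):
    best = None                               # best complete solution so far
    queue = [(0, [start])]                    # (distance travelled, path)
    while queue:
        dist, path = heapq.heappop(queue)     # shortest partial path first
        if best and dist >= best[0]:
            continue                          # bound: cannot beat the best solution
        if path[-1] == goal:
            best = (dist, path)
            continue
        for child, cost in graph.get(path[-1], []):
            if child not in path:
                heapq.heappush(queue, (dist + cost, path + [child]))
    return best

graph = {"S": [("A", 2), ("B", 5)], "A": [("G", 6)], "B": [("G", 1)]}
print(branch_and_bound(graph, "S", "G"))      # (6, ['S', 'B', 'G'])
```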

14
Q

Informing branch-and-bound about the estimated distance left

A

-> heuristic branch-and-bound

15
Q

Heuristic branch-and-bound algorithm
- definition
- Example

A

After all, e(total path length) = d(already traveled) + e(distance remaining)

-> use accumulated distances + estimates to expand nodes.

Example:

in the traveling salesman problem, it might use a heuristic to prioritize visiting cities based on proximity, gradually refining the tour for a more efficient solution.
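
A minimal Python sketch that orders the queue by d(already traveled) + e(distance remaining); the graph, costs, and heuristic values are assumptions for illustration:

```python
import heapq

# Heuristic branch-and-bound: expand the partial path with the smallest
# e(total) = d(already travelled) + e(estimated distance remaining).
def heuristic_bnb(graph, start, goal, h):
    queue = [(h(start), 0, [start])]          # (estimated total, travelled, path)
    while queue:
        _, dist, path = heapq.heappop(queue)
        if path[-1] == goal:
            return dist, path
        for child, cost in graph.get(path[-1], []):
            if child not in path:
                g = dist + cost
                heapq.heappush(queue, (g + h(child), g, path + [child]))
    return None

graph = {"S": [("A", 2), ("B", 5)], "A": [("G", 6)], "B": [("G", 1)]}
h = {"S": 5, "A": 6, "B": 1, "G": 0}.get
print(heuristic_bnb(graph, "S", "G", h))      # (6, ['S', 'B', 'G'])
```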

16
Q

A* procedure

A

The A* procedure combines all of our discussed techniques:

  • branch-and-bound
  • distance estimates
  • redundant path removal

It guarantees finding an optimal result efficiently
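
A minimal Python sketch combining the heuristic branch-and-bound ordering with redundant-path removal via a table of best known costs; the graph, costs, and heuristic are assumptions for illustration:

```python
import heapq

# A*: branch-and-bound ordered by g + h, where redundant paths are dropped
# by remembering the cheapest known cost g to reach each node.
def a_star(graph, start, goal, h):
    best_g = {start: 0}
    queue = [(h(start), 0, [start])]
    while queue:
        f, g, path = heapq.heappop(queue)
        node = path[-1]
        if node == goal:
            return g, path
        for child, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):   # redundant path removal
                best_g[child] = new_g
                heapq.heappush(queue, (new_g + h(child), new_g, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 1), ("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 2, "B": 1, "G": 0}.get
print(a_star(graph, "S", "G", h))             # (3, ['S', 'A', 'B', 'G'])
```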

17
Q

Greedy algorithm

A

Algorithms that choose the locally optimal choice at each stage are known as greedy algorithms.

18
Q

Redundant*

A

“Redundant” refers to the presence of excessive or unnecessary elements, often implying duplication or repetition that does not add significant value to a system or process.

19
Q

Adversarial search

A

Situations where two agents both make choices (games such as checkers, chess, etc.)

-> Adversarial search

We can represent games in game trees, similar to the trees we discussed earlier.

Considering chess to have a game tree with branching factor b = 16 and depth d = 100,

-> we can see that exhaustive search is again not feasible: roughly 10^120 possible states.
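
A quick sanity check of that figure from the b and d values quoted above:

```python
from math import log10

# Number of leaves in a game tree with branching factor b and depth d is b**d.
b, d = 16, 100
print(f"16^100 is about 10^{d * log10(b):.0f}")   # 16^100 is about 10^120
```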

20
Q

The 𝛼-𝛽 principle

A

If you have an idea that is surely bad, do not take time to see how truly awful it is

-> The 𝛼-𝛽 algorithm does not evaluate moves that are already known to be useless.
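
A minimal Python sketch of minimax with 𝛼-𝛽 pruning on a tiny hand-made game tree; the nested list of leaf scores is an assumption for illustration:

```python
# Minimax with alpha-beta pruning: a branch is abandoned as soon as it is
# provably no better than an alternative already available.
def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):            # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # "surely bad": stop evaluating
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [2, 9]]                       # two moves, each with two replies
print(alphabeta(tree))                        # 3 (the leaf 9 is never examined)
```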

21
Q

When is a game considered solved?

A

When, for every possible state, the optimal moves are known.

-> This allows for perfect play*

22
Q

Perfect play

A

The behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent.

Example:

in chess, perfect play would involve making the best move at each turn to maximize the chances of winning.
