Session 3 Flashcards

1
Q

What is the definition of a search strategy?

A

A search strategy is a method used by problem-solving agents to explore the state space and find a solution to a given problem.

2
Q

What is the purpose of a search strategy?

A

The purpose of a search strategy is to systematically traverse the state space to reach the goal state from the initial state.

3
Q

What are the two main types of search strategies?

A

The two main types of search strategies are uninformed search and informed search.

4
Q

What is an uninformed search strategy?

A

An uninformed search strategy is a method that does not use additional information about the goal state beyond the problem definition.

5
Q

What is an informed search strategy?

A

An informed search strategy is a method that uses heuristic information to estimate the cost of reaching the goal state.

6
Q

What is Breadth-First Search (BFS)?

A

BFS is an uninformed search strategy that explores all nodes at the current depth before moving to the next depth.
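
A minimal Python sketch of the idea (the adjacency-dict graph and the names start/goal are illustrative, not from the session):

    from collections import deque

    def bfs(graph, start, goal):
        # Expand nodes level by level using a FIFO queue.
        frontier = deque([start])
        parents = {start: None}          # also serves as the visited set
        while frontier:
            node = frontier.popleft()
            if node == goal:
                path = []                # reconstruct the path via parent links
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            for neighbor in graph[node]:
                if neighbor not in parents:
                    parents[neighbor] = node
                    frontier.append(neighbor)
        return None                      # goal not reachable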

7
Q

What is Depth-First Search (DFS)?

A

DFS is an uninformed search strategy that explores as far as possible along a branch before backtracking.
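
A sketch with the same illustrative graph dict; the recursion follows one branch to its end before backing up:

    def dfs(graph, start, goal, visited=None):
        # Go as deep as possible along one branch before backtracking.
        if visited is None:
            visited = set()
        if start == goal:
            return [start]
        visited.add(start)
        for neighbor in graph[start]:
            if neighbor not in visited:
                path = dfs(graph, neighbor, goal, visited)
                if path is not None:
                    return [start] + path
        return None                      # dead end: backtrack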

8
Q

What is Uniform Cost Search (UCS)?

A

UCS is an uninformed search strategy that expands the node with the lowest path cost.
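
A rough sketch using a priority queue ordered by path cost g(n); here graph[node] is assumed to hold (neighbor, step_cost) pairs:

    import heapq

    def uniform_cost_search(graph, start, goal):
        frontier = [(0, start)]                    # ordered by path cost g(n)
        best_cost = {start: 0}
        while frontier:
            cost, node = heapq.heappop(frontier)
            if node == goal:
                return cost                        # cheapest path cost
            if cost > best_cost.get(node, float("inf")):
                continue                           # stale queue entry
            for neighbor, step in graph[node]:
                new_cost = cost + step
                if new_cost < best_cost.get(neighbor, float("inf")):
                    best_cost[neighbor] = new_cost
                    heapq.heappush(frontier, (new_cost, neighbor))
        return None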

9
Q

What is Iterative Deepening Search (IDS)?

A

IDS is an uninformed search strategy that combines the space efficiency of DFS with the completeness of BFS by performing DFS with increasing depth limits.
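
One way to sketch this (illustrative names, same unweighted graph dict as before): run a depth-limited DFS repeatedly with limits 0, 1, 2, ...

    def depth_limited_dfs(graph, node, goal, limit):
        # DFS that refuses to go deeper than the given limit.
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for neighbor in graph[node]:
            path = depth_limited_dfs(graph, neighbor, goal, limit - 1)
            if path is not None:
                return [node] + path
        return None

    def iterative_deepening_search(graph, start, goal, max_depth=50):
        for limit in range(max_depth + 1):
            path = depth_limited_dfs(graph, start, goal, limit)
            if path is not None:
                return path
        return None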

10
Q

What is a heuristic function?

A

A heuristic function is an estimate of the cost to reach the goal from a given state.
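
For example, in a grid navigation problem a common choice (an illustration, not session-specific) is the Manhattan distance, which ignores obstacles and so never overestimates the true cost:

    def manhattan_distance(state, goal):
        # Estimated remaining cost: grid moves needed if nothing were in the way.
        (x1, y1), (x2, y2) = state, goal
        return abs(x1 - x2) + abs(y1 - y2)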

11
Q

What is the key advantage of A* search?

A

A* search guarantees the optimal solution if the heuristic is admissible (never overestimates the cost).
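
A sketch of how A* orders its frontier by f(n) = g(n) + h(n); graph[node] again holds (neighbor, step_cost) pairs and h is the heuristic function:

    import heapq

    def a_star(graph, start, goal, h):
        frontier = [(h(start), 0, start)]          # ordered by f(n) = g(n) + h(n)
        best_g = {start: 0}
        while frontier:
            f, g, node = heapq.heappop(frontier)
            if node == goal:
                return g                           # optimal if h never overestimates
            if g > best_g.get(node, float("inf")):
                continue                           # stale queue entry
            for neighbor, step in graph[node]:
                new_g = g + step
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    heapq.heappush(frontier, (new_g + h(neighbor), new_g, neighbor))
        return None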

12
Q

What is the main limitation of DFS?

A

DFS is neither optimal nor complete in general: it may return a suboptimal solution and can descend forever along an infinite (or cyclic) branch.

13
Q

What is the main advantage of BFS?

A

BFS is complete and, when all step costs are equal (e.g., unweighted graphs), it returns an optimal (shallowest) solution.

14
Q

What is the main limitation of UCS?

A

UCS uses no information about where the goal lies, so it can expand large numbers of low-cost nodes in every direction before reaching the goal, making it slow when many nodes have similar costs.

15
Q

What is the main advantage of IDS?

A

IDS is complete and uses less memory than BFS.

16
Q

What is the main limitation of Greedy Best-First Search?

A

Greedy Best-First Search does not guarantee the optimal solution.

17
Q

What is the purpose of a state space representation?

A

State space representation is used to model all possible states and transitions in a problem.

18
Q

What is a search tree?

A

A search tree is a tree representation of the state space where nodes represent states and edges represent actions.

19
Q

What is the role of feedback in learning agents?

A

Feedback helps learning agents improve their performance by evaluating actions and guiding the learning process.

20
Q

What are the four components of a learning agent?

A

The four components of a learning agent are the performance element, learning element, critic, and problem generator.

21
Q

What is supervised learning?

A

Supervised learning is a method where the agent learns from labeled data with known outcomes.

22
Q

What is unsupervised learning?

A

Unsupervised learning is a method where the agent learns patterns and structures from unlabeled data.

23
Q

What is reinforcement learning?

A

Reinforcement learning is a method where the agent learns by interacting with the environment and receiving rewards or penalties.
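
The card does not name a particular algorithm; as one common illustration, a tabular Q-learning update (the table Q and the parameters alpha and gamma are assumptions for this sketch) nudges a value estimate toward the reward plus the best estimate for the next state:

    def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
        # Q is assumed to map state -> {action: estimated value}.
        best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])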

24
Q

What is the role of the critic in a learning agent?

A

The critic evaluates the agent’s actions and provides feedback to guide learning.

25
Q

What is the role of the problem generator in a learning agent?

A

The problem generator suggests exploratory actions to improve the agent’s learning.

26
Q

What is the main advantage of learning agents?

A

Learning agents can adapt to dynamic and uncertain environments.

27
Q

What is the main challenge of learning agents?

A

Learning agents require large amounts of data and computational resources.

28
Q

What is the Grid World problem?

A

The Grid World problem is a two-dimensional grid where agents move between cells to achieve a goal.
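
A minimal sketch of a Grid World successor function (the grid bounds and the walls set are illustrative assumptions):

    def grid_successors(state, width, height, walls):
        # Legal moves from cell (x, y) in a bounded grid with blocked cells.
        x, y = state
        moves = {"up": (x, y - 1), "down": (x, y + 1),
                 "left": (x - 1, y), "right": (x + 1, y)}
        return {a: pos for a, pos in moves.items()
                if 0 <= pos[0] < width and 0 <= pos[1] < height and pos not in walls}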

29
Q

What is the Sliding-Tile Puzzle?

A

The Sliding-Tile Puzzle involves moving tiles into an empty space to achieve a specific configuration.

30
Q

What is the Sokoban Puzzle?

A

The Sokoban Puzzle is a grid-based puzzle where the agent pushes boxes to designated storage locations.

31
Q

What is the Traveling Salesperson Problem (TSP)?

A

TSP is a problem where the agent must visit all cities and return to the starting point while minimizing travel cost.

32
Q

What is the main challenge of TSP?

A

TSP is NP-hard: the number of possible tours grows factorially with the number of cities, so exhaustive search quickly becomes intractable.
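
A brute-force sketch makes the growth concrete: with n cities there are (n-1)! tours to try (the distance table dist and the city list are illustrative assumptions):

    from itertools import permutations

    def brute_force_tsp(dist, cities):
        start, rest = cities[0], cities[1:]
        best_tour, best_cost = None, float("inf")
        for perm in permutations(rest):            # (n-1)! orderings
            tour = (start,) + perm + (start,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_tour, best_cost = tour, cost
        return best_tour, best_cost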

33
Q

What is the purpose of route-finding problems?

A

Route-finding problems aim to find the shortest or optimal path between two locations.

34
Q

What is the role of a utility function in utility-based agents?

A

A utility function evaluates the desirability of different states to help the agent optimize its actions.

35
Q

What is a simple reflex agent?

A

A simple reflex agent selects actions based solely on the current percept, ignoring percept history.

36
Q

What is a model-based agent?

A

A model-based agent maintains an internal model of the world to handle partially observable environments.

37
Q

What is a goal-based agent?

A

A goal-based agent makes decisions based on goals it needs to achieve.

38
Q

What is a utility-based agent?

A

A utility-based agent uses a utility function to evaluate and optimize its actions.

39
Q

What is the main limitation of simple reflex agents?

A

Simple reflex agents cannot handle partially observable environments or learn from experiences.

40
Q

What is the main advantage of model-based agents?

A

Model-based agents can handle partially observable environments by maintaining an internal state.

41
Q

What is the main limitation of goal-based agents?

A

Goal-based agents may require significant computational resources for complex goals.

42
Q

What is the main advantage of utility-based agents?

A

Utility-based agents can handle trade-offs between conflicting goals and optimize performance.

43
Q

What is the main advantage of learning agents over other types?

A

Learning agents can improve their performance over time by learning from feedback and experiences.

44
Q

What is the Vacuum Cleaner Problem?

A

The Vacuum Cleaner Problem involves an agent cleaning dirt in a grid-based environment.

45
Q

What is the purpose of automatic assembly sequencing?

A

Automatic assembly sequencing determines the order of assembling parts to create a product.

46
Q

What is the main advantage of A* search over UCS?

A

A* orders nodes by the sum of the path cost used by UCS and a heuristic estimate of the remaining cost, so it typically finds the optimal solution while expanding far fewer nodes.

47
Q

What is the main limitation of Greedy Best-First Search compared to A*?

A

Greedy Best-First Search expands nodes by heuristic value alone and does not guarantee an optimal solution, while A* with an admissible heuristic does.

48
Q

What is the role of exploration in reinforcement learning?

A

Exploration allows the agent to try new actions to discover better strategies.

49
Q

What is the role of exploitation in reinforcement learning?

A

Exploitation involves using known strategies to maximize rewards.

50
Q

What is the main challenge in balancing exploration and exploitation?

A

Balancing exploration and exploitation is challenging because too much exploration can waste resources, while too much exploitation can prevent discovering better strategies.
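
One standard way to strike this balance (not specified in the session; the value table Q and epsilon are assumptions for the sketch) is an epsilon-greedy policy: explore with a small probability, otherwise exploit the best-known action:

    import random

    def epsilon_greedy(Q, state, actions, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(actions)                # explore: try something new
        return max(actions, key=lambda a: Q[state][a])   # exploit: best known action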