Lecture 2 Flashcards

1
Q

What is an agent in the context of AI?

A

An entity with sensors to perceive the environment and actuators to manipulate it.

2
Q

What are the three levels of abstraction for studying agents?

A

Functional (input-output), program (implementation), and architecture (system design).

3
Q

What is a simple reflex agent?

A

An agent that selects actions based only on the current percept, using condition-action rules.

4
Q

What is a model-based reflex agent?

A

An agent that maintains an internal state to deal with partially observable environments.

5
Q

What is a goal-based agent?

A

An agent that reasons about the future to achieve a specific goal.

6
Q

What is the PEAS acronym used for?

A

To define an agent’s task environment: Performance measure, Environment, Actuators, and Sensors.

7
Q

What is the difference between a fully observable and a partially observable environment?

A

In a fully observable environment, the agent’s sensors can detect all relevant aspects for decision-making; in a partially observable one, they cannot.

8
Q

What is a deterministic environment?

A

An environment where the next state is completely determined by the current state and the agent’s action.

9
Q

What is a stochastic environment?

A

An environment where the next state involves some element of chance or unpredictability.

10
Q

What is a static environment?

A

An environment that does not change while the agent is deciding on its next action.

11
Q

What is a dynamic environment?

A

An environment that can change while the agent is deciding on its next action.

12
Q

What is the difference between discrete and continuous environments?

A

Discrete environments have a finite set of distinct states, percepts, and actions; continuous environments have quantities that vary over a continuous (infinite) range.

13
Q

What is a single-agent environment?

A

An environment where only one agent is taking actions.

14
Q

What is a multi-agent environment?

A

An environment where multiple agents are acting, often with competing goals.

15
Q

What is state-space representation?

A

A graph-based abstraction where nodes represent states and edges represent actions.

16
Q

Why is state-space representation useful?

A

It simplifies real-world problems into manageable abstractions for decision-making.
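For example, a state space can be written down as a plain adjacency list; the states and actions below are invented for illustration:

```python
# A toy state space: each key is a state (node), and each listed
# neighbor is a state reachable by one action (edge).
# All names are hypothetical.
state_space = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],          # a state with no outgoing actions
}

# The successors of a state are just its outgoing edges.
successors = state_space["A"]  # ["B", "C"]
```

The search algorithms on the following cards all operate on exactly this kind of abstraction.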

17
Q

What is a fringe in search algorithms?

A

A data structure that stores all the nodes that are yet to be expanded in the search tree.

18
Q

How does Breadth-First Search (BFS) expand nodes?

A

BFS expands the shallowest unexpanded nodes level by level, using a FIFO queue.
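A minimal sketch of BFS over a dictionary-based state space (the graph and state names are made up for illustration):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: the fringe is a FIFO queue of paths,
    so the shallowest unexpanded node is always expanded next."""
    fringe = deque([[start]])
    visited = {start}
    while fringe:
        path = fringe.popleft()      # FIFO: oldest (shallowest) first
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None                      # goal not reachable

# Hypothetical state space:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))          # ['A', 'B', 'D']
```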

19
Q

How does Depth-First Search (DFS) expand nodes?

A

DFS expands the deepest unexpanded nodes first, using a LIFO stack.
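DFS is the same loop with the fringe changed from a queue to a stack (again on an invented example graph); note that the `visited` set is what keeps this sketch from looping forever on finite graphs with cycles:

```python
def dfs(graph, start, goal):
    """Depth-first search: the fringe is a LIFO stack of paths,
    so the deepest unexpanded node is always expanded next."""
    fringe = [[start]]               # a Python list used as a stack
    visited = set()
    while fringe:
        path = fringe.pop()          # LIFO: newest (deepest) first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                fringe.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A", "D"))          # ['A', 'C', 'D']
```

Note that DFS finds `A → C → D` here rather than BFS's `A → B → D`: a found path, but not necessarily the shallowest one.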

20
Q

What are the four performance metrics for evaluating search algorithms?

A

Completeness, Optimality, Time Complexity, and Space Complexity.

21
Q

What is iterative deepening search?

A

A search method that combines DFS's low memory use with BFS's completeness by running depth-limited search repeatedly with increasing depth limits (0, 1, 2, ...).
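The idea can be sketched in a few lines: depth-limited search (card 24) is the inner loop, and the outer loop raises the limit until the goal is found. The example graph is invented:

```python
def depth_limited(graph, node, goal, limit, path=None):
    """DFS that refuses to go deeper than `limit` edges."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        found = depth_limited(graph, nxt, goal, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Re-run depth-limited search with limits 0, 1, 2, ...:
    DFS-like memory use with a BFS-like shallowest-goal guarantee."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(iterative_deepening(graph, "A", "D"))   # ['A', 'B', 'D']
```

Re-expanding shallow nodes on every iteration looks wasteful, but in trees with a reasonable branching factor the deepest level dominates the cost, so the overhead is modest.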

22
Q

Why is BFS considered complete and optimal?

A

Because it systematically explores all nodes at one depth before moving deeper, so it is guaranteed to find the shallowest goal; this is optimal when all step costs are equal.

23
Q

Why is DFS considered incomplete?

A

Because it can get stuck in loops and may fail to explore all nodes.

24
Q

What is depth-limited search?

A

A variation of DFS that imposes a predefined depth limit to avoid infinite loops.

25
Q

What is the horizon effect in depth-limited search?

A

Important outcomes beyond the depth limit may be missed, leading to suboptimal decisions.

26
Q

What is the main advantage of iterative deepening search over BFS?

A

It uses far less memory (linear in the depth of the solution, like DFS) while still finding the shallowest goal, as BFS does.

27
Q

What is a data-driven approach in search?

A

An approach (forward chaining) where the agent starts from the current state and searches forward toward the goal.

28
Q

What is a goal-driven approach in search?

A

An approach (backward chaining) where the agent starts from the goal and searches backward toward the current state.

29
Q

What factors determine whether to use a data-driven or goal-driven approach?

A

The clarity of the goal and the branching factor in each direction.

30
Q

What are the main limitations of uninformed search methods?

A

They do not use problem-specific knowledge and have no preference for one state over another.