Session 2 Problem Solving Agent Flashcards

1
Q

What is a rational agent?

A

A rational agent perceives its environment and takes actions to achieve the best possible outcome based on its goals.

2
Q

What are the components of a rational agent?

A
  • Sensors: Perceive the environment.
  • Actuators: Perform actions.
  • Environment: External conditions.
  • Actions: Operations to achieve goals.
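These four components can be sketched as a minimal sense-act loop. This is only an illustration: the percept values and action names ("dirty", "suck", "move") are invented, not part of the cards.

```python
# Minimal sketch of a sense-act loop; percept values and action
# names ("dirty", "suck", "move") are made up for illustration.
class SimpleAgent:
    def __init__(self, policy):
        self.policy = policy  # maps the current percept to an action

    def step(self, percept):
        # Sensors deliver the percept; the returned action is what
        # the actuators would carry out in the environment.
        return self.policy(percept)

agent = SimpleAgent(lambda p: "suck" if p == "dirty" else "move")
print(agent.step("dirty"))   # -> suck
print(agent.step("clean"))   # -> move
```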
3
Q

What is the role of sensors in a rational agent?

A

Sensors gather information from the environment, enabling the agent to perceive its surroundings.

4
Q

What are actuators in a rational agent?

A

Actuators are mechanisms through which an agent acts upon the environment to achieve its goals.

5
Q

What does it mean for an agent to be autonomous?

A

An agent is autonomous if its actions are determined by its own percepts and experiences rather than relying solely on pre-programmed rules.

6
Q

What is the Traveller’s Problem?

A

It is a route-finding problem in which a traveller must reach a destination from a starting location at the lowest cost; it illustrates how a rational agent makes decisions in dynamic and uncertain environments to find an optimal path.

7
Q

How is the Traveller’s Problem represented?

A

It is represented as a graph where:
- Nodes are locations/states.
- Edges are transitions/actions.
- Weights represent costs (e.g., time, distance).
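This graph representation can be written directly as an adjacency dictionary. The node names and edge weights below are invented for illustration, not taken from the cards.

```python
# Nodes are locations, keys of the inner dicts are reachable
# neighbours (one per action), and the values are edge costs.
graph = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"D": 8},
    "D": {},
}

def path_cost(path):
    """Total cost of a path such as ['A', 'C', 'D']."""
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

print(path_cost(["A", "C", "D"]))  # 2 + 8 = 10
```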

8
Q

What is problem formulation in a problem-solving agent?

A

It involves defining the initial state, the available actions, a transition model (the state that results from each action), a goal test, and a path cost, which together specify the problem the agent needs to solve.
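As a sketch, these components map onto the methods of a problem class. The two-room vacuum world below is a standard toy example used only for illustration; the state encoding (location, set of dirty rooms) is an assumption.

```python
# Problem formulation for a two-room vacuum world.
# A state is (agent_location, frozenset_of_dirty_rooms).
class VacuumProblem:
    def initial_state(self):
        return ("A", frozenset({"A", "B"}))   # start in A, both rooms dirty

    def actions(self, state):
        return ["left", "right", "suck"]

    def result(self, state, action):
        # Transition model: the state that follows an action.
        loc, dirt = state
        if action == "suck":
            return (loc, dirt - {loc})
        return ("A", dirt) if action == "left" else ("B", dirt)

    def goal_test(self, state):
        return not state[1]                   # goal: no dirty rooms left

    def step_cost(self, state, action):
        return 1                              # uniform path cost

p = VacuumProblem()
print(p.result(p.initial_state(), "suck"))    # ('A', frozenset({'B'}))
```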

9
Q

What is state space?

A

The state space is the set of all states reachable from the initial state by any sequence of actions during problem-solving.
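For the two-room vacuum world often used as an example, the whole state space is small enough to enumerate. The state encoding below (location, set of dirty rooms) is an assumption for illustration.

```python
from itertools import product

# A state is (agent_location, set_of_dirty_rooms):
# 2 locations x 4 dirt configurations = 8 states in total.
locations = ["A", "B"]
dirt_configs = [frozenset(), frozenset("A"), frozenset("B"), frozenset("AB")]
state_space = list(product(locations, dirt_configs))
print(len(state_space))  # 8
```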

10
Q

What are the steps in problem-solving for an agent?

A
  1. Goal formulation.
  2. Problem formulation.
  3. Search for a solution.
  4. Execution of the solution.
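The four steps can be sketched end to end with a breadth-first search over a small hypothetical map: the goal and problem are formulated up front, the search finds a path, and the final loop stands in for execution. The map itself is made up.

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # made-up map

def bfs(start, goal):
    """Breadth-first search returning a start-to-goal path."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:            # 3. search finds a solution
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

plan = bfs("A", "D")                    # 1-2. goal + problem formulated above
for step in plan:                       # 4. execution of the solution
    print("visit", step)
```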
11
Q

What is the PEAS framework?

A

The PEAS framework defines an agent’s task environment:
- Performance Measure.
- Environment.
- Actuators.
- Sensors.

12
Q

Provide an example of the PEAS framework for a vacuum cleaner robot.

A
  • Performance Measure: Amount of dirt cleaned, time efficiency.
  • Environment: Rooms with dirt and obstacles.
  • Actuators: Wheels, suction mechanism.
  • Sensors: Dirt sensors, bump sensors.
13
Q

What are the dimensions of task environments?

A
  1. Fully vs. Partially Observable.
  2. Deterministic vs. Stochastic.
  3. Episodic vs. Sequential.
  4. Static vs. Dynamic.
  5. Discrete vs. Continuous.
  6. Single-Agent vs. Multi-Agent.
14
Q

What is a fully observable environment?

A

An environment where the agent has complete access to all relevant information about its state.

15
Q

What is a partially observable environment?

A

An environment where the agent’s sensors provide incomplete or noisy information.

16
Q

What is a deterministic environment?

A

An environment where the next state is fully determined by the current state and the agent’s action.

17
Q

What is a stochastic environment?

A

An environment where the outcomes of actions are uncertain and influenced by randomness.

18
Q

What is the difference between episodic and sequential environments?

A
  • Episodic: Each action is independent of previous actions.
  • Sequential: Current actions affect future states.
19
Q

What is the difference between static and dynamic environments?

A
  • Static: The environment does not change while the agent deliberates.
  • Dynamic: The environment changes independently of the agent’s actions.
20
Q

What is the difference between discrete and continuous environments?

A
  • Discrete: Finite number of distinct states and actions.
  • Continuous: Infinite range of states and actions.
21
Q

What is a single-agent environment?

A

An environment where only one agent operates (e.g., vacuum cleaner robot).

22
Q

What is a multi-agent environment?

A

An environment where multiple agents interact, either cooperatively or competitively (e.g., multiplayer games).

23
Q

What is a state transition diagram?

A

A visual representation of states (nodes) and transitions between them (edges) based on actions.

24
Q

What are the types of state transition diagrams?

A
  1. Graph: States may be revisited (cycles are allowed).
  2. Tree: Each state is reached by a unique path, so states are never revisited.
25
Q

What is a reflex agent?

A

An agent that acts based solely on the current percept without considering history.
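A reflex agent can be sketched as a pure condition-action table: the choice depends only on the current percept, never on history. The percepts and actions below reuse the vacuum-world example and are illustrative only.

```python
# A reflex agent as a condition-action table keyed on the
# current percept alone (location, status) -- no memory.
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "clean"): "left",
}

def reflex_agent(percept):
    return RULES[percept]

print(reflex_agent(("A", "dirty")))  # -> suck
print(reflex_agent(("A", "clean")))  # -> right
```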

26
Q

What is a model-based reflex agent?

A

An agent that maintains an internal state to handle partially observable environments.
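A sketch of the model-based variant, again using the hypothetical vacuum world: an internal record of cleaned rooms lets the agent act sensibly even though its sensor reports only the current room (partial observability). It assumes, for simplicity, that sucking always succeeds.

```python
# Model-based reflex agent: internal state compensates for a
# sensor that only sees the current room.
class ModelBasedVacuum:
    def __init__(self):
        self.cleaned = set()  # internal state: rooms known to be clean

    def act(self, location, status):
        if status == "dirty":
            self.cleaned.add(location)  # assume sucking always works
            return "suck"
        self.cleaned.add(location)
        if {"A", "B"} <= self.cleaned:
            return "stop"               # model says everything is clean
        return "right" if location == "A" else "left"

agent = ModelBasedVacuum()
print(agent.act("A", "dirty"))  # -> suck
print(agent.act("A", "clean"))  # -> right
print(agent.act("B", "clean"))  # -> stop
```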

27
Q

What is a goal-based agent?

A

An agent that takes actions to achieve specific goals by considering future states.

28
Q

What is a utility-based agent?

A

An agent that chooses actions to maximize a utility function, balancing multiple performance criteria.

29
Q

What is a learning agent?

A

An agent that improves its performance over time by learning from experiences and feedback.

30
Q

What are the properties of intelligent agents?

A
  1. Autonomy.
  2. Reactivity.
  3. Proactiveness.
  4. Social Ability.
31
Q

What is reactivity in intelligent agents?

A

Reactivity refers to the agent’s ability to perceive changes in the environment and respond in real time.

32
Q

What is proactiveness in intelligent agents?

A

Proactiveness is the agent’s ability to take initiative and plan for future goals.

33
Q

What is social ability in intelligent agents?

A

Social ability is the agent’s capacity to interact and communicate effectively with other agents or humans.

34
Q

Provide an example of an autonomous agent.

A

A self-driving car operates autonomously by using sensors to perceive its environment and actuators to control its movement.

35
Q

How does a stock trading agent operate?

A
  • Sensors: Market data feeds.
  • Actuators: Placing buy/sell orders.
  • Performance Measure: Maximizing profit while minimizing risk.
36
Q

What is the goal of a chess-playing agent?

A

To win games by evaluating board states and making optimal moves.

37
Q

How does a recommendation system work?

A

A recommendation system learns user preferences and suggests relevant content to improve engagement.

38
Q

What is the PEAS framework for a self-driving car?

A
  • Performance Measure: Safety, speed, fuel efficiency.
  • Environment: Roads, traffic signals, pedestrians.
  • Actuators: Steering, brakes, accelerator.
  • Sensors: Cameras, lidar, GPS.
39
Q

What is the PEAS framework for a chess-playing agent?

A
  • Performance Measure: Winning games, strategic gameplay.
  • Environment: Chessboard, opponent’s moves.
  • Actuators: Moving pieces on the board.
  • Sensors: Observing board state.
41
Q

What is the difference between goal-based and utility-based agents?

A
  • Goal-based agents focus on achieving specific goals.
  • Utility-based agents consider multiple criteria and aim to maximize utility.
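The contrast can be sketched in a few lines: a goal-based agent accepts any action that passes the goal test, while a utility-based agent ranks them by a utility function. The candidate routes, criteria, and weights below are all invented for illustration.

```python
# Made-up candidate routes; "reaches_goal" plays the goal test.
routes = {
    "highway":  {"reaches_goal": True,  "time": 30, "toll": 5},
    "scenic":   {"reaches_goal": True,  "time": 50, "toll": 0},
    "dead_end": {"reaches_goal": False, "time": 10, "toll": 0},
}

def goal_based(routes):
    # Goal-based: every route that satisfies the goal is acceptable.
    return [name for name, r in routes.items() if r["reaches_goal"]]

def utility(r):
    # Utility-based: combine several criteria into one number
    # (the weight on tolls is an arbitrary illustration value).
    return -(r["time"] + 2 * r["toll"])

def utility_based(routes):
    viable = {n: r for n, r in routes.items() if r["reaches_goal"]}
    return max(viable, key=lambda n: utility(viable[n]))

print(goal_based(routes))     # ['highway', 'scenic']
print(utility_based(routes))  # highway
```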
42
Q

What is a learning agent’s key feature?

A

A learning agent adapts and improves its performance over time based on feedback and experiences.

43
Q

What is an example of a partially observable environment?

A

Self-driving cars operate in partially observable environments due to limited sensor coverage and environmental conditions.

44
Q

What is an example of a stochastic environment?

A

Stock trading involves stochastic environments due to unpredictable market fluctuations.

45
Q

What is an episodic task environment?

A

A spam email filter evaluates each email independently, making it an episodic environment.

46
Q

What is a sequential task environment?

A

Self-driving cars operate in sequential environments where current actions influence future states.

47
Q

What is an example of a static environment?

A

A crossword puzzle solver operates in a static environment since the puzzle does not change.

48
Q

What is an example of a dynamic environment?

A

Real-time gaming AI operates in dynamic environments where conditions change continuously.

50
Q

How does a medical diagnosis system work?

A

It analyzes patient data and symptoms to suggest diagnoses and treatments; its "sensors" are inputs such as patient records and test results, and its "actuators" are outputs such as diagnostic reports and treatment recommendations.

51
Q

What is the difference between fully observable and partially observable environments in the sensor-based dimension?

A

In fully observable environments, the agent has complete and accurate access to all relevant information, while in partially observable environments, the agent’s sensors provide incomplete or noisy information.

52
Q

What is an example of a fully observable environment?

A

Chess, where the entire board state is visible to the agent.

53
Q

What is an example of a partially observable environment?

A

A self-driving car, where sensors like cameras and lidar may be obstructed by weather conditions.

54
Q

What is the difference between episodic and sequential environments in the action-based dimension?

A

In episodic environments, each action is independent of previous actions, while in sequential environments, current actions affect future states and decisions.

55
Q

What is an example of an episodic environment?

A

Spam email filtering, where each email is evaluated independently of others.

56
Q

What is an example of a sequential environment?

A

Driving a car, where each decision (e.g., turning or accelerating) impacts the overall journey.

57
Q

What is the difference between discrete and continuous environments in the state-based dimension?

A

Discrete environments have a finite number of distinct states, while continuous environments have states and actions that vary smoothly across a range.

58
Q

What is an example of a discrete environment?

A

A vacuum cleaner robot, where the environment can be divided into distinct states (e.g., Room A dirty, Room B clean).

59
Q

What is an example of a continuous environment?

A

A self-driving car, where variables like speed and steering angle are continuous.

60
Q

What is the difference between single-agent and multi-agent environments in the agent-based dimension?

A

Single-agent environments involve only one agent, while multi-agent environments involve multiple agents that may cooperate or compete.

61
Q

What is an example of a single-agent environment?

A

A vacuum cleaner robot operating independently in a room.

62
Q

What is an example of a multi-agent environment?

A

Chess, where two agents (players) compete against each other.

63
Q

What is the difference between deterministic and stochastic environments in the action & state-based dimension?

A

Deterministic environments have predictable outcomes for actions, while stochastic environments involve randomness and uncertainty in outcomes.

64
Q

What is the difference between static and dynamic environments in the action & state-based dimension?

A

Static environments do not change while the agent is deliberating, whereas dynamic environments can change independently of the agent’s actions.

65
Q

What is an example of a dynamic environment?

A

A self-driving car navigating traffic, where conditions like traffic and weather can change in real time.