Session 2: Problem-Solving Agent Flashcards
What is a rational agent?
A rational agent perceives its environment and takes actions to achieve the best possible outcome based on its goals.
What are the components of a rational agent?
- Sensors: Perceive the environment.
- Actuators: Perform actions.
- Environment: External conditions.
- Actions: Operations to achieve goals.
What is the role of sensors in a rational agent?
Sensors gather information from the environment, enabling the agent to perceive its surroundings.
What are actuators in a rational agent?
Actuators are mechanisms through which an agent acts upon the environment to achieve its goals.
What does it mean for an agent to be autonomous?
An agent is autonomous if its actions are determined by its own percepts and experiences rather than relying solely on pre-programmed rules.
What is the Traveller’s Problem?
It is a conceptual model illustrating how rational agents make decisions in dynamic and uncertain environments to find optimal paths.
How is the Traveller’s Problem represented?
It is represented as a graph where:
- Nodes are locations/states.
- Edges are transitions/actions.
- Weights represent costs (e.g., time, distance).
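The weighted-graph representation can be sketched in Python. The node names and costs below are made up for illustration, and uniform-cost search (one standard way to find a cheapest path; the session may use a different method) does the route-finding:

```python
import heapq

# Hypothetical road network: node -> {neighbour: cost}
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}

def cheapest_path(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest path cost so far; returns (total cost, path) or None."""
    frontier = [(0, start, [start])]      # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step in graph[node].items():
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step, neighbour, path + [neighbour]))
    return None

print(cheapest_path(roads, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```

Note the `explored` set: it is what makes this a graph search (states may be reached again but are expanded only once), matching the graph-vs-tree distinction later in these cards.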
What is problem formulation in a problem-solving agent?
It involves defining the initial state, actions, goal test, and path cost to specify the problem the agent needs to solve.
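One common way to encode these components as a class (the names here are illustrative, not from a particular library):

```python
class Problem:
    """Sketch of a search-problem formulation: initial state, actions,
    transition model, goal test, and step cost."""
    def __init__(self, initial, goal):
        self.initial = initial            # initial state
        self.goal = goal

    def actions(self, state):
        """Actions available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return 1                          # uniform path cost by default
```

A concrete problem subclasses this and fills in `actions` and `result`; a search algorithm then only needs this interface, not the domain details.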
What is state space?
The state space is the set of all possible states the agent can occupy during problem-solving.
What are the steps in problem-solving for an agent?
- Goal formulation.
- Problem formulation.
- Search for a solution.
- Execution of the solution.
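The four steps can be sketched as an offline "formulate, search, then execute" skeleton; the callables are placeholders for whatever goal formulation, problem formulation, and search method the agent actually uses:

```python
def simple_problem_solving_agent(state, formulate_goal, formulate_problem, search):
    """Return an action sequence (a plan) for the caller to execute."""
    goal = formulate_goal(state)              # 1. goal formulation
    problem = formulate_problem(state, goal)  # 2. problem formulation
    plan = search(problem)                    # 3. search for a solution
    return plan                               # 4. caller executes the plan

# Toy usage: reach 3 from 0 by repeated "+1" actions.
plan = simple_problem_solving_agent(
    0,
    formulate_goal=lambda s: 3,
    formulate_problem=lambda s, g: (s, g),
    search=lambda p: ["+1"] * (p[1] - p[0]),
)
print(plan)  # ['+1', '+1', '+1']
```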
What is the PEAS framework?
The PEAS framework defines an agent’s task environment:
- Performance Measure.
- Environment.
- Actuators.
- Sensors.
Provide an example of the PEAS framework for a vacuum cleaner robot.
- Performance Measure: Amount of dirt cleaned, time efficiency.
- Environment: Rooms with dirt and obstacles.
- Actuators: Wheels, suction mechanism.
- Sensors: Dirt sensors, bump sensors.
What are the dimensions of task environments?
- Fully vs. Partially Observable.
- Deterministic vs. Stochastic.
- Episodic vs. Sequential.
- Static vs. Dynamic.
- Discrete vs. Continuous.
- Single-Agent vs. Multi-Agent.
What is a fully observable environment?
An environment where the agent has complete access to all relevant information about its state.
What is a partially observable environment?
An environment where the agent’s sensors provide incomplete or noisy information.
What is a deterministic environment?
An environment where the next state is fully determined by the current state and the agent’s action.
What is a stochastic environment?
An environment where the outcomes of actions are uncertain and influenced by randomness.
What is the difference between episodic and sequential environments?
- Episodic: The agent’s experience divides into independent episodes; an action in one episode does not affect later episodes.
- Sequential: Current actions affect future states and decisions.
What is the difference between static and dynamic environments?
- Static: The environment does not change while the agent deliberates.
- Dynamic: The environment can change while the agent deliberates, independently of the agent’s actions.
What is the difference between discrete and continuous environments?
- Discrete: Finite number of distinct states and actions.
- Continuous: Infinite range of states and actions.
What is a single-agent environment?
An environment where only one agent operates (e.g., vacuum cleaner robot).
What is a multi-agent environment?
An environment where multiple agents interact, either cooperatively or competitively (e.g., multiplayer games).
What is a state transition diagram?
A visual representation of states (nodes) and transitions between them (edges) based on actions.
What are the types of state transition diagrams?
- Graph: Allows revisiting states.
- Tree: No revisiting of states.
What is a reflex agent?
An agent that acts based solely on the current percept without considering history.
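A minimal sketch using the two-room vacuum world (rooms "A" and "B" are the standard toy example): the agent maps the current percept directly to an action, with no memory.

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules on the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
```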
What is a model-based reflex agent?
An agent that maintains an internal state to handle partially observable environments.
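A sketch of the same vacuum world with an internal model: the agent remembers what it believes about each room, so it can stop once it believes everything is clean even though it can only sense its current room.

```python
class ModelBasedVacuumAgent:
    """Maintains a belief about each room's status (internal state)
    to cope with only sensing the current room."""
    def __init__(self):
        self.model = {"A": None, "B": None}    # None = status unknown

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update internal state
        if status == "Dirty":
            self.model[location] = "Clean"     # assume Suck succeeds
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                      # believes the world is clean
        return "Right" if location == "A" else "Left"
```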
What is a goal-based agent?
An agent that takes actions to achieve specific goals by considering future states.
What is a utility-based agent?
An agent that chooses actions to maximize a utility function, balancing multiple performance criteria.
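A sketch of utility-based action selection: pick the action whose predicted outcome maximizes a utility function that trades off several criteria. The routes, times, risks, and weights below are made-up numbers for illustration.

```python
# Predicted outcome of each candidate action (illustrative values).
outcomes = {
    "highway":  {"time": 30, "risk": 0.20},
    "backroad": {"time": 45, "risk": 0.05},
}

def utility(outcome):
    # Trade off travel time against risk, weighting risk heavily.
    return -outcome["time"] - 200 * outcome["risk"]

def choose_action(outcomes, utility):
    return max(outcomes, key=lambda a: utility(outcomes[a]))

print(choose_action(outcomes, utility))  # backroad
```

A goal-based agent would accept either route (both reach the goal); the utility function is what lets the agent prefer the slower but safer one.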
What is a learning agent?
An agent that improves its performance over time by learning from experiences and feedback.
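A toy, bandit-style sketch of the feedback loop (not a full learning-agent architecture): the agent keeps a running-average reward estimate per action and updates it after each outcome.

```python
class LearningAgent:
    """Estimates each action's value as the running mean of its rewards."""
    def __init__(self, actions):
        self.estimates = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def choose(self):
        # Greedy choice: the action with the highest current estimate.
        return max(self.estimates, key=self.estimates.get)

    def learn(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        # Incremental mean: new = old + (reward - old) / n
        self.estimates[action] += (reward - self.estimates[action]) / n
```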
What are the properties of intelligent agents?
- Autonomy.
- Reactivity.
- Proactiveness.
- Social Ability.
What is reactivity in intelligent agents?
Reactivity refers to the agent’s ability to perceive changes in the environment and respond in real time.
What is proactiveness in intelligent agents?
Proactiveness is the agent’s ability to take initiative and plan for future goals.
What is social ability in intelligent agents?
Social ability is the agent’s capacity to interact and communicate effectively with other agents or humans.
Provide an example of an autonomous agent.
A self-driving car operates autonomously by using sensors to perceive its environment and actuators to control its movement.
How does a stock trading agent operate?
- Sensors: Market data feeds.
- Actuators: Placing buy/sell orders.
- Performance Measure: Maximizing profit while minimizing risk.
What is the goal of a chess-playing agent?
To win games by evaluating board states and making optimal moves.
How does a recommendation system work?
A recommendation system learns user preferences and suggests relevant content to improve engagement.
What is the PEAS framework for a self-driving car?
- Performance Measure: Safety, speed, fuel efficiency.
- Environment: Roads, traffic signals, pedestrians.
- Actuators: Steering, brakes, accelerator.
- Sensors: Cameras, lidar, GPS.
What is the PEAS framework for a chess-playing agent?
- Performance Measure: Winning games, strategic gameplay.
- Environment: Chessboard, opponent’s moves.
- Actuators: Moving pieces on the board.
- Sensors: Observing board state.
What is the difference between goal-based and utility-based agents?
- Goal-based agents focus on achieving specific goals.
- Utility-based agents consider multiple criteria and aim to maximize utility.
What is a learning agent’s key feature?
A learning agent adapts and improves its performance over time based on feedback and experiences.
What is an example of a partially observable environment?
Self-driving cars operate in partially observable environments due to limited sensor coverage and environmental conditions.
What is an example of a stochastic environment?
Stock trading involves stochastic environments due to unpredictable market fluctuations.
What is an episodic task environment?
A spam email filter evaluates each email independently, making it an episodic environment.
What is a sequential task environment?
Self-driving cars operate in sequential environments where current actions influence future states.
What is an example of a static environment?
A crossword puzzle solver operates in a static environment since the puzzle does not change.
What is an example of a dynamic environment?
Real-time gaming AI operates in dynamic environments where conditions change continuously.
How does a medical diagnosis system work?
It analyzes patient data and symptoms to suggest diagnoses and treatments; in PEAS terms, its sensors are inputs such as patient records and test results, and its actuators are outputs such as diagnostic reports and treatment recommendations.
What is the difference between fully observable and partially observable environments in the sensor-based dimension?
In fully observable environments, the agent has complete and accurate access to all relevant information, while in partially observable environments, the agent’s sensors provide incomplete or noisy information.
What is an example of a fully observable environment?
Chess, where the entire board state is visible to the agent.
What is an example of a partially observable environment?
A self-driving car, where sensors like cameras and lidar may be obstructed by weather conditions.
What is the difference between episodic and sequential environments in the action-based dimension?
In episodic environments, the agent’s experience divides into independent episodes, so actions in one episode do not affect later episodes, while in sequential environments, current actions affect future states and decisions.
What is an example of an episodic environment?
Spam email filtering, where each email is evaluated independently of others.
What is an example of a sequential environment?
Driving a car, where each decision (e.g., turning or accelerating) impacts the overall journey.
What is the difference between discrete and continuous environments in the state-based dimension?
Discrete environments have a finite number of distinct states, while continuous environments have states and actions that vary smoothly across a range.
What is an example of a discrete environment?
A vacuum cleaner robot, where the environment can be divided into distinct states (e.g., Room A dirty, Room B clean).
What is an example of a continuous environment?
A self-driving car, where variables like speed and steering angle are continuous.
What is the difference between single-agent and multi-agent environments in the agent-based dimension?
Single-agent environments involve only one agent, while multi-agent environments involve multiple agents that may cooperate or compete.
What is an example of a single-agent environment?
A vacuum cleaner robot operating independently in a room.
What is an example of a multi-agent environment?
Chess, where two agents (players) compete against each other.
What is the difference between deterministic and stochastic environments in the action & state-based dimension?
Deterministic environments have predictable outcomes for actions, while stochastic environments involve randomness and uncertainty in outcomes.
What is the difference between static and dynamic environments in the action & state-based dimension?
Static environments do not change while the agent is deliberating, whereas dynamic environments can change independently of the agent’s actions.
What is an example of a dynamic environment?
A self-driving car navigating traffic, where conditions like traffic and weather can change in real time.