Properties of Environment (part of PP 5 - Intelligent Agents (Ch4)) Flashcards

Environment concepts covered in PP 5 - Intelligent Agents (Ch4); nothing about Intelligent Agents themselves.

1
Q

Observable

A

Also called: accessible.

The agent is able to obtain complete, accurate, up-to-date information about the environment and its states.

The agent can directly and completely observe the current state of the environment at any given time.

Ex. A chess game where the agent can see the entire board and knows the position of every piece.

2
Q

Non-Observable

A

Also called: inaccessible.

The agent cannot obtain any information about the environment or its state.

3
Q

Partially Observable

A

The agent has knowledge of some parts of the environment, while other parts cannot be observed, so the state is only partly known.

The agent cannot directly observe the complete state of the environment.

Ex. A card game like poker, where players have hidden cards and can only infer the opponent’s hand based on their actions and limited information.

4
Q

Collaborative

A

An environment in which agents jointly carry out tasks related to monitoring and manipulation.

5
Q

Non-Collaborative

A

An environment where the agents carry out tasks individually.

6
Q

Semi-Collaborative

A

An environment that is collaborative, but the agents carry out some of their work individually.

7
Q

Static

A

The environment is assumed to remain unchanged during the agent's execution.

The elements and their properties remain constant over time.

Ex. A puzzle game where neither the arrangement of the pieces nor the rules of the game change throughout the gameplay.

8
Q

Dynamic

A

The environment can change beyond the agent’s control.

In this case, other processes operate in the system.

Elements or properties can change unpredictably.

Ex. A real-time strategy game where the terrain can be altered, new obstacles can appear, or opponents can adapt their strategies in response to the player’s actions.

9
Q

Deterministic

A

Every action has a single guaranteed effect.

The next state is entirely determined by the current state and the agent’s actions.

There is no randomness involved.

Ex. Mathematical model (2 + 2 = 4), traffic signal (the pedestrian knows what the next signal will be)
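One informal way to picture this (a made-up sketch, not from the slides): a deterministic environment behaves like a function that maps the current state and action to exactly one next state, as in this toy one-dimensional grid world in Python.

    def transition(state: int, action: str) -> int:
        """Return the single, guaranteed next state for a given state and action."""
        if action == "right":
            return state + 1
        if action == "left":
            return state - 1
        return state  # any other action leaves the state unchanged

    assert transition(3, "right") == 4
    assert transition(3, "right") == 4  # same state and action, same outcome every time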

10
Q

Stochastic

A

There are multiple unpredictable outcomes and the agent cannot be sure of the next state.

Involves randomness or uncertainty in the outcome of actions. The next state of the environment is not entirely predictable based on the current state and actions.

Ex. Stock markets, medical diagnosis (treatment), listening to the radio (you don’t know which song will play next)
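By contrast, a stochastic environment can be sketched as a transition whose outcome is sampled at random; the invented playlist below mimics the radio example from the card.

    import random

    PLAYLIST = ["song A", "song B", "song C"]  # invented playlist

    def next_song() -> str:
        """The next state is drawn at random; the listener cannot predict it."""
        return random.choice(PLAYLIST)

    print(next_song(), next_song(), next_song())  # may differ on every run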

11
Q

Discrete

A

The number of actions that can be performed and the percepts in the environment are fixed and finite.

There are a limited number of distinct, clearly defined percepts and actions.

Ex. A chess game where the pieces can only occupy certain squares on the board.
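As a rough sketch (chess-like, simplified and not from the slides), in a discrete environment the full set of possible positions can be written out as a finite collection:

    FILES = "abcdefgh"
    RANKS = "12345678"

    # Every square a piece can occupy: a finite, enumerable set.
    SQUARES = [f + r for f in FILES for r in RANKS]

    print(len(SQUARES))  # 64 -- the possible positions are fixed and finite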

12
Q

Continuous

A

The number of possible actions is infinite.

Percepts and actions exist along a continuous spectrum rather than being discrete.

Ex. A self-driving car navigating through traffic, where the car's sensors perceive a continuous stream of data such as distances, speeds, and positions of other vehicles; or a game of soccer, where the positions of the players and the ball keep changing and the ball can hit the goal at different angles and speeds (infinite possibilities).
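A continuous environment is more naturally described with real-valued quantities; the fields in this sketch are invented stand-ins for the self-driving-car example.

    from dataclasses import dataclass

    @dataclass
    class Percept:
        distance_to_car_ahead_m: float  # any real value, e.g. 17.35, 17.351, ...
        own_speed_kmh: float

    @dataclass
    class Action:
        steering_angle_deg: float  # continuous range, not a fixed list of options
        throttle: float            # e.g. anywhere between 0.0 and 1.0

    print(Action(steering_angle_deg=-2.71, throttle=0.42))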

13
Q

Episodic

A

The agent works with one "episode" at a time. The agent only considers the task at hand and decides the best action for carrying out that task.

There are no links between the agent’s performance and other scenarios. The agent does not consider the effect of earlier or future tasks.

The agent’s performance is the result of a series of independent tasks.

The agent’s experience is divided into distinct episodes where each episode is independent of the others. Each episode starts with the agent’s perception of the environment and ends with a terminal state, with no influence or memory of previous states.

Ex. Medical diagnosis (each diagnosis is an independent decision), chess game (analyzing each possible move in isolation), a support bot (answers questions)
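A rough sketch of the episodic case (an invented support-bot example, not from the slides): each decision uses only the current percept, with no memory carried between episodes.

    def answer(question: str) -> str:
        """The decision depends only on the current percept, not on any history."""
        if "password" in question.lower():
            return "Use the 'forgot password' link to reset it."
        return "Please contact support for further help."

    print(answer("How do I reset my password?"))
    print(answer("And how do I change my avatar?"))  # unaffected by the previous question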

14
Q

Sequential (Non-Episodic)

A

The agent’s actions have consequences that affect future perceptions and decisions.

Ex. Interactive English tutor
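A contrasting sketch for the sequential case (the interactive-tutor example, with invented details): the agent keeps a memory of earlier interactions, so past actions shape future decisions.

    class EnglishTutor:
        def __init__(self) -> None:
            self.mistakes: list[str] = []  # memory accumulated across interactions

        def review(self, sentence: str) -> str:
            if "me and him" in sentence.lower():
                self.mistakes.append("pronoun case")
            # The next exercise depends on everything seen so far.
            if self.mistakes:
                return "Let's practice: " + self.mistakes[-1]
            return "Great, let's move on to a new topic."

    tutor = EnglishTutor()
    print(tutor.review("Me and him went to the store"))
    print(tutor.review("She walks to school"))  # still practicing pronoun case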

15
Q

Describe the environment for: medical diagnosis

A

Partially Observable
Stochastic
Episodic
Static
Continuous
Single Agent

16
Q

Single Agent

A

An environment that is explored by a single agent; all actions in the environment are performed by that one agent.

Ex. Practicing tennis alone with a ball

17
Q

Multi-Agent

A

An environment in which two or more agents take actions.

Ex. Playing a soccer match