Intelligent Agents Flashcards

1
Q

An agent perceives the environment through _____

A

sensors (→ percepts)

2
Q

An agent acts upon the environment through _____

A

actuators (→ actions)

3
Q

An agent whose actions are based on both built-in knowledge and its own experience is called a ____

A

Autonomous agent

4
Q

Agent = Architecture + Program

A

Structure of Rational Agents
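The architecture + program split can be sketched in code: the architecture is the body that delivers percepts and executes actions, while the agent program maps the percept sequence to an action. A minimal sketch, assuming a hypothetical table-driven agent (the table and percept names are illustrative):

```python
# Hypothetical sketch: an agent is an architecture (the body that senses
# and acts) running an agent program (percept sequence -> action).

class TableDrivenAgent:
    """Agent program that looks up the full percept sequence in a table."""

    def __init__(self, table):
        self.table = table      # maps percept sequences to actions
        self.percepts = []      # percept history so far

    def program(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

table = {("dirty",): "Suck", ("clean",): "Right"}
agent = TableDrivenAgent(table)
print(agent.program("dirty"))   # Suck
```

The table grows with every possible percept sequence, which is why practical agent programs (reflex, model-based, goal-based, utility-based) compute actions instead of looking them up.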

5
Q

Task Environment has PEAS

A

Performance measure
Environment
Actuators
Sensors
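A PEAS description is just a structured specification of the task environment. A minimal sketch, using an illustrative two-square vacuum-cleaner world (all values here are assumptions, not from the cards):

```python
# Hypothetical sketch: a PEAS description captured as a plain dict,
# here for a simple vacuum-cleaner world (illustrative values).

vacuum_peas = {
    "Performance measure": "amount of dirt cleaned, time taken",
    "Environment": "two squares, A and B, possibly dirty",
    "Actuators": "wheels, suction unit",
    "Sensors": "location sensor, dirt sensor",
}

for component, value in vacuum_peas.items():
    print(f"{component}: {value}")
```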

6
Q

Properties of Task Environment

or Types

A

Fully observable vs. partially observable
Deterministic vs. stochastic
Episodic vs. sequential
Static vs. dynamic
Discrete vs. continuous
Single-agent vs. multi-agent

7
Q

if an agent’s sensors give it access to the complete state of the environment at each point in time.

A

Fully observable Task Environment

8
Q

if the next state of the environment is completely determined by the current state and the action executed by the agent.

A

Deterministic Environment

9
Q

An agent’s experience is divided into atomic episodes.

A

Episodic

10
Q

A static environment is unchanged while an agent is deliberating, while ______ continuously ask the agent what it wants to do.

A

Dynamic environments

11
Q

Fully observable, deterministic, episodic, static, discrete and single-agent.

A

The simplest environment

12
Q

Partially observable, stochastic, sequential, dynamic, continuous and multi-agent.

A

Most real situations are:

13
Q

Different classes of agents

A

Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

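The simplest of these classes can be sketched directly: a simple reflex agent ignores percept history and applies condition–action rules to the current percept only. A minimal sketch, assuming the standard two-square vacuum world as the example (percept and action names are illustrative):

```python
# Hypothetical sketch: a simple reflex agent acts only on the current
# percept via condition-action rules (no percept history, no model).

def simple_reflex_vacuum(percept):
    location, status = percept
    if status == "Dirty":       # rule: dirty square -> clean it
        return "Suck"
    if location == "A":         # rule: square A is clean -> move right
        return "Right"
    return "Left"               # rule: square B is clean -> move left

print(simple_reflex_vacuum(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum(("A", "Clean")))  # Right
```

Such an agent works only when the correct action can be chosen from the current percept alone, i.e. when the environment is fully observable; the other classes add internal state, goals, or utilities to cope with richer environments.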
14
Q

Learning agents improve their behavior over time

A

They start with an initially empty knowledge base.

They can operate in initially unknown environments.

15
Q

A utility function maps a state (or a sequence of states) onto a real number. The agent can also use these numbers to weigh the importance of competing goals.

A

Utility-based agents take actions that maximize their utility.

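The utility-function idea can be sketched as code: map each state to a real number, then pick the action whose predicted successor state scores highest. A minimal sketch with illustrative states, utility values, and a hypothetical `result` mapping (all assumptions, not from the cards):

```python
# Hypothetical sketch: a utility-based agent scores the state each
# action would lead to and picks the action with the highest utility.

def utility(state):
    # maps a state onto a real number (illustrative values)
    return {"clean_room": 10.0, "dirty_room": -5.0, "idle": 0.0}[state]

def choose_action(actions, result):
    # result(action) -> predicted successor state
    return max(actions, key=lambda a: utility(result(a)))

outcomes = {"Suck": "clean_room", "NoOp": "dirty_room"}
best = choose_action(["Suck", "NoOp"], outcomes.get)
print(best)  # Suck
```

Because utilities are real numbers, competing goals can be traded off by comparing scores rather than treating goals as all-or-nothing.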