Agents Flashcards

1
Q

Describe the diagram of the agent-oriented perspective on AI

A

Look in notes if you don’t remember :p

2
Q

____________ are things that could change the state of an environment

A

Actuators

3
Q

Sensors process _________ from the environment

A

percepts

4
Q

What is a percept?

A

An agent’s perceptual input at any point in time

5
Q

An agent’s percept _________ is the ordered history of everything it has perceived

A

sequence

6
Q

In general, an agent’s choice of actions can depend on its entire ______ _______

A

percept history

7
Q

A performance measure measures the degree of the “______ _____” the agent does

A

right thing

8
Q

What are the 4 things an agent needs to be rational?

A
  • A performance measure that defines its criterion for success
  • Some prior knowledge of the environment
  • To know the actions it can perform
  • Its percept sequence to date
9
Q

What is the Definition of a rational agent? (Memorize this)

A

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
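The definition above can be sketched in code. Everything here (the toy outcome model and the performance scores) is a hypothetical illustration, not from the notes:

```python
# A sketch of the rational-agent definition: choose the action whose
# EXPECTED performance is highest, given the percept sequence and the
# agent's built-in knowledge (here, a toy probabilistic outcome model).
def rational_action(percept_sequence, actions, outcome_model, performance):
    def expected_performance(action):
        # outcome_model yields (probability, resulting_state) pairs
        return sum(p * performance(state)
                   for p, state in outcome_model(percept_sequence, action))
    return max(actions, key=expected_performance)

# toy built-in knowledge: "suck" cleans a dirty square 90% of the time
def toy_model(percept_sequence, action):
    if action == "suck":
        return [(0.9, "clean"), (0.1, "dirty")]
    return [(1.0, "dirty")]

scores = {"clean": 1, "dirty": 0}
print(rational_action(("dirty",), ["suck", "noop"], toy_model, scores.get))
# prints "suck"
```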

10
Q

To tell if an agent is ________, we have to take into account the relevant performance measure

A

rational

11
Q

An agent might perform ________-________ actions to learn more about the environment

A

information gathering

12
Q

What does it mean for an agent to be autonomous?

A

It can operate on its own, without relying heavily on prior knowledge built in by its designers

13
Q

We expect rational agents to deal with ________ and _____ information, and to learn new _______ when necessary

A

partial, unknown, behaviours

14
Q

What does the PEAS model stand for?

A
  • Performance measure
  • Environment
  • Actuators
  • Sensors
15
Q

Describe a taxi-driving agent using the PEAS model

A

Performance measure: safety, speed, legality, passenger comfort, profit
Environment: roads, including other traffic, pedestrians, weather conditions
Actuators: steering, brakes, accelerator, turn signals, horn, lights
Sensors: cameras, sonar, speedometer, odometer, GPS, engine sensors
16
Q

What are the 5 property pairs of environments?

A
  • Fully Observable vs Partially Observable
  • Deterministic vs Stochastic
  • Static vs Dynamic
  • Discrete vs Continuous
  • Known vs Unknown
17
Q

When is an environment Fully Observable?

A

If the agent’s sensors give it access to the complete state of the environment at each point in time

18
Q

When is an environment Partially Observable?

A

If only part of the environment can be sensed at each point in time

19
Q

When is an environment Deterministic?

A

If the next state of the environment is completely determined by the previous state sequence and the action of the agent

20
Q

When is an environment Stochastic?

A

If the next state of the environment is NOT completely determined by the previous state sequence and the action of the agent

21
Q

When is an environment Static?

A

if the environment cannot change while the agent is thinking

22
Q

When is an environment Dynamic?

A

if the environment can change while the agent is thinking

23
Q

When is an environment Discrete?

A

if time is handled in discrete sequential steps (like the discrete ticking of a CPU)

24
Q

When is an environment Continuous?

A

If time is handled as a continuous stream (as in physics)

25
Q

When is an environment known?

A

if the outcomes of all actions are known ahead of time by the agent

26
Q

When is an environment unknown?

A

If the outcomes of some actions are not known ahead of time, meaning the agent must try them out to discover what will happen

27
Q

What are 6 different structures of agents?

A
  • Table-driven
  • Simple Reflex
  • Model-based Reflex
  • Goal-based
  • Utility
  • Learning
28
Q

A table-driven agent uses a table of actions indexed by ________ _________

A

percept sequences
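A sketch of the table-driven idea (the table contents below are made up): the action is looked up using the entire percept sequence to date, which is why the table grows so quickly.

```python
# Hypothetical table-driven agent: indexes a lookup table by the FULL
# percept sequence seen so far.
def make_table_driven_agent(table):
    percepts = []  # percept sequence to date

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "noop")  # index by whole sequence

    return agent

# toy table: only two percept sequences have an entry
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "right",
}
agent = make_table_driven_agent(table)
print(agent("dirty"))  # prints "suck"
print(agent("clean"))  # prints "right"
```

Every new percept lengthens the key, so a realistic environment would need an astronomically large table.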

29
Q

Using a table-driven agent is unreasonable in even _________ complicated situations

A

moderately

30
Q

A ________ _______ agent decides what to do based solely on the current percept, and ignores all previous percepts

A

simple reflex

31
Q

Simple reflex agents consist of _______-______ rules

A

condition-action
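A minimal sketch of condition-action rules, using the standard two-square vacuum world as a toy example (the rule set is illustrative):

```python
# Hypothetical simple reflex agent: acts on the CURRENT percept only,
# with no memory of previous percepts.
def simple_reflex_agent(percept):
    location, status = percept
    if status == "dirty":    # condition -> action rule
        return "suck"
    if location == "A":      # condition -> action rule
        return "right"
    return "left"

print(simple_reflex_agent(("A", "dirty")))  # prints "suck"
print(simple_reflex_agent(("B", "clean")))  # prints "left"
```

Note that once both squares are clean, this agent shuttles right and left forever, since it has no memory with which to notice it is looping.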

32
Q

What are 2 cons of the simple reflex structure?

A
  • A single percept is often not enough to determine the right action
  • Can get stuck in infinite loops
33
Q

Model-based reflex agents save information about the external world using _______ state

A

internal

34
Q

A __________ agent can simulate actions with their internal model

A

Model-based
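A sketch of the internal-state idea (the two-square world and the update rule are hypothetical): the agent remembers what the other square looked like, even though it cannot sense it right now.

```python
# Hypothetical model-based reflex agent: keeps internal state about the
# parts of the world it cannot currently sense.
def make_model_based_agent():
    state = {"A": "unknown", "B": "unknown"}  # internal model of the world

    def agent(percept):
        location, status = percept
        state[location] = status          # update the model from the percept
        if status == "dirty":
            return "suck"
        other = "B" if location == "A" else "A"
        if state[other] != "clean":       # decide using remembered state
            return "right" if location == "A" else "left"
        return "noop"                     # model says everything is clean

    return agent

agent = make_model_based_agent()
print(agent(("A", "clean")))  # prints "right" (B's status is still unknown)
print(agent(("B", "clean")))  # prints "noop"  (model now knows both are clean)
```

Unlike the simple reflex agent, this one can stop once its model says the whole world is clean.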

35
Q

What are goals?

A

A description of desirable states

36
Q

Utility agents are ______-______ agents that also take into account the quality of the _____ sequences used to achieve the goal

A

goal-based, action

37
Q

A ______ function measures the agent’s “happiness” with a set of actions

A

utility
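A minimal sketch of what a utility function over action sequences might look like; the step cost and goal reward are made-up values for illustration:

```python
# Hypothetical utility function: every plan below reaches the goal, but
# shorter plans make the agent "happier" (higher utility).
def utility(action_sequence, step_cost=1, goal_reward=10):
    return goal_reward - step_cost * len(action_sequence)

plans = [["right", "right", "suck"], ["right", "suck"]]
print(max(plans, key=utility))  # prints "['right', 'suck']"
```

This is what distinguishes a utility-based agent from a purely goal-based one: both plans achieve the goal, but the utility function prefers the cheaper one.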

38
Q

Utility-maximizing agents are the goal of much ____ research

A

AI

39
Q

A _______ agent can modify its components to improve overall performance

A

learning