lecture 1 Flashcards

1
Q

agent

A

perceiving its environment (through sensors) and acting upon the environment (through actuators)

2
Q

human agent

A

environment: the world in which we are embodied.
sensors: eyes, ears, nose.
actuators: hands, legs, voice.

3
Q

robotic agent

A

environment: the (part of the) world in which it is embodied.
sensors: (infrared) cameras, microphones, temperature sensors.
actuators: various motors and output devices.

4
Q

software agent

A

environment: software that it interacts with, including IO.
sensors: keystrokes, network packets, file contents.
actuators: writing to files, screen display.

5
Q

General AI

A

building machines that can perform any intellectual task as well as or better than humans

6
Q

Narrow AI

A

building machines that can perform a specific intellectual task very well.

7
Q

Physical symbol system hypothesis

A

A physical symbol system has the necessary and sufficient means for general intelligent action.

  • A symbol is a meaningful physical pattern that can be manipulated.
  • A symbol system creates, copies, modifies, and destroys symbols.
8
Q

The Chinese room argument

A

Assuming that a computer can fool us into thinking it is human, does that mean that it is intelligent?
“The computer doesn’t think, it just follows rules.”

9
Q

optimal solution

A

the best solution according to some measure of solution quality

10
Q

satisficing solution

A

one that is good enough, according to some description of which solutions are adequate

11
Q

approx. optimal solution

A

one whose measure of quality is close to the best theoretically possible

12
Q

probable solution

A

one that is likely to be a solution.

13
Q

rational agent

A

an agent whose goal is to find the optimal solution

14
Q

bounded rationality

A

the goal is to find a solution that is as good as possible given the agent’s limitations (time, memory, energy)

15
Q

Abilities

A

the set of possible actions it can perform

16
Q

goals

A

what it wants, its desires, its values

17
Q

prior knowledge

A

what it comes into being knowing; what it doesn’t get from experience

18
Q

history of stimuli

A

current stimuli - what it receives from the environment now (observations, percepts)
past experiences - what it has received in the past.

19
Q

body and controller

A

an agent interacts with the environment through its body.
the body is made up of:
  • sensors that interpret stimuli
  • actuators that carry out actions.

  • the controller receives percepts from the body and sends commands to the body.
20
Q

controller

A

the controller is the brain of the agent.

agents are situated in time: they receive sensory data in time, and do actions in time.

controllers have (limited) memory and (limited) computational capabilities.

the controller specifies the command at every time.

the command at any time depends on the current and previous percepts (its history).
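
The idea that the command at each time step may depend on the whole percept history can be sketched in Python (the function name and the rule it applies are hypothetical, purely for illustration):

```python
# Minimal sketch of a controller situated in time: the command at
# each step is a function of the full history of percepts so far.
def controller(percept_history):
    """Map the sequence of percepts seen so far to a command."""
    if not percept_history:
        return "wait"                      # nothing observed yet
    # hypothetical rule: react to the most recent percept
    return "approach" if percept_history[-1] == "target" else "explore"

# percepts arrive one at a time; the agent accumulates its history
history = []
commands = []
for percept in ["noise", "target", "noise"]:
    history.append(percept)
    commands.append(controller(history))
```

Because the controller's memory is limited, a real agent cannot keep the full history like this sketch does; that limitation motivates the belief state.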

21
Q

Belief state

A

an agent doesn’t have access to its entire history; it only has access to what it has remembered.

the memory or belief state of an agent at a given time encodes all of the agent’s history that it has access to.

the belief state of an agent encapsulates the information about its past that it can use for current and future actions.

at every time, a controller has to decide: what should it do?
what should it remember?
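
The two decisions above (what to do, what to remember) can be sketched as a single step function; the rule used here is hypothetical, just to make the shape concrete:

```python
# Sketch: the belief state summarizes the part of the history the
# agent keeps. Each step returns a command AND an updated belief.
def step(belief, percept):
    """Return (command, new_belief) from the current memory and percept."""
    seen = belief | {percept}            # remember which percepts occurred
    command = "act" if "goal" in seen else "search"
    return command, seen

belief = set()                           # initial (empty) belief state
for percept in ["wall", "goal", "wall"]:
    command, belief = step(belief, percept)
```

The belief state here (a set of percepts seen) is deliberately tiny; the point is only that the controller updates its memory and chooses a command at every time step.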

22
Q

flat

A

one level of abstraction

23
Q

modular

A

interacting modules that can be understood separately

24
Q

hierarchical

A

modules that are (recursively) decomposed into modules

  • flat representations are adequate for simple systems.
  • complex biological systems, computer systems, and organizations are all hierarchical.
25
Q

kinds of agents

A
  • dead reckoning agent: doesn’t perceive the world.
  • reactive agent: doesn’t have a belief state.
  • model-free agent: maintains a belief state that is used to maximize utility/reward.
  • model-based agent: uses a model of the world as its belief state to plan optimal actions.
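
The reactive-vs-belief-state distinction can be sketched with two toy classes (hypothetical names and rules, only to contrast the two designs):

```python
class ReactiveAgent:
    """No belief state: the command depends only on the current percept."""
    def act(self, percept):
        return "flee" if percept == "danger" else "wander"

class BeliefStateAgent:
    """Keeps a belief state that is updated from each percept."""
    def __init__(self):
        self.danger_seen = False          # minimal belief state
    def act(self, percept):
        # update memory, then choose a command based on it
        self.danger_seen = self.danger_seen or (percept == "danger")
        return "hide" if self.danger_seen else "wander"
```

The reactive agent forgets danger the moment the percept changes, while the belief-state agent's past percepts keep influencing its future commands.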