Agent Concepts Flashcards

1
Q

Agent

A

There is no commonly agreed definition. A working definition:

“An agent is a (computational) entity that is situated in some environment and that is capable of flexible, autonomous activity – that is, capable of action and interaction – in order to meet its design objectives.”

Borderline examples: a watch is autonomous but not flexible; a reinforcement-learning agent is not necessarily intelligent (it may just memorise or add things up, e.g. AlphaZero); voice assistants such as OK Google, Alexa and Siri.

(1)
● A watch (not even a smart one)
○ Interacts with its environment
○ Moves autonomously
○ Meets its design objectives
(2)
● Reinforcement learning
○ Interacts with the environment
○ Motivated to act by a reward function
○ Learns to maximize reward (in effect meeting its design objective)
(3)
● Alexa / Siri
○ Autonomously offers advice
○ Adapts to user preferences
2
Q

Characteristics of agents

A
autonomous 
flexible 
interact with environment 
rationality 
mobility 
adaptivity
introspection 
benevolence

Related (mentalistic) notions: belief, intention, plan, desire, ...

3
Q

Minimally intelligent Agent

A
  • Pro-active: takes the initiative to satisfy its (delegated) goals
  • Reactive: perceives the environment and responds to changes in a timely fashion
  • Socially able: capable of interacting with other agents and humans, which includes cooperation and
    negotiation.

Difficulties:
purely goal-directed behaviour can be easy
purely reactive behaviour can be easy
combining both can be complex (reacting to changes while still getting the goal done)

4
Q

agent vs object concept

A

(similarities to an object)
The agent and object concepts are complementary rather than mutually exclusive; they allow for qualitatively different system perspectives and different levels of abstraction. Both encapsulate:
- Identity: who
- State: what
- Passive behaviour: how, if invoked
But agents additionally encapsulate active behaviour: when, why, with whom, whether at all. There is a gradual transition from agent to object.

5
Q

agent vs expert system

A

An expert system interacts with a user to collect info.

In contrast to agents, expert systems are disembodied: they don’t interact with an environment and show no proactive behaviour.

6
Q

Types of environments

A
● Accessible vs inaccessible
○ Accessible: the agent can obtain complete, accurate, up-to-date information about the environment’s state
● Deterministic vs non-deterministic
○ How much uncertainty there is regarding the next state given an action
● Episodic vs non-episodic
○ Episodic: performance depends on a number of discrete episodes
● Static vs dynamic
○ Static: the agent’s influence on the world is the only way the world can change. The physical world is highly dynamic.
● Discrete vs continuous
○ Refers to the action space: chess is discrete, taxi-driving is continuous (Russell and Norvig’s example)
7
Q

agent architectures

A

Logic based
Reactive
Belief/desire/intention
Layered

8
Q

Logic-based agents

A
● Symbolic / knowledge-based AI view
● Intelligent behavior is the result of a symbolic representation of the environment, combined with logical deduction (theorem proving)

Theory of agency 𝜌: describes, in an executable way, how intelligent agents behave
· Belief database Δ: the information the agent has about the environment
· Δ ⊢𝜌 φ means that φ can be derived from Δ using the rules of 𝜌. The theory essentially encodes implied actions, allowed actions and “no action” (“if you can prove this, then do this”).
· Problems:
▪ Symbol grounding: coupling perception with symbolic facts
▪ Reasoning takes (too much) time
▪ It is very hard to build a sufficiently complete model of a complex environment
· But the logic-based approach is not dead: symbol grounding is starting to work (deep learning for vision), hardware is getting ridiculously fast, and logical policies can be learned.

e.g. a vacuum agent with predicates In(x,y) and Dirt(x,y) and action rules such as In(x,y) ∧ Dirt(x,y) → Do(suck); see the sketch below.
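A minimal sketch of this rule-firing idea, assuming the vacuum example above; it only checks ground facts rather than doing full logical deduction, and the facts, rules and names are illustrative, not a prescribed implementation:

# Minimal logic-based agent sketch: beliefs are ground facts (the database Δ),
# and an action rule fires when all of its preconditions are in the database.
# Illustrative only; a real logic-based agent would use theorem proving.

beliefs = {("In", 0, 0), ("Dirt", 0, 0)}   # Δ: what the agent currently believes

# Theory of agency ρ reduced to "if you can prove this, then do this" rules:
rules = [
    ({("In", 0, 0), ("Dirt", 0, 0)}, "suck"),
    ({("In", 0, 0)}, "forward"),
]

def select_action(beliefs, rules):
    # Return the action of the first rule whose preconditions are all
    # contained in the belief database; otherwise do nothing.
    for preconditions, action in rules:
        if preconditions <= beliefs:
            return action
    return None

print(select_action(beliefs, rules))  # -> suck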

Planning agents (all about using an environmental model to determine the best action to take), such as Monte Carlo Tree Search.

9
Q

Reactive Agents

A

● Intelligence emerges from the interaction between simple behaviors and the environment
● Intelligence cannot be separated from acting in the real world: intelligence cannot be disembodied

Architectures: subsumption, reinforcement learning (RL)

10
Q

Subsumption architecture

A

Reactive agents

Decision making is established through a set of behaviours, where each behaviour accomplishes some task (e.g. implemented as a finite state machine, FSM). There are no complex symbolic representations and no symbolic reasoning: situation → action. A subsumption hierarchy decides which behaviour wins when multiple behaviours choose conflicting actions (e.g. autonomous cars).
▪ Advantages: simplicity, computational tractability, robustness against failure; solutions can be quite elegant.
▪ Problems:
• Only local information is used, no model of the environment
• A short-term view is inherent
• Emergent behaviour can be hard to predict
• The dynamics of interactions can become too complex to design

e.g. an autonomous car: go from A to B, avoid collisions, limit fuel consumption → arranged in a subsumption hierarchy (see the sketch below)
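A minimal sketch of such a hierarchy, assuming made-up behaviour names and percept keys (avoid_collision, go_to_destination, save_fuel are illustrative, not from the slides):

# Subsumption sketch: behaviours are ordered by priority, and the first
# behaviour whose situation matches produces the action (situation -> action,
# no symbolic reasoning). Higher behaviours subsume the ones below them.

def avoid_collision(percept):
    return "brake" if percept.get("obstacle_close") else None

def go_to_destination(percept):
    return "steer_towards_goal" if not percept.get("at_goal") else None

def save_fuel(percept):
    return "coast"   # lowest-priority default behaviour

behaviours = [avoid_collision, go_to_destination, save_fuel]  # top of the hierarchy first

def act(percept):
    for behaviour in behaviours:
        action = behaviour(percept)
        if action is not None:
            return action

print(act({"obstacle_close": True}))                     # -> brake
print(act({"obstacle_close": False, "at_goal": False}))  # -> steer_towards_goal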

11
Q

RL reactive agent

A

▪ Learns a reactive policy
▪ The environment is assumed to be Markovian
▪ Global optimality through local decisions is made possible by a value function (value = sum of discounted future rewards); see the sketch below
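A minimal sketch of the value idea, assuming a made-up reward sequence and discount factor gamma = 0.9 (illustration only, not part of the course material):

# Value = sum of discounted future rewards: this is what lets purely local,
# reactive decisions add up to globally (near-)optimal behaviour.

def discounted_return(rewards, gamma=0.9):
    # Sum of gamma^t * r_t over the future rewards.
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

future_rewards = [0, 0, 1, 0, 10]          # rewards the agent expects to collect
print(discounted_return(future_rewards))   # 0.9^2 * 1 + 0.9^4 * 10 ≈ 7.37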

12
Q

Beliefs, Desires, Intentions (BDI) agents

A
  1. Deliberation: what are the goals we want to achieve?
  2. Means-end reasoning: how are we going to achieve these goals?
    · Intentions:
    ▪ Drive means-end reasoning, lead to actions
    ▪ Constrain future deliberation, restrict reasoning
    ▪ Persist until achieved, believed to be unachievable or purpose is gone
    ▪ Influence beliefs for future reasoning
    · This results in a trade-off: acting on current intentions to accomplish them vs. stopping to reconsider them.
    ▪ Bold agent: never stops to reconsider
    ▪ Cautious agent: constantly stops to reconsider
    · If the world/environment changes a lot, it’s better to be cautious.
    · Components are (see the sketch after this list):
    ▪ Current beliefs (info about environment)
    ▪ BR function: updates beliefs according to perceptions
    ▪ GO function: generates available options/desires
    ▪ Current options/desires (must be consistent)
    ▪ Filter function: deliberation or intention revision process
    ▪ Current intentions (agent’s focus)
    ▪ Action selection: translates intentions into action
    · Usually this implies representing the intentions as a stack or hierarchy again,
    to make action selection and prioritization possible
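A minimal sketch of one deliberation cycle of this loop; the function names mirror the components listed above, and the concrete beliefs, desires and intentions are made-up illustrations:

# One deliberation cycle of a BDI agent (sketch; names mirror the components above,
# the concrete facts and options are invented for illustration).

def belief_revision(beliefs, percept):              # BR function
    return beliefs | {percept}

def generate_options(beliefs, intentions):          # GO function: options/desires
    return {"clean_room"} if "dirt_seen" in beliefs else set()

def filter_options(beliefs, desires, intentions):   # deliberation / intention revision
    return list(intentions) + sorted(desires - set(intentions))

def select_action(intentions):                      # intentions as a stack: act on the top one
    return f"work_on({intentions[-1]})" if intentions else "idle"

beliefs, intentions = set(), []
beliefs = belief_revision(beliefs, "dirt_seen")
desires = generate_options(beliefs, intentions)
intentions = filter_options(beliefs, desires, intentions)
print(select_action(intentions))                    # -> work_on(clean_room)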
13
Q

Layered

A

Used when there is a need for both pro-active and reactive behaviour: planning for goals depends on current conditions, and the agent must respond to changes in the environment.
· Two types:
▪ Vertically layered: the layers form a stack and control flows through them one by one, in one or two passes. The input is perceptual input, the output is an action.
- Example: InteRRaP (2-pass), which has bottom-up activation and top-down execution. It has a social-interaction layer about others, an every-day behaviour layer about itself and a reactive behaviour layer about the environment.
▪ Horizontally layered: the number of desired behaviours equals the number of layers, which might need a mediator function if their actions contradict. Central control might create a bottleneck if it gets complex.
- Example: TouringMachines, which keeps a symbolic representation of the state of all entities and constructs plans to achieve the agent’s objectives. It has reactive, planning and modelling layers, mediated by a control subsystem. See the sketch below.
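A minimal sketch of the horizontally layered idea, assuming three illustrative layers and a mediator that resolves contradicting proposals by a fixed priority order (the layer names and percept keys are not from the slides):

# Horizontally layered agent sketch: every layer independently proposes an action
# for the same percept; a mediator function resolves conflicts, here simply by a
# fixed layer priority. Layer names and percept keys are illustrative assumptions.

def reactive_layer(percept):
    return "swerve" if percept.get("obstacle") else None

def planning_layer(percept):
    return "follow_route"

def modelling_layer(percept):
    return "update_world_model"

layers = [reactive_layer, planning_layer, modelling_layer]   # priority order for the mediator

def mediator(proposals):
    # Pick the highest-priority non-None proposal.
    return next(p for p in proposals if p is not None)

def act(percept):
    return mediator([layer(percept) for layer in layers])

print(act({"obstacle": True}))    # -> swerve
print(act({"obstacle": False}))   # -> follow_route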
