Agents Flashcards
Definition of a rational agent:
For each possible percept history, select an action
that is expected to maximize its performance measure, given the evidence provided by the percept history and
whatever built-in knowledge the agent has.
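The definition above can be sketched as a table-driven agent that maps entire percept histories to actions (a minimal illustration; the percept encoding and lookup table here are invented for the example, not part of the source):

```python
# Table-driven agent: selects an action based on the full percept history.
# The percepts and table entries below are made-up vacuum-world examples.
def make_table_driven_agent(table, default_action):
    percepts = []  # record of the entire percept history so far

    def agent(percept):
        percepts.append(percept)
        # Look up the action for the whole percept sequence to date.
        return table.get(tuple(percepts), default_action)

    return agent

# Hypothetical table for a two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table, default_action="NoOp")
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept history, which is why this construction is only of theoretical interest.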
What does PEAS stand for?
Performance measure, Environment, Actuators, Sensors. It is used to describe agents.
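As an illustration of how PEAS is used, a description for an automated taxi could be recorded as a simple data structure (the field contents are a hypothetical example, not from the source):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# Hypothetical PEAS description for an automated taxi.
taxi = PEAS(
    performance=["safety", "speed", "legality", "comfort"],
    environment=["roads", "traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "horn", "display"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
```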
Fully/Partially observable environment…
fully observable: the sensors detect all properties of the world relevant to the
current action choice; partially observable: some relevant properties are hidden from the sensors
Single/Multi-agent environment…
single-agent: only one agent, no cooperation and no competition; multi-agent: other agents whose behavior matters, whether cooperative or competitive
deterministic / stochastic environment…
An environment is deterministic if the next state is perfectly predictable given the previous state and the agent’s action. It is stochastic if the next state is only determined up to a probability distribution.
episodic / sequential environment…
Sequential environments require memory of past actions to determine the next best action. Episodic environments are a series of one-shot actions, and only the current (or recent) percept is relevant. An AI that looks at radiology images to determine if there is a sickness is an example of an episodic environment. One image has nothing to do with the next.
static / dynamic environment…
static: the world does not change during the reasoning time of the agent;
dynamic: the world can change while the agent deliberates;
semi-dynamic: static, but the performance score decreases with
deliberation time
discrete / continuous environment…
discrete: world properties take discrete values, e.g. time, the number of possible
states; continuous otherwise
known / unknown environment…
An environment is considered to be “known” if the agent understands the laws that govern the environment’s behavior. For example, in chess, the agent would know that when a piece is “taken” it is removed from the game. On a street, the agent might know that when it rains, the streets get slippery.
four basic types of agents:
- Simple reflex agents (no memory, no sequences of percepts)
- Model-based reflex agents (internal state -> memory, model of env.)
- Goal-based agents (model of the world, explicit goal, search & planning)
- Utility-based agents (“happiness”, utility function -> expected utility for decision, resolve conflicting goals)