Wk 1 (Ch 1/2): Introduction, Intelligent Agents Flashcards
What is the definition of a rational agent?
Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Explain environments, rational actions, and performance measures.
In my words (sort of): rational actions are those that maximize the performance measure in an environment. Of course, the actions need to be available, and you also need to be able to perceive the environment and be doing it on purpose. A task environment is spelled out with PEAS: Performance measure, Environment, Actuators, Sensors (see the sketch below).
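A tiny Python sketch (my own, purely illustrative, not from the book's code) writing PEAS out as data, using roughly the book's automated-taxi example:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task-environment description: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# Roughly the book's automated-taxi example, written out as data.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "accelerometer"],
)
print(taxi)
```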
What is the best way to design performance measures?
Based on what you actually want to achieve in the environment, not based on how you think the agent should behave.
What 4 things does rationality depend on?
- performance measure
- percept sequence to date
- prior knowledge of the environment
- actions available
How can time be part of an environment?
Just one more parameter to take into account. Doing this means no two states are ever the same, since time is continually changing.
How do you show an agent is rational?
For all possible environments (and all possible start states and potential external states?), this agent performs at least as well as any other agent. This is a very strict interpretation.
How can an agent keep track of potential states it can’t see without internal memory?
By using the environment as external memory (think appointment calendars and knots in handkerchiefs): external triggers, writing things down, etc.
How does action cost affect movement decisions?
Any expenditure of cost should be an investment toward increasing future payoff and improving the performance measure.
What is the biggest problem with reflex agents? What IS a reflex agent?
They have to keep doing the same thing in states that look the same but really are quite different. A reflex agent essentially always acts on the current percept, regardless of percept history (a minimal sketch follows below).
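A minimal Python sketch of a simple reflex agent for the book's two-square vacuum world (the function name and percept tuple format are my own illustrative choices):

```python
def reflex_vacuum_agent(percept):
    """Acts on the current percept only; percept history is ignored."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

# The same percept always yields the same action, no matter what came before:
print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```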
Does partial information imply non rationality?
Absolutely not.
What kind of task environment would prevent pure reflex agents from acting rationally?
Partially observable… think correspondence chess and reacting to a4 the same way every time.
In what environment is every agent rational?
One where rewards are invariant under permutations of actions… in other words, what I do has basically no effect on the reward I receive.
what’s the difference between agent functions and programs?
The agent function is the abstract mapping from the entire percept sequence to an action; the agent program is the concrete implementation that runs on the agent’s architecture and only receives the current percept as input (keeping internal state if the history matters). See the sketch below.
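A rough Python sketch (my own, not book code) contrasting the two: the function is defined on the whole percept history, while the program gets one percept at a time and keeps internal state:

```python
def agent_function(percept_sequence):
    """Abstract mapping: defined on the entire history of percepts."""
    return "Suck" if percept_sequence and percept_sequence[-1][1] == "Dirty" else "Right"

class AgentProgram:
    """Concrete implementation: called with only the current percept each step."""
    def __init__(self):
        self.percepts = []              # internal state standing in for the history

    def __call__(self, percept):
        self.percepts.append(percept)
        return agent_function(self.percepts)

program = AgentProgram()
print(program(("A", "Dirty")))  # Suck
print(program(("A", "Clean")))  # Right
```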
What are the 6 properties of task environments and how do I remember them? What are the options for each?
- Observable (fully vs. partially)
- Agents (single vs. multi)
- Deterministic (vs. stochastic)
- Episodic (vs. sequential)
- Static (vs. dynamic)
- Discrete (vs. continuous)
In other words… EASY =
It’s only me, I can see everything, and it’s not changing. I have black-and-white choices that directly determine the outcome. If I make a mistake, no later episodes are affected.
As opposed to… HARD =
There are many agents here, I can’t see everything that’s happening, and it’s all changing anyway. My choices are grey, not black and white, and if I make a mistake, it will affect later states.
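A little Python sketch (purely illustrative; the environment names and classifications here are my own, not the book's) showing how an environment might be tagged along the six dimensions and checked against the "easy" pattern above:

```python
# Hypothetical environments tagged along the six task-environment dimensions.
easy_env = dict(observable="fully", agents="single", deterministic=True,
                episodic=True, static=True, discrete=True)
hard_env = dict(observable="partially", agents="multi", deterministic=False,
                episodic=False, static=False, discrete=False)

def looks_easy(env):
    """Crude 'easy' test in the spirit of the flashcard above."""
    return (env["observable"] == "fully" and env["agents"] == "single"
            and env["deterministic"] and env["episodic"]
            and env["static"] and env["discrete"])

print(looks_easy(easy_env))  # True
print(looks_easy(hard_env))  # False
```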
Define agent in the book’s words.
Agent: an entity that perceives and acts; or, one that can be viewed as perceiving and acting. Essentially any object qualifies; the key point is the way the object implements an agent function. (Note: some authors restrict the term to programs that operate on behalf of a human, or to programs that can cause some or all of their code to run on other machines on a network, as in mobile agents.)