Midterm review Flashcards
What's a rational agent?
An agent that selects the action expected to be most successful (i.e. to maximize its performance measure), given its percept sequence and the actions it can perform
What does PEAS stand for, and what are the parts?
Performance measure (the goal, e.g. safety), Environment (the location and what's in it), Actuators (what the agent performs actions through), Sensors (how the agent senses things for input)
What do we mean by PEAS analysis?
Specifying an agent's task environment by identifying each of the four PEAS components: its performance measure, environment, actuators, and sensors
What is an example of a software agent?
An AI program written to do some task, e.g. a program that brakes when it sees a car too close: cameras as sensors, a mechanism that pushes the brake as the effector
What is abstraction? Give an example.
Removing details to focus on the bigger idea, e.g. reducing the Romania road map (Sibiu and its neighbors) to a simple graph of cities and distances
Simple reflex agent
Looks at the latest percept (or two) and acts on it like a reflex, e.g. the square is dirty, so clean it
Model-based agents
Keep a data structure that models the environment (what else is going on, what the agent's actions will do) and make decisions based on that model
Goal-based agents
A model-based agent with an explicit goal, e.g. a destination; more sophisticated
Utility-based agents
Combine a model, a goal, and a utility function that rates future states, i.e. how well will this action help me meet my goal?
Percept sequence
Everything the agent has perceived so far
Uninformed searches
Blind search: uses no problem-specific knowledge beyond the problem definition itself (fine when no additional info is available, but less effective)
Informed searches
Uses problem-specific knowledge, e.g. heuristic functions that bring in extra information; more effective
Uninformed searches examples
Breadth-first, depth-first, iterative deepening depth-first, uniform cost
Informed searches examples
Greedy best-first, A*
Graph- vs tree-based models
In a tree, each node has at most one parent but may have many children; in a graph, a node may have many parents (and cycles are possible)
g(n) function
Cost of the path from the start to node n
h(n) function
Estimated cost of getting from node n to the goal
fully observable vs partially observable env
fully observable: sensors give access to the complete state of the env at each point in time (chess with a clock); partially observable: they don't (poker)
deterministic vs stochastic env
deterministic: the next state of the env is completely determined by the current state and the action executed by the agent (chess with a clock); stochastic: it isn't (poker)
strategic env
env is deterministic except for the actions of other agents (chess with a clock)
episodic vs sequential
episodic: the agent's experience is divided into episodes (perceive, then one action) and the choice of action in each episode depends only on that episode (part-picking robot); sequential: current choices affect future ones (chess with a clock)
static vs dynamic env
static: the env is unchanged while the agent is deliberating (poker); dynamic: it can change mid-deliberation (taxi driving)
semidynamic env
the env itself doesn't change with passing time, but the agent's performance score does (image analysis)
discrete vs cont env
discrete: a limited number of distinct, clearly defined percepts and actions (chess with a clock); continuous: percepts and actions range over continuous values (taxi driving)
single agent vs multiagent env
single agent: an agent operating by itself in the env; multiagent: other agents' actions affect the agent's performance (chess with a clock)
real world env types?
partially observable, stochastic, sequential, dynamic, continuous, multi-agent
greedy best-first search
uses h(n), the estimated cost from n to the goal; gets to a goal but may not be optimal, can get stuck in loops, keeps all nodes in memory
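A minimal sketch of greedy best-first search, assuming a toy slice of the textbook Romania map (the graph, the straight-line distances, and the function name are illustrative, not from the cards):

```python
# Greedy best-first: fringe ordered by h(n) alone.
import heapq

graph = {
    'Arad': ['Sibiu'], 'Sibiu': ['Fagaras', 'Rimnicu'],
    'Fagaras': ['Bucharest'], 'Rimnicu': ['Pitesti'],
    'Pitesti': ['Bucharest'], 'Bucharest': [],
}
# Straight-line distances to Bucharest (textbook values).
h = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
     'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    fringe = [(h[start], start, [start])]   # priority queue keyed by h(n)
    visited = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        if node in visited:                 # visited set guards against the
            continue                        # loops greedy search can fall into
        visited.add(node)
        for child in graph[node]:
            heapq.heappush(fringe, (h[child], child, path + [child]))
    return None

print(greedy_best_first('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- cost 450, not the optimal 418
```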
A* search
uses f(n) = g(n) + h(n): estimated total cost through n to the goal = cost so far to reach n + estimated cost from n to the goal; optimal solution when the heuristic is admissible, fairly efficient; puts children in the fringe list ordered by f; keeps all nodes in memory
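A hedged sketch of A* on the same assumed toy graph; because the straight-line heuristic is admissible, it finds the cheaper route that greedy best-first missed:

```python
# A*: fringe ordered by f(n) = g(n) + h(n).
import heapq

graph = {
    'Arad': [('Sibiu', 140)], 'Sibiu': [('Fagaras', 99), ('Rimnicu', 80)],
    'Fagaras': [('Bucharest', 211)], 'Rimnicu': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)], 'Bucharest': [],
}
h = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
     'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    fringe = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {}                                  # cheapest g found per node
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        for child, step in graph[node]:
            g2 = g + step
            heapq.heappush(fringe, (g2 + h[child], g2, child, path + [child]))
    return None

print(a_star('Arad', 'Bucharest'))
# (['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'], 418) -- optimal
```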
heuristic is admissible
h(n) never overestimates the true cost of reaching the goal from n (e.g. straight-line distance); this is what makes A* optimal
breadth-first search
expands the shallowest node first, going across all children of a node before going deeper; FIFO, children put at the end of the fringe list; takes lots of time and space
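A minimal BFS sketch, assuming a small toy graph (names are illustrative):

```python
# Breadth-first: FIFO queue, children appended to the end of the fringe.
from collections import deque

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def bfs(start, goal):
    fringe = deque([[start]])           # queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()         # take from the front...
        node = path[-1]
        if node == goal:
            return path
        for child in graph[node]:
            if child not in visited:
                visited.add(child)
                fringe.append(path + [child])   # ...put children at the end
    return None

print(bfs('A', 'D'))   # ['A', 'B', 'D'] -- shallowest path found first
```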
depth-first
expands the deepest node first; LIFO, children put at the front of the fringe list; light on memory but could go infinitely down a branch
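The matching DFS sketch, same assumed toy graph, with the fringe used as a LIFO stack:

```python
# Depth-first: LIFO stack, so the search dives down one branch first.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def dfs(start, goal):
    fringe = [[start]]                  # stack of paths
    while fringe:
        path = fringe.pop()             # take from the top of the stack
        node = path[-1]
        if node == goal:
            return path
        for child in graph[node]:
            if child not in path:       # path check blocks infinite descent
                fringe.append(path + [child])
    return None

print(dfs('A', 'D'))   # ['A', 'C', 'D'] -- dives down the last child pushed
```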
iterative deepening depth-first
depth-first search rerun with an increasing depth limit (0, 1, 2, ...): search down to the limit, forget everything, and restart one level deeper; re-expands earlier levels but doesn't keep them in memory, so it's more memory-efficient and still finds the shallowest solution
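A sketch of the restart-with-a-deeper-limit idea; the helper and graph here are assumptions for illustration:

```python
# Iterative deepening: depth-limited DFS rerun with limits 0, 1, 2, ...
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def depth_limited(node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None                     # hit the cutoff; abandon this branch
    for child in graph[node]:
        if child not in path:
            found = depth_limited(child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

def iddfs(start, goal, max_depth=10):
    for limit in range(max_depth + 1):  # restart from scratch each round,
        found = depth_limited(start, goal, limit, [start])  # keeping nothing
        if found:
            return found
    return None

print(iddfs('A', 'D'))   # ['A', 'B', 'D'] -- shallowest solution, DFS memory
```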
uniform cost search
uses g(n), the cost so far: the fringe list is kept ordered by path cost (low g at the front, high g at the end); optimal, but takes time
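A uniform-cost sketch, assuming a toy weighted graph; the fringe is a priority queue ordered by g(n):

```python
# Uniform-cost: always expand the cheapest known path first.
import heapq

graph = {'A': [('B', 5), ('C', 1)], 'B': [('D', 1)],
         'C': [('D', 10)], 'D': []}

def uniform_cost(start, goal):
    fringe = [(0, start, [start])]      # ordered by g(n), cost so far
    best_g = {}
    while fringe:
        g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                    # already reached more cheaply
        best_g[node] = g
        for child, step in graph[node]:
            heapq.heappush(fringe, (g + step, child, path + [child]))
    return None

print(uniform_cost('A', 'D'))  # (['A', 'B', 'D'], 6) -- cheapest, not fewest hops
```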
steps of search (for all)
take a node off the fringe list; is it the goal? if yes, return it; if no, expand it by the successor rule and follow the search's rule for where to put the children in the fringe list
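Those shared steps as one sketch, where only the rule for inserting children into the fringe list differs per search (the graph and rule names are assumptions):

```python
# One loop, many searches: the insert rule decides BFS vs DFS vs the rest.
def generic_search(start, goal, graph, insert):
    fringe = [[start]]
    while fringe:
        path = fringe.pop(0)            # take a node off the fringe list
        node = path[-1]
        if node == goal:                # goal test on expansion
            return path
        children = [path + [c] for c in graph[node] if c not in path]
        fringe = insert(fringe, children)   # the rule that varies per search
    return None

graph = {'A': ['B', 'D'], 'B': ['C'], 'C': ['D'], 'D': []}
bfs_rule = lambda fringe, kids: fringe + kids    # FIFO: children at the end
dfs_rule = lambda fringe, kids: kids + fringe    # LIFO: children at the front
print(generic_search('A', 'D', graph, bfs_rule))  # ['A', 'D'] -- shallowest
print(generic_search('A', 'D', graph, dfs_rule))  # ['A', 'B', 'C', 'D'] -- dives
```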