AI Week 1 Flashcards
Definition of AI
AI is the study and design of computer systems that act rationally.
Agent vs. Rational Agent
Agent: anything that perceives its environment through sensors and acts upon that environment through actuators.
Rational Agent: an agent that achieves its goal while optimizing some performance measure
Dartmouth Meeting
A 1956 meeting of ten researchers at Dartmouth College to discuss AI. It is widely considered the birthplace of modern AI.
How does an agent interact with its environment?
Percept/Sensors: The agent can perceive or sense the environment
Actuators/Actions: The agent can execute an action in the environment
Types of Environments
Remember with the mnemonic “D-SOAKED”:
Deterministicness (deterministic or stochastic)
Staticness (static or dynamic)
Observability (fully observable or partially observable or unobservable)
Agency (single agent or multi-agent)
Knowledge (known or unknown)
Episodicness (episodic or sequential)
Discreteness (discrete or continuous)
Deterministic & Stochastic Environment
An environment is deterministic if the next state is perfectly predictable given knowledge of the previous state and the agent’s action. Ex: chess AI.
An environment is stochastic if we can only predict the probability of the next state given knowledge of the previous state and the agent’s action. Ex: we only know the probability that a self-driving car will crash into someone.
Mathematically:
Deterministic: transition_model(state1, action) = state2
Stochastic: transition_model(state1, action) = a probability distribution over possible next states
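Python sketch of the two cases (the states, actions, and probabilities below are made up for illustration):

import random

# Deterministic: each (state, action) pair yields exactly one next state.
def deterministic_transition(state, action):
    table = {("s1", "a"): "s2", ("s1", "b"): "s3"}  # hypothetical model
    return table[(state, action)]

# Stochastic: each (state, action) pair yields a distribution over next states.
def stochastic_transition(state, action):
    dist = {("s1", "a"): {"s2": 0.9, "s3": 0.1}}  # hypothetical probabilities
    probs = dist[(state, action)]
    return random.choices(list(probs), weights=list(probs.values()))[0]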
Static & Dynamic Environment
An environment is static if it does not change while the agent deliberates. Ex: chess game.
An environment is dynamic if it does change while the agent deliberates. Ex: other cars move while our self-driving car decides to brake.
Fully Observable, Partially Observable, & Unobservable Environment
An environment is fully observable if the agent can perceive and access all the information relevant to executing an action. Ex: chess AI. Math: the set S of possible percepts contains all percepts relevant to selecting an action
An environment is partially observable if the agent can perceive and access some of the information relevant to executing an action. Ex: self-driving cars. Math: the set S contains some but not all the percepts relevant to selecting an action
An environment is unobservable if the agent can perceive and access none of the information relevant to executing an action. Ex: there are essentially no real-world examples. Math: the set S contains no percepts; it is the empty set.
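Python sketch of the three cases, using a made-up state dictionary (the keys and the observable_keys filter are illustrative assumptions):

# Hypothetical world state; every key is relevant to choosing an action.
world_state = {"agent_pos": (2, 3), "pedestrian_pos": (5, 1), "light": "red"}

def fully_observable_percept(state):
    return dict(state)  # the agent perceives every relevant variable

def partially_observable_percept(state, observable_keys=("agent_pos", "light")):
    return {k: state[k] for k in observable_keys}  # only a subset is perceived

def unobservable_percept(state):
    return {}  # S is the empty set: no percepts at all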
Single-Agent & Multi-Agent Environment
An environment is a single-agent environment if there is only one decision-making entity in the environment. Ex: image classification, where the classifier is the only decision-making entity.
An environment is a multi-agent environment if there are multiple decision-making entities in the environment. Ex: a self-driving car is one decision-making entity and a pedestrian is another such entity.
Known & Unknown Environment
An environment is a known environment if the results for all actions are known to the agent. Ex: the agent knows all possible results of moving a chess piece in a chess game.
An environment is an unknown environment if the results of all actions are not known to the agent. Ex: a self-driving car does not know all possible results of coming to a short stop, since the driver behind may react unpredictably.
Math:
S1 = set of all possible initial states
A = set of all possible actions
S2 = set of all possible resulting states
transition_model: S1 x A -> S2
known: the agent's model of S2 contains all possible resulting states
unknown: the agent's model of S2 does NOT contain every possible resulting state
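Python sketch (the states, actions, and table entries are invented for illustration): a known environment is like an exhaustive lookup table of outcomes, while an unknown one has gaps the agent must learn.

# Known: every (state, action) outcome is listed, as in chess.
known_model = {
    ("s1", "move_pawn"): {"s2"},
    ("s1", "move_knight"): {"s3"},
}

# Unknown: some outcomes are missing, e.g. the tailing driver's reaction.
unknown_model = {
    ("cruising", "brake_hard"): {"stopped"},
}

def outcomes(model, state, action):
    # Returns the known resulting states, or None if the agent has no entry.
    return model.get((state, action))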
Episodic & Sequential Environment
An environment is an episodic environment if actions do not require knowledge of the past and rely only on the current percept. In other words, every action is executed as a stand-alone episode. Ex: an image-classification AI relies only on the current image it senses to classify it.
An environment is a sequential environment if actions do require knowledge of the past and rely on the percept sequence. Ex: a self-driving car relies on knowledge of both where a pedestrian was before and where they are now to estimate the pedestrian's speed, and thus whether the car needs to brake to avoid hitting them.
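Python sketch of the sequential case (the positions, timestep, and braking threshold are made-up numbers):

# Percept history: pedestrian x-positions in meters at 1-second intervals.
percept_sequence = [10.0, 11.5, 13.0]

def should_brake(history, dt=1.0, speed_threshold=1.0):
    # Sequential: the speed estimate needs at least two past percepts.
    if len(history) < 2:
        return False
    speed = (history[-1] - history[-2]) / dt
    return speed > speed_threshold

print(should_brake(percept_sequence))  # True: pedestrian moving ~1.5 m/s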
Discrete & Continuous Environment
An environment is a discrete environment if all of its percepts are discrete. Ex: the (x, y) board square occupied by a chess piece.
An environment is a continuous environment if all of its percepts are continuous. Ex: the velocity of a self-driving car.
Agent Function
A function that maps a percept sequence to an action
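Math: f: P* -> A, where P* is the set of all percept sequences and A is the set of possible actions.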
Agent Program
While an agent function is an abstract mathematical function, the agent program is a concrete implementation of the agent function.
An agent program takes in the current percept sequence and outputs an action.
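Python sketch of a table-driven agent program (the percept names and table entries are invented for illustration):

def agent_program(percept_sequence):
    # Look up the entire percept sequence in a (hypothetical) action table.
    action_table = {
        ("dirty",): "suck",
        ("clean",): "move",
        ("clean", "dirty"): "suck",
    }
    return action_table.get(tuple(percept_sequence), "no_op")

print(agent_program(["clean", "dirty"]))  # suck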
Simple Reflex Agent
An agent that selects an action on the basis of the current percept, ignoring the rest of the percept history.
Agent functions for simple reflex agents are usually written as condition-action rules (if-then statements).
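Python sketch using the classic two-square vacuum world (the percept format here is an assumption):

def simple_reflex_vacuum_agent(percept):
    # Condition-action rules that look only at the current percept.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck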