AI Week 1 Flashcards

1
Q

Definition of AI

A

AI is the field of building computer systems that act rationally.

2
Q

Agent VS Rational Agent

A

Agent: anything that perceives its environment through sensors and acts upon that environment through actuators.

Rational Agent: an agent that acts to achieve its goal while maximizing some performance measure

3
Q

Dartmouth Meeting

A

A meeting of 10 researchers at Dartmouth in 1956 to discuss AI. Considered the birthplace of modern AI.

4
Q

How does an agent interact with its environment?

A

Percept/Sensors: The agent can perceive or sense the environment

Actuators/Actions: The agent can execute an action in the environment

5
Q

Types of Environments

A

Remember with the mnemonic “D-SOAKED”:

Deterministicness (deterministic or stochastic)
Staticness (static or dynamic)
Observability (fully observable, partially observable, or unobservable)
Agency (single-agent or multi-agent)
Knowledge (known or unknown)
Episodicness (episodic or sequential)
Discreteness (discrete or continuous)

6
Q

Deterministic & Stochastic Environment

A

An environment is deterministic if the next state is perfectly predictable given knowledge of the previous state and the agent’s action. Ex: a chess AI.

An environment is stochastic if we can only predict the probability of the next state occurring given knowledge of the previous state and the agent’s action. Ex: we can only estimate the probability that a self-driving car’s maneuver will end in a collision.

Mathematically:
Deterministic: transition_model(state1, action) = state2, a single definite next state
Stochastic: transition_model(state1, action) = a probability distribution over possible next states
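
A minimal sketch of the difference, assuming a made-up one-dimensional corridor world (not from the lecture):

import random

# Hypothetical toy world: positions 0..4 along a corridor.
def deterministic_transition(state, action):
    # Deterministic: exactly one resulting state for each (state, action) pair.
    if action == "right":
        return min(state + 1, 4)
    if action == "left":
        return max(state - 1, 0)
    return state

def stochastic_transition(state, action):
    # Stochastic: a probability distribution over possible resulting states.
    intended = deterministic_transition(state, action)
    return {intended: 0.8, state: 0.2}  # the move succeeds 80% of the time, slips 20%

def sample_next_state(state, action):
    # Draw an actual next state from the stochastic model.
    dist = stochastic_transition(state, action)
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs, k=1)[0]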

7
Q

Static & Dynamic Environment

A

An environment is static if it does not change while the agent deliberates. Ex: chess game.

An environment is dynamic if it does change while the agent deliberates. Ex: other cars move while our self-driving car decides to brake.

8
Q

Fully Observable, Partially Observable, & Unobservable Environment

A

An environment is fully observable if the agent can perceive and access all the information relevant to selecting an action. Ex: a chess AI. Math: the set S of possible percepts contains all percepts relevant to selecting an action.

An environment is partially observable if the agent can perceive and access only some of the information relevant to selecting an action. Ex: self-driving cars. Math: the set S contains some but not all of the percepts relevant to selecting an action.

An environment is unobservable if the agent can perceive and access none of the information relevant to selecting an action. Ex: there are essentially no real-world examples. Math: the set S contains no percepts; it is the empty set.
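
A rough sketch in code, using a made-up world-state dictionary (the field names are illustrative, not from the course):

world_state = {
    "own_position": (3, 4),
    "pedestrian_position": (5, 1),
    "pedestrian_intent": "crossing",
}

def fully_observable_percept(state):
    # The percept exposes every piece of state relevant to selecting an action.
    return dict(state)

def partially_observable_percept(state):
    # Only part of the relevant state is sensed; the pedestrian's intent stays hidden.
    return {k: state[k] for k in ("own_position", "pedestrian_position")}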

9
Q

Single-Agent & Multi-Agent Environment

A

An environment is a single-agent environment if there is only one decision-making entity in the environment. Ex: image classification, where the classifier is the only decision-making entity.

An environment is a multi-agent environment if there are multiple decision-making entities in the environment. Ex: a self-driving car is one decision-making entity and a pedestrian is another.

10
Q

Known & Unknown Environment

A

An environment is a known environment if the results of all actions are known to the agent. Ex: the agent knows all possible results of moving a chess piece in a chess game.

An environment is an unknown environment if the results of its actions are not all known to the agent. Ex: a self-driving car does not know all possible results of coming to a sudden stop, since the driver behind it may react in different ways.

Math:
S1 = set of all possible initial states
A = set of all possible actions
S2 = set of all possible resulting states
transition_model: S1 x A -> S2
Known: the agent knows the transition_model, i.e., it knows the resulting state in S2 for every (state, action) pair
Unknown: the agent does not know the resulting state for every (state, action) pair and must learn how the environment works
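
One way to picture the distinction (a toy sketch, not the course's formalism): a known environment is like holding the complete transition table, while an unknown one has entries the agent has not learned yet.

# Hypothetical chess-like fragment: every outcome is known in advance.
known_transitions = {
    ("pawn_on_e2", "advance_one"): "pawn_on_e3",
    ("pawn_on_e2", "advance_two"): "pawn_on_e4",
}

# Hypothetical driving fragment: some outcomes are simply not known ahead of time.
unknown_transitions = {
    ("cruising", "brake_gently"): "slowing_smoothly",
    # ("cruising", "brake_hard"): outcome depends on the driver behind; the agent must learn it.
}

def known_result(transitions, state, action):
    # Returns None when the outcome of (state, action) is not known to the agent.
    return transitions.get((state, action))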

11
Q

Episodic & Sequential Environment

A

An environment is an episodic environment if actions do not require knowledge of the past and rely only on the current percept. In other words, each action is executed as a stand-alone episode. Ex: an image-classification AI relies only on the current image it senses in order to classify it.

An environment is a sequential environment if actions do require knowledge of the past and rely on the percept sequence. Ex: a self-driving car relies on knowledge of both where a pedestrian was before and where they are now to determine the pedestrian’s speed, and thus whether the car needs to brake to avoid hitting them.

12
Q

Discrete & Continuous Environment

A

An environment is a discrete environment if all of its percepts are discrete. Ex: the (x, y) position of a chess piece on the board.

An environment is a continuous environment if all of its percepts are continuous. Ex: the velocity of a self-driving car.

13
Q

Agent Function

A

A function that maps a percept sequence to an action

14
Q

Agent Program

A

While an agent function is an abstract mathematical function, the agent program is a concrete implementation of the agent function.

An agent program takes in the current percept as input (keeping track of the percept sequence internally if it needs it) and outputs an action.
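
A minimal sketch of the distinction, assuming a tiny table-driven agent (the table and percepts are made up):

class TableDrivenAgentProgram:
    def __init__(self, table):
        # 'table' plays the role of the agent function: percept sequence -> action.
        self.table = table
        self.percepts = []

    def __call__(self, percept):
        # The program receives one percept at a time and records it, so that over
        # successive calls it realizes the agent function on the whole sequence.
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "no_op")

# Usage: agent = TableDrivenAgentProgram({("dirty",): "suck"}); agent("dirty") returns "suck".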

15
Q

Simple Reflex Agent

A

An agent that selects an action on the basis of the current percept, ignoring the rest of the percept history.

Agent functions for simple reflex agents are usually written as if-then statements.
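
A quick sketch, assuming a vacuum-world style agent whose percept is a (location, status) pair (the rules are illustrative):

def simple_reflex_agent(percept):
    # Condition-action (if-then) rules over the current percept only;
    # the percept history is never consulted.
    location, status = percept
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"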

16
Q

Model-based Reflex Agent

A

The simple reflex agent makes decisions based only on the current percept. The model-based reflex agent, by contrast, makes decisions based on the entire percept sequence. To store and summarize this percept sequence, the model-based reflex agent maintains an ‘internal state’ which reflects the parts of the environment it cannot currently observe. This agent still primarily uses if-then rules.

The agent maintains its ‘internal state’ using two models (see the sketch below):

1) a transition model, which captures the effects of the agent’s actions on the world; it maps the current state and an action to a future state.
2) a sensor model, which captures how percepts reflect what is occurring in the world; it maps percepts to the current state of the world. Ex: it maps a droplet-shaped object that the camera detects to rain.
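
A minimal sketch of the update-then-act loop, assuming the transition model, sensor model, and rules are supplied as functions (the names are my own):

class ModelBasedReflexAgent:
    def __init__(self, transition_model, sensor_model, rules):
        self.transition_model = transition_model  # (state, last_action) -> predicted state
        self.sensor_model = sensor_model          # (predicted state, percept) -> updated state
        self.rules = rules                        # list of (condition, action) pairs
        self.state = {}
        self.last_action = None

    def __call__(self, percept):
        # Predict how the last action changed the world, then correct the prediction
        # using what the sensors actually report.
        self.state = self.transition_model(self.state, self.last_action)
        self.state = self.sensor_model(self.state, percept)
        # Still if-then in spirit: match the internal state against condition-action rules.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "no_op"
        return "no_op"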

17
Q

Goal-Based Agent

A

The simple reflex agent and the model-based reflex agent do not have explicit goals; they just make decisions based on the current percept or the percept sequence. The goal-based agent, in addition to using the percept sequence, distinguishes between goal and non-goal states (an if-then test on states) and chooses actions predicted to lead to a goal state. This gives the agent a larger purpose, a target it is trying to reach. It is a layer of abstraction on top of the model-based reflex agent.
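
A rough single-step sketch (hypothetical helper functions; a real goal-based agent would typically search over longer action sequences):

def goal_based_agent(state, actions, transition_model, is_goal):
    # Choose any action whose predicted resulting state satisfies the goal test.
    for action in actions:
        if is_goal(transition_model(state, action)):
            return action
    return None  # no single action reaches the goal; searching/planning would be needed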

18
Q

Task Environment

A

In designing an agent, the first step is always to specify the task environment as fully as possible. The task environment includes the following factors: PEAS.

P = performance measure 
E = environment (D-SOAKED)
A = actuators                       
S = sensors
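
For example, a PEAS sketch for the self-driving car used throughout these cards (the specific values are illustrative, not from the lecture):

peas_self_driving_car = {
    "performance_measure": ["safety", "legality", "trip_time", "passenger_comfort"],
    "environment": ["roads", "other_vehicles", "pedestrians", "traffic_signals", "weather"],
    "actuators": ["steering", "accelerator", "brake", "turn_signal", "horn"],
    "sensors": ["cameras", "lidar", "radar", "gps", "speedometer"],
}
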
19
Q

Utility Agent

A

The goal-based agent provides only a crude, binary distinction between reaching and not reaching a goal. A utility-based agent, by contrast, quantifies how close the current state is to the goal, so we know precisely how well we are doing.

The utility function is an internalized performance measure and can combine multiple conflicting goals. For example, the goals of a self-driving car are to not break the law and to get to a destination quickly. Because these goals conflict, the utility function acts as a layer of abstraction and specifies the appropriate tradeoffs between the two.
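
A toy sketch of such a tradeoff (the weights and state fields are made up; a designer would choose them):

def utility(state, weight_speed=0.4, weight_legality=0.6):
    # Combine two conflicting goals into a single number: higher is better.
    # 'state' is assumed to expose an estimated trip time and a legality score in [0, 1].
    speed_score = 1.0 / (1.0 + state["estimated_trip_minutes"])
    return weight_speed * speed_score + weight_legality * state["legality_score"]

def pick_best_action(state, actions, transition_model):
    # A utility-based agent picks the action whose predicted result has the highest utility.
    return max(actions, key=lambda a: utility(transition_model(state, a)))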

20
Q

Types of Agents

A
These models are in order of increasing sophistication:
simple reflex agent
model-based reflex agent
goal-based agent
utility-based agent
21
Q

Search Problem Definition

A

A search problem is a problem in which, given a set of possible actions and the states they lead to, we must select the sequence of actions that gets us to the desired goal state as efficiently as possible. A solution is defined as the list of actions one must take to get from the current state to the goal state.

This is the simplest kind of AI problem: episodic, single-agent, fully observable, deterministic, static, discrete, and known.

22
Q

Search Problem Components

A

State Space: the set of possible states this environment can be in
Initial State: the state the agent starts in
Goal State: the desired state
Actions: the set of actions available to the agent in a given state s
Transition Model: a function that, given a current state and an action, returns the resulting state
Action Cost Function: the cost of going from one state to another via a certain action; this function maps (s, a, s’) to a real-number cost
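
Collected as code (a minimal sketch; the field names are my own and assume a deterministic transition model):

from dataclasses import dataclass
from typing import Callable, Hashable, Iterable

@dataclass
class SearchProblem:
    # The state space is implicit: every state reachable from initial_state.
    initial_state: Hashable
    is_goal: Callable[[Hashable], bool]                      # goal test
    actions: Callable[[Hashable], Iterable[str]]             # actions available in state s
    transition_model: Callable[[Hashable, str], Hashable]    # result(s, a) -> s'
    action_cost: Callable[[Hashable, str, Hashable], float]  # cost of (s, a, s')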

23
Q

Best-First-Search()

A

The most basic algorithm for solving a search problem. It is the template for the other search algorithms: each one is obtained by plugging a different evaluation function f(n) into the same loop.
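
A minimal sketch of that template, assuming the SearchProblem fields from the previous card and an evaluation function f that scores (state, path_cost, path) nodes (my own paraphrase, not the course's exact pseudocode):

import heapq
import itertools

def best_first_search(problem, f):
    # Always expand the node with the lowest f-value first; different choices of f
    # turn this same loop into uniform-cost search, greedy search, A*, and so on.
    tie_breaker = itertools.count()
    start = (problem.initial_state, 0.0, [])               # (state, path_cost, actions so far)
    frontier = [(f(start), next(tie_breaker), start)]
    best_cost = {problem.initial_state: 0.0}               # cheapest known cost to each state
    while frontier:
        _, _, (state, cost, path) = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path                                    # the solution: a list of actions
        for action in problem.actions(state):
            child = problem.transition_model(state, action)
            child_cost = cost + problem.action_cost(state, action, child)
            if child not in best_cost or child_cost < best_cost[child]:
                best_cost[child] = child_cost
                node = (child, child_cost, path + [action])
                heapq.heappush(frontier, (f(node), next(tie_breaker), node))
    return None                                            # failure: no solution found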