Chapters 1-6 Flashcards

1
Q

This means doing the right thing

A

Rationality

2
Q

This type of AI is about acting like a human. Fields: Machine Learning, Computer Vision, NLP, Knowledge Representation, Robotics

A

Acting Humanly

3
Q

This type of AI is about how humans think. Psychology, Neuroscience, Introspection

A

Thinking Humanly

4
Q

This type of AI talks about logic

A

Thinking Rationally

5
Q

This type of AI is about agent-based AI

A

Acting Rationally

6
Q

Give the four characteristics of AI

A

Human
Rational
Thinking
Acting

7
Q

This pertains to an agent (entity) that does the right thing. It is like a function mapping percept sequences to actions

A

Rational Agent

8
Q

Give some risks of AI

A

Unemployment
Unmanned Warfare
Biased Decision Making
Cybersecurity
Security

9
Q

Give some benefits of AI

A

Time Efficiency
Easier surveillance

10
Q

These are entities with limited individual rationality that become rational when working with each other

A

Swarm

11
Q

An entity that perceives its environment through sensors and acts on it through actuators

A

Agent

12
Q

This is also known as the problem: everything outside the agent, including other agents, physical objects, and digital inputs

A

Environment

13
Q

Devices that receive information about the environment

A

Sensors

14
Q

Devices that allow the agent to act on the environment, such as by moving, speaking, or controlling digital interfaces

A

Actuators

15
Q

To define an AI program, you must define the four properties of its task environment (PEAS):

A

Performance Measure
Environment
Actuators
Sensors

16
Q

Property of Task Environments: completeness of the agent's information about the environment

A

Fully Observable vs Partially Observable

17
Q

Property of Task Environments: one or more agents

A

Single-agent vs Multi-agent

18
Q

In a multi-agent setting, there are two kinds of agent relationships

A

Cooperative
Competitive

19
Q

Property of Task Environments: the next state of the environment is completely determined by the current state and action executed by the agents

A

Deterministic vs Nondeterministic

20
Q

A nondeterministic environment in which outcome probabilities are quantified

A

Stochastic

21
Q

Property of Task Environments: current decision can affect future decisions

A

Sequential

22
Q

Property of Task Environments: the current decision doesn’t affect future decisions

A

Episodic

23
Q

Property of Task Environments: the environment doesn’t change

A

Static

24
Q

Property of Task Environments: the environment changes

A

Dynamic

25
Q

Property of Task Environments: there is exact position/state for entities in the environment

A

Discrete

26
Q

Property of Task Environments: no exact position/state for entities

A

Continuous

27
Q

Property of Task Environments: if the agent knows the environment, it already knows all the possible choices/outcomes

A

Known vs unknown

28
Q

This is a combination of architecture and program

A

Agent

29
Q

Type of agent program that consists of if-else (condition-action) rules

A

Simple Reflex Agent
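
The if-else idea can be sketched for the classic two-square vacuum world; the percept format, square names, and rules here are illustrative assumptions, not from the flashcards:

```python
# A minimal simple reflex agent sketch: the current percept is mapped
# directly to an action via condition-action (if-else) rules.

def simple_reflex_vacuum_agent(percept):
    """Percept is (location, status) for a two-square world A/B."""
    location, status = percept
    if status == "Dirty":
        return "Suck"          # rule 1: clean the current square
    elif location == "A":
        return "Right"         # rule 2: move on from a clean A
    else:
        return "Left"          # rule 3: move on from a clean B

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note the agent keeps no internal state: the same percept always produces the same action.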

30
Q

Type of agent program that maintains an internal model of the world (decision tree, rules, formulas)

A

Model Based Agent

31
Q

Type of agent program that focuses on attaining a goal

A

Goal Based Agent

32
Q

Type of agent program that chooses actions based on the highest utility

A

Utility Based Agent

33
Q

A machine-learning agent that improves its behavior by training a model

A

Learning agent

34
Q

This agent solves a problem by searching. Examples are a vacuum cleaner or a self-driving car that finds the shortest path to a destination

A

Goal Based Agent

35
Q

How we think and act

A

Intelligence

36
Q

Understanding, but also building, intelligent entities: machines that can compute how to act effectively and safely in a wide variety of novel situations

A

Artificial Intelligence

37
Q

This test asks a question: Can a machine think?

A

Turing Test by Alan Turing

38
Q

Communicate successfully in human language

A

Natural Language processing

39
Q

Stores what it knows or hears

A

Knowledge representation

40
Q

Answers questions and draws new conclusions

A

Automated reasoning

41
Q

Adapt to new circumstances and detect patterns

A

Machine Learning

42
Q

To pass the Turing Test, a robot must have these two:

A

Speech recognition, Computer Vision

43
Q

Trying to catch our own thoughts as they go by

A

Introspection

44
Q

Observing a person in action

A

Psychological Experiments

45
Q

Observing a brain in action

A

Brain imaging

46
Q

Combines models from AI and Psychology to construct precise and testable theories of the human mind

A

Cognitive Science

47
Q

Agent comes from the Latin word ____, which means to do

A

Agere

48
Q

There are ___ foundations of AI

A

8 or Eight

49
Q

Foundation of AI - study of reasoning

A

Philosophy

50
Q

Foundation of AI - study of thinking

A

Psychology

51
Q

Foundation of AI - logic

A

Mathematics

52
Q

Foundation of AI - robotics, hardware

A

Computer Engineering

53
Q

Foundation of AI - study of brain

A

Neuroscience

54
Q

Foundation of AI - uncertainty

A

Economics

55
Q

Foundation of AI - NLP

A

Linguistics

56
Q

Foundation of AI - how you control things

A

Control Theory and Cybernetics

57
Q

Set of possible states in an environment

A

State Space

58
Q

The state the agent starts in

A

Initial State

59
Q

True or False: An agent has only one goal state

A

False

60
Q

True or False: Before an agent can start searching, a well-defined problem must be formulated

A

True

61
Q

State the five parts of a problem

A

Initial State
Actions
Transition Model
Goal States
Action Cost Function

62
Q

Search algorithms are judged based on _____, _______, _______, and _______.

A

Completeness
Cost-optimality
Time complexity
Space Complexity

63
Q

This search method only has access to the problem definition. Algorithms in this category build a search tree to find the solution

A

Uninformed Search

64
Q

This search has access to a heuristic function and to additional information such as pattern databases with solution costs

A

Informed Search

65
Q

______ selects nodes for expansion using an evaluation function. This is under _____ search

A

Best-First Search, Uninformed

66
Q

______ expands the shallowest nodes first; it is complete, optimal
for unit action costs, but has exponential space complexity. This is under _____ search

A

Breadth-first Search, Uninformed
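
The expand-shallowest-first idea can be sketched as a short Python function; the adjacency-list graph below is a made-up example, not from the chapters:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: a FIFO frontier expands the shallowest
    nodes first, so the first goal found is a shortest path (unit costs)."""
    frontier = deque([[start]])   # each frontier entry is a full path
    reached = {start}             # avoid revisiting states
    while frontier:
        path = frontier.popleft() # FIFO: shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(path + [neighbor])
    return None                   # goal unreachable

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A", "G"]}
print(bfs(graph, "S", "G"))  # ['S', 'A', 'G']
```

The exponential space complexity mentioned in the card comes from `frontier` and `reached` holding every generated node.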

67
Q

______ expands the node with lowest path cost, g(n), and is optimal
for general action costs. This is under _____ search

A

Uniform-cost search, Uninformed

68
Q

______ expands the deepest unexpanded node first. This is under _____ search

A

Depth-first search, Uninformed

69
Q

_____ is like depth-first search but with increasing depth limits until a goal is found. This is under ____ search

A

Iterative deepening search, Uninformed

70
Q

______ expands two frontiers, one around the initial state and one
around the goal, stopping when the two frontiers meet

A

Bidirectional search, Uninformed

71
Q

_______ expands nodes with minimal h(n). It is not optimal but
is often efficient. This is under _____ search

A

Greedy best-first search, Informed

72
Q

______ expands nodes with minimal f(n) = g(n) + h(n). This is under ____ search

A

A* search, Informed
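
The f(n) = g(n) + h(n) rule can be sketched with a priority queue; the graph, edge costs, and heuristic table below are illustrative assumptions chosen so h is admissible:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: pop the node with minimal f = g + h from the frontier."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}                         # cheapest known path cost
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(
                    frontier,
                    (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}  # admissible: never overestimates
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'B', 'G'] 6
```

Setting h to 0 everywhere reduces this to uniform-cost search, which expands by g(n) alone.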

73
Q

______ is sometimes more efficient than A* itself. This is under ______ search

A

Bidirectional A* search, Informed

74
Q

_____ is an iterative deepening version of A*, and thus addresses the space complexity issue. This is under ____ search

A

Iterative Deepening A* search, Informed

75
Q

True or False: In a partially observable environment, local search is a good approach to use.

A

True

76
Q

Hill Climbing and Simulated Annealing are examples of _____

A

Local Search
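
As a sketch of local search, here is a minimal hill-climbing loop in Python; the objective function and neighborhood below are made-up assumptions:

```python
def hill_climb(objective, start, neighbors):
    """Hill climbing: repeatedly move to the best neighbor until no
    neighbor improves on the current state (a local maximum)."""
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current       # no uphill move left
        current = best

objective = lambda x: -(x - 3) ** 2   # single peak at x = 3
neighbors = lambda x: [x - 1, x + 1]  # step left or right
print(hill_climb(objective, 0, neighbors))  # 3
```

On objectives with several peaks this loop can get stuck on a local maximum, which is exactly what simulated annealing's random downhill moves are meant to escape.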

77
Q

An ____ is a stochastic hill-climbing search in which a population of states is maintained.

A

Evolutionary Algorithm

78
Q

In nondeterministic environments, agents can apply _____ search

A

AND-OR

79
Q

When the environment is partially observable, the _______ represents the set of possible states that the agent might be in

A

Belief State

80
Q

_____ arise when the agent has no idea about the states and actions of its environment

A

Exploration problems

81
Q

_____ represent a state with a set of variable/value pairs and represent the conditions for a solution by a set of constraints on the variables

A

Constraint Satisfaction Problems (CSPs)

82
Q

______ is a form of depth-first search, and is commonly used for solving CSPs

A

Backtracking Search
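
The depth-first backtracking idea can be sketched on a tiny made-up map-coloring CSP (regions as variables, colors as values, neighboring regions must differ):

```python
def backtrack(assignment, variables, domains, neighbors):
    """Backtracking search: assign one variable at a time, undoing the
    assignment (backtracking) when a constraint is violated downstream."""
    if len(assignment) == len(variables):
        return assignment                      # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint check: no already-assigned neighbor has this value.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]                # undo, try next value
    return None                                # dead end: trigger backtrack

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(backtrack({}, variables, domains, neighbors))
```

Real CSP solvers add variable/value ordering heuristics and constraint propagation on top of this skeleton.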

83
Q

Game Theory is best applicable to multi-agent settings that are _____

A

Competitive

84
Q

This is how the board in a game is set up

A

Initial State

85
Q

This says when the game is over

A

Terminal Test

86
Q

In two-player, discrete, deterministic, turn-taking zero-sum games with perfect information, the ______ algorithm can select optimal moves by a depth-first enumeration of the game tree

A

Minimax
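
The depth-first enumeration can be sketched in Python over a hand-built game tree; the nested-list tree below is an illustrative assumption (inner lists are MIN nodes, numbers are terminal utilities):

```python
def minimax(node, maximizing):
    """Minimax: depth-first enumeration of the game tree, backing up
    the max of children at MAX nodes and the min at MIN nodes."""
    if isinstance(node, (int, float)):
        return node                      # terminal state: its utility
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX moves at the root; each inner list is a MIN node's children.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # 3
```

MIN drives each branch to 3, 2, and 2 respectively, so MAX picks the first branch.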

87
Q

The ______ search algorithm computes the same optimal move as minimax, but achieves much greater efficiency by eliminating subtrees that are provably irrelevant

A

Alpha-beta
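
The pruning idea can be sketched by extending plain minimax with alpha and beta bounds; the nested-list game tree below is a made-up example:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta: minimax that skips (prunes) subtrees which provably
    cannot change the final decision."""
    if isinstance(node, (int, float)):
        return node                      # terminal utility
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # beta cutoff: MIN avoids this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                    # alpha cutoff: MAX avoids this branch
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

It returns the same value as minimax on this tree while skipping the remaining children of any branch already proven irrelevant.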

88
Q

This tree search does not use a heuristic function; instead, it simulates games from the current state to the end and chooses the move with the best simulated results

A

Monte Carlo Tree Search

89
Q

Stochastic games are just nondeterministic games with ____, such as dice, coin flipping, and many more

A

Probability

90
Q

In a competitive environment, an agent must choose the state with the best ____

A

Utility

91
Q

If no agent can benefit by unilaterally changing its strategy, the agents have reached a ______

A

Nash Equilibrium

92
Q

This is a table listing all possible combinations of decision alternatives and states of nature with their utilities

A

Payoff Table