Chapter 1-6 Flashcards

1
Q

This means doing the right thing

A

Rationality

2
Q

This type of AI is about acting like a human: Machine Learning, Computer Vision, NLP, Knowledge Representation, Robotics

A

Acting Human

3
Q

This type of AI is about how humans think: Psychology, Neuroscience, Introspection

A

Thinking Human

4
Q

This type of AI is about logic

A

Thinking Rational

5
Q

This type of AI is about agent-based AI

A

Acting Rational

6
Q

Give the four characteristics of AI

A

Human
Rational
Thinking
Acting

7
Q

This pertains to an agent (entity) that does the right thing. It is like a function

A

Rational Agent

8
Q

Give some risks of AI

A

Unemployment
Unmanned Warfare
Biased Decision Making
Cybersecurity
Security

9
Q

Give some benefits of AI

A

Time Efficiency
Easier surveillance

10
Q

These are entities with limited individual rationality that become rational when working with each other

A

Swarm

11
Q

An entity that perceives its environment through sensors and acts on it through actuators

A

Agent

12
Q

This is also known as the problem: everything outside the agent, including other agents, physical objects, and digital inputs

A

Environment

13
Q

Devices that receive information from the environment

A

Sensors

14
Q

Devices that allow the agent to act on the environment, such as by moving, speaking, or controlling digital interfaces

A

Actuators

15
Q

To define an AI program, you must define its four properties, which are:

A

Performance Measure
Environment
Actuators
Sensors

16
Q

Property of Task Environments: completeness of the information of the environment

A

Fully Observable vs Partially Observable

17
Q

Property of Task Environments: one or more agents

A

Single-agent vs Multi-agent

18
Q

In a multi-agent setting, agents can relate to each other in two ways

A

Cooperative
Competitive

19
Q

Property of Task Environments: the next state of the environment is completely determined by the current state and action executed by the agents

A

Deterministic vs Nondeterministic

20
Q

Nondeterministic, but with probabilities attached to the outcomes

A

Stochastic

21
Q

Property of Task Environments: current decision can affect future decisions

A

Sequential

22
Q

Property of Task Environments: the current decision doesn’t affect future decisions

A

Episodic

23
Q

Property of Task Environments: the environment doesn’t change

A

Static

24
Q

Property of Task Environments: the environment changes

A

Dynamic

25
Property of Task Environments: there is an exact position/state for entities in the environment
Discrete
26
Property of Task Environments: no exact position/state for entities
Continuous
27
Property of Task Environments: if the agent knows the environment, then it knows all the possible choices/outcomes already
Known vs unknown
28
This is a combination of architecture and program
Agent
29
Type of agent which acts on if-else (condition-action) rules
Simple Reflex Agent
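
The if-else rules of a simple reflex agent can be sketched in a few lines; the two-square vacuum world below, its percept format, and the action names are illustrative assumptions rather than part of the cards:

```python
# A minimal sketch of a simple reflex agent for a two-square vacuum
# world (locations "A"/"B" and the percept format are assumed): the
# agent maps its current percept directly to an action with if-else
# condition-action rules, keeping no internal state.

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":               # rule 1: clean a dirty square
        return "Suck"
    elif location == "A":               # rule 2: move to the other square
        return "Right"
    else:                               # rule 3
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # → Suck
```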
30
Type of agent which creates a model (decision tree, rules, formulas)
Model Based Agent
31
Type of agent which focuses on attaining a goal
Goal Based Agent
32
Type of agent which chooses actions based on highest utility
Utility Based Agent
33
A machine-learning type of agent that uses a model to train
Learning agent
34
This agent solves a problem by searching. Examples include a vacuum cleaner or a self-driving car that finds the shortest path to a destination
Goal Based Agent
35
How we think and act
Intelligence
36
Understanding, but also building, intelligent entities: machines that can compute how to act effectively and safely in a wide variety of novel situations
Artificial Intelligence
37
This test asks a question: Can a machine think?
Turing Test by Alan Turing
38
Communicate successfully in human language
Natural Language processing
39
Stores what it knows or hears
Knowledge representation
40
Answers questions and draws new conclusions
Automated reasoning
41
Adapt to new circumstances and detect patterns
Machine Learning
42
To pass the Turing Test, a robot must have these two:
Speech recognition, Computer Vision
43
Trying to catch our own thoughts as they go by
Introspection
44
Observing a person in action
Psychological Experiments
45
Observing a brain in action
Brain imaging
46
Combines models from AI and Psychology to construct precise and testable theories of the human mind
Cognitive Science
47
Agent comes from the Latin word ____, which means to do
Agere
48
There are ___ foundations of AI
8 or Eight
49
Foundation of AI - study of reasoning
Philosophy
50
Foundation of AI - study of thinking
Psychology
51
Foundation of AI - logic
Mathematics
52
Foundation of AI - robotics, hardware
Computer Engineering
53
Foundation of AI - study of brain
Neuroscience
54
Foundation of AI - uncertainty
Economics
55
Foundation of AI - NLP
Linguistics
56
Foundation of AI - how you control things
Control Theory and Cybernetics
57
Set of possible states in an environment
State Space
58
This is the state where the agent starts
Initial State
59
True or False: An agent has only one goal state
False
60
True or False: Before an agent can start searching, a well-defined problem must be formulated
True
61
State the five parts of a problem
Initial State, Actions, Transition Model, Goal States, Action Cost Function
62
Search algorithms are judged based on _____, _______, _______, and _______.
Completeness, Cost-optimality, Time complexity, Space complexity
63
This search method only has access to the problem definition. Algorithms in this category build a search tree to find the solution
Uninformed Search
64
This search has access to a heuristic function and to additional information such as pattern databases with solution costs
Informed Search
65
______ selects nodes for expansion using an evaluation function. This is under _____ search
Best-First Search, Uninformed
66
______ expands the shallowest nodes first; it is complete, optimal for unit action costs, but has exponential space complexity. This is under _____ search
Breadth-first Search, Uninformed
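
Breadth-first search can be sketched with a FIFO queue of paths; the graph below is a made-up example, not from the cards:

```python
from collections import deque

# A minimal breadth-first search sketch over an explicit graph.
# BFS expands the shallowest nodes first, so with unit action costs
# the first goal reached lies on a shortest path.

def bfs(graph, start, goal):
    frontier = deque([[start]])         # FIFO queue of paths
    reached = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in reached:    # avoid revisiting states
                reached.add(child)
                frontier.append(path + [child])
    return None                         # no path exists

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(bfs(graph, "S", "G"))             # → ['S', 'A', 'G']
```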
67
______ expands the node with lowest path cost, g(n), and is optimal for general action costs. This is under _____ search
Uniform-cost search, Uninformed
68
______ expands the deepest unexpanded node first. This is under _____ search
Depth-first search, Uninformed
69
_____ is depth-first search run with increasing depth limits until a goal is found. This is under ____ search
Iterative deepening search, Uninformed
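
Iterative deepening can be sketched as repeated depth-limited depth-first search; the graph is a made-up example:

```python
# A minimal iterative-deepening sketch: run depth-limited DFS with
# limit 0, 1, 2, ... until the goal is found, combining DFS's small
# memory use with BFS-like completeness.

def depth_limited(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:                       # cutoff reached, back up
        return None
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # deepen one level at a time
        result = depth_limited(graph, start, goal, limit)
        if result is not None:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": []}
print(iterative_deepening(graph, "S", "G"))   # → ['S', 'B', 'G']
```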
70
______ expands two frontiers, one around the initial state and one around the goal, stopping when the two frontiers meet
Bidirectional search, Uninformed
71
_______ expands nodes with minimal h(n). It is not optimal but is often efficient. This is under _____ search
Greedy best-first search, Informed
72
______ expands nodes with minimal f(n) = g(n) + h(n). This is under ____ search
A* search, Informed
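
A* search can be sketched with a priority queue ordered by f(n) = g(n) + h(n); the weighted graph and heuristic values below are made-up examples:

```python
import heapq

# A minimal A* sketch: expand the frontier node with the smallest
# f(n) = g(n) + h(n), where g is the path cost so far and h is a
# heuristic estimate of the remaining cost to the goal.

def astar(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):  # better path found
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h[child], g2, child, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 2, "A": 4, "B": 1, "G": 0}              # admissible estimates
print(astar(graph, h, "S", "G"))                  # → (5, ['S', 'B', 'G'])
```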
73
______ is sometimes more efficient than A* itself. This is under ______ search
Bidirectional A* search, Informed
74
_____ is an iterative deepening version of A*, and thus addresses the space complexity issue. This is under ____ search
Iterative Deepening A* search, Informed
75
True or False: In a partially observable environment, local search is the best approach.
True
76
Hill Climbing and Simulated Annealing are examples of _____
Local Search
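
Hill climbing, the simplest local search, can be sketched in a few lines; the 1-D objective function below is a made-up example:

```python
# A minimal hill-climbing sketch: keep only the current state and move
# to the best neighbor until no neighbor improves the objective value.
# This uses little memory but can get stuck on local maxima.

def hill_climb(value, start, neighbors, max_steps=1000):
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current               # local maximum reached
        current = best
    return current

value = lambda x: -(x - 7) ** 2          # single peak at x = 7
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(value, 0, neighbors))   # → 7
```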
77
A/an ____ is a stochastic hill-climbing search in which a population of states is maintained.
Evolutionary Algorithm
78
In nondeterministic environments, agents can apply _____ search
AND-OR
79
When the environment is partially observable, the _______ represents the set of possible states that the agent might be in
Belief State
80
_____ arise when the agent has no idea about the states and actions of its environment
Exploration problems
81
_____ represent a state with a set of variable/value pairs and represent the conditions for a solution by a set of constraints on the variables
Constraint Satisfaction Problems (CSPs)
82
______ is a form of depth-first search, and is commonly used for solving CSPs
Backtracking Search
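
Backtracking search for a CSP can be sketched on a tiny map-coloring problem; the three-region map and color domains below are made-up examples:

```python
# A minimal backtracking-search sketch for a CSP: assign a value to one
# variable at a time, check the constraints (neighboring regions must
# differ), and backtrack as soon as no consistent value remains.

def backtrack(variables, domains, neighbors, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment                # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack(variables, domains, neighbors,
                               {**assignment, var: value})
            if result is not None:
                return result
    return None                          # dead end: triggers backtracking

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(backtrack(variables, domains, neighbors))
```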
83
Game Theory is best applicable in multi-agent settings that are _____
Competitive
84
This is how the board in a game is set up
Initial State
85
This says when the game is over
Terminal Test
86
In two-player, discrete, deterministic, turn-taking zero-sum games with perfect information, the ______ algorithm can select optimal moves by a depth-first enumeration of the game tree
Minimax
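
The minimax algorithm's depth-first enumeration can be sketched over a hand-built game tree; the tree and its leaf utilities below are illustrative assumptions:

```python
# A minimal minimax sketch: MAX picks the child with the highest
# minimax value, MIN the lowest, by depth-first enumeration of the
# game tree down to terminal states (numeric leaves here).

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # terminal test: leaf utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is a MAX node; each sublist is a MIN node over three leaves.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))               # → 3
```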
87
The ______ search algorithm computes the same optimal move as minimax, but achieves much greater efficiency by eliminating subtrees that are provably irrelevant
Alpha-beta
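
Alpha-beta pruning can be sketched on the same kind of hand-built tree: it returns the same optimal value as minimax but skips subtrees that are provably irrelevant once the value falls outside the (alpha, beta) window:

```python
# A minimal alpha-beta sketch over a nested-list game tree (leaf
# utilities are illustrative): identical result to plain minimax,
# but remaining children are pruned once alpha >= beta.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):   # leaf utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # prune remaining children
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                # prune remaining children
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))                   # → 3, same answer as minimax
```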
88
This tree search does not apply a heuristic function; instead, it simulates moves from the current state to the end of the game and chooses the best move based on the results
Monte Carlo Tree Search
89
Stochastic games are just nondeterministic games with ____, such as dice, coin flipping, and many more
Probability
90
In a competitive environment, an agent must choose the state with the best ____
Utility
91
If the outcomes of all the states are equal, the agent must use ______
Nash Equilibrium
92
A table listing all possible combinations of decision alternatives and states of nature (utility)
Payoff table