Ch 1, 2, 3 Flashcards

1
Q

What is AI?

A

Artificial Intelligence is the science of making machines do things that would require intelligence if done by humans.

2
Q

Turing Test

A

Imitation Game

If a machine can fool a human interrogator into believing it is human, it counts as intelligent.

3
Q

Objections to Turing

A

Machines don’t have a soul
Machines can’t do anything genuinely surprising
Machines cannot experience qualia, e.g. enjoy a taste or a color

4
Q

Weak AI

A

any system demonstrating intelligent behavior is AI, regardless of how it achieves its end.

5
Q

Who supports Weak AI?

A

MIT

6
Q

Who supports Strong AI?

A

Carnegie-Mellon

7
Q

Strong AI?

A

A system should be based on the same methods of learning & cognition used by humans.

8
Q

Why do people support weak AI?

A

It doesn’t matter how we get the answer; AI is used to solve difficult problems.

9
Q

Why do people support strong AI?

A

If it is ever built, it would be a jack-of-all-trades: a single solution for all problems. It may also bring consciousness.

10
Q

What are AI today? Strong or weak?

A

Weak; there is no general-purpose strong AI yet.

11
Q

What is an Agent?

A

anything that can be viewed as perceiving its environment through sensors and acting on that environment through actuators

12
Q

Percept

A

an agent’s perceptual input at a given point in time

13
Q

Percept Sequence

A

complete history of all percepts

14
Q

Agent Behavior

A

Agent Function - maps a percept or percept sequence to an action

15
Q

Heuristics are ______ solutions

A

Greedy solutions that get an answer quickly rather than exactly the best one

16
Q

How are heuristics used by AI?

A
  • solving a simpler problem to get closer to the answer
  • working backward (two-way search)
  • identifying similar solved problems and testing those solutions
  • if we cannot find the solution, can we get “closer” to it?
17
Q

Examples of AI

A
  • Medical diagnosis
  • using shopping history, what will the user most likely buy next?
  • Chess player
18
Q

Episodic vs Sequential. What does this mean to AI?

A

Does the agent need to remember the history of previous states? Does this decision affect future ones?
Example: winning a hand of poker has zero impact on the cards you will get next hand, so poker is episodic.

19
Q

Static vs Dynamic

A

Does the environment change while the AI is making a decision?

20
Q

Example of Static environment

A

Sudoku

21
Q

Example of Semidynamic environment

A

Chess

22
Q

Example of dynamic environment

A

Taxi driving

23
Q

Types of environments for AI

A
Fully vs Partially Observable
Single agent vs Multiagent
Deterministic vs Stochastic (random)
Episodic vs Sequential
Static vs Dynamic
Discrete vs Continuous
Known vs Unknown
24
Q

Toughest type of AI environment

A

is partially observable, multiagent, stochastic, sequential, dynamic, continuous, and unknown
Example - Driving a rental car in a new country with unfamiliar geography and traffic laws can be… interesting

25
Types of search algorithms used by AI
Unguided and guided (blind vs heuristic)
26
BFS
All paths of depth 1 are tried, then all paths of depth 2, and so on.
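The level-by-level expansion described above can be sketched in Python. This is an illustrative sketch, not from the notes; the graph and function names are made up.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand all nodes at depth 1, then depth 2, etc.
    Returns the shallowest path from start to goal, or None."""
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO -> shallowest paths come out first
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical example graph as adjacency lists
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(bfs(graph, "A", "F"))  # → ['A', 'B', 'D', 'F']
```

The queue is what makes this breadth-first; swapping it for a stack would turn it into DFS.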
27
Disadvantages of BFS
Storage can be high: if each state has n successors and we search to depth d, the storage required is n^d.
28
Advantage of BFS
Finds an optimal solution, since it always finds the shallowest answer first.
29
DFS
Go down the deepest path first. Choose a path and follow it as far as possible; if the goal is not found, backtrack to the last point where another choice could have been made, and try that.
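The follow-then-backtrack behavior can be sketched recursively; this is an illustrative sketch with a made-up graph, chosen so that DFS returns a deeper path even though a shallower one exists.

```python
def dfs(graph, start, goal, path=None, visited=None):
    """Depth-first search: follow one path as deep as possible,
    backtracking when it dead-ends. Not guaranteed optimal."""
    if path is None:
        path, visited = [start], {start}
    node = path[-1]
    if node == goal:
        return path
    for nxt in graph.get(node, []):
        if nxt not in visited:
            visited.add(nxt)
            result = dfs(graph, start, goal, path + [nxt], visited)
            if result is not None:
                return result            # goal found down this branch
    return None                          # dead end: backtrack

# Hypothetical graph: the direct edge A->F exists, but DFS dives into B first
graph = {"A": ["B", "F"], "B": ["C"], "C": ["F"]}
print(dfs(graph, "A", "F"))  # → ['A', 'B', 'C', 'F'], missing the shallow ['A', 'F']
```

This demonstrates the disadvantage on the next card: the answer was one level below the root, but DFS committed to the deep branch first.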
30
Advantage of DFS
Less storage required for searching than BFS
31
Disadvantage of DFS
Could find a less-than-optimal solution (the answer could exist higher in the tree, but we have not checked there yet).
32
Iterative Deepening DFS
DFS to depth 1 down each path from root; then DFS to depth 2 down each path from root; then DFS to depth 3 down each path, etc.
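The repeated depth-limited passes can be sketched as follows; an illustrative sketch, with hypothetical function names and graph.

```python
def depth_limited(graph, node, goal, limit, path):
    """One DFS pass that refuses to go deeper than `limit` edges."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        if nxt not in path:              # avoid cycles along the current path
            result = depth_limited(graph, nxt, goal, limit - 1, path + [nxt])
            if result is not None:
                return result
    return None

def iddfs(graph, start, goal, max_depth=50):
    """Iterative deepening: DFS to depth 1, then 2, then 3, ...
    Keeps DFS's low memory use but finds the shallowest solution like BFS."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "F"], "B": ["C"], "C": ["F"]}
print(iddfs(graph, "A", "F"))  # → ['A', 'F'], the shallow solution plain DFS missed
```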
33
Advantage of ID-DFS
Does not skip a shallow solution. Uses less storage than BFS, since the stack is discarded after each depth-limited pass.
34
Correct Algorithm
an algorithm is correct if it can find a valid solution
35
Complete Algorithm
an algorithm is complete if it can find every solution; either every solution that exists or every solution from a given start state
36
Optimal Algorithm
an algorithm is optimal if it finds the best solution (lowest-cost, nearest, fastest route, etc.)
37
Optimally Efficient Algorithm
An algorithm is optimally efficient if it finds the solution as fast as any other algorithm (in big-O terms)
38
Nonredundant Algorithm
an algorithm is nonredundant if any state rejected as a possible solution is not proposed again (it does not re-check a state it has already examined)
39
Informed Algorithm
an algorithm is informed if it is able to limit its proposals in some way rather than blindly generating every possible state. Heuristics are the usual way to do this.
40
Exhaustive enumeration
consists of generating all possibilities, which can lead to wasted effort. Example: if the first 2 queens we place are attacking each other, there’s no point in placing the other N-2.
41
What we learn from exhaustive enumeration
While generating a state, we should verify that the partially-constructed solution satisfies all constraints
42
Greedy algorithm
A greedy algorithm selects the path that looks ‘best’ based on local information, i.e. we always select the lowest-cost valid choice we have.
43
Dijkstra’s Algorithm
Expands nodes in order of their actual path cost g(n) from the start, using a priority queue; guaranteed to find the optimal path when all edge costs are nonnegative. Equivalent to A* with h(n) = 0.
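A minimal Python sketch of Dijkstra's algorithm, for illustration; the weighted graph and names are made up.

```python
import heapq

def dijkstra(graph, start, goal):
    """Dijkstra's algorithm: always expand the frontier node with the
    lowest actual cost g(n) from the start. Optimal for nonnegative edges."""
    frontier = [(0, start, [start])]     # min-heap of (cost so far, node, path)
    best = {}                            # cheapest known cost to each settled node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                     # stale entry: node reached more cheaply before
        best[node] = cost
        for nxt, step in graph.get(node, {}).items():
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: the direct edge A->C is not the cheapest route
graph = {"A": {"B": 1, "C": 10}, "B": {"C": 2}}
print(dijkstra(graph, "A", "C"))  # → (3, ['A', 'B', 'C'])
```

Note how it ignores the tempting direct edge of cost 10 because the detour through B costs only 3.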
44
3 Blind methods (brute force) for finding a solution
DFS, BFS, Iterative Deepening DFS
45
Other blind method for finding a solution that is not DFS or BFS
bidirectional search
46
DFS storage needs
O(bd), where b = branching factor and d = search depth. If we only generate the states we are currently searching, the storage is only O(d)
47
BFS storage
O(b^d) b = branching factor d= depth
48
DFS storage type
uses stack to store states
49
BFS storage type
uses a queue
50
DFS-ID is complete if
the branching factor is finite; it is optimal when path costs are a nondecreasing function of node depth
51
DFS-ID space complexity
O(bd)
52
DFS-ID time complexity
exponential in worst case
53
Bidirectional search
Not complete nor optimal in general, but it can still find a solution
54
Bidirectional search average depth
d/2 for each search separately. 2*b^(d/2) is less than b^d
55
Uninformed Search
Won’t do by itself! DFS could go down the wrong path forever, BFS could run out of memory, and DFS-ID could behave like plain DFS if the depth limit is set too far. We need to use heuristics
56
Types of guided searches
Hill Climbing, Beam Search, Best First
57
Properties of Heuristics
They should underestimate the cost to the goal, and they should be monotonic; that is, as we progress toward the goal, the estimate should decrease
58
Do Hill Climbing, Best-First, and Beam Search back up?

A

Hill climbing does not look back; best-first and beam search can back up
59
Problems with hill climbing
Foothills (local maxima), plateaux, and ridges; it never goes backwards
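The foothill problem can be shown in a few lines of Python; this sketch and its toy 1-D landscape are illustrative, not from the notes.

```python
def hill_climb(start, neighbors, score, max_steps=1000):
    """Hill climbing: repeatedly move to the best-scoring neighbor; stop when
    no neighbor improves. Never backtracks, so it can stall on a foothill."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current               # local maximum: no better neighbor
        current = best
    return current

# Hypothetical landscape: a foothill at x=2 (height 3), the true peak at x=8 (height 9)
landscape = [0, 1, 3, 1, 0, 2, 5, 8, 9, 4]
def neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]

result = hill_climb(0, neighbors, lambda x: landscape[x])
print(result)  # → 2: stuck on the foothill, never reaches x=8
```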
60
Best-first search
Maintains an open list of frontier nodes that may still be explored, and a closed list of nodes that have been examined or excluded. The open list is ordered by the heuristic, and one path is pursued at a time. Unlike hill climbing, best-first can recover from dead ends.
61
What does best-first search rely on?
Good heuristics
62
Beam Search focuses a search by
only going down part of the search tree at each level: only the best w nodes are kept
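Keeping only the best w nodes per level looks like this in Python; an illustrative sketch with a made-up 1-D state space (integers, goal 7, heuristic = distance to 7).

```python
def beam_search(start, goal, neighbors, h, width):
    """Beam search: expand every kept path one level, then prune the
    frontier down to the `width` paths with the best heuristic value."""
    frontier = [[start]]
    while frontier:
        expanded = [path + [nxt] for path in frontier
                    for nxt in neighbors(path[-1]) if nxt not in path]
        for path in expanded:
            if path[-1] == goal:
                return path
        # prune: keep only the `width` most promising paths
        frontier = sorted(expanded, key=lambda p: h(p[-1]))[:width]
    return None

result = beam_search(0, 7,
                     lambda n: [n - 1, n + 1],   # hypothetical successor function
                     lambda n: abs(7 - n),       # heuristic: distance to goal
                     width=1)
print(result)  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

With width=1 this behaves like greedy hill climbing; a larger w keeps more candidates alive at each level, matching the next card.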
63
In beam searches a bigger W means
the solution should be closer to optimal than with a smaller w
64
A search algorithm is admissible if
it always results in an optimal solution if such a solution exists. Admissibility concerns only reaching the goal, not intermediate nodes
65
A search algorithm is monotonic or consistent if
it is admissible and also guarantees optimal paths to intermediate nodes
66
h1(n) < h2(n) for some node
h2 is more informed than h1
67
h1(n) < h2(n) for all nodes
h2 dominates h1
68
Relaxing a problem helps
to create heuristics
69
Removing constraints for a problem helps
generate heuristics
70
Pattern Database
store exact solution costs for subproblems
71
Disjoint Pattern Database
when two subproblems can be solved without affecting each other. Their solution costs can be computed separately and added together to estimate the cost to the goal.
72
Branch and Bound is
complete since every node is checked. It is also optimal.
73
A* Search is
complete, optimal and optimally efficient.
74
Cost of A*
f(n) = g(n) + h(n): the actual cost from the start node plus the estimated (heuristic) cost to the goal
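The f = g + h ordering can be sketched as a small A* over a toy grid; illustrative only, with a hypothetical 3x3 grid and Manhattan-distance heuristic.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: order the frontier by f(n) = g(n) + h(n),
    the cost so far plus the heuristic estimate to the goal."""
    frontier = [(h(start), 0, start, [start])]   # min-heap of (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical 3x3 grid, unit step costs, heuristic = Manhattan distance to goal
goal = (2, 2)
def neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx <= 2 and 0 <= y + dy <= 2]
def h(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

cost, path = astar((0, 0), goal, neighbors, h)
print(cost)  # → 4
```

With h(n) = 0 everywhere this degenerates into Dijkstra's algorithm; the heuristic is what focuses the search toward the goal.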
75
if heuristics are available __________ is usually the method of choice.
A*
76
Types of constraints
unary, binary, n-ary, global constraints, soft preference
77
unary
apply to 1 variable
78
binary constraint
apply to 2 variables
79
Example of Global Constraint
Alldiff (all variables in the constraint must take different values)
80
Example of soft preference
I’d prefer to fly in the afternoon, but will fly in the morning if it’ll save me $100.
81
CSPs
Constraint Satisfaction Problems
82
Constraint propagation allows __________ in CSPs. What does Constraint propagation mean?
Inference: if one variable can take only a small set of values, that can shrink the domains of other variables.
83
Enforcing local consistency throughout the constraint graph causes (if we apply local constraint propagation what does it do to our set of possible solutions?)
inconsistent values to be eliminated, shrinking the set of possible solutions
84
A variable is node consistent if
every value in the variable’s domain satisfies the variable’s unary constraints
85
A variable is arc-consistent if
for every value in the variable’s domain, some value of each neighboring variable satisfies the binary constraint between them
86
N-ary constraints can
be made into binary constraints
87
AC3 is
the most popular algorithm for enforcing arc consistency
88
AC3 works how?
Maintain a queue (or set) of arcs to consider; initially it holds all of them. Pop an arc (Xi, Xj) and make Xi arc-consistent with Xj by deleting from Di any value of Xi that has no supporting value in Dj. If Di is unchanged, go on to the next arc. If Di shrinks, add to the queue all arcs (Xk, Xi) where Xk is a neighbor of Xi. If Di is reduced to 0, the CSP has no consistent solution: return failure immediately.
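The queue-and-revise loop above can be sketched in Python. This is an illustrative sketch, not a reference implementation; the constraint encoding (one predicate per directed arc) and the tiny X < Y example are assumptions.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: make every arc (Xi, Xj) consistent. `constraints[(i, j)]` is a
    predicate saying whether values vi, vj are compatible. Mutates `domains`;
    returns False if some domain is wiped out (no consistent solution)."""
    queue = deque(constraints.keys())            # start with every arc
    while queue:
        xi, xj = queue.popleft()
        ok = constraints[(xi, xj)]
        # values of Xi with no supporting value in Dj must be deleted
        unsupported = {vi for vi in domains[xi]
                       if not any(ok(vi, vj) for vj in domains[xj])}
        if unsupported:
            domains[xi] -= unsupported
            if not domains[xi]:
                return False                     # Di reduced to 0: failure
            # Di shrank, so re-examine arcs pointing at Xi
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# Hypothetical CSP: X < Y with small domains
domains = {"X": {1, 2, 3}, "Y": {1, 2}}
constraints = {("X", "Y"): lambda x, y: x < y,
               ("Y", "X"): lambda y, x: x < y}
ac3(domains, constraints)
print(domains)  # → {'X': {1}, 'Y': {2}}
```

Both domains end up with size one, which (per the later card) means a solution has been found.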
89
AC3 O()?
If all variables have domains of size at most d and there are c binary constraints, AC3 is O(cd^3): each of the c arcs can be reinserted at most d times, and each revision takes O(d^2)
90
Path Consistency
there is an assignment for all variables that makes a path from Xi to Xj passing through Xm consistent with all binary constraints on all 3 variables.
91
Finding a domain size of 1 means
We have found the value for this variable. If all variables have a domain of size one, we have found a solution
92
Domain of size 0 means
there is no solution
93
PC2 stands for
Path Consistency (3 nodes) Xi -> Xm -> Xj
94
A CSP is strongly K-consistent if
CSP: (CONSTRAINT SATISFACTION PROBLEM) it is k-consistent and also (k-1)-consistent, (k-2)-consistent, etc., all the way down to node-consistent
95
K-consistent Big O
O(n^2 * d) Establishing strong N-consistency takes exponential time and space in worst case
96
Bounds Propagation example
Flight 1 max is 165 and Flight 2 max is 385, so d1 = [0,165] and d2 = [0,385]. But if we add the constraint that f1 + f2 must equal 420, the domains shrink to d1 = [35,165] and d2 = [255,385]
97
A CSP is bounds consistent if
for every variable X and both the lower and upper bounds of X, there exists some value of Y that satisfies the constraint between X and Y for every variable Y.
98
A problem is commutative if
the order of application of any given set of actions has no effect on the outcome
99
Backtracking search number of leaves
d^n
100
A backtracking search is
a depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign
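That assign-one-variable-at-a-time search can be sketched in Python on a toy map-coloring CSP; the three-region map and all names here are hypothetical, for illustration only.

```python
def backtrack(assignment, variables, domains, consistent):
    """Backtracking search: a DFS that assigns values to one variable at a
    time and backtracks when a variable has no legal value left."""
    if len(assignment) == len(variables):
        return assignment                        # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value},
                               variables, domains, consistent)
            if result is not None:
                return result
    return None                                  # no legal value: backtrack

# Hypothetical map-coloring CSP: adjacent regions must differ in color
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
def consistent(var, value, assignment):
    return all(assignment.get(n) != value for n in neighbors[var])

colors = ["red", "green", "blue"]
solution = backtrack({}, list(neighbors), {v: colors for v in neighbors}, consistent)
print(solution)  # → {'A': 'red', 'B': 'green', 'C': 'blue'}
```

The variable-ordering and value-ordering heuristics in the following cards plug into this loop at the `next(...)` and `for value in ...` lines respectively.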
101
CSPs can be sped up without problem-specific heuristics, by addressing these questions:
Which variable should be assigned next? In what order should values be tried? What inferences should be performed at each step in the search? When the search arrives at an assignment that violates a constraint, can the search avoid repeating this failure?
102
The least-constraining-value heuristic prefers
to use the value that rules out the fewest values in surrounding variables in the constraint graph
103
Which variable should be chosen first when using value ordering?
the most constraining variable
104
Which value should be chosen first when value ordering?
the least constraining value
105
CSP Taboo list
remembers recently visited states and forbids revisiting them, preventing aimless wandering
106
Constraint Weighting
Start with all constraints having a weight of 1. At each step, choose the variable/value that minimizes the total weight of violated constraints. Then increment the weight of all constraints still violated. This adds topography, ensuring progress is possible, and adds weight to constraints that are proving difficult to satisfy