Year 2 Term 1 Exams Flashcards

1
Q

What are the 3 problem types of blind search?

A

Single-state problem, Sensorless problem, Contingency problem

2
Q

What are the characteristics of single-state problems?

A

Deterministic & fully observable: Agent will base decisions solely on its state and doesn’t consider the sequence

3
Q

What are the characteristics of sensorless problems?

A

Non-observable: Innovative approaches required to compensate for lack of sensory input

4
Q

What are the characteristics of contingency problems?

A

Non-deterministic &/or partially observable: Agent may adjust action according to new information about current state

5
Q

What dimensions are search strategies evaluated on?

A

Completeness, Time Complexity, Space Complexity, Optimality

6
Q

In BFS, are nodes added to the start or end of the queue?

A

The end

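The FIFO queue behaviour described in this card can be sketched in Python; the toy `graph` is a hypothetical example, not from the course:

```python
from collections import deque

def bfs(start, goal, neighbours):
    """Breadth-first search: paths come off the FRONT, new nodes join the END."""
    frontier = deque([[start]])              # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()            # take from the front...
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbours(node):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])  # ...add new paths to the end
    return None

# Hypothetical toy graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

`bfs('A', 'D', lambda n: graph[n])` returns `['A', 'B', 'D']` - the shallowest path, because nodes are expanded in the order they were queued.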
7
Q

BFS completeness, time complexity, space complexity, and optimality

A

Complete: Yes - if the branching factor b is finite
Time Complexity: O(b^(d+1))
Space: O(b^(d+1))
Optimality: Yes - if step costs don’t vary

8
Q

Summarise the dimensions of BFS

A

Space complexity is terrible as every node is kept in memory

9
Q

In DFS, are nodes added to the start or end of the queue?

A

The start

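For contrast with BFS, a minimal DFS sketch (same hypothetical toy graph; a Python list serves as the LIFO stack, so the newest path is always explored first):

```python
def dfs(start, goal, neighbours):
    """Depth-first search: the most recently added path is explored first."""
    frontier = [[start]]                     # list used as a stack
    visited = set()
    while frontier:
        path = frontier.pop()                # newest path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbours(node):
            if nxt not in visited:
                frontier.append(path + [nxt])  # pushed on top = explored next
    return None

# Hypothetical toy graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

On this graph `dfs('A', 'D', lambda n: graph[n])` returns `['A', 'C', 'D']`: it dives down the last branch generated rather than the shallowest one.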
10
Q

DFS completeness, time complexity, space complexity, and optimality

A

Complete: Only if the state space is finite - it fails in infinite-depth spaces
Time Complexity: O(b^m) - m is max depth
Space: O(bm)
Optimality: No

11
Q

Summarise the dimensions of DFS

A

Good space complexity but not optimal

12
Q

DLS completeness, time complexity, space complexity, and optimality

A

Complete: Only if the solution is shallower than the depth limit n
Time Complexity: O(b^n)
Space: O(bn)
Optimality: No

13
Q

Summarise the dimensions of DLS

A

Not complete or optimal

14
Q

IDS completeness, time complexity, space complexity, and optimality

A

Complete: Yes
Time Complexity: O(b^d) - unexpectedly okay
Space: O(bd)
Optimality: Yes

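These dimensions follow from running depth-limited search with limits 0, 1, 2, ...; a sketch (the toy graph is a hypothetical example):

```python
def depth_limited(node, goal, neighbours, limit, path=None):
    """DFS that refuses to go deeper than `limit`."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for nxt in neighbours(node):
        if nxt not in path:                  # avoid trivial loops along the path
            found = depth_limited(nxt, goal, neighbours, limit - 1, path + [nxt])
            if found:
                return found
    return None

def ids(start, goal, neighbours, max_depth=20):
    """Iterative deepening: DFS-like O(bd) space, BFS-like completeness."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, neighbours, limit)
        if result:
            return result
    return None

# Hypothetical toy graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

Re-expanding shallow nodes each round is what makes the time complexity "unexpectedly okay": the deepest level dominates the total work.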
15
Q

Graph search’s use

A

Incorporated into BFS as it stores all searched nodes
Detects repeated states, preventing linear problems turning into exponential ones

16
Q

What is bidirectional search?

A

Two BFS or IDS searches start from the initial state and goal state
Reduces time complexity from b^d to 2b^(d/2)

17
Q

What do evaluation functions do in search?

A

Finds the most desirable node to expand

18
Q

Greedy Best-First Search completeness, time complexity, space complexity, and optimality

A

Complete: No - it judges nodes by the heuristic alone, so it can commit to a poor path or get stuck in loops
Time Complexity: O(b^m)
Space: O(b^m)
Optimality: No

19
Q

Summarise the dimensions of Greedy Best-First Search

A

Bad space complexity, and not complete or optimal as the greedy nature may see it take suboptimal paths

20
Q

What makes a good heuristic?

A

Admissibility (validity), Consistency, Informative, Domain-Specific Knowledge, Simplicity & Efficiency

21
Q

What’s an admissible heuristic?

A

One that never overestimates the cost to reach the goal state

22
Q

What’s a consistent heuristic?

A

One where the estimated cost from a state to the goal is no greater than the step cost to a neighbour plus the neighbour's estimated cost to the goal: h(n) ≤ c(n, n') + h(n')

23
Q

What’s an informative heuristic?

A

One that differentiates between promising and less promising paths

24
Q

What’s a heuristic that has domain-specific knowledge?

A

One that exploits knowledge of the problem domain - e.g. in task scheduling, considering processing times and the dependencies between tasks

25
Q

What's a simple and efficient heuristic?

A

One that is easy to compute

26
Q

How is A* Search different to Greedy Best-First Search?

A

It evaluates f(n) = g(n) + h(n): the cost already paid to reach a node plus the estimated cost to the goal, rather than the heuristic alone

27
Q

A* Search completeness, time complexity, space complexity, and optimality

A

Complete: Yes - for finite graphs with finite branching factors
Time Complexity: O(b^m)
Space: O(b^m) - worse than greedy best-first search
Optimality: Yes - if the heuristic is admissible and consistent

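A minimal A* sketch; the weighted `graph` and heuristic table `h` are invented for illustration:

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A*: always expand the frontier node minimising f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Hypothetical weighted graph and an admissible heuristic
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 2, 'B': 2, 'C': 1, 'D': 0}.get
```

Here `a_star('A', 'D', lambda n: graph[n], h)` returns the cheapest path `['A', 'B', 'C', 'D']` with cost 3, while greedy best-first (h alone) could be lured onto the expensive direct edges.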
28
Q

Describe the optimality of A* Search in 3 steps

A

1) The evaluation function f(n) is non-decreasing, so the estimated cost will stay the same or increase
2) Therefore, the node sequence is in non-decreasing order of f(n)
3) So, the first goal to be selected for expansion must be an optimal solution

29
Q

What is a state space?

A

The set of all configurations/states the problem can exist in

30
Q

What's the premise of local-search algorithms?

A

They keep a current state, explore neighbouring states, and evaluate whether they're closer to the goal state

31
Q

What are the 4 steps of hill-climbing search?

A

1) Initialisation: start with an initial state
2) Evaluation: evaluate the state using an objective function
3) Neighbour Generation: generate neighbouring solutions by making small changes to the current state
4) Selection: select the neighbour solution that provides the biggest improvement in the objective function

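The four steps above can be sketched as a loop; the objective `f` is a hypothetical one-dimensional example:

```python
def hill_climb(state, objective, neighbours_of, max_steps=1000):
    """1) initialise, 2) evaluate, 3) generate neighbours, 4) select the best improvement."""
    for _ in range(max_steps):
        best = max(neighbours_of(state), key=objective)   # best neighbour
        if objective(best) <= objective(state):
            return state            # no neighbour improves: a (possibly local) maximum
        state = best
    return state

# Hypothetical objective with a single peak at x = 3
f = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
```

Starting from 0, `hill_climb(0, f, neighbours)` climbs 1, 2, 3 and stops at 3, where both neighbours score worse.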
32
Q

What's the problem with hill-climbing search?

A

It can get stuck in local maxima, making the search not optimal

33
Q

What techniques can be used to reach the global maximum in hill-climbing search?

A

Simulated annealing, local beam search

34
Q

How does simulated annealing search work?

A

To escape local maxima, allow some 'bad' moves but gradually decrease their likelihood
The slower the temperature T decreases, the greater the chance of finding the global maximum

35
Q

How does local beam search work?

A

Keep track of k states rather than one and select the k best successors until one is a goal state
This can however become narrowly focused in a small region of the search space

36
Q

Describe the process of genetic algorithms

A

Each state is represented as a string over a finite alphabet
A fitness function determines the quality of each state
A successor state is produced by combining two parent states

37
Q

What 3 steps are used to create the next generation of states in genetic algorithms?

A

1) Selection: states are chosen based on fitness
2) Crossover: genetic information is combined to create offspring
3) Mutation: offspring's genetic information is subtly, randomly changed to promote diversity

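The three steps can be sketched as one generation step; tournament selection, a single crossover point, and a bit-flip mutation are illustrative choices, not prescribed by the card:

```python
import random

def next_generation(population, fitness, mutation_rate=0.1):
    """One GA step: fitness-based selection, crossover, then random mutation."""
    def select():
        # Selection: the fitter of two random strings becomes a parent
        return max(random.sample(population, 2), key=fitness)
    new_pop = []
    for _ in range(len(population)):
        a, b = select(), select()
        cut = random.randrange(1, len(a))          # Crossover point
        child = a[:cut] + b[cut:]
        if random.random() < mutation_rate:        # Mutation: subtle random change
            i = random.randrange(len(child))
            child = child[:i] + random.choice('01') + child[i + 1:]
        new_pop.append(child)
    return new_pop

# Hypothetical goal: evolve towards the all-ones string; fitness = number of 1s
random.seed(1)
pop = ['0000', '1100', '0011', '1010']
for _ in range(20):
    pop = next_generation(pop, fitness=lambda s: s.count('1'))
```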
38
Q

How are constraint satisfaction problems better than greedy best-first search or A* search?

A

GBFS and A* don't necessarily make the best move, they just improve the quality of the state
CSPs look inside the state, giving us stronger heuristics to determine where to go

39
Q

What 3 CSP constraints are there?

A

Unary constraint: involves a single variable
Binary constraint: involves pairs of variables
Higher-order constraint: involves 3 or more variables

40
Q

What are constraint graphs?

A

A graph where nodes are variables and edges are the constraints shared by the variables

41
Q

What are the 3 CSP states and what do they entail?

A

Initial state: the empty assignment
Successor function: a value is assigned to an unassigned variable such that there are no conflicts with any constraints
Goal test: the assignment is complete

42
Q

What type of search do we use for CSPs? Give 3 reasons why

A

DFS:
We don't need the path
All solutions are at depth n, so there is no notion of an optimal solution
You can't get stuck in loops as there is no way to revisit states

43
Q

Briefly describe backtracking search and what it does to space complexity

A

If you go down a branch and it fails, you can go back up
Space complexity marginally improves, from O(bd) to O(d)

44
Q

What 3 heuristics are used to improve backtracking efficiency?

A

1st) Most constrained variable: the variable with fewest legal values
2nd) Most constraining variable: the variable with most constraints on remaining values
3rd) Least constraining variable: given a variable, the value that rules out the fewest values in neighbouring variables

45
Q

What is forward checking?

A

In a CSP, the remaining legal values of unassigned variables are kept track of

46
Q

What technique is used in forward checking?

A

Arc consistency: provides early detection of failures by following how variables with 1 legal value influence bordering variables

47
Q

How is local search used in CSPs?

A

If there are any conflicted variables, one is randomly selected
Its value is selected using a min-conflicts heuristic

48
Q

Describe the process of Minimax

A

Search to a given ply
Working up the tree, use MAX and MIN evaluations to decide which branch is best and worst respectively for us
If at the bottom of the tree, return the evaluation function

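The process can be sketched as a recursion on a hypothetical 2-ply tree with ±1 leaf evaluations:

```python
def minimax(node, maximising, evaluate, children):
    """Search to the leaves; MAX takes the largest value, MIN the smallest."""
    kids = children(node)
    if not kids:                          # bottom of the tree: evaluation function
        return evaluate(node)
    values = [minimax(k, not maximising, evaluate, children) for k in kids]
    return max(values) if maximising else min(values)

# Hypothetical 2-ply game tree: leaves hold +1 (MAX wins) or -1 (MIN wins)
tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
leaf_values = {'a1': -1, 'a2': 1, 'b1': 1, 'b2': 1}
value = minimax('root', True, leaf_values.get, lambda n: tree.get(n, []))
```

MIN turns branch `a` into -1 (it will pick `a1`), branch `b` into 1, so MAX at the root chooses `b` and the root value is 1.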
49
Q

What are the evaluation functions for Minimax?

A

+1 for a winning move for MAX
-1 for a winning move for MIN

50
Q

Minimax completeness, time complexity, space complexity, and optimality

A

Complete: Yes - if tree is finite
Time Complexity: O(b^m)
Space: O(bm) - DFS exploration
Optimality: Yes - against an optimal opponent

51
Q

Give examples of what features may be used when evaluating a state in chess

A

Piece type, pawn structure, king safety, mobility

52
Q

What is the horizon effect?

A

Refers to bias due to the agent's limited view - the evaluation function will not be perfect
e.g. sometimes the best move is to sacrifice the queen. What if the opponent is looking n+1 moves ahead (over the horizon)?

53
Q

What is quiescent search?

A

Says that we should be following branches that look stably good/consistent as we go down

54
Q

What is single extension?

A

Sometimes a move is clearly better, in which case the others are not worth exploring

55
Q

What are strong methods and weak methods?

A

Strong method: specialised method that takes the structure of the game into account
Weak method: general method that can be applied to any game

56
Q

What is alpha-beta pruning?

A

A powerful weak method
It performs DFS but allows us to disregard certain branches

57
Q

What are alpha and beta in alpha-beta pruning?

A

Alpha: the lower bound on a node's value (the worst we can do), initialised to negative infinity
Beta: the upper bound on a node's value (the best we can do), initialised to infinity

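A sketch showing how the alpha/beta bounds trigger pruning; the tree and leaf values are hypothetical:

```python
def alphabeta(node, maximising, evaluate, children,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with pruning: abandon a branch once alpha >= beta."""
    kids = children(node)
    if not kids:
        return evaluate(node)
    if maximising:
        value = float('-inf')
        for k in kids:
            value = max(value, alphabeta(k, False, evaluate, children, alpha, beta))
            alpha = max(alpha, value)     # raise the lower bound
            if alpha >= beta:
                break                     # MIN above will never allow this branch
        return value
    value = float('inf')
    for k in kids:
        value = min(value, alphabeta(k, True, evaluate, children, alpha, beta))
        beta = min(beta, value)           # lower the upper bound
        if alpha >= beta:
            break
    return value

# Hypothetical tree: after seeing b1 = 2 with alpha = 3, leaf b2 is never evaluated
tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
leaves = {'a1': 3, 'a2': 5, 'b1': 2, 'b2': 9}
value = alphabeta('root', True, leaves.get, lambda n: tree.get(n, []))
```

Branch `a` settles at 3, so alpha = 3 when branch `b` is entered; `b1 = 2` drives beta to 2, alpha >= beta, and `b2` is pruned. The root value is 3, exactly what plain minimax would return.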
58
Q

Why is directionality important in alpha-beta pruning?

A

Different subtrees are removed depending on the direction of traversal
If the leaves are ordered best to worst, you can prune the most
If the leaves are ordered worst to best, you can't prune anything

59
Q

How does alpha-beta pruning compare to minimax?

A

If there's no systematic ordering of the leaves, alpha-beta pruning can search twice as deep as minimax for the same effort

60
Q

What are the 2 learning types and what are their subcategories?

A

Supervised:
Classification - examples are labelled with variables describing the data. There are classes, and new examples which we want to classify
Regression - the output is a real number
Unsupervised:
No labels - performs pattern recognition on the data

61
Q

What are the 2 classification approaches?

A

Top-Down Classification: broad categories are divided into smaller, specific subcategories
Bottom-Up Classification: individual elements are grouped into larger categories based on common features

62
Q

What is cross validation and what 3 steps does it take?

A

A classification technique:
1) Partition the dataset into k subsets
2) Train the model on k-1 of the subsets
3) Evaluate the performance on the remaining subset

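The k-fold split can be sketched as index bookkeeping (model training is elided; `k_fold_indices` is an invented helper name):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k folds; each fold is held out exactly once."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train_idx, test_idx           # train on k-1 folds, test on 1

# With n = 6 and k = 3, every index appears in a test fold exactly once
splits = list(k_fold_indices(6, 3))
```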
63
Q

What are the 3 steps in the classification process of decision trees?

A

1) Takes a series of inputs defining a situation
2) Checks the properties of the situation by outputting binary classifications
3) An outcome is predicted

64
Q

Describe the hypothesis space of decision trees

A

A truth table: given n Boolean attributes...
- the table will have 2^n rows
- there will be 2^2^n distinct truth tables
- each truth table must be searched, which is inefficient

65
Q

Why is ordering important in decision trees?

A

Inefficient ordering of questions results in low performance
We choose the next attribute whose value reduces uncertainty about the classification the most / provides the highest information gain

66
Q

What is the problem with the naive algorithm in decision trees?

A

It's overfitted to the training data and may not perform well on test data

67
Q

At what point is asking questions in decision trees considered redundant?

A

If the best question is below a certain threshold of information gain
If the difference in information gain between the best and worst question is below a certain threshold

68
Q

What is the nearest neighbours method?

A

A method where instances are classified based on similar instances
Requires no training

69
Q

What is the degree of similarity?

A

The distance between the data points

70
Q

Describe the 3 steps in forming a Voronoi diagram

A

1) Start with the seed points in the plane
2) The plane is partitioned into regions, each associated with a seed point
3) Every point in a region is closer to its associated seed point than to any other seed point

71
Q

What is the problem with Voronoi diagrams?

A

It's an approximation-based method, so is less reliable nearer the borders

72
Q

What is the k-nearest-neighbours method?

A

The k closest neighbours of a point are chosen
The point's outcome is determined by majority voting or by weighting importance based on closeness

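A sketch of majority voting among the k nearest points, with invented 2-D example data:

```python
from collections import Counter

def knn_classify(point, examples, k):
    """Classify by majority vote among the k nearest labelled examples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(examples, key=lambda e: dist(point, e[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D training data: two clusters
data = [((0, 0), 'blue'), ((1, 0), 'blue'), ((0, 1), 'blue'),
        ((5, 5), 'red'), ((6, 5), 'red'), ((5, 6), 'red')]
label = knn_classify((1, 1), data, k=3)
```

The query (1, 1) sits inside the blue cluster, so all 3 nearest neighbours vote 'blue'. Weighting votes by closeness would replace the `Counter` with a distance-weighted sum.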
73
Q

Explain the difficulty in setting the value of k in k-nearest-neighbours

A

If k is too small, you over-fit to noise in the data
If k is too big, you lose structural detail

74
Q

What 4 difficulties are there with using Euclidean distance as a distance metric?

A

Data points may have multiple dimensions
One axis may dominate another
Data may be categorical
There may be spatial limitations

75
Q

What is a perceptron?

A

An artificial neuron: inputs are features with weightings
The perceptron 'learns' by learning the weights (importance) of the features

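Weight-learning can be sketched with the classic perceptron update rule; learning the logical AND is a hypothetical illustration:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Learn one weight per feature plus a bias by correcting each mistake."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:            # target is 0 or 1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1 if activation > 0 else 0
            err = target - out               # learning = adjusting the weights
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical task: learn the logical AND of two inputs
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

After training, `predict` reproduces AND on all four inputs: the learned weights encode that both features must fire to overcome the negative bias.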
76
Q

What is an artificial neural network (ANN)?

A

A combination of perceptrons

77
Q

What are the 2 training methods used in ANNs?

A

Two-layer feedforward:
- Inefficient training method
- Input fed in and output calculated
Back propagation:
- Efficient training method
- Error between predicted & actual output calculated. Weights are adjusted to reduce error

78
Q

What are hidden layers in two-layer feedforward networks?

A

Layers in between the input and output layers that help approximate functions to higher accuracy

79
Q

What happens if there are too few or too many hidden nodes?

A

Not enough hidden nodes -> network is underfitted (not powerful enough to learn)
Too many hidden nodes -> network is overfitted (just memorises training patterns)

80
Q

What are the planning environment attributes?

A

Fully observable, deterministic, static (nothing changes unless you change it), discrete

81
Q

What 3 features should a planning language have?

A

A representation of the states
A representation of the goals
A representation of the actions

82
Q

Describe states in planning languages

A

A conjunction of positive literals. If something is not said to be true, it can be assumed false

83
Q

How is a goal satisfied in planning languages?

A

If the state contains all the positive literals in the goal state and no others

84
Q

Describe actions in planning languages

A

To apply an action, preconditions must be satisfied
If they are and the action is applied, it will have an effect

85
Q

What is an action schema?

A

An abstract representation of actions (variables aren't yet instantiated and bound to literals)

86
Q

What are add-lists and delete-lists?

A

Things that are positive get added to the world
Things that are negative get deleted from the world

87
Q

What is 'θ' in planning languages?

A

A substitution where variables are bound to values in the action schema
e.g. with θ = {p/P1, from/JFK, to/SFO}

88
Q

How do you write a solution to a planning language problem?

A

[action1(), action2(), ... ]

89
Q

What are progression and regression planners?

A

Progression planners: start from the initial state and use actions to reach the goal state
Regression planners: start from the goal state and determine predecessors to reach the initial state

90
Q

Why might you use regression planning over progression planning?

A

The branching factor is generally lower going backwards than forwards, so it reduces the amount of search

91
Q

What is the least commitment strategy?

A

When choices that don't need to be made yet are delayed

92
Q

What 3 reasons are there for using partial-order planning over fully-ordered planning?

A

You get more flexibility
A POP can represent many FOPs
You can delay commitment, reducing the need to backtrack

93
Q

What is the Turing Test?

A

If C can't tell which of A and B is the computer, then the computer "must be intelligent"

94
Q

What is the symbol-grounding problem?

A

If intelligence is thinking thoughts, what makes a thought meaningful?
Does the thought of a symbol mean that symbol because it looks like it? Or because it was caused by it? Or because a mechanism creates thoughts about it?

95
Q

What is the physical symbol system hypothesis?

A

The idea that if a machine can manipulate symbols appropriately, it can be intelligent

96
Q

What is Searle's Chinese room?

A

An idea that challenges the physical symbol system hypothesis
Manipulating symbols alone isn't considered intelligence, because what is actually understanding the symbols? It would pass the Turing Test though

97
Q

What are the 2 types of reasoning?

A

Logic/rule-based reasoning
Stochastic reasoning

98
Q

What is logic/rule-based reasoning?

A

Where you start with basic rules (axioms) that you're sure about
You derive logic from this
Whatever you infer, you're sure about

99
Q

What is the issue with logic/rule-based reasoning?

A

It's not probabilistic, which limits it

100
Q

What is stochastic reasoning?

A

Reasoning that involves possibilistic reasoning or Bayesian probability

101
Q

What is good about Bayesian reasoning?

A

It captures the uncertainty of knowledge
It can integrate prior knowledge into reasoning processes

102
Q

What are the elements of Bayesian belief update (Bayes Rule variables)?

A

(Prior, likelihood, denominator) -> posterior

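The (prior, likelihood, denominator) -> posterior update can be sketched numerically; the disease-test figures are invented for illustration:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior = prior x likelihood / P(evidence)."""
    # Denominator: total probability of the evidence over both hypotheses
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical: 1% base rate, test detects 90% of cases, 5% false positives
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
```

Despite a positive result, the posterior is only about 0.15: the prior is so low that most positives come from the false-positive term in the denominator.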
103
Q

What are Bayes Nets (3-part)?

A

Bayesian Networks:
Directed acyclic graphs (no closed loops)
Represented by a truth table, with probabilities instead of T/F
Can model complex probabilistic dependencies based on more than one variable

104
Q

What are the 3 issues with inference?

A

Can't capture evidence - can only reason off prior knowledge
Doesn't scale well with an increasing number of variables
Doesn't show relationships

105
Q

How are Bayes Nets built?

A

Using domain experts: structure is manually built, probabilities are built or learned from data
Building from data: structure and probabilities are learned from the data

106
Q

What is the premise of robotics?

A

That intelligence is defined as the ability to adjust to changes in the environment

107
Q

What are deliberative control architectures?

A

A sequence of sensing, updating the world model, planning, and executing

108
Q

What are the 3 problems with deliberative control architectures?

A

Solutions are fragile in the presence of uncertainty
Requires frequent replanning
Reacts slowly to changes in the environment

109
Q

What are behaviour-based control architectures?

A

Where simple behaviours achieving subtasks are combined to achieve the overall task

110
Q

What are the 2 problems with behaviour-based control architectures?

A

Emergent behaviour makes exact behaviour difficult to predict
Requires difficult coordination design

111
Q

What does strong AI cover?

A

Singularity - AI becoming superior to humans
How singularity raises moral/legal issues

112
Q

What does weak AI cover?

A

Human-Agent Collectives - where a collaborative weak AI domain is built around us, where human users are integral parts of the system