Year 2 Term 1 Exams Flashcards

1
Q

What are the 3 problem types of blind search?

A

Single-state problem, Sensorless problem, Contingency problem

2
Q

What are the characteristics of single-state problems?

A

Deterministic & fully observable: Agent will base decisions solely on its state and doesn’t consider the sequence

3
Q

What are the characteristics of sensorless problems?

A

Non-observable: Innovative approaches required to compensate for lack of sensory input

4
Q

What are the characteristics of contingency problems?

A

Non-deterministic &/or partially observable: Agent may adjust action according to new information about current state

5
Q

What dimensions are search strategies evaluated on?

A

Completeness, Time Complexity, Space Complexity, Optimality

6
Q

In BFS, are nodes added to the start or end of the queue?

A

The end
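As a minimal sketch (a hypothetical helper, not from the course notes), the FIFO discipline looks like this: new nodes join the end of the queue, so the shallowest nodes are expanded first.

```python
from collections import deque

def bfs(start, goal, neighbours):
    """Breadth-first search: expand the shallowest unexpanded node first."""
    frontier = deque([[start]])       # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()     # take from the front (FIFO)
        node = path[-1]
        if node == goal:
            return path
        for n in neighbours(node):
            if n not in visited:
                visited.add(n)
                frontier.append(path + [n])  # new nodes go on the END
    return None
```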

7
Q

BFS completeness, time complexity, space complexity, and optimality

A

Complete: Yes - if b and d are finite
Time Complexity: O(b^(d+1))
Space: O(b^(d+1))
Optimality: Yes - if step costs don’t vary

8
Q

Summarise the dimensions of BFS

A

Space complexity is terrible as every node is kept in memory

9
Q

In DFS, are nodes added to the start or end of the queue?

A

The start
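A sketch of the LIFO discipline (hypothetical helper): using a Python list as a stack, pushing and popping at the same end is equivalent to inserting new nodes at the start of the queue, so the deepest nodes are expanded first.

```python
def dfs(start, goal, neighbours):
    """Depth-first search: the frontier behaves as a stack (LIFO)."""
    frontier = [[start]]
    visited = set()
    while frontier:
        path = frontier.pop()         # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for n in neighbours(node):
            frontier.append(path + [n])  # on top = front of the queue
    return None
```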

10
Q

DFS completeness, time complexity, space complexity, and optimality

A

Complete: No in general - it fails in infinite-depth spaces; only complete if the search space is finite
Time Complexity: O(b^m) - m is max depth
Space: O(bm)
Optimality: No

11
Q

Summarise the dimensions of DFS

A

Good space complexity but not optimal

12
Q

DLS completeness, time complexity, space complexity, and optimality

A

Complete: Maybe - if solution is shallower than n
Time Complexity: O(b^n)
Space: O(bn)
Optimality: No

13
Q

Summarise the dimensions of DLS

A

Not complete or optimal

14
Q

IDS completeness, time complexity, space complexity, and optimality

A

Complete: Yes
Time Complexity: O(b^d) - unexpectedly okay
Space: O(bd)
Optimality: Yes
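The idea can be sketched as repeated depth-limited searches with a growing limit (function names and the acyclic test graph are illustrative):

```python
def depth_limited(node, goal, neighbours, limit, path=None):
    """DFS that stops exploring below a fixed depth limit."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for n in neighbours(node):
        result = depth_limited(n, goal, neighbours, limit - 1, path + [n])
        if result:
            return result
    return None

def ids(start, goal, neighbours, max_depth=20):
    """Iterative deepening: run DLS with limit 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, neighbours, limit)
        if result:
            return result
    return None
```

Because most nodes of a tree live at the bottom level, re-expanding the shallow levels each iteration costs little, which is why the time complexity stays O(b^d).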

15
Q

Graph search’s use

A

Incorporated into BFS as it stores all searched nodes
Detects repeated states, preventing linear problems turning into exponential ones

16
Q

What is bidirectional search?

A

Two BFS or IDS searches start from the initial state and goal state
Reduces time complexity from b^d to 2b^(d/2)

17
Q

What do evaluation functions do in search?

A

Finds the most desirable node to expand

18
Q

Greedy Best-First Search completeness, time complexity, space complexity, and optimality

A

Complete: No - it ranks nodes by the heuristic alone, so it can get stuck in loops or prematurely commit to a path
Time Complexity: O(b^m)
Space: O(b^m)
Optimality: No

19
Q

Summarise the dimensions of Greedy Best-First Search

A

Bad space complexity, and not complete or optimal as the greedy nature may see it take suboptimal paths

20
Q

What makes a good heuristic?

A

Admissibility (validity), Consistency, Informative, Domain-Specific Knowledge, Simplicity & Efficiency

21
Q

What’s an admissible heuristic?

A

One that never overestimates the cost to reach the goal state

22
Q

What’s a consistent heuristic?

A

One where the estimated cost from a node is never more than the step cost to a neighbour plus the neighbour’s estimate: h(n) ≤ c(n, n′) + h(n′)

23
Q

What’s an informative heuristic?

A

One that differentiates between promising and less promising paths

24
Q

What’s a heuristic that has domain-specific knowledge?

A

One that exploits knowledge of the problem domain, e.g. in scheduling, the processing times and dependencies of tasks

25
Q

What’s a simple and efficient heuristic?

A

One that is easy to compute

26
Q

How is A* Search different to Greedy Best-First Search?

A

It evaluates nodes with f(n) = g(n) + h(n), combining the cost already paid with the estimated cost to go, rather than using the heuristic alone

27
Q

A* Search completeness, time complexity, space complexity, and optimality

A

Complete: Yes - for finite graphs with finite branching factors
Time Complexity: O(b^m)
Space: O(b^m) - worse than greedy best-first search
Optimality: Yes - if admissible and consistent
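A sketch of A* on a priority queue ordered by f(n) = g(n) + h(n); the weighted graph and zero heuristic in the test are illustrative:

```python
import heapq

def astar(start, goal, neighbours, h):
    """A*: always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for n, cost in neighbours(node):
            new_g = g + cost
            if new_g < best_g.get(n, float('inf')):   # found a cheaper route to n
                best_g[n] = new_g
                heapq.heappush(frontier, (new_g + h(n), new_g, n, path + [n]))
    return None, float('inf')
```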

28
Q

Describe the optimality of A* Search in 3 steps

A

1) evaluation function f(n) is non-decreasing, so the estimated cost will be the same or increase

2) therefore, the node sequence is in non-decreasing order of f(n)

3) so, the first goal to be selected for expansion must be an optimal solution

29
Q

What is a state space?

A

The set of all configurations/states the problem can exist in

30
Q

What’s the premise of local-search algorithms?

A

They keep a current state and explore neighbouring states and evaluate if they’re closer to the goal state

31
Q

What are the 4 steps of hill-climbing search

A

1) Initialisation: start with an initial state

2) Evaluation: evaluate the state using an objective function

3) Neighbour Generation: generate neighbouring solutions by making small changes to the current state

4) Selection: select the neighbour solution that provides the biggest improvement in the objective function
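The four steps can be sketched as steepest-ascent hill climbing (the one-peak toy objective and ±1 neighbour function are made-up examples):

```python
def hill_climb(state, objective, neighbours):
    """Steepest-ascent hill climbing: move to the best neighbour until none improves."""
    while True:
        candidates = neighbours(state)            # 3) neighbour generation
        best = max(candidates, key=objective)     # 4) selection
        if objective(best) <= objective(state):   # 2) evaluation: no improvement
            return state
        state = best
```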

32
Q

What’s the problem with hill-climbing search?

A

It can get stuck in local maxima, making the search not optimal

33
Q

What techniques can be used to reach the global maximum in hill-climbing search?

A

Simulated Annealing, Local Beam

34
Q

How does simulated annealing search work?

A

To escape local maxima, allow some ‘bad’ moves but gradually decrease their likelihood as the temperature T falls. The slower T decreases, the greater the chance of finding the global maximum
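A minimal sketch, where a worse move (negative delta) is accepted with probability exp(delta/T) and T decays geometrically; the schedule constants are illustrative:

```python
import math, random

def simulated_annealing(state, objective, neighbours, t0=10.0, cooling=0.95, steps=500):
    """Accept worse moves with probability exp(delta/T); T shrinks each step."""
    t = t0
    for _ in range(steps):
        nxt = random.choice(neighbours(state))
        delta = objective(nxt) - objective(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt          # improvements are always taken
        t *= cooling             # 'bad' moves become rarer as T falls
    return state
```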

35
Q

How does local beam search work?

A

Keep track of k states rather than one and select the k best successors until one is a goal state. This can however become narrowly focused in a small region of the search space

36
Q

Describe the process of genetic algorithms

A

Each state is represented as a string over a finite alphabet
A fitness function determines the quality of each state
A successor state is produced by combining two parent states

37
Q

What 3 steps are used to create the next generation of states in genetic algorithms?

A

1) Selection: states are chosen based on fitness
2) Crossover: genetic information is combined to create offspring
3) Mutation: offspring’s genetic information is subtly, randomly changed to promote diversity
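The three steps can be sketched for states represented as binary strings (a hypothetical OneMax-style setup; the fitness function is supplied by the caller):

```python
import random

def next_generation(population, fitness, mutation_rate=0.1):
    """One GA generation: fitness-weighted selection, crossover, mutation."""
    new_pop = []
    for _ in range(len(population)):
        # 1) selection: parents chosen with probability proportional to fitness
        mum, dad = random.choices(population,
                                  weights=[fitness(s) for s in population], k=2)
        # 2) crossover: splice the parents at a random cut point
        cut = random.randrange(1, len(mum))
        child = mum[:cut] + dad[cut:]
        # 3) mutation: occasionally flip one random position
        if random.random() < mutation_rate:
            i = random.randrange(len(child))
            child = child[:i] + random.choice('01') + child[i + 1:]
        new_pop.append(child)
    return new_pop
```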

38
Q

How are constraint satisfaction problems better than greedy best-first search or A* search?

A

GBFS and A* don’t necessarily make the best move; they just improve the quality of the state. CSPs look inside the state, giving us stronger heuristics to determine where to go

39
Q

What 3 CSP constraints are there?

A

Unary constraint: involves a single variable
Binary constraint: involves pairs of variables
Higher-order constraint: involves 3 or more variables

40
Q

What are constraint graphs?

A

A graph where nodes are variables and edges are the constraints shared by the variables

41
Q

What are the 3 CSP states and what do they entail?

A

Initial state: the empty assignment
Successor function: a value is assigned to an unassigned variable such that there are no conflicts with any constraints
Goal test: the assignment is complete

42
Q

What type of search do we use for CSPs? Give 3 reasons why

A

DFS:

We don’t need the path
All solutions are at depth n, so there is no optimal solution to search for
You can’t get stuck in loops as there is no way to revisit states

43
Q

Briefly describe backtracking search and what it does to space complexity

A

If you go down a branch and it fails, you can go back up
Space complexity marginally improved from O(bd) to O(d)
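A sketch of the idea (a generic recursive helper; the graph-colouring test instance is made up): only the current branch of assignments is in memory, and a failed branch is undone before the next value is tried.

```python
def backtrack(assignment, variables, domains, consistent):
    """DFS over partial assignments; on failure, undo and try the next value."""
    if len(assignment) == len(variables):
        return assignment                        # goal test: assignment complete
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]                  # failed branch: go back up
    return None
```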

44
Q

What 3 heuristics are used to improve backtracking efficiency

A

1st) Most constrained variable: the variable with fewest legal values
2nd) Most constraining variable: the variable with most constraints on remaining values
3rd) Least constraining variable: given a variable, the value that rules out the fewest neighbouring variables

45
Q

What is forward checking?

A

In a CSP, the remaining legal values of unassigned variables are kept track of

46
Q

What technique is used in forward checking?

A

Arc consistency: provides early detection for failures by following how variables with 1 legal value influence bordering variables

47
Q

How is local search used in CSPs?

A

If there are any conflicted variables, one is randomly selected
Its value is selected using a min-conflicts heuristic
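The two steps above can be sketched as a repair loop over a complete (possibly conflicted) assignment; the `conflicts` callback and the graph-colouring test instance are illustrative:

```python
import random

def min_conflicts(variables, domains, conflicts, assignment, max_steps=1000):
    """Local search for CSPs: repair one conflicted variable per step."""
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                # no conflicted variables left
        var = random.choice(conflicted)      # randomly pick a conflicted variable
        # min-conflicts heuristic: the value that clashes with the fewest others
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None
```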

48
Q

Describe the process of Minimax

A

Search to a given ply
Working up the tree, use MAX and MIN evaluations to decide which branch is the best and worst respectively for us.
If at the bottom of the tree, return the evaluation function.
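The process can be sketched recursively (the game tree and leaf values in the test are made up):

```python
def minimax(node, depth, is_max, children, evaluate):
    """Search to a given ply; MAX takes the largest value, MIN the smallest."""
    kids = children(node)
    if depth == 0 or not kids:       # bottom of the tree: evaluation function
        return evaluate(node)
    values = [minimax(k, depth - 1, not is_max, children, evaluate) for k in kids]
    return max(values) if is_max else min(values)
```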

49
Q

What are the evaluation functions for Minimax?

A

+1 for winning move for MAX
-1 for winning move for MIN

50
Q

Minimax completeness, time complexity, space complexity, and optimality

A

Complete: Yes - if tree is finite
Time Complexity: O(b^m)
Space: O(bm) - DFS exploration
Optimality: Yes - against optimal opponent

51
Q

Give examples of what features may be used when evaluating a state in chess

A

Piece type, pawn structure, king safety, mobility

52
Q

What is the horizon effect?

A

Refers to bias due to the agent’s limited view - the evaluation function will not be perfect e.g. sometimes the best move is to sacrifice the queen.
What if the opponent is looking n+1 moves ahead (over the horizon)?

53
Q

What is quiescent search?

A

Says that we should follow branches whose evaluations look stable/consistent as we go down - the evaluation function is only trusted on quiescent (stable) positions

54
Q

What is singular extension?

A

Sometimes a move is clearly better, in which case the others are not worth exploring

55
Q

What are strong methods and weak methods?

A

Strong method: specialised method that takes the structure of the game into account
Weak method: general method that can be applied to any game

56
Q

What is alpha-beta pruning?

A

A powerful weak method. It performs DFS but allows us to disregard certain branches

57
Q

What are alpha and beta in alpha-beta pruning?

A

Alpha: the lower bound on a node’s value (the best MAX can guarantee so far), initialised to negative infinity
Beta: the upper bound on a node’s value (the best MIN can guarantee so far), initialised to infinity
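The bookkeeping can be sketched as minimax plus a cut-off whenever the bounds cross (the tree and evaluation function are supplied by the caller):

```python
def alphabeta(node, depth, alpha, beta, is_max, children, evaluate):
    """Minimax with pruning: stop exploring once alpha >= beta."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if is_max:
        value = float('-inf')
        for k in kids:
            value = max(value, alphabeta(k, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)    # raise the lower bound
            if alpha >= beta:
                break                    # prune: MIN will never allow this branch
        return value
    value = float('inf')
    for k in kids:
        value = min(value, alphabeta(k, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)          # lower the upper bound
        if alpha >= beta:
            break
    return value
```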

58
Q

Why is directionality important in alpha-beta pruning?

A

Different subtrees are removed depending on the direction of traversal
If the leaves are ordered best to worst, you can prune the most
If the leaves are ordered worst to best, you can’t prune anything

59
Q

How does alpha-beta pruning compare to minimax?

A

With good ordering of the leaves, alpha-beta pruning lets us search roughly twice as deep as minimax for the same effort

60
Q

What are the 2 learning types and what are their subcategories?

A

Supervised:
Classification - examples are labelled with variables describing the data. There are classes, and new examples which we want to classify
Regression - the output is a real number

Unsupervised: no labels - performs pattern recognition on the data

61
Q

What are the 2 classification approaches?

A

Top-Down Classification: broad categories are divided into smaller, specific subcategories
Bottom-Up Classification: individual elements are grouped into larger categories based on common features

62
Q

What is cross validation and what 3 steps does it take?

A

A model-evaluation technique:
1) Partitions the dataset into k subsets
2) Trains the model on k-1 of the subsets
3) Evaluates performance on the remaining subset
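The three steps can be sketched as follows (a generic helper; the `train` and `evaluate` callbacks are placeholders supplied by the caller). The folds here are interleaved rather than contiguous slices, which works equally well for a sketch:

```python
def k_fold_indices(n, k):
    """Split n example indices into k roughly equal folds."""
    return [list(range(i, n, k)) for i in range(k)]

def cross_validate(data, k, train, evaluate):
    """Average performance over k train/test splits."""
    scores = []
    for fold in k_fold_indices(len(data), k):
        held_out = [data[i] for i in fold]
        training = [d for i, d in enumerate(data) if i not in fold]
        model = train(training)                # 2) train on k-1 folds
        scores.append(evaluate(model, held_out))  # 3) evaluate on the rest
    return sum(scores) / k
```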

63
Q

What are the 3 steps in the classification process of decision trees?

A

1) Takes a series of inputs defining a situation
2) Checks the properties of the situation by outputting binary classifications
3) An outcome is predicted

64
Q

Describe the hypothesis space of decision trees

A

A truth table: given n Boolean attributes…
- the table will have 2^n rows
- there will be 2^2^n distinct truth tables
- each truth table must be searched, which is inefficient

65
Q

Why is ordering important in decision trees?

A

Inefficient ordering of questions results in low performance
We choose the next attribute whose value reduces uncertainty about the classification the most / provides the highest information gain
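Information gain can be computed as the reduction in entropy after splitting on an attribute; a minimal sketch (label lists in the test are toy data):

```python
import math

def entropy(labels):
    """Uncertainty (in bits) of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(labels, split):
    """Entropy reduction from partitioning the labels by an attribute."""
    n = len(labels)
    remainder = sum(len(part) / n * entropy(part) for part in split)
    return entropy(labels) - remainder
```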

66
Q

What is the problem with the naive algorithm in decision trees?

A

It’s overfitted to the training data and may not perform well on test data

67
Q

At what point is asking questions in decision trees considered redundant?

A

If the best question is below a certain threshold of information gain
If the difference in information gain between the best and worst question is below a certain threshold

68
Q

What is the nearest neighbours method?

A

A method where instances are classified based on similar instances
Requires no training

69
Q

What is the degree of similarity?

A

The distance between the data points

70
Q

Describe the 3 steps in forming a Voronoi diagram?

A

1) Start with the seed points in the plane
2) Plane is partitioned into regions each associated with a seed point
3) Every point in a region is closer to the associated seed point than any other seed point

71
Q

What is the problem with Voronoi diagrams

A

It’s an approximation based method, so is less reliable nearer the borders

72
Q

What is the k-nearest-neighbours method?

A

The k closest neighbours of a point are chosen
The point’s outcome is determined by majority voting or weighting importance based on closeness
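The majority-voting variant can be sketched as (points and labels in the test are made up; `math.dist` gives the Euclidean distance):

```python
from collections import Counter
import math

def knn_classify(query, examples, k=3):
    """Classify by majority vote among the k closest labelled examples."""
    nearest = sorted(examples, key=lambda e: math.dist(e[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```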

73
Q

Explain the difficulty in setting the value of k in k-nearest-neighbours

A

If k is too small, you over-fit to noise in the data
If k is too big, you lose structural detail

74
Q

What 4 difficulties are there with using euclidean distance as a distance metric?

A

Data points may have multiple dimensions
One axis may dominate another
Data may be categorical
There may be spatial limitations

75
Q

What is a perceptron?

A

An artificial neuron:
Inputs are features with weightings
The perceptron ‘learns’ by learning the weights (importance) of the features
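Learning the weights can be sketched with the classic perceptron update rule (a from-scratch toy, trained on an AND gate in the test):

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Learn weights (and a bias) with the perceptron update rule."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in data:          # target is 0 or 1
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out          # nudge weights towards the target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```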

76
Q

What is an artificial neural network (ANN)?

A

A combination of perceptrons

77
Q

What are the 2 training methods used in ANNs?

A

Two-layer feedforward:
- Inefficient training method
- input fed in and output calculated

Back propagation:
- Efficient training method
- Error between predicted & actual output calculated. Weights are adjusted to reduce error

78
Q

What are hidden layers in two-layer feedforward networks?

A

Layers in between the input and output layers that help approximate functions to higher accuracy

79
Q

What happens if there are too few or too many hidden nodes?

A

Not enough hidden nodes -> network is underfitted (not powerful enough to learn)
Too many hidden nodes -> network is overfitted (just memorises training patterns)

80
Q

What are the planning environment attributes?

A

Fully observable, deterministic, static (nothing changes unless you change it), discrete

81
Q

What 3 features should a planning language have?

A

A representation of the states
A representation of the goals
A representation of the actions

82
Q

Describe states in planning languages

A

A conjunction of positive literals. If something is not said to be true, it can be assumed false

83
Q

How is a goal satisfied in planning languages?

A

If the state contains all the positive literals in the goal (the state may also contain other literals)

84
Q

Describe actions in planning languages

A

To apply an action, preconditions must be satisfied
If they are and the action is applied, it will have an effect

85
Q

What is an action schema?

A

An abstract representation of actions (variables aren’t yet instantiated and bound to literals)

86
Q

What are add-lists and delete-lists?

A

Things that are positive get added to the world
Things that are negative get deleted from the world
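Representing a state as a set of positive literals, the add-list/delete-list mechanics can be sketched as (the literal strings in the test are illustrative):

```python
def apply_action(state, preconds, add_list, delete_list):
    """Apply a STRIPS-style action: check preconditions, then delete and add."""
    if not preconds <= state:            # preconditions must hold in the state
        return None
    return (state - delete_list) | add_list
```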

87
Q

What is ‘θ’ in planning languages?

A

Where variables are bound to values in the action schema
e.g. With θ = {p/P1,from/JFK,to/SFO}

88
Q

How do you write a solution to a planning language problem?

A

[action1(), action2(), … ]

89
Q

What are progression and regression planners?

A

Progression planners: start from the initial state and use actions to reach the goal state
Regression planners: start from the goal state and determine predecessors to reach the initial state

90
Q

Why might you use regression planning over progression planning?

A

The branching factor is generally lower going backwards than forwards so reduces the amount of search

91
Q

What is least commitment strategy?

A

When choices that don’t need to be made yet are delayed

92
Q

What 3 reasons are there for using partial-order planning over fully-ordered planning?

A

You get more flexibility
A POP can represent many FOPs
You can delay commitment, reducing the need to backtrack

93
Q

What is the Turing Test?

A

A human interrogator C converses with a human A and a computer B; if C can’t tell which of A and B is the computer, then the computer “must be intelligent”

94
Q

What is the symbol-grounding problem?

A

If intelligence is thinking thoughts, what makes a thought meaningful?
Does the thought of a symbol mean that symbol because it looks like it? Or because it was caused by it? Or because a mechanism creates thoughts about it?

95
Q

What is physical symbol system hypothesis?

A

The idea that if a machine can manipulate symbols appropriately, it can be intelligent

96
Q

What is Searle’s Chinese room?

A

An idea that challenges the physical symbol system hypothesis. Manipulating symbols alone isn’t considered intelligence because what is actually understanding the symbols? It would pass the Turing Test though.

97
Q

What are the 2 types of reasoning?

A

Logic/rule based reasoning
Stochastic reasoning

98
Q

What is logic/rule based reasoning?

A

Where you start with basic rules (axioms) that you’re sure about
You derive logic from this
Whatever you infer, you’re sure about

99
Q

What is the issue with logic/rule based reasoning?

A

It’s not probabilistic, which limits it

100
Q

What is stochastic reasoning?

A

Reasoning that involves possibilistic reasoning or Bayesian probability

101
Q

What is good about Bayesian reasoning?

A

It captures the uncertainty of knowledge
It can integrate prior knowledge into reasoning processes

102
Q

What are the elements of Bayesian belief update (Bayes Rule variables)?

A

(Prior, likelihood, denominator) -> posterior
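For a binary hypothesis the update can be computed directly, with the denominator P(E) obtained by total probability (the 1%/90%/10% numbers in the test are a made-up example):

```python
def bayes_update(prior, likelihood, likelihood_not):
    """posterior P(H|E) = P(E|H) P(H) / P(E)."""
    # denominator P(E) via total probability over H and not-H
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence
```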

103
Q

What are Bayes Nets (3-part)?

A

Bayesian Networks:
Directed acyclic graphs (no closed loops)
Represented by a truth table, with probabilities instead of T/F
Can model complex probabilistic dependencies based on more than one variable

104
Q

What are the 3 issues with inference?

A

Can’t capture evidence - can only reason off prior knowledge
Doesn’t scale well with increasing no. variables
Doesn’t show relationships

105
Q

How are Bayes Nets built?

A

Using domain experts: structure is manually built, probabilities are built or learned from data

Building from data: structure and probabilities are learned from the data

106
Q

What is the premise of robotics?

A

That intelligence is defined as the ability to adjust to changes in the environment

107
Q

What are deliberative control architectures?

A

A sequence of sensing, updating its model, planning, and executing

108
Q

What are the 3 problems with deliberative control architectures?

A

Solutions are fragile in the presence of uncertainty
Requires frequent replanning
Reacts slowly to changes in the environment

109
Q

What are behaviour-based control architectures?

A

Where simple behaviours achieving subtasks are combined to achieve the overall task

110
Q

What are the 2 problems with behaviour-based control architecture?

A

Emergent behaviour makes exact behaviour difficult to predict
Requires difficult coordination design

111
Q

What does strong AI cover?

A

Singularity - AI becoming superior to humans
How singularity raises moral/legal issues

112
Q

What does weak AI cover?

A

Human-Agent Collectives - where a collaborative weak AI domain is built around us, where human users are integral parts of the system