Artificial Intelligence Flashcards

1
Q

Branching factor

A
  • Maximum number of successors of any node
  • Denoted as b
2
Q

Depth of shallowest goal node

A

Denoted as d

3
Q

Maximum length of path

A

Denoted as m

4
Q

Breadth first search

A
  • Expands shallowest node on the frontier
  • Complete if d and b are finite
  • Optimal only if path cost is a nondecreasing function of node depth, for example when all actions have identical cost
  • Time complexity: O(b^d)
  • Space complexity: O(b^d)
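The bullets above can be sketched as a graph-search BFS. `successors` and `is_goal` are hypothetical helpers, not part of the card:

```python
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Return a shortest path (in number of actions) to a goal, or None."""
    if is_goal(start):
        return [start]
    frontier = deque([start])     # FIFO queue: shallowest nodes expanded first
    parents = {start: None}       # doubles as the explored set
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state
                if is_goal(nxt):  # goal test on generation
                    path = [nxt]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return path[::-1]
                frontier.append(nxt)
    return None
```

For example, searching from 1 to 5 with successors n+1 and 2n yields the three-action path [1, 2, 4, 5].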
5
Q

Depth first search

A
  • Expands deepest node on the frontier
  • Complete as graph search in a finite state space; not complete as tree search (it can loop forever)
  • Not optimal
  • Time complexity: O(b^m)
  • Space complexity: O(bm)
6
Q

Iterative deepening search

A
  • Run depth-first search to a fixed depth z
  • Starting at z = 1, if the solution is not found then increment z and rerun the algorithm
  • Time complexity: O(b^d)
  • Space complexity: O(bd)
  • Complete if state space is finite
  • Optimal if step costs are all identical
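A minimal sketch of the procedure, assuming the same hypothetical `successors`/`is_goal` interface used elsewhere in this deck (limits here start at 0 so the trivial start-is-goal case is covered):

```python
def depth_limited(state, is_goal, successors, limit):
    """Depth-first search cut off at a fixed depth; returns a path or None."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        sub = depth_limited(nxt, is_goal, successors, limit - 1)
        if sub is not None:
            return [state] + sub
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Rerun depth-limited search with an increasing limit z."""
    for z in range(max_depth + 1):
        result = depth_limited(start, is_goal, successors, z)
        if result is not None:
            return result
    return None
```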
7
Q

Uniform-cost search

A
  • Expand node with lowest path cost
  • Complete unless there is a path with an infinite sequence of zero-cost actions
  • Optimal
  • Time and space complexity: O(b^[1 + ⌊C*/ϵ⌋]); with equal action costs this reduces to O(b^[d+1]), where:
  • C* = cost of the optimal solution
  • ϵ = minimum action cost
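A sketch using a priority queue keyed on path cost g; `successors` is assumed to yield (next_state, action_cost) pairs:

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """Return (cost, path) of a cheapest path to a goal, or None."""
    frontier = [(0, start, [start])]   # (path cost g, state, path)
    best_g = {}                        # cheapest expansion cost seen per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if is_goal(state):             # goal test on expansion => optimality
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                   # already expanded more cheaply
        best_g[state] = g
        for nxt, cost in successors(state):
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None
```

On a small graph where A reaches C directly for cost 5 or via B for cost 2, UCS returns the cheaper indirect path.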
8
Q

Minimax algorithm

A
  • Complete if game tree is finite
  • Optimal against an optimal opponent
  • Time complexity: O(b^m)
  • Space complexity: O(bm) or O(m)
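A sketch for a game tree given as explicit successor and utility functions (a hypothetical interface); terminal states are those with no successors:

```python
def minimax(state, successors, utility, maximizing=True):
    """Return the minimax value of state for the MAX player.

    successors(state) -> list of child states (empty means terminal).
    utility(state)    -> payoff for MAX at a terminal state.
    """
    children = successors(state)
    if not children:                   # terminal node: return its utility
        return utility(state)
    values = [minimax(c, successors, utility, not maximizing) for c in children]
    return max(values) if maximizing else min(values)
```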
9
Q

Pruning decision trees

A
  • Avoids over fitting to training data
  • Reduces test error
  • Reduces complexity of the final classifier
10
Q

Greedy decision-tree learning

A
  • Uses information gain to choose attributes to split on
  • Does not prune
  • Fits the training data (minimising training error) but is not guaranteed to find the smallest such tree
11
Q

Admissible heuristic

A
  • Never overestimates the cost to reach the goal
  • For example, straight line distance
  • Satisfies 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost of the cheapest path from n to a goal
12
Q

Consistent heuristic

A
  • Is also admissible
  • h(n) ≤ c(n, a, nʹ) + h(nʹ) for every node n and every successor nʹ reached via action a (a triangle inequality)
13
Q

Constraint satisfaction problem

A
  • X = {X1,…,Xn} = set of variables
  • D = {D1,…,Dn} = set of domains specifying the values each variable can take
  • C = set of constraints that specifies values that the variables are allowed to have collectively
14
Q

Q-learning

A

Formula to find the new value, after observing transition (s, a, r, s’):

Q(s, a) ← (1 − α)·p + α·(r + γ·max_a’ Q(s’, a’))

  • γ = discount factor
  • p = previous value Q(s, a)
  • α = learning rate
  • r = reward from the previous transition
  • max_a’ Q(s’, a’) = value of the best action available in the successor state s’

  • After each transition (s, a, r, s’), the agent updates the Q-value of the state–action pair (s, a) it has just left
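The update can be sketched as follows, with the Q-table as a dict defaulting to 0 (the `actions` helper is an assumption, not part of the card):

```python
def q_update(q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning update after observing transition (s, a, r, s_next).

    q: dict mapping (state, action) -> value; missing entries default to 0.
    actions(state) -> actions available in that state.
    """
    best_next = max((q.get((s_next, a2), 0.0) for a2 in actions(s_next)),
                    default=0.0)                     # max_a' Q(s', a')
    old = q.get((s, a), 0.0)
    q[(s, a)] = (1 - alpha) * old + alpha * (r + gamma * best_next)
    return q[(s, a)]
```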
15
Q

Greedy best-first search

A
  • Expands the node that appears to be closest to goal.
  • Complexity: O(b^m) in both time and space for tree search, but a good heuristic can reduce it substantially
  • Complete only when performing graph search in a finite state space; otherwise not (it can get stuck in loops)
16
Q

A* search

A
  • f(n) = g(n) + h(n)
  • g(n) = path cost from start node to node n
  • h(n) = estimated cost of the cheapest path from n to goal state
  • Complete
  • Tree search version is optimal with an admissible heuristic
  • Graph search version is optimal with a consistent heuristic.
  • No other optimal algorithm is guaranteed to expand fewer nodes than A*
  • Expands every node with f(n) < C*, the cost of the optimal solution
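A sketch of graph-search A*; it orders the frontier by f(n) = g(n) + h(n) and assumes `successors` yields (next_state, step_cost) pairs:

```python
import heapq

def a_star(start, is_goal, successors, h):
    """Return (cost, path) of a cheapest path to a goal, or None."""
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, state, path)
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        if state in best_g and best_g[state] <= g:
            continue
        best_g[state] = g
        for nxt, cost in successors(state):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None
```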
17
Q

Markov Decision Process

A
  • S = set of states
  • A(s), s ∈ S = set of actions available from each state
  • P(s’ | s,a) = transition model
  • R(s, a, s’) = reward function
18
Q

Bellman’s Equation

A
For a fixed policy π:

Vπ(s) = Σ_s’ P(s’ | s, π(s)) [R(s, π(s), s’) + γ Vπ(s’)]

s = state
a = π(s) = action selected by the policy in s
s’ = successor state
γ = discount factor
19
Q

Bellman Optimality Equation

A

V*(s) = max_{a ∈ A(s)} Σ_s’ P(s’ | s, a) [R(s, a, s’) + γ V*(s’)]
20
Q

Value Iteration Update

A

V_{k+1}(s) ← max_{a ∈ A(s)} Σ_s’ P(s’ | s, a) [R(s, a, s’) + γ V_k(s’)]
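Using the MDP components from card 17 as hypothetical Python callables (`transitions` for P, `reward` for R), the value-iteration update can be sketched as:

```python
def value_iteration(states, actions, transitions, reward, gamma=0.9, eps=1e-6):
    """Iterate the Bellman update in place until values change by < eps.

    transitions(s, a) -> list of (probability, s_next) pairs.
    reward(s, a, s_next) -> immediate reward.
    """
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):          # terminal state: value stays 0
                continue
            new_v = max(sum(p * (reward(s, a, s2) + gamma * v[s2])
                            for p, s2 in transitions(s, a))
                        for a in actions(s))
            delta = max(delta, abs(new_v - v[s]))
            v[s] = new_v
        if delta < eps:
            return v
```

This in-place variant updates each state as soon as its new value is computed; a two-array version of the same update also converges.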
21
Q

Deterministic policy in MDP

A

In each state, a single action is selected with probability 1 (no randomisation over actions).

22
Q

Interpretation in propositional logic

A
  • Assignment of a truth value to each symbol (true/false)
  • For example, for two symbols P and Q there would be 4 interpretations
23
Q

Meaning in propositional logic

A
  • A set of interpretations
24
Q

Naive Bayes Classifier

A
  • All features are conditionally independent of each other given the class (their common parent in the network)
  • Features that share the class as parent may still be marginally dependent on each other, i.e. when the class is not observed
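Under the conditional-independence assumption, the posterior factorises as P(class | features) ∝ P(class)·Π P(feature_i | class); a sketch with made-up dictionary inputs:

```python
def naive_bayes_posteriors(priors, cond, features):
    """Return normalised P(class | features) under the naive Bayes assumption.

    priors: {class: P(class)}
    cond:   {class: [table per feature]}, each table {value: P(value | class)}
    """
    scores = {c: priors[c] for c in priors}
    for c in priors:
        for i, value in enumerate(features):
            scores[c] *= cond[c][i][value]   # multiply in each likelihood
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}
```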
25
Q

Bayes net

A
  • Each node is conditionally independent of its non-descendants given its parents
  • Each node is conditionally independent of all other nodes in the network given its Markov blanket
  • A node is not, in general, independent of its descendants or ancestors
26
Q

Markov Blanket

A
  • Parents, children and children’s parents of the selected node
  • Each node is independent of all other nodes outside the Markov blanket given its Markov blanket
27
Q

Perceptron network

A
  • If the weighted linear combination of inputs is below the threshold, the output is 0 (the unit does not fire); otherwise the output is 1 (it fires)
  • Can’t represent XOR, which is not linearly separable
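A sketch of a single threshold unit; with weights (1, 1) and threshold 1 it computes OR, but no choice of weights and threshold reproduces XOR:

```python
def perceptron_output(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0
```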
28
Q

Holdout cross-validation

A
  • Split the training set in half
  • Train the classifier on one half and measure the error rate on the other half
  • Repeat on different splits
29
Q

K-fold cross-validation

A
  • Divide the training set into k equal subsets
  • Perform k rounds of learning
  • In each round, train the classifier on k-1 subsets then test on the remaining subset
  • Use the mean error rate across rounds
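A sketch of the index bookkeeping for the steps above (the learning and error-measurement steps are left abstract):

```python
def k_fold_indices(n, k):
    """Split range(n) into k near-equal folds; yield (train, test) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Each index appears in exactly one test fold, so every instance is tested exactly once across the k rounds.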
30
Q

Leave one out cross-validation

A
  • Extreme version of k-fold cross validation, where k=n, and n is the number of instances in the training set