Final Flashcards

1
Q

What is the time complexity of the alpha-beta pruning algorithm?
b - max branching factor of the search tree
d - depth of the least-cost solution
m - max depth of the state space
a) O(bm)
b) O(b^(m/2))
c) O(b^m)
d) O(bd + 1)
e) O(b^d + 1)
f) O(b^(d+1))

A

b) O(b^(m/2)), assuming perfect move ordering. With random move ordering the expected complexity is roughly O(b^(3m/4)); the worst case, with no pruning at all, is O(b^m).
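A minimal alpha-beta sketch on a hand-built game tree (the tree and node counts below are illustrative, not from the card). Leaves hold static values; internal nodes are lists of children. The counter shows that pruning skips nodes a plain minimax would visit.

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True, counter=None):
    """Return the minimax value of `node`, pruning branches that cannot
    affect the result. `counter` (a one-element list) tallies visited nodes."""
    if counter is not None:
        counter[0] += 1
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:             # cutoff: MIN will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, counter))
            beta = min(beta, value)
            if alpha >= beta:             # cutoff: MAX will never allow this branch
                break
        return value

# Three-ply example: 1 root + 3 MIN nodes + 9 leaves = 13 nodes in total,
# but alpha-beta visits only 11 of them here.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
visited = [0]
print(alphabeta(tree, counter=visited))   # → 3
```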

2
Q

Which is INCORRECT about the agent's need for learning?
a) The developer cannot anticipate all the changes, since some of them are unpredictable
b) The learning ability makes sure that the agent always creates the exact solution for any input task
c) The developer cannot provide enough information to cover every circumstance
d) The developer cannot anticipate all the possible situations, since there are uncertainties in the environment
e) There might be errors or noise in the dataset.

A

b) The learning ability makes sure that the agent always creates the exact solution for any input task. Learning improves performance over time; it does not guarantee exact solutions.

3
Q

What is NOT a component to learn in the design of the learning element?
a) Mechanism to do inference based on perception
b) Problem generator to suggest experiments
c) Perception, by which the agent detects changes in the surrounding environment and adapts to them
d) Mapping from conditions to actions

A

b) Problem generator to suggest experiments. In the standard learning-agent design (AIMA), the components to be learned include a direct mapping from conditions to actions and the means to infer world properties from percepts; the problem generator is a separate module of the agent itself, not something the learning element learns.

4
Q

Which of the following statements is true?
a) All admissible heuristics are consistent
b) All consistent heuristics are admissible
c) The heuristic h(n) = 1 is admissible for every search problem

A

b) All consistent heuristics are admissible. (The converse is false, and h(n) = 1 is not admissible for every problem because any goal state must have h = 0.)
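A brute-force check of the claim on a tiny hand-made search graph (the graph and heuristic values are assumptions for illustration): if h satisfies the consistency inequality on every edge, it also never overestimates the true cost to the goal, while h(n) = 1 already fails at the goal itself.

```python
import heapq

# edges[u] = list of (v, cost); 'G' is the goal state
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 3)], 'G': []}
h = {'S': 5, 'A': 4, 'B': 3, 'G': 0}   # a consistent heuristic

def true_cost(start):
    """Dijkstra: optimal cost from `start` to the goal 'G'."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, c in edges[u]:
            if d + c < dist.get(v, float('inf')):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist.get('G', float('inf'))

# Consistency: h(u) <= c(u, v) + h(v) for every edge (u, v).
assert all(h[u] <= c + h[v] for u in edges for v, c in edges[u])
# Admissibility follows: h never exceeds the true cost to the goal.
assert all(h[u] <= true_cost(u) for u in edges)
# h(n) = 1 is NOT admissible: at the goal the true cost is 0 < 1.
assert 1 > true_cost('G')
```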

5
Q

Which is NOT a main component of the learning agent?
a) problem generator
b) Critic
c) Knowledge-base element
d) Learning element
e) Performance standard

A

The components of a learning agent are the Critic, the Learning Element, the Performance Element, and the Problem Generator (the Performance Standard is the fixed external input to the Critic).
So the answer is: c) Knowledge-base element

6
Q

Given two admissible heuristics hA and hB, which of the following is guaranteed to also be an admissible heuristic?
a) 2 * max(hA, hB)
b) min(hA, hB)
c) hA * hB
d) hA + hB

A

b) min(hA, hB). The minimum of two underestimates is still an underestimate; each of the other combinations can overestimate the true cost.
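A single numeric state is enough to see why only the minimum is safe (the values below are made up for illustration): suppose the true cost-to-go is 10 and both heuristics are admissible at this state.

```python
h_true = 10          # true cost from this state to the goal
hA, hB = 7, 9        # both <= 10, so both are admissible here

assert min(hA, hB) <= h_true     # min of two underestimates still underestimates
assert max(hA, hB) <= h_true     # plain max is also admissible (not among the options)
assert 2 * max(hA, hB) > h_true  # 18 > 10: can overestimate
assert hA + hB > h_true          # 16 > 10: can overestimate
assert hA * hB > h_true          # 63 > 10: can overestimate
```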

7
Q

Which of the following statements is true?
a) Value iteration is used for computing the max values of states
b) At convergence, the value iteration does not change the value function for any state
c) Each iteration of value iteration produces a value function that has higher value than the prior value functions for all states.

A

b) At convergence, the value iteration does not change the value function for any state
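A minimal value-iteration sketch on a made-up 2-state MDP (states, transitions, and rewards are assumptions for illustration). The loop stops exactly when a full sweep no longer changes the value function, which is the convergence condition the correct option describes.

```python
# T[s][a] = list of (probability, next_state, reward) outcomes
T = {
    's0': {'stay': [(1.0, 's0', 0.0)],
           'go':   [(0.8, 's1', 5.0), (0.2, 's0', 0.0)]},
    's1': {'stay': [(1.0, 's1', 1.0)]},
}
gamma = 0.9

def value_iteration(T, gamma, tol=1e-9):
    V = {s: 0.0 for s in T}
    while True:
        # Bellman backup: max over actions of expected reward plus discounted value
        V_new = {
            s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                   for outcomes in T[s].values())
            for s in T
        }
        if max(abs(V_new[s] - V[s]) for s in T) < tol:
            return V_new   # converged: another sweep would change nothing
        V = V_new

V = value_iteration(T, gamma)
# V['s1'] = 1 / (1 - 0.9) = 10; V['s0'] solves V0 = 11.2 + 0.18*V0, about 13.66
```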

8
Q

Which is true?
a) If the perceptron algorithm terminates, then it is guaranteed to find a max-margin separating decision boundary.
b) In the binary classification case, logistic regression is exactly equivalent to a single-layer neural network with a sigmoid activation and the cross-entropy loss function
c) It is possible for the perceptron algorithm to never terminate on a dataset that is linearly separable in its feature space.

A

b) In the binary classification case, logistic regression is exactly equivalent to a single-layer neural network with a sigmoid activation and the cross-entropy loss function.

(a) is false: the perceptron stops at the first separating boundary it finds, which need not be the max-margin one. (c) is false by the perceptron convergence theorem: on a linearly separable dataset the algorithm is guaranteed to terminate after a finite number of mistakes.
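The convergence theorem can be checked empirically with a small sketch (the four 2-D points below are made up for illustration and are separable by the line x1 = 0): the mistake-driven update loop reaches a zero-mistake epoch and stops.

```python
# (features, label) pairs; labels are +1 / -1
data = [((2.0, 1.0), 1), ((1.0, 3.0), 1), ((-1.0, -2.0), -1), ((-2.0, 1.0), -1)]

def perceptron(data, max_epochs=100):
    """Classic perceptron with a bias term; returns (weights, bias, epochs)."""
    w = [0.0, 0.0]
    b = 0.0
    for epoch in range(max_epochs):
        mistakes = 0
        for (x1, x2), y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified (or on boundary)
                w[0] += y * x1                          # nudge boundary toward the point
                w[1] += y * x2
                b += y
                mistakes += 1
        if mistakes == 0:
            return w, b, epoch   # terminated: a full pass with no mistakes
    raise RuntimeError("did not converge (data may not be separable)")

w, b, epochs = perceptron(data)
# every training point is now strictly on the correct side of the boundary
```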

9
Q

Which statement is true?
a) Q-learning is a model-based reinforcement learning method
b) Q-learning requires that all samples come from the optimal policy in order to find the optimal q-values
c) In Q-learning, you do not learn the model, including the transition function and the reward function.

A

c) In Q-learning, you do not learn the model, including the transition function and the reward function. Q-learning is model-free (ruling out a) and off-policy, so it can learn optimal q-values from suboptimal exploration (ruling out b).
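A tabular Q-learning sketch on a tiny 2-state chain (the environment below is made up for illustration). Note that the learner only ever sees sampled (s, a, r, s') tuples: the dynamics live in a hidden `step` function, and no transition or reward model is ever built.

```python
import random

random.seed(0)

def step(s, a):
    """Hidden environment dynamics; the learner never inspects this function."""
    if s == 0 and a == 1:
        return 1, 0.0          # move right, no reward
    if s == 1 and a == 1:
        return 1, 1.0          # stay at the rewarding state, reward 1
    return 0, 0.0              # action 0 leads back to state 0

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

s = 0
for _ in range(5000):
    # epsilon-greedy behavior policy (exploration need not be optimal)
    if random.random() < epsilon:
        a = random.choice((0, 1))
    else:
        a = max((0, 1), key=lambda act: Q[(s, act)])
    s2, r = step(s, a)
    # model-free update: bootstrap from the sampled next state only
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    s = s2

# Q(1, 1) should approach 1 / (1 - gamma) = 10, and state 0 should prefer action 1.
```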

10
Q

In coin tossing, which case produces the lowest information gain?
a) Probability is 0-100 for head and tail
b) Probability is 99-1 for head and tail
c) Probability is 75-25 for head and tail
d) Probability is 50-50 for head and tail
e) Probability is 25-75 for head and tail
f) Probability is 5-95 for head and tail
g) Probability is 100-0 for head and tail

A

d) Probability is 50-50 for head and tail. A fair coin has maximum entropy (1 bit), and a test that leaves the outcome distribution at 50-50 provides the minimum information gain.
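The binary entropy H(p) = -p log2 p - (1-p) log2(1-p), with the convention 0 log 0 = 0, can be evaluated directly for the probabilities on the card to confirm that 0.5 is the maximum:

```python
from math import log2

def H(p):
    """Binary entropy in bits; terms with probability 0 contribute 0."""
    return sum(-q * log2(q) for q in (p, 1 - p) if q > 0)

# P(head) for the answer options a) through g)
probs = [0.0, 0.01, 0.25, 0.5, 0.75, 0.95, 1.0]
for p in probs:
    print(f"P(head) = {p:.2f}  ->  entropy = {H(p):.3f} bits")
# Entropy peaks at p = 0.5 (exactly 1 bit) and is 0 for a certain outcome.
```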
