Previous exam questions (for exam 2/3) Flashcards

These questions are taken from the pinned PDF file on the JKU Discord and were simply copied, not checked by me.

1
Q

Select the true statement(s): What is the difference between Artificial Intelligence, Deep Neural Networks and Machine Learning?

  1. Nothing, they are all the same
  2. Artificial Intelligence is a subset of Machine Learning and Deep Neural Networks
  3. Machine Learning is a subset of Artificial Intelligence and Deep Neural Networks
  4. Deep Neural Networks is a subset of Artificial Intelligence and Machine Learning
A

4 is correct

2
Q

True or False: A computer is said to learn if its performance at a task, as measured by a performance measure, declines with experience.

A

False

3
Q

Select the true statement(s): What are some applications of Machine Learning?

  1. Recognizing Spam-Mail
  2. Market Basket analysis
  3. Recognize handwritten characters
  4. Logical reasoning
  5. Analyse dreams
  6. Solve ethical problems
A

1, 2 and 3 are correct

4
Q

Select the true statement(s): What was achieved by using Machine Learning in astronomy?

  1. SKICAT can classify celestial objects with a 94% accuracy.
  2. AI is useless in astronomy since it cannot find differences between stars and galaxies.
  3. SKICAT can classify celestial objects with a 6% accuracy.
  4. 16 new quasars were found.
  5. AI classification was even slightly more accurate than astronomers.
  6. AI was good at analyzing pictures of quasars but could not find anything new.
A

1, 4 and 5 are correct.

5
Q

What is NOT a learning method for Machine Learning?

  1. Supervised Learning
  2. Reinforcement Learning
  3. Auto-supervised Learning
  4. Unsupervised Learning
A

3 is not a learning method

6
Q

Select the true statement(s): How can a machine learn to play a game?

  1. Cheating and performing invalid moves.
  2. Machines cannot learn.
  3. Rereading the game rules.
  4. Playing lots of games against itself.
A

4 is correct

7
Q

Match the different learning scenarios in machine learning with their corresponding description:

  1. Unsupervised Learning
  2. Reinforcement learning
  3. Supervised learning
  4. Semi-Supervised Learning

a. Only a subgroup of the training data set has an additional label as target data.
b. The target value is provided for all training samples. This learning method is used for classification and regression.
c. This learning procedure relies on the feedback of a teacher and not on example values.
d. There is no further information on the training samples. This method is used for clustering and association rule discovery.

A

1d
2c
3b
4a

8
Q

True or False: When we’re given a training data set with potential noise in it, our goal is to capture every single data point of our training samples in our hypothesis h.

A

False

9
Q

True or False: Simulation search stops searching at a fixed position and uses an evaluation function to compute the output.

A

False

10
Q

Select the true statement(s): What does the TD in TD-Gammon stand for?

  1. Turing Decoding
  2. Temporal Difference Learning
  3. Test Dice Roll
  4. Try and Defeat Learning
A

2 is correct

11
Q

Choose the correct answers regarding Simulation Search.

  1. In the Monte Carlo tree search method, many games are simulated by making random
    moves and then the average score of these games is used to evaluate a decision.
  2. Simulation Search is a Machine Learning method that is used when the complete tree is
    not searchable.
  3. Simulation Search is a Machine Learning method that can only be used for rather small
    decision trees.
  4. With Simulation Search, every single branch is searched and evaluated.
  5. The Monte Carlo tree search method evaluates good decisions by trying out every possible
    move.
  6. With Simulation Search only a few branches are searched and evaluated.
A

1, 2 and 6 are correct

12
Q

True or False: Shannon Type-A search algorithm is a brute force approach.

A

True

13
Q

Select the correct statement(s): Choose the steps for genetic algorithms in the (usually applied / best-practice) order.

  1. Selection - Fitness - Mutation - Crossover
  2. Fitness - Mutation - Selection - Crossover
  3. Fitness - Selection - Crossover - Mutation
  4. Fitness - Crossover - Mutation - Selection
A

3 is correct
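The order in answer 3 (Fitness - Selection - Crossover - Mutation) can be sketched as a minimal genetic-algorithm loop; this is a toy problem (maximizing the number of 1-bits), and all names and parameters are illustrative, not from the course material:

```python
import random

def evolve(pop_size=20, genome_len=10, generations=30, seed=0):
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy fitness: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # 1. Fitness: evaluate every individual
        scored = sorted(pop, key=fitness, reverse=True)
        # 2. Selection: keep the better half as parents
        parents = scored[: pop_size // 2]
        # 3. Crossover: recombine two random parents at a cut point
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            children.append(a[:cut] + b[cut:])
        # 4. Mutation: flip each bit with small probability
        pop = [[bit ^ (rng.random() < 0.05) for bit in child] for child in children]
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # close to genome_len after enough generations
```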

14
Q

Map the following important search concepts.

  1. Evaluation Functions
  2. Iterative Deepening
  3. Selective Search
  4. Transposition Tables

a. … limit(s) the depth
b. … avoid(s) re-computing the same state again
c. … limit(s) used time/processing power
d. … limit(s) the height

A

1d
2a
3c
4b

15
Q

True or False: Alpha-Beta-Search is in general faster than Min-Max (because of pruning uninteresting paths) but therefore only yields approximate results which are not necessarily the same.

A

False
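The statement is false because alpha-beta pruning only skips branches that provably cannot change the result, so it returns exactly the minimax value. A minimal sketch comparing both on a hand-built game tree (the tree and function names are illustrative):

```python
def minimax(node, maximizing):
    # Leaves are numbers; inner nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(c, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):
        return node
    value = float("-inf") if maximizing else float("inf")
    for child in node:
        v = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if beta <= alpha:  # prune: the rest cannot change the result
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
assert minimax(tree, True) == alphabeta(tree, True) == 3
```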

16
Q

Select the correct statement(s) about Overfitting.

1 … is not avoidable without a validation set
2 … only occurs if the model is not adequate independently of the data
3 … causes the test-set error to increase, although the training set error is not affected
4 … does not affect MLPs but only Deep Neural Networks

A

3 is correct

17
Q

True or False: Deep Neural Networks are easily explainable by design

A

False

18
Q

Associate the following Machine-Learning Concepts.

  1. Credit Assignment Problem
  2. MCTS
  3. UCB
  4. MENACE

a. Describes Exploration vs Exploitation
b. Essentially uses the Tree- and Rollout-Policy
c. Mainly Delayed Reward
d. Pioneer-Project regarding Reinforcement Learning

A

1c
2b
3a
4d

19
Q

Choose the correct order for Monte-Carlo Tree Search.

1 Selection - Simulation - Expansion - Backpropagation
2 Selection - Backpropagation - Simulation - Expansion
3 Selection - Expansion - Simulation - Backpropagation
4 Selection - Separation - Expansion - Backpropagation

A

3 is correct
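The four phases of answer 3 can be sketched as a minimal Monte-Carlo Tree Search on a toy single-player game; the game and every parameter below are illustrative assumptions, not from the course material:

```python
import math
import random

# Toy single-player game: choose 4 bits; the reward is the value of the
# resulting binary number, scaled into [0, 1].
MOVES, DEPTH = (0, 1), 4
def is_terminal(state): return len(state) == DEPTH
def reward(state): return int("".join(map(str, state)), 2) / 15

class Node:
    def __init__(self, state):
        self.state, self.children = state, {}
        self.visits, self.value = 0, 0.0

def mcts(root_state, iterations=500, c=1.4, rng=random.Random(0)):
    root = Node(root_state)
    for _ in range(iterations):
        node, path = root, [root]
        # 1. Selection: descend via UCB while nodes are fully expanded.
        while not is_terminal(node.state) and len(node.children) == len(MOVES):
            node = max(node.children.values(),
                       key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
            path.append(node)
        # 2. Expansion: add one untried child.
        if not is_terminal(node.state):
            move = next(m for m in MOVES if m not in node.children)
            child = Node(node.state + (move,))
            node.children[move] = child
            path.append(child)
            node = child
        # 3. Simulation: random rollout to a terminal state.
        state = node.state
        while not is_terminal(state):
            state = state + (rng.choice(MOVES),)
        # 4. Backpropagation: update statistics along the path.
        r = reward(state)
        for n in path:
            n.visits += 1
            n.value += r
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts(()))  # the root move with the most visits
```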

20
Q

Match the definitions correctly:

  1. Supervised learning
  2. Unsupervised learning
  3. Semi-supervised Learning
  4. Reinforcement learning

a. Only subset of the training examples
are labeled as good or bad actions
b. There are occasional rewards (feedback)
for good actions
c. correct answers are not given (there is
no information except the training examples)
d. correct answers are given for each example

A

1d
2c
3a
4b

21
Q

What is the idea behind Neural Networks?

1 to memorise many examples
2 to find a way to implement very complex principles
3 to process information like a serial computer
4 to model brains and nervous systems

A

4 is correct

22
Q

True or False: Artificial Neurons are connected to each other via synapses.

A

True

23
Q

Select the correct statement(s): What is a way of avoiding overfitting?

1 keeping a separate validation set to watch the performance of the test and training sets
2 iterating only once through all examples
3 making no adjustments throughout the fitting process
4 There is no solution for overfitting

A

1 is correct

24
Q

True or False: The terms Machine Learning, Artificial Intelligence and Deep Neural Networks mean the same thing.

A

False

25
Q

Select the correct statement(s):

The idea of neural networks is based on

1 The Fibonacci number
2 The human brain
3 A famous computer program
4 Ants

A

2 is correct

26
Q

Which three of these Machine Learning methods do exist?

1 Randomized Learning
2 Reinforcement Learning
3 Improvised Learning
4 Semi-supervised Learning
5 Independent Learning
6 Supervised Learning

A

2, 4, 6 are correct

27
Q

True or False: A neural network is a method in artificial intelligence that teaches computers to process information in a way that is inspired by the human brain

A

True

28
Q

Map the terms connected to ML in games to corresponding definitions.

  1. Reinforcement learning
  2. Regret
  3. Upper confidence bound
  4. Roll-out

a. Machine learning technique which tries to maximize a reward function.
b. Algorithm balancing between exploiting its current possibilities and exploring new ones
c. Technique to evaluate unexplored paths in the game tree.
d. Difference between obtained and optimal gain

A

1a, 2d, 3b, 4c

29
Q

Select the correct statement(s): For which tasks is machine learning well suited where classical programming comes to its limits?

1 finding patterns in data
2 unknown environments
3 real-time computation
4 highly complex situations
5 Playing games like Sudoku
6 processing a lot of data

A

1, 2, 4 and 6 are correct

30
Q

Match the components to their descriptions.

  1. should be minimized during training
  2. introduces nonlinearity
  3. combines multiple input links into a single value
  4. specifies how much the weights change with every update

a. error
b. activation function
c. learning rate
d. input function

A

1a, 2b, 3d, 4c

31
Q

True or False: For defining a search problem, a state description and a goal function are needed.

A

True

32
Q

Select the true statement(s): What is overfitting?

1 stopping to learn when the validation loss goes up
2 the AI system being biased
3 The AI getting better on the training set but worse on unseen data
4 using too many hidden layers

A

3 is correct

33
Q

In 1912, the Spanish engineer Leonardo Torres y Quevedo completed a machine, El Ajedrecista, that was able to automatically play (and win) the KRK chess endgame. Which statements are true?

1 The individual moves of the machine depended on the moves of the human opponent.
2 The individual moves of the machine were independent of the moves of the human opponent.
3 The chessboard was divided into 3 zones.
4 The king and the rook of the computer had a fixed starting position; the king of the human opponent could be placed anywhere in the square from h1 to c6.
5 The chessboard was divided into 4 zones.
6 The king and the rook of the computer had a fixed starting position, and so did the king of the human opponent.

A

1, 3 and 4 are correct

34
Q

True or False: Ernst Zermelo (1912) provided a first general formalization of game playing for deterministic, 2-person, perfect-information games. The theorem essentially states that in such games either one of the two players can force a win, or both players can force a draw.

A

True

35
Q

Which statements about Retrograde Analysis (RA) are correct?

1 The RA algorithm goes back to Babbage (1846).
2 RA can never be too complex for any game (because it is easy to store all possible game states).
3 Several games have been partially solved using RA (Chess, Checkers).
4 RA starts to analyze the game forwards.
5 The RA algorithm builds up a database if we want to strongly solve a game.
6 Several games have been solved completely using RA (Tic-Tac-Toe, Go-Moku, Connect-4, …)

A

3, 5 and 6 are correct

36
Q

True or False: The “Shannon Number”, a frequently cited number for formalizing chess, is due to a back-of-the-envelope calculation by Claude Shannon.

A

True

37
Q

Which sentences about “Zero-Sum-Game” are true?

1 If Player 2 maximizes its score, it automatically maximizes the score of Player 1 too.
2 In principle Minimax can solve any 2-person zero-sum perfect-information game. (But in practice it is intractable to find a solution with Minimax, because the sizes of some game trees are too big.)
3 Both players try to optimize (maximize) their result.
4 Both players try to minimize (0 is the best) their result.
5 If Player 2 maximizes its score, it minimizes the score of Player 1.
6 Minimax is not suitable for solving zero-sum games.

A

2, 3 and 5 are correct

38
Q

Which sentences about “Heuristic Evaluation Function” (in Game playing) are true?

1 The idea is to produce an estimate of the expected utility of the game from a given position.
2 The only requirement is a lot of time for the computation.
3 The idea is to save all possible moves of the game.
4 The performance of the algorithm depends on the quality of the evaluation function.
5 The performance of the algorithm is independent of the quality of the evaluation function.
6 Requirements are: The evaluation function should order terminal nodes in the same way as the (unknown) true function; computation should not take too long; for non-terminal states the evaluation function should be strongly correlated with the actual chance of winning.

A

1, 4 and 6 are correct

39
Q

True or False: Iterative deepening is a repeated search with fixed depth for depths d = 1, …, D. One advantage is the simplification of time management. Moreover, the total number of nodes searched is often smaller than in non-iterative search!

A

True
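The scheme can be sketched as follows; the toy game tree and the `expand`/`evaluate` functions are illustrative assumptions, not from the course material:

```python
import time

def depth_limited_search(state, depth, expand, evaluate):
    # Plain fixed-depth negamax-style search (illustrative).
    children = expand(state)
    if depth == 0 or not children:
        return evaluate(state)
    return max(-depth_limited_search(c, depth - 1, expand, evaluate)
               for c in children)

def iterative_deepening(state, expand, evaluate, time_limit=0.1, max_depth=8):
    # Repeated search with fixed depth for d = 1, ..., D. Stopping when the
    # time budget runs out is easy: the last completed depth always
    # provides a usable result.
    deadline = time.monotonic() + time_limit
    best = evaluate(state)
    for d in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        best = depth_limited_search(state, d, expand, evaluate)
    return best

# Toy game tree: state n has children 2n and 2n+1; nodes >= 8 are leaves.
expand = lambda n: [] if n >= 8 else [2 * n, 2 * n + 1]
evaluate = lambda n: n
print(iterative_deepening(1, expand, evaluate))
```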

40
Q

True or False: Move ordering is crucial to the performance of alpha-beta search.

A

True

41
Q

The basic idea of “Transposition Tables” is to store found positions in a hash table. If a position occurs a second time, the value of the node does not have to be recomputed. Which sentences about the implementation of Transposition Tables are correct? Each entry in the hash table stores the …

1 … probability of finding that particular position.
2 … search depth of the stored value (in case we search deeper)
3 … hash key of the position (to eliminate collisions)
4 … state evaluation value (excluding whether this was an exact value or a fail-high/low value)
5 … time it took to calculate the position.
6 … state evaluation value (including whether this was an exact value or a fail-high/low value)

A

2, 3 and 6 are correct
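The fields from statements 2, 3 and 6 can be sketched as a minimal table entry; the names, table size and probe logic are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class TTEntry:
    key: int      # full hash key of the position (to eliminate collisions)
    depth: int    # search depth of the stored value (in case we search deeper)
    value: float  # state evaluation value
    flag: str     # "exact", or a fail-high / fail-low bound

table = {}
SIZE = 2 ** 20  # number of slots; the key is reduced modulo this

def tt_store(key, depth, value, flag):
    table[key % SIZE] = TTEntry(key, depth, value, flag)

def tt_probe(key, depth):
    entry = table.get(key % SIZE)
    # Usable only if the full key matches and the stored search
    # was at least as deep as the one we need now.
    if entry is not None and entry.key == key and entry.depth >= depth:
        return entry
    return None

tt_store(0xABCDEF, depth=6, value=0.25, flag="exact")
assert tt_probe(0xABCDEF, depth=4).value == 0.25  # deep enough: table hit
assert tt_probe(0xABCDEF, depth=8) is None        # stored value too shallow
```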

42
Q

What is the status of AI in Game Playing?

  1. World-Championship strength
  2. Draw
  3. Solved
  4. Human Supremacy

a. Bridge, Poker
b. Chess, Backgammon, Scrabble, Othello, Go, Shogi
c. Checkers
d. Tic-Tac-Toe, Connect-4, Go-Moku, 9-men Morris

A

1b, 2c, 3d, 4a

43
Q

In the scientific community there is a difference between Artificial Intelligence, Machine Learning and Deep Neural Networks. Match correctly!

  1. Which category means the same as probability?
  2. Which category is a subset on the one hand and contains another subset on the other hand?
  3. Which category contains the two other subsets?
  4. Which category is a subset of the two others?

a. None
b. Artificial Intelligence
c. Deep Neural Networks
d. Machine Learning

A

1a, 2d, 3b, 4c

44
Q

True or False: A Perceptron can implement all elementary logical functions (“AND” , “OR”, “NOT”).

A

True

45
Q

Which of the following (boolean) functions can a Perceptron NOT model?

1 logical “AND” (true if both inputs are true)
2 logical “XOR” (true if one of the inputs is true and the other is false)
3 logical “NOT” (true if the input is false)
4 logical “OR” (true if any input is true)

A

2 can’t be modelled
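Why XOR fails while AND works can be demonstrated with a single perceptron trained by the classical learning rule; the hyperparameters below are illustrative:

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    # One neuron: two weighted inputs, a bias, and a step activation.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # classical perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
accuracy = lambda f, data: sum(f(*x) == t for x, t in data) / len(data)

print(accuracy(train_perceptron(AND), AND))  # 1.0: AND is linearly separable
print(accuracy(train_perceptron(XOR), XOR))  # below 1.0: XOR is not
```

No number of epochs helps for XOR: a single perceptron draws one line through the input plane, and no line separates {(0,1), (1,0)} from {(0,0), (1,1)}.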

46
Q

True or False: Recurrent Neural Networks (RNN) allow to process sequential data by feeding back the output of the network into the next input.

A

True

47
Q

True or False: It’s about the Error Landscape of Perceptron Learning. The error function for one training example may be considered as a function in a multi-dimensional weight space. The best weight setting for one example is where the error measure for this example is minimal.

A

True

48
Q

Which sentences about “Multilayer Perceptrons” are correct?

1 The size of this hidden layer is determined automatically.
2 The output node(s) may never be combined with another perceptron.
3 The output nodes may be combined with another perceptron. (Which may also have multiple output nodes.)
4 The size of this hidden layer is determined manually.
5 Perceptrons may have multiple output nodes. (They may be viewed as multiple parallel perceptrons.)
6 Perceptrons never have multiple output nodes. (But they may have a lot of single perceptrons one after the other.)

A

3, 4 and 5 are correct

49
Q

True or False: It’s about Convolution in Convolutional Neural Networks. For each pixel of an image, a new feature is computed using a weighted combination of its n x n neighborhood.

A

True
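The per-pixel weighted combination can be sketched as a plain-Python 2D convolution; the edge-detection kernel and the tiny image are illustrative:

```python
def convolve2d(image, kernel):
    # For each pixel, compute a weighted combination of its n x n
    # neighborhood ("valid" mode: the kernel stays fully inside the image).
    n = len(kernel)
    h = len(image) - n + 1
    w = len(image[0]) - n + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(n) for b in range(n))
             for j in range(w)]
            for i in range(h)]

# 3x3 vertical-edge kernel on a tiny image with a step from 0 to 1.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 0, 1]] * 3
print(convolve2d(image, kernel))  # [[3, 3], [3, 3]]: the edge lights up
```

In a CNN the kernel weights are not hand-chosen like here but learned during training.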

50
Q

Match the four key learning steps of AlphaGo correctly!

  1. use self-play (from the selection policy) to train a utility function for evaluating a given game position
  2. learn from expert games an accurate expert policy
  3. refine the expert policy (using reinforcement learning)
  4. learn from expert games a fast but inaccurate roll-out policy

a. second step
b. fourth step
c. first step
d. third step

A

1b, 2a, 3d, 4c

51
Q

Which sentence is correct?

1 AlphaGo Zero is a version that learned to play only from self-play. It beat AlphaGo 50-50 using much less training data.
2 AlphaGo is an improved version that learned to play only from self-play. It beat AlphaGo Zero 100-0 using much less training data.
3 AlphaGo Zero is an improved version that learned to play only from self-play. It beat AlphaGo 100-0 using much less training data.
4 AlphaGo is a version that learned to play only from self-play. It beat AlphaGo Zero 50-50 using much less training data.

A

3 is correct

52
Q

It’s about opponent Modeling. Think of the example of Roshambo (Rock/Paper/Scissors). Which sentence is correct?

1 Optimal solutions don’t exist.
2 Optimal solutions are not maximal.
3 Optimal solutions are always minimal.
4 Optimal solutions are always maximal.

A

2 is correct

53
Q

True or False: The Monte-Carlo Search is an extreme case of Simulation Search. (Play a large number of games where both players make their moves randomly. Make the move that has the highest average score over the previous games.)

A

True

54
Q

True or False: The goal of “Reinforcement Learning” is learning of policies (action selection strategies) based on feedback from the environment.

A

True

55
Q

Match the information on reinforcement learning in MENACE correctly.

  1. Learning algorithm: Game won
  2. Initialisation
  3. Learning algorithm: Game lost
  4. This results in

a. All moves are equally likely
b. Positive reinforcement
c. increased likelihood that a successful move will be tried again respectively decreased likelihood that an unsuccessful move will be repeated
d. Negative reinforcement

A

1b, 2a, 3d, 4c

56
Q

Which statements are true for TD-gammon?

1 Improvements over MENACE are faster convergence because of Temporal Difference Learning, and a Neural Network (instead of boxes and beads)
2 There are no improvements over MENACE.
3 For the training, humans were needed who threw the dice and wrote down the moves of the program.
4 Developed from beginner to world-champion strength after 1,500,000 training games against itself
5 Led to changes in backgammon theory and was used as a popular practice and analysis partner of leading human players
6 The TD-Gammon program won the 1988 World Championship.

A

1, 4 and 5 are correct

57
Q

True or False: “Exploitation” means: Try to play the best possible move; and “Exploration” means: Try new moves to learn something new.

A

True

58
Q

How can different types of games be described?

  1. players alternate moves
  2. one player’s gain is the other player’s (or players’) loss
  3. Does every player see the entire game situation?
  4. Do random components influence the progress of the game?

a. perfect vs. imperfect information
b. Zero-Sum Games
c. turn-taking
d. deterministic games vs. games of chance

A

1c, 2b, 3a, 4d

59
Q

Connect each of the three game playing programs with exactly one of the possible characterizations (one will remain unassigned):

  1. AlphaZero
  2. AlphaGo
  3. Deep Blue

a. has learned from expert games
b. searches all possible move sequences until a fixed depth
c. uses no expert knowledge at all

A

1c, 2a, 3b

60
Q

Bring the four phases of Monte-Carlo Tree Search into their correct order:

Backpropagation
Expansion
Selection
Simulation

A

Selection
Expansion
Simulation
Backpropagation

61
Q

Assume you have a set of pictures of cats, and a set of pictures of dogs, and you use these to train a learning algorithm to recognize cats and dogs.
What learning algorithm is described here?

a. reinforcement learning
b. semi-supervised learning
c. unsupervised learning
d. supervised learning

A

d. supervised learning

62
Q

In a game, each side can make 3 possible moves in each position. A machine performs an exhaustive look-ahead search for 4 plies (half moves).
How many possible leaf positions will it consider?

a. 81
b. 32
c. 12
d. 64

A

a. 81 is correct
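The count follows directly from raising the branching factor to the search depth:

```python
# Exhaustive look-ahead: b possible moves per position, searched for
# d plies, gives b**d leaf positions.
branching, plies = 3, 4
leaves = branching ** plies
print(leaves)  # 81
```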

63
Q

True or False: Retrograde analysis attempts to solve a game by computing the game-theoretic value of each possible game position.

A

True

64
Q

True or False: It is not possible to write a chess program that always plays an optimal move, not even with unlimited memory and infinitely fast computation.

A

False

65
Q

True or False: The credit assignment problem describes the situation that an agent sometimes has to make sub-optimal actions in order to obtain information.

A

False

66
Q

True or False: A good machine learning algorithm tries to perfectly fit every point in the training dataset.

A

False