previous exam questions (for exam 2/3) Flashcards
These questions are taken from the pinned PDF file on the JKU Discord and were simply copied, not checked by me.
Select the true statement(s): What is the difference between Artificial Intelligence, Deep Neural Networks and Machine Learning?
- Nothing, they are all the same
- Artificial Intelligence is a subset of Machine Learning and Deep Neural Networks
- Machine Learning is a subset of Artificial Intelligence and Deep Neural Networks
- Deep Neural Networks is a subset of Artificial Intelligence and Machine Learning
4 is correct
True or False: A computer is said to learn if its performance at a task, as measured by a performance measure, declines with experience.
False
Select the true statement(s): What are some applications of Machine Learning?
- Recognizing Spam-Mail
- Market Basket analysis
- Recognize handwritten characters
- Logical reasoning
- Analyse dreams
- Solve ethical problems
1, 2 and 3 are correct
Select the true statement(s): What was achieved by using Machine Learning in astronomy?
- SKICAT can classify celestial objects with a 94% accuracy.
- AI is useless in astronomy since it cannot find differences between stars and galaxies.
- SKICAT can classify celestial objects with a 6% accuracy.
- 16 new quasars were found.
- AI classification was even slightly more accurate than astronomers.
- AI was good at analyzing pictures of quasars but could not find anything new.
1, 4 and 5 are correct.
What is NOT a learning method for Machine Learning?
- Supervised Learning
- Reinforcement Learning
- Auto-supervised Learning
- Unsupervised Learning
3 is not a learning method
Select the true statement(s): How can a machine learn to play a game?
- Cheating and performing invalid moves.
- Machines cannot learn.
- Rereading the game rules.
- Playing lots of games against itself.
4 is correct
Match the different learning scenarios in machine learning with their corresponding description:
- Unsupervised Learning
- Reinforcement learning
- Supervised learning
- Semi-Supervised Learning
a. Only a subgroup of the training data set has an additional label as target data.
b. The target value is provided for all training samples. This learning method is used for classification and regression.
c. This learning procedure relies on the feedback of the teacher and not on example values.
d. There is no further information on the training samples. This method is used for clustering and association rule discovery.
1d
2c
3b
4a
True or False: When we’re given a training data set with potential noise in it, our goal is to capture every single data point of our training samples in our hypothesis h.
False
True or False: Simulation search stops searching at a fixed position and uses an evaluation function to compute the output.
False
Select the true statement(s): What does the TD in TD-Gammon stand for?
- Turing Decoding
- Temporal Difference Learning
- Test Dice Roll
- Try and Defeat Learning
2 is correct
Choose the correct answers regarding Simulation Search.
- In the Monte Carlo tree search method, many games are simulated by making random moves and then the average score of these games is used to evaluate a decision.
- Simulation Search is a Machine Learning method that is used when the complete tree is not searchable.
- Simulation Search is a Machine Learning method that can only be used for rather small decision trees.
- With Simulation Search, every single branch is searched and evaluated.
- The Monte Carlo tree search method evaluates good decisions by trying out every possible move.
- With Simulation Search only a few branches are searched and evaluated.
1, 2 and 6 are correct
True or False: Shannon Type-A search algorithm is a brute force approach.
True
Select the correct statement(s): Choose the steps for genetic algorithms in the (usually applied / best-practice) order.
- Selection - Fitness - Mutation - Crossover
- Fitness - Mutation - Selection - Crossover
- Fitness - Selection - Crossover - Mutation
- Fitness - Crossover - Mutation - Selection
3 is correct
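Illustration (added, not from the exam): a minimal Python sketch of a genetic algorithm on the toy OneMax problem (maximize the number of 1-bits), with each generation running fitness, selection, crossover and mutation in exactly that order. All names and parameter values here are made up.

```python
import random

def one_max_ga(length=20, pop_size=30, generations=50, mutation_rate=0.02, seed=0):
    """Toy genetic algorithm for OneMax (maximize the number of 1-bits)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # 1. Fitness: score each individual (here: the number of ones).
        scored = sorted(pop, key=sum, reverse=True)
        # 2. Selection: keep the better half as parents (truncation selection).
        parents = scored[: pop_size // 2]
        # 3. Crossover: single-point crossover of two random parents.
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            children.append(a[:cut] + b[cut:])
        # 4. Mutation: flip each bit with a small probability.
        pop = [[bit ^ (rng.random() < mutation_rate) for bit in child]
               for child in children]
    return max(sum(ind) for ind in pop)
```

After a few dozen generations the best individual should be close to the all-ones string.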
Map the following important search concepts.
- Evaluation Functions
- Iterative Deepening
- Selective Search
- Transposition Tables
a. … limit(s) the depth
b. … avoid(s) re-computing the same state again
c. … limit(s) used time/processing power
d. … limit(s) the height
1d
2a
3c
4b
True or False: Alpha-Beta-Search is in general faster than Min-Max (because of pruning uninteresting paths) but therefore only yields approximate results which are not necessarily the same.
False
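Why the statement is False on the “approximate results” part: with sound pruning, Alpha-Beta returns exactly the Minimax value, just while visiting fewer nodes. A small self-contained sketch (the example tree is invented):

```python
import math

def minimax(node, maximizing):
    """Plain minimax on a game tree given as nested lists (leaves are scores)."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Alpha-beta search: same value as minimax, but prunes hopeless branches."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the minimizer would never allow this line
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:       # prune: the maximizer would never allow this line
            break
    return value

# A tiny example tree: three moves, each answered by three replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

Both searches agree on the root value; only the number of visited leaves differs.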
Select the correct statement(s) about Overfitting.
1 … is not avoidable without a validation set
2 … only occurs if the model is not adequate independently of the data
3 … causes the test-set error to increase, although the training set error is not affected
4 … does not affect MLPs but only Deep Neural Networks
3 is correct
True or False: Deep Neural Networks are easily explainable by design
False
Associate the following Machine-Learning Concepts.
- Credit Assignment Problem
- MCTS
- UCB
- MENACE
a. Describes Exploration vs Exploitation
b. Essentially uses the Tree- and Rollout-Policy
c. Mainly Delayed Reward
d. Pioneer-Project regarding Reinforcement Learning
1c
2b
3a
4d
Choose the correct order for Monte-Carlo Tree Search.
1 Selection - Simulation - Expansion - Backpropagation
2 Selection - Backpropagation - Simulation - Expansion
3 Selection - Expansion - Simulation - Backpropagation
4 Selection - Separation - Expansion - Backpropagation
3 is correct
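Illustration (added): the four phases in that order, on a toy Nim game (take 1 or 2 stones; whoever takes the last stone wins). The game, the constants and the names are made up for this sketch.

```python
import math
import random

class Node:
    """A search-tree node: `stones` remain, the player to move takes 1 or 2."""
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = [m for m in (1, 2) if m <= stones]

def mcts_best_move(stones, n_iter=2000, c=1.4, seed=0):
    """MCTS: Selection - Expansion - Simulation - Backpropagation."""
    rng = random.Random(seed)
    root = Node(stones)
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend via UCB1 while nodes are fully expanded.
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one child for a random untried move.
        if node.untried:
            m = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves until the game ends.
        left, plies = node.stones, 0
        while left > 0:
            left -= rng.choice([m for m in (1, 2) if m <= left])
            plies += 1
        # An odd number of plies means the player to move at `node` won.
        to_move_won = plies % 2 == 1
        # 4. Backpropagation: update counts up to the root; each node
        # stores wins from the view of the player who moved INTO it.
        while node is not None:
            node.visits += 1
            if not to_move_won:
                node.wins += 1
            to_move_won = not to_move_won
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

With 2 stones left, taking both wins immediately, and the search finds that move.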
Match the definitions correctly:
- Supervised learning
- Unsupervised learning
- Semi-supervised Learning
- Reinforcement learning
a. Only a subset of the training examples is labeled as good or bad actions
b. There are occasional rewards (feedback) for good actions
c. Correct answers are not given (there is no information except the training examples)
d. Correct answers are given for each example
1d
2c
3a
4b
What is the idea behind Neural Networks?
1 to memorise many examples
2 to find a way to implement very complex principles
3 to process information like a serial computer
4 to model brains and nervous systems
4 is correct
True or False: Artificial Neurons are connected to each other via synapses.
True
Select the correct statement(s): What is a way of avoiding overfitting?
1 keeping a separate validation set to watch the performance of test and training sets
2 iterating only once through all examples
3 making no adjustments throughout the fitting process
4 There is no solution for overfitting
1 is correct
True or False: The terms Machine Learning, Artificial Intelligence and Deep Neural Networks mean the same thing.
False
Select the correct statement(s):
The idea of neural networks is based on
1 The fibonacci number
2 The human brain
3 A famous computer program
4 Ants
2 is correct
Which three of these Machine Learning methods do exist?
1 Randomized Learning
2 Reinforcement Learning
3 Improvised Learning
4 Semi-supervised Learning
5 Independent Learning
6 Supervised Learning
2, 4, 6 are correct
True or False: A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain
True
Map the terms connected to ML in games to corresponding definitions.
- Reinforcement learning
- Regret
- Upper confidence bound
- Roll-out
a. Machine learning technique which tries to maximize a reward function
b. Algorithm balancing between exploiting its current possibilities and exploring new ones
c. Technique to evaluate unexplored paths in the game tree
d. Difference between obtained and optimal gain
1a, 2d, 3b, 4c
Select the correct statement(s): Where is machine learning suited when classical programming reaches its limits?
1 finding patterns in data
2 unknown environments
3 real-time computation
4 highly complex situations
5 Playing games like Sudoku
6 processing a lot of data
1, 2, 4 and 6 are correct
Match the components to their descriptions.
- should be minimized during training
- introduce nonlinearity
- combining multiple input links to a single value
- specifies how much the weights change every update
a. error
b. activation function
c. learning rate
d. input function
1a, 2b, 3d, 4c
True or False: For defining a search problem, a state description and a goal function are needed.
True
Select the true statement(s): What is overfitting?
1 stopping training when the validation loss goes up
2 the AI system being biased
3 The AI getting better on the training set but worse on unseen data
4 using too many hidden layers
3 is correct
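Illustration (added): a learner that simply memorizes the training set (1-nearest-neighbour) gets a perfect training score but does much worse on fresh, noisy data — exactly the gap described in statement 3. The data set and noise level below are invented.

```python
import random

def labels(xs, rng, noise=0.2):
    """True concept: label is 1 iff x > 0.5, but 20% of labels are flipped."""
    return [(1 if x > 0.5 else 0) ^ (rng.random() < noise) for x in xs]

def one_nn(train_x, train_y, x):
    """Memorizing learner: predict the label of the nearest training point."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

rng = random.Random(0)
train_x = [i / 100 for i in range(100)]
train_y = labels(train_x, rng)
test_x = [(i + 0.5) / 100 for i in range(100)]   # unseen points
test_y = labels(test_x, rng)

train_acc = sum(one_nn(train_x, train_y, x) == y
                for x, y in zip(train_x, train_y)) / len(train_x)
test_acc = sum(one_nn(train_x, train_y, x) == y
               for x, y in zip(test_x, test_y)) / len(test_x)
```

Training accuracy is exactly 1.0 (every point is its own nearest neighbour), while accuracy on the unseen points is far lower: the model has fitted the noise.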
In 1912, the Spanish engineer Leonardo Torres y Quevedo completed a machine, El Ajedrecista, that was able to automatically play (and win) the KRK chess endgame. Which statements are true?
1 The individual moves of the machine depended on the moves of the human opponent.
2 The individual moves of the machine were independent of the moves of the human opponent.
3 The chessboard was divided into 3 zones.
4 The king and the rook of the computer had a fixed starting position; the king of the human opponent could be placed anywhere in the square from h1 to c6.
5 The chessboard was divided into 4 zones.
6 The king and the rook of the computer had a fixed starting position, and so did the king of the human opponent.
1, 3 and 4 are correct
True or False: Ernst Zermelo (1912) provided a first general formalization of game playing for deterministic, 2-person, perfect information games. The theorem essentially states that in such games either one of the two players can force a win or both players can force a draw.
True
Which statements about Retrograde Analysis (RA) are correct?
1 The RA algorithm goes back to Babbage (1846).
2 RA can never be too complex for any game (because it is easy to store all possible game states).
3 Several games have been partially solved using RA (Chess, Checkers).
4 RA starts to analyze the game forwards.
5 The RA algorithm builds up a database if we want to strongly solve a game.
6 Several games have been solved completely using RA (Tic-Tac-Toe, Go-Moku, Connect-4, …)
3, 5 and 6 are correct
True or False: The “Shannon Number”, a frequently cited number for formalizing chess, is due to a back-of-the-envelope calculation by Claude Shannon.
True
Which sentences about “Zero-Sum Games” are true?
1 If Player 2 maximizes its score, it automatically maximizes the score of Player 1 too.
2 In principle, Minimax can solve any 2-person zero-sum perfect information game. (But in practice it is intractable to find a solution with Minimax, because the sizes of some game trees are too big.)
3 Both players try to optimize (maximize) their result.
4 Both players try to minimize (0 is the best) their result.
5 If Player 2 maximizes its score, it minimizes the score of Player 1.
6 Minimax is not suitable for solving zero-sum games.
2, 3 and 5 are correct
Which sentences about the “Heuristic Evaluation Function” (in game playing) are true?
1 The idea is to produce an estimate of the expected utility of the game from a given position.
2 The only requirement is a lot of time for the computation.
3 The idea is to save all possible moves of the game.
4 The performance of the algorithm depends on the quality of the evaluation function.
5 The performance of the algorithm is independent of the quality of the evaluation function.
6 Requirements are: the evaluation function should order terminal nodes in the same way as the (unknown) true function; computation should not take too long; for non-terminal states the evaluation function should be strongly correlated with the actual chance of winning.
1, 4 and 6 are correct
True or False: Iterative deepening is a repeated search with fixed depth for depths d = 1, …, D. One advantage is the simplification of time management. Nevertheless, the total number of nodes searched is often smaller than in non-iterative search!
True
True or False: The move ordering is crucial to the performance of alphabeta search.
True
The basic idea of “Transposition Tables” is to store found positions in a hash table. If a position occurs a second time, the value of the node does not have to be recomputed. Which sentences about the implementation of Transposition Tables are correct? Each entry in the hash table stores the…
1 … probability of finding that particular position.
2 … search depth of stored value (in case we search deeper)
3 … hash key of position (to eliminate collisions)
4 … state evaluation value (except whether this was an exact value or a fail high/low value)
5 … time it took to calculate the position.
6 … state evaluation value (including whether this was an exact value or a fail high/low value)
2, 3 and 6 are correct
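Illustration (added): the core caching idea in miniature, on a toy Nim game (take 1 or 2 stones; taking the last one wins). A real chess transposition table would additionally store the search depth, the hash key and the exact/fail-high/fail-low flag, as in the correct options above; this sketch only stores the value.

```python
def solve(stones, table=None):
    """Minimax value of a toy Nim position, with a transposition table:
    each position is computed once and then looked up, never recomputed."""
    if table is None:
        table = {}
    if stones == 0:
        return False                    # the player to move has already lost
    if stones in table:                 # transposition hit: reuse stored value
        return table[stones]
    # The player to move wins iff some move leaves the opponent losing.
    value = any(not solve(stones - m, table) for m in (1, 2) if m <= stones)
    table[stones] = value
    return value
```

Positions with a multiple of 3 stones are losing for the player to move; everything else is winning.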
What is the status of AI in Game Playing?
- World-Championship strength
- Draw
- Solved
- Human Supremacy
a. Bridge, Poker
b. Chess, Backgammon, Scrabble, Othello, Go, Shogi
c. Checkers
d. Tic-Tac-Toe, Connect-4, Go-Moku, 9-men Morris
1b, 2c, 3d, 4a
In the scientific community there is a difference between Artificial Intelligence, Machine Learning and Deep Neural Networks. Match correctly!
- Which category means the same as probability?
- Which category is a subset on the one hand and contains another subset on the other hand?
- Which category contains the two other subsets?
- Which category is a subset of the two others?
a. None
b. Artificial Intelligence
c. Deep Neural Networks
d. Machine Learning
1a, 2d, 3b, 4c
True or False: A Perceptron can implement all elementary logical functions (“AND” , “OR”, “NOT”).
True
Which of the following (boolean) function can a Perceptron NOT model?
1 logical “AND” (true if both inputs are true)
2 logical “XOR” (true if one of the inputs is true and the other is false)
3 logical “NOT” (true if the input is false)
4 logical “OR” (true if any input is true)
2 can’t be modelled
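Illustration (added): the classic result in code. A single perceptron finds a linear separator for “AND” but can never classify all four “XOR” cases correctly, because XOR is not linearly separable. The training loop below is a made-up minimal sketch.

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    """Train a single threshold unit with the perceptron learning rule.
    Returns True iff all samples are classified correctly afterwards."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward the correct output.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
               for (x1, x2), t in samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

The AND data is linearly separable, so the perceptron converges; for XOR no weight setting can ever be correct on all four inputs.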
True or False: Recurrent Neural Networks (RNN) allow to process sequential data by feeding back the output of the network into the next input.
True
True or False: It’s about the Error Landscape of Perceptron Learning. The error function for one training example may be considered as a function in a multi-dimensional weight space. The best weight setting for one example is where the error measure for this example is minimal.
True
Which sentences about “Multilayer Perceptrons” are correct?
1 The size of the hidden layer is determined automatically.
2 The output node(s) may never be combined with another perceptron.
3 The output nodes may be combined with another perceptron. (Which may also have multiple output nodes.)
4 The size of the hidden layer is determined manually.
5 Perceptrons may have multiple output nodes. (They may be viewed as multiple parallel perceptrons.)
6 Perceptrons never have multiple output nodes. (But they may have a lot of single perceptrons one after the other.)
3, 4 and 5 are correct
True or False: It’s about Convolution in Convolutional Neural Networks. For each pixel of an image, a new feature is computed using a weighted combination of its n x n neighborhood.
True
Match the four key learning steps of AlphaGo correctly!
- use self-play (from the selection policy) to train a utility function for evaluating a given game position
- learn from expert games an accurate expert policy
- refine the expert policy (using reinforcement learning)
- learn from expert games a fast but inaccurate roll-out policy
a. second step
b. fourth step
c. first step
d. third step
1b, 2a, 3d, 4c
Which sentence is correct?
1 AlphaGo Zero is a version that learned to play only from self-play. It beat AlphaGo 50-50 using much less training data.
2 AlphaGo is an improved version that learned to play only from self-play. It beat AlphaGo Zero 100-0 using much less training data.
3 AlphaGo Zero is an improved version that learned to play only from self-play. It beat AlphaGo 100-0 using much less training data.
4 AlphaGo is a version that learned to play only from self-play. It beat AlphaGo Zero 50-50 using much less training data.
3 is correct
It’s about Opponent Modeling. Think of the example of Roshambo (Rock/Paper/Scissors). Which sentence is correct?
1 Optimal solutions don’t exist.
2 Optimal solutions are not maximal.
3 Optimal solutions are always minimal.
4 Optimal solutions are always maximal.
2 is correct
True or False: The Monte-Carlo Search is an extreme case of Simulation Search: (Play a large number of games where both players make their moves randomly. Make the move that has the highest average score of the previous games.)
True
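Illustration (added): flat Monte-Carlo search exactly as described above, on a toy Nim game (take 1 or 2 stones; taking the last stone wins) — simulate many random games for every legal move and pick the move with the best average result. Names and playout counts are invented.

```python
import random

def random_playout(stones, rng):
    """Both players move randomly until the stones are gone. Returns True
    iff the player to move at the start takes the last stone (and wins)."""
    ply = 0
    while stones > 0:
        stones -= rng.choice([m for m in (1, 2) if m <= stones])
        ply += 1
    return ply % 2 == 1

def monte_carlo_move(stones, playouts=2000, seed=0):
    """Flat Monte-Carlo search: average many random games per legal move,
    then play the move with the highest average score."""
    rng = random.Random(seed)
    best_move, best_score = None, -1.0
    for move in (m for m in (1, 2) if m <= stones):
        # After our move the opponent is to move; we win iff they lose.
        wins = sum(not random_playout(stones - move, rng)
                   for _ in range(playouts))
        if wins / playouts > best_score:
            best_move, best_score = move, wins / playouts
    return best_move
```

With 2 stones left, taking both wins every playout, so that move gets average score 1.0 and is chosen.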
True or False: The goal of “Reinforcement Learning” is learning of policies (action selection strategies) based on feedback from the environment.
True
Match the information on reinforcement learning in MENACE correctly.
- Learning algorithm: Game won
- Initialisation
- Learning algorithm: Game lost
- This results in
a. All moves are equally likely
b. Positive reinforcement
c. increased likelihood that a successful move will be tried again respectively decreased likelihood that an unsuccessful move will be repeated
d. Negative reinforcement
1b, 2a, 3d, 4c
Which statements are true for TD-Gammon?
1 Improvements over MENACE are faster convergence because of Temporal Difference Learning, and a neural network (instead of boxes and beads).
2 There are no improvements over MENACE.
3 For the training, humans were needed who threw the dice and wrote down the moves of the program.
4 It developed from beginner to world-champion strength after 1,500,000 training games against itself.
5 It led to changes in backgammon theory and was used as a popular practice and analysis partner of leading human players.
6 The TD-Gammon program won the 1988 World Championship.
1, 4 and 5 are correct
True or False: “Exploitation” means: Try to play the best possible move; and “Exploration” means: Try new moves to learn something new.
True
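Illustration (added): the UCB1 rule balances exactly this trade-off — the first term exploits the current average reward, the second term explores rarely tried options. Below is a made-up two-armed bandit; the better arm ends up being played far more often.

```python
import math
import random

def ucb1(total_plays, wins, plays, c=1.4):
    """UCB1 value of one arm: average reward (exploitation) plus an
    exploration bonus that shrinks as the arm is played more often."""
    return wins / plays + c * math.sqrt(math.log(total_plays) / plays)

def play_bandit(probs, rounds=5000, seed=0):
    """Each round, pull the arm with the highest UCB1 value.
    Returns the fraction of pulls each arm received."""
    rng = random.Random(seed)
    n = len(probs)
    plays = [0] * n
    wins = [0.0] * n
    for t in range(rounds):
        if t < n:
            arm = t          # play every arm once to initialize the counts
        else:
            arm = max(range(n), key=lambda i: ucb1(t, wins[i], plays[i]))
        plays[arm] += 1
        wins[arm] += 1 if rng.random() < probs[arm] else 0
    return [p / rounds for p in plays]
```

With win probabilities 0.2 and 0.8, almost all pulls eventually go to the second arm, while the first still gets the occasional exploratory try.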
How can different types of games be described?
- players alternate moves
- one player’s gain is the other player’s (or players’) loss
- Does every player see the entire game situation?
- Do random components influence the progress of the game?
a. perfect vs. imperfect information
b. Zero-Sum Games
c. turn-taking
d. deterministic games vs. games of chance
1c, 2b, 3a, 4d
Connect each of the three game playing programs with exactly one of the possible characterizations (one will remain unassigned):
- AlphaZero
- AlphaGo
- Deep Blue
a. has learned from expert games
b. searches all possible move sequences until a fixed depth
c. uses no expert knowledge at all
1c, 2a, 3b
Bring the four phases of Monte-Carlo Tree Search into their correct order:
- Backpropagation
- Expansion
- Selection
- Simulation
Selection - Expansion - Simulation - Backpropagation
Assume you have a set of pictures of cats, and a set of pictures of dogs, and you use these to train a learning algorithm to recognize cats and dogs.
What learning algorithm is described here?
a. reinforcement learning
b. semi-supervised learning
c. unsupervised learning
d. supervised learning
d. supervised learning
In a game, each side can make 3 possible moves in each position. A machine performs an exhaustive look-ahead search for 4 plies (half moves).
How many possible leaf positions will it consider?
a. 81
b. 32
c. 12
d. 64
a. 81 is correct
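Check (added): with branching factor 3 and 4 plies the leaf count is 3^4 = 81. The second function below enumerates the game tree explicitly to confirm the closed form.

```python
def leaf_positions(branching, plies):
    """Closed form: each ply multiplies the position count by the branching factor."""
    return branching ** plies

def count_leaves(branching, plies):
    """Explicit enumeration of the same uniform game tree, as a sanity check."""
    if plies == 0:
        return 1
    return sum(count_leaves(branching, plies - 1) for _ in range(branching))
```

Both give 81 for 3 moves per position and a 4-ply look-ahead.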
True or False: Retrograde analysis attempts to solve a game by computing the game-theoretic value of each possible game position.
True
True or False: It is not possible to write a chess program that always plays an optimal move, not even with unlimited memory and infinitely fast computation.
False
True or False: The credit assignment problem describes the situation that an agent sometimes has to make sub-optimal actions in order to obtain information.
False
True or False: A good machine learning algorithm tries to perfectly fit every point in the training dataset.
False