Student Questions Exam Part 2 Flashcards

1
Q

What is the difference between Artificial Intelligence, Deep Neural Networks and Machine Learning?

  1. Nothing, they are all the same
  2. Artificial Intelligence is a subset of Machine Learning and Deep Neural Networks
  3. Machine Learning is a subset of Artificial Intelligence and Deep Neural Networks
  4. Deep Neural Networks is a subset of Artificial Intelligence and Machine Learning

Machine Learning

A

4 is True

2
Q

True or False

A computer is said to learn if its performance at a task, as measured by a performance measure, declines with experience.

Machine Learning

A

False

3
Q

What are some applications of Machine Learning?

  1. Recognizing Spam-Mail
  2. Market Basket analysis
  3. Recognize handwritten characters
  4. Logical reasoning
  5. Analyse dreams
  6. Solve ethical problems

Machine Learning

A

1,2,3

4
Q

What was achieved by using Machine Learning in astronomy?

  1. SKICAT can classify celestial objects with a 94% accuracy.
  2. AI is useless in astronomy since it cannot distinguish between stars and galaxies.
  3. SKICAT can classify celestial objects with a 6% accuracy.
  4. 16 new quasars were found.
  5. AI classification was even slightly more accurate than astronomers.
  6. AI was good at analyzing pictures of quasars but could not find anything new.

Machine Learning

A

1,4

5
Q

What is NOT a learning method for Machine Learning?

  1. Supervised Learning
  2. Reinforcement Learning
  3. Auto-supervised Learning
  4. Unsupervised Learning

Machine Learning

A

3

6
Q

How can a machine learn to play a game?

  1. Cheating and performing invalid moves.
  2. Machines cannot learn.
  3. Rereading the game rules.
  4. Playing lots of games against itself.

Machine Learning

A

4

7
Q

Match the different learning scenarios in machine learning with their correct descriptions.

  1. Unsupervised Learning
  2. Reinforcement learning
  3. Supervised learning
  4. Semi-Supervised Learning

a. Only a subset of the training data set has an additional label as target data.
b. The target value is provided for all training samples. This learning method is used for classification and regression.
c. This learning procedure relies on the feedback of the teacher and not on example values.
d. There is no further information on the training samples. This method is used for clustering and association rule discovery

Machine Learning

A

1d
2c
3b
4a

8
Q

True or False

When we are given a training data set with potential noise in it, our goal is to capture every single data point of our training samples in our hypothesis h.

Machine Learning

A

False

9
Q

Overfitting…

  1. … is not avoidable without a validation set
  2. … only occurs if the model is not adequate independently of the data
  3. … causes the test-set error to increase, although the training set error is not affected
  4. … does not affect MLPs but only Deep Neural Networks
A

Let’s analyze each statement about overfitting:
1. "… is not avoidable without a validation set"
False. While a validation set helps detect and mitigate overfitting, it is not the only way to prevent it. Techniques like regularization, dropout, data augmentation, and early stopping can help reduce overfitting even without a validation set.
2. "… only occurs if the model is not adequate independently of the data"
False. Overfitting happens when a model becomes too complex for the data (e.g., a model with too many parameters relative to the size of the dataset). It depends on the interaction between the model and the data; even an otherwise “adequate” model can overfit if the data is insufficient or noisy.
3. "… causes the test-set error to increase, although the training set error is not affected"
True. Overfitting typically leads to excellent performance on the training set (low training error) but poor generalization to unseen data (high test set error). This is a hallmark of overfitting.
4. "… does not affect MLPs but only Deep Neural Networks"
False. Overfitting can affect any type of model, including Multilayer Perceptrons (MLPs). While it is more common in deep neural networks due to their higher complexity, MLPs with too many parameters or insufficient data can also overfit.

Correct Answer: 3
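A small numerical illustration of statement 3 (not part of the original card; the sine data, noise level, and polynomial degrees below are made up, and only NumPy is used): as the model gets more complex, the training error keeps shrinking while the error on unseen test data typically starts rising again.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)                               # few, noisy training points
x_test = np.linspace(0, 1, 200)                               # unseen test points
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.shape)
y_test = np.sin(2 * np.pi * x_test)                           # noise-free test targets

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)             # fit a polynomial of this degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree={degree}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

The degree-9 fit passes through every noisy training point (near-zero training error) but should generalize worse than the degree-3 fit, which is exactly the pattern described in statement 3.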

10
Q

True or False

Deep Neural Networks are easily explainable by design

Machine Learning

A

False

11
Q

Match the definitions correctly:

  1. Supervised learning
  2. Unsupervised learning
  3. Semi-supervised Learning
  4. Reinforcement learning

a. Only a subset of the training examples is labeled as good or bad actions
b. There are occasional rewards (feedback) for good actions
c. correct answers are not given (there is no information except the training examples)
d. correct answers are given for each example

Machine Learning

A

1d
2c
3a
4b

12
Q

What is the idea behind Neural Networks?

  1. to memorise many examples
  2. to find a way to implement very complex principles
  3. to process information like a serial computer
  4. to model brains and nervous systems

Machine Learning

A

4

13
Q

True or False

Artificial neurons are connected to each other via synapses

Machine Learning

A

False

14
Q

What is a way of avoiding overfitting?

  1. keeping a separate validation set to watch the performance of test and training sets
  2. iterating only once through all examples
  3. making no adjustments throughout the fitting process
  4. there is no solution to overfitting

Machine Learning

A

1

15
Q

True or False

The terms Machine Learning, Artificial Intelligence and Deep Neural Networks mean the same thing.

Machine Learning

A

False

16
Q

The idea of neural networks is based on:

  1. the Fibonacci number
  2. the human brain
  3. a famous computer program
  4. ants

Machine Learning

A

2

17
Q

Machine Learning

A
18
Q

Which three of these Machine Learning methods exist?

  1. randomized learning
  2. reinforcement learning
  3. improvised learning
  4. semi-supervised learning
  5. independent learning
  6. supervised learning

Machine Learning

A

2,4,6

19
Q

When is machine learning suited where classical programming comes to its limits?

  1. finding patterns in data
  2. unknown environments
  3. real-time computation
  4. highly complex situations
  5. playing games like sudoku
  6. processing a lot of data

Machine Learning

A

1,2

20
Q

Match the components to their descriptions.

  1. should be minimized during training
  2. introduce nonlinearity
  3. combining multiple input links into a single value
  4. specifies how much the weights change with every update

a. error
b. activation function
c. learning rate
d. input function

Machine Learning

A

1a
2b
3d
4c
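To make the matching concrete, here is a minimal single-perceptron update step (a sketch only; the step activation, the AND training data, and the learning rate of 0.1 are assumptions for illustration, not the lecture's code). It shows where the input function, activation function, error, and learning rate each appear.

```python
import numpy as np

def step(z):                          # activation function: introduces nonlinearity
    return 1.0 if z >= 0.0 else 0.0

def train_step(w, x, target, learning_rate=0.1):
    z = np.dot(w, x)                  # input function: combines all input links into a single value
    y = step(z)                       # activation applied to that combined value
    error = target - y                # error: the quantity training tries to drive to zero
    return w + learning_rate * error * x   # learning rate: scales how much the weights change

# toy usage: learn logical AND; the first component of each input is a constant bias of 1
w = np.zeros(3)
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
for _ in range(10):
    for x, t in data:
        w = train_step(w, np.array(x, dtype=float), t)
print("weights after training:", w)   # these weights implement AND
```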

21
Q

What is overfitting?

  1. stopping learning when the validation loss goes up
  2. the AI system being biased
  3. the AI getting better on the training set but worse on unseen data
  4. using too many hidden layers

Machine Learning

A

3 is correct

22
Q

In the scientific community there is a difference between Artificial Intelligence, Machine Learning and Deep Neural Networks.

  1. Which category means the same as probability?
  2. Which category is a subset on the one hand and contains another subset on the other hand?
  3. Which category contains the two other subsets?
  4. Which category is a subset of the two others?

a. None
b. Artificial Intelligence
c. Deep Neural Networks
d. Machine Learning

Machine Learning

A

1a
2d
3b
4c

23
Q

True or False

A perceptron can implement all elementary logical functions (“AND”, “OR”, “NOT”)

Machine Learning

A

True

24
Q

Which of the following (Boolean) functions can a Perceptron NOT model?

  1. logical "AND" (true if both inputs are true)
  2. logical "XOR" (true if one of the inputs is true and the other is false)
  3. logical "NOT" (true if the input is false)
  4. logical "OR" (true if any input is true)

Machine Learning

A

2 is correct
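A short way to see why (a standard textbook argument, not quoted from the deck): a perceptron outputs 1 exactly when w1*x1 + w2*x2 >= t. To model XOR it would need w1 >= t (input (1,0) is true), w2 >= t (input (0,1) is true), t > 0 (input (0,0) is false), and w1 + w2 < t (input (1,1) is false). Adding the first two conditions gives w1 + w2 >= 2t > t, which contradicts the last one, so no choice of weights and threshold works. Geometrically, the four XOR points are not linearly separable, while AND, OR and NOT are.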

25
Q

True or False

Recurrent Neural Networks (RNNs) make it possible to process sequential data by feeding the output of the network back into the next input.

Machine Learning

A

True

26
Q

True or False

*Related to the Error Landscape of Perceptron Learning.*

The error function for one training example may be considered as a function in a multi-dimensional weight space. The best weight setting for one example is where the error measure for this example is minimal.

Machine Learning

A

True
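A tiny numerical sketch of that picture (the single training example, the squared-error measure, and the learning rate are made up for illustration): the error of one example, seen as a function of a single weight, has its minimum at the best weight setting, and gradient descent walks downhill in weight space towards it.

```python
x, t = 2.0, 3.0                      # one training example: input 2, target 3
w = 0.0                              # start somewhere in weight space
for _ in range(20):
    error = (w * x - t) ** 2         # the error landscape E(w) for this single example
    gradient = 2 * (w * x - t) * x   # dE/dw
    w -= 0.05 * gradient             # move downhill in weight space
print(w, (w * x - t) ** 2)           # w approaches 1.5, where the error is minimal
```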

27
Q

Which sentences about "Multilayer Perceptrons" are correct?

  1. The size of the hidden layer is determined automatically.
  2. The output node(s) never combine with another perceptron.
  3. The output nodes may be combined with another perceptron. (Which may also have multiple output nodes.)
  4. The size of the hidden layer is determined manually.
  5. Perceptrons may have multiple output nodes. (They may be viewed as multiple parallel perceptrons.)
  6. Perceptrons never have multiple output nodes. (But they may have a lot of single perceptrons one after the other.)

Machine Learning

A

3,4,5

28
Q

True or False

*about Convolution in Convolutional Neural Networks.*

For each pixel of an image, a new feature is computed using a weighted combination of its n x n neighborhood.

Machine Learning

A

True
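A minimal sketch of that statement in plain NumPy (the vertical-edge kernel, the 'valid' border handling without padding, and the toy image are assumptions for illustration): each output value is a weighted combination of an n x n neighbourhood of the corresponding input pixel.

```python
import numpy as np

def conv2d(image, kernel):
    n = kernel.shape[0]                         # assume a square n x n kernel
    h, w = image.shape
    out = np.zeros((h - n + 1, w - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + n, j:j + n]     # the n x n neighbourhood of pixel (i, j)
            out[i, j] = np.sum(patch * kernel)  # weighted combination -> one new feature
    return out

# toy usage: a 3x3 vertical-edge kernel on an image whose right half is bright
image = np.array([[0, 0, 0, 1, 1, 1]] * 4, dtype=float)
kernel = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
print(conv2d(image, kernel))                    # responds only where a window straddles the edge
```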

29
Q

True or False

Simulation search stops searching at a fixed position and uses an evaluation function to compute the output.

Machine Learning in Games

A

True

Explanation:

Simulation search, such as in Monte Carlo Tree Search (MCTS) or other game-playing algorithms, often stops searching at a fixed position (e.g., after reaching a certain depth or computational budget). At this point, an evaluation function is used to estimate the value or potential outcome of that position, rather than continuing to simulate to the end of the game. This approach balances computational efficiency with decision quality.

30
Q

What does the TD in TD-Gammon stand for?

  1. Turing Decoding
  2. Temporal Difference Learning
  3. Test Dice Roll
  4. Try and Defeat Learning

Machine Learning in Games

A

Correct Answer:
2. Temporal Difference Learning

Explanation:

The “TD” in TD-Gammon stands for Temporal Difference Learning, which is a reinforcement learning technique. TD-Gammon, a program developed by Gerald Tesauro, uses temporal difference learning to improve its backgammon-playing strategy by predicting future rewards and adjusting its evaluations accordingly. This approach combines aspects of dynamic programming and Monte Carlo methods.
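To make "temporal difference" concrete, here is a minimal tabular TD(0) value update (an illustrative sketch only; TD-Gammon itself used a neural network over backgammon positions, and the states, rewards, learning rate and discount below are made up):

```python
alpha, gamma = 0.1, 1.0                         # learning rate and discount factor
V = {"s0": 0.0, "s1": 0.0, "terminal": 0.0}     # value estimates for each state

# one hypothetical episode: s0 -> s1 (reward 0), then s1 -> terminal (reward 1, a win)
episode = [("s0", 0.0, "s1"), ("s1", 1.0, "terminal")]

for state, reward, next_state in episode:
    td_error = reward + gamma * V[next_state] - V[state]   # temporal-difference error
    V[state] += alpha * td_error                            # move V(state) towards the target
print(V)   # s1 is updated first; earlier states catch up over many repeated episodes
```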

31
Q

Choose the correct answers regarding Simulation Search.

  1. In the Monte Carlo tree search method, many games are simulated by making random moves and then the average score of these games is used to evaluate a decision.
  2. Simulation Search is a Machine Learning method that is used when the complete tree is not searchable.
  3. Simulation Search is a Machine Learning method that can only be used for rather small decision trees.
  4. With Simulation Search, every single branch is searched and evaluated.
  5. The Monte Carlo tree search method evaluates good decisions by trying out every possible move.
  6. With Simulation Search only a few branches are searched and evaluated.

Machine Learning in Games

A

Correct Answers:

1,2,6

Why These Are Correct

Nr. 1. In the Monte Carlo tree search method, many games are simulated by making random moves and then the average score of these games is used to evaluate a decision.
• Explanation: This describes the essence of Monte Carlo Tree Search (MCTS). The method involves simulating games from a given state by making random or guided moves and then using the average outcomes (e.g., win/loss ratio) of these simulations to guide future decisions.

Nr. 2. Simulation Search is a Machine Learning method that is used when the complete tree is not searchable.
• Explanation: Simulation Search is applied when the decision tree is too vast to search exhaustively, which is common in games with high complexity like Go or Chess. The method selectively explores the most promising parts of the tree, making it feasible to handle such large spaces.

Nr. 6. With Simulation Search only a few branches are searched and evaluated.
• Explanation: Simulation Search (like MCTS) focuses on selectively exploring the tree. Instead of evaluating all branches, it concentrates computational effort on promising moves, making it efficient for large decision spaces.

Why the Other Options are Incorrect

3.	Simulation Search is a Machine Learning method that can only be used for rather small decision trees.
•	Incorrect: Simulation Search is designed for large and complex trees where exhaustive search isn’t feasible. It’s particularly useful for handling vast decision trees, such as those in games like Go or Poker.
4.	With Simulation Search, every single branch is searched and evaluated.
•	Incorrect: Simulation Search deliberately avoids searching every branch. It prioritizes areas of the tree that appear promising based on previous simulations, saving time and computational resources.
5.	The Monte Carlo tree search method evaluates good decisions by trying out every possible move.
•	Incorrect: MCTS does not attempt every move. Instead, it builds its evaluation by selectively simulating games for moves that seem most promising, based on previous outcomes and the balance between exploration and exploitation.

Summary

Simulation Search is a powerful method for decision-making in games and large trees. It excels by simulating random plays, evaluating a limited number of branches, and focusing computational resources on promising paths—making it suitable for vast, complex decision spaces.
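A minimal, runnable sketch of the idea in statement 1, i.e. pure Monte-Carlo evaluation without a search tree (the toy take-1-2-3 stick game and the number of simulations are assumptions made only so the example is self-contained):

```python
import random

class Nim:
    """Toy game: players alternately remove 1-3 sticks; whoever takes the last stick wins."""
    def __init__(self, sticks=10, to_move=+1):
        self.sticks, self.to_move = sticks, to_move      # +1 = us, -1 = the opponent
    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.sticks]
    def play(self, n):
        return Nim(self.sticks - n, -self.to_move)
    def is_over(self):
        return self.sticks == 0
    def score(self):
        return -self.to_move                             # the player who just moved has won

def rollout(state):
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))   # both sides move randomly
    return state.score()                                 # +1 if we won the simulated game, -1 otherwise

def monte_carlo_move(state, simulations_per_move=5000):
    best_move, best_avg = None, float("-inf")
    for move in state.legal_moves():
        total = sum(rollout(state.play(move)) for _ in range(simulations_per_move))
        avg = total / simulations_per_move               # average score of the simulated games
        if avg > best_avg:
            best_move, best_avg = move, avg
    return best_move

print(monte_carlo_move(Nim(10)))   # usually prints 2, the move that leaves a multiple of 4
```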

32
Q

Associate the following Machine-Learning Concepts

  1. Credit Assignment Problem
  2. MCTS
  3. UCB
  4. MENACE

a. Describes Exploration vs Exploitation
b. Essentially uses the Tree- and Rollout-Policy
c. Mainly Delayed Reward
d. Pioneer-Project regarding Reinforcement Learning

A

Correct Associations:

1.	Credit Assignment Problem → c. Mainly Delayed Reward
•	Explanation: The Credit Assignment Problem involves determining which actions in a sequence are responsible for future rewards, especially when rewards are delayed.
2.	MCTS (Monte Carlo Tree Search) → b. Essentially uses the Tree- and Rollout-Policy
•	Explanation: MCTS uses a tree policy to select nodes to expand and a rollout policy to simulate random plays for evaluating outcomes.
3.	UCB (Upper Confidence Bound) → a. Describes Exploration vs Exploitation
•	Explanation: UCB balances exploration (trying less-visited options) and exploitation (focusing on options with the highest known reward), which is a key challenge in reinforcement learning.
4.	MENACE (Matchbox Educable Noughts And Crosses Engine) → d. Pioneer-Project regarding Reinforcement Learning
•	Explanation: MENACE, developed by Donald Michie, is an early example of reinforcement learning where matchboxes were used to “learn” strategies for playing tic-tac-toe.
33
Q

Choose the correct order for Monte-Carlo Tree Search

  1. Selection - Simulation - Expansion - Backpropagation
  2. Selection - Backpropagation - Simulation - Expansion
  3. Selection - Expansion - Simulation - Backpropagation
  4. Selection - Separation - Expansion - Backpropagation

Machine Learning in Games

A

3 is correct

34
Q

Map the terms connected to ML in games to corresponding definitions.

  1. Reinforcement learning
  2. Regret
  3. Upper confidence bound
  4. Roll-out

a. Machine learning technique, which tries to maximize reward function.
b. Algorithm balancing between exploiting its current possibilities and exploring new ones
c. Technique to evaluate unexplored paths in the game tree.
d. Difference between obtained and optimal gain

Machine Learning in Games

A

1a
2d
3b
4c

Correct Mappings:

1.	Reinforcement learning → a. Machine learning technique, which tries to maximize reward function.
•	Explanation: Reinforcement learning is a technique where an agent learns to take actions in an environment to maximize a cumulative reward.
2.	Regret → d. Difference between obtained and optimal gain
•	Explanation: Regret measures how much worse an agent’s decisions are compared to the best possible decisions in hindsight.
3.	Upper confidence bound (UCB) → b. Algorithm balancing between exploiting its current possibilities and exploring new ones
•	Explanation: UCB is used to strike a balance between exploration (trying less-known actions) and exploitation (focusing on the most rewarding known actions), often applied in multi-armed bandit problems.
4.	Roll-out → c. Technique to evaluate unexplored paths in game tree.
•	Explanation: A roll-out is used to simulate random moves from a given position in a game tree to evaluate potential outcomes when an exhaustive search is computationally impractical.
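A minimal sketch of the UCB idea from mapping 3 -> b (the three actions, their hidden win rates, and the exploration constant are made up for illustration): each action is scored by its average reward plus an exploration bonus that shrinks the more often the action has been tried.

```python
import math
import random

means = [0.2, 0.5, 0.4]          # hidden true win rates of three actions (for the simulation only)
counts = [0, 0, 0]               # how often each action has been tried
sums = [0.0, 0.0, 0.0]           # total reward collected per action
c = math.sqrt(2)                 # exploration constant

for t in range(1, 1001):
    if 0 in counts:              # try every action once before using the bound
        a = counts.index(0)
    else:
        a = max(range(3), key=lambda i: sums[i] / counts[i]            # exploitation term
                + c * math.sqrt(math.log(t) / counts[i]))              # exploration term
    reward = 1.0 if random.random() < means[a] else 0.0
    counts[a] += 1
    sums[a] += reward

print(counts)   # most pulls go to the best action, but the others are never ignored completely
```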
35
Q

Arrange the four key learning steps of AlphaGo in the right order!

  1. use self-play (from the selection policy) to train a utility function for evaluating a given game position
  2. learn from expert games an accurate expert policy
  3. refine the expert policy (using reinforcement learning)
  4. learn from expert games a fast but inaccurate roll-out policy

Machine Learning in Games

A

4,2,3,1

36
Q

Which sentence is correct?

  1. AlphaGo Zero is a version that learned to play only from self-play. It beat AlphaGo 50-50 using much less training data.
  2. AlphaGo is an improved version that learned to play only from self-play. It beat AlphaGo Zero 100-0 using much less training data.
  3. AlphaGo Zero is an improved version that learned to play only from self-play. It beat AlphaGo 100-0 using much less training data.
  4. AlphaGo is a version that learned to play only from self-play. It beat AlphaGo Zero 50-50 using much less training data.

Machine Learning in Games

A

3 is correct

37
Q

*about Opponent Modeling.*

Think of the example of Roshambo (Rock/Paper/Scissors).

  1. Optimal solutions don’t exist.
  2. Optimal solutions are not maximal.
  3. Optimal solutions are always minimal.
  4. Optimal solutions are always maximal.

Machine Learning in Games

A

Correct Answer:

  1. Optimal solutions don’t exist.

Explanation:

In the context of Roshambo (Rock/Paper/Scissors), the game is inherently non-optimal in the sense that there is no single optimal strategy that guarantees a win. The game involves randomness and unpredictability, and the best a player can do is to make decisions that are unpredictable and well-balanced to avoid being exploited by an opponent’s strategy.

In opponent modeling, the goal is to adapt based on the opponent’s tendencies, but since the game is zero-sum and based on random choices, optimal strategies don’t exist in the traditional sense of always having a deterministic, unbeatable approach. Instead, players often use strategies like mixed strategies or probabilistic approaches to balance their choices.

Why Other Options Are Incorrect:

•	2. Optimal solutions are not maximal.: This is misleading because the concept of optimal solutions typically involves finding the best possible outcome, which in games like Roshambo, is tied to randomness and mixed strategies.
•	3. Optimal solutions are always minimal.: This is incorrect because optimal solutions can vary depending on the strategy, and in Roshambo, the idea is to avoid predictable patterns rather than minimize choices.
•	4. Optimal solutions are always maximal.: This is incorrect in the context of Roshambo because optimality is about balancing unpredictability, not always maximizing a particular outcome.
38
Q

True or False

The Monte-Carlo Search is an extreme case of Simulation search: (Play a large number of games
where both players make their moves randomly. Make the move that has the highest average score
of the previous games.)

Machine Learning in Games

A

True

39
Q

True or False

The goal of ”Reinforcement Learning” is learning of policies (action selection strategies) based
on feedback from the environment.

Machine Learning in Games

A

True

40
Q

Match the information on reinforcement learning in MENACE correctly.

  1. Learning algorithm: Game won
  2. Initialisation
  3. Learning algorithm: Game lost
  4. This results in

a. All moves are equally likely
b. Positive reinforcement
c. increased likelihood that a successful move will be tried again, and decreased likelihood that an unsuccessful move will be repeated
d. Negative reinforcement

Machine Learning in Games

A

1b
2a
3d
4c

41
Q

Which statements are true for TD-Gammon?

  1. Improvements over MENACE are faster convergence because of Temporal Difference Learning, and a Neural Network (instead of boxes and beads)
  2. There are no improvements over MENACE.
  3. For the training, humans were needed who threw the dice and wrote down the moves of the program.
  4. Developed from beginner to world-champion strength after 1,500,000 training games against itself
  5. Led to changes in backgammon theory and was used as a popular practice and analysis partner of leading human players
  6. The TD-Gammon program won the 1988 World Championship.

Machine Learning in Games

A

1,4,5

Let me analyze each statement about TD-Gammon, the groundbreaking backgammon program:

True statements are: 1, 4, and 5

Let’s examine each statement:

  1. ✓ TRUE: TD-Gammon had two major improvements over MENACE:
    • Used Temporal Difference Learning (TD) which provided faster learning
    • Employed neural networks instead of the simple lookup tables (boxes and beads) used in MENACE
    • This combination allowed for better generalization and faster convergence
  2. ❌ FALSE: There were clear improvements over MENACE as noted above
  3. ❌ FALSE: TD-Gammon was self-playing:
    • It learned entirely through self-play
    • No human intervention was needed for training
    • The program handled both moves and dice rolls internally
  4. ✓ TRUE: TD-Gammon’s development:
    • Started as a beginner
    • Through 1.5 million self-play games
    • Reached world-class level of play
    • This was a major achievement in machine learning at the time
  5. ✓ TRUE: TD-Gammon’s impact:
    • Changed opening theory in backgammon
    • Professional players used it for practice
    • Some moves that were previously considered weak were re-evaluated
    • Became a valuable analysis tool for top players
  6. ❌ FALSE: TD-Gammon never won the World Championship:
    • While it reached world-class strength
    • It was primarily used as a training tool
    • The 1988 date is incorrect as TD-Gammon was developed in the early 1990s

TD-Gammon was revolutionary not just for its playing strength, but for demonstrating that reinforcement learning through self-play could achieve expert-level performance in complex games.

42
Q

True or False

”Exploitation” means: Try to play the best possible move; and ”Exploration” means: Try new moves to learn something new.

Machine Learning in Games

A

True


43
Q

True or False

Shannon Type-A search algorithm is a brute force approach.

Game Playing

A

True

44
Q

Map the following important search concepts

  1. Evaluation Functions
  2. Iterative Deepening
  3. Selective Search
  4. Transposition Tables

a. … limit(s) the depth
b. … avoid(s) re-computing the same state again
c. … limit(s) used time/processing power
d. … limit(s) the height

Game Playing

A

Let me analyze the correct mapping between these search concepts and their primary functions:

The correct mapping is:

  1. Evaluation Functions -> d. …limit(s) the height
  2. Iterative Deepening -> c. …limit(s) used time/processing power
  3. Selective Search -> a. …limit(s) the depth
  4. Transposition Tables -> b. …avoid(s) re-computing the same state again

Let’s understand why:

  1. Evaluation Functions → d. (height)
    • Provides a value estimate at a given position
    • Allows stopping the search tree at a certain height
    • Avoids searching all the way to terminal positions
    • The “height” refers to how far up/down we need to search
  2. Iterative Deepening → c. (time/processing)
    • Gradually increases search depth
    • Can stop anytime with a complete shallower search
    • Manages computational resources effectively
    • Controls total processing time used
  3. Selective Search → a. (depth)
    • Focuses on promising variations
    • Reduces search depth in less promising lines
    • Limits how deep we search certain branches
    • Controls the effective depth of search
  4. Transposition Tables → b. (re-computing)
    • Stores previously evaluated positions
    • Prevents redundant calculations
    • Recognizes same positions reached via different move orders
    • Saves computation by reusing results

The key distinction between “height” and “depth”:
- Height (1.d) refers to the overall vertical distance in the tree
- Depth (3.a) refers to how far down specific branches go

45
Q

True or False

Alpha-Beta-Search is in general faster than Min-Max (because of pruning uninteresting paths) but therefore only yields approximate results which are not necessarily the same.

Game Playing

A

True

46
Q

Which statements are true?

In 1912, Spanish engineer Leonardo Torres y Quevedo completed a machine El Ajedrecista that was able to automatically play (and win) the KRK chess endgame.

  1. The individual moves of the machine depended on the moves of the human opponent.
  2. The individual moves of the machine were independent of the moves of the human opponent.
  3. The chessboard was divided into 3 zones.
  4. The king and the rook of the computer had a fixed starting position; the king of the human opponent could be placed anywhere in the square from h1 to c6.
  5. The chessboard was divided into 4 zones.
  6. The king and the rook of the computer had a fixed starting position, and so did the king of the human opponent.

Game Playing

A

1,2,3

47
Q

True or False

Ernst Zermelo (1912) provided a first general formalization of game playing for deterministic, 2-person, perfect-information games. The theorem essentially states that in such games either one of the two players can force a win, or both players can force a draw.

Game Playing

A

True

48
Q

Which statements about Retrograde Analysis (RA) are correct?

  1. The RA Algorithm goes back to Babbage (1846).
  2. RA can never be too complex for any game (because it is easy to store all possible game states)
  3. Several games have been solved partially using RA (Chess, Checkers)
  4. RA starts to analyze the game forwards.
  5. The RA Algorithm builds up a database if we want to strongly solve a game.
  6. Several games have been solved completely using RA (Tic-Tac-Toe, Go-Moku, Connect 4, …)

Game Playing

A

3,5,6
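A minimal sketch of the retrograde idea on a toy game (players alternately remove 1-3 sticks, whoever takes the last stick wins; the game is a made-up illustration, not from the exam): starting from the terminal position, every position is labelled as a win or a loss for the player to move, which is the kind of database a strong solution stores.

```python
N = 20
win = [False] * (N + 1)          # win[n]: can the player to move win with n sticks left?
# position 0 is terminal: the previous player took the last stick, so the player to move has lost
for n in range(1, N + 1):        # work outwards from the terminal position
    win[n] = any(not win[n - k] for k in (1, 2, 3) if k <= n)
print([n for n in range(N + 1) if not win[n]])   # losing positions: the multiples of 4
```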

49
Q

True or False

The "Shannon Number", a frequently cited number for formalizing chess, is due to a back-of-the-envelope calculation by Claude Shannon.

Game Playing

A

True
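A rough reconstruction of that back-of-the-envelope calculation (the standard textbook account, not quoted from the exam): Shannon assumed about 10^3 sensible continuations per pair of moves (one move by White and one by Black) and a typical game of about 40 such move pairs, which gives on the order of (10^3)^40 = 10^120 possible game variations.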

50
Q

Which sentences about "Zero-Sum-Games" are true?

  1. If Player 2 maximizes its score, it automatically maximizes the score of Player 1 too.
  2. In principle Minimax can solve any 2-person zero-sum perfect information game. (But in practice it is intractable to find a solution with minimax, because the Sizes of some game trees are too big.)
  3. Both players try to optimize (maximize) their result.
  4. Both players try to minimize (0 is the best) their result.
  5. If Player 2 maximizes its score, it minimizes the score of Player 1.
  6. Minimax is not suitable for solving zero-sum games.

Game Playing

A

2,3,5
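A minimal sketch of statement 2 in code (the three-branch game tree below is a made-up example, not from the exam): the leaf values are payoffs for the maximizing player, so by the zero-sum property maximizing one's own score is exactly minimizing the opponent's.

```python
def minimax(node, maximizing_player):
    if isinstance(node, (int, float)):            # leaf node: payoff for the maximizer
        return node
    child_values = [minimax(child, not maximizing_player) for child in node]
    return max(child_values) if maximizing_player else min(child_values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]        # MAX chooses a branch, MIN replies below it
print(minimax(tree, True))                        # prints 3: the best result MAX can force
```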

51
Q

Which sentences about "Heuristic Evaluation Function" (in Game Playing) are correct?

  1. The idea is to produce an estimate of the expected utility of the game from a given position.
  2. The only requirement is a lot of time for the computation.
  3. The idea is to save all possible moves of the game.
  4. The performance of the algorithm depends on the quality of the evaluation function.
  5. The performance of the algorithm is independent of the quality of the evaluation function.
  6. Requirements are: the evaluation function should order terminal nodes in the same way as the (unknown) true function; computation should not take too long; for non-terminal states the evaluation function should be strongly correlated with the actual chance of winning.

Game Playing

A

1,4,6
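A minimal sketch of such an evaluation function (the material-count idea and the piece values are a standard illustration, not the exam's definition): it is cheap to compute and is only meant to correlate with the actual chance of winning, not to be exact.

```python
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(position):
    """position: iterable of piece letters, uppercase = our pieces, lowercase = the opponent's."""
    score = 0
    for piece in position:
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score                   # > 0 estimates that we stand better

print(evaluate("KQRRBNNPPPPPPP" + "krrbbnppppppp"))   # toy position where we are a queen up: prints 9
```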

52
Q

True or False

Iterative deepening is a repeated search with fixed depth for depths d = 1, …, D. One advantage is the simplification of time management. Therefore, the total number of nodes searched is often smaller than in non-iterative search!

Game Playing

A

True
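A minimal sketch of the repeated fixed-depth idea (the toy tree, the crude depth-0 evaluation, and the time budget are assumptions for illustration): every iteration is a complete search to depth d, so the algorithm can stop after any iteration with a usable result, which is what makes time management simple.

```python
import time

def depth_limited_value(node, depth, maximizing):
    if isinstance(node, (int, float)):
        return node                                # leaf: exact value
    if depth == 0:
        return 0                                   # crude static evaluation at the depth limit
    values = [depth_limited_value(c, depth - 1, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

def iterative_deepening(root, time_budget=0.01, max_depth=10):
    best, start = None, time.monotonic()
    for d in range(1, max_depth + 1):              # repeated search with fixed depth d = 1, ..., D
        best = depth_limited_value(root, d, True)  # each iteration is a complete depth-d search
        if time.monotonic() - start > time_budget:
            break                                  # stop cleanly between two completed depths
    return best

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(iterative_deepening(tree))                   # prints 3 once depth 2 has been completed
```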

53
Q

True or False

The move ordering is crucial to the performance of alpha-beta search.

Game Playing

A

True

54
Q

The basic idea of "Transposition Tables" is to store found positions in a hash table. If a position occurs a second time, the value of the node does not have to be recomputed. Which sentences about the implementation of Transposition Tables are correct? Each entry in the hash table stores the…

  1. … probability of finding that particular position.
  2. … search depth of stored value (in case we search deeper)
  3. … hash key of position (to eliminate collisions)
  4. … state evaluation value (but not whether this was an exact value or a fail high/low value)
  5. … time it took to calculate the position.
  6. … state evaluation value (including whether this was an exact value or a fail high/low value)

Game Playing

A

3
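A minimal sketch of what a transposition-table entry commonly holds and how a lookup might guard against collisions and too-shallow entries (the field names, table size, and EXACT/LOWER/UPPER flags are assumptions for illustration, not the exam's definition):

```python
from dataclasses import dataclass

EXACT, LOWER, UPPER = 0, 1, 2      # exact value, fail-high (lower bound), fail-low (upper bound)

@dataclass
class TTEntry:
    key: int                       # full hash key of the position, to detect index collisions
    depth: int                     # search depth the stored value was computed with
    value: float                   # state evaluation value
    flag: int                      # EXACT, LOWER or UPPER

table = {}

def tt_store(key, depth, value, flag):
    table[key % 2**20] = TTEntry(key, depth, value, flag)   # small table indexed by part of the key

def tt_lookup(key, depth):
    entry = table.get(key % 2**20)
    if entry and entry.key == key and entry.depth >= depth:
        return entry               # same position, searched deeply enough: reuse instead of recomputing
    return None

tt_store(key=0xABCDEF, depth=6, value=0.25, flag=EXACT)
print(tt_lookup(0xABCDEF, depth=4))   # reusable, because the stored depth 6 >= the requested depth 4
```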

55
Q

What is the status of AI in Game Playing?

  1. World-Championship strength
  2. Draw
  3. Solved
  4. Human Supremacy

a. Bridge, Poker
b. Chess, Backgammon, Scrabble, Othello, Go, Shogi
c. Checkers
d. Tic-Tac-Toe, Connect-4, Go-Moku, 9-men Morris

Game Playing

A

1b
2c
3d
4a

56
Q

Match the games to the correct categories!

There are different types of Games.

  1. games of chance and imperfect information
  2. games of chance and perfect information
  3. deterministic games with imperfect information
  4. deterministic games with perfect information

a. battleship, kriegspiel, matching pennies, Roshambo
b. chess, checkers, Go, Othello
c. backgammon, monopoly
d. bridge, poker, scrabble

Game Playing

A

1d
2c
3a
4b

57
Q

Connect the sentences

How can different types of games be described?

  1. players alternate moves
  2. one player’s gain is the other player’s (or players’) loss
  3. Does every player see the entire game situation?
  4. Do random components influence the progress of the game?

a. perfect vs. imperfect information
b. Zero-Sum Games
c. turn-taking
d. deterministic games vs. games of chance

Game Playing

A

1c
2b
3a
4d