games-flashcards

1
Q

What are the differences between human and computer game playing?

A
  1. Processing: Computers evaluate millions of positions per second, while humans examine only a handful of candidate moves
  2. Pattern recognition: Humans excel at intuition and abstract patterns, while computers rely on programmed patterns
  3. Learning: Humans learn organically through experience, while computers learn through algorithms
  4. Strategy: Humans consider psychology and can bluff, while computers focus on move calculation
  5. Resources: Computers maintain consistent performance, while humans experience fatigue
2
Q

What is a game tree and how is it constructed?

A

A game tree is a tree whose nodes represent game positions and whose edges represent moves, covering every position reachable from the current one.

Construction:
1. Root node is initial position
2. Branches represent possible moves
3. Child nodes show resulting positions
4. Alternate between players’ moves
5. Continue until reaching terminal positions (wins/draws/losses)
6. Include evaluation scores at leaf nodes (see the sketch below)
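A minimal construction sketch in Python, assuming hypothetical game hooks legal_moves, apply_move, and evaluate (none of these names come from the card itself):

from dataclasses import dataclass, field

@dataclass
class Node:
    # One tree node: a position, whose turn it is, the move that reached it,
    # the child positions, and an evaluation score at leaf nodes.
    position: object
    player: str
    move: object = None
    children: list = field(default_factory=list)
    score: float = None

def build_game_tree(position, player, depth, legal_moves, apply_move, evaluate):
    # Recursively expand to `depth` plies, alternating between MAX and MIN.
    node = Node(position, player)
    moves = legal_moves(position, player)
    if depth == 0 or not moves:          # depth limit or terminal position
        node.score = evaluate(position)  # score the leaf
        return node
    for move in moves:
        child = build_game_tree(apply_move(position, move),
                                "MIN" if player == "MAX" else "MAX",
                                depth - 1, legal_moves, apply_move, evaluate)
        child.move = move
        node.children.append(child)
    return node

In practice the tree is rarely materialised like this; search algorithms such as minimax and alpha-beta generate and discard nodes on the fly.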

3
Q

How does Retrograde analysis work?

A

Retrograde analysis works backward from known end positions:

  1. Start with all known win/loss/draw positions
  2. Work backwards to determine optimal moves
  3. Label positions based on best achievable outcome
  4. Continue until all reachable positions are analyzed
  5. Particularly useful for building endgame tablebases
  6. Guarantees perfect play for every position covered by the analysis (see the sketch below)
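A minimal retrograde-analysis sketch in Python for a toy subtraction game (players alternately remove 1 or 2 tokens; the player who cannot move loses). The toy game and its rules are an illustrative assumption, not part of the card:

def retrograde_subtraction_game(max_tokens, moves=(1, 2)):
    # Label every position WIN or LOSS for the player to move,
    # working backward from the known terminal position (0 tokens = loss).
    label = {0: "LOSS"}                          # terminal: no legal move left
    for pos in range(1, max_tokens + 1):
        successors = [pos - m for m in moves if pos - m >= 0]
        # A position is a WIN if some move reaches a LOSS for the opponent.
        label[pos] = "WIN" if any(label[s] == "LOSS" for s in successors) else "LOSS"
    return label

print(retrograde_subtraction_game(10))   # multiples of 3 come out as LOSS

Real endgame tablebases follow the same idea, but over the full set of legal piece configurations and with distance-to-mate counters rather than plain WIN/LOSS labels.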
4
Q

Why is exhaustive search infeasible and what strategies can you employ to solve that problem?

A

Exhaustive search is infeasible because the game tree grows exponentially: roughly b^d positions for branching factor b and search depth d (in chess, b is around 35).

Solutions include:
1. Alpha-beta pruning to eliminate branches that cannot affect the result (see the sketch below)
2. Iterative deepening
3. Move ordering optimization
4. Transposition tables
5. Selective search using heuristics
6. Quiescence search for stable positions
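A minimal alpha-beta pruning sketch in Python, assuming hypothetical children and evaluate hooks (evaluate scores positions from the maximizing player's point of view):

import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    succ = list(children(state))
    if depth == 0 or not succ:               # depth limit or terminal position
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff: the minimizer already
                break                        # has a better option elsewhere
        return value
    else:
        value = math.inf
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:                # alpha cutoff
                break
        return value

With good move ordering, alpha-beta roughly doubles the reachable search depth compared to plain minimax for the same effort.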

5
Q

What are Shannon’s strategies?

A

Shannon’s strategies include:
1. Type A: Full-width exhaustive search to a fixed depth, with position evaluation at the leaf nodes
2. Type B: Selective search that focuses on promising moves, guided by chess knowledge and heuristics

6
Q

What is the horizon effect and how can it be countered?

A

The horizon effect occurs when the fixed search depth is reached before a critical sequence completes, so the program misjudges the position or pushes an unavoidable loss just beyond its search horizon.

Countermeasures:
1. Quiescence search beyond the normal depth (see the sketch below)
2. Extension of critical lines
3. Adaptive search depth
4. Recognition of tactical themes
5. Smart extension heuristics
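A minimal quiescence-search sketch in Python (negamax form), assuming hypothetical hooks: evaluate (scores from the side to move), noisy_moves (tactical moves only, e.g. captures, which eventually run out), and apply_move:

def quiescence(state, alpha, beta, evaluate, noisy_moves, apply_move):
    stand_pat = evaluate(state)          # static score if we stop searching here
    if stand_pat >= beta:
        return beta                      # position already refutes the opponent
    alpha = max(alpha, stand_pat)
    for move in noisy_moves(state):      # extend only tactical moves past the horizon
        score = -quiescence(apply_move(state, move), -beta, -alpha,
                            evaluate, noisy_moves, apply_move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

A normal alpha-beta search would call this at depth 0 instead of returning the static evaluation, so the score is only trusted once the position has gone quiet.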

7
Q

What are the differences between selective search and (depth-limited) exhaustive search?

A

Key differences:
1. Selective search examines only promising moves while exhaustive search examines all moves to a fixed depth
2. Selective search can reach greater depths in important variations
3. Exhaustive search guarantees finding tactical combinations within its depth
4. Selective search relies on heuristics and may miss tactics
5. Selective search is generally faster but less reliable

8
Q

What strategies have been used to combine selective search and (depth-limited) exhaustive search?

A

Combination strategies:
1. Principal Variation Search (PVS) (see the sketch below)
2. Multi-cut pruning
3. Null-move pruning in non-tactical positions
4. Forward pruning with verification searches
5. Adaptive node expansion based on position characteristics
6. Dynamic depth adjustment based on position complexity
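A minimal Principal Variation Search sketch in Python (negamax form), assuming hypothetical children and evaluate hooks (evaluate scores from the side to move). Only the first move gets a full search window; the rest are probed with a zero-width window and re-searched only if the probe fails high:

def pvs(state, depth, alpha, beta, children, evaluate):
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)
    for i, child in enumerate(succ):
        if i == 0:
            # Assume the first (best-ordered) move is the principal variation.
            score = -pvs(child, depth - 1, -beta, -alpha, children, evaluate)
        else:
            # Cheap null-window probe: can this move beat alpha at all?
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, children, evaluate)
            if alpha < score < beta:
                # The probe says yes: re-search with the full window.
                score = -pvs(child, depth - 1, -beta, -alpha, children, evaluate)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                        # beta cutoff
    return alpha

The savings depend heavily on move ordering: if the first move really is best, every sibling is dismissed by the cheap probe.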

9
Q

What are the key ideas behind techniques such as transposition tables, Zobrist keys and null-move pruning?

A
  1. Transposition tables store previously evaluated positions to avoid redundant calculation
  2. Zobrist keys provide efficient position hashing for table lookup
  3. Null-move pruning lets the side to move "pass"; if a reduced-depth search still fails high, the branch can be pruned
  4. Together these techniques significantly improve search efficiency and reduce computational overhead (see the sketch below)
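A minimal Zobrist-hashing and transposition-table sketch in Python; the board representation (a dict mapping square indices to piece letters) is an illustrative assumption:

import random

random.seed(0)
SQUARES = range(64)
PIECES = ["P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]
# One random 64-bit number per (square, piece) pair.
ZOBRIST = {(sq, pc): random.getrandbits(64) for sq in SQUARES for pc in PIECES}

def zobrist_key(board):
    # XOR the numbers of all occupied squares; moving a piece only needs
    # two or three XORs to update the key incrementally.
    key = 0
    for square, piece in board.items():
        key ^= ZOBRIST[(square, piece)]
    return key

# Transposition table keyed by the Zobrist hash: reuse earlier results
# when the same position is reached via a different move order.
transposition_table = {}

def tt_store(board, depth, score):
    transposition_table[zobrist_key(board)] = (depth, score)

def tt_probe(board, depth):
    entry = transposition_table.get(zobrist_key(board))
    if entry and entry[0] >= depth:      # stored search was at least as deep
        return entry[1]
    return None

Real engines also store the bound type (exact, lower, upper) and the best move found, so table hits can tighten the alpha-beta window even when they cannot settle the score outright.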
10
Q

Do Deeper Searches necessarily help?

A

Not always.

Considerations:
1. Diminishing returns after a certain depth
2. Quality of evaluation function matters more in some positions
3. Some positions require strategic understanding over tactical calculation
4. Search resources might be better spent on critical variations
5. Evaluation errors can compound at greater depths

11
Q

What have been the key events in game playing?

A
  1. 1950s: First computer chess programs (Turing, 1951)
  2. 1994: Chinook becomes world champion at checkers
  3. 1997: Deep Blue defeats Kasparov in chess
  4. 2016: AlphaGo defeats Lee Sedol in Go
  5. 2017: AlphaZero masters chess from scratch
  6. 2019: Pluribus defeats top poker professionals in 6-player poker