Introduction to AI - concepts Flashcards

1
Q

current trends in AI

A

exploit the strengths of the computer and don’t model human thought processes
focus on particular tasks and not on solving AI as a whole
strong scientific standards

2
Q

conference series

A

Biennial International Joint Conference on AI
Annual National Conference on AI
Biennial European Conference on AI

3
Q

The birth of AI

A

1956, Dartmouth Conference

4
Q

symbolic AI

A

model knowledge and planning in a way that is understandable to programmers

5
Q

subsymbolic AI

A

model intelligence with neuron-like units (e.g., artificial neural networks)

6
Q

Why did the initial expectations of AI not work out?

A

lack of scalability
difficulty of knowledge representation
limitations on techniques and representations

7
Q

Forward checking

A

Search
Keeping track of remaining legal values for unassigned variables
Terminate search if any variable has no more legal values
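
A minimal Python sketch of forward checking inside backtracking search; the CSP representation (domains as a dict of value sets, constraints as pairwise predicates) and all function names are assumptions for illustration, not part of the card.

```python
# Hypothetical CSP representation: domains = {var: set of values},
# constraints = {(x, y): predicate(value_of_x, value_of_y)}.
def forward_check(assignment, domains, constraints, var, value):
    """Prune values of unassigned neighbours that conflict with var = value.
    Returns the pruned domains, or None if some variable has no legal value left."""
    new_domains = {v: set(d) for v, d in domains.items()}
    new_domains[var] = {value}
    for (x, y), ok in constraints.items():
        if x == var and y not in assignment:
            new_domains[y] = {w for w in new_domains[y] if ok(value, w)}
            if not new_domains[y]:            # no legal values left: terminate this branch
                return None
        elif y == var and x not in assignment:
            new_domains[x] = {w for w in new_domains[x] if ok(w, value)}
            if not new_domains[x]:
                return None
    return new_domains

def backtrack(assignment, domains, constraints):
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        pruned = forward_check(assignment, domains, constraints, var, value)
        if pruned is not None:
            result = backtrack({**assignment, var: value}, pruned, constraints)
            if result is not None:
                return result
    return None
```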

8
Q

Local search for CSP

A

search
Work with complete states
Allow unsatisfied constraints
Min-conflict heuristic: select a conflicted variable and choose the value that violates the fewest constraints
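
A minimal sketch of min-conflicts local search over complete assignments; the `conflicts` helper and the CSP representation are assumptions for illustration.

```python
import random

# Hypothetical representation: variables is a list, domains maps var -> list of values,
# conflicts(var, value, assignment) counts the constraints violated by setting var = value.
def min_conflicts(variables, domains, conflicts, max_steps=10_000):
    # start from a complete (possibly inconsistent) assignment
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:                  # all constraints satisfied
            return assignment
        var = random.choice(conflicted)     # pick a conflicted variable
        # choose the value that violates the fewest constraints
        assignment[var] = min(domains[var], key=lambda val: conflicts(var, val, assignment))
    return None                             # no solution found within the step limit
```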

9
Q

Zermelo

A

Game-playing
One player can force a win or both players can force a draw

10
Q

Retrograde analysis

A

Zermelo, game-playing
Generate all possible positions
Mark all positions where A would win…
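
A hedged sketch of the retrograde-analysis idea for a finite two-player game of perfect information; the game-interface names (positions, terminal_win_A, successors, to_move) are hypothetical.

```python
# Hypothetical game interface: positions is the set of all positions,
# terminal_win_A(p) says whether p is a terminal win for player A,
# successors(p) yields the positions reachable in one move, to_move(p) is "A" or "B".
def retrograde_analysis(positions, terminal_win_A, successors, to_move):
    """Mark all positions from which A can force a win, propagating backwards."""
    win_for_A = {p for p in positions if terminal_win_A(p)}
    changed = True
    while changed:
        changed = False
        for p in positions:
            if p in win_for_A:
                continue
            succ = list(successors(p))
            if not succ:
                continue
            if to_move(p) == "A" and any(s in win_for_A for s in succ):
                win_for_A.add(p); changed = True   # A can move into a won position
            elif to_move(p) == "B" and all(s in win_for_A for s in succ):
                win_for_A.add(p); changed = True   # B cannot avoid a won position
    return win_for_A
```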

11
Q

Horizon effect

A

Game-playing
A catastrophe (e.g., an unavoidable loss) can be pushed beyond the search horizon by a sequence of delaying moves that don’t make any progress

12
Q

Forward pruning

A

Game-playing
Alpha-beta only prunes subtrees when it is safe to do so; forward pruning also cuts off moves that merely look poor, without such a guarantee

13
Q

Null-move-pruning

A

game-playing
add a “null-move” to the search (assume the current player does not make a move)
if the null-move results in a cutoff, assume that making a move will do the same
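
A hedged sketch of null-move pruning inside a negamax-style alpha-beta search; the depth reduction R, the game interface (moves, play, undo, pass_turn, undo_pass), and evaluate are assumptions for illustration.

```python
R = 2  # assumed depth reduction for the null-move search

def negamax(game, depth, alpha, beta, allow_null=True):
    if depth == 0:
        return evaluate(game)                      # hypothetical static evaluation
    # Null move: let the opponent move twice; if that still reaches beta,
    # assume that making a real move would produce a cutoff as well.
    if allow_null and depth > R:
        game.pass_turn()
        score = -negamax(game, depth - 1 - R, -beta, -beta + 1, allow_null=False)
        game.undo_pass()
        if score >= beta:
            return beta
    for m in game.moves():
        game.play(m)
        score = -negamax(game, depth - 1, -beta, -alpha)
        game.undo(m)
        if score >= beta:
            return beta                            # normal alpha-beta cutoff
        alpha = max(alpha, score)
    return alpha
```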

14
Q

Iterative deepening

A

game-playing
repeated fixed-depth searches with an increasing depth limit; works well together with transposition tables
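
A minimal sketch of iterative deepening under a time budget; `depth_limited_search` (which can reuse transposition-table entries from shallower iterations) is a hypothetical function.

```python
import time

def iterative_deepening(game, depth_limited_search, time_budget=1.0):
    """Run repeated fixed-depth searches with increasing depth until time runs out."""
    deadline = time.monotonic() + time_budget
    best_move, depth = None, 1
    while time.monotonic() < deadline:
        best_move, _ = depth_limited_search(game, depth)   # hypothetical: returns (move, value)
        depth += 1
    return best_move
```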

15
Q

move ordering - heuristics

A

game-playing

domain-dependent heuristics:
capture moves first
forward moves first

domain-independent heuristics:
killer heuristics (manage a list of moves that produced cutoffs at the current level of search)
history heuristics (maintain a table of all possible moves; if a move produces a cutoff, its value is increased)
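
A hedged sketch of the two domain-independent heuristics above; the table layout and the depth-squared history bonus are common choices assumed here, not taken from the card.

```python
from collections import defaultdict

killer = defaultdict(list)    # search depth -> recent moves that produced cutoffs at that depth
history = defaultdict(int)    # move -> accumulated cutoff score

def order_moves(moves, depth):
    """Try killer moves of this depth first, then sort the rest by history score."""
    killers_here = [m for m in killer[depth] if m in moves]
    rest = sorted((m for m in moves if m not in killers_here),
                  key=lambda m: history[m], reverse=True)
    return killers_here + rest

def record_cutoff(move, depth):
    """Call whenever `move` caused a cutoff at `depth`."""
    history[move] += depth * depth                        # increase the move's history value
    if move not in killer[depth]:
        killer[depth] = ([move] + killer[depth])[:2]      # keep at most two killer moves
```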

16
Q

transposition tables - what does an entry store?

A

game-playing

state evaluation value
search depth of the stored value
hash key of the position
best move from the position (optional)
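
A minimal sketch of a transposition-table entry with exactly the fields listed above; the dataclass layout, the table size, and the probe/store helpers are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TTEntry:
    value: float                     # state evaluation value
    depth: int                       # search depth of the stored value
    key: int                         # hash key of the position (e.g. a Zobrist hash)
    best_move: Optional[str] = None  # best move from the position (optional)

SIZE = 2 ** 20
table = {}                           # index (key % SIZE) -> TTEntry

def store(entry: TTEntry):
    table[entry.key % SIZE] = entry

def probe(key: int) -> Optional[TTEntry]:
    entry = table.get(key % SIZE)
    # the full key is stored so that index collisions can be detected
    return entry if entry is not None and entry.key == key else None
```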

17
Q

What techniques does AlphaGo use?

A

game-playing

deep learning
reinforcement learning
monte-carlo tree search

18
Q

Inductive learning

A

Machine learning

Learn a function from examples

Ignores prior knowledge
Assumes examples are given

19
Q

Credit assignment problem

A

Machine learning games

Delayed reward: it is unclear which of the earlier moves deserve credit or blame for the final outcome

20
Q

Knightcap

A

Machine learning games

Learned to play chess at an expert level

Improvements over TD-Gammon:
Temporal difference learning with deep searches
Played against partners on the internet (instead of self-play)

21
Q

Simulation search

A

Machine learning games

Estimate the expected value of each move by counting the number of wins
At each chance node select one of the options at random
Make all move choices quickly

Often works well even if the move selection is not that strong - fast algorithm
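
A hedged sketch of simulation search: estimate each move by its win rate over fast random playouts. The game interface (moves, play, is_terminal, winner) is hypothetical.

```python
import random

def simulate(game):
    """Play random moves to the end of the game; return 1 if 'me' won, else 0."""
    while not game.is_terminal():
        game = game.play(random.choice(game.moves()))   # quick, random move choices
    return 1 if game.winner() == "me" else 0

def simulation_search(game, playouts_per_move=100):
    """Estimate the expected value of each move by counting wins over random playouts."""
    scores = {}
    for m in game.moves():
        after = game.play(m)
        wins = sum(simulate(after) for _ in range(playouts_per_move))
        scores[m] = wins / playouts_per_move
    return max(scores, key=scores.get)
```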

22
Q

UCT Search

A

Machine learning games

Best-known formulation of MCTS
Combines a UCB-based tree policy with random roll-outs

Exploitation vs Exploration
Choose move that has been visited most often (reliability) not necessarily the one with the highest value (high variance)
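
A minimal sketch of the UCB1 rule that UCT uses as its tree policy; the exploration constant c and the per-child bookkeeping are assumptions for illustration.

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.41):
    """Exploitation (average value) plus an exploration bonus for rarely visited moves."""
    if child_visits == 0:
        return float("inf")          # always try unvisited moves first
    return child_value / child_visits + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children):
    """children: list of (total_value, visits) pairs, one per move at this node."""
    parent_visits = sum(visits for _, visits in children) or 1
    scores = [ucb1(value, visits, parent_visits) for value, visits in children]
    return scores.index(max(scores))

# After the search budget is spent, play the child visited most often (most reliable
# estimate), not necessarily the one with the highest average value (high variance).
```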

23
Q

AlphaGo

A

MCTS, deep learning, reinforcement learning from self-play

MCTS: roll-out policy, learned evaluation function instead of real game outcomes

24
Q

Policy networks

A

Machine learning games

How “good” is it to move to a position
Input: position
Output: a probability for each possible move

25
Q

Value networks

A

Machine learning games

How desirable is it to be in this position
Input: position
Output: single value

26
Q

AlphaGo Zero

A

Learned only from self-play
Much less training data

27
Q

forward chaining

A

knowledge

derive new facts from known facts
elementary production principle: for every rule, a set of facts, and a substitution (which maps the body to the set of facts), one can derive one proof step
can be iterated until no further facts can be derived
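
A hedged sketch of the iterated production principle in the propositional case (no variables or substitutions, to keep it short); the rule representation is an assumption.

```python
# Hypothetical rule representation: each rule is (body, head), where body is the set of
# facts that must all be known before the head may be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                    # iterate until no further facts can be derived
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:   # all premises known, conclusion new
                facts.add(head)       # one proof step
                changed = True
    return facts

# Made-up example: from {"rain"} and the rules rain -> wet, wet -> slippery
# we derive {"rain", "wet", "slippery"}.
print(forward_chain({"rain"}, [({"rain"}, "wet"), ({"wet"}, "slippery")]))
```
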
28
Q

Resolution principle

A

knowledge

backward chaining
for disproving a statement, assume its opposite and show that it leads to a contradiction

29
Q

RDF

A

knowledge

Resource Description Framework
allows for deductive reasoning (given facts and rules, we can derive new facts)
opposite: induction (deriving models from facts)

30
Q

ontology

A

knowledge

explicit specification of a conceptualization
encodes the knowledge about a domain
forms a common vocabulary and describes the semantics of its terms
logical theory

31
Q

OWL

A

knowledge

Web Ontology Language
syntactic extension of RDF

32
Q

Freebase

A

knowledge

2000s, collaborative editing
no fixed schema
acquired and shut down by Google

33
Q

Wikidata

A

knowledge

goal: centralize data from Wikipedia
collaborative
imports other datasets
one of the largest public knowledge graphs

34
Q

DBpedia

A

knowledge

extraction from Wikipedia using maps & heuristics
together with YAGO one of the most used knowledge graphs

35
Q

NELL

A

knowledge

Never-Ending Language Learner
input: ontology, seed examples, text corpus
output: facts, text patterns
large degree of automation, occasional human feedback

36
Q

Linked open data

A

knowledge

many datasets are publicly available and connected to each other
using standards like RDF and URIs for identification of entries

37
Q

Which fields have contributed to AI research?

A

history

philosophy
mathematics
psychology
economics
linguistics
neuroscience
control theory

38
Q

Why is NLP so hard? Highly ambiguous at multiple levels...

A

lexical: same word, different meanings
syntactic: same sentence, different interpretations
semantic: the interpretation depends on its context, requires understanding of our world
discourse: the meaning of a sentence depends on the previous sentences

39
Q

Traditional NLP tasks

A

word segmentation (divide the input text into small semantic entities)
part-of-speech tagging (assign each word its most probable role in a sentence)
syntactic analysis (find the most probable grammatical interpretation of the sentence)
semantic analysis (find the most probable meaning of a sentence and resolve references)

40
Q

Word2Vec

A

NLP

every word is represented as a lower-dimensional, non-sparse vector
train a deep neural network in a supervised way, using the context of a word as additional input
2 variants: continuous bag of words, skip-gram
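
A hedged usage sketch, assuming the gensim 4.x Word2Vec API; the toy corpus is made up for illustration.

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (made up for illustration).
sentences = [
    ["artificial", "intelligence", "studies", "intelligent", "agents"],
    ["neural", "networks", "learn", "vector", "representations", "of", "words"],
]

# sg=1 selects the skip-gram variant; sg=0 would be continuous bag of words (CBOW).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["intelligence"][:5])              # dense, low-dimensional word vector
print(model.wv.most_similar("neural", topn=3))   # nearest neighbours in vector space
```
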
41
Q

POS tagging is a process of tagging words in a sentence based on...

A

NLP

the definition of the word (thesaurus)
the context of the word (sequence learning task: Hidden Markov Models, Conditional Random Fields)

42
Q

N-gram model

A

NLP

language model, uses only n-1 words of prior context
unigram, bigram, trigram
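
A minimal bigram (n = 2) sketch with maximum-likelihood estimates, i.e. one word of prior context; the toy corpus is made up for illustration.

```python
from collections import Counter, defaultdict

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

bigram_counts = defaultdict(Counter)
for sentence in corpus:
    for prev, word in zip(["<s>"] + sentence, sentence):
        bigram_counts[prev][word] += 1   # count each word given one word of prior context

def prob(word, prev):
    """Maximum-likelihood estimate of P(word | prev) in a bigram model."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(prob("cat", "the"))   # 2/3: "the" is followed by "cat" twice and by "dog" once
```
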
43
Q

GPT

A

Generative Pre-trained Transformer

44
Q

Loebner competition

A

Philosophy

Modern-day version of the Turing test

45
Q

Mitsuku

A

Philosophy

Also known as Kuki
Successor of Eliza
Five-time winner of the Loebner competition

46
Q

Sussman anomaly

A

Planning

Subgoals are not independent