Panik Flashcards

1
Q

table-lookup agent

A

a mapping indexed by percept sequences; each action is just an entry in that table.

2
Q

reflex agent

A

condition-action rules that are pattern-matched against a percept.

3
Q

Goal-oriented agents

A

try to fulfil (binary) goals by imagining the outcome of actions.

4
Q

Utility-based agents

A

have a utility function that judges the expected outcomes of different actions.

5
Q

Learning agents

A

improve the system by changing knowledge in, and retrieving knowledge from, the performance element.

6
Q

accessible

A

All relevant aspects of the world are available to the sensors.

7
Q

deterministic

A

The next state depends completely on the current state and chosen action.

8
Q

episodic

A

The choice of an action depends only on the current state (not on the past).

9
Q

static

A

The world does not change while deciding on an action.

10
Q

discrete

A

There are only finitely many world states within any given range.

11
Q

successor function S(x)

A

returns the set of states reachable by any action from state x.

12
Q

path

A

sequence of actions

13
Q

Single-state problem
Multiple-state problem
Contingency problem
Exploration problem

A
Problem type           | World knowledge         | Action knowledge
-----------------------|-------------------------|-----------------
Single-state problem   | complete                | complete
Multiple-state problem | incomplete              | complete
Contingency problem    | to be found at run-time | incomplete
Exploration problem    | unknown                 | unknown
14
Q

Completeness:

A

Always finds a solution if it exists.

15
Q

(blind) search

A

uninformed

16
Q

(heuristic) search

A

informed

17
Q

Iterative deepening

A

combines BFS and DFS. It executes depth-limited search at increasing depths until
a solution is found.
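As a minimal sketch, the card's description can be written in Python; the adjacency-dict graph and all names below are illustrative, not from the course material:

```python
# A minimal sketch of iterative deepening on a made-up graph.

def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that gives up below depth `limit`; returns a path or None."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Run depth-limited search at increasing depths until a solution is found."""
    for depth in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, depth)
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "E": ["G"]}
print(iterative_deepening(graph, "A", "G"))  # ['A', 'C', 'E', 'G']
```

Because the depth limit restarts from 0, the first solution found is also a shallowest one, which is what gives the method its BFS-like completeness.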

18
Q

Non-complete search?

A

Depth-first search (it may descend an infinite path) and Hill Climbing (it may get stuck in a local maximum). Iterative Deepening, by contrast, is complete.

19
Q

The heuristic h used for A* must be admissible, that is

A

h(n) ≤ h*(n) must hold, where h*(n) is the cost of an optimal path from n to the nearest goal.
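A small A* sketch illustrating the condition; the graph, edge costs and h-values are made up, with admissibility checked by hand (true remaining costs h* are S:6, A:4, B:2, G:0):

```python
import heapq

# A* on a made-up weighted graph with an admissible heuristic h.
graph = {
    "S": [("A", 2), ("B", 5)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
h = {"S": 5, "A": 3, "B": 2, "G": 0}  # h(n) <= h*(n) for every node

def astar(start, goal):
    """Expand nodes in order of f(n) = g(n) + h(n); returns (cost, path)."""
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue  # already reached this node at least as cheaply
        best_g[node] = g
        for child, cost in graph[node]:
            heapq.heappush(frontier,
                           (g + cost + h[child], g + cost, child, path + [child]))
    return None

print(astar("S", "G"))  # (6, ['S', 'A', 'B', 'G'])
```

With this admissible h, A* returns the optimal path S-A-B-G (cost 6) rather than the cheaper-looking first moves S-B-G (cost 7) or S-A-G (cost 8).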

20
Q

To compare the quality of admissible heuristics

A

we introduce the effective branching factor b*: the branching factor a uniform tree of the solution's depth would need in order to contain as many nodes as the search generated.

21
Q

Iterative Deepening A*

A

a variant of Iterative Deepening search that explores branches up to a given threshold on the value of f(n). If the threshold is exceeded without finding a solution, it is set to the minimal value of f(n) over all generated nodes n that exceeded it.
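The threshold loop can be sketched as follows, on a made-up weighted graph with an admissible heuristic (all names and values are illustrative):

```python
# IDA* sketch: f-bounded DFS with an increasing threshold.
graph = {
    "S": [("A", 2), ("B", 5)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
h = {"S": 5, "A": 3, "B": 2, "G": 0}

def ida_star(graph, h, start, goal):
    """Depth-first search bounded by an f-threshold; on failure the
    threshold is raised to the smallest f-value that exceeded it."""
    def dfs(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None            # report the exceeding f-value
        if node == goal:
            return f, path
        minimum = float("inf")
        for child, cost in graph[node]:
            if child in path:         # avoid cycles on the current path
                continue
            t, found = dfs(child, g + cost, bound, path + [child])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h[start]
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == float("inf"):
            return None               # no solution at any threshold
        bound = t                     # new threshold: minimal exceeding f

print(ida_star(graph, h, "S", "G"))  # ['S', 'A', 'B', 'G']
```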

22
Q

Simplified Memory-Bounded A*

A

uses a bounded priority queue. New nodes are added to the queue; each time a node is added, the algorithm checks whether all of its siblings are in the queue, and if so, the parent is removed. If the memory is full, the node with the highest cost is removed from the queue and its parent is re-added (if not already present).

23
Q

Hill Climbing

A

iteratively expands the highest-valued successor of the current node.
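A minimal sketch of this idea on a made-up one-dimensional problem (maximise value(x) = -(x - 3)^2 over integer states; the function and names are illustrative):

```python
# Hill climbing: always move to the best neighbour; stop when no
# neighbour improves on the current state.

def value(x):
    return -(x - 3) ** 2

def hill_climb(start, step=1):
    """Move to the highest-valued neighbour until none improves."""
    current = start
    while True:
        best = max([current - step, current + step], key=value)
        if value(best) <= value(current):
            return current            # a (here: the global) maximum
        current = best

print(hill_climb(-5))  # 3
```

On this single-peaked function the local maximum is the global one; on functions with several peaks, hill climbing may stop at a merely local maximum.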

24
Q

Games are

A

special cases of search problems. States are usually accessible. Actions are possible moves by a player.
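The standard search procedure for such two-player games is minimax (not named on this card; added here as the usual example). A sketch on a made-up two-ply tree, where leaves are utilities from MAX's point of view:

```python
# Minimax: inner nodes are lists of children and alternate between
# MAX and MIN; leaves are integer utilities for MAX.

def minimax(node, maximizing):
    if isinstance(node, int):         # leaf: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6]]        # MAX root over two MIN nodes
print(minimax(tree, True))  # 3  (MIN values are 3 and 2; MAX picks 3)
```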

25
Q

Knowledge Level:

A

What is known by the knowledge base.

26
Q

Symbolic Level:

A

The encoding of the knowledge base in a formal language.

27
Q

Implementation Level:

A

Internal representation of sentences, e.g. as lists or strings.

28
Q

Deductive Inference

A

a process to compute the logical consequences of a KB, i.e. given a KB and a sentence α, decide whether KB |= α.

29
Q

correct

A

the computation is correct (sound): if we derive KB ⊢ α, then KB |= α actually holds.

30
Q

complete

A

for every sentence α such that KB |= α, we can also derive KB ⊢ α.

31
Q

Skolemisation is not equivalence preserving but

A

satisfiability preserving.

32
Q

The Most General Unifier (MGU) of two terms can be calculated in

A

exponential time.
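A sketch of Robinson-style unification (this need not be the exact algorithm the card refers to); the term encoding is improvised: tuples ("f", arg1, ...) are function applications, strings starting with "?" are variables, other strings are constants.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def substitute(t, subst):
    """Apply the substitution to a term, following variable chains."""
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, subst) for a in t[1:])
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = substitute(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return an MGU extending `subst`, or None if the terms don't unify."""
    subst = dict(subst or {})
    s, t = substitute(s, subst), substitute(t, subst)
    if s == t:
        return subst
    if is_var(s):
        if occurs(s, t, subst):
            return None               # occurs check fails: no unifier
        subst[s] = t
        return subst
    if is_var(t):
        return unify(t, s, subst)
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# knows(John, ?x) unified with knows(?y, Mother(?y)):
print(unify(("knows", "John", "?x"), ("knows", "?y", ("Mother", "?y"))))
# {'?y': 'John', '?x': ('Mother', 'John')}
```

The occurs check is what makes naive implementations expensive; it is also why unifying ?x with f(?x) correctly fails.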

33
Q

Planning

A

Given a set of actions, an initial state and a goal state, find a plan to reach the goal state from the initial state. Such a plan consists of an arrangement of (possibly only partially) ordered actions.

34
Q

Planning VS Search

A

Planning uses more detailed information, e.g. action preconditions, rather than treating state transitions as opaque edges.

35
Q

complete plan

A

requires that for every step, each precondition is fulfilled by some predecessor, and that no step between the fulfilment and the requirement of a condition undoes it.

36
Q

consistent plan

A

requires that no two actions take place at the same time.

37
Q

solution

A

a plan that is both complete and consistent

38
Q

Partially ordered plans

A

a preorder on steps, i.e. it defines a must-happen-before relationship; steps left unordered may occur in either order.

39
Q

d-Separation time

A

polynomial

40
Q

d-Separation complete?

A

no

41
Q

supervised learning

A

the learner has access to input and correct output.

42
Q

Reinforcement learning

A

only gives feedback in terms of rewards and punishment, but not correct answers.

43
Q

Unsupervised learning

A

happens without any feedback to the learner, which must find structure in the input on its own.

44
Q

Feed-forward networks

A

the connections form a directed acyclic graph (signals flow in one direction only).

45
Q

Recurrent networks

A

arbitrarily complex connections, possibly including cycles (feedback).