Cram1 Flashcards

1
Q

The agent perceives its ______ and acts according to ______

A

environment, performance criteria

2
Q

Performance criteria are

A

domain dependent

3
Q

Possible performance criteria:

A

m² per hour
how clean is the room after vacuuming
power consumption

4
Q

Ideal rational agent

A

The agent chooses an action which maximizes its performance for a given
percept sequence and knowledge about the world.

5
Q

To avoid trivialization, use ____

A

active sensing

6
Q

Categorization of Agents

A

Agents are characterized by their percepts, actions, goals, and environment. Example (a medical diagnosis system):
Percepts: symptoms, findings, patient’s answers
Actions: questions, tests, treatments
Goals (performance measures): healthy patient, minimized costs
Environment: patient, hospital

7
Q

The goal of learning

A

Optimize future behavior on the basis of the history of percepts, actions, and knowledge about the world.

8
Q

accessible vs. nonaccessible

A

Are all relevant aspects of the world accessible to the sensors?

9
Q

deterministic vs. nondeterministic/stochastic

A

Does the next state depend completely on the current state and the action chosen?

10
Q

episodic vs. nonepisodic

A

Does the choice of an action depend only on the current state or also on the past?

11
Q

static vs. dynamic

A

Can the world change while deciding on the next action?

12
Q

discrete vs. continuous

A

Is the world discrete (as in chess) or not (mobile robots)?

13
Q

Properties of Environments

A

accessible, deterministic, episodic, static, discrete

14
Q

A goal is

A

a set of world states that the agent finds desirable (it wants to reach one of them).

15
Q

Search

A

Finding an action sequence which transforms an initial state into a goal state.

16
Q

Single-state problem

A

complete world knowledge,

complete knowledge about the actions

17
Q

Multiple-state problem

A

incomplete world knowledge,

complete knowledge about the actions

18
Q

Contingency problem

A

incomplete knowledge about actions,

needs to gather information at run-time

19
Q

Exploration problem

A

World states and effects of actions are both unknown. Very difficult!

20
Q

Initial state

A

World state which the agent believes to be in initially

21
Q

State space

A

Set of all possible states

22
Q

Operator

A

Description of which state is reached by an action from a given state.

23
Q

Successor function S

A

S(x) returns the set of states reachable by any action from state x

24
Q

Goal test

A

Tests whether a state is a goal state.

25
Q

Path

A

Sequence of actions

26
Q

Path cost

A

Cost function over paths, usually the sum of the costs of the actions of the path, often denoted as g.

27
Q

Solution

A

Path from the initial state to a state that satisfies the goal test

28
Q

Search cost

A

Time and memory needed to find a solution

29
Q

Total cost

A

Sum of the path cost (of the solution) and search cost

30
Q

8-Puzzle

A

States: Describe the location of each tile including the blank.
Operators: Blank moves left, right, up, or down.
Goal test: Does the state match the goal configuration?
Path cost: Each step costs 1 unit.
Search cost: Problem is NP-complete in the size of the puzzle.
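
As a sketch of these operators, here is a possible Python successor function for the blank; the state encoding (a 9-tuple read row by row, 0 for the blank) and all names are illustrative, not from the cards:

```python
# Illustrative 8-puzzle operators: states are 9-tuples read row by row,
# with 0 standing for the blank.
def successors(state):
    """Return the states reachable by sliding the blank one step."""
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(i - 1)    # blank moves left
    if col < 2: moves.append(i + 1)    # blank moves right
    if row > 0: moves.append(i - 3)    # blank moves up
    if row < 2: moves.append(i + 3)    # blank moves down
    result = []
    for j in moves:
        s = list(state)
        s[i], s[j] = s[j], s[i]        # swap the blank with a neighbour tile
        result.append(tuple(s))
    return result
```

With the blank in a corner, exactly two moves apply; in the centre, all four.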

31
Q

Completeness

A

Does it always find a solution if one exists?

32
Q

Time Complexity

A

How long does it take to find a solution in the worst case?

33
Q

Space Complexity

A

How much memory is used in the worst case?

34
Q

Optimality

A

Does it always find the best solution?

35
Q

Uninformed (blind) search

A

No information available about the length or the cost of a solution.
Examples: breadth-first search, uniform-cost search, depth-first search
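
A minimal Python sketch of breadth-first search over an abstract successor function (interface names are my own, not from the cards):

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Return a path (list of states) from initial to a goal, or None."""
    frontier = deque([[initial]])        # FIFO queue of paths
    visited = {initial}
    while frontier:
        path = frontier.popleft()        # shallowest node first
        state = path[-1]
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None
```

Because the frontier is a FIFO queue, the shallowest goal is found first, which is what makes breadth-first search complete and optimal for unit step costs.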

36
Q

Informed (heuristic) search

A

Has information about the length or the cost of a solution.

Examples: greedy search, A* and some variants, hill-climbing

37
Q

Cost of Breadth-First Search

A

O(b^(d+1))

38
Q

Iterative Deepening

A

Combines depth- and breadth-first search
Time complexity: O(b^d)
Space complexity: O(b*d)
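
The combination can be sketched in Python as depth-limited search run with increasing limits (names illustrative; assumes a finite maximum depth):

```python
def depth_limited(state, goal_test, successors, limit):
    """Depth-first search from state, cut off at the given depth limit."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        path = depth_limited(s, goal_test, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(initial, goal_test, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ...: breadth-first
    completeness at depth-first memory cost."""
    for limit in range(max_depth + 1):
        path = depth_limited(initial, goal_test, successors, limit)
        if path is not None:
            return path
    return None
```

Re-expanding the shallow levels on every iteration looks wasteful, but since the tree grows by a factor of b per level, the last iteration dominates the total cost.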

39
Q

Problems with Bidirectional Search

A

Operators not always reversible

Incompletely described goal states (e.g. checkmate)

40
Q

A*

A

Combines uniform-cost search with greedy search:
g(n) = actual cost from the initial state to n
h(n) = estimated cost from n to the nearest goal
f(n) := g(n) + h(n)
f(n) is the estimated cost of the cheapest solution path passing through n.
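
A compact Python sketch following these definitions, expanding nodes in order of f = g + h via a priority queue (the interface, with successors yielding (state, step_cost) pairs, is my own):

```python
import heapq

def a_star(initial, goal_test, successors, h):
    """successors(state) yields (next_state, step_cost) pairs."""
    frontier = [(h(initial), 0, initial, [initial])]   # (f, g, state, path)
    best_g = {initial: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)    # smallest f first
        if goal_test(state):
            return path, g
        for s, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(s, float('inf')):       # found a cheaper route
                best_g[s] = g2
                heapq.heappush(frontier, (g2 + h(s), g2, s, path + [s]))
    return None, float('inf')
```

With h(n) = 0 for all n this degenerates to uniform-cost search; with an admissible h the first goal popped is an optimal solution.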

41
Q

h is called admissible if

A

h(n) <= h*(n) for all n, where h*(n) is the actual cost of an optimal path from n to the nearest goal.

42
Q

Path-Max Equation

A

For n the parent of n’:

f(n’) = max(f(n), g(n’) + h(n’))

43
Q

Games are a special variant of

A

search problems

44
Q

The states are usually

A

accessible

45
Q

Games belong to the class of

A

contingency problems (uncertain knowledge of action effects).

46
Q

When the search space is large, the game tree can only be generated up to a certain depth. Minimax works then as well. Simply replace

A

TERMINAL-TEST by CUT-OFF-TEST and the utility function UTILITY by the evaluation function EVAL.
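
The replacement can be sketched in Python as depth-limited minimax, where a plain depth bound plays the role of CUT-OFF-TEST and eval_fn the role of EVAL (the game interface is illustrative):

```python
def minimax(state, depth, maximizing, successors, eval_fn):
    """Return the minimax value of state, searching to the given depth."""
    children = successors(state)
    if depth == 0 or not children:       # cut-off test (depth bound / no moves)
        return eval_fn(state)            # evaluation function replaces UTILITY
    values = [minimax(c, depth - 1, not maximizing, successors, eval_fn)
              for c in children]
    return max(values) if maximizing else min(values)
```

The MAX player picks the child with the highest backed-up value, MIN the lowest; alternating the flag implements the turn-taking.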

47
Q

Evaluation Functions should be

A

easy to compute and accurately reflect the chance of winning

48
Q

Complexity of Alpha-Beta

A

random leaves: O((b/log b)^d)

best case: O(b^(d/2))
typical case: O(b^(3d/4))

49
Q

Knowledge Base (KB):

A
  • contains sentences in a language with a truth theory (a logic), which we can interpret as propositions about the world
  • the sentences have a causal effect on the agent’s behavior, in correlation with their content
50
Q

Knowledge Level

A

What is known by the KB. E.g., an automatic taxi driver knows that Vaalser St connects Aachen and Vaals.

51
Q

Symbolic Level

A

Encoding of the KB as sentences in a formal language: Connects(Vaalser_St, Aachen, Vaals)

52
Q

Implementation Level

A

The internal representation of sentences.
Taxi example:
- the string “Connects(Vaalser_St, Aachen, Vaals)”
- a bit in a 3-D matrix representing connections between places

53
Q

declarative language

A

The system believes P iff it represents P as true.

54
Q

precise language

A
  • which strings are sentences;
  • what it means for a sentence to be true (without having to explicitly specify for every sentence whether or not it is true).
55
Q

explicit knowledge

A

KB

56
Q

implicit knowledge

A

{a | KB |= a}

57
Q

Deductive inference

A

Process of computing the logical consequences of a KB:

given a KB and a query a, decide whether KB |= a.

58
Q

Herbrand universe

A

the set of all ground terms which can be formed from the function symbols and constant symbols in S
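
An illustrative Python enumeration of such terms up to a bounded nesting depth (the string representation and names are my own):

```python
from itertools import product

def herbrand_universe(constants, functions, depth):
    """Return all ground terms buildable within `depth` nested applications.

    constants: set of constant symbols (strings)
    functions: list of (name, arity) pairs
    """
    terms = set(constants)
    for _ in range(depth):
        new = set(terms)
        for name, arity in functions:
            # apply every function symbol to every tuple of existing terms
            for args in product(terms, repeat=arity):
                new.add(f"{name}({', '.join(args)})")
        terms = new
    return terms
```

With at least one function symbol the full Herbrand universe is infinite; the depth bound here only makes the sketch finite.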