lv. 3 - CS37 Flashcards

1
Q

An entity that perceives its environment and acts upon that environment.

A

Agent

2
Q

A configuration of an agent in its environment.

A

State

3
Q

The state from which the search algorithm starts.

A

Initial State

4
Q

Choices that can be made in a state.

A

Actions

5
Q

A description of what state results from performing any applicable action in any state.

A

Transition Model

6
Q

The set of all states reachable from the initial state by any sequence of actions.

A

State Space

7
Q

The condition that determines whether a given state is a goal state.

A

Goal Test

8
Q

A numerical cost associated with a given path.

A

Path Cost

9
Q

A sequence of actions that leads from the initial state to the goal state.

A

Solution

10
Q

A solution that has the lowest path cost among all solutions.

A

Optimal Solution

11
Q

contains the following data:
* A state
* Its parent node, through which the current node was generated
* The action that was applied to the state of the parent to get to the current node
* The path cost from the initial state to this node

A

node

12
Q

the mechanism that “manages” the nodes

A

frontier

13
Q

search algorithm that exhausts one direction before trying another direction

A

Depth-First Search

14
Q

search algorithm where the frontier is managed as a stack data structure

A

Depth-First Search

15
Q

search algorithm that will follow multiple directions at the same time, taking one step in each possible direction before taking the second step in each direction.

A

Breadth-First Search

16
Q

search algorithm where the frontier is managed as a queue data structure

A

Breadth-First Search

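
The two cards above differ only in how the frontier is managed: a stack yields depth-first search, a queue yields breadth-first search. A minimal sketch of that idea, assuming a hypothetical graph given as an adjacency dict:

```python
from collections import deque

def search(graph, start, goal, use_stack):
    """Generic search: a stack frontier gives DFS, a queue frontier gives BFS."""
    frontier = deque([(start, [start])])   # each entry: (state, path to it)
    explored = set()
    while frontier:
        # pop() removes from the right (stack/DFS); popleft() from the left (queue/BFS)
        state, path = frontier.pop() if use_stack else frontier.popleft()
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for neighbor in graph.get(state, []):
            if neighbor not in explored:
                frontier.append((neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(search(graph, "A", "D", use_stack=False))  # BFS: ['A', 'B', 'D']
```

Swapping `pop()` for `popleft()` is the entire difference between the two algorithms; BFS explores states in order of distance from the start, DFS commits to one branch at a time.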
17
Q

A type of algorithm that considers additional knowledge to try to improve its performance

A

Informed Search Algorithm

18
Q

search algorithm that expands the node that is the closest to the goal, as determined by a heuristic function h(n).

A

Greedy Best-First Search

19
Q

ignores walls and counts how many steps up, down, or to the sides it would take to get from one location to the goal location

A

Manhattan Distance

20
Q

function that estimates how close to the goal the next node is, but it can be mistaken.

A

heuristic function

21
Q

The efficiency of the greedy best-first algorithm depends on

A

how good a heuristic function is

22
Q

considers not only h(n), the estimated cost from the current location to the goal, but also g(n), the cost that was accrued until the current location.

A

A* Search

23
Q

For A* search to be optimal, the heuristic function has to be:

A

Admissible & Consistent

24
Q

The heuristic function h(n) of an A* search algorithm is consistent if

A

for every node n and successor node n’ with step cost c, h(n) ≤ h(n’) + c

25
Q

In the heuristic function h(n) of an A* search algorithm, what does it mean to be admissible?

A

Never overestimates the actual cost to reach a goal from any node

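
A* can be sketched as search with a priority queue ordered by g(n) + h(n). The graph and heuristic values below are hypothetical; with an admissible, consistent h, the first time the goal is popped its path is optimal:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {state: [(neighbor, step_cost), ...]}; h: heuristic estimates."""
    # each frontier entry: (g(n) + h(n), g(n), state, path)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}                       # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbor, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 2)]}
h = {"A": 3, "B": 2, "C": 2, "D": 0}   # never overestimates the true cost to D
print(a_star(graph, h, "A", "D"))      # (['A', 'B', 'C', 'D'], 4)
```

Note how the direct A→B→D route (cost 6) is skipped in favor of A→B→C→D (cost 4) because g(n) is taken into account, not just h(n).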
26
Q

algorithm that faces an opponent that tries to achieve the opposite goal.

A

Adversarial Search

27
Q

represents winning conditions as (-1) for one side and (+1) for the other side.

A

Minimax

28
Q

Recursively, the algorithm simulates all possible games that can take place beginning at the current state and until a terminal state is reached. Each terminal state is valued as either (-1), 0, or (+1).

A

Minimax

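
The recursive simulation described above can be sketched directly, assuming a hypothetical game tree given as nested lists whose leaves are terminal utilities (-1, 0, or +1):

```python
def minimax(node, maximizing):
    """node: either a terminal utility (an int) or a list of child nodes."""
    if isinstance(node, int):        # terminal state: return its value
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# MAX moves at the root, MIN replies at the second level.
tree = [[+1, -1], [0, +1]]
print(minimax(tree, True))  # 0: MAX picks the subtree whose MIN-value is highest
```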
29
Q

an optimization technique for minimax that skips some of the recursive computations that are decidedly unfavorable

A

Alpha-Beta Pruning

30
Q

Minimax that considers only a pre-defined number of moves before it stops, without ever getting to a terminal state.

A

Depth-Limited Minimax

31
Q

estimates the expected utility of the game from a given state, or, in other words, assigns values to states.

A

Evaluation function

32
Q

agents that reason by operating on internal representations of knowledge.

A

Knowledge-Based Agents

33
Q

an assertion about the world in a knowledge representation language

A

Sentence

34
Q

based on propositions, statements about the world that can be either true or false

A

Propositional Logic

35
Q

letters that are used to represent a proposition.

A

Propositional Symbols

36
Q

logical symbols that connect propositional symbols in order to reason in a more complex way about the world.

A

Logical Connectives

37
Q

List all logical connectives:

A

Not (¬)
And (∧)
Or (∨)
Implication (→)
Biconditional (↔)

38
Q

inverts the truth value of the proposition.

A

Not

39
Q

connects two different propositions; the resulting proposition is true only when both are true

A

And

40
Q

is true as long as either of its arguments is true.

A

Or

41
Q

represents a structure of “if P then Q.”

A

Implication

42
Q

In the case of P implies Q (P → Q), P is the ____

A

Antecedent

43
Q

In the case of P implies Q (P → Q), Q is the ____

A

Consequent

44
Q

an implication that goes both directions

A

Biconditional

45
Q

an assignment of a truth value to every proposition.

A

Model

46
Q

set of sentences known by a knowledge-based agent.

A

Knowledge Base (KB)

47
Q

a relation that means that if all the information in α is true, then all the information in β is true.

A

Entailment (⊨)

48
Q

the process of deriving new sentences from old ones.

A

Inference

49
Q

Define the Model Checking algorithm

A

To determine if KB ⊨ α:
* Enumerate all possible models.
* If in every model where KB is true, α is true as well, then KB entails α.
* Otherwise, there is no entailment.

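
Model checking by enumeration can be sketched in a few lines, assuming the KB and query are given as hypothetical Boolean functions over a model (an assignment of truth values to symbols):

```python
from itertools import product

def model_check(kb, query, symbols):
    """KB entails query iff query holds in every model where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False   # a model where KB is true but the query is false
    return True

# KB: (P -> Q) and P.  Query: Q.  (Modus Ponens, verified by enumeration.)
kb = lambda m: (not m["P"] or m["Q"]) and m["P"]
query = lambda m: m["Q"]
print(model_check(kb, query, ["P", "Q"]))  # True
```

The loop over every assignment is exactly why model checking becomes inefficient: the number of models doubles with each additional symbol.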
50
Q

the process of figuring out how to represent propositions and logic in AI

A

Knowledge Engineering

51
Q

What makes the Model Checking algorithm inefficient?

A

It has to consider every possible model before giving the answer

52
Q

allows the generation of new information based on existing knowledge without considering every possible model.

A

Inference Rules

53
Q

if we know an implication and its antecedent to be true, then the consequent is true as well.

A

Modus Ponens

54
Q

If an And proposition is true, then any one atomic proposition within it is true as well

A

And Elimination

55
Q

A proposition that is negated twice is true

A

Double Negation Elimination

56
Q

An implication is equivalent to an Or relation between the negated antecedent and the consequent

A

Implication Elimination

57
Q

A biconditional proposition is equivalent to an implication and its inverse with an And connective.

A

Biconditional Elimination

58
Q

It is possible to turn a negated And connective into an Or connective (and vice versa) by moving the negation inwards: ¬(α ∧ β) ≡ (¬α ∨ ¬β)

A

De Morgan’s Law

59
Q

A proposition whose elements are grouped with And or Or connectives can be distributed, i.e. broken down into smaller units consisting of And and Or

A

Distributive Property

60
Q

inference rule that states that if one of two atomic propositions in an Or proposition is false, the other has to be true

A

Resolution

61
Q

two of the same atomic propositions where one is negated and the other is not

A

Complementary Literals

62
Q

disjunction of literals

A

Clause

63
Q

consists of propositions that are connected with an Or logical connective

A

disjunction

64
Q

consists of propositions that are connected with an And logical connective

A

conjunction

65
Q

conjunction of clauses

A

Conjunctive Normal Form (CNF)

66
Q

Steps in Conversion of Propositions to Conjunctive Normal Form

A
  • Eliminate biconditionals
    Turn (α ↔ β) into (α → β) ∧ (β → α).
  • Eliminate implications
    Turn (α → β) into ¬α ∨ β.
  • Move negation inwards until only literals are being negated (and not clauses), using De Morgan’s Laws.
    Turn ¬(α ∧ β) into ¬α ∨ ¬β.
  • Use the distributive law to distribute ∨ wherever possible.
    Turn (α ∧ β) ∨ γ into (α ∨ γ) ∧ (β ∨ γ).
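
Each rewrite step above preserves truth values, which can be checked by brute force over all assignments; the formulas below are illustrative lambdas standing in for the propositions:

```python
from itertools import product

def equivalent(f, g, n=2):
    """Check that two propositional formulas agree on every truth assignment."""
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n))

biconditional = lambda a, b: a == b                 # a <-> b
step1 = lambda a, b: (not a or b) and (not b or a)  # (a -> b) ^ (b -> a), implications eliminated
demorgan = lambda a, b: not (a and b)               # ¬(a ^ b)
moved_in = lambda a, b: (not a) or (not b)          # ¬a v ¬b, negation moved inwards

print(equivalent(biconditional, step1))  # True
print(equivalent(demorgan, moved_in))    # True
```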
67
Q

Process used when a case where a clause contains the same literal twice is encountered

A

Factoring

68
Q

process to remove a duplicate literal

A

Factoring

69
Q

Result after resolving a literal and its negation

A

empty clause ()

70
Q

Why is an empty clause always false?

A

it is impossible that both P and ¬P are true

71
Q

Define the resolution algorithm

A
  • To determine if KB ⊨ α:
    • Check: is (KB ∧ ¬α) a contradiction?
      • If so, then KB ⊨ α.
      • Otherwise, no entailment.
72
Q

If our knowledge base is true, and it contradicts ¬α, it means that ¬α is false, and, therefore, α must be true.

A

Proof by Contradiction

73
Q

Define the proof by contradiction algorithm

A

To determine if KB ⊨ α:
* Convert (KB ∧ ¬α) to Conjunctive Normal Form.
* Keep checking to see if we can use resolution to produce a new clause.
* If we ever produce the empty clause (equivalent to False), congratulations! We have arrived at a contradiction, thus proving that KB ⊨ α.
* However, if contradiction is not achieved and no more clauses can be inferred, there is no entailment.
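
The loop above can be sketched with clauses represented as frozensets of (symbol, is_positive) literals; the knowledge base below is a hypothetical example:

```python
def resolve(c1, c2):
    """All resolvents of two clauses; literals are (symbol, is_positive)."""
    resolvents = []
    for sym, pos in c1:
        if (sym, not pos) in c2:     # complementary literals: resolve them away
            resolvents.append((c1 - {(sym, pos)}) | (c2 - {(sym, not pos)}))
    return resolvents

def entails(kb_clauses, query_literal):
    """KB |= query iff KB ^ ¬query derives the empty clause."""
    sym, pos = query_literal
    clauses = set(kb_clauses) | {frozenset([(sym, not pos)])}   # add negated query
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True      # empty clause: contradiction reached
                    new.add(frozenset(r))
        if new <= clauses:
            return False                 # nothing new inferable: no entailment
        clauses |= new

# KB: (P v Q), ¬P.  Resolving with ¬Q yields the empty clause, so KB |= Q.
kb = {frozenset([("P", True), ("Q", True)]), frozenset([("P", False)])}
print(entails(kb, ("Q", True)))  # True
```

Factoring comes for free here: taking the set union of the two remaining clause halves automatically removes duplicate literals.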

74
Q

logic that allows us to express more complex ideas more succinctly than propositional logic

A

First Order Logic

75
Q

Types of symbols used by first order logic:

A

Constant Symbols & Predicate Symbols

76
Q

these symbols represent objects

A

Constant Symbols

77
Q

these symbols are like relations or functions that take an argument and return a true or false value

A

Predicate Symbols

78
Q

tool that can be used in first order logic to represent sentences without using a specific constant symbol

A

Universal Quantification

79
Q

used to create sentences that are true for at least one x

A

Existential Quantification

80
Q

Uncertainty can be represented as a number of events and the likelihood, or probability, of each of them happening.

A

Probability

81
Q

Axioms in Probability

A

0 ≤ P(ω) ≤ 1 for every possible world ω, and the probabilities of all possible worlds sum to 1: Σ P(ω) = 1

82
Q

the degree of belief in a proposition in the absence of any other evidence.

A

Unconditional Probability

83
Q

the degree of belief in a proposition given some evidence that has already been revealed.

A

Conditional Probability

84
Q

variable in probability theory with a domain of possible values that it can take on

A

Random Variable

85
Q

the knowledge that the occurrence of one event does not affect the probability of the other event

A

Independence

86
Q

commonly used in probability theory to compute conditional probability.

A

Bayes’ Rule

87
Q

the likelihood of multiple events all occurring.

A

Joint Probability

88
Q

data structure that represents the dependencies among random variables.

A

Bayesian Networks

89
Q

Properties of Inference

A

Query X: the variable for which we want to compute the probability distribution.
Evidence variables E: one or more variables that have been observed for event e.
Hidden variables Y: variables that aren’t the query and also haven’t been observed.
The goal: calculate P(X | e).

90
Q

a scalable method of calculating probabilities, but with a loss in precision.

A

approximate inference

91
Q

technique of approximate inference.

A

Sampling

92
Q

Sampling is inefficient because it discards samples. Likelihood weighting addresses this by incorporating the evidence into the sampling process.

A

Likelihood Weighting vs Sampling

93
Q

Start by fixing the values for evidence variables.
Sample the non-evidence variables using conditional probabilities in the Bayesian network.
Weight each sample by its likelihood: the probability of all the evidence occurring.

A

Likelihood Weighting Steps

94
Q

an assumption that the current state depends on only a finite fixed number of previous states.

A

Markov Assumption

95
Q

a sequence of random variables where the distribution of each variable follows the Markov assumption.

A

Markov Chain

96
Q

a type of a Markov model for a system with hidden states that generate some observed event

A

Hidden Markov Model
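
A Markov chain can be sketched as a transition table plus a sampler; the weather states and probabilities below are hypothetical:

```python
import random

# By the Markov assumption, tomorrow's distribution depends only on today's state.
transitions = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.3, "rain": 0.7},
}

def sample_chain(start, steps):
    """Sample a sequence of states from the chain."""
    state, chain = start, [start]
    for _ in range(steps):
        states = list(transitions[state])
        weights = list(transitions[state].values())
        state = random.choices(states, weights=weights)[0]
        chain.append(state)
    return chain

print(sample_chain("sun", 5))  # e.g. ['sun', 'sun', 'rain', 'rain', 'rain', 'sun']
```

A hidden Markov model adds one more table on top of this: an emission distribution mapping each (hidden) state to the probabilities of the observations it generates.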

97
Q

choosing the best option from a set of possible options.

A

Optimization

98
Q

search algorithm that maintains a single node and searches by moving to a neighboring node

A

Local Search

99
Q

a function that we use to maximize the value of the solution.

A

Objective Function

100
Q

a function that we use to minimize the cost of the solution

A

Cost Function

101
Q

the state that is currently being considered by the function.

A

Current State

102
Q

a state that the current state can transition to.

A

Neighbor State

103
Q

In this algorithm, the neighbor states are compared to the current state, and if any of them is better, we change the current node from the current state to that neighbor state.

A

Hill Climbing
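
A sketch of the steepest-ascent variant on a toy objective; the function and neighbor relation below are hypothetical:

```python
def hill_climb(current, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until
    no neighbor improves on the current state (a local maximum)."""
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current           # local (not necessarily global) maximum
        current = best

# Toy objective on the integer line, maximized at x = 2.
value = lambda x: -(x - 2) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(10, neighbors, value))  # 2
```

On this single-peaked objective the local maximum is also the global one; with multiple peaks the loop stops at whichever peak is reached first, which is exactly the problem the variants and simulated annealing address.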

104
Q

a state that has a higher value than its neighboring states

A

A local maximum

105
Q

a state that has the highest value of all states in the state-space.

A

global maximum

106
Q

Variants of Hill Climbing

A

Steepest-ascent
Stochastic
First-choice
Random-restart
Local Beam Search

107
Q

Variant that chooses the highest-valued neighbor.

A

Steepest-ascent

108
Q

Variant that chooses randomly from higher-valued neighbors.

A

Stochastic

109
Q

Variant that chooses the first higher-valued neighbor

A

First-choice

110
Q

Variant that conducts hill climbing multiple times

A

Random-restart

111
Q

Variant that chooses the k highest-valued neighbors.

A

Local Beam Search

112
Q

allows the algorithm to “dislodge” itself if it gets stuck in a local maximum.

A

Simulated Annealing

113
Q

the task is to connect all points while choosing the shortest possible distance.

A

Traveling Salesman Problem

114
Q

a family of problems that optimize a linear equation

A

Linear Programming

115
Q

Components of Linear Programming

A

A cost function that we want to minimize
A constraint represented as a sum of variables that is either less than or equal to a value or precisely equal to that value
Individual bounds on variables

116
Q

a class of problems where variables need to be assigned values while satisfying some conditions.

A

Constraint Satisfaction

117
Q

Constraint Satisfaction properties

A

Set of Variables
Set of domains for each variable
Set of constraints C

118
Q

a constraint that must be satisfied in a correct solution.

A

Hard Constraint

119
Q

a constraint that expresses which solution is preferred over others.

A

Soft Constraint

120
Q

a constraint that involves only one variable.

A

Unary Constraint

121
Q

a constraint that involves two variables.

A

Binary Constraint

122
Q

when all the values in a variable’s domain satisfy the variable’s unary constraints.

A

Node consistency

123
Q

when all the values in a variable’s domain satisfy the variable’s binary constraints

A

Arc consistency

124
Q

a type of a search algorithm that takes into account the structure of a constraint satisfaction search problem.

A

Backtracking search

125
Q

This algorithm will enforce arc-consistency after every new assignment of the backtracking search.

A

Maintaining Arc-Consistency algorithm
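
The backtracking search described in the last cards can be sketched as follows, using a hypothetical two-color map-coloring problem with binary constraints:

```python
def backtrack(assignment, variables, domains, constraints):
    """Classic backtracking search for a CSP.
    constraints: {(X, Y): predicate(x_val, y_val)} over pairs of variables."""
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # keep the assignment only if all constraints between assigned vars hold
        if all(check(assignment[x], assignment[y])
               for (x, y), check in constraints.items()
               if x in assignment and y in assignment):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]                     # backtrack
    return None

# Hypothetical map coloring: adjacent regions A-B and B-C must differ.
variables = ["A", "B", "C"]
domains = {v: ["red", "green"] for v in variables}
constraints = {("A", "B"): lambda a, b: a != b,
               ("B", "C"): lambda a, b: a != b}
print(backtrack({}, variables, domains, constraints))
```

The Maintaining Arc-Consistency algorithm would add one step inside the loop: after each assignment, prune neighboring domains to stay arc-consistent before recursing.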