lv. 3 - CS37 Flashcards
An entity that perceives its environment and acts upon that environment.
Agent
A configuration of an agent in its environment.
State
The state from which the search algorithm starts.
Initial State
Choices that can be made in a state.
Actions
A description of what state results from performing any applicable action in any state.
Transition Model
The set of all states reachable from the initial state by any sequence of actions.
State Space
The condition that determines whether a given state is a goal state.
Goal Test
A numerical cost associated with a given path.
Path Cost
A sequence of actions that leads from the initial state to the goal state.
Solution
A solution that has the lowest path cost among all solutions.
Optimal Solution
A data structure that contains the following data:
* A state
* Its parent node, through which the current node was generated
* The action that was applied to the state of the parent to get to the current node
* The path cost from the initial state to this node
node
the mechanism that “manages” the nodes, holding those that are available to be explored next
frontier
search algorithm that exhausts one direction before trying another direction
Depth-First Search
search algorithm where the frontier is managed as a stack data structure
Depth-First Search
search algorithm that will follow multiple directions at the same time, taking one step in each possible direction before taking the second step in each direction.
Breadth-First Search
search algorithm where the frontier is managed as a queue data structure
Breadth-First Search
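The cards above can be summed up in one sketch: DFS and BFS are the same loop, differing only in whether the frontier is treated as a stack or a queue. The graph below is a hypothetical example for illustration.

```python
from collections import deque

def search(start, goal, neighbors, mode="bfs"):
    """Generic search; the frontier is a stack for DFS, a queue for BFS.
    `neighbors` is a function mapping a state to its successor states."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        # The only difference between DFS and BFS is which node we remove.
        node = frontier.pop() if mode == "dfs" else frontier.popleft()
        if node == goal:
            # Reconstruct the solution path by walking parent links.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    return None

# Tiny example graph (an assumption for illustration).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
path = search("A", "D", lambda s: graph[s], mode="bfs")
```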
A type of algorithm that considers additional knowledge to try to improve its performance
Informed Search Algorithm
search algorithm that expands the node that is the closest to the goal, as determined by a heuristic function h(n).
Greedy Best-First Search
ignores walls and counts how many steps up, down, or to the sides it would take to get from one location to the goal location
Manhattan Distance
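As a quick sketch, Manhattan distance between two grid locations is just the sum of the absolute coordinate differences:

```python
def manhattan(a, b):
    # Sum of absolute differences in each coordinate; walls are ignored.
    (x1, y1), (x2, y2) = a, b
    return abs(x1 - x2) + abs(y1 - y2)
```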
function that estimates how close to the goal the next node is, but it can be mistaken.
heuristic function
The efficiency of the greedy best-first algorithm depends on
how good a heuristic function is
considers not only h(n), the estimated cost from the current location to the goal, but also g(n), the cost that was accrued until the current location.
A* Search
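A minimal A* sketch, assuming a weighted-graph representation of my own choosing: the frontier is a priority queue ordered by f(n) = g(n) + h(n).

```python
import heapq
import itertools

def a_star(start, goal, neighbors, h):
    """A* search: expand the frontier node minimizing f(n) = g(n) + h(n),
    where g(n) is the cost accrued so far and h(n) the heuristic estimate."""
    counter = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(h(start), next(counter), 0, start, None)]
    came_from = {}
    best_g = {start: 0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already expanded via a cheaper path
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(counter), g2, nxt, node))
    return None, float("inf")

# Hypothetical weighted graph; h = 0 makes A* behave like uniform-cost search.
edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
path, cost = a_star("A", "D", lambda s: edges[s], lambda s: 0)
```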
For a* search to be optimal, the heuristic function has to be:
Admissible & Consistent
In the heuristic function h(n) of an A* search algorithm, it is consistent if
for every node n and successor node n’ with step cost c, h(n) ≤ h(n’) + c
In the heuristic function h(n) of an A* search algorithm, what does it mean to be admissible?
Never overestimates the actual cost to reach a goal from any node
algorithm that faces an opponent that tries to achieve the opposite goal.
Adversarial Search
represents winning conditions as (-1) for one side and (+1) for the other side.
Minimax
Recursively, the algorithm simulates all possible games that can take place beginning at the current state and until a terminal state is reached. Each terminal state is valued as either (-1), 0, or (+1).
Minimax
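The recursion described above can be sketched in a few lines; the toy game tree is an assumption for illustration, with terminal states valued -1 or +1.

```python
def minimax(state, is_max, moves, utility):
    """Return the minimax value of `state`.
    `moves(state)` lists successor states; an empty list means terminal.
    `utility(state)` is -1, 0, or +1 at a terminal state."""
    successors = moves(state)
    if not successors:
        return utility(state)
    values = [minimax(s, not is_max, moves, utility) for s in successors]
    # MAX picks the highest value, MIN the lowest.
    return max(values) if is_max else min(values)

# Toy game tree (hypothetical): root is MAX's turn, "a" and "b" are terminal.
tree = {"root": ["a", "b"], "a": [], "b": []}
vals = {"a": -1, "b": 1}
best = minimax("root", True, lambda s: tree[s], lambda s: vals[s])
```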
an optimization technique for minimax that skips some of the recursive computations that are decidedly unfavorable
Alpha-Beta Pruning
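A sketch of the pruned version: alpha tracks the best value MAX can guarantee so far, beta the best MIN can guarantee, and a branch is abandoned once alpha ≥ beta. The two-level tree is again a made-up example.

```python
def alphabeta(state, is_max, moves, utility, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: stop exploring a branch once it
    can no longer affect the final decision (alpha >= beta)."""
    successors = moves(state)
    if not successors:
        return utility(state)
    if is_max:
        value = float("-inf")
        for s in successors:
            value = max(value, alphabeta(s, False, moves, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: MIN would never allow this branch
        return value
    value = float("inf")
    for s in successors:
        value = min(value, alphabeta(s, True, moves, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break  # prune: MAX would never choose this branch
    return value

# Hypothetical depth-2 tree: MAX at the root, MIN at m1/m2, terminal values below.
tree = {"r": ["m1", "m2"], "m1": ["a", "b"], "m2": ["c", "d"]}
vals = {"a": 0, "b": 1, "c": -1, "d": 1}
value = alphabeta("r", True, lambda s: tree.get(s, []), lambda s: vals[s])
```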
Minimax that considers only a pre-defined number of moves before it stops, without ever getting to a terminal state.
Depth-Limited Minimax
estimates the expected utility of the game from a given state, or, in other words, assigns values to states.
Evaluation function
agents that reason by operating on internal representations of knowledge.
Knowledge-Based Agents
an assertion about the world in a knowledge representation language
Sentence
based on propositions, statements about the world that can be either true or false
Propositional Logic
letters that are used to represent a proposition.
Propositional Symbols
logical symbols that connect propositional symbols in order to reason in a more complex way about the world.
Logical Connectives
List all logical connectives:
Not (¬)
And (∧)
Or (∨)
Implication (→)
Biconditional (↔)
inverts the truth value of the proposition.
Not
connects two different propositions
And
is true as long as either of its arguments is true.
Or
represents a structure of “if P then Q.”
Implication
In the case of P implies Q (P → Q), P is the ____
Antecedent
In the case of P implies Q (P → Q), Q is the ____
Consequent
an implication that goes both directions
Biconditional
an assignment of a truth value to every proposition.
Model
set of sentences known by a knowledge-based agent.
Knowledge Base (KB)
a relation that means that if all the information in α is true, then all the information in β is true.
Entailment (⊨)
the process of deriving new sentences from old ones.
Inference
Define the Model Checking algorithm
To determine if KB ⊨ α, enumerate all possible models; if α is true in every model in which KB is true, then KB entails α.
the process of figuring out how to represent propositions and logic in AI
Knowledge Engineering
What makes the Model Checking algorithm inefficient?
It has to consider every possible model before giving the answer
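That enumeration can be sketched directly; sentences are represented here as Python functions from a model (a dict of truth values) to a boolean, an encoding chosen for illustration.

```python
from itertools import product

def model_check(kb, query, symbols):
    """Model checking: KB ⊨ query iff query is true in every model
    (assignment of truth values) in which KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # a model where KB holds but the query fails
    return True  # query held in every model where KB held

# Example: KB = (P → Q) ∧ P should entail Q (modus ponens).
kb = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
result = model_check(kb, lambda m: m["Q"], ["P", "Q"])
```

Note the inefficiency the card mentions: the loop visits all 2^n models for n symbols.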
allows the generation of new information based on existing knowledge without considering every possible model.
Inference Rules
if we know an implication and its antecedent to be true, then the consequent is true as well.
Modus Ponens
If an And proposition is true, then any one atomic proposition within it is true as well
And Elimination
A proposition that is negated twice is true
Double Negation Elimination
An implication is equivalent to an Or relation between the negated antecedent and the consequent
Implication Elimination
A biconditional proposition is equivalent to an implication and its inverse with an And connective.
Biconditional Elimination
Negating an And connective turns it into an Or connective of the negated terms (and vice versa): ¬(α ∧ β) ≡ ¬α ∨ ¬β
De Morgan’s Law
A proposition with two elements that are grouped with And or Or connectives can be distributed, or broken down into, smaller units consisting of And and Or
Distributive Property
inference rule that states that if one of two atomic propositions in an Or proposition is false, the other has to be true
Resolution
two of the same atomic propositions where one is negated and the other is not
Complementary Literals
disjunction of literals
Clause
consists of propositions that are connected with an Or logical connective
disjunction
consists of propositions that are connected with an And logical connective
conjunction
conjunction of clauses
Conjunctive Normal Form (CNF)
Steps in Conversion of Propositions to Conjunctive Normal Form
- Eliminate biconditionals: turn (α ↔ β) into (α → β) ∧ (β → α).
- Eliminate implications: turn (α → β) into ¬α ∨ β.
- Move negation inwards until only literals are being negated (and not clauses), using De Morgan’s Laws: turn ¬(α ∧ β) into ¬α ∨ ¬β.
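Each rewrite step preserves truth in every model, which a truth-table check can confirm. The equivalence checker below is a small sketch; the formulas are encoded as Python functions over a model dict.

```python
from itertools import product

def equivalent(f, g, symbols):
    """Two propositional formulas are equivalent iff they agree in every model."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if f(m) != g(m):
            return False
    return True

# Implication elimination: (α → β) ≡ (¬α ∨ β)
impl = equivalent(lambda m: m["b"] if m["a"] else True,
                  lambda m: (not m["a"]) or m["b"],
                  ["a", "b"])

# De Morgan's Law: ¬(α ∧ β) ≡ (¬α ∨ ¬β)
demorgan = equivalent(lambda m: not (m["a"] and m["b"]),
                      lambda m: (not m["a"]) or (not m["b"]),
                      ["a", "b"])

# Biconditional elimination: (α ↔ β) ≡ (α → β) ∧ (β → α)
bicond = equivalent(lambda m: m["a"] == m["b"],
                    lambda m: ((not m["a"]) or m["b"]) and ((not m["b"]) or m["a"]),
                    ["a", "b"])
```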
Process used when a case where a clause contains the same literal twice is encountered
Factoring
process to remove a duplicate literal
Factoring
Result after resolving a literal and its negation
empty clause ()
Why is an empty clause always false?
it is impossible that both P and ¬P are true
Define the resolution algorithm
- To determine if KB ⊨ α:
- Check: is (KB ∧ ¬α) a contradiction?
- If so, then KB ⊨ α.
- Otherwise, no entailment.
If our knowledge base is true, and it contradicts ¬α, it means that ¬α is false, and, therefore, α must be true.
Proof by Contradiction
Define the proof by contradiction algorithm
To determine if KB ⊨ α:
* Convert (KB ∧ ¬α) to Conjunctive Normal Form.
* Keep checking to see if we can use resolution to produce a new clause.
* If we ever produce the empty clause (equivalent to False), congratulations! We have arrived at a contradiction, thus proving that KB ⊨ α.
* However, if contradiction is not achieved and no more clauses can be inferred, there is no entailment.
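The steps above can be sketched on clauses represented as frozensets of literal strings, with negation marked by a leading "-"; restricting the query to a single positive literal is a simplification for this illustration.

```python
def resolve(ci, cj):
    """All resolvents of two clauses (clauses are frozensets of literals)."""
    resolvents = set()
    for lit in ci:
        neg = lit[1:] if lit.startswith("-") else "-" + lit
        if neg in cj:
            # Resolve on the complementary literals; sets handle factoring.
            resolvents.add(frozenset((ci - {lit}) | (cj - {neg})))
    return resolvents

def resolution_entails(kb_clauses, alpha):
    """Proof by contradiction: KB ⊨ α iff KB ∧ ¬α derives the empty clause.
    `alpha` must be a single positive literal here (a simplification)."""
    clauses = set(kb_clauses) | {frozenset({"-" + alpha})}
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci == cj:
                    continue
                for r in resolve(ci, cj):
                    if not r:
                        return True  # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False  # nothing new can be inferred: no entailment
        clauses |= new

# KB: (P ∨ Q) ∧ (¬P ∨ Q) should entail Q.
kb = {frozenset({"P", "Q"}), frozenset({"-P", "Q"})}
result = resolution_entails(kb, "Q")
```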
logic that allows us to express more complex ideas more succinctly than propositional logic
First Order Logic
Types of symbols used by first order logic:
Constant Symbols & Predicate Symbols
these symbols represent objects
Constant Symbols
these symbols are like relations or functions that take an argument and return a true or false value
Predicate Symbols
tool that can be used in first order logic to represent sentences without using a specific constant symbol
Universal Quantification
used to create sentences that are true for at least one x
Existential Quantification
Uncertainty can be represented as a number of events and the likelihood, or probability, of each of them happening.
Probability
Axioms in Probability
0 ≤ P(ω) ≤ 1, and the probabilities of all possible events sum to 1: Σ P(ω) = 1
the degree of belief in a proposition in the absence of any other evidence.
Unconditional Probability
the degree of belief in a proposition given some evidence that has already been revealed.
Conditional Probability
variable in probability theory with a domain of possible values that it can take on
Random Variable
the knowledge that the occurrence of one event does not affect the probability of the other event
Independence
commonly used in probability theory to compute conditional probability.
Bayes’ Rule
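Bayes' rule as a one-liner, with made-up numbers for illustration:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: P(rain) = 0.1, P(clouds|rain) = 0.8, P(clouds) = 0.4.
p_rain_given_clouds = bayes(0.8, 0.1, 0.4)
```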
the likelihood of multiple events all occurring.
Joint Probability
data structure that represents the dependencies among random variables.
Bayesian Networks
Properties of Inference
Query X: the variable for which we want to compute the probability distribution.
Evidence variables E: one or more variables that have been observed for event e.
Hidden variables Y: variables that aren’t the query and also haven’t been observed.
The goal: calculate P(X | e).
a scalable method of calculating probabilities, but with a loss in precision.
approximate inference
technique of approximate inference.
Sampling
Sampling is inefficient because it discards samples. Likelihood weighting addresses this by incorporating the evidence into the sampling process.
Likelihood Weighting vs Sampling
Start by fixing the values for evidence variables.
Sample the non-evidence variables using conditional probabilities in the Bayesian network.
Weight each sample by its likelihood: the probability of all the evidence occurring.
Likelihood Weighting Steps
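The steps above can be sketched on a tiny two-node network whose numbers are assumptions chosen for illustration: P(Rain) = 0.2, P(Wet|Rain) = 0.9, P(Wet|¬Rain) = 0.1, querying P(Rain | Wet = true).

```python
import random

def likelihood_weighting(n=10000, rng=random.Random(0)):
    """Estimate P(Rain | Wet=true): fix the evidence (Wet=true), sample the
    non-evidence variable (Rain), and weight each sample by the probability
    of the evidence given the sampled parents."""
    weights = {True: 0.0, False: 0.0}
    for _ in range(n):
        rain = rng.random() < 0.2          # sample the non-evidence variable
        w = 0.9 if rain else 0.1           # weight = P(Wet=true | Rain=rain)
        weights[rain] += w
    return weights[True] / (weights[True] + weights[False])

estimate = likelihood_weighting()
```

The exact answer here is 0.18 / (0.18 + 0.08) ≈ 0.69; the estimate converges to it as n grows, with no samples discarded.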
an assumption that the current state depends on only a finite fixed number of previous states.
Markov Assumption
a sequence of random variables where the distribution of each variable follows the Markov assumption.
Markov Chain
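Sampling such a chain is short to sketch; the transition probabilities below are invented for illustration.

```python
import random

def sample_chain(start, transitions, steps, rng=random.Random(0)):
    """Sample a Markov chain: the next state depends only on the current one."""
    state, states = start, [start]
    for _ in range(steps):
        nxt, probs = zip(*transitions[state].items())
        state = rng.choices(nxt, weights=probs)[0]
        states.append(state)
    return states

# Hypothetical weather chain.
transitions = {"sun": {"sun": 0.8, "rain": 0.2},
               "rain": {"sun": 0.3, "rain": 0.7}}
seq = sample_chain("sun", transitions, 5)
```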
a type of a Markov model for a system with hidden states that generate some observed event
Hidden Markov Model
choosing the best option from a set of possible options.
Optimization
search algorithm that maintains a single node and searches by moving to a neighboring node
Local Search
a function that we use to maximize the value of the solution.
Objective Function
a function that we use to minimize the cost of the solution
Cost Function
the state that is currently being considered by the function.
Current State
a state that the current state can transition to.
Neighbor State
In this algorithm, the neighbor states are compared to the current state, and if any of them is better, the current state is replaced by that neighbor state.
Hill Climbing
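A minimal hill-climbing sketch over the integers, maximizing a made-up objective with its peak at x = 3:

```python
def hill_climb(start, neighbors, value):
    """Keep moving to a better neighbor until none exists (a local maximum)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # no neighbor improves on the current state
        current = best

# Maximize f(x) = -(x - 3)^2; neighbors of x are x-1 and x+1.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```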
a state that has a higher value than its neighboring states
A local maximum
a state that has the highest value of all states in the state-space.
global maximum
Variants of Hill Climbing
Steepest-ascent
Stochastic
First-choice
Random-restart
Local Beam Search
Variant that chooses the highest-valued neighbor.
Steepest-ascent
Variant that chooses randomly from higher-valued neighbors.
Stochastic
Variant that chooses the first higher-valued neighbor
First-choice
Variant that conducts hill climbing multiple times
Random-restart
Variant that chooses the k highest-valued neighbors.
Local Beam Search
allows the algorithm to “dislodge” itself if it gets stuck in a local maximum.
Simulated Annealing
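A sketch of the idea: a worse neighbor is accepted with probability e^(ΔE/T), and the temperature T shrinks over time so late moves become conservative. The cooling schedule and objective below are arbitrary choices for illustration.

```python
import math
import random

def simulated_annealing(start, neighbor, value, steps=2000, t0=10.0, seed=0):
    """Hill climbing that sometimes accepts a *worse* neighbor, with
    probability e^(delta/T), so it can dislodge itself from a local maximum."""
    rng = random.Random(seed)
    current = best = start
    for step in range(1, steps + 1):
        t = t0 / step                      # simple cooling schedule (an assumption)
        candidate = neighbor(current, rng)
        delta = value(candidate) - value(current)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate            # accept: better, or worse but lucky
        if value(current) > value(best):
            best = current                 # remember the best state seen
    return best

# Toy objective with a local maximum at x = 0 and a global maximum at x = 10.
f = lambda x: -abs(x - 10) if x > 5 else -abs(x) - 4
best = simulated_annealing(0, lambda x, r: x + r.choice([-1, 1]), f)
```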
the task is to connect all points while choosing the shortest possible distance.
Traveling Salesman Problem
a family of problems that optimize a linear equation
Linear Programming
Components of Linear Programming
Cost Function we want to minimize
Constraints represented as a sum of variables that is either less than or equal to a value, or precisely equal to that value
Individual bounds on variables
a class of problems where variables need to be assigned values while satisfying some conditions.
Constraint Satisfaction
Constraint Satisfaction properties
Set of Variables
Set of domains for each variable
Set of constraints C
a constraint that must be satisfied in a correct solution.
Hard Constraint
a constraint that expresses which solution is preferred over others.
Soft Constraint
a constraint that involves only one variable.
Unary Constraint
a constraint that involves two variables.
Binary Constraint
when all the values in a variable’s domain satisfy the variable’s unary constraints.
Node consistency
when all the values in a variable’s domain satisfy the variable’s binary constraints
Arc consistency
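The core of enforcing arc consistency is the "revise" step: delete every value in x's domain that has no compatible value in y's domain. The x < y constraint below is a hypothetical example.

```python
def revise(domains, x, y, constraint):
    """Make x arc-consistent with respect to y: remove values of x that
    satisfy the binary constraint with no value in y's domain.
    Returns True if any value was removed."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

# Hypothetical CSP with the binary constraint x < y.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
changed = revise(domains, "x", "y", lambda a, b: a < b)
```

Here x = 3 is removed, because no value of y is greater than 3.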
a type of a search algorithm that takes into account the structure of a constraint satisfaction search problem.
Backtracking search
This algorithm will enforce arc-consistency after every new assignment of the backtracking search.
Maintaining Arc-Consistency algorithm