More Test1 Review Flashcards
What are the 4 categories of AI?
- Thinking humanly
- Thinking rationally
- Acting humanly
- Acting rationally
What is the definition of AI?
Intelligence demonstrated by machines, in contrast to natural intelligence displayed by humans/animals.
What is “thinking humanly?”
Trying to make a computer program which mimics the human brain (cognitive modeling).
What is “acting rationally?”
A rational agent acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. Also described as “doing the right thing.” (rational agent approach)
What is “acting humanly?”
Mimicking human behavior. The Turing test is an example: it gives an “operational” definition of intelligence.
What is “thinking rationally?”
The attempt to build machines based on logical rules (syllogisms) that govern beliefs/behavior (“laws of thought” approach).
Machine learning.
The science of getting computers to act without being explicitly programmed.
Nested hierarchy of AI, machine learning, representation learning, and deep learning.
Outermost to innermost: AI, machine learning, representation learning, deep learning.
2 factors that deterred AI.
Lack of sufficient data & lack of sufficient computing power
Definition of rationality.
A rational agent selects an action that is expected to maximize its performance measure, given the evidence provided by its percept sequence and its built-in knowledge.
PEAS
Performance measure, Environment, Actuators, Sensors.
Performance (PEAS) definition
The performance measure that defines the criteria of success.
Environment (PEAS) definition
The surroundings in which the agent operates
Actuator (PEAS) definition
The mechanisms through which the agent acts on its environment (i.e., how it performs actions)
Sensor (PEAS) definition
The mechanisms through which the agent perceives its environment (the source of its percepts)
5 structures of intelligent agent
Simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, learning agents.
Simple reflex agent definition.
Select actions on the basis of the current percept, ignoring the rest of the percept history.
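A minimal sketch of a simple reflex agent, using the two-square vacuum world as the example; the locations, percept format, and rules here are illustrative assumptions, not taken from the cards above.

```python
# Simple reflex agent sketch for a hypothetical two-square vacuum world.
# It maps the CURRENT percept directly to an action; no percept history.

def simple_reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ("A", "Dirty")."""
    location, status = percept
    if status == "Dirty":
        return "Suck"          # condition-action rule: dirty -> clean it
    if location == "A":
        return "Right"         # clean at A -> move to B
    return "Left"              # clean at B -> move to A

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```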
Model-based reflex agent definition.
The agent uses a “model” of the world to guide its actions. A model is knowledge about how the agent’s world works.
Goal-based agent definition.
The agent has information about a goal it is supposed to achieve. It uses this goal and information about the results of possible actions in order to choose actions which achieve that goal.
Utility-based agent definition.
Agent uses a utility function which maps a state or sequence of states onto a real number, which describes the associated degree of “happiness” from performing the action.
Learning agent definition.
Starts with some basic knowledge and is then able to act and adapt autonomously, through learning, to improve its own performance.
Goal vs utility
Goal: may seek to get from point A to point B, and succeeds when it gets there. Utility: get from point A to point B with additional specified criteria with trade-offs (shortest time, minimum fuel expenditure, etc).
BFS completeness, time & space complexity, and optimality.
Time: b^(d+1)
Space: b^(d+1)
Complete? Yes (if b is finite)
Optimal? Yes if all steps have the same cost, but not optimal in general
Uniform-cost search completeness, time & space complexity, and optimality.
Time: b^(1+⌊C*/ε⌋), where C* is the cost of the optimal solution and ε is the minimum step cost
Space: b^(1+⌊C*/ε⌋)
Complete? Yes if every step cost ≥ ε > 0
Optimal? Yes
DFS completeness, time & space complexity, and optimality.
Time: b^m
Space: bm
Complete? No
Optimal? No
Depth-limited search completeness, time & space complexity, and optimality.
Time: b^l
Space: bl
Complete? Yes if l >= d
Optimal? No
Iterative deepening DFS completeness, time & space complexity, and optimality.
Time: b^d
Space: bd
Complete? Yes
Optimal? Yes
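A sketch of iterative deepening DFS: repeated depth-limited searches with an increasing limit, which is what gives it BFS-like completeness with DFS-like O(bd) space. The adjacency-dict graph is an illustrative assumption.

```python
# Iterative deepening DFS sketch over a graph given as an adjacency dict.

def depth_limited(graph, node, goal, limit, path):
    """DFS that never descends more than `limit` edges below `node`."""
    path.append(node)
    if node == goal:
        return list(path)
    if limit > 0:
        for child in graph.get(node, []):
            result = depth_limited(graph, child, goal, limit - 1, path)
            if result is not None:
                return result
    path.pop()                 # backtrack: this branch failed
    return None

def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # limit = 0, 1, 2, ...
        result = depth_limited(graph, start, goal, limit, [])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"]}
print(iddfs(graph, "A", "G"))  # ['A', 'C', 'E', 'G']
```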
Bidirectional search time & space complexity
Time: O(b^(d/2)). Space: O(b^(d/2)) (each of the two searches explores only half the depth).
A* search completeness, time & space complexity, and optimality.
Time: exponential in the worst case (depends on the heuristic)
Space: Keeps all nodes in memory
Complete? Yes, unless there are infinitely many nodes with f ≤ f(goal)
Optimal? Yes, if the heuristic is admissible
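A minimal A* sketch ordering the frontier by f(n) = g(n) + h(n). The tiny weighted graph and the heuristic table are illustrative assumptions (the heuristic shown is admissible for this example).

```python
import heapq

# A* search sketch: frontier is a priority queue ordered by f = g + h.

def astar(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(
                    frontier, (new_g + h[child], new_g, child, path + [child])
                )
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}             # admissible heuristic
path, cost = astar(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'B', 'G'] 4
```

With h(n) = 0 for every node, the same loop reduces to uniform-cost search.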
BFS uses a ____ and DFS uses a ____
Queue, stack
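The queue-vs-stack card can be shown with one traversal loop whose only difference is which end of the frontier gets popped; the example graph is an illustrative assumption.

```python
from collections import deque

# BFS vs DFS sketch: same loop, FIFO pop (queue) vs LIFO pop (stack).

def traverse(graph, start, use_queue=True):
    frontier = deque([start])
    visited, seen = [], {start}
    while frontier:
        # BFS pops the oldest node; DFS pops the newest.
        node = frontier.popleft() if use_queue else frontier.pop()
        visited.append(node)
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(traverse(graph, "A", use_queue=True))   # BFS: ['A', 'B', 'C', 'D', 'E']
print(traverse(graph, "A", use_queue=False))  # DFS: ['A', 'C', 'E', 'B', 'D']
```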
Greedy search completeness, time & space complexity, and optimality.
Time: b^m
Space: b^m
Complete? No, but complete in finite space with repeated state checking.
Optimal? No
A search problem consists of
Initial state, set of actions, goal test, path cost
graph search vs tree search
Tree search does not remember visited states, so it may expand the same state many times and can loop forever when the state space contains cycles. Graph search keeps an explored set and avoids expanding repeated states.
Naive solution for TSP
Start & end at city 1, generate all (n-1)! permutations of cities, calculate the cost of every permutation & keep track of the minimum cost permutation, return the permutation with the minimum cost.
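The naive TSP procedure above can be sketched directly with `itertools.permutations`; the 4-city distance matrix is an illustrative assumption.

```python
from itertools import permutations

# Naive TSP sketch: fix city 0 as start/end, try all (n-1)! orderings
# of the remaining cities, and keep the cheapest complete tour.

def naive_tsp(dist):
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):       # (n-1)! candidate orders
        tour = (0,) + perm + (0,)                # start and end at city 0
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_tour, best_cost

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
tour, cost = naive_tsp(dist)
print(tour, cost)  # (0, 1, 3, 2, 0) 80
```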