Midterm 1 Flashcards
Material up until first order logic
What are the four AI categories?
- Acting Humanly: The Turing Test
- Thinking Humanly: Cognitive Modeling
- Thinking Rationally: “Laws of Thought” (notation and rules of derivation for thoughts)
- Acting Rationally: The Rational Agent (making the correct inference)
What is an Agent?
anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
What are human agent and robotic agent sensors and actuators?
Human:
- sensors: eyes, ears, etc.
- actuators: hands, legs, mouth etc.
Robot
- sensors: cameras, infrared range finders, microphones, pressure sensors etc.
- actuators: various motors etc.
Provide the percepts and actions that would be associated with a smart vacuum
Also explain what agent functions and programs are/do.
Percepts (percept sequence): [location, content], e.g. [A, Dirt]
Actions: Left, Right, Clean
agent function maps from percept histories (sequences) to actions.
f: P* -> A
agent program runs on the physical architecture to produce f
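The vacuum-world card above can be sketched as a tiny reflex agent program: one function implementing f for a single percept. This is a minimal sketch; the function name and string labels are illustrative, not from the notes.

```python
def vacuum_agent(percept):
    """Map a percept [location, content] to an action (Left, Right, Clean)."""
    location, content = percept
    if content == "Dirt":
        return "Clean"          # always clean a dirty square first
    elif location == "A":
        return "Right"          # square A is clean, move to B
    else:
        return "Left"           # square B is clean, move to A
```

For example, `vacuum_agent(("A", "Dirt"))` returns `"Clean"`.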
Def: Rational agent
chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date
rationality requires exploration, learning, and autonomy
Agent structure =
program + architecture
What are the 5 types of agents?
- Simple Reflex Agents: acts according to a rule whose condition matches the current state, as defined by the percept EX. if car-in-front-is-braking then initiate brake
- Model-based Reflex Agents: agent maintains an internal state that depends on the percept history. UPDATE-STATE responsible for creating new internal state description. EX. Autonomous car going from A to B with a state for gas
- Goal-based Agents: agent tries to get as close to the goal as possible and makes predictions about the impact of its actions on the world. EX. at a road junction, auton vehicle can go left, right, or straight. Correct decision depends on where trying to get to (need to have goal information that describes situations that are desirable)
- Utility-based Agents: Multiple solutions, some more efficient than others. Utility function quantifies how happy the agent is (internal). When multiple goals provide a way in which the likelihood of success can be weighed against the importance of the goals
- Learning Agents: composed of learning element to make improvements, a performance element, to select external actions, and problem generator to suggest actions that will lead to new and informative experiences
Why do we need to explore the state space? (do searches)
to see which path gets us to the goal state and with minimum cost
What are the two main types of searches?
- Uninformed (aka “blind search”): only use information available in the problem definition
- Informed (heuristic): use problem-specific knowledge beyond the definition of the problem itself
What are the four factors strategies are evaluated by?
Completeness: is the algorithm guaranteed to find a solution if there is one?
Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is required to perform the search?
What are the properties of Breadth First Search?
Complete?: Yes, if the branching factor is finite
Optimal?: Yes, if all actions have the same cost (explores all options closest to the initial state first, so the first solution found is the shallowest)
Time/ Space: O(b^d)
What is the Pseudocode for Breadth first search?
Look at notes…
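The notes defer to class pseudocode, but BFS can be sketched roughly as below: a FIFO frontier, a parent map doubling as the explored set, and a goal test at node generation (a minimal sketch; names are illustrative).

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Return a start-to-goal path found by BFS, or None.
    `successors(state)` yields the neighboring states."""
    if start == goal:
        return [start]
    frontier = deque([start])
    parent = {start: None}          # also serves as the explored set
    while frontier:
        state = frontier.popleft()  # FIFO: shallowest node first
        for child in successors(state):
            if child not in parent:
                parent[child] = state
                if child == goal:   # goal test on generation
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None
```

On a toy graph `{'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}`, searching A to D returns `['A', 'B', 'D']` — the shallowest path, matching the optimality property above.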
What are the properties of Uniform Cost Search?
Complete: yes, if every step cost is positive (≥ some ε > 0) — every node gets searched if the solution is not found earlier
Optimal: yes (expands nodes in order of total path cost g(n); C* = cost of the optimal initial-to-solution path)
Time/ Space: O(b^(1+⌊C*/ε⌋)) — C* = optimal solution cost, ε = smallest step cost (worst case)
What is the Pseudocode for Uniform Cost Search?
look at notes…
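As a stand-in for the class pseudocode, UCS can be sketched with a priority queue ordered by path cost g(n), with the goal test applied on expansion (which is what makes it optimal). A minimal sketch; names are illustrative.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """`successors(state)` yields (child, step_cost) pairs.
    Returns (cost, path) of the cheapest path, or None."""
    frontier = [(0, [start])]          # priority queue ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:              # goal test on expansion => optimal
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                   # stale queue entry, skip
        for child, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, path + [child]))
    return None
```

On `{'A': [('B', 1), ('C', 4)], 'B': [('C', 1)]}`, searching A to C returns cost 2 via B, not the direct cost-4 edge.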
What are the properties of Depth First Search?
Complete: No (fails for infinite depth); yes for finite depth
Optimal? No
Time: O(b^m)
Space: O(bm)
Depth-limited search: nodes at depth L treated as if they have no successors. What happens if L < d or L > d, what are time/space, and how do we set L?
If L < d: incomplete (the solution gets cut off)
If L > d: complete, sub-optimal (i.e. d = 20 (sol’n), L = 50)
Time: O(b^L)
Space: O(bL)
One way to set L is domain knowledge.
What is the iterative deepening search pseudocode?
check notes
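In place of the class pseudocode, iterative deepening can be sketched as repeated depth-limited search with L = 0, 1, 2, … (a minimal sketch assuming a tree-shaped state space — no cycle checking; names are illustrative).

```python
def depth_limited_search(state, goal, successors, limit):
    """Recursive DLS: nodes at depth `limit` act as if they have no successors."""
    if state == goal:
        return [state]
    if limit == 0:
        return None                     # cutoff reached
    for child in successors(state):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Run DLS with L = 0, 1, 2, ... until a solution is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result is not None:
            return result
    return None
```

Re-expanding the shallow levels each round looks wasteful, but the deepest level dominates the total work, which is why time stays O(b^d) while space stays O(bd).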
What are the properties of iterative-deepening search?
complete: yes
optimal: yes
time: O(b^d)
space: O(bd)
What is the bidirectional search? and what is the motivation for using it?
When the goal state is known, two searches are carried out simultaneously, one forward from the initial state, one backward from the goal
motivation: O(b^(d/2)) + O(b^(d/2)) ≪ O(b^d)
What is the Best-First search? and what is its evaluation function?
Identical to uniform-cost except that a general evaluation function f(n) is used to order the frontier
its evaluation function (with h(n) = 0 at a goal node) is
f(n) = g(n) + h(n)
want to minimize the estimated remaining cost to the goal
What is the Greedy Best-First Search? and what is its evaluation function?
expands the node that appears closest to the goal, therefore, evaluates node using only heuristic function
f(n) = h(n)
What are the properties of Greedy Best-First Search?
Complete: yes, in finite spaces
Optimal: not necessarily, other routes might be shorter
Time: O(b^m) worst case (could be reduced dramatically with a good heuristic/evaluation function)
Space: O(b^m) worst case, since it will most likely keep every node in memory
What is the A* search? what is its evaluation function? and is it complete/ optimal?
similar to uniform-cost but different function:
f(n) = g(n) + h(n)
both optimal and complete under certain circumstances (complete in finite spaces; optimal if h(n) is admissible, i.e. never overestimates the true remaining cost)
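The f(n) = g(n) + h(n) ordering above can be sketched by reusing the uniform-cost machinery with the heuristic added to the queue priority (a minimal sketch; names are illustrative).

```python
import heapq

def a_star_search(start, goal, successors, h):
    """A*: expand the node with lowest f(n) = g(n) + h(n).
    `successors(state)` yields (child, step_cost); `h(state)` is the heuristic."""
    frontier = [(h(start), 0, [start])]    # ordered by f, then g
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        state = path[-1]
        if state == goal:
            return g, path
        for child, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h(child), new_g, path + [child]))
    return None
```

With h ≡ 0 this degenerates to uniform-cost search; with a good admissible h it expands far fewer nodes while still returning the optimal path.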
What are some viable heuristics for the 8-puzzle problem?
h1 = number of misplaced (incorrect) tiles (initially 8 in the example — goal is to minimize this); a move that does not put a tile into its correct place does not improve h1
h2 = sum of the (Manhattan) distances of the tiles with respect to their correct locations
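Both heuristics are easy to compute directly; here is a minimal sketch, assuming the board is a flat tuple with 0 for the blank (names and representation are illustrative).

```python
def h1_misplaced(state, goal):
    """Count tiles not in their goal position (the blank, 0, is excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal, width=3):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                    # don't count the blank
        j = goal.index(tile)            # tile's goal index
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total
```

Since h2 ≥ h1 on every state while both remain admissible, h2 dominates h1 and A* with h2 never expands more nodes.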