CS3033 - Artificial Intelligence Flashcards
What is Moravec’s Paradox?
High-level (“human”) reasoning requires comparatively little computation, while low-level sensorimotor skills require enormous computational resources. AKA things humans find easy, machines find hard, and vice versa
What is rational behavior?
Doing the “right thing”. The right thing: that which is expected to maximise goal achievement, given the available information
What is an agent?
An entity that perceives its environment and acts on it, i.e. takes input and performs actions based on it. A rational agent will attempt to perform the action that maximises goal achievement.
What is a Multiagent system?
A multiagent system is one that consists of a number of agents that interact with one another. In the most general case, agents act on behalf of users with different goals and motivations; to interact successfully, they require the ability to cooperate, coordinate, and negotiate with each other.
What is a reactive system
A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful); i.e. in real time
Define Accessibility/Observability
An accessible or fully observable environment is one where an agent can obtain complete, accurate, up-to-date information about the environment’s state.
Define a Deterministic environment
A deterministic environment is one where each action has a single guaranteed effect. There is no uncertainty about the state that will result from performing an action
Define Episodic environment
An episodic environment is one in which the agent's next action does not depend on its previous actions, e.g. detecting defects on an assembly line.
Define Sequential environment
A sequential environment is one where an agent's current decision could affect all of its future decisions.
Define static vs dynamic
A static environment is one that does not change while the agent is deciding what action to take. A semi-dynamic environment is one that does not change over time, but the agent's performance score does, e.g. chess played with a clock.
Discrete vs Continuous
A discrete environment is one that can be abstracted into countable variables measuring the qualities of the environment; a continuous one cannot.
Known vs Unknown
A known environment is one where the outcomes (or outcome probabilities) of all actions are given. In an unknown environment, the agent will have to learn as it goes.
Define a simple reflex agent
Selects an action based only on the current percept; percept history is not considered. Only works well in a fully observable environment.
Define model-based reflex agents
Maintains an internal state of the world, updated using a transition model (how the world evolves) and a sensor model (how percepts reflect the world's state).
Goal-based agent
Extends the model-based reflex agent: it stores the goal state and uses it to inform its next action.
Utility-based agent
Will choose the action that maximises the expected utility of the action outcomes.
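As a toy illustration, expected-utility action selection can be sketched in a few lines (the action names, outcome probabilities, and utility numbers here are all invented for the example):

```python
def expected_utility(action, outcomes):
    """Sum probability-weighted utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

# Hypothetical outcome model: action -> list of (probability, utility) pairs.
outcomes = {
    "safe":  [(1.0, 5)],
    "risky": [(0.5, 12), (0.5, 0)],
}

# The agent picks the action whose outcomes maximise expected utility.
best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best)  # 'risky' (expected utility 6 beats 5)
```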
What is an uninformed search algorithm?
An algorithm that does not rely on a heuristic function, so it systematically explores the state space searching for the goal state; BFS is an example.
How is a search problem formulated. (Four steps)
Goal formulation, problem formulation, search, execution. Think back to programming assessment
How do we measure how good a search algorithm is? (four points)
Completeness, optimality, time complexity, space complexity.
Explain Breadth First Search
Start from the root node, then explore and expand all neighbouring nodes level by level. BFS is complete in graph search if the state space is finite, but tree search can get stuck in an infinite loop when the state space contains cycles.
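A minimal graph-search sketch of BFS (the adjacency list, start, and goal below are made-up for illustration):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first graph search: returns a path with the fewest edges."""
    frontier = deque([[start]])  # FIFO queue of paths
    explored = {start}           # graph search: never revisit a state
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # goal unreachable

# Toy graph as adjacency lists.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```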
Explain Uniform Cost Search
It acts the same as BFS, but expands the node with the lowest path cost g(n) (the cost to get from the start node to that node); this is done using a priority queue.
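The same sketch with a priority queue ordered by path cost gives uniform-cost search (the weighted graph below is an invented example):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with lowest g(n)."""
    frontier = [(0, start, [start])]  # (cost so far, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbour, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, neighbour, path + [neighbour]))
    return None

# Cheapest path is A-C-B-D (cost 4), not the fewest-edges path A-B-D (cost 7).
graph = {"A": [("B", 5), ("C", 1)], "C": [("B", 1)], "B": [("D", 2)]}
print(ucs(graph, "A", "D"))  # (4, ['A', 'C', 'B', 'D'])
```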
Explain Depth First Search
Explore the deepest node first, backtracking when a dead end is reached. Variations include Depth-Limited Search and Iterative Deepening Depth-First Search.
Explain Iterative Deepening Depth First search
Combines the benefits of BFS and DFS by running DFS with a gradually increasing depth limit until the goal is found. It is complete, and optimal when all step costs are equal.
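A short sketch of the idea: a depth-limited DFS wrapped in a loop over growing limits (graph and goal are toy examples):

```python
def depth_limited(graph, node, goal, limit, path):
    """DFS that gives up once the depth limit is exhausted."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbour in graph.get(node, []):
        if neighbour not in path:  # avoid cycles along the current path
            found = depth_limited(graph, neighbour, goal, limit - 1, path + [neighbour])
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: repeat depth-limited search with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"]}
print(iddfs(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```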
What is an informed search algorithm?
An algorithm that uses a heuristic function to make informed decisions when selecting a state to move to. It aims to move in the direction of the goal state and get closer to it at each move.
Explain the difference between Greedy-Best and A* search
Greedy best-first search uses the heuristic function only, so the estimated cost is f(n) = h(n). A* uses the sum of the cost to reach the current state from the initial state and the estimated cost to reach the goal state from the current state, namely f(n) = g(n) + h(n).
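A compact A* sketch: uniform-cost search, but with the frontier ordered by f(n) = g(n) + h(n). The graph and heuristic table below are invented (with h chosen to be admissible):

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: orders the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbour], new_g, neighbour, path + [neighbour]),
                )
    return None

# Toy graph; h never overestimates the true remaining cost (admissible).
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(astar(graph, h, "S", "G"))  # (5, ['S', 'B', 'G'])
```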
Explain heuristic admissibility?
An admissible heuristic never overestimates the true cost of reaching the goal.
Explain heuristic dominance?
h2 dominates h1 if h2(n) ≥ h1(n) for every node n (both being admissible). The dominant heuristic expands no more nodes, so it is at least as efficient.
Define local search and its advantages
Does not keep track of paths; it only cares about the final state, not the path used to get there, and does not search the entire space.
Advantages:
Low memory cost
Good for infinite or very large state spaces
Disadvantages:
Not optimal
Explain Hill Climbing algorithm?
It aims to move in the direction of steepest ascent on the state-space landscape. Issues arise when the landscape contains plateaus, ridges, or local maxima, since the algorithm may terminate prematurely. Solutions include allowing sideways moves, stochastic hill climbing, and random-restart hill climbing.
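A sketch of steepest-ascent hill climbing with random restarts, on an invented 1-D objective that has a local maximum at x = 2 and the global maximum at x = 8:

```python
import random

def hill_climb(f, start, neighbours):
    """Steepest ascent: move to the best neighbour until none improves f."""
    current = start
    while True:
        best = max(neighbours(current), key=f)
        if f(best) <= f(current):
            return current  # stuck at a local maximum (or plateau edge)
        current = best

# Toy objective: local maximum at x=2 (f=0), global maximum at x=8 (f=9).
def f(x):
    return -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2

def neighbours(x):
    return [x - 1, x + 1]

def random_restart(f, trials=50):
    """Random-restart hill climbing: rerun from random starts, keep the best."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    results = [hill_climb(f, random.randint(-10, 20), neighbours) for _ in range(trials)]
    return max(results, key=f)

print(hill_climb(f, 0, neighbours))  # 2 (trapped on the local maximum)
print(random_restart(f))             # 8 (a restart escapes to the global maximum)
```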
Explain Simulated Annealing?
Applies a random walk to hill climbing: tolerate moves to a worse state, but decrease this tolerance (the temperature) at each iteration in the hope of finding the global maximum. If the temperature decreases slowly enough, it is complete, and it is efficient in practice.
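A sketch of the acceptance rule: downhill moves are taken with probability exp(delta / T), which shrinks as the temperature T cools. The objective, cooling schedule, and seed are all invented for the example:

```python
import math
import random

def simulated_annealing(f, start, neighbour, schedule, steps=10000):
    """Accept uphill moves always; downhill moves with prob exp(delta / T)."""
    random.seed(1)  # fixed seed so the sketch is reproducible
    current = start
    for t in range(1, steps + 1):
        T = schedule(t)
        if T <= 0:
            break
        candidate = neighbour(current)
        delta = f(candidate) - f(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
    return current

# Same two-peak toy objective: local maximum at x=2, global maximum at x=8.
def f(x):
    return -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2

best = simulated_annealing(
    f,
    start=0,
    neighbour=lambda x: x + random.choice([-1, 1]),
    schedule=lambda t: 100 * 0.95 ** t,  # geometric cooling
)
print(best, f(best))
```

Once T is tiny the algorithm only accepts uphill moves, so it freezes onto the nearest peak, here either x = 2 or x = 8.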
Explain Local Beam Search?
The algorithm works like BFS, except that it keeps only the best k states among the successors generated at each step. Since each expansion is based on the previous k states, there is a risk that the search clusters in one region of the state space; stochastic local beam search is a solution.
What are the three types of crossover in genetic algorithms, plus mutation? Briefly explain.
1-point crossover: choose a random point on the parents and swap their tails around it
n-point crossover: choose n random crossover points, split the parents along those points, and glue together alternating segments
uniform crossover: flip a coin for each gene to decide which parent it is inherited from; the second child receives the inverse assignment
mutation: randomly alter one or more genes (e.g. flip a bit) with small probability, in the hope of producing a better gene
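The operators above can be sketched in a few lines, assuming bit-string chromosomes (the parent strings are toy examples):

```python
import random

def one_point_crossover(p1, p2, point):
    """1-point crossover: swap the parents' tails after the cut point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def uniform_crossover(p1, p2, rng):
    """Uniform crossover: a coin flip picks each child gene's parent;
    the second child gets the inverse choice."""
    c1, c2 = [], []
    for g1, g2 in zip(p1, p2):
        if rng.random() < 0.5:
            c1.append(g1); c2.append(g2)
        else:
            c1.append(g2); c2.append(g1)
    return c1, c2

def mutate(chrom, rate, rng):
    """Mutation: flip each bit independently with a small probability."""
    return [1 - g if rng.random() < rate else g for g in chrom]

p1, p2 = [1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]
print(one_point_crossover(p1, p2, 2))  # ([1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1])
```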
Explain the difference between crossover and mutation?
Crossover is explorative: it combines information from two parents to make a large jump in the search space. Mutation is exploitative: it makes small local changes, and it is the only operator that introduces new gene values.
Why do Local Search algorithms fail in continuous state spaces?
Because there is an infinite branching factor for each node expansion. Fix: use a discretization method to abstract the continuous space into a discrete one.
Explain the two variations of multi-agent environments?
Agents can be cooperative or competitive
What are the key ingredients in Game Theory (Three factors)
What are the two standard representations
What is a dominant strategy in game theory? (Prisoner's dilemma)
What is the Nash Equilibrium in Game Theory
How can transitions be modelled in a non-deterministic search (Sucky suck)
Describe adversarial search and the MINIMAX strategy
What are two strategies to reduce complexity in adversarial search
What are the 5 components of a constraint satisfaction problem
What are the three types of solutions to a CSP
Name an example of Incremental standard search formulation in CSPs and explain it
What are three improvements to the performance of backtracking search
Explain local search for CSPs. Explain the min-conflicts heuristic, plateau search, and constraint weighting.
What are the five requirements of a knowledge based agent
Explain inference via enumeration
Explain forwards and backwards chaining
What are the three failure causes due to large domains. (Probabilities over time)
What is decision theory