Midterm grind Flashcards
<p>In Game Playing Algorithms, what is the evaluation function f(n) (or utility value), where n is a board configuration?</p>
<p>The "goodness" of the board configuration n: the number of possible wins for the player that are not blocked by the opponent, minus the number of possible wins for the opponent that are not blocked by the player:<br></br>
<br></br>f(n) = m(n) - o(n)<br></br>
<br></br>"m" is the possible winning states, "o" is the opponent's possible winning states</p>
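<p>A minimal sketch of this evaluation for 3x3 tic-tac-toe (the list-of-lists board representation and 'X'/'O' marks are illustrative assumptions, not from the cards):</p>

```python
# All 8 winning lines of a 3x3 board: rows, columns, diagonals.
LINES = [
    [(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1), (2, 2)],  # rows
    [(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)],  # columns
    [(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)],                            # diagonals
]

def open_lines(board, player):
    """Count possible wins for `player`: lines with no opposing mark."""
    other = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES
               if all(board[r][c] != other for r, c in line))

def f(board, player='X'):
    """f(n) = m(n) - o(n): player's open lines minus opponent's."""
    other = 'O' if player == 'X' else 'X'
    return open_lines(board, player) - open_lines(board, other)
```

For example, X alone in the center leaves all 8 lines open for X but only the 4 lines avoiding the center open for O, giving f(n) = 4.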
What is the open list at a particular state in a search algorithm
The set of nodes in the ready queue AFTER the node has been visited and expanded
<p></p>
<p></p>
<p>Briefly describe DLS</p>
<p></p>
<p></p>
<p>Just like DFS but limits the search to a maximum depth parameter "l". Once the algorithm discovers nodes below depth "l", it does not expand them.</p>
<p></p>
<p></p>
<p>What is the process of "searching"</p>
<p></p>
<p></p>
<p>Moving from state-to-state in the problem space to find a goal (or to terminate)</p>
<p></p>
<p>Describe the Simulated Annealing Algorithm Step-by-step</p>
<p></p>
<p>Set the current solution S0 and the initial temperature T0, and select a temperature reduction function</p>
<p>Repeat:</p>
<p> Repeat:</p>
<p>  Select a solution Si from the neighborhood N(S)</p>
<p>  Calculate the change in cost ΔC for the new solution</p>
<p>  If ΔC < 0, then accept this solution (since it is improving): S = Si</p>
<p>  Else, generate a random number x in the range (0, 1)</p>
<p>   If x < e^(-ΔC/T), then accept this new solution: S = Si</p>
<p> Until termination criterion met</p>
<p> Decrease temperature T using the reduction function</p>
<p>Until stopping criterion met (e.g., minimum temperature reached)</p>
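<p>The steps above can be sketched in Python (the geometric cooling T ← αT and the toy parameters are illustrative assumptions, not prescribed by the cards):</p>

```python
import math
import random

def simulated_annealing(s0, cost, neighbor, t0=100.0, alpha=0.95,
                        inner_iters=20, t_min=1e-3):
    """Sketch of the SA loop: minimize `cost` starting from solution s0."""
    s, t = s0, t0
    while t > t_min:                      # outer loop: cooling schedule
        for _ in range(inner_iters):      # inner loop at a fixed temperature
            si = neighbor(s)              # pick Si from the neighborhood N(S)
            delta_c = cost(si) - cost(s)
            if delta_c < 0:               # improving move: always accept
                s = si
            elif random.random() < math.exp(-delta_c / t):
                s = si                    # worsening move: accept with P = e^(-dC/T)
        t *= alpha                        # reduction function: T <- alpha * T
    return s

# Toy usage: minimize x^2 over the integers with random +-1 steps.
random.seed(0)
best = simulated_annealing(
    s0=10,
    cost=lambda x: x * x,
    neighbor=lambda x: x + random.choice([-1, 1]),
)
```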
<p></p>
<p></p>
<p>Give a reason to use 2 heuristics simultaneously in a searching algorithm</p>
<p></p>
<p></p>
<p>Use 1 heuristic as primary, and 1 as secondary to break ties</p>
<p></p>
<p></p>
<p>What does SA do to escape from local optimums?</p>
<p></p>
<p></p>
<p>Accepts non-improving solutions depending on what the temperature is</p>
<p></p>
<p></p>
<p>What is the motivation behind alpha-beta pruning?</p>
<p></p>
<p></p>
<p>In the game tree, some branches are never played since rational players don't pick sub-optimal moves. Therefore, there is no need to generate the subtree for such moves.</p>
<p></p>
<p></p>
<p>What is the difference between trajectory methods and population-based methods?</p>
<p></p>
<p></p>
<p>Trajectory methods handle one solution at a time, whereas population methods handle multiple solutions at a time</p>
<p></p>
<p></p>
<p>Under what conditions does UCS become Dijkstra's single-source shortest path algorithm?</p>
<p></p>
<p></p>
<p>If g(n) is the path cost from the source node, and the search is run over all nodes rather than stopping at a single goal</p>
<p></p>
<p></p>
<p>What are Meta-Heuristic algorithms</p>
<p></p>
<p></p>
<p>Meta-Heuristics are algorithms that combine heuristics in a higher level framework aimed at efficiently and effectively exploring the search space.
<br></br>
<br></br>They are strategies that "guide" the search process</p>
<p></p>
<p></p>
<p>Are Meta-Heuristics problem specific? are they at risk of getting trapped in confined areas of the search space?</p>
<p></p>
<p></p>
<p>Meta-Heuristics are not problem-specific
<br></br>
<br></br>They utilize mechanisms (depending on the method) to avoid getting trapped in dead ends</p>
<p></p>
<p></p>
<p>Describe the difference between "A Search" and "A* Search"</p>
<p></p>
<p></p>
<p>"A*" is admissible: its heuristic satisfies h(n) <= h*(n). "A" is not required to be admissible; h(n) has no upper bound in the A algorithm.</p>
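<p>A minimal A* sketch under the standard evaluation f(n) = g(n) + h(n) (the adjacency-list graph and dict-based heuristic are illustrative assumptions); with an admissible h, the returned cost is optimal:</p>

```python
import heapq

def astar(graph, h, start, goal):
    """A* sketch: the fringe is a heap ordered by f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]      # (f, g, node, path)
    visited = set()
    while fringe:
        f, g, node, path = heapq.heappop(fringe)  # expand lowest f(n) first
        if node == goal:
            return g, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(fringe,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Toy weighted graph with an admissible heuristic (h(n) <= h*(n) everywhere):
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
h_vals = {'S': 4, 'A': 5, 'B': 1, 'G': 0}
```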
Is DLS complete?
Does DLS always terminate?
Is DLS optimal?
<p></p>
<p></p>
<p>DLS is complete only if a solution exists within the depth limit "l"; any solution deeper than the limit will be missed.
<br></br>
<br></br>DLS always terminates (the depth bound guarantees this), and it is not optimal</p>
<p></p>
<p></p>
<p>How does the min/max algorithm relate to game playing</p>
<p></p>
<p></p>
<p>The game tree alternates between the 2 opponent's moves, and the min/max algorithm can be used to optimize for the best outcome in this game tree.</p>
<p></p>
<p></p>
<p>What are search trees, and how are they abstracted from graphs.</p>
<p></p>
<p></p>
<p>Search trees are superimposed on top of the graph representation of a problem. The graph might be finite, but the search tree can be either finite or infinite: infinite if we allow repeated states due to reversible actions or cycles.</p>
<p></p>
<p></p>
<p>What do Heuristics Aim to do?</p>
<p></p>
<p></p>
<p>Heuristics aim to efficiently generate good solutions, but do not guarantee optimality.
They guide the algorithm
</p>
<p></p>
<p></p>
<p>What are the 2 general optimization methods in searching algorithms?</p>
<p></p>
<p></p>
<p>Constructive methods: starting from scratch and getting to the solution one component at a time. Ex: Sudoku solver, N-Queens
<br></br><br></br>Local Search Methods: Start from an initial solution and iteratively try to replace the current solution with a better one. Ex: TSP</p>
<p></p>
<p></p>
<p>What does a heuristic function do?</p>
<p></p>
<p></p>
<p>A heuristic function gives an estimate of the distance to the goal based on domain-specific information. It guides the algorithm to a better solution by estimating the "goodness" of a node/state.</p>
<p></p>
<p></p>
<p>Under what condition is A* complete?</p>
A* is complete if the branching factor is finite, and every operator (evaluation function) has a fixed positive cost
<p></p>
<p></p>
<p>What is the difference between well-structured problems and ill-structured problems</p>
<p></p>
<p></p>
<p>Well Structured: The existing state and desired state are clearly identified, and the methods to reach the desired state are fairly obvious
<br></br>
<br></br>Ill Structured: The existing state and desired state are unclear, and hence the methods of reaching the desired state cannot be readily found.</p>
<p></p>
<p></p>
<p>What is the purpose of the minimax or min/max algorithm</p>
<p></p>
<p></p>
<p>To determine the best next move and the cost of such move</p>
<p></p>
<p></p>
<p>What does "admissible" mean with respect to heuristics?</p>
<p></p>
<p></p>
<p>A heuristic is said to be "admissible" if it NEVER overestimates the distance to the goal node.
<br></br>
<br></br>It is also known as an "optimistic" heuristic</p>
<p></p>
<p></p>
<p>List the 5 types of informed search algorithms</p>
Hill Climbing Search <br>Greedy Best-first Search <br>Beam Search <br>A Search <br>A* Search
<p></p>
<p></p>
<p>Under what condition does A* act like UCS?</p>
<p></p>
<p></p>
<p>When h(n) = 0 for all n. This is the null heuristic.</p>
<p></p>
<p></p>
<p>What is a rational agent</p>
<p></p>
<p></p>
<p>An agent that, for each perceived sequence of events, does what is expected to maximize performance, on the basis of perceptual history and built-in knowledge</p>
<p></p>
<p></p>
<p>Describe Fully observable vs Partially observable environments</p>
<p></p>
<p></p>
<p>Fully Observable environment:<br></br>sensors detect all aspects that are relevant to the action
<br></br><br></br>Partially Observable environment:<br></br>Sensors are noisy or inaccurate, or parts of the state are missing from the sensor data</p>
<p></p>
<p></p>
<p>What does Behaving rationally mean?</p>
<p></p>
<p></p>
<p>Perceiving the “world” and acting to achieve
<br></br>some goals given a set of beliefs.</p>
<p></p>
<p></p>
<p>Agents work within an environment with certain characteristics. What are these 6 characteristics/dimensions?</p>
Fully Observable vs Partially Observable <br>Deterministic vs Stochastic <br>Episodic vs Sequential <br>Static vs Dynamic <br>Discrete vs Continuous <br>Competitive vs Co-operative
<p></p>
<p></p>
<p>Briefly describe UCS</p>
<p></p>
<p></p>
<p>Expand the node with the lowest cost in the fringe, use a heap/sorted list, always take the min-cost node. If there are numerous nodes with the same cost, take the most recently added one.
<br></br>
<br></br>Always finds the minimum-cost path to the goal node.</p>
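<p>A minimal UCS sketch along these lines (the adjacency-list format is an illustrative assumption; ties here are broken arbitrarily by the heap rather than by recency):</p>

```python
import heapq

def ucs(graph, start, goal):
    """UCS sketch over {node: [(neighbor, cost), ...]}: expand min-g(n) first."""
    fringe = [(0, start, [start])]                # (g(n), node, path)
    visited = set()
    while fringe:
        g, node, path = heapq.heappop(fringe)     # always take the min-cost node
        if node == goal:
            return g, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None

# Toy weighted graph: the cheapest S-to-G path is S-A-B-G with cost 5.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 2)]}
```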
<p></p>
<p></p>
<p>What is the branching factor in a search Tree?</p>
<p></p>
<p></p>
<p>The maximum number of children possible for a node. Denoted with "b"</p>
<p></p>
<p></p>
<p>There are 2 types of Games:
<br></br>
<br></br>Perfect information & Imperfect information
<br></br>
<br></br>Describe and Contrast them</p>
<p></p>
<p></p>
<p>Perfect information:
<br></br>Each player has complete information about the opponent's position and available choices (ex: chess, monopoly)
<br></br>
<br></br>Imperfect information:
<br></br>Each player does NOT have complete information about the opponent's position and available choices (ex: poker)</p>
In Simulated Annealing, what is the cost function?
The quantity the algorithm tries to minimize as much as possible. It is a function of the current state
<p></p>
<p></p>
<p>What is the purpose of using memory structure in trajectory methods?</p>
<p></p>
<p></p>
<p>To escape local minima
<br></br>
<br></br>To improve explorative strategy by avoiding visited nodes</p>
<p></p>
<p></p>
<p>Describe the "Problem Formulation" of Game Playing as Search</p>
<p></p>
<p></p>
<p>State Description: the configuration of the board
<br></br>
<br></br>Initial State: initial board position & whose move it is
<br></br>
<br></br>Operators/Action State: the legal moves that a player can make
<br></br>
<br></br>Goal: is the game over?
<br></br>
<br></br>Utility Function: Measures an outcome of the game and its desirability</p>
What are the 2 categories of engineering problems in terms of the structure of the problem
<p></p>
<p></p>
<p>Well-Structured Problems & Ill-Structured Problems</p>
<p></p>
<p></p>
<p>Does SA find an approximation or exact value? What size search space does it work well in?</p>
<p></p>
<p></p>
<p>SA works very well in large search spaces, and provides a good approximation</p>
<p></p>
<p></p>
<p>Briefly describe BFS</p>
<p></p>
<p></p>
<p>Traverse layer-by-layer, using a FIFO structure (queue) as the fringe.
<br></br>
<br></br>Always finds the path with the fewest edges, but can be slow and memory-hungry.</p>
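<p>A minimal BFS sketch (the adjacency-list format is an illustrative assumption):</p>

```python
from collections import deque

def bfs(graph, start, goal):
    """BFS sketch: FIFO queue of paths, layer-by-layer expansion."""
    fringe = deque([[start]])        # the fringe holds whole paths
    visited = {start}
    while fringe:
        path = fringe.popleft()      # FIFO: oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None

# Toy graph: G is reachable in 2 edges via A, or 3 edges via B and C.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['C'], 'C': ['G']}
```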
<p></p>
<p></p>
<p>What is cooperation</p>
<p></p>
<p></p>
<p>A group working together to solve a joint problem or perform a common task, based on sharing the responsibility for reaching a goal.
<br></br>
<br></br>OR
<br></br>
<br></br>Cooperation is the practice of software/hardware entities working together in order to achieve a certain objective</p>
<p></p>
<p></p>
<p>Alpha-Beta Pruning is an optimization of the minimax searching algorithm.
<br></br>
<br></br>What are the tradeoffs? does it affect the completeness of minimax?</p>
<p></p>
<p></p>
<p>Alpha-Beta is guaranteed to return exactly the same value as the min/max algorithm.
<br></br>
<br></br>It is a pure optimization of min/max without any tradeoffs. It does not affect the completeness, but reduces the number of computations</p>
<p></p>
<p></p>
<p>What are the 4 properties of search algorithms?</p>
Completeness:
Whether or not the algorithm is guaranteed to find a goal node (if it exists).
Optimality:
Is the algorithm guaranteed to find the BEST goal node - with the cheapest cost.
Time Complexity:
How many nodes are expanded?
Space Complexity:
What is the maximum number of nodes stored in memory?
<p></p>
<p></p>
<p>Are meta-heuristics exhaustive?</p>
<p></p>
<p></p>
<p>Meta-Heuristics are non-exhaustive, and they use their own internal heuristic</p>
<p></p>
<p></p>
<p>What are strong vs weak methods?</p>
<p></p>
<p></p>
<p>Strong methods are those designed to address a specific type of problem.
<br></br>
<br></br>Weak methods are general approaches that may be applied to many types of problems. Weak methods are extremely general and not tailored to a specific situation. They are called "weak" because they do not take advantage of more powerful domain-specific heuristics.</p>
Suppose “h” is a heuristic function.
What can you say about the state n if:
1. h(n) = 0
2. h(n) = infinity
<p></p>
<p></p>
<p>h(n)=0 means that the state is a goal node. h(n) = infinity means that the state "n" is a dead-end that can never lead to a goal</p>
<p></p>
<p></p>
<p>Is Greedy Best-first search Complete? is it Optimal?</p>
<p></p>
<p></p>
<p>No and No, it purely depends on the heuristic that it uses</p>
<p></p>
<p></p>
<p>What happens to the expanded nodes in A* search if h(n) is too high or too low?</p>
<p></p>
<p></p>
<p>h(n) too high: Optimal solution will be overlooked (non admissible heuristic)
<br></br>
<br></br>h(n) too low: More nodes than needed are expanded</p>
<p></p>
<p></p>
<p>Compare Hill Climbing Search to Beam Search</p>
<p></p>
<p></p>
<p>Hill Climbing search is equivalent to Beam search with β=1. Beam is like the middle ground between Hill Climbing and Greedy Best First Search</p>
<p></p>
<p></p>
<p>At what point do you prune in minimax alpha beta pruning</p>
<p></p>
<p></p>
<p>If there is an alpha/beta value assigned to a parent node, and ANY grandchild node is below/above the cutoff, then you prune the other grandchildren</p>
<p></p>
<p></p>
<p>Under what condition does A* perform like a perfect algorithm, only expanding the nodes which lead to the solution?</p>
<p></p>
<p></p>
<p>When h(n) = h*(n) for all n. This means that h(n) = actual cost to goal node. In this case, the heuristic gives a perfect distance to the solution for all nodes</p>
<p></p>
<p></p>
<p>Under what circumstances is DFS incomplete? is DFS optimal?</p>
<p></p>
<p></p>
<p>In spaces with loops, or infinite-depth spaces.
<br></br>
<br></br>DFS is not optimal for finding the shortest path</p>
<p></p>
<p></p>
<p>What is the strategy for solving ill-structured problems?</p>
<p></p>
<p></p>
<p>Retrieve a known solution/answer; start from a guess and improve it; search among alternatives by exploring surrounding states.</p>
<p></p>
<p></p>
<p>in SA, what is the acceptance probability?</p>
<p></p>
<p></p>
<p>if delta C <= 0: P = 1
<br></br>
<br></br>if delta C > 0: P = exp(-1*delta C / T)</p>
<p></p>
<p></p>
<p>What is Greedy Best-First search?</p>
<p></p>
<p></p>
<p>A searching algorithm in which the next state is picked using the evaluation function f(n)=h(n). The fringe is a priority queue (heap or self-sorted list).</p>
What does the min/max algorithm assume for the opposing player’s strategy?
<p></p>
<p></p>
<p>Assumes the opponent is rational and optimizes their behaviour & considers their best response/move</p>
<p></p>
<p></p>
<p>Is the min/max search complete? is it optimal?
<br></br>
<br></br>If so, under what conditions</p>
<p></p>
<p></p>
<p>It is complete if the tree is finite,
<br></br>
<br></br>it is optimal assuming that the opponent plays rationally & optimally</p>
<p></p>
<p></p>
<p>What do α and β parameters denote in alpha-beta pruning?
<br></br>
<br></br>Why do these matter?</p>
<p></p>
<p></p>
<p>α: the best value for MAX seen so far. This is the lower bound on the value that a MAX node can be assigned.
<br></br>
<br></br>β: the best value for MIN seen so far. This is the upper bound on the value that a MIN node can be assigned.
<br></br>
<br></br>We use these values to prune cousin nodes and assign cutoff values</p>
<p></p>
<p></p>
<p>What is Beam Search</p>
<p></p>
<p></p>
<p>A greedy best-first search variant which tries to minimize the amount of memory that greedy best-first search uses by only storing β nodes in the fringe. The evaluation function is f(n) = h(n), and the algorithm will not add new nodes to the fringe if the fringe is full.</p>
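<p>A minimal beam-search sketch (the adjacency-list graph and dict-based heuristic are illustrative assumptions; this variant keeps the β best successors per layer):</p>

```python
import heapq

def beam_search(graph, h, start, goal, beta=2):
    """Beam search sketch: like greedy best-first, but the fringe holds
    at most beta nodes, ranked by f(n) = h(n)."""
    beam = [start]
    visited = {start}
    while beam:
        if goal in beam:
            return True
        successors = []
        for node in beam:
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    visited.add(nxt)
                    successors.append(nxt)
        # keep only the beta best-looking successors; the rest are discarded
        beam = heapq.nsmallest(beta, successors, key=h)
    return False

# Toy graph: with beta=2 the search keeps A and B, and A leads to the goal.
graph = {'S': ['A', 'B', 'C'], 'A': ['G'], 'B': [], 'C': []}
h = {'S': 5, 'A': 1, 'B': 2, 'C': 3, 'G': 0}
found = beam_search(graph, lambda n: h[n], 'S', 'G', beta=2)
```

Discarding nodes outside the beam is what makes the algorithm memory-efficient but incomplete.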
<p></p>
<p></p>
<p>Briefly describe IDS</p>
<p></p>
<p></p>
<p>Iteratively increases the limit of the search. Algorithm performs DLS for d=0, if no solution is found, it increases d=1 and restarts the DLS. If no solution is found it increases again to d=2 and restarts. It keeps on iteratively increasing the maximum depth and restarting the DLS until a solution is found</p>
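<p>A minimal IDS sketch built on depth-limited DFS (tree-shaped adjacency lists are assumed, so there is no repeated-state checking):</p>

```python
def dls(graph, node, goal, limit):
    """Depth-limited search: do not expand nodes below the depth limit."""
    if node == goal:
        return True
    if limit == 0:
        return False                  # at the limit: stop expanding
    return any(dls(graph, child, goal, limit - 1)
               for child in graph.get(node, []))

def ids(graph, start, goal, max_depth=50):
    """IDS sketch: restart DLS with d = 0, 1, 2, ... until the goal is found."""
    for depth in range(max_depth + 1):
        if dls(graph, start, goal, depth):
            return depth              # depth of the shallowest solution
    return None

# Toy tree: G sits at depth 3 (S -> A -> C -> G).
graph = {'S': ['A', 'B'], 'A': ['C'], 'C': ['G']}
```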
<p></p>
<p></p>
<p>Is Beam search Optimal? is it Complete?</p>
<p></p>
<p></p>
<p>No and No, it purely depends on the heuristic that it uses</p>
<p></p>
<p></p>
<p>What are the 3 variations of Hill Climbing Search? Describe them</p>
Stochastic Hill Climbing:
Random Selection among the uphill moves
First-Choice Hill Climbing:
Keeps generating successors until a better one is found & takes it (good when there are many successors)
Random-Restart hill climbing:
Tries to avoid getting stuck in local maxima
<p></p>
<p></p>
<p>Why do we limit the depth in expanding the game tree for min/max search?</p>
<p></p>
<p></p>
<p>Typically, the number of states (board configurations) is too large, leading to an unreasonable depth in the game tree. We optimize this by limiting the depth of the game tree in the traversal.</p>
<p></p>
<p></p>
<p>Describe the step-by-step process of using the min/max algorithm</p>
<p></p>
<p></p>
<p>Generate the game tree (alternating max & min)<br></br>
<br></br><br></br>
<br></br>Apply the utility function to all leaf nodes to give them their utility values<br></br>
<br></br><br></br>
<br></br>From bottom to top; use the leaf nodes’ utility values to determine the utility values of their parents. Repeat this until you get to the root node.<br></br>
<br></br><br></br>
<br></br>Pick the most optimal move from the root node</p>
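<p>These steps can be sketched over an explicit game tree (the list-of-lists representation is an illustrative assumption; leaf numbers play the role of utility values from step 2):</p>

```python
def minimax(node, maximizing=True):
    """Minimax sketch: MAX levels take the max of child values, MIN levels
    the min, propagating utilities from the leaves up to the root."""
    if not isinstance(node, list):    # leaf: its utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX to move at the root, MIN replies at the next level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

Here MIN reduces the three subtrees to 3, 2, and 2, so MAX picks the first move with value 3.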
<p></p>
<p></p>
<p>Describe Hill-Climbing Search</p>
<p></p>
<p></p>
<p>For all successors, it strictly only picks a state which has a lower heuristic value (not even equal), and lower than all the other neighbouring states' heuristic value. It picks this state and ignores all other options.
<br></br>
<br></br>Hill-Climbing search strictly looks just 1 step ahead to determine if a successor is better than the current state, and moves to the best successor. Very memory efficient.
<br></br>
<br></br>It is like climbing a mountain with a thick fog around you</p>
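<p>A minimal hill-climbing sketch, minimizing h (the integer toy problem is an illustrative assumption):</p>

```python
def hill_climbing(start, h, neighbors):
    """Move to the best successor only if it is strictly better than the
    current state; otherwise stop (a local optimum has been reached)."""
    current = start
    while True:
        best = min(neighbors(current), key=h, default=current)
        if h(best) >= h(current):     # no strictly better successor: stop
            return current
        current = best

# Toy usage: minimize h(x) = (x - 3)^2 over integer states with +-1 moves.
result = hill_climbing(0, lambda x: (x - 3) ** 2, lambda x: [x - 1, x + 1])
```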
<p></p>
<p></p>
<p>In SA, what happens when a neighbouring state has higher cost?</p>
<p></p>
<p></p>
<p>A random number is generated between 0 and 1. If P is greater than this random number, it is accepted. In this case P = exp(-1*delta C / T)
<br></br>
<br></br>Note that in this case, the probability of accepting depends on the magnitude of change in cost, and the temperature</p>
<p></p>
<p></p>
<p>What are game playing problems?</p>
<p></p>
<p></p>
<p>Problems which have well-defined rules with moves that can be searched intelligently</p>
<p></p>
<p></p>
<p>What is adaptation</p>
<p></p>
<p></p>
<p>The ability to improve behaviour, evolve, adjust to new or different situations</p>
<p></p>
<p></p>
<p>What condition does UCS need to work?</p>
<p></p>
<p></p>
<p>No zero-cost edges or negative-cost edges</p>
<p></p>
<p></p>
<p>Describe the Process of goal formation</p>
<p></p>
<p></p>
<p>We decide which aspects we are interested in, and which aspects can be ignored.
<br></br>
<br></br>We need goals to limit search and allow termination.</p>
What are the 2 types of optimization Algorithms with respect to solving a non-discrete problem
<p></p>
<p></p>
<p>Exact Algorithms and Approximate Algorithms</p>
<p></p>
<p></p>
<p>What is Artificial Intelligence</p>
<p></p>
<p></p>
<p>The Science and Engineering of making intelligent machines, especially intelligent computer programs. Examples: Visual perception, speech recognition, decision making, language translation</p>