350 AI Flashcards
Name a few game AI objectives
1) Aid human players by delegating cumbersome tasks to the AI system
(e.g., move groups of units from X to Y along the shortest path,
attack a town with a large number of units from multiple angles)
2) Create challenging adversaries and collaborators
* Strategic AI (e.g., where to build what, and when?)
* Tactical AI (e.g., small-scale combat)
* Push the state of the art of AI in general, challenging human experts
* Can help designers balance game parameters and find bugs
3) Learning: the ability to improve through gameplay experience
What are the advantages and disadvantages of scripted AI systems?
Advantages:
* Does not require much CPU time (important because graphics still gets the majority of the cycles)
* Can mimic expert behaviour directly
Disadvantages:
* Predictable
* Cannot deal well with unforeseen events
Describe some map representations commonly used in video games.
1) Grid-based maps: divide the map into a grid of squares or hexagons.
- Advantages:
1. Conceptually simple representation
2. Local changes have local effects, which is well suited to dynamic environments
3. Perfectly represents tile-based environments
4. Exact paths are easy to determine
- Disadvantages:
1. Imprecise representation of arbitrary barriers
2. Increased precision in one area increases complexity everywhere, potentially causing a large memory footprint
3. Awkward for objects that are not tile-sized and -shaped
4. Paths need post-processing if the environment allows arbitrary motion angles (or A* must be tweaked)
2) Polygon-based maps: use polygons to represent the various areas and features, then find a path between two points that does not cross any constraints.
Describe advantages and disadvantages of polygon-based map representations.
- Advantages:
1. Arbitrary polygon obstacles
2. Arbitrary motion angles
3. Memory efficient
4. Finding optimal paths for circular objects is not hard
5. Topological abstractions
- Disadvantages:
1. Complex code
2. Robustness issues
3. Point localization no longer takes constant time
3) Node and Graph Maps: Utilize nodes and edges to represent locations and paths
Explain the concept of visibility graphs and their properties w.r.t. pathfinding
Place nodes at the corners of obstacles, then place edges between nodes that are visible to each other. To find a path from one point to another:
* add the start and end points as nodes to the graph, connecting them to the nodes they can see;
* run a pathfinding algorithm on the resulting graph.
* Advantages:
1. Path provably optimal
* disadvantages:
1. Adding to or changing the world can be expensive, as the graph can be dense: every node can potentially “see” every other node, leading to quadratic space and time requirements.
2. Connecting the start/end nodes to the graph can already take linear time.
3. The corner-corner visibility test is not cheap.
Explain what free-space decompositions are
Free-space decomposition decomposes the empty areas of the map into simple convex shapes and creates a waypoint graph by placing nodes on unconstrained edges and, if needed, in face interiors. Nodes are connected according to their direct reachability. To find a path between two points A and B: locate the faces in which A and B reside, connect A and B to all nodes of those faces, find a path in the resulting graph, and finally smooth the path.
Give a formal definition of directed and undirected graphs
An undirected graph is a pair G = (V,E) that is composed of a non-empty vertex ( or node) set V and an edge set E, such that each edge contains one or two vertices, i.e. for all e ∈ E, e ⊆ V and 1 ≤ |e| ≤ 2. The number of neighbours of v is called degree (deg(v)).
A directed graph is a pair G = (V,E) that is composed of a non-empty vertex set V and an edge set E ⊆ V × V. Edge e = (u,v) connects vertex u with vertex v. deg+(v) : out-degree, deg−(v) : in-degree
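The undirected definition above maps directly onto an adjacency-set representation; a minimal Python sketch (the class and method names are my own, not from the source):

```python
from collections import defaultdict

class Graph:
    """Undirected graph G = (V, E) stored as adjacency sets."""
    def __init__(self):
        self.adj = defaultdict(set)

    def add_edge(self, u, v):
        # Each edge e = {u, v} satisfies e ⊆ V and 1 <= |e| <= 2;
        # u == v gives the one-vertex (self-loop) case.
        self.adj[u].add(v)
        self.adj[v].add(u)

    def degree(self, v):
        """deg(v): the number of neighbours of v."""
        return len(self.adj[v])

g = Graph()
g.add_edge('a', 'b')
g.add_edge('a', 'c')
print(g.degree('a'))  # 2
```

For a directed graph, one would keep two adjacency maps instead, so that deg+(v) and deg−(v) can both be read off directly.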
Describe the following algorithms and their purpose: BFS
BFS is a graph traversal algorithm that explores all neighbours at one depth level before moving to the next. It is implemented using a queue and is used for shortest paths in unweighted graphs and level-order tree traversal.
For G = (V, E) the runtime of BFS is Θ(|V| + |E|), because in the worst case every node is visited once and every edge is visited twice. BFS uses Θ(|V|) space.
This is good news: BFS’s runtime is asymptotically optimal, because at the very least we have to read all of the input.
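The description above can be sketched in a few lines of Python; `bfs_distances` (a name of my own) returns the hop-count from the start to every reachable node:

```python
from collections import deque

def bfs_distances(adj, start):
    """BFS on an unweighted graph adj = {u: [v, ...]}.
    Runs in Θ(|V| + |E|) time and Θ(|V|) space."""
    dist = {start: 0}
    queue = deque([start])            # FIFO queue drives level order
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:         # first visit = shortest hop-count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s', 'c'], 'c': ['a', 'b']}
print(bfs_distances(adj, 's'))  # {'s': 0, 'a': 1, 'b': 1, 'c': 2}
```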
Dijkstra’s
Dijkstra’s Algorithm
- Purpose: finds the shortest paths from a single source to all other nodes in a graph.
- Process:
* Iteratively expands the “reached set” S_i by selecting the closest unvisited node u with minimal distance d[u].
* Updates the distances of u’s neighbours to account for potentially shorter paths through u.
- Runtime complexity:
* With an array: Θ(|V|²), which is optimal for dense graphs, where |E| ∈ Θ(|V|²).
* With a priority queue: O(|E| log |E|), which is better for sparse graphs; for |E| ∈ O(|V|) this gives O(|V| log |V|).
- Key points:
* Suitable for weighted graphs with non-negative edge weights.
* A priority queue improves efficiency, especially on sparse graphs.
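The process above can be sketched with a binary heap; this version uses lazy deletion (stale heap entries are skipped on pop) instead of a decrease-key operation, and all names are illustrative:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths on adj = {u: [(v, w), ...]}
    with non-negative weights; O(|E| log |E|) with a binary heap."""
    dist = {source: 0}
    heap = [(0, source)]
    done = set()                      # the "reached set" S_i
    while heap:
        d, u = heapq.heappop(heap)    # closest node not yet finalized
        if u in done:
            continue                  # stale entry: a shorter path won earlier
        done.add(u)
        for v, w in adj[u]:
            nd = d + w                # candidate path through u
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {'s': [('a', 4), ('b', 1)], 'a': [('c', 1)], 'b': [('a', 2)], 'c': []}
print(dijkstra(adj, 's'))  # {'s': 0, 'a': 3, 'b': 1, 'c': 4}
```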
A*
Main loop:
▶ remove the lowest-f node n from OPEN
▶ is n a goal? yes => stop
▶ move n to CLOSED
▶ expand n: consider its children
▶ as far as we know, we are done with this node (but it can be re-opened later)
Consider a child:
▶ check whether its state has been seen before (in OPEN or CLOSED):
▶ if the state has been seen with the same or a smaller g value, reject it
▶ otherwise, remove the child state from OPEN and CLOSED and add the corresponding node to OPEN for consideration
The OPEN/CLOSED lists act as a cache of previously seen results.
This algorithm is used for efficient pathfinding.
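A compact sketch of the main loop above. The heap plays the role of OPEN with lazy deletion, and the same-or-smaller-g test rejects already-seen states; the function name and the small example graph are my own:

```python
import heapq

def astar(adj, h, start, goal):
    """A* on adj = {u: [(v, w), ...]} with heuristic h(n).
    Returns the cost of the best path found, or None if unreachable."""
    g = {start: 0}
    open_heap = [(h(start), start)]        # OPEN, ordered by f = g + h
    while open_heap:
        f, n = heapq.heappop(open_heap)    # remove lowest-f node from OPEN
        if n == goal:
            return g[n]                    # goal reached: stop
        if f > g[n] + h(n):
            continue                       # stale entry (node was re-opened)
        for child, w in adj[n]:
            ng = g[n] + w
            if child in g and ng >= g[child]:
                continue                   # seen with same or smaller g: reject
            g[child] = ng                  # otherwise (re-)open the child
            heapq.heappush(open_heap, (ng + h(child), child))
    return None

adj = {'s': [('a', 1), ('b', 4)], 'a': [('goal', 3)], 'b': [('goal', 1)], 'goal': []}
h = {'s': 3, 'a': 3, 'b': 1, 'goal': 0}.get   # admissible estimates for this graph
print(astar(adj, h, 's', 'goal'))  # 4  (via s -> a -> goal)
```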
UCS
- Frontier priority queue:
* Uses a priority queue to manage the frontier - nodes that have been seen but not yet expanded.
* Nodes are prioritized by their path cost from the start node; the nodes with the lowest cumulative cost are selected first.
- Cost function:
* UCS relies solely on the cost function g(n), which represents the total cost from the start node to node n along the best known path.
- Path expansion:
* The algorithm selects the node with the lowest g(n) from the priority queue and expands it, examining its adjacent nodes (successors).
* If a newly discovered node has not been seen before, or a cheaper path to it is found, it is added to the priority queue with its corresponding g(n) value.
- Goal test:
* Upon each node expansion, UCS checks whether the goal has been reached. Because of the order of node expansion, UCS guarantees that the path found is a least-cost path.
* The search stops when the goal node is selected for expansion, not when it is first seen.
UCS is ideal for scenarios where path cost is a critical factor and the goal is to minimize this cost. It is used in route planning, network routing, and other scenarios with consistent path costs.
Runtime Complexity:
Time complexity is O(b^d), where b is the branching factor and d is the depth of the least-cost solution.
Space complexity is also O(b^d), as all frontier nodes are kept in memory.
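A minimal sketch of UCS under these assumptions (names are my own); note that the goal test happens when a node is popped for expansion, not when it is first generated:

```python
import heapq

def ucs(adj, start, goal):
    """Uniform-cost search on adj = {u: [(v, w), ...]}.
    Returns the least path cost from start to goal, or None."""
    frontier = [(0, start)]                  # prioritized by g(n)
    best_g = {start: 0}
    while frontier:
        g, n = heapq.heappop(frontier)
        if n == goal:
            return g                         # goal test on expansion: least cost
        if g > best_g.get(n, float('inf')):
            continue                         # stale frontier entry
        for succ, w in adj[n]:
            ng = g + w
            if ng < best_g.get(succ, float('inf')):
                best_g[succ] = ng            # cheaper path found: (re-)enqueue
                heapq.heappush(frontier, (ng, succ))
    return None

adj = {'s': [('a', 1), ('b', 4)], 'a': [('goal', 5)], 'b': [('goal', 1)], 'goal': []}
print(ucs(adj, 's', 'goal'))  # 5  (s -> b -> goal beats s -> a -> goal = 6)
```

The goal node first enters the frontier with cost 6 via `a`, but the cheaper entry via `b` is popped first, which is exactly why the late goal test is required for optimality.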
What exactly does “f(n) ∈ O(n²)” mean?
There exist c > 0 and n0 ∈ ℕ such that for all n ≥ n0: |f(n)| ≤ c·n².
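As a concrete check of the definition, here is one valid witness pair (c, n0) for a sample polynomial; the particular function and constants are my own choice:

```python
def f(n):
    return 3 * n * n + 5 * n + 7   # claim: f(n) ∈ O(n²)

# Witnesses c = 4, n0 = 7: 3n² + 5n + 7 <= 4n² iff n² >= 5n + 7,
# which first holds at n = 7 (49 >= 42).
c, n0 = 4, 7
assert all(abs(f(n)) <= c * n * n for n in range(n0, 10_000))
assert not abs(f(n0 - 1)) <= c * (n0 - 1) ** 2   # for this c, n0 = 7 is tight
```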
When is a heuristic admissible/consistent?
A heuristic h is admissible if and only if 0 ≤ h(n) ≤ h*(n) for all n, where h*(n) is the true minimal cost to the goal from node n.
A heuristic is consistent (or monotone) iff:
h(G) = 0 for all goal states G and
h(n) <= cost(n,s) + h(s) for all n and their successors s (also known as triangle inequality)
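The two conditions can be verified mechanically on a small example; a sketch with a hypothetical three-point graph, using straight-line distance to the goal as h (all names and coordinates are my own):

```python
import math

# Points on a line: start 's', intermediate 'a', goal 'g'.
pts = {'s': (0, 0), 'a': (1, 0), 'g': (2, 0)}
edges = [('s', 'a', 1.0), ('a', 'g', 1.5)]   # (n, successor, cost(n, s))

def h(n):
    """Euclidean distance to the goal: a classic consistent heuristic."""
    (x, y), (gx, gy) = pts[n], pts['g']
    return math.hypot(x - gx, y - gy)

# h(G) = 0 at the goal, and the triangle inequality holds on every edge.
assert h('g') == 0
assert all(h(n) <= w + h(s) + 1e-9 for n, s, w in edges)
```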
Sketch the A* correctness proof
Key Mechanism: Combines actual path cost (g) and a heuristic estimate (h) to form f = g + h for each node.
Heuristic: Must be admissible
Correctness proof:
1. Applies to graphs with bounded neighbourhoods and positive edge costs (≥ ε > 0).
2. Lemma for OPEN nodes:
* At any point, there is a node n in OPEN such that:
* n is on an optimal path to the goal G,
* A* has found an optimal path to n, and
* f(n) ≤ C*, where C* is the minimal cost from the start S to G.
3. Base case:
* Initially, S is in OPEN with an optimal (empty) path.
* f(S) ≤ C* since h is admissible.
4. Inductive step:
* If the current node n is not the goal and is expanded, then:
* its successors include a node m on an optimal path;
* an optimal path to m is found (otherwise, a shorter path to G would exist, contradicting the optimality of the path through n);
* f(m) ≤ C* due to the admissibility of h.
5. Termination with success:
* If A* terminates with “success”, it has found an optimal path.
* If G were expanded via a suboptimal path, a contradiction arises, since a node n with a smaller f exists in OPEN.
6. Termination proof:
* A* generates acyclic paths (no cycles) with positive edge weights.
* Every new or promoted node in OPEN represents a new path.
* Under the given conditions only finitely many acyclic paths of cost up to C* exist, so A* terminates: the maximal distance to consider is C*, which is reached after at most ⌈C*/ε⌉ steps, and hence the algorithm stops.
Prove that if h is consistent then f is non-decreasing along any path A* considers ( i.e., f is monotone)
- Consistent heuristic defined:
* A heuristic h is consistent if, for every node n and every successor n′ of n, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost from n′ to the goal.
- Formula for f and g:
* f(n) = g(n) + h(n), where:
* g(n) is the actual cost from the start node to node n,
* h(n) is the estimated cost from n to the goal, and
* f(n) is the sum of the actual cost to n and the heuristic cost from n to the goal.
- Assumptions and implications:
* Assume n′ is a successor of n, which means:
* g(n′) = g(n) + cost(n, n′)
* f(n′) = g(n′) + h(n′) = g(n) + cost(n, n′) + h(n′)
* By the consistency of h, cost(n, n′) + h(n′) >= h(n).
* Therefore, f(n′) >= g(n) + h(n) = f(n).
- Result:
* The calculation shows that f(n′), the evaluation of any successor n′ of n, is at least as great as f(n).
* This proves that f is non-decreasing (monotone) along any path that A* considers.
As A* progresses along a path, the f values of the nodes do not decrease. This monotonicity of f guarantees that A* expands nodes in order of their optimal path costs, leading to the correct solution.
Prove that h being consistent implies h is admissible ( you can assume that f is monotone)
- Assumption for contradiction:
* Assume that for some state S from which the goal G is reachable, h(S) > h*(S), where h*(S) is the true minimal cost from S to G.
* Consider an optimal path from S to G. Since h is consistent, f is monotone, so f is non-decreasing along this path.
- Monotonicity of f:
* At the goal, f(G) = g(G) + h(G) = g(G) + 0 = g(G), because h(G) = 0 for goal states; along the optimal path, g(G) = h*(S).
* By monotonicity, f(G) >= f(S) = g(S) + h(S) = h(S), since g(S) = 0.
- Contradiction:
* Combining the two, h*(S) = f(G) >= f(S) = h(S), i.e. h*(S) >= h(S), contradicting our assumption that h(S) > h*(S).
* Therefore, h(S) <= h*(S) for all S, which means h is admissible.
What algorithm would you choose for computing all shortest distances between a fixed node and all other nodes in a graph? Why? What is the algorithm’s worst case runtime complexity on graph G=(V,E)?
Dijkstra’s algorithm is ideal here: it computes the shortest distances from one fixed node to all other nodes in a single run (assuming non-negative edge weights).
Complexity: O(|V|²) with an array; with a min-priority queue this can be reduced to O((|V| + |E|) log |V|).
What data structure is well-suited for the A* OPEN variable? Why?
A priority queue, which efficiently manages the nodes still to be explored: the node with the lowest f value can be removed in O(log n) time, and new or updated nodes can be inserted equally cheaply.
Describe the following algorithms: HPA*
Hierarchical Pathfinding A* (HPA*)
* Purpose: HPA* is an algorithm designed to efficiently find paths on large and complex maps by abstracting the search space into a hierarchy of interconnected regions.
* Pre-processing: sectors are defined over the map grid to create an abstract graph.
1. Graph nodes represent sector entrances, and edges represent possible paths between these nodes.
2. Intra-sector edges connect nodes within the same sector, while inter-sector edges connect nodes across different sectors.
* Path-finding:
1. Paths are found in the abstract graph, which is much smaller than the original map, making the search faster.
2. The abstract path is then translated back into the original map, often requiring a smoothing step to create a more direct route.
* HPA* process:
1. Divide the map: break the map down into smaller, manageable sectors.
2. Build graph: create a graph where nodes are entrances/exits of sectors and edges represent traversable paths.
3. Compute distances: calculate the distances within sectors (intra-sector) and between sectors (inter-sector).
4. Find abstract path: use A* to find a path in this abstract graph.
5. Smooth path: refine the abstract path to align with the actual map, improving path quality.
- Improvements in the HPA* implementation:
1. Uses Dijkstra’s algorithm (or UCS) to compute intra-sector distances, which ensures that the shortest paths within sectors are found.
2. On-demand intra-sector connections allow for dynamic updates to the graph, benefiting environments where changes occur.
3. Window-based smoothing helps create more natural paths without sharp turns or unnecessary detours.
- Performance:
1. Significantly faster than standard A* on large maps due to the reduced number of nodes in the abstract graph.
2. Smoothing is a trade-off: it improves path quality but slows down the algorithm.
- Advantages:
1. Ideal for dynamic worlds, where local changes do not require full recomputation.
2. Scalable to large maps due to the hierarchical abstraction.
3. Simple in implementation and concept.
- Disadvantages:
1. The initial setup of connecting nodes to the abstract graph can be time-consuming.
2. Without smoothing, the path may not be the most direct route.
PRA*
- Purpose: PRA* is an algorithm for efficient pathfinding in large maps using hierarchical abstraction and refinement.
- Key Components:
1. Abstraction: utilizes “clique abstraction” to build a hierarchical map representation, and identifies abstract nodes for the start and goal positions.
2. Path-finding:
- Performs A* search at a high abstraction level.
- Refines the path by projecting it down to lower levels.
- Employs corridor-based A* searches for refinement.
- PRA* Process:
* Abstraction Hierarchy: Create levels of map abstraction.
* High-Level Search: Use A* at a higher level for initial path.
* Path Refinement: Project path to lower levels, refining at each step.
- Performance:
1. Faster than traditional A* on large maps.
2. Maintains high path quality, nearly optimal.
3. Ideal for applications needing speed and quality, like video games.
- Advantages:
1. Efficient for large-scale maps due to hierarchical approach.
2. Balances speed and path optimality effectively.
- Disadvantages:
1. More complex than basic A* due to multiple levels of abstraction and refinement.
TA*
- Purpose: TA* is a pathfinding algorithm that operates on a mesh of triangles (a triangulation) instead of a traditional grid. It is designed to optimize pathfinding in polygonal spaces by leveraging the geometric properties of triangulations.
- Key Features:
1. Base Triangulation: TA* runs on a base layer of triangulation, using the triangles as the primary elements of the search space.
2. Search States: Each triangle represents a search state. Adjacent triangles, reachable across unconstrained edges, are considered neighbours.
3. Heuristic: the heuristic (h-value) is the Euclidean distance from the goal to the closest point on the triangle’s entry edge, providing an estimate of the remaining distance to the goal.
4. Distance Estimate: The g-value, or the distance traveled so far, is underestimated for efficient computation. This estimate is improved as the search progresses.
5. Anytime algorithm: TA* is an anytime algorithm, meaning it can return a valid path even if the search is stopped before completion.
6. Multiple paths: it considers multiple paths to a triangle, refining the search for the shortest path iteratively.
7. Search continuation: the search continues until either a time limit is reached or no path can significantly improve the current shortest distance found.
- Algorithm process:
- Initiate Search: Begin with triangles in the base triangulation, identifying the start and goal triangles.
- Expand Search: Expand to adjacent triangles, updating g-values and h-values accordingly.
- Path Selection: Evaluate paths using an A* search framework, selecting paths that minimize the f-value (g + h).
- Path Update: If a shorter path is found, update the current best path.
- Termination: Conclude the search once the best path is found or time constraints are met.
Applications:
TA* is particularly well-suited for environments with complex geometric shapes, where traditional grid-based pathfinding may be inefficient or inaccurate.
TRA*
- Purpose: TRA* is an advanced pathfinding algorithm that operates on a reduced triangulation map to enhance efficiency in finding optimal paths.
- Key Concepts:
1. Triangulation abstraction: TRA* builds on the TA* algorithm, utilizing an abstraction of the original map into a triangulated graph.
2. Special cases handling: before engaging in a complex search, TRA* checks for special cases where a direct path can be easily determined without a full search.
3. Degree-3 nodes: these are central nodes within the triangulation that have three connections. TRA* uses these nodes as key waypoints or decision points in the pathfinding process.
- Algorithm process:
- Search State Selection: TRA* starts by moving from the initial and goal positions to their respective adjacent degree-3 nodes.
- Child Generation: Degree-3 nodes generate children, which are also degree-3 nodes, connected via corridors in the triangulation.
- Anytime Algorithm: TRA* functions as an anytime algorithm, meaning it can return a valid path at any interruption point and can handle multiple potential paths to any node.
- Heuristic and Cost Values: TRA* utilizes the same heuristic (h-value) and cost-so-far (g-value) as TA*, often based on Euclidean distances and actual travel costs within the triangulated abstraction.
- Efficiency and Performance:
- Reduced Search Space: By focusing on degree-3 nodes, TRA* effectively reduces the search space, leading to faster computation times.
- Multiple Path Consideration: The ability to consider and refine multiple paths ensures that TRA* can find highly efficient routes without exhaustive search.
- Usage:
1. TRA* is particularly useful in large maps with complex geometries, where traditional grid-based A* might be inefficient due to the high number of nodes and potential paths.
What are triangulations and how to construct them incrementally?
- Start with a defined area:
* Begin with a specific area, such as a rectangle, and identify a set of points within this area, including the corners of the rectangle.
- Add edges:
* Create edges between these points to form triangles, ensuring that the edges you add do not cross each other.
- Continue adding edges:
* Proceed to add edges until no more can be added without crossing existing edges.
- Resulting decomposition:
* The final result is a triangle-based decomposition that covers the convex hull of the given points, i.e. the smallest convex set that contains all the points.
- Triangulation quality:
* For a given point set, multiple triangulations can exist. It is desirable to avoid very narrow, ‘sliver-like’ triangles, because they decrease the quality of the triangulation and make certain calculations, like the distance heuristic, less reliable.
Describe what Delaunay triangulations are and how to construct them
- Delaunay Triangulations are a specific type of triangulation that maximizes the minimum angle in each triangle. This tends to create more regular and equiangular triangles, avoiding sliver-like shapes.
- These can be created locally by “edge flipping,” which is the process of swapping a shared edge between two triangles to ensure that no point lies within the circumcircle of any triangle except for its vertices.
- Initial Triangulation:
Begin with a large ‘helper’ triangle that encompasses all the points in the set.
- Point Insertion:
Randomly select a point from the set and locate the triangle within the existing triangulation that the point falls in.
- Subdivision:
Subdivide this triangle into smaller triangles by adding the new point as a vertex.
- Edge Flipping:
Perform edge flipping to maintain the Delaunay property (no point is inside the circumcircle of any triangle).
- Repeat:
Continue this process of adding points and edge flipping until all points are included in the triangulation.
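The geometric core of the edge-flipping step is the in-circumcircle test. A sketch of the standard determinant predicate (the function name is my own; it assumes a, b, c are given in counter-clockwise order):

```python
def in_circumcircle(a, b, c, d):
    """True iff point d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c): the 3x3 incircle determinant
    (coordinates translated so d is the origin) is positive."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
           - (bx * bx + by * by) * (ax * cy - ay * cx)
           + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0

# Triangle inscribed in the unit circle; the origin lies inside that
# circle, so an edge shared with a triangle containing (0, 0) as its
# opposite vertex would have to be flipped.
tri = [(1, 0), (0, 1), (-1, 0)]
print(in_circumcircle(*tri, (0, 0)))  # True
print(in_circumcircle(*tri, (2, 0)))  # False
```

Production code typically uses exact arithmetic for this predicate, since floating-point round-off near the circle boundary can make edge flipping loop or produce an invalid triangulation.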