Exam points Flashcards
What is the time complexity of BFS in a connected graph?
For a connected graph G = (V, E), we have E ≥ V − 1, so there are at most E + 1 vertices and V = O(E). Hence the usual BFS bound O(V + E) simplifies to O(E).
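A minimal Python sketch of BFS, assuming the graph is given as an adjacency-list dict (every vertex a key mapping to its neighbours):

```python
from collections import deque

def bfs(adj, source):
    """Breadth-first search. Returns a dict of shortest edge-counts from source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # each vertex is enqueued at most once
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

Every vertex is dequeued at most once and every edge is examined a constant number of times, which is where the O(V + E) bound comes from.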
What are the key steps in formulating a dynamic programming solution?
The key steps are:
Define Subproblems (OPT): Clearly specify what each OPT(i), OPT(i, j), etc., represents.
Recurrence Relation: Determine how to compute OPT based on smaller subproblems.
Initial Conditions: Identify base cases to start the recursion.
Order of Computation: Decide the sequence to compute subproblems (often bottom-up).
Reconstruction (if needed): Keep track of choices to reconstruct the solution.
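As a concrete illustration of these steps, here is a sketch for weighted interval scheduling (jobs assumed to be (start, finish, value) tuples; the function and variable names are made up for this example):

```python
import bisect

def weighted_interval_scheduling(jobs):
    """OPT(i) = best total value using only the first i jobs (sorted by finish time)."""
    jobs = sorted(jobs, key=lambda j: j[1])            # sort by finish time
    finishes = [f for _, f, _ in jobs]
    # p[i] = number of earlier jobs compatible with job i (finish <= its start)
    p = [bisect.bisect_right(finishes, s) for s, _, _ in jobs]

    n = len(jobs)
    opt = [0] * (n + 1)                                # base case: OPT(0) = 0
    for i in range(1, n + 1):                          # bottom-up order
        s, f, v = jobs[i - 1]
        opt[i] = max(opt[i - 1], v + opt[p[i - 1]])    # recurrence: skip job i vs take it

    # reconstruction: walk the table backwards, keeping the jobs that were taken
    chosen, i = [], n
    while i > 0:
        s, f, v = jobs[i - 1]
        if v + opt[p[i - 1]] >= opt[i - 1]:
            chosen.append(jobs[i - 1])
            i = p[i - 1]
        else:
            i -= 1
    return opt[n], list(reversed(chosen))
```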
What properties must a problem have to be suitable for dynamic programming?
The problem must exhibit:
Optimal Substructure: An optimal solution contains optimal solutions to its subproblems.
Overlapping Subproblems: The problem can be broken down into subproblems that are reused multiple times.
How do you define the OPT function in the Knapsack problem for dynamic programming?
Define OPT(i, w) as the maximum total value achievable using the first i items without exceeding weight w. The recurrence is:
OPT(i, w) = max {
  OPT(i - 1, w),                      (exclude item i)
  v_i + OPT(i - 1, w - w_i)           (include item i, only if w_i ≤ w)
}
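A bottom-up sketch that fills this table, assuming integer weights and capacity:

```python
def knapsack(values, weights, W):
    """0/1 knapsack. opt[i][w] = max value using the first i items with weight limit w."""
    n = len(values)
    opt = [[0] * (W + 1) for _ in range(n + 1)]    # base case: OPT(0, w) = 0
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            opt[i][w] = opt[i - 1][w]              # exclude item i
            if wi <= w:                            # include item i if it fits
                opt[i][w] = max(opt[i][w], vi + opt[i - 1][w - wi])
    return opt[n][W]
```

The table has (n + 1)(W + 1) entries, so the running time is O(nW), which is pseudo-polynomial in the input size.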
What is the exchange argument in greedy algorithms, and how is it used to prove correctness?
The exchange argument shows that any optimal solution can be transformed into the greedy solution through a sequence of exchanges, none of which makes the solution worse. Since an optimal solution can be turned into the greedy one without any loss, the greedy solution must itself be globally optimal.
When does a greedy algorithm guarantee an optimal solution?
When the problem has:
Greedy Choice Property: A locally optimal choice leads to a globally optimal solution.
Optimal Substructure: An optimal solution to the problem contains optimal solutions to subproblems.
What is Dijkstra’s algorithm, and what are its limitations?
Dijkstra’s algorithm finds the shortest paths from a source node to all other nodes in a graph with non-negative edge weights by iteratively selecting the node with the smallest tentative distance. Limitation: It cannot handle graphs with negative edge weights.
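A heap-based Python sketch using lazy deletion, assuming an adjacency-list dict of (neighbour, weight) pairs with non-negative weights:

```python
import heapq

def dijkstra(adj, source):
    """Returns a dict of shortest distances from source; adj: u -> list of (v, weight >= 0)."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):      # stale heap entry, skip it
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")): # found a shorter path to v
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O((V + E) log V); the non-negativity assumption is what lets a popped vertex's distance be treated as final.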
How does the Bellman-Ford algorithm handle negative edge weights, and what is its time complexity?
The Bellman-Ford algorithm computes shortest paths in graphs with negative edge weights by relaxing all edges repeatedly. It also detects negative cycles. Time complexity: O(VE), where V is the number of vertices and E is the number of edges.
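A sketch of the relaxation loop, assuming the graph is given as a list of (u, v, weight) triples with vertices numbered 0..V-1:

```python
def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths; negative edge weights are allowed."""
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):          # |V| - 1 rounds of relaxing every edge
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Each round scans all E edges and there are V − 1 rounds, giving the O(VE) bound.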
How do you prove that a problem is NP-complete?
To prove NP-completeness:
Show the problem is in NP: A solution can be verified in polynomial time.
Reduction: Choose a known NP-complete problem and provide a polynomial-time reduction from it to your problem.
Equivalence: Show that the reduction preserves answers, i.e., the original instance of the known problem has a solution if and only if the constructed instance of your problem does.
What is Big O notation, and why is it important in algorithm analysis?
Big O notation describes an upper bound on the time (or space) complexity of an algorithm, focusing on the growth rate as the input size increases. It’s important for classifying algorithms by their efficiency and for comparing their performance on large inputs.
What are the differences between Breadth-First Search (BFS) and Depth-First Search (DFS)?
BFS:
Explores neighbors level by level.
Used for shortest paths in unweighted graphs, testing bipartiteness.
Time complexity: O(V + E).
DFS:
Explores as far as possible along each branch before backtracking.
Used for cycle detection, topological sorting, finding connected components.
Time complexity: O(V + E).
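A recursive Python sketch of DFS over the same adjacency-list representation as the BFS example above:

```python
def dfs(adj, source):
    """Depth-first search; returns the set of vertices reachable from source."""
    visited = set()

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                visit(v)       # go as deep as possible before backtracking
    visit(source)
    return visited
```

For very deep graphs an explicit stack avoids Python's recursion limit, but the recursive form matches the definition most directly.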
What is a topological sort, and in what type of graph is it applicable?
A topological sort is an ordering of the vertices in a directed acyclic graph (DAG) such that for every directed edge (u, v), u comes before v in the ordering. It’s applicable in scheduling tasks where some tasks must precede others.
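A sketch of Kahn's algorithm (repeatedly remove vertices of indegree 0), assuming each vertex of the DAG appears as a key of the adjacency dict:

```python
from collections import deque

def topological_sort(adj):
    """adj: dict mapping each vertex to its successors (sinks may map to an empty list).
    Returns one topological order, or raises ValueError if a cycle exists."""
    indegree = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indegree[v] = indegree.get(v, 0) + 1
    queue = deque(u for u, d in indegree.items() if d == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj.get(u, []):       # removing u frees one incoming edge of each successor
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(indegree):
        raise ValueError("graph contains a cycle; no topological order exists")
    return order
```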
How can you detect negative cycles in a graph?
Use the Bellman-Ford algorithm: after |V| - 1 rounds of relaxing every edge, perform one more round. If any edge can still be relaxed, a negative cycle exists (reachable from the source).
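The extra pass can be written as a small standalone check, assuming dist was already produced by the |V| − 1 relaxation rounds in the Bellman-Ford sketch above:

```python
def has_negative_cycle(edges, dist):
    """True if some edge (u, v, w) can still be relaxed after |V| - 1 rounds,
    i.e. a negative cycle is reachable from the source."""
    return any(dist[u] + w < dist[v] for u, v, w in edges)
```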
What is the divide and conquer approach in algorithm design?
Divide and conquer involves:
Dividing the problem into smaller subproblems of the same type.
Conquering by solving the subproblems recursively.
Combining the solutions of subproblems to solve the original problem.
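Merge sort is the textbook instance of this pattern; a minimal sketch:

```python
def merge_sort(a):
    """Divide the list in half, sort each half recursively, then merge the halves."""
    if len(a) <= 1:                       # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])            # divide and conquer
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0               # combine: merge two sorted lists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```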
How do you analyze the time complexity of a divide and conquer algorithm using the Master Theorem?
For a recurrence of the form T(n) = aT(n/b) + f(n):
Case 1: If f(n) = O(n^c) where c < log_b(a), then T(n) = Θ(n^{log_b(a)}).
Case 2: If f(n) = Θ(n^{log_b(a)} * log^k n), then T(n) = Θ(n^{log_b(a)} * log^{k+1} n).
Case 3: If f(n) = Ω(n^c) where c > log_b(a), and if af(n/b) ≤ qf(n) for some constant q < 1 (the regularity condition), then T(n) = Θ(f(n)).
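Worked example: merge sort's recurrence T(n) = 2T(n/2) + Θ(n) has a = 2, b = 2, so log_b(a) = 1 and f(n) = Θ(n^1 log^0 n); Case 2 with k = 0 gives T(n) = Θ(n log n).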