Flashcards
*Asymptotic Notation
Mathematical notation used to describe the limiting behavior of a function; includes Big O, Big Theta (Θ), and Big Omega (Ω).
*Big O Notation
Describes the upper bound of an algorithm’s running time, used for worst-case analysis.
*Big Omega Notation
Describes the lower bound of an algorithm’s running time; often paired with best-case analysis, though the bound itself can describe any case.
*Big Theta Notation
Describes a tight bound on the running time, i.e., both upper and lower bounds.
*Time Complexity
Measure of the amount of time an algorithm takes to complete as a function of the length of the input.
*Space Complexity
Amount of memory space required by an algorithm as a function of input size.
*Greedy Algorithm
Algorithm that makes locally optimal choices at each step with the hope of finding a global optimum.
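A minimal Python sketch of the greedy idea, using coin change (function name and denominations are illustrative; greedy happens to be optimal for US-style coins but not in general):

```python
def greedy_coin_change(amount, coins):
    """Greedily take the largest coin that still fits at each step."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

print(greedy_coin_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]
print(greedy_coin_change(6, [4, 3, 1]))        # [4, 1, 1] -- optimal would be [3, 3]
```

The second call shows why the greedy choice is only a *hope* of global optimality: for non-canonical denominations it returns three coins where two suffice.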
*Divide and Conquer
Design paradigm that divides a problem into subproblems, solves them recursively, and combines their solutions.
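Merge sort is the textbook instance of this paradigm; a minimal Python sketch (divide, recurse, combine):

```python
def merge_sort(a):
    if len(a) <= 1:                                          # base case
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])   # divide + recurse
    merged, i, j = [], 0, 0                                  # combine (merge)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```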
*Dynamic Programming
Solving complex problems by breaking them down into simpler subproblems and storing the results.
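Memoized Fibonacci is the smallest possible demonstration: the naive recursion is exponential, but storing subproblem results makes it linear. A sketch using Python's built-in cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping subproblems are computed once and stored in the cache.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 -- instant; naive recursion would take ages
```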
*Backtracking
Algorithmic technique that builds a solution incrementally and abandons (“backtracks from”) any partial solution as soon as it cannot lead to a valid result.
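The N-queens puzzle is a classic backtracking example; a minimal Python sketch that counts placements, pruning any partial placement that attacks itself:

```python
def n_queens(n):
    """Count ways to place n non-attacking queens on an n x n board."""
    count = 0
    def place(row, cols, diag1, diag2):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # this partial solution fails -- prune it
            place(row + 1, cols | {col}, diag1 | {row - col}, diag2 | {row + col})
    place(0, set(), set(), set())
    return count

print(n_queens(4))  # 2
print(n_queens(8))  # 92
```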
Heap
A special tree-based data structure that satisfies the heap property. Used for priority queues.
*Priority Queue
Abstract data type where each element has a priority; elements with higher priorities are dequeued before lower ones.
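The two cards above go together in practice: Python's heapq module treats a plain list as a binary min-heap, which serves directly as a priority queue (note it is a *min*-heap, so “higher priority” here means a smaller number):

```python
import heapq

pq = []
heapq.heappush(pq, (2, "write"))   # entries are (priority, task) tuples
heapq.heappush(pq, (1, "plan"))
heapq.heappush(pq, (3, "ship"))

first = heapq.heappop(pq)          # smallest priority value comes out first
print(first)  # (1, 'plan')
```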
Disjoint Set
Data structure that keeps track of a partition of a set into disjoint subsets. Often used in Kruskal’s algorithm.
Union-Find
A disjoint-set data structure with operations to find which set an element is in and to unite two sets.
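A compact Python sketch of union-find with the two standard optimizations, path compression and union by rank (class and method names are conventional, not from any particular library):

```python
class UnionFind:
    """Disjoint sets with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False              # already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx          # attach the shorter tree under the taller
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

uf = UnionFind(5)
uf.union(0, 1); uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True
print(uf.find(0) == uf.find(4))  # False
```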
*Dijkstra’s Algorithm
Greedy algorithm that finds the shortest paths from a source vertex to all other vertices in a weighted graph with non-negative edge weights.
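A minimal Python sketch using heapq as the priority queue (adjacency-list input; lazy deletion via the stale-entry check):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                  # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```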
*Bellman-Ford Algorithm
Algorithm that computes shortest paths from a source vertex to all vertices in a weighted digraph; handles negative edge weights and detects negative cycles.
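A minimal Python sketch: relax every edge V−1 times, then one extra pass to detect a negative cycle (edge-list input; vertices numbered 0..n−1):

```python
def bellman_ford(edges, n, source):
    """edges: list of (u, v, w). Returns distances, or None if a
    negative cycle is reachable from the source."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                 # relax all edges n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one more pass: any improvement
        if dist[u] + w < dist[v]:          # means a negative cycle
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]
print(bellman_ford(edges, 3, 0))  # [0, 4, 1]
```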
*Floyd-Warshall Algorithm
Dynamic programming algorithm for finding all-pairs shortest paths in a weighted graph with positive or negative edge weights (but no negative cycles).
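A minimal Python sketch over an adjacency matrix: the triple loop progressively allows each vertex k as an intermediate stop on paths from i to j:

```python
def floyd_warshall(dist):
    """dist: n x n matrix; dist[i][j] is the edge weight i->j
    (inf if no edge, 0 on the diagonal). Returns all-pairs distances."""
    n = len(dist)
    d = [row[:] for row in dist]           # don't mutate the input
    for k in range(n):                     # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
m = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(m))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```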
*Kruskal’s Algorithm
Greedy algorithm that finds a minimum spanning tree of a weighted, undirected graph by repeatedly adding the cheapest edge that does not create a cycle.
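A minimal Python sketch: sort edges by weight and use a small inline union-find to skip edges that would close a cycle:

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Returns the MST as a list of chosen edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # different trees: no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kruskal(4, edges))  # [(1, 0, 1), (2, 1, 2), (4, 2, 3)]
```

The edge (3, 0, 2) is skipped because vertices 0 and 2 are already connected.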
*Prim’s Algorithm
Greedy algorithm that builds the minimum spanning tree by adding the cheapest edge from the tree to a vertex not yet in the tree.
*Topological Sort
Linear ordering of vertices in a DAG such that for every directed edge uv, vertex u comes before v.
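A minimal Python sketch of Kahn's algorithm: repeatedly remove a vertex with in-degree zero; if the queue empties before every vertex is output, the graph has a cycle and no topological order exists:

```python
from collections import deque

def topo_sort(graph):
    """graph: {u: [v, ...]}. Returns an ordering, or None on a cycle."""
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] = indeg.get(v, 0) + 1
    q = deque(u for u, d in indeg.items() if d == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in graph.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:          # all of v's prerequisites are placed
                q.append(v)
    return order if len(order) == len(indeg) else None

print(topo_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# ['a', 'b', 'c', 'd']
```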
*P vs NP
P is the class of problems solvable in polynomial time. NP is the class for which solutions can be verified in polynomial time.
*NP-Complete
Problems that are both in NP and as hard as any problem in NP. If any NP-complete problem is in P, then P = NP.
*NP-Hard
At least as hard as the hardest problems in NP; not necessarily in NP (no requirement of polynomial-time verifiability).
*Reduction
Transforming instances of one problem into instances of another (typically in polynomial time) to prove hardness: if A reduces to B and A is hard, then B is at least as hard.
Cook-Levin Theorem
Establishes that the Boolean satisfiability problem (SAT) is NP-complete.
*Polynomial Time
Class of problems that can be solved in time O(n^k) for some constant k.
*Exponential Time
Problems whose running time grows exponentially with input size, on the order of 2^n or worse (as a required lower bound, this is properly written Ω(2^n)).
Logarithmic Time
Problems solvable in O(log n) time, typically through divide-and-conquer strategies.
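Binary search is the canonical O(log n) example: each comparison halves the remaining search range. A minimal Python sketch over a sorted list:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the range each iteration
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```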
*Greedy vs Dynamic Programming
Greedy commits to a locally optimal choice at each step and never revisits it; DP systematically solves all subproblems and combines stored results, so it can find optima that greedy misses.
*Divide & Conquer vs Dynamic Programming
Both divide the problem, but DP stores subproblem solutions to avoid recomputation.
*P ⊆ NP
Every problem solvable in polynomial time can have its solution verified in polynomial time.
*NP-Complete ⊆ NP-Hard
All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.
*Dijkstra vs Bellman-Ford
Dijkstra is faster but can’t handle negative weights; Bellman-Ford can handle negative weights but is slower.
Why is Dijkstra’s algorithm faster than Bellman-Ford?
Dijkstra’s uses a greedy approach with a priority queue, achieving O((V + E) log V), while Bellman-Ford does not use this structure and runs in O(VE).
When would you use Bellman-Ford over Dijkstra?
When the graph contains negative weight edges; Dijkstra’s algorithm cannot handle those correctly.
How does dynamic programming differ from divide and conquer?
Both divide problems into subproblems, but dynamic programming stores results to avoid recomputation, while divide and conquer recomputes them.
What makes a problem NP-Complete?
It must be in NP and as hard as any problem in NP, meaning any NP problem can be reduced to it in polynomial time.
Why are greedy algorithms not always optimal?
They make locally optimal choices without considering the global structure, which can lead to suboptimal solutions in complex problems.
How are Kruskal’s and Prim’s algorithms different in building MSTs?
Kruskal’s grows a forest and adds the smallest edge between any two trees, while Prim’s grows a single tree by adding the smallest edge from the tree.
How does topological sort relate to DAGs?
Topological sort is only defined for Directed Acyclic Graphs (DAGs), providing a linear ordering respecting edge directions.
What is the significance of P ⊆ NP?
Every problem that can be solved in polynomial time can also have its solution verified in polynomial time.
Why is NP-Hard not necessarily in NP?
NP-Hard problems might not have solutions that can be verified in polynomial time or even be decision problems.
How does the Union-Find structure optimize Kruskal’s algorithm?
It efficiently tracks disjoint sets to avoid cycles when adding edges, using union by rank and path compression.