Computational Time Complexity Flashcards
What is computational time complexity?
Time complexity is a measure of the amount of time an algorithm takes to run as a function of the input size
It helps us analyze and compare the efficiency of different algorithms
The Big O notation is commonly used to represent time complexity.
Understanding time complexity is crucial for designing scalable and high-performance software
When considering the “worst” case scenario, how is it analyzed?
Worst-case analysis is the most common (and most useful) analysis in practice
It provides a guarantee that the algorithm will not perform worse than a certain level, regardless of the input
What is the “Big O” in time complexity?
It describes how fast an algorithm’s running time grows in the worst case
Big O notation is used because the growth rate of a function is also referred to as the order of the function
This is crucial for critical applications where performance guarantees are essential
What are 4 aspects used to determine an algorithm’s computational time complexity?
- The input size is a key factor in analyzing time complexity
- Determine which parameter(s) affect the running time as the input size increases
- Iterative Algorithms:
  - count the number of iterations
  - analyze how the number of iterations changes with the input size
  - the loop structure determines the time complexity
- Nested Loops:
  - count the number of iterations of each loop
  - determine how the loop variables change and how they relate to the input size
  - multiply the iterations to obtain the overall time complexity
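The counting steps above can be sketched in Python (the function name and loop bounds are illustrative examples, not from the original notes):

```python
def count_nested_iterations(n):
    """Nested loops: the outer loop runs n times and, for each outer pass,
    the inner loop runs n times. Multiplying gives n * n iterations,
    so the overall time complexity is O(n^2)."""
    iterations = 0
    for i in range(n):        # outer loop: n passes
        for j in range(n):    # inner loop: n passes per outer pass
            iterations += 1   # count every iteration of the inner body
    return iterations

# count_nested_iterations(10) returns 100 (10 outer * 10 inner)
```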
Why is there a focus on the dominant term in the time complexity of algorithms?
Only the highest-order (or dominant) term is considered because:
- it provides the most significant information about the algorithm’s scalability and efficiency as the input size increases
- as the input size increases, the impact of lower-order terms and constant factors becomes negligible compared to the dominant term
The dominant term has the greatest influence on the algorithm’s time complexity and is the primary factor determining how the algorithm scales with larger inputs
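A small numeric sketch of this idea (the cost function below is a made-up example, not from the notes): as n grows, the n² term dwarfs the lower-order terms, which is why Big O keeps only the dominant term.

```python
def total_cost(n):
    # hypothetical full running-time formula with lower-order terms
    return n**2 + 10 * n + 100

def dominant_term(n):
    # keep only the highest-order term
    return n**2

# The ratio total_cost(n) / dominant_term(n) approaches 1 as n increases,
# showing the lower-order terms becoming negligible.
for n in (10, 1_000, 100_000):
    print(n, total_cost(n) / dominant_term(n))
```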
What does the time complexity of linear search look like?
Process: Starts at the beginning, compares each element with the target until found or end of list
Best Case: Target is first element → 1 comparison
Average Case: Target is anywhere → n/2 comparisons on average
Worst Case: Target not found or last element → n comparisons
LINEAR SEARCH TIME COMPLEXITY = O(n)
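A minimal Python sketch of the process described above (the function name is illustrative):

```python
def linear_search(items, target):
    """Scan from the beginning, comparing each element with the target.
    Returns the index of the target, or -1 if it is not in the list."""
    for i, value in enumerate(items):
        if value == target:   # one comparison per element
            return i          # best case: first element, 1 comparison
    return -1                 # worst case: target absent, n comparisons
```

Best and worst cases correspond directly to where (or whether) the loop exits: early exit on the first element, full traversal when the target is missing.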
What does the time complexity of binary search look like?
Process: Repeatedly divide the sorted list in half, compare middle element with target
Each Step: Reduces search space by half
After k comparisons → n / 2ᵏ items left
In worst case stops when: Only 1 item remains → n / 2ᵏ = 1
Solve for k: k = log₂(n)
BINARY SEARCH TIME COMPLEXITY = O(log n)
NOTE: List must be sorted
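The halving process above can be sketched in Python (function name is illustrative; the input must already be sorted, per the note):

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search space of a SORTED list.
    After k comparisons, n / 2**k items remain, so the worst case
    takes about log2(n) steps: O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # discard the left half
        else:
            high = mid - 1   # discard the right half
    return -1                # target not present
```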
What does the time complexity of bubble sort look like?
Process: Repeatedly compare and swap adjacent elements to “bubble” largest values to the end
Loops:
Outer loop: Runs n - 1 times (passes)
Inner loop: Compares adjacent elements, swaps if needed
Comparisons:
Total = n(n - 1)/2
Worst Case (fully reversed list): also up to n(n - 1)/2 swaps
BUBBLE SORT TIME COMPLEXITY: O(n²)
Best Case (sorted list): O(n) (with optimization)
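A minimal Python sketch of the basic (unoptimized) version described above; the early-exit optimization is shown on the next card:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order
    elements, bubbling the largest remaining value to the end each pass.
    Comparisons total n(n-1)/2, giving O(n^2)."""
    n = len(items)
    for i in range(n - 1):              # n - 1 passes
        for j in range(n - 1 - i):      # unsorted prefix shrinks each pass
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```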
What does the time complexity of short bubble sort look like?
Optimization: Stops early if no swaps in a pass → list is sorted
Best Case (sorted list): O(n)
Average Case: Improved over basic Bubble Sort (varies)
Worst Case (fully unsorted): Still O(n²)
Key Benefit: Faster on nearly sorted data
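The early-exit optimization above can be sketched as (function name is illustrative):

```python
def short_bubble_sort(items):
    """Bubble sort with early exit: if a full pass makes no swaps,
    the list is already sorted and we stop. Best case (sorted input)
    is a single pass, O(n); worst case remains O(n^2)."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:   # no swaps this pass: already sorted, stop early
            break
    return items
```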
What does the time complexity of selection sort look like?
Process: In each pass, find the minimum element and swap it to its correct position
Comparisons: n(n - 1)/2 → same as Bubble Sort
Swaps: Exactly n - 1 (one per pass)
Time Complexity:
Best, Average, Worst: O(n²) (no change based on input order)
Key Point: Fewer swaps than Bubble Sort, but same number of comparisons
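A minimal Python sketch of the pass-and-swap process above (function name is illustrative):

```python
def selection_sort(items):
    """In each pass, find the minimum of the unsorted suffix and swap it
    into position. Comparisons total n(n-1)/2 regardless of input order,
    so best, average, and worst cases are all O(n^2); swaps are at most
    n - 1 (one per pass)."""
    n = len(items)
    for i in range(n - 1):                 # n - 1 passes
        min_idx = i
        for j in range(i + 1, n):          # scan the unsorted suffix
            if items[j] < items[min_idx]:
                min_idx = j
        if min_idx != i:
            items[i], items[min_idx] = items[min_idx], items[i]
    return items
```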