BIG O NOTATION Flashcards
what is an assignment statement
any statement where a value is set or updated
- counting assignment statements is one simple measure of efficiency: fewer assignments generally means a more efficient algorithm
what is tractability
tractability refers to how feasible it is to solve a problem efficiently as the input size grows
- an algorithm is tractable if its time complexity is polynomial or better
best to worst functions
O(1) – Constant time
O(log n) – Logarithmic time
O(n) – Linear time
O(n log n) – Log-linear time
O(n^k) – Polynomial time
O(2ⁿ) – Exponential time
O(n!) – Factorial time
- the last two (exponential and factorial) are intractable
rules for big O notation
- Focus on the Largest Growing Term:
- For the function f(n) = 5n² + 3n + 10, the dominant term is n², so the big O representation is O(n²).
- Ignore Constant Factors
- f(n)=3n and f(n)=10n are both O(n)
- Drop Lower-Order Terms
- For f(n)=n^3+5n^2+20n, the cubic term grows fastest, so the complexity is O(n³)
- Non-Dominant Operations
If an algorithm contains several steps, the step with the highest complexity determines the overall Big O.
what is big O notation
- a way to measure how the runtime or space requirements of an algorithm grow as the input size increases
- gives us an idea of how efficiently an algorithm performs when dealing with larger amounts of data.
why do we use big O
- To understand the scalability of algorithms.
- To compare different algorithms based on their performance.
- To ensure that programs can handle larger datasets without becoming too slow.
O(1) - Constant time complexity
- The algorithm’s runtime does not depend on the size of the input. It always takes the same amount of time.
- eg, Accessing an element from an array by its index.
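A minimal Python sketch of constant-time access (the list and variable names are illustrative):

```python
# Accessing an element by index is O(1): one direct lookup,
# regardless of how long the list is.
data = [10, 20, 30, 40, 50]
third = data[2]  # one operation whether the list has 5 or 5 million items
```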
O(log n) - logarithmic time complexity
- The algorithm’s time grows logarithmically. As the input size increases, the time only increases slowly.
eg - Binary search in a sorted array.
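A minimal Python sketch of binary search (function and parameter names are illustrative):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1.
    Each iteration halves the search space, so the loop runs O(log n) times."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target must be in the upper half
        else:
            hi = mid - 1   # target must be in the lower half
    return -1
```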
O(n) - linear time complexity
- The runtime grows linearly with the size of the input.
- If the input doubles, the time taken also doubles.
eg - iterating from 1 to a certain number
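A minimal Python sketch of a linear scan (summing 1 to n is just one illustrative choice):

```python
def sum_to(n):
    """Sum the integers 1..n with a loop.
    The loop body runs n times, so this is O(n)."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```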
O(n log n) - Linearithmic Time
- This is often seen in efficient sorting algorithms like merge sort or quicksort.
- The time grows faster than linear but slower than quadratic.
O(n²) - Quadratic Time
- The runtime grows quadratically.
- If the input size doubles, the time taken grows by four times.
- Common in algorithms with nested loops.
eg - a simple implementation of bubble sort.
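A minimal Python sketch of bubble sort showing the nested loops that make it O(n²):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs.
    The nested loops perform on the order of n * n comparisons: O(n^2)."""
    n = len(items)
    for i in range(n):
        # After each outer pass, the largest remaining value has bubbled
        # to the end, so the inner loop can stop i positions earlier.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```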
O(2^n) - Exponential Time
- The time grows exponentially
- The runtime roughly doubles with each additional input element
- Algorithms that involve recursive branching often have this complexity.
eg - Solving the Fibonacci sequence recursively.
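A minimal Python sketch of the naive recursive Fibonacci, whose branching recursion gives roughly O(2ⁿ) calls:

```python
def fib(n):
    """Naive recursive Fibonacci: each call spawns two more calls,
    so the number of calls grows roughly as 2^n."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```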
O(n!) - Factorial Time
- The runtime grows factorially.
- Extremely slow for even moderate input sizes
- Common in algorithms that involve generating permutations
eg - Solving the traveling salesman problem by brute force.
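A minimal Python sketch of brute-force TSP (the distance-matrix representation and function name are illustrative):

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force TSP on a distance matrix dist[i][j].
    Fix city 0 as the start and try every ordering of the remaining
    cities: (n-1)! permutations, hence factorial time."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)  # start and end at city 0
        length = sum(dist[route[i]][route[i + 1]] for i in range(n))
        best = min(best, length)
    return best
```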
time complexity for a hash table
Inserting, Deleting, and Searching: O(1)
- (O(n) in the worst case due to collisions)
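A minimal Python sketch, using the built-in dict (a hash table) to show the average-case O(1) operations; names are illustrative:

```python
# Python's dict is a hash table: insert, search, and delete
# are O(1) on average (O(n) in the worst case due to collisions).
phone_book = {}
phone_book["alice"] = "555-0100"   # insert: O(1) average
number = phone_book["alice"]       # search: O(1) average
del phone_book["alice"]            # delete: O(1) average
```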
time complexity for an array
- Accessing an element by index: O(1)
- Searching for an element: O(n)
- Inserting an element at the end: O(1) (if there’s space)
- Inserting at the beginning: O(n)
time complexity for queues and stacks
- Push (insert), Pop (remove): O(1)
- Access the top element: O(1)
linked list time complexity
- Insertion and Deletion: O(1) (if we have a reference to the node)
- Searching for an element: O(n)
binary tree time complexity
Searching, Insertion, and Deletion: O(log n) on average, but O(n) in the worst case (if the tree becomes unbalanced)
time complexity for sorting algorithms
Bubble Sort: O(n²)
Merge Sort: O(n log n)
Quick Sort: O(n log n) on average (but O(n²) in the worst case)
why is big O important
- Choosing Efficient Algorithms: Knowing the time complexity of an algorithm helps us choose the right one for large inputs.
- eg : Merge Sort (O(n log n)) is more efficient for large datasets than Bubble Sort (O(n²)).
- Scalability: Algorithms with lower time complexities can handle larger data efficiently.
- If you need to handle huge datasets, linear or logarithmic algorithms are preferable over quadratic or exponential ones.
what is algorithmic complexity
How much time or resources an algorithm needs as the size of the input increases.
what are hardware constraints
when the physical hardware of a computer can limit how fast or how much data can be processed.
what is a tractable problem
A problem is called tractable if it can be solved in polynomial time or better
what is an intractable problem
A problem is called intractable if it cannot be solved in polynomial time.
example of an intractable problem
The Traveling Salesman Problem (TSP) – Given a set of cities and the distances between them, find the shortest route that visits each city exactly once and returns to the starting point. A brute-force solution must check every possible route, giving O(n!) time complexity and making the problem intractable.
how do people solve intractable problems
heuristic approaches, which find a good (but not necessarily optimal) solution in a reasonable amount of time
what are undecidable problems
problems that simply cannot be solved by any algorithm, regardless of how fast or powerful our hardware is
example of an undecidable problem
The Halting Problem – This problem asks whether a computer program will eventually stop running (halt) or continue forever for a given input. Turing proved that no general algorithm can answer this for every possible program and input.
why is bubble sort o(n^2)
because it uses nested loops in which each element is compared with the others,
resulting in roughly n comparisons for each of the n elements (n × n = n² operations).
what is a heuristic approach
knowledge/rules that can be used to find an approximate but not optimal solution to a problem
why is merge sort n log n time complexity
because the list is divided in half log n times, and merging takes linear time, resulting in n log n operations.
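A minimal Python sketch of merge sort showing both halves of the argument: log n levels of splitting, linear-time merging at each level:

```python
def merge_sort(items):
    """Recursively halve the list (log n levels of recursion),
    then merge the sorted halves in linear time per level: O(n log n)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```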
why is binary search log n
because each comparison halves the remaining search space, so at most log₂ n comparisons are needed.
why is iterating to a number n time complexity
because the loop body runs once for each value from 1 to n (a linear scan).
why is accessing an element 1 time complexity
because it’s a direct memory access based on index.