Algorithm Complexity Flashcards
How do we measure the run-time of an algorithm?
Empirical analysis (benchmarking on representative set of inputs)
Analyse the (time) complexity
Empirical analysis
Implement the algorithm as a program and time it
How can we time a program?
Manual - using a stopwatch
Automatic - using some timer function
Run the program for various input sizes and measure run time
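A minimal Python sketch of the automatic approach, using the standard library's time.perf_counter; the timed function (summing a list) and the input sizes are placeholder choices:

import time

def work(data):
    # Placeholder algorithm to time: sum a list in a loop
    total = 0
    for x in data:
        total += x
    return total

for n in (1_000, 10_000, 100_000, 1_000_000):
    data = list(range(n))
    start = time.perf_counter()            # start the timer
    work(data)                             # run the algorithm
    elapsed = time.perf_counter() - start  # stop the timer
    print(f"n = {n}: {elapsed:.6f} s")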
Time Complexity
the number of operations an algorithm requires, expressed as a function of the size of the input or problem
usually focusing on worst-case analysis
O(1)
Constant time - Input size does not matter
Usually better than linear
On a graph this will show up as a flat horizontal line
Extremely fast
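A small illustrative Python sketch of constant-time operations; indexing a list and looking up a key in a dict take the same time no matter how large the collection is (the data here is made up):

def get_first(items):
    # One step regardless of list length: O(1)
    return items[0]

def lookup(table, key):
    # Average-case dict (hash table) lookup is also O(1)
    return table[key]

print(get_first([7, 3, 9]))            # 7
print(lookup({"a": 1, "b": 2}, "b"))   # 2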
O(log n)
Logarithmic
Binary search
As the data size increases, the runtime grows very slowly, so the algorithm becomes relatively more and more efficient on large data sets
Compare the target value with the middle element of the array; if the target is smaller, continue in the left half, if it is bigger, continue in the right half (see the sketch below)
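A minimal iterative Python sketch of binary search, assuming the array is sorted in ascending order; it returns the index of the target or -1 if it is absent:

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # middle index
        if sorted_items[mid] == target:
            return mid                    # found it
        elif target < sorted_items[mid]:
            high = mid - 1                # target is smaller: keep the left half
        else:
            low = mid + 1                 # target is bigger: keep the right half
    return -1                             # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1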
O(n log n)
Log-linear (linearithmic), e.g. merge sort
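Merge sort is a standard example of an O(n log n) algorithm: the array is split about log n times and each level of merging does O(n) work. A minimal Python sketch, not a production implementation:

def merge_sort(items):
    # Base case: 0 or 1 element is already sorted
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # about log n levels of splitting
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]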
O(n^2)
Quadratic time complexity
the runtime is proportional to the square of the input size
Common in algorithms with two nested loops (see the sketch below)
As the amount of data increases, an algorithm with this runtime takes increasingly longer to complete
Much slower than linear with large data sets, but can be acceptable, or even faster than asymptotically better algorithms, on small data sets
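A minimal Python sketch of the two-nested-loop pattern: comparing every pair of elements (here, counting duplicate pairs in a made-up list):

def count_duplicate_pairs(items):
    n = len(items)
    count = 0
    for i in range(n):                # outer loop: n iterations
        for j in range(i + 1, n):     # inner loop: up to n iterations
            if items[i] == items[j]:
                count += 1
    return count                      # about n*n/2 comparisons -> O(n^2)

print(count_duplicate_pairs([1, 2, 1, 3, 2]))   # 2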
O(n^3)
Cubic time complexity
The runtime is proportional to the cube of the input size
Common in algorithms with three nested loops
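A classic three-nested-loop example is naive multiplication of two n x n matrices; a minimal pure-Python sketch:

def matrix_multiply(a, b):
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):                # three nested loops over n -> O(n^3)
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matrix_multiply(a, b))          # [[19, 22], [43, 50]]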
O(2^n)
Exponential time complexity; the runtime doubles with each additional input element
Look up a value v in a sorted array
Start from the middle element: if the target value is smaller, focus on the left-hand half of the array; if it is bigger, focus on the right-hand half
Each comparison discards half of the remaining array
Big O notation
Simplified analysis of an algorithm’s efficiency
Types of measurement for an algorithm’s efficiency
Worst-case
Best-case
Average-case
O(1)
Constant
Input size does not affect the time of completion
O(n)
Linear time
As the amount of data increases, the number of steps increases proportionally
What is the purpose of analysing Big O Notation?
To predict the scalability and efficiency of an algorithm as the input grows
What is the difference between best-case and worst-case complexity?
Best-case = the minimum time complexity for any input
Worst-case = the maximum
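Linear search is a simple way to see the difference: in the best case the target is the first element (one comparison, O(1)); in the worst case it is last or absent (n comparisons, O(n)). A minimal Python sketch:

def linear_search(items, target):
    for i, value in enumerate(items):
        if value == target:
            return i          # best case: target at index 0, one comparison
    return -1                 # worst case: target absent, all n elements checked

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 4))    # best case: returns 0 after one comparison
print(linear_search(data, 99))   # worst case: returns -1 after n comparisons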
Logarithmic Time Complexity
Why is it considered more efficient?
Runtime increases very slowly even for large input sizes
Common in divide-and-conquer algorithms
What is the Big O complexity for finding the maximum value in an unsorted array?
O(n) linear complexity
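A minimal Python sketch: every element must be examined once, so the number of comparisons grows linearly with the array size:

def find_max(items):
    largest = items[0]
    for value in items[1:]:       # one pass over the array -> O(n)
        if value > largest:
            largest = value
    return largest

print(find_max([3, 41, 7, 19, 2]))   # 41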
Exponential time
O(2^n)
Runtime doubles with each additional input element
Extremely rapid growth
Impractical for large inputs, typically avoided
Grows much faster than O(n^3) and O(n^2)
T(n)=2^n or K^n where K>1
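A concrete O(2^n) example is generating every subset of an n-element set, since each element can be either included or excluded; a minimal recursive Python sketch:

def all_subsets(items):
    # Each additional element doubles the number of subsets -> 2^n subsets
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    without_first = all_subsets(rest)
    with_first = [[first] + subset for subset in without_first]
    return without_first + with_first

print(len(all_subsets([1, 2, 3])))          # 8 = 2^3
print(len(all_subsets(list(range(10)))))    # 1024 = 2^10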
Cubic Time
O(n^3)
Faster than exponential but slower than quadratic
Can handle moderately large inputs but is inefficient for very large inputs
T(n)=n^3
Quadratic Time
O(n^2) grows at a manageable rate and is often feasible for small to medium data sets, but becomes impractical for very large ones
T(n)=n^2
Linear Time
Runtime grows in direct proportion to n
Slow growth
T(n)=n
Constant time
Runtime remains constant regardless of input size
No growth, constant for all data
T(n)=c (constant value)
K>1
K is the base in exponential growth K^n
K>1 ensures that true exponential growth occurs
K=1 would lead to a constant function
K<1 would result in exponential decay
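A quick numeric illustration of the three cases (the K values below are arbitrary examples):

# K > 1 grows (1, 2, 4, 8, ...), K = 1 stays constant, K < 1 decays toward 0
for k in (2, 1, 0.5):
    values = [k ** n for n in range(6)]
    print(f"K = {k}: {values}")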