Asymptotic Notation Flashcards
Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on the input size. It is commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows. The three most commonly used notations are:
Big O, Omega, and Theta.
Big O notation
This notation provides an upper bound on the growth rate of an algorithm’s running time or space usage. It is most often used to describe the worst-case scenario, i.e., the maximum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm’s running time is O(n), its running time grows at most linearly with the input size n.
Omega notation
This notation provides a lower bound on the growth rate of an algorithm’s running time or space usage. It is most often used to describe the best-case scenario, i.e., the minimum amount of time or space an algorithm needs to solve a problem. For example, if an algorithm’s running time is Ω(n), its running time grows at least linearly with the input size n.
Theta notation
This notation provides both an upper and a lower bound on the growth rate of an algorithm’s running time or space usage, i.e., a tight bound. It is often used to describe the average-case scenario: the amount of time or space an algorithm typically needs to solve a problem. For example, if an algorithm’s running time is Θ(n), its running time grows exactly linearly with the input size n, up to constant factors.
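As a concrete illustration (the method name sumArray is our own, not from the text), a loop that sums an array does the same amount of work on every input of size n, so its running time is both O(n) and Ω(n), and therefore Θ(n):

```java
public class SumExample {
    // Sums all elements: the loop runs exactly n times on every input,
    // so the running time is Theta(n) -- both O(n) and Omega(n).
    static long sumArray(int[] a) {
        long sum = 0;
        for (int x : a) {
            sum += x;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumArray(new int[] {1, 2, 3, 4})); // prints 10
    }
}
```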
Usually, the analysis of an algorithm is done based on three cases:
Best Case (Omega Notation (Ω))
Average Case (Theta Notation (Θ))
Worst Case (Big O Notation (O))
Omega (Ω) notation specifies the asymptotic lower bound for a function f(n). For a given function g(n), Ω(g(n)) is denoted by:
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0}.
This means that f(n) = Ω(g(n)) if there are positive constants c and n0 such that, to the right of n0, f(n) always lies on or above c*g(n).
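The definition can be checked numerically for a concrete pair of functions (the functions and constants below are our own choice): for f(n) = 2n^2 + 3n and g(n) = n^2, the witnesses c = 2 and n0 = 1 satisfy 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0, so f(n) = Ω(n^2):

```java
public class OmegaCheck {
    // Checks 0 <= c*g(n) <= f(n) for f(n) = 2n^2 + 3n and g(n) = n^2,
    // using the witnesses c = 2 and n0 = 1 (our own choice of constants).
    static boolean holds(long n) {
        long f = 2 * n * n + 3 * n;
        long cg = 2 * n * n; // c * g(n) with c = 2
        return 0 <= cg && cg <= f;
    }

    public static void main(String[] args) {
        boolean ok = true;
        for (long n = 1; n <= 1000; n++) {
            ok &= holds(n);
        }
        System.out.println(ok ? "Omega(n^2) inequality holds for n = 1..1000"
                              : "inequality failed");
    }
}
```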
Calculating Omega notation for a program
Break the program into smaller segments.
Find the number of operations performed for each segment (in terms of the input size), assuming the given input is such that the program takes the least amount of time.
Add up all the operations and simplify it, let’s say it is f(n).
Remove all the constants and choose the term with the lowest order, or any other function that is always less than or equal to f(n) as n tends to infinity; let’s say it is g(n). Then the Omega of f(n) is Ω(g(n)).
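The steps above can be illustrated with linear search (a standard example; the code is our own sketch): in the best case the target sits at index 0, so only one comparison is made no matter how large the input is, giving a lower bound of Ω(1):

```java
public class LinearSearch {
    // Returns the index of target in a, or -1 if it is absent.
    // Best case: target is at index 0, so exactly one comparison is made
    // regardless of n -- the best-case running time is Omega(1).
    static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 3};
        System.out.println(indexOf(a, 7)); // best case: found at index 0
        System.out.println(indexOf(a, 5)); // absent: -1 after n comparisons
    }
}
```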
Big-Theta (Θ) notation specifies a tight asymptotic bound for a function f(n). For a given function g(n), Θ(g(n)) is denoted by:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0}.
This means that f(n) = Θ(g(n)) if there are positive constants c1, c2 and n0 such that, to the right of n0, f(n) always lies on or above c1*g(n) and on or below c2*g(n).
Calculating Theta notation for a program
Break the program into smaller segments.
Find all types of inputs and calculate the number of operations each takes to execute, making sure that the input cases are equally likely.
Find the sum of all the calculated values and divide it by the total number of inputs. Let’s say the function of n obtained, after removing all the constants, is g(n); then in Θ notation it is represented as Θ(g(n)).
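Applying these steps to linear search (our own sketch): if the target is equally likely to be at each of the n positions, the average number of comparisons is (1 + 2 + ... + n)/n = (n + 1)/2, which is Θ(n). A small program can confirm the arithmetic:

```java
public class AverageCase {
    // Average comparisons made by linear search when the target is
    // equally likely to sit at each of the n positions:
    // (1 + 2 + ... + n) / n = (n + 1) / 2, which is Theta(n).
    static double averageComparisons(int n) {
        long total = 0;
        for (int pos = 1; pos <= n; pos++) {
            total += pos; // pos comparisons when the target is at index pos-1
        }
        return (double) total / n;
    }

    public static void main(String[] args) {
        System.out.println(averageComparisons(100)); // 50.5 == (100 + 1) / 2
    }
}
```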
Big O (O) notation specifies the asymptotic upper bound for a function f(n). For a given function g(n), O(g(n)) is denoted by:
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0}.
This means that f(n) = O(g(n)) if there are positive constants c and n0 such that, to the right of n0, f(n) always lies on or below c*g(n).
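This definition, too, can be checked numerically (the functions and constants below are our own choice): for f(n) = 2n^2 + 3n and g(n) = n^2, the witnesses c = 5 and n0 = 1 satisfy 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0, so f(n) = O(n^2):

```java
public class BigOCheck {
    // Checks 0 <= f(n) <= c*g(n) for f(n) = 2n^2 + 3n and g(n) = n^2,
    // using the witnesses c = 5 and n0 = 1 (our own choice of constants).
    static boolean holds(long n) {
        long f = 2 * n * n + 3 * n;
        long cg = 5 * n * n; // c * g(n) with c = 5
        return 0 <= f && f <= cg;
    }

    public static void main(String[] args) {
        boolean ok = true;
        for (long n = 1; n <= 1000; n++) {
            ok &= holds(n);
        }
        System.out.println(ok ? "O(n^2) inequality holds for n = 1..1000"
                              : "inequality failed");
    }
}
```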
Calculating Big O notation for a program:
Break the program into smaller segments.
Find the number of operations performed for each segment (in terms of the input size), assuming the given input is such that the program takes the maximum time, i.e., the worst-case scenario.
Add up all the operations and simplify it, let’s say it is f(n).
Remove all the constants and choose the term with the highest order, because as n tends to infinity the constants and the lower-order terms in f(n) become insignificant; let’s say the resulting function is g(n). Then the big-O notation is O(g(n)).
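The steps above applied to a small example program (ours, not from the text): the nested loops below always perform n(n-1)/2 comparisons, so f(n) = n^2/2 - n/2; removing the constants and the lower-order term leaves g(n) = n^2, giving O(n^2):

```java
public class WorstCase {
    // Counts index pairs (i, j) with i < j whose values are equal.
    // The nested loops make n*(n-1)/2 comparisons on every input of size n,
    // so f(n) = n^2/2 - n/2; dropping constants and the lower-order term
    // gives g(n) = n^2, i.e., the running time is O(n^2).
    static int countEqualPairs(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                if (a[i] == a[j]) count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countEqualPairs(new int[] {1, 2, 1, 2})); // 2
    }
}
```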
Why perform analysis based on input size in the complexity analysis of algorithms?
There are many important qualities to take care of, such as user-friendliness, modularity, security, and maintainability. Why worry about performance? The answer is simple: we can have all of the above only if we have performance, so performance is like a currency with which we buy those other qualities. Another reason for studying performance is that speed is fun! To summarize: performance == scale. Imagine a text editor that can load 1000 pages but can spell-check only one page per minute, or an image editor that takes an hour to rotate your image 90 degrees left. If a software feature cannot cope with the scale of the tasks users need to perform, it is as good as dead.
How to study efficiency of algorithms?
One way to study the efficiency of an algorithm is to implement it and experiment by running the program on various test inputs while recording the time spent during each execution. A simple mechanism in Java is based on the currentTimeMillis() method of the System class, which reports the number of milliseconds that have passed since a benchmark time known as the epoch (January 1, 1970 UTC). The key idea is to record the time immediately before executing the algorithm and again immediately after it; the difference between the two readings is the elapsed time.
long start = System.currentTimeMillis(); // record the starting time
/* (run the algorithm) */
long end = System.currentTimeMillis(); // record the ending time
long elapsed = end - start; // total time elapsed
Measuring elapsed time in this way provides a reasonable reflection of an algorithm’s efficiency, although the results depend on the hardware and software environment in which the experiment is run.
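A complete, runnable version of the fragment above might look like this (the summing loop and the array size are just placeholder work, not part of the original text):

```java
public class TimingDemo {
    // Times a simple summing loop (a placeholder for the algorithm under
    // test) using the pattern above, returning the elapsed milliseconds.
    static long timeSum(int[] data) {
        long start = System.currentTimeMillis(); // record the starting time
        long sum = 0;
        for (int x : data) {                     // (run the algorithm)
            sum += x;
        }
        long end = System.currentTimeMillis();   // record the ending time
        return end - start;                      // total time elapsed
    }

    public static void main(String[] args) {
        long elapsed = timeSum(new int[1_000_000]);
        System.out.println("elapsed ms: " + elapsed);
    }
}
```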
Given two algorithms for a task, how do we find out which one is better?
Perform an asymptotic analysis: compare how each algorithm’s running time grows with the input size, ignoring machine-dependent constants. For example, linear search examines the elements one by one, so its order of growth is linear.
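This can be observed by counting comparisons rather than wall-clock time (the method names here are our own): an unsuccessful linear search over n elements makes exactly n comparisons, so doubling n doubles the work:

```java
public class GrowthDemo {
    // Counts element comparisons made by a linear search; when the target
    // is absent the count is exactly a.length, so the growth is linear.
    static int comparisons(int[] a, int target) {
        int count = 0;
        for (int x : a) {
            count++;
            if (x == target) return count;
        }
        return count; // target absent: exactly a.length comparisons
    }

    public static void main(String[] args) {
        // Search for a value that is not present (arrays default to 0).
        System.out.println(comparisons(new int[1000], -1)); // 1000
        System.out.println(comparisons(new int[2000], -1)); // 2000
    }
}
```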