Complexity Analysis Flashcards
Define:
Asymptotic Notation
The efficiency of an algorithm depends on the amount of time, storage, and other resources required to execute it. Efficiency is measured with the help of asymptotic notations.
An algorithm may not have the same performance for different types of inputs; as the input size increases, the performance changes.
The study of how an algorithm's performance changes with the order of the input size is defined as asymptotic analysis.
Define:
Complexity Analysis
The process of determining how efficient an algorithm is. The complexity of an algorithm is the amount of time or space required by the algorithm to process the input and produce the output.
Complexity analysis is effectively used to determine how “good” an algorithm is and whether it’s “better” than another one.
Define:
Time Complexity
The time required by an algorithm to produce output for an input of size n. It is represented by the function T(n), the time required as a function of the input size n.
The lower the time complexity, the more efficient the algorithm.
Define:
Space Complexity
The memory that an algorithm consumes to produce output for an input of size n. It is represented by the function S(n), the memory used as a function of the input size n.
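As a sketch of the difference between O(n) and O(1) space, the two (illustrative, hypothetical) functions below compute the same sum, but the first materializes a list of n numbers while the second keeps only a running total:

```python
# Two ways to sum 1..n: the first uses O(n) extra space (it
# materializes the whole list), the second uses O(1) extra space.

def sum_with_list(n):        # S(n) = O(n): stores n numbers
    values = list(range(1, n + 1))
    return sum(values)

def sum_with_counter(n):     # S(n) = O(1): stores two numbers
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

assert sum_with_list(100) == sum_with_counter(100) == 5050
```

Both run in O(n) time; only their space complexity differs.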
Define:
Big O
Big O Notation
Big O Notation describes how well an algorithm scales with the input size. It describes the worst-case scenario of an algorithm, and it is used to compare algorithms and determine which one is better.
f(n) = O(g(n)) if there exist positive constants n0 and C such that, to the right of n0, the value of f(n) always lies on or below C * g(n).
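The constants in this definition can be checked numerically. As a minimal sketch, f(n) = 3n + 2 is shown to be O(n) with the (illustrative, hand-picked) witnesses C = 4 and n0 = 2:

```python
# Verify f(n) = 3n + 2 is O(n): find constants C and n0 with
# f(n) <= C * g(n) for all n >= n0 (checked over a finite range).

def f(n):
    return 3 * n + 2

def g(n):
    return n

C, n0 = 4, 2  # candidate witnesses (illustrative choice)

# f(n) <= C * g(n) holds for every n >= n0 in the checked range,
# since 3n + 2 <= 4n whenever n >= 2
assert all(f(n) <= C * g(n) for n in range(n0, 10_000))
```

Any larger C (or n0) would work too; the definition only requires that some pair exists.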
Define:
Big Θ
Big Theta Notation
While Big O Notation refers to the upper bound of a function, Big Theta Notation refers to the exact bound of a function. Big Theta Notation is used to describe the exact growth rate of a function.
f(n) = Θ(g(n)) if there exist positive constants n0, C1, and C2 such that, to the right of n0, the value of f(n) always lies between C1 * g(n) and C2 * g(n), inclusive.
Big O notation is usually used to indicate the dominating (fastest-growing) term.
Define:
Big Ω
Big Omega Notation
Big Omega notation is used to describe the lower bound of a function. It is the opposite of Big O notation. While Big O is used to describe the worst case scenario of an algorithm, Big Omega is used to describe the best case scenario of an algorithm.
f(n) = Ω(g(n)) if there exist positive constants n0 and C such that, to the right of n0, the value of f(n) always lies on or above C * g(n).
Describe:
Logarithm
Complexity
A mathematical concept that’s widely used in Computer Science and that’s defined by the following equation:
log_b(x) = y if and only if b^y = x
In the context of coding interviews, the logarithm is used to describe the complexity analysis of algorithms, and its usage always implies a logarithm of base 2. In other words, the logarithm used in the context of coding interviews is defined by the following equation:
log(x) = y if and only if 2^y = x
In plain English, if an algorithm has a logarithmic time complexity (O(log(n)), where n is the size of the input), then whenever the algorithm’s input doubles in size (i.e., whenever n doubles), the number of operations needed to complete the algorithm only increases by one unit. Conversely, an algorithm with linear time complexity would see its number of operations double if its input size doubled.
As an example, a linear-time-complexity algorithm with an input of size 1,000 might take roughly 1,000 operations to complete, whereas a logarithmic-time-complexity algorithm with the same input would take roughly 10 operations to complete, since 2^10 ~= 1,000.
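This doubling behavior can be demonstrated directly. The sketch below (a hypothetical helper, not from any particular algorithm) counts how many times n can be halved before reaching 1, which is exactly the step count of a typical O(log(n)) algorithm such as binary search:

```python
# Count how many times n can be halved before reaching 1 -- this is
# floor(log2(n)), the step count of a typical O(log n) algorithm
# such as binary search.

def halving_steps(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

# Doubling the input adds only one step:
assert halving_steps(1024) == 10   # 2^10 = 1024
assert halving_steps(2048) == 11
```

Compare this with a linear scan, which would need 1,024 and then 2,048 steps for the same inputs.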
Define:
Constant
Notation
O(1)
Define:
Logarithmic
Notation
O(log(n))
Define:
Linear
Notation
O(n)
Define:
Quadratic
Notation
O(n^2)
Define:
Polynomial
Notation
O(n^c), where c is a constant greater than 1
Define:
Exponential
Notation
O(c^n), where c is a constant greater than 1
Define:
N-power-N
Notation
O(n^n)
Define:
Factorial
Notation
O(n!)
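To get a feel for how quickly these classes diverge, the sketch below evaluates each notation at a single small input size, n = 10 (an illustration only; the factorial and n^n entries already dwarf the rest at this size):

```python
import math

# Evaluate each complexity class at n = 10 to show how quickly
# the growth rates diverge.
n = 10
growth = {
    "O(1)":      1,
    "O(log n)":  math.log2(n),
    "O(n)":      n,
    "O(n^2)":    n ** 2,
    "O(2^n)":    2 ** n,
    "O(n^n)":    n ** n,
    "O(n!)":     math.factorial(n),
}

for name, value in growth.items():
    print(f"{name:10s} {value:,.0f}")
```

At n = 10 the constant class costs 1 operation, the quadratic class 100, and the exponential class already 1,024; n^n reaches ten billion.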
Define:
Master Theorem
By definition, the master theorem is used to solve recurrence relations of the form
T(n) = a * T(n / b) + f(n)
where a ≥ 1 and b > 1.
In this relation, n is the size of the input and a is the number of subproblems in the recursion. b is the factor by which the subproblem size is reduced in each recursive call. Lastly, f(n) is the cost of dividing the problem into subproblems and merging the individual subproblem solutions into the overall solution.
- If f(n) = O(n^d) and d < log_b(a), then T(n) = Θ(n^log_b(a))
- If f(n) = Θ(n^d * log^k(n)) and d = log_b(a), then T(n) = Θ(n^log_b(a) * log^(k+1)(n))
- If f(n) = Ω(n^d) and d > log_b(a), then T(n) = Θ(f(n)) = Θ(n^d)
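As a worked example, merge sort satisfies T(n) = 2 * T(n / 2) + O(n), so a = 2, b = 2, and f(n) = n. Here d = 1 = log_2(2) with k = 0, which is the middle case, giving T(n) = Θ(n * log(n)). The sketch below (an illustrative instrumented implementation, not part of the theorem itself) counts merge sort's merge work and checks it stays within a small constant factor of n * log2(n):

```python
import math

# Count the merge work merge sort performs, to illustrate the
# Theta(n log n) bound predicted by the master theorem for
# T(n) = 2*T(n/2) + n (a = 2, b = 2, f(n) = n, d = 1 = log_2(2)).

def merge_sort(arr, counter):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid], counter)
    right = merge_sort(arr[mid:], counter)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        counter[0] += 1                      # one unit of merge work
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged

counter = [0]
n = 1024
result = merge_sort(list(range(n, 0, -1)), counter)
assert result == sorted(range(1, n + 1))
# The counted work stays within a small constant factor of n * log2(n):
assert counter[0] <= 2 * n * math.log2(n)
```

The constant factor here is incidental; the theorem only pins down the asymptotic order of growth.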