Complexity Analysis Flashcards

1
Q

Define:

Asymptotic Notation

A

The efficiency of an algorithm depends on the amount of time, storage, and other resources required to execute it. This efficiency is measured with the help of asymptotic notations.

An algorithm may not have the same performance for different types of inputs, and its performance changes as the input size increases.

The study of how an algorithm's performance changes with the order of the input size is called asymptotic analysis.

2
Q

Define:

Complexity Analysis

A

The process of determining how efficient an algorithm is. The complexity of an algorithm is the amount of time or space required by the algorithm to process the input and produce the output.
Complexity analysis is effectively used to determine how “good” an algorithm is and whether it’s “better” than another one.

3
Q

Define:

Time Complexity

A

The time required by an algorithm to produce output for an input of size n. It is represented by the function T(n), the time required as a function of the input size n.
The lower the time complexity, the more efficient the algorithm.
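
As an illustrative sketch (with hypothetical names, not from the original card), the Python function below counts the operations performed by a simple summing loop; for an input of size n it performs n additions, so T(n) grows in proportion to n:

    # Illustrative sketch: count the operations a summing loop performs.
    def count_sum_operations(values):
        operations = 0
        total = 0
        for value in values:    # one addition per element
            total += value
            operations += 1
        return total, operations

    _, ops = count_sum_operations(list(range(1000)))
    print(ops)  # 1000 operations for an input of size 1000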

4
Q

Define:

Space Complexity

A

The memory that an algorithm consumes to produce output for an input of size n. It is represented by the function S(n), the memory used as a function of the input size n.
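
As a hedged sketch (illustrative names), the function below allocates one list entry per input element, so its extra memory S(n) grows linearly with the input size n:

    # Illustrative sketch: S(n) = O(n), since `squares` stores one entry
    # per input element.
    def squares_of(values):
        squares = []            # extra memory grows with len(values)
        for value in values:
            squares.append(value * value)
        return squares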

5
Q

Define:

Big O

Big O Notation

A

Big O notation describes how well an algorithm scales with the input size. It is used to describe the worst-case scenario of an algorithm, to compare algorithms, and to determine which algorithm is better.

f(n) = O(g(n)) if there are positive constants n0 and C such that, to the right of n0, the value of f(n) always lies on or below C * g(n).
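
For example (an illustrative sketch, not part of the original card), linear search is O(n): in the worst case the target is absent and all n elements are inspected:

    # Worst case: the target is missing, so all n elements are checked -> O(n).
    def linear_search(values, target):
        for index, value in enumerate(values):
            if value == target:
                return index
        return -1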

6
Q

Define:

Big Θ

Big Theta Notation

A

While Big O notation refers to the upper bound of a function, Big Theta notation refers to the exact (tight) bound of a function. Big Theta notation is used to describe the exact growth rate of a function.

f(n) = Θ(g(n)) if there are positive constants n0, C1, and C2 such that, to the right of n0, the value of f(n) always lies between C1 * g(n) and C2 * g(n), inclusive.

Big O notation is usually used to indicate the dominating (fastest-growing) term.
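
As a quick numeric illustration (a sketch, with constants chosen for this example), f(n) = 3n^2 + 5n is Θ(n^2): for every n >= n0 = 5 it lies between C1 * n^2 and C2 * n^2 with C1 = 3 and C2 = 4:

    # Verify that 3n^2 + 5n is sandwiched between 3 * n^2 and 4 * n^2
    # for all n >= 5, witnessing f(n) = Θ(n^2).
    def f(n):
        return 3 * n * n + 5 * n

    assert all(3 * n * n <= f(n) <= 4 * n * n for n in range(5, 10_000))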

7
Q

Define:

Big Ω

Big Omega Notation

A

Big Omega notation is used to describe the lower bound of a function; it is the counterpart of Big O notation. While Big O is used to describe the worst-case scenario of an algorithm, Big Omega is used to describe the best-case scenario.

f(n) = Ω(g(n)) if there are positive constants n0 and C such that, to the right of n0, the value of f(n) always lies on or above C * g(n).
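
For instance (an illustrative sketch reusing linear search as the assumed example), linear search's best case is Ω(1): when the target sits at index 0, a single comparison suffices no matter how large the input is:

    # Best case: the target is at index 0, so one comparison is enough,
    # independent of the input size -> Ω(1).
    def linear_search(values, target):
        for index, value in enumerate(values):
            if value == target:
                return index
        return -1

    print(linear_search([7] + [0] * 1_000_000, 7))  # 0, after one comparison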

8
Q

Describe:

Logarithm

Complexity

A

A mathematical concept that’s widely used in Computer Science and that’s defined by the following equation:

log_b(x) = y if and only if b^y = x

In the context of coding interviews, the logarithm is used to describe the complexity analysis of algorithms, and its usage always implies a logarithm of base 2. In other words, the logarithm used in the context of coding interviews is defined by the following equation:

log(x) = y if and only if 2^y = x

In plain English, if an algorithm has a logarithmic time complexity (O(log(n)), where n is the size of the input), then whenever the algorithm’s input doubles in size (i.e., whenever n doubles), the number of operations needed to complete the algorithm only increases by one unit. Conversely, an algorithm with linear time complexity would see its number of operations double if its input size doubled.

As an example, a linear-time-complexity algorithm with an input of size 1,000 might take roughly 1,000 operations to complete, whereas a logarithmic-time-complexity algorithm with the same input would take roughly 10 operations to complete, since 2^10 ~= 1,000.
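
Binary search is the classic illustration: each step halves the remaining range, so an input of size n is exhausted after roughly log(n) steps. A minimal sketch:

    # Each iteration halves the search range, so the loop body runs about
    # log2(n) times -> O(log(n)).
    def binary_search(sorted_values, target):
        low, high = 0, len(sorted_values) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_values[mid] == target:
                return mid
            if sorted_values[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1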

9
Q

Define:

Constant

Notation

A

O(1)
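
For example (an illustrative sketch), indexing into a list takes the same time no matter how long the list is:

    # Constant time: a single index lookup, independent of len(values).
    def first_element(values):
        return values[0]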

10
Q

Define:

Logarithmic

Notation

A

O(log(n))
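
A minimal sketch of logarithmic growth: repeatedly halving n reaches 1 after about log2(n) steps:

    # The loop body runs about log2(n) times because n halves each step.
    def halving_steps(n):
        steps = 0
        while n > 1:
            n //= 2
            steps += 1
        return steps

    print(halving_steps(1024))  # 10, since 2^10 = 1024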

11
Q

Define:

Linear

Notation

A

O(n)
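
For instance (an illustrative sketch), summing a list touches each of its n elements exactly once:

    # One pass over the input -> O(n).
    def total(values):
        result = 0
        for value in values:
            result += value
        return result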

12
Q

Define:

Quadratic

Notation

A

O(n^2)
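
A typical sketch: examining every pair of elements takes n * n iterations of a doubly nested loop:

    # Two nested loops over the same input -> n * n iterations -> O(n^2).
    def count_equal_pairs(values):
        count = 0
        for a in values:
            for b in values:
                if a == b:
                    count += 1
        return count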

13
Q

Define:

Polynomial

Notation

A

O(n^c)

where c is a constant greater than 1
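
For example (an illustrative sketch with c = 3), three nested loops over the same input perform n^3 iterations:

    # Three nested loops -> n^3 iterations -> O(n^3), polynomial with c = 3.
    def count_triples(values):
        count = 0
        for _ in values:
            for _ in values:
                for _ in values:
                    count += 1
        return count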

14
Q

Define:

Exponential

Notation

A

O(c^n)

where c is a constant greater than 1
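
A common sketch with c = 2: enumerating every subset of n elements produces 2^n subsets, because each element is either included or excluded:

    # Each element doubles the number of subsets -> 2^n subsets -> O(2^n).
    def all_subsets(values):
        if not values:
            return [[]]
        rest = all_subsets(values[1:])
        return rest + [[values[0]] + subset for subset in rest]

    print(len(all_subsets([1, 2, 3])))  # 8 == 2^3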

15
Q

Define:

N-power-N

Notation

A

O(n^n)
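
As a sketch, generating every length-n sequence drawn from n symbols yields n^n sequences:

    from itertools import product

    # n choices in each of n positions -> n^n sequences -> O(n^n).
    def all_sequences(symbols):
        return list(product(symbols, repeat=len(symbols)))

    print(len(all_sequences("abc")))  # 27 == 3^3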

16
Q

Define:

Factorial

Notation

A

O(n!)
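
For example (an illustrative sketch), generating every ordering of n distinct elements yields n! permutations:

    from itertools import permutations

    # n choices for the first slot, n - 1 for the second, ... -> n! orderings.
    def all_orderings(values):
        return list(permutations(values))

    print(len(all_orderings([1, 2, 3, 4])))  # 24 == 4!
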
17
Q

Define:

Master Theorem

A

By definition, the master theorem is used to solve recurrence relations of the form
T(n) = a * T(n / b) + f(n)
where a >= 1 and b > 1.
In this relation, n is the size of the input and a is the number of subproblems in the recursion. b is the factor by which the subproblem size is reduced in each recursive call. Lastly, f(n) is the cost of dividing the problem into subproblems and merging their individual solutions into the overall solution. The theorem distinguishes three cases:

  1. If f(n) = O(n^d) with d < log_b(a), then T(n) = Θ(n^log_b(a)).
  2. If f(n) = Θ(n^d * log^k(n)) with d = log_b(a) and k >= 0, then T(n) = Θ(n^log_b(a) * log^(k+1)(n)).
  3. If f(n) = Ω(n^d) with d > log_b(a) (and the regularity condition a * f(n/b) <= c * f(n) holds for some constant c < 1), then T(n) = Θ(f(n)), i.e., T(n) = Θ(n^d).
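
As a worked example (a sketch under the statement above, with illustrative names), merge sort's recurrence T(n) = 2 * T(n / 2) + Θ(n) has a = 2, b = 2, d = 1 = log_2(2), and k = 0, so case 2 gives T(n) = Θ(n * log(n)). The hypothetical helper below classifies a recurrence of the form T(n) = a * T(n / b) + Θ(n^d * log^k(n)) by comparing d with log_b(a):

    import math

    # Hypothetical helper: classify T(n) = a*T(n/b) + Θ(n^d * log^k(n)) by the
    # three master-theorem cases (regularity assumed for case 3; the exact
    # float comparison is fine for these illustrative inputs).
    def master_theorem(a, b, d, k=0):
        critical = math.log(a, b)  # log_b(a)
        if d < critical:
            return f"Θ(n^{critical:g})"            # case 1
        if d == critical:
            return f"Θ(n^{d:g} * log^{k + 1}(n))"  # case 2
        return f"Θ(n^{d:g})"                       # case 3

    print(master_theorem(2, 2, 1))  # merge sort -> Θ(n^1 * log^1(n)) = Θ(n log n)
    print(master_theorem(1, 2, 0))  # binary search -> Θ(n^0 * log^1(n)) = Θ(log n)
    print(master_theorem(4, 2, 1))  # case 1 -> Θ(n^2)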