7: Analysis Flashcards

Analysing program complexity

1
Q

Define asymptotic complexity

A

The degree to which an algorithm (or its subprocesses) becomes more complex, or the number of required operations grows, as the length of the input increases.

2
Q

Give the notations for best, worst, and average complexities

A

Best case = Ω(n), lower bound
Average case = Θ(n), exact (tight) bound
Worst case = O(n), upper bound

3
Q

When is an algorithm in the exponential class of time complexity?

A

When the exponent in an exponential term is a variable, e.g. the length of the input array: the running time has the form c^n for a constant c > 1 (e.g. checking all 2^n subsets of an n-element input), rather than n^c as in the polynomial class.

4
Q

What is loop hoisting?

A

Refactoring program logic to move loop-invariant computation out of loops (or to remove loops entirely) in order to optimise the algorithm's complexity.
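
A minimal Java sketch of hoisting a loop-invariant computation out of a loop; the class and method names here are illustrative assumptions, not from the deck:

```java
public class HoistDemo {
    // Stands in for any loop-invariant computation (its result never
    // changes while the loop below runs).
    static int expensiveBound(int[] items) {
        int max = items[0];
        for (int x : items) max = Math.max(max, x);
        return max / 2;
    }

    // Before hoisting: the invariant call is re-evaluated on every
    // iteration, doing O(n) work per pass and O(n^2) work overall.
    static int countAboveBoundSlow(int[] items) {
        int count = 0;
        for (int i = 0; i < items.length; i++) {
            if (items[i] > expensiveBound(items)) count++;
        }
        return count;
    }

    // After hoisting: the invariant value is computed once outside the
    // loop, so the whole method is O(n).
    static int countAboveBoundHoisted(int[] items) {
        int bound = expensiveBound(items);
        int count = 0;
        for (int i = 0; i < items.length; i++) {
            if (items[i] > bound) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] items = {3, 9, 4, 7, 1};
        System.out.println(countAboveBoundSlow(items));    // 2
        System.out.println(countAboveBoundHoisted(items)); // 2
    }
}
```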

5
Q

What is the lower bound on sorting comparables? Assume sorting involves comparisons of the whole comparable objects being sorted, not of other properties or fields such as sort keys.

A

Minimum worst-case time complexity Ω(n log n): any comparison-based sort must make on the order of at least n log n comparisons.

6
Q

What are sort keys?

A

Fields within the items of a list that dictate the order of the list. In a database they are fields and a type of key (e.g. a primary key).

7
Q

What’s the proof of the lower bound for sorting?

A

Proof of the minimum worst-case time complexity Ω(n log n) for comparison-based sorting algorithms:

For a list of n elements, there are n! different possible orderings of the elements.

A comparison has 2 outcomes: which of the 2 compared elements comes first. So an algorithm that makes k comparisons in total can distinguish at most 2^k different orderings.

We need 2^k >= n!, since the algorithm has to distinguish between all the different possible orderings, of which there are n!.

Taking logs, this holds when k >= log2(n!).

Via Stirling's approximation, log2(n!) = Θ(n log n), so k = Ω(n log n).
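
A worked sketch of the final step, using an elementary bound rather than the full Stirling formula (an illustrative addition, not from the deck):

```latex
\log_2(n!) = \sum_{i=1}^{n} \log_2 i
\;\ge\; \sum_{i=\lceil n/2 \rceil}^{n} \log_2 i
\;\ge\; \frac{n}{2}\log_2\frac{n}{2}
\;=\; \Omega(n \log n)
```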

8
Q

What would P != NP actually mean?

A

The NP-complete problems (and any other NP problems outside P) would have superpolynomial lower bounds on their complexity, i.e. they could not be solved in polynomial time by a deterministic TM.

9
Q

What are P problems? What does P stand for?

A

P problems have proofs showing they can be solved in polynomial time by a deterministic TM. P stands for Polynomial: since there is no N for Nondeterministic, P problems are deterministically (and therefore also nondeterministically) solvable within polynomial (P) time.

10
Q

What are NP problems? What does NP stand for?

A

NP problems have no known proof showing they can be solved in polynomial time by a deterministic TM, but do have a proof showing they can be verified in polynomial time by a deterministic TM.

Note: verification means checking that a proposed solution (a certificate) is correct, which can be much cheaper than finding a solution, e.g. checking that given factors multiply back to n versus factorising n.

NP stands for Nondeterministic Polynomial, since these problems are known to be solvable in polynomial time nondeterministically but not (provably) deterministically.

11
Q

What are NP-hard problems?

A

NP-hard problems are all problems to which every NP problem can be reduced in polynomial time by a deterministic TM.

12
Q

What are NP-complete problems?

A

NP-complete problems are problems in both NP and NP-hard. They have no known proof that they can be solved in polynomial time by a deterministic TM, have a proof that they can be verified in polynomial time by a deterministic TM, and every NP problem can be reduced to them in polynomial time by a deterministic TM.

Note: verification means checking that a proposed solution (a certificate) is correct, which can be much cheaper than finding a solution.

13
Q

What is verification?

A

Verification is checking that a proposed solution (a certificate) to a problem is correct, which can be far cheaper than finding a solution in the first place, e.g. checking that a given pair of factors multiplies back to n versus factorising n.
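
A minimal Java sketch of cheap verification: checking a proposed factorisation of n runs in polynomial time even though finding one may not. The class and method names are illustrative assumptions, not from the deck:

```java
import java.math.BigInteger;

public class VerifyDemo {
    // Verifies a certificate for the factoring problem: are p and q
    // non-trivial factors with p * q == n? Runs in time polynomial in
    // the number of digits of n.
    static boolean verifyFactorisation(BigInteger n, BigInteger p, BigInteger q) {
        return p.compareTo(BigInteger.ONE) > 0
            && q.compareTo(BigInteger.ONE) > 0
            && p.multiply(q).equals(n);
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(3233); // = 61 * 53
        System.out.println(verifyFactorisation(n,
                BigInteger.valueOf(61), BigInteger.valueOf(53))); // true
        System.out.println(verifyFactorisation(n,
                BigInteger.valueOf(7), BigInteger.valueOf(462))); // false
    }
}
```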

14
Q

What do divide and conquer algorithms do?

A

Recursively divide a problem into smaller subproblems, then solve ("conquer") the original larger problem by combining the results of solving the subproblems.
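
A minimal Java merge sort, the classic divide-and-conquer example (an illustrative addition, not from the deck):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Divide: split the array in half; conquer: merge the sorted halves.
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a; // base case: already sorted
        int mid = a.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    // Combine step: merge two sorted arrays in linear time.
    static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length)
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        while (i < l.length) out[k++] = l[i++];
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 8, 1, 9, 3})));
        // [1, 2, 3, 5, 8, 9]
    }
}
```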

15
Q

What is a recurrence relation?

A

Recurrence relations are equations defining each element of a sequence as a function of the preceding ones. They are used to define recursive functions and to express the running time of recursive algorithms.
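
For example, merge sort's running time satisfies the recurrence below, each term defined in terms of a smaller one (a standard example, added for illustration):

```latex
T(1) = c, \qquad T(n) = 2\,T(n/2) + cn
\quad\Longrightarrow\quad T(n) = \Theta(n \log n)
```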

16
Q

What is the mathematical definition of a recurrence relation?

A

T(x) = time taken for input of size x.

T(n) depends on T(m) where m < n: the time taken for each problem size depends on the time taken for the size 1 less (and thereby on all smaller problem sizes).

17
Q

How can you find the complexity of an algorithm from its recurrence relation?

A

Expand it and then apply known solutions, or repeatedly guess and then verify possible complexities, usually by induction: trial and error.

18
Q

How does the trial and error method of finding the complexity of an algorithm from a recurrence relation work?

A

It involves repeatedly guessing and then verifying possible complexities, with verification usually done via mathematical induction.

T(n) = T(m) + another term z, where z depends on the nature of the algorithm and m < n (usually m = n - 1). When guessing that T(n) is of a certain order of complexity, set the extra term to be that order ÷ n; for example, when trying O(n^2), set z to be O(n).

You know the guess works when the inductive step goes through: expanding T(n) into its recursive substeps, the number of steps and the complexity of each step correspond to the nature of the algorithm. A worked sketch follows.
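
A worked sketch of guess-and-verify (the substitution method) on T(n) = T(n - 1) + n, guessing O(n^2); an illustrative addition, not from the deck:

```latex
\text{Guess } T(n) \le c\,n^2. \quad
\text{Inductive step: assume } T(n-1) \le c\,(n-1)^2. \\
T(n) = T(n-1) + n \le c(n-1)^2 + n = c\,n^2 - (2c-1)\,n + c \le c\,n^2
\quad \text{for } c \ge 1,\ n \ge 1, \\
\text{so } T(n) = O(n^2).
```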

19
Q

How do you solve the complexity of a problem from its recurrence relation by hand?

A

Try different values of n in T(n) to work out a generalisation of T(n) in terms of only the base case (usually T(1)) and n, rather than in terms of T(m) where m < n as usual.

You can then derive the complexity from this closed form. You might need to use understanding of the problem, e.g. in sorting, the number of comparisons k satisfies 2^k >= n!, so substituting in k somewhere can reduce a term to its complexity order only.
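
A worked sketch of the by-hand expansion on T(n) = T(n - 1) + n (an illustrative addition):

```latex
T(n) = T(n-1) + n
     = T(n-2) + (n-1) + n
     = \cdots
     = T(1) + 2 + 3 + \cdots + n
     = T(1) + \frac{n(n+1)}{2} - 1
     = \Theta(n^2)
```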

20
Q

What is the Master Theorem for divide-and-conquer recurrences?

A

It provides a general form for divide-and-conquer recurrences and 3 cases; if a recurrence matches one of the cases, the theorem gives its complexity directly.

21
Q

What is the Master Theorem’s general form?

A

T(n) = a * T(n / b) + f(n)

where a and b are constants such that a >= 1 and b > 1: a is the number of subproblems, n / b is the size of each subproblem, and f(n) is the cost of dividing and combining.

22
Q

What is the first Master Theorem case?

A

If f(n) = O(n ^ (logb(a) - e)) for some constant e > 0, then T(n) = Θ(n ^ logb(a))

23
Q

What is the second Master Theorem case?

A

If f(n) = Θ(n ^ logb(a) * log^k(n)) for some constant k >= 0, then T(n) = Θ(n ^ logb(a) * log^(k+1)(n))

24
Q

What is the third Master Theorem case?

A

If f(n) = Ω(n ^ (logb(a) + e)) for some constant e > 0, and f(n) satisfies the regularity condition, then T(n) = Θ(f(n)).

Note the regularity condition holds when a * f(n / b) <= c * f(n) for some constant c < 1 and all sufficiently large n.
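
Worked sketches applying each case (standard examples, added for illustration):

```latex
\text{Case 1: } T(n) = 8\,T(n/2) + n^2:\ \log_2 8 = 3,\ n^2 = O(n^{3-\varepsilon})
  \Rightarrow T(n) = \Theta(n^3) \\
\text{Case 2: } T(n) = 2\,T(n/2) + n:\ \log_2 2 = 1,\ n = \Theta(n^1 \log^0 n)
  \Rightarrow T(n) = \Theta(n \log n) \\
\text{Case 3: } T(n) = 2\,T(n/2) + n^2:\ n^2 = \Omega(n^{1+\varepsilon}),\
  2\,f(n/2) = \tfrac{1}{2}\,n^2 \le \tfrac{1}{2}\,f(n)
  \Rightarrow T(n) = \Theta(n^2)
```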

25
Q

What is the regularity condition?

A

From the Master Theorem: a recurrence satisfies the regularity condition when a * f(n / b) <= c * f(n) for some constant c < 1 and all sufficiently large n.

26
Q

What is the Karatsuba multiplication algorithm?

A

An algorithm for multiplying 2 n-digit numbers with better time complexity than long multiplication, reducing the time complexity from Θ(n^2) with long multiplication down to Θ(n ^ log2(3)) ≈ Θ(n^1.585).
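
The speed-up comes from replacing the 4 half-size multiplications of the naive split with 3; a sketch of the identity and the resulting recurrence, solved via Master Theorem case 1 (an illustrative addition):

```latex
x = x_1 10^m + x_0,\quad y = y_1 10^m + y_0 \\
xy = x_1 y_1\,10^{2m}
   + \big[(x_1 + x_0)(y_1 + y_0) - x_1 y_1 - x_0 y_0\big]\,10^{m}
   + x_0 y_0 \\
T(n) = 3\,T(n/2) + \Theta(n)
\;\Rightarrow\; T(n) = \Theta(n^{\log_2 3}) \approx \Theta(n^{1.585})
```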

27
Q

What are the simplifying rules, and why are they useful?

A

They allow us to reduce complicated complexity expressions down to their orders, which allows us to interpret them more easily and pragmatically in terms of their consequences.

28
Q

What is the first simplifying rule?

A

If a function f is an upper/lower/tight bound for your cost t, then any bound of the same type for f is also a bound of the same type for t (transitivity).

If t = O(f), and f = O(g), then t = O(g)

If t = Ω(f), and f = Ω(g), then t = Ω(g)

If t = Θ(f), and f = Θ(g), then t = Θ(g)

29
Q

What is the second simplifying rule?

A

If a function h is an upper/lower/tight bound for two costs, then h is also a bound of the same type for the sum of those costs.

If f = O(h), and g = O(h), then f + g = O(h)

If f = Ω(h), and g = Ω(h), then f + g = Ω(h)

If f = Θ(h), and g = Θ(h), then f + g = Θ(h)

30
Q

What is the third simplifying rule?

A

If some functions g and h are upper/lower/tight bounds for a cost, then their max and min are also bounds of the same type for that cost.

If f = O(g), and f = O(h), then f = O(max(g, h)), f = O(min(g, h))

If f = Ω(g), and f = Ω(h), then f = Ω(max(g, h)), f = Ω(min(g, h))

If f = Θ(g), and f = Θ(h), then f = Θ(max(g, h)), f = Θ(min(g, h))

31
Q

What is the fourth simplifying rule?

A

If some functions f1 and g1 are upper/lower/tight bounds for 2 costs, then their sum is a bound of the same type for the sum of those costs, and their product is a bound of the same type for the product of those costs.

If f = O(f1), and g = O(g1), then f + g = O(f1 + g1), f * g = O(f1 * g1), and analogous for Ω and Θ
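
For example (an illustrative addition):

```latex
f = O(n^2),\ g = O(n \log n)
\;\Rightarrow\; f + g = O(n^2 + n \log n) = O(n^2),\quad
f \cdot g = O(n^3 \log n)
```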

32
Q

What is Master Theorem Avoidance?

A

When an algorithm's recurrence doesn't match the Master Theorem's general form, you can attempt to represent its time complexity (its closed form or a simplified form) as a summation function, i.e. a mathematical series. A simplified form is the sum, over the number of nodes, of the time complexity to process each node, where a node in the tree model is 1 branch of computation/recursion. Then multiply the number of nodes by the complexity function at each node, and substitute in values for variables other than n to find worst/average/best case complexity.

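A worked sketch of the summation approach on T(n) = 2T(n/2) + n: level i of the recursion tree has 2^i nodes, each costing n / 2^i (an illustrative addition, not from the deck):

```latex
T(n) = \sum_{i=0}^{\log_2 n} 2^i \cdot \frac{n}{2^i}
     = \sum_{i=0}^{\log_2 n} n
     = n\,(\log_2 n + 1)
     = \Theta(n \log n)
```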

33
Q

What is a series?

A

A form of notating a summation function to a certain number of terms. Usually arithmetic (+/- a constant between each pair of consecutive terms) or geometric (* a constant between each pair of consecutive terms).

34
Q

What is a geometric series?

A

A series (a form of notating a summation function to a certain number of terms) in which there is a constant ratio between consecutive terms, i.e. a_n = a_(n-1) * f, where a_i is the i-th term and f is some constant ratio. An infinite geometric series converges iff -1 < f < 1.
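
The standard sums (an illustrative addition), useful e.g. when the levels of a recursion tree form a geometric series:

```latex
\sum_{i=0}^{k-1} a f^i = a\,\frac{1 - f^k}{1 - f} \quad (f \ne 1), \qquad
\sum_{i=0}^{\infty} a f^i = \frac{a}{1 - f} \quad (|f| < 1)
```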