Basics of Algorithm Analysis Flashcards
Definition of Efficiency when evaluating algorithms?
Proposed Definition of Efficiency (1): An algorithm is efficient if, when implemented, it runs quickly on real input instances.
Proposed Definition of Efficiency (2): An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.
Proposed Definition of Efficiency (3): An algorithm is efficient if it has a polynomial running time.
Define what it means for an algorithm to have a polynomial running time (i.e., to be a polynomial-time algorithm).
There are absolute constants c > 0 and d > 0 so that on every input instance of size N, its running time is bounded by c·N^d primitive computational steps.
Ex:
- If the input size increases from N to 2N, the bound on the running time increases from c·N^d to c·(2N)^d = c·2^d·N^d, i.e., it grows by a factor of only 2^d.
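A minimal sketch of this scaling behavior: the routine below is a hypothetical quadratic-time algorithm (d = 2) that counts its own primitive steps, so doubling N should multiply the count by roughly 2^d = 4. The function name and step-counting scheme are illustrative assumptions, not part of the original text.

```python
def quadratic_steps(n):
    """A hypothetical algorithm whose running time is bounded by c*n^d with d = 2.

    Counts one 'primitive computational step' per inner-loop iteration.
    """
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1  # stands in for a constant amount of work
    return steps

small, doubled = quadratic_steps(100), quadratic_steps(200)
print(small, doubled, doubled / small)  # 10000 40000 4.0 -> the 2^d factor with d = 2
```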
Asymptotic Order of Growth
An algorithm's worst-case running time on inputs of size n grows at a rate that is at most proportional to some function f(n).
The function f(n) then becomes a bound on the running time of the algorithm.
When we seek to say something about the running time of an algorithm on inputs of size n, one thing we could aim for would be a very concrete statement such as,
- “On any input of size n, the algorithm runs for at most 1.62n^2 + 3.5n + 8 steps.”
What are the shortcomings of this approach?
- First, getting such a precise bound may be an exhausting activity, and more detail than we wanted anyway.
- Second, because our ultimate goal is to identify broad classes of algorithms that have similar behavior, we’d actually like to classify running times at a coarser level of granularity so that similarities among different algorithms, and among different problems, show up more clearly.
- Finally, extremely detailed statements about the number of steps an algorithm executes are often meaningless.
Asymptotic Upper Bounds
- Let T(n) be a function, say, the worst-case running time of a certain algorithm on an input of size n.
- Given another function f(n), we say that T(n) is O(f(n)) (read as “T(n) is order f(n)”) if, for sufficiently large n, the function T(n) is bounded above by a constant multiple of f(n).
- More precisely, T(n) is O(f(n)) if there exist constants c > 0 and n₀ ≥ 0 so that for all n ≥ n₀, we have T(n) ≤ c·f(n).
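A minimal numerical sketch of this definition (not a proof, since it can only test finitely many n); the function name is an assumption, and the constants reuse the earlier 1.62n^2 + 3.5n + 8 example.

```python
def satisfies_big_o(T, f, c, n0, n_max=10_000):
    """Check T(n) <= c * f(n) for every n in [n0, n_max].

    A finite-range sanity check of the O(.) definition; the real definition
    requires the inequality for ALL n >= n0, so this can only illustrate it.
    """
    return all(T(n) <= c * f(n) for n in range(max(n0, 1), n_max + 1))

# T(n) = 1.62n^2 + 3.5n + 8 is O(n^2): take c = 1.62 + 3.5 + 8 and n0 = 1.
print(satisfies_big_o(lambda n: 1.62 * n**2 + 3.5 * n + 8,
                      lambda n: n**2,
                      c=1.62 + 3.5 + 8, n0=1))  # True
```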
Ex: Asymptotic Upper Bounds

Consider an algorithm whose running time has the form

T(n) = pn^2 + qn + r

for positive constants p, q, and r. We’d like to claim that any such function is O(n^2).

To see why, we notice that for all n ≥ 1, we have

qn ≤ qn^2 and r ≤ rn^2.

So we can write

T(n) = pn^2 + qn + r ≤ pn^2 + qn^2 + rn^2 = (p + q + r)n^2

for all n ≥ 1.

This inequality is exactly what the definition of O(·) requires:
**T(n) ≤ cn^2, where c = p + q + r.**

Having just argued that T(n) ≤ (p + q + r)n^2, and since we also have n^2 ≤ n^3, we can conclude that T(n) ≤ (p + q + r)n^3, as the definition of O(n^3) requires; an asymptotic upper bound need not be tight.
Asymptotic Lower Bounds
We want to express the notion that for arbitrarily large input sizes n, the function T(n) is at least a constant multiple of some specific function f(n).
Thus, we say that T(n) is Ω(f(n)) if there exist constants ε > 0 and n₀ ≥ 0 so that for all n ≥ n₀, we have T(n) ≥ ε·f(n).
Ex: Asymptotic Lower Bounds
Consider again the function T(n) = pn^2 + qn + r, where p, q, and r are positive constants.
Let’s claim that T(n) is Ω(n^2). Whereas establishing the upper bound involved “inflating” the terms in T(n) until it looked like a constant times n^2, here we need to reduce the terms in T(n) until it looks like a constant times n^2.
This is easy: for all n ≥ 0, we have
**T(n) = pn^2 + qn + r ≥ pn^2,**
which meets what is required by the definition of Ω(·) with ε = p > 0.
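Mirroring the earlier upper-bound sanity check, the sketch below tests the Ω(·) inequality over a finite range for hypothetical constants p = 2, q = 3, r = 5; again this is only an illustration of the definition, not a proof.

```python
def satisfies_big_omega(T, f, eps, n0, n_max=10_000):
    """Check T(n) >= eps * f(n) for every n in [n0, n_max] (finite-range illustration only)."""
    return all(T(n) >= eps * f(n) for n in range(max(n0, 0), n_max + 1))

# Hypothetical constants p = 2, q = 3, r = 5: T(n) = 2n^2 + 3n + 5 >= 2n^2 for all n >= 0.
print(satisfies_big_omega(lambda n: 2 * n**2 + 3 * n + 5,
                          lambda n: n**2,
                          eps=2, n0=0))  # True
```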