3 | Elementary operations / Asymptotics I Flashcards
(31 cards)
Definition:
Problem vs. instance
Problem: a specification of the desired input-output relationship.
Instance (of a problem): all inputs needed to compute a solution to the problem.
- A problem specifies its domain of definition, i.e., the set of instances to be considered.
- Many interesting problems have infinitely many instances.
- Some problems have a single instance (e.g. sink/source)
How to show that an algorithm is incorrect?
- find one instance for which the algorithm does not find the correct solution.
What is a bit?
A bit (short for binary digit) is the smallest unit of data in a computer.
A bit has a single binary value: either 0 or 1.
What is a byte?
8 bits = 1 byte
used for storing data and executing instructions
Steps to convert decimal number to binary?
- Divide the number by 2.
- The remainder (modulus) is the next digit of the binary number (digits are produced in reverse order).
- The quotient (the floor of the division) is carried over to the next iteration.
- Repeat until the quotient is zero.
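The repeated-division steps above can be sketched in Python (a minimal sketch; `to_binary` is a hypothetical helper name):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary string by
    repeated division by 2; remainders are read in reverse order."""
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder = next binary digit
        n //= 2                  # quotient (floor) carried to next iteration
    return "".join(reversed(bits)) or "0"

print(to_binary(13))  # 1101
```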
Number of bits needed for n = size(n)
Formula?
size(n) = ⌈log₂(n + 1)⌉ (= ⌊log₂ n⌋ + 1)
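In Python, the bit count ⌊log₂ n⌋ + 1 can be cross-checked against the built-in `int.bit_length` for n ≥ 1 (a minimal sketch; `size` is a hypothetical helper name):

```python
import math

def size(n: int) -> int:
    # size(n) = floor(log2 n) + 1: number of bits needed for n >= 1
    return math.floor(math.log2(n)) + 1

# Both closed forms agree with Python's built-in bit_length for small n
for n in range(1, 1000):
    assert size(n) == math.ceil(math.log2(n + 1)) == n.bit_length()

print(size(255), size(256))  # 8 9
```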
Proof of formula:
size(n) = ⌈log₂(n + 1)⌉
see notes
Size of an instance: formal definition?
Formal definition
Size of an instance corresponds to the number of bits needed to represent the instance on a computer using a precisely defined coding scheme.
Size of an instance: less formal definition?
Less formal
Size stands for any integer that measures the number of components of an instance.
Size of an instance:
examples?
Examples
In sorting, the size corresponds to the number of items.
(ignoring the fact that more than one bit may be needed to represent each integer)
In graphs, the size is the number of nodes, number of edges, or both
Advantages of the theoretical approach to determining efficiency
It does not depend on
- the computer being used
- the programming language employed
- the skills of the programmer
It allows one to study the efficiency of an algorithm on instances of any size.
Empirical approach → small vs. large instances
Expressing efficiency
storage → bit
time → seconds, minutes, etc. (no convention)
Algorithm efficiency - Principle of invariance?
Two different implementations of the same algorithm will not differ in efficiency by more than some multiplicative constant.
The running time of either implementation is bounded by a constant multiple of the running time of the other.
Algorithm efficiency - Principle of invariance?
formulas?
If t₁(n) and t₂(n) (seconds) are the running times of two implementations of an algorithm, then there exist two positive constants c and d for which
t₂(n)/d ≤ t₁(n) ≤ c·t₂(n)
whenever n is sufficiently large.
Equivalently, 1/d = c′ ≤ t₁(n)/t₂(n) ≤ c (assuming t₂(n) ≠ 0).
1. The principle of invariance remains true…
2. …what allows a constant speed-up?
3. Only a change in ____ allows a marked speed-up which reflects the change in the size of the instance.
The principle remains true irrespective of
- change of computer
- change of programming language / compiler
A change in implementation detail may allow a constant speed-up.
Only a change in algorithm allows a marked speed-up which reflects the change in the size of the instance.
An algorithm takes a time in the order of …
An algorithm takes a time in the order of t(n), if there exists a positive constant c and an implementation of the algorithm capable of solving every instance of size n in not more than c·t(n) seconds.
No unit for expressing theoretical efficiency!
Name 4 types of algorithms based on efficiency
Linear algorithms: cn
Quadratic algorithms: cn²
Polynomial algorithms: cnᵏ
Exponential algorithms: cⁿ
Issue with the multiplicative (hidden) constants in algorithm efficiency?
e.g. n² days vs. n³ seconds for the same algorithm
when is n² days < n³ seconds?
Asymptotically the first is better than the second; in practice it is not.
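A quick computation illustrates the crossover (a sketch; `t1_seconds`/`t2_seconds` are hypothetical names). Since a day has 86 400 seconds, n² days beats n³ seconds only once n²·86400 < n³, i.e. n > 86 400:

```python
SECONDS_PER_DAY = 86_400

def t1_seconds(n: int) -> int:
    return n**2 * SECONDS_PER_DAY  # "n^2 days", converted to seconds

def t2_seconds(n: int) -> int:
    return n**3                    # "n^3 seconds"

# The asymptotically better bound only wins past the crossover n = 86_400
for n in (1_000, 86_400, 100_000):
    print(n, t1_seconds(n) < t2_seconds(n))  # False, False, True
```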
Average case analysis - definition? Issues?
Average-case analysis: average over all possible instances
but are all instances equally likely?
requirement for an adequate distribution of instances
⇒ more difficult than worst-case analysis
Insertion sort - principle?
For each element, from the second to the last, we find the appropriate place among its (already sorted) predecessors and insert it there.
Insertion sort - pseudocode?
procedure insert(T[1..n])
  for i ← 2 to n do
    x ← T[i]
    j ← i − 1
    while j > 0 and x < T[j] do
      T[j + 1] ← T[j]
      j ← j − 1
    T[j + 1] ← x
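A runnable version of the insertion-sort pseudocode, sketched in Python with 0-based indices:

```python
def insertion_sort(T: list) -> list:
    """In-place insertion sort (0-based indices instead of T[1..n])."""
    for i in range(1, len(T)):
        x = T[i]
        j = i - 1
        # shift larger predecessors one slot to the right
        while j >= 0 and x < T[j]:
            T[j + 1] = T[j]
            j -= 1
        T[j + 1] = x  # drop x into its place among its predecessors
    return T

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```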
Selection sort β principle?
Find the smallest element in the array and bring it to the front;
continue applying the principle to the array from the second position.
Selection sort - pseudocode?
procedure select(T[1..n])
  for i ← 1 to n − 1 do
    minj ← i
    minx ← T[i]
    for j ← i + 1 to n do
      if T[j] < minx then
        minj ← j
        minx ← T[j]
    T[minj] ← T[i]
    T[i] ← minx
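A runnable version of the selection-sort pseudocode, sketched in Python with 0-based indices:

```python
def selection_sort(T: list) -> list:
    """In-place selection sort (0-based indices instead of T[1..n])."""
    for i in range(len(T) - 1):
        minj, minx = i, T[i]
        # find the smallest element in the unsorted suffix T[i..]
        for j in range(i + 1, len(T)):
            if T[j] < minx:
                minj, minx = j, T[j]
        # swap it to the front of the unsorted part
        T[minj] = T[i]
        T[i] = minx
    return T

print(selection_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```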
Worst case complexity:
Insertion sort?
Selection sort?
O(n²) for both