3 | Elementary operations / Asymptotics I Flashcards
Definition:
Problem vs. instance
Problem: a specification of the desired input-output relationship.
Instance (of a problem): all inputs needed to compute a solution to the problem.
- A problem specifies its domain of definition, i.e., the set of instances to be considered.
- Many interesting problems have infinitely many instances.
- Some problems have a single instance (e.g. sink/source)
How to show that an algorithm is incorrect?
- Find one instance for which the algorithm does not produce the correct solution.
What is a bit?
A bit (short for binary digit) is the smallest unit of data in a computer.
A bit holds a single binary value: either 0 or 1.
What is a byte?
8 bits = 1 byte
used for storing data and executing instructions
Steps to convert decimal number to binary?
- Divide the number by 2.
- The remainder (modulus) is the next digit of the binary number (collected in reverse order).
- The floor (quotient) is carried over to the next iteration.
- Repeat until the floor is zero.
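The steps above can be sketched in Python (a minimal illustration; the helper name `to_binary` is ours, not from the cards):

```python
def to_binary(n):
    """Convert a positive integer to its binary string by
    repeated division by 2, reading remainders in reverse."""
    assert n > 0
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder = next binary digit
        n //= 2                    # floor carried to the next iteration
    return "".join(reversed(digits))

print(to_binary(13))  # → "1101"
```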
size of bits needed for n = size(n)
Formula?
size(𝑛) = ⌊log2(𝑛) + 1⌋ (= ⌊log2(𝑛)⌋ + 1 )
Proof of formula:
size(𝑛) = ⌊log2(𝑛) + 1⌋
see notes
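The formula can be sanity-checked against Python's built-in `int.bit_length` (a quick sketch; the function name `size` is ours):

```python
import math

def size(n):
    """Number of bits needed to represent n: floor(log2(n)) + 1."""
    return math.floor(math.log2(n)) + 1

# Agrees with Python's built-in bit count for small n
assert all(size(n) == n.bit_length() for n in range(1, 1001))
print(size(7), size(8))  # → 3 4
```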
Size of an instance: formal definition?
Formal definition
Size of an instance corresponds to the number of bits needed to represent the instance on a computer using a precisely defined coding scheme.
Size of an instance: less formal definition?
Less formal
Size stands for any integer that measures the number of components of an instance.
Size of an instance:
examples?
Examples
In sorting, the size corresponds to the number of items.
(ignoring the fact that more than one bit may be needed to represent each integer)
In graphs, the size is the number of nodes, number of edges, or both
Advantages of the theoretical approach to determine efficiency
It does not depend on
- the computer being used
- the programming language employed
- the skills of the programmer
It allows us to study the efficiency of an algorithm on instances of any size
(unlike the empirical approach, where conclusions drawn from small instances need not hold for large ones).
Expressing efficiency
storage – bits
time – seconds, minutes, etc. (no convention)
Algorithm efficiency - Principle of invariance?
Two different implementations of the same algorithm will not differ in efficiency by more than some multiplicative constant.
The running time of either implementation is bounded by a constant multiple of the running time of the other.
Algorithm efficiency - Principle of invariance?
formulas?
If 𝑡1(𝑛) and 𝑡2(𝑛) (seconds) are the running times of two implementations of an algorithm, then there exist two positive constants 𝑐 and 𝑑 for which
𝑡2(𝑛) /𝑑 ≤ 𝑡1(𝑛) ≤ 𝑐𝑡2(𝑛)
whenever 𝑛 is sufficiently large.
1/𝑑 = 𝑑′ ≤ 𝑡1(𝑛)/𝑡2(𝑛) ≤ 𝑐 (assuming 𝑡2(𝑛) ≠ 0)
1. The principle of invariance remains true irrespective of …?
2. What allows a constant speed-up?
3. Only a change in ____ allows a marked speed-up which reflects the change in the size of the instance.
The principle remains true irrespective of
change of computer
change of programming language / compiler
A change in implementation detail may allow a constant speed-up.
Only a change in algorithm allows a marked speed-up which reflects the change in the size of the instance.
An algorithm takes time in the order of …..
An algorithm takes a time in the order of 𝒕(𝒏) , if there
exists a positive constant 𝒄 and an implementation of the
algorithm capable of solving every instance of size 𝒏 in
not more than 𝒄𝒕(𝒏) seconds.
No unit for expressing theoretical efficiency!
Name 4 types of algorithms based on efficiency
Linear algorithms
cn
Quadratic algorithms
cn²
Polynomial algorithms
cnᵏ
Exponential algorithms
cⁿ
Issue with the multiplicative (hidden) constants in algorithm efficiency?
e.g. 𝑛2 days vs. 𝑛3 seconds for the same algorithm
when is n² days < n³ seconds? A day has 86,400 seconds, so n²·86,400 < n³ only when n > 86,400.
Asymptotically the first is better than the second; practically (for n ≤ 86,400) it is not.
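The crossover point can be checked directly; a quick sketch with both running times expressed in seconds (function names are ours):

```python
SECONDS_PER_DAY = 86400

def days_alg(n):
    """Running time of n² days, converted to seconds."""
    return n**2 * SECONDS_PER_DAY

def secs_alg(n):
    """Running time of n³ seconds."""
    return n**3

# The n² algorithm only wins once n exceeds 86,400
assert days_alg(86399) > secs_alg(86399)
assert days_alg(86401) < secs_alg(86401)
```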
Average case analysis - definition? Issues?
Average-case analysis: average over all possible instances
but are all instances equally likely?
requirement for an adequate distribution of instances
→ more difficult than worst-case analysis
Insertion sort - principle?
For each element, from the second to the last, we find the appropriate place among its predecessors
Insertion sort - pseudocode?
procedure insert(𝑇[1. . 𝑛])
~~~
for i ← 2 to n do
    x ← T[i]
    j ← i − 1
    while j > 0 and x < T[j] do
        T[j + 1] ← T[j]
        j ← j − 1
    T[j + 1] ← x
~~~
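A runnable version of the insertion-sort pseudocode, as a minimal Python sketch (0-based indexing instead of the cards' 1-based):

```python
def insertion_sort(T):
    """In-place insertion sort: for each element from the second
    to the last, find its place among its sorted predecessors."""
    for i in range(1, len(T)):
        x = T[i]
        j = i - 1
        while j >= 0 and x < T[j]:
            T[j + 1] = T[j]  # shift larger elements one slot right
            j -= 1
        T[j + 1] = x  # insert x into its correct position
    return T

print(insertion_sort([3, 1, 4, 1, 5]))  # → [1, 1, 3, 4, 5]
```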
Selection sort – principle?
Find the smallest element in the array and bring it to the front;
continue applying the principle to the array from the second position.
Selection sort - pseudocode?
procedure select(𝑇[1. . 𝑛])
~~~
for i ← 1 to n − 1 do
    minj ← i
    minx ← T[i]
    for j ← i + 1 to n do
        if T[j] < minx then
            minj ← j
            minx ← T[j]
    T[minj] ← T[i]
    T[i] ← minx
~~~
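A runnable version of the selection-sort pseudocode, again as a minimal Python sketch with 0-based indexing:

```python
def selection_sort(T):
    """In-place selection sort: find the smallest remaining
    element and swap it to the front of the unsorted part."""
    for i in range(len(T) - 1):
        minj, minx = i, T[i]
        for j in range(i + 1, len(T)):
            if T[j] < minx:
                minj, minx = j, T[j]
        T[minj] = T[i]  # swap the minimum into position i
        T[i] = minx
    return T

print(selection_sort([3, 1, 4, 1, 5]))  # → [1, 1, 3, 4, 5]
```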
Worst case complexity:
Insertion sort?
Selection sort?
O(n²) for both
Insertion sort complexities?
O(n²)
Θ(n²)
Ω(n)
Selection sort complexities?
all n²
O(n²)
Θ(n²)
Ω(n²)
best case symbol
Ω
average case symbol
Θ