3.3 Complexity Of Algorithms Flashcards
How can the efficiency of an algorithm be analyzed?
One measure of efficiency is the time
used by a computer to solve a problem using the algorithm when input values are of a specified
size. A second measure is the amount of computer memory required to implement the algorithm
when input values are of a specified size.
What do you know about computational complexity?
An analysis of the time required to solve a problem of a particular size involves the time complexity
of the algorithm. An analysis of the computer memory required involves the space complexity
of the algorithm.
Time Complexity:
How is it described, and why that way?
Time complexity is described in terms of the number of operations required instead of actual computer time because of the difference in time needed for different computers to perform
basic operations. Moreover, it is quite complicated to break all operations down to the basic bit
operations that a computer uses. Furthermore, the fastest computers in existence can perform
basic bit operations (for instance, adding, multiplying, comparing, or exchanging two bits) in
10^−11 second (10 picoseconds), but personal computers may require 10^−8 second (10 nanoseconds), which is 1000 times as long, to do the same operations.
The operations used to measure time
complexity can be the comparison of integers, the addition of integers, the multiplication of
integers, the division of integers, or any other basic operation.
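To illustrate counting basic operations rather than clock time, here is a minimal Python sketch (the function name is mine, not from the text) that counts the integer comparisons used to find the maximum of a list:

```python
def max_with_count(seq):
    # Returns the maximum of seq and the number of integer comparisons used.
    comparisons = 0
    current = seq[0]
    for x in seq[1:]:
        comparisons += 1  # one comparison of integers per loop pass
        if x > current:
            current = x
    return current, comparisons

# Finding the maximum of n items always uses exactly n - 1 comparisons.
print(max_with_count([3, 1, 4, 1, 5, 9, 2, 6]))  # → (9, 7)
```

The count depends only on n, not on the machine, which is exactly why operations are the right unit.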
WORST-CASE COMPLEXITY
By the worst-case performance of an algorithm, we mean the largest number of operations needed to solve the given problem using this algorithm on input of specified
size. Worst-case analysis tells us how many operations an algorithm requires to guarantee that
it will produce a solution.
AVERAGE-CASE COMPLEXITY
The average number of operations used
to solve the problem over all possible inputs of a given size is found in this type of analysis. Average-case time complexity analysis is usually much more complicated than worst-case
analysis.
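To make the worst-case/average-case distinction concrete, here is a hedged Python sketch (names are illustrative) counting comparisons in a linear search: a missing target forces all n comparisons (the worst case), while averaging over all n positions of a present target gives (n + 1)/2:

```python
def search_comparisons(seq, target):
    # Number of comparisons of target with list elements in a linear search.
    for count, x in enumerate(seq, start=1):
        if x == target:
            return count
    return len(seq)  # target absent: every element was compared

n = 8
seq = list(range(n))
worst = search_comparisons(seq, -1)                         # absent target: n comparisons
average = sum(search_comparisons(seq, t) for t in seq) / n  # (n + 1) / 2
print(worst, average)  # → 8 4.5
```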
Note on counting comparisons (Example 2):
Although we have counted the comparisons needed to determine whether we have
reached the end of a loop, these comparisons are often not counted. From this point on we will
ignore such comparisons.
What is the most efficient time complexity for sorting n items?
Sorting n items can be done in O(n log n) time.
Run animations to gain insight into the efficiency of these sorting algorithms:
the bubble sort, the insertion sort, the shell sort, the merge sort, and the quick sort.
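Besides animations, one can compare operation counts directly. This Python sketch (my own illustration, not from the text) counts the comparisons made by bubble sort, which is Θ(n^2), and by merge sort, which is O(n log n):

```python
import random

def bubble_sort_comparisons(items):
    # Bubble sort; returns (sorted list, comparison count) — always n(n-1)/2 comparisons.
    a, count = list(items), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, count

def merge_sort_comparisons(items):
    # Merge sort; returns (sorted list, comparison count) — O(n log n) comparisons.
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, cl = merge_sort_comparisons(items[:mid])
    right, cr = merge_sort_comparisons(items[mid:])
    merged, count, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, count

data = random.sample(range(10000), 256)
sorted_b, bc = bubble_sort_comparisons(data)
sorted_m, mc = merge_sort_comparisons(data)
print(bc, mc)  # bc is always 256·255/2 = 32640; mc is at most 256·8 = 2048
```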
ALGORITHM 1 Matrix Multiplication.
Simple algorithm:
determine the number of additions and multiplications used;
also, the complexity of the most efficient known algorithm.
procedure matrix multiplication(A, B: matrices)
{A = [a_iq] is m × k, B = [b_qj] is k × n}
for i := 1 to m
    for j := 1 to n
        c_ij := 0
        for q := 1 to k
            c_ij := c_ij + a_iq b_qj
return C {C = [c_ij] is the product of A and B}
Is c_11 = 0 + a_11 b_11 counted as an addition? No: the addition of the first product to the initial 0 is not counted, so each entry uses k multiplications and k − 1 integer additions.
O(n^√7) for the most efficient algorithm.
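A direct Python transcription of Algorithm 1 (the function name is mine) that tallies the integer multiplications and additions, not counting the initial addition to 0, makes the counts explicit:

```python
def matrix_multiply_with_counts(A, B):
    # Multiplies an m×k matrix A by a k×n matrix B, tallying integer
    # multiplications and additions; the initial c_ij := 0 is not an addition.
    m, k, n = len(A), len(B), len(B[0])
    mults = adds = 0
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            total = 0
            for q in range(k):
                total += A[i][q] * B[q][j]
            mults += k       # k multiplications per entry
            adds += k - 1    # adding the first product to 0 is not counted
            C[i][j] = total
    return C, mults, adds

C, mults, adds = matrix_multiply_with_counts([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C, mults, adds)  # → [[19, 22], [43, 50]] 8 4
```

For two n × n matrices this gives n^3 multiplications and n^2(n − 1) additions, i.e. O(n^3) operations in total.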
ALGORITHM 2 The Boolean Product of Zero–One Matrices.
How many bit operations?
procedure Boolean product(A, B: zero–one matrices)
{A = [a_iq] is m × k, B = [b_qj] is k × n}
for i := 1 to m
    for j := 1 to n
        c_ij := 0
        for q := 1 to k
            c_ij := c_ij ∨ (a_iq ∧ b_qj)
return C {C = [c_ij] is the Boolean product of A and B}
2n^3 bit operations are required to compute A ⊙ B when A and B are n × n zero–one matrices.
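A Python sketch of Algorithm 2 (names are illustrative) that counts one ∨ and one ∧ per innermost step confirms the 2n^3 count for n × n inputs:

```python
def boolean_product_with_count(A, B):
    # Boolean product of zero–one matrices A (m×k) and B (k×n),
    # counting one OR and one AND per innermost step.
    m, k, n = len(A), len(B), len(B[0])
    bit_ops = 0
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            c = 0
            for q in range(k):
                c = c | (A[i][q] & B[q][j])  # one ∨ and one ∧
                bit_ops += 2
            C[i][j] = c
    return C, bit_ops

C, ops = boolean_product_with_count([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(C, ops)  # → [[0, 1], [1, 0]] 16  (2 · 2^3 = 16 bit operations)
```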
In which order should the matrices A1, A2, and A3—where A1 is 30 × 20, A2 is 20 × 40, and
A3 is 40 × 10, all with integer entries—be multiplied to use the least number of multiplications
of integers?
A1 (A2A3)
Note that m1m2m3 multiplications of integers
are performed to multiply an m1 × m2 matrix and an m2 × m3 matrix.
example 9, page 236.
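The arithmetic behind the answer can be checked in a few lines of Python (the helper name is mine):

```python
def pair_mults(p, q, r):
    # Multiplying a p×q matrix by a q×r matrix uses p·q·r integer multiplications.
    return p * q * r

# A1 is 30×20, A2 is 20×40, A3 is 40×10.
order_1 = pair_mults(20, 40, 10) + pair_mults(30, 20, 10)  # A1(A2A3): 8000 + 6000
order_2 = pair_mults(30, 20, 40) + pair_mults(30, 40, 10)  # (A1A2)A3: 24000 + 12000
print(order_1, order_2)  # → 14000 36000
```

So A1(A2A3) uses 14,000 multiplications versus 36,000 for (A1A2)A3.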
Algorithmic paradigm
That is, a general approach based on a particular concept that can be used to construct algorithms for solving a variety of problems.
Some of the algorithms we have already studied are based on an algorithmic paradigm
known as brute force, which we will describe in this section. Algorithmic paradigms, studied later in this book, include divide-and-conquer algorithms studied in Chapter 8, dynamic
programming, also studied in Chapter 8, backtracking, studied in Chapter 10, and probabilistic algorithms, studied in Chapter 7. There are many important algorithmic paradigms besides
those described in this book. Consult books on algorithm design such as [KlTa06] to learn more
about them.
BRUTE-FORCE ALGORITHMS
In a brute-force algorithm, a problem is solved in the most straightforward manner
based on the statement of the problem and the definitions of terms. Brute-force algorithms are
designed to solve problems without regard to the computing resources required.
They are naive approaches
for solving problems that do not take advantage of any special structure of the problem or clever
ideas.
ALGORITHM 3 Brute-Force Algorithm for Closest Pair of Points.
What is its big-Θ complexity, and the big-O complexity of a more efficient algorithm?
procedure closest-pair((x1, y1), (x2, y2), …, (xn, yn): pairs of real numbers)
min := ∞
for i := 2 to n
    for j := 1 to i − 1
        if (xj − xi)^2 + (yj − yi)^2 < min then
            min := (xj − xi)^2 + (yj − yi)^2
            closest pair := ((xi, yi), (xj, yj))
return closest pair
This algorithm uses Θ(n^2) operations, counting arithmetic operations and comparisons.
A more efficient algorithm: O(n log n).
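A Python rendering of Algorithm 3 (the function name is mine): it examines all n(n − 1)/2 pairs, so it uses Θ(n^2) operations, and it compares squared distances so no square root is needed:

```python
def closest_pair(points):
    # Brute-force closest pair: examine every pair of points, Θ(n^2) operations.
    min_dist = float("inf")
    pair = None
    for i in range(1, len(points)):
        for j in range(i):
            xi, yi = points[i]
            xj, yj = points[j]
            d = (xj - xi) ** 2 + (yj - yi) ** 2  # squared distance suffices
            if d < min_dist:
                min_dist = d
                pair = (points[i], points[j])
    return pair

print(closest_pair([(0, 0), (5, 5), (1, 0), (9, 9)]))  # → ((1, 0), (0, 0))
```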
Commonly Used Terminology for the
Complexity of Algorithms:
Θ(1) Constant complexity
Θ(log n) Logarithmic complexity
Θ(n) Linear complexity
Θ(n log n) Linearithmic complexity
Θ(n^b) Polynomial complexity, b ≥ 1 (integer)
Θ(b^n), where b > 1 Exponential complexity
Θ(n!) Factorial complexity
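To see that these classes really are ordered by growth rate, this Python sketch (an illustration, not from the text) evaluates each of them at n = 10:

```python
import math

n = 10
growth = [
    ("constant", 1),
    ("logarithmic", math.log2(n)),
    ("linear", n),
    ("linearithmic", n * math.log2(n)),
    ("polynomial (b = 2)", n ** 2),
    ("exponential (b = 2)", 2 ** n),
    ("factorial", math.factorial(n)),
]
values = [v for _, v in growth]
# Already at n = 10 the values appear in strictly increasing order:
# 1 < 3.32 < 10 < 33.2 < 100 < 1024 < 3628800.
for name, v in growth:
    print(f"{name:20s} {v}")
```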