Section Twelve: Algorithms Flashcards
Chapter 59 – Analysis and design of algorithms
Comparing algorithms
Algorithms may be compared on how much time they need to solve a particular problem. This is referred to as the time complexity of the algorithm. The goal is to design algorithms that run quickly while using a minimal amount of resources such as memory.
In order to compare the efficiency of different algorithms in terms of execution time, we need to quantify the number of basic operations or steps that the algorithm will need, in terms of the number of items to be processed.
Chapter 59 – Analysis and design of algorithms
A linear function
A linear function is expressed in general terms as f(x) = ax + c
Chapter 59 – Analysis and design of algorithms
A polynomial function
A polynomial function is expressed in general terms as f(x) = axᵐ + bx + c
Chapter 59 – Analysis and design of algorithms
An exponential function
An exponential function takes the form f(x) = abˣ.
This function grows very large, very quickly!
Chapter 59 – Analysis and design of algorithms
A logarithmic function
A logarithmic function takes the form f(x) = a logₙ x
Chapter 59 – Analysis and design of algorithms
Permutations
The permutation of a set of objects is the number of ways of arranging the objects. For example, if you have 3 objects A, B and C you can choose any of A, B or C to be the first object. You then have two choices for the second object, making 3 x 2 = 6 different ways of arranging the first two objects, and then just one way of placing the third object. The six permutations are ABC, ACB, BAC, BCA, CAB, CBA.
The formula for calculating the number of permutations of four objects is 4 x 3 x 2 x 1, written 4! and spoken as “four factorial”. (Note that 10! = 3.6 million so don’t try getting 10 students to line up in all possible ways!)
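The factorials mentioned above can be checked with a short Python sketch (the variable names are ours, used purely for illustration):

```python
import math

# math.factorial(n) computes n! - the number of permutations of n objects.
ways_3 = math.factorial(3)    # 3 x 2 x 1 = 6 arrangements of A, B and C
ways_4 = math.factorial(4)    # 4 x 3 x 2 x 1 = 24
ways_10 = math.factorial(10)  # 3628800, roughly 3.6 million
print(ways_3, ways_4, ways_10)
```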
Chapter 59 – Analysis and design of algorithms
Big-O notation
Now that the maths is out of the way, we can study the so-called Big-O notation, which is used to express the time complexity, or performance, of an algorithm. (‘O’ stands for ‘Order’.)
Chapter 59 – Analysis and design of algorithms
O(1) (Constant time)
O(1) describes an algorithm that takes constant time (the same amount of time) to execute regardless of the size of the input data set.
Suppose array a has n items. The statement
length = len(a)
will take the same amount of time to execute however many items are held in the array.
Chapter 59 – Analysis and design of algorithms
O(n) (Linear time)
O(n) describes an algorithm whose performance will grow in linear time, in direct proportion to the size of the data set. For example, a linear search of an array of 1000 unsorted items will take 1000 times longer than searching an array of 1 item.
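As a minimal sketch of linear time (the function name `total` is ours, not from the text), summing a list does one addition per element, so the work grows in direct proportion to n:

```python
def total(values):
    # One addition per element: the number of steps grows in direct
    # proportion to len(values), so this runs in O(n) time.
    result = 0
    for v in values:
        result += v
    return result
```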
Chapter 59 – Analysis and design of algorithms
O(n²) (Polynomial time)
O(n²) describes an algorithm whose performance is directly proportional to the square of the size of the data set. A program with two nested loops each performed n times will typically have an order of time complexity O(n²). The running time of the algorithm grows in polynomial time.
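A minimal sketch of the nested-loop pattern (the function name `count_duplicate_pairs` is ours, chosen for illustration): comparing every item with every later item takes roughly n × n steps.

```python
def count_duplicate_pairs(items):
    # Two nested loops over the data set: roughly n * n comparisons
    # in total, so the time complexity is O(n^2).
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                count += 1
    return count
```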
Chapter 59 – Analysis and design of algorithms
O(2ⁿ) (Exponential time)
O(2ⁿ) describes an algorithm where the time taken to execute will double with every additional item added to the data set. The execution time grows in exponential time and quickly becomes very large.
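One illustration of this doubling is generating every subset of a set: each extra element doubles the number of subsets. A minimal recursive sketch (the function name `subsets` is ours, not from the text):

```python
def subsets(items):
    # Each element is either in a subset or not, so there are 2**n subsets;
    # generating them all takes at least O(2^n) time, doubling with each
    # additional item.
    if not items:
        return [[]]
    rest = subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]
```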
Chapter 59 – Analysis and design of algorithms
O(log n) (Logarithmic time)
The time taken to execute an algorithm of order O(log n) (logarithmic time) will grow very slowly as the size of the data set increases. A binary search is a good example of an algorithm of time complexity O(log₂ n). Doubling the size of the data set has very little effect on the time the algorithm takes to complete.
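A minimal binary search sketch on a sorted list (assuming 0-based indexing; the function returns −1 when the target is absent):

```python
def binary_search(sorted_items, target):
    # Each comparison halves the remaining search range, so at most about
    # log2(n) + 1 comparisons are needed: O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```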
Chapter 59 – Analysis and design of algorithms
O(n!) (Factorial time)
The time taken to execute an algorithm of order O(n!) will grow very quickly, faster than O(2ⁿ).
Suppose that the problem is to find all the permutations of n letters. If n = 2, there are 2 permutations to find. If n = 6, there are 720 permutations – far more than 2ⁿ, which is only 64.
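These counts can be checked with Python's standard `itertools.permutations` (the variable names are ours):

```python
import itertools

# Generating every permutation of n letters is an O(n!) task:
# n! grows far faster than 2**n.
perms_2 = list(itertools.permutations("AB"))
perms_6 = list(itertools.permutations("ABCDEF"))
print(len(perms_2))  # 2! = 2
print(len(perms_6))  # 6! = 720, far more than 2**6 = 64
```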
Chapter 59 – Analysis and design of algorithms
Calculating the time complexity of an algorithm
Here are two different algorithms for finding the smallest element in an array called arrayX of size n. Assume the array index starts at 0.
The first algorithm sets a variable called minimum equal to the first value in the array. It then compares each subsequent item in the array to minimum and, if the item is smaller, replaces minimum with this new lowest value.
minimum = arrayX[0]
for k = 1 to n - 1
   if arrayX[k] < minimum then
      minimum = arrayX[k]
   endif
next k
To calculate the time complexity of the algorithm in Big-O notation, we need to count the number of basic operations it performs. There is one initial assignment, and the comparison inside the loop is performed n − 1 times, giving roughly n steps in total. As we have already discussed, the constant is insignificant compared to n, so this algorithm executes in linear time and has time complexity O(n).
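The pseudocode above translates directly into Python (the function name `find_minimum` is ours; Python uses `range(1, n)` for the same 1 to n − 1 loop):

```python
def find_minimum(arrayX):
    # One initial assignment, then one comparison per remaining element:
    # 1 + (n - 1) steps, i.e. linear time, O(n).
    minimum = arrayX[0]
    for k in range(1, len(arrayX)):
        if arrayX[k] < minimum:
            minimum = arrayX[k]
    return minimum
```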
Chapter 60 – Searching algorithms
Linear search
Sometimes it is necessary to search for items in a file, or in an array in memory. If the items are not in any particular sequence, the data items have to be searched one by one until the required one is found or the end of the list is reached. This is called a linear search.
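A minimal Python sketch of a linear search (the function name `linear_search` is ours; it returns the index of the first match, or −1 if the end of the list is reached without finding the item):

```python
def linear_search(items, target):
    # Examine the items one by one until the required one is found
    # or the end of the list is reached.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not present
```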
Chapter 60 – Searching algorithms
Time complexity of linear search
We can determine the algorithm’s efficiency in terms of execution time, expressed in Big-O notation. To do this, you need to count the number of operations that the algorithm will require for n items. The loop is performed n times for a list of length n, and there are two steps in each iteration (an IF statement and an assignment statement), giving a total of 3 + 2n steps (including 3 steps at the start). The constant term and the coefficient of n become insignificant as n increases in size, and the time complexity of the algorithm basically depends on how often the loop has to be performed in the worst-case scenario. The linear search therefore has time complexity O(n).