1) Fundamentals of algorithms Flashcards
Big-O Notation
A measure of the time (or space) complexity of an algorithm, expressed in terms of its input size.
An algorithm with a function f(x) = x+2
The set of values that can go into a function (the values x may take) is known as the domain.
The set of values from which the function's outputs are drawn is known as the codomain.
How does Big-O notation express the order of complexity?
Big-O notation gives an upper bound on the time it takes for an algorithm to complete, i.e. its worst-case growth.
The notation refers to the order of growth (order of complexity): a measure of how much more time or space the algorithm needs as the input size increases.
Linear Functions
O (N)
This means that the runtime of the algorithm increases in direct proportion to the input size.
Example: linear search of an array
Quadratic Functions
O (N^2)
This means that the runtime of the algorithm is directly proportional to the square of the input size.
Examples (see the sketch after this list):
- Bubble sort
- Selection sort
- Insertion sort
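As a quick runnable illustration of a quadratic algorithm (bubble sort is written out in full later in this deck), here is a minimal insertion sort sketch in Python; the function name insertion_sort and the test list are illustrative, not part of the original card.

def insertion_sort(items):
    # Sort the list in place; the nested loop structure gives O(n^2) comparisons in the worst case.
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger items one place to the right until the slot for current is found.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items

print(insertion_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]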
Logarithmic Functions
O (log(N))
For example, f(N) = log₂(N) (the logarithm is to base 2 in binary search).
This means that the time taken to run the algorithm grows in proportion to the logarithm of the input size.
Example: Binary search
Exponential Functions
O (2^N)
The runtime of the algorithm increases as an exponential function of the input size.
For each additional input item, the time taken might double.
Example: brute-force solutions to intractable problems (a short sketch follows)
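A minimal Python sketch of exponential growth, assuming a plain list as input; the function name all_subsets is illustrative. Listing every subset of a collection is a typical brute-force step when tackling intractable problems, and each extra item doubles the number of subsets.

def all_subsets(items):
    # There are 2**len(items) subsets, so the work doubles with each extra input item: O(2^N).
    if not items:
        return [[]]
    rest = all_subsets(items[1:])
    return rest + [[items[0]] + subset for subset in rest]

print(len(all_subsets([1, 2, 3])))     # 8  (2^3)
print(len(all_subsets([1, 2, 3, 4])))  # 16 (2^4), double the previous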
Constant Time Functions
O (1)
This is when the time taken to run an algorithm does not vary with input size.
Example: Indexing an array
Deriving the complexity of an algorithm
An algorithm that requires no input data, loops or recursion, such as a simple statement will have a time complexity of O(1).
An algorithm that loops through an array accessing each element will have a time complexity of O (n).
An algorithm with nested loops takes longer as the nesting deepens; two loops each running n times give a time complexity of O(n^2).
An algorithm that makes two recursive calls to itself on each call (see the sketch below) will have a time complexity of O(2^N).
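A hedged illustration of that last rule: a function that calls itself twice per call, such as this naive recursive Fibonacci sketch in Python, makes roughly 2^N calls (the name naive_fib is illustrative).

def naive_fib(n):
    # Each call spawns two further calls, so the number of calls grows roughly as 2^n.
    if n <= 1:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

print(naive_fib(10))  # 55, but computing it already takes well over a hundred calls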
Tractable and Intractable problems
A tractable problem is a problem that is said to be solvable in polynomial time.
An intractable problem is a problem that can be solved theoretically but cannot be solved within polynomial time.
The Halting problem
The halting problem is an example of an unsolvable (undecidable) problem: it is impossible to write a program that can work out whether another program will halt given a particular input.
It demonstrates that some problems cannot be solved by a computer at all, no matter how much time is available.
Permutations
The different ways in which a set of items can be arranged; for n distinct items there are n! permutations.
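A small Python check of this using the standard library's itertools.permutations; assuming three distinct items, there are 3! = 6 arrangements.

from itertools import permutations

items = ["A", "B", "C"]
for p in permutations(items):
    print(p)  # ('A', 'B', 'C'), ('A', 'C', 'B'), ... six orderings in total
print(len(list(permutations(items))))  # 6 = 3!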
Linear Search Algorithm
SUB linearSearch(namelist, nameSought)
    index = -1
    i = 0
    found = False
    WHILE i < len(namelist) AND NOT found:
        IF namelist[i] = nameSought:
            index = i
            found = True
        ENDIF
        i = i + 1
    ENDWHILE
    RETURN index
ENDSUB
Total Number of steps:
2n + 3 (In Worst Case)
Time Complexity = O (n)
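A minimal Python version of the same linear search, assuming a plain list of names; it mirrors the pseudocode above and returns -1 when the name is not found.

def linear_search(name_list, name_sought):
    # Check each element in turn: up to n comparisons, so O(n) in the worst case.
    for i, name in enumerate(name_list):
        if name == name_sought:
            return i
    return -1

print(linear_search(["Ann", "Bob", "Cat"], "Bob"))   # 1
print(linear_search(["Ann", "Bob", "Cat"], "Dave"))  # -1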
Bubble Sort Algorithm
FOR i = 1 to n
    FOR j = 1 to (n - i)
        IF numbers[j] > numbers[j + 1]
            a = numbers[j]
            numbers[j] = numbers[j + 1]
            numbers[j + 1] = a
        ENDIF
    ENDFOR
ENDFOR
Time Complexity = O (n^2)
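The same bubble sort as a runnable Python sketch, assuming a list of comparable values; the nested loops give the O(n^2) behaviour noted above.

def bubble_sort(numbers):
    # Repeatedly swap adjacent out-of-order pairs; two nested loops over n items give O(n^2).
    n = len(numbers)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if numbers[j] > numbers[j + 1]:
                numbers[j], numbers[j + 1] = numbers[j + 1], numbers[j]
    return numbers

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]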
Binary Search Algorithm
Binary search is a very efficient way of searching a sorted list.
The middle item in the list is examined; if this is the item you are searching for, return its index.
Eliminate half of the list, depending on whether the item is greater or less than the middle item.
Repeat until the item has been found or it is proved not to be in the list.
SUB binarySearch(array, element, start, end)
    IF start > end:
        RETURN -1
    ENDIF
    mid = (start + end) // 2
    IF element == array[mid]:
        RETURN mid
    ENDIF
    IF element < array[mid]:
        RETURN binarySearch(array, element, start, mid - 1)
    ELSE:
        RETURN binarySearch(array, element, mid + 1, end)
    ENDIF
ENDSUB
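A runnable Python version of the recursive binary search above, assuming a sorted list; start and end are inclusive indices as in the pseudocode, and each call halves the search range, giving O(log n).

def binary_search(array, element, start, end):
    # Halve the search range on every call: O(log n) on a sorted list.
    if start > end:
        return -1  # element is not present
    mid = (start + end) // 2
    if element == array[mid]:
        return mid
    if element < array[mid]:
        return binary_search(array, element, start, mid - 1)
    return binary_search(array, element, mid + 1, end)

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23, 0, len(data) - 1))  # 5
print(binary_search(data, 7, 0, len(data) - 1))   # -1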
Merge Sort Algorithm
Merge sort is a divide-and-conquer sorting algorithm: the list is repeatedly split in half until each sub-list holds a single item, then the sub-lists are merged back together in order.
Time Complexity = O (n log n)
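A minimal merge sort sketch in Python, assuming a plain list as input; the helper names merge_sort and merge are illustrative.

def merge_sort(items):
    # Split the list in half, sort each half recursively, then merge: O(n log n).
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

def merge(left, right):
    # Combine two already-sorted lists into one sorted list.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]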