Definitions Flashcards
Sedgewick (2011): Algorithm
The term algorithm is used in computer science to describe
methods for solving problems that are suited for computer
implementation.
Aho, Hopcroft & Ullman (1975): Algorithm
An algorithm is a finite sequence of instructions, each of which
has a clear meaning and can be performed with a finite
amount of effort in a finite length of time.
Levitin (2003): Algorithm
An algorithm is a finite sequence of unambiguous instructions for solving a problem, i.e., for obtaining the required output for any legitimate input in a finite amount of time.
Iterative Algorithm
Iteration is the repetition of a process in order to generate a sequence of outcomes. The sequence will approach some endpoint or end value. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration.
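As an illustration (not part of the original card), a minimal Python sketch of an iterative computation, where each loop pass starts from the outcome of the previous one; the name factorial_iterative is just a placeholder:

    def factorial_iterative(n):
        # Each loop pass (iteration) updates the result produced by the previous pass.
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result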
Direct Recursion Algorithm
If a function calls itself, it’s known as direct recursion.
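For example (an illustrative Python sketch, not part of the definition), a factorial function that calls itself:

    def factorial(n):
        # factorial calls factorial: direct recursion.
        if n <= 1:
            return 1
        return n * factorial(n - 1)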
Indirect Recursion Algorithm
Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f.
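A small Python sketch of indirect (mutual) recursion, assuming a non-negative integer n; the names is_even and is_odd are illustrative:

    def is_even(n):
        # is_even calls is_odd, which calls is_even again: indirect recursion.
        return True if n == 0 else is_odd(n - 1)

    def is_odd(n):
        return False if n == 0 else is_even(n - 1)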
Divide and Conquer Algorithm
Divide and conquer is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly.
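Merge sort is a classic instance; a short Python sketch (illustrative only):

    def merge_sort(a):
        # Divide: split the list in half. Conquer: sort each half recursively.
        # Combine: merge the two sorted halves into one sorted list.
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]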
Dynamic Programming Algorithms
Like the divide-and-conquer method, Dynamic Programming solves problems by combining the solutions of subproblems. Unlike divide and conquer, however, a Dynamic Programming algorithm solves each sub-problem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time.
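A minimal Python sketch (illustrative, using Fibonacci numbers): the answer to each sub-problem is filled into a table once and then reused.

    def fib(n):
        # Bottom-up dynamic programming: each subproblem answer is computed once,
        # stored in the table, and never recomputed.
        table = [0, 1]
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])
        return table[n]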
Randomised Algorithms
An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a Randomised Algorithm.
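For instance, quicksort with a randomly chosen pivot (a Python sketch; the random pivot choice is what makes it randomised):

    import random

    def quicksort(a):
        # The pivot is picked at random, so the steps taken depend on random numbers.
        if len(a) <= 1:
            return a
        pivot = random.choice(a)
        return (quicksort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + quicksort([x for x in a if x > pivot]))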
Brute Force Algorithms
Brute Force Algorithms refer to a programming style that does not include any shortcuts to improve performance, but instead relies on sheer computing power to try all possibilities until the solution to a problem is found.
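A small Python sketch of the brute-force style (a hypothetical example): finding two numbers that add up to a target by trying every pair.

    def pair_with_sum(numbers, target):
        # No shortcuts: simply test all possible pairs until one works.
        for i in range(len(numbers)):
            for j in range(i + 1, len(numbers)):
                if numbers[i] + numbers[j] == target:
                    return numbers[i], numbers[j]
        return None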
Euclid’s Algorithm for GCD
- Step 1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to step 2.
- Step 2: Divide m by n and assign the value of the remainder to r.
- Step 3: Assign the value of n to m and the value of r to n. Go to step 1.
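The three steps above translate directly into Python (a sketch, with m and n assumed to be non-negative integers):

    def gcd(m, n):
        while n != 0:    # Step 1: if n = 0, m is the answer
            r = m % n    # Step 2: divide m by n and take the remainder r
            m, n = n, r  # Step 3: replace m by n and n by r, then repeat
        return m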
cₒₚ
The cost (time) to execute an algorithm’s basic operation on some computer
C(n)
The number of times the algorithm's basic operation is executed (as a function of the input size n)
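Together the two quantities give the usual running-time estimate T(n) ≈ cₒₚC(n): the cost of one basic operation multiplied by the number of times it is executed.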
~1: Constant time
Normally the amount of time that a single instruction takes under the RAM model
~log n: Logarithmic
It normally occurs in algorithms that transform a bigger problem into a smaller version whose input size is a fraction of the original problem. Common in searching and in some tree algorithms.
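Binary search is the standard example: each comparison halves the remaining input. A Python sketch (assumes sorted_list is already sorted):

    def binary_search(sorted_list, key):
        # Each step halves the search range, so roughly log n comparisons are made.
        lo, hi = 0, len(sorted_list) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_list[mid] == key:
                return mid
            if sorted_list[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1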
~n: Linear
Algorithms that are forced to pass through all elements of the input (of size n) a constant number of times yield linear running time.
~nᵏlog n: Polylogarithmic
Typical of algorithms that break the problem into smaller parts, solve the problem for the smaller parts and combine the solutions to obtain the solution of the original problem instance (k ≥ 1).
~n²: Quadratic
A subset of the polynomial solutions. Quadratic solutions are still acceptable and are relatively efficient for small- to medium-scale problem sizes. Typical of algorithms that have to analyse all pairs of elements of the input.
~n³: Cubic
Not very efficient but still polynomial. A classical example of an algorithm in this class is standard matrix multiplication.
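The straightforward triple loop makes the n³ behaviour visible (a Python sketch for square n × n matrices):

    def matrix_multiply(A, B):
        # Three nested loops of n iterations each: about n^3 scalar multiplications.
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]
        return C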
~2ⁿ: Exponential
Unfortunately quite a few known solutions to practical problems are in this category. This is as bad as testing all possible answers to a problem. When algorithms fall in this category, algorithm designers go in search of approximation algorithms.
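For example, deciding whether some subset of a list sums to a target by testing all 2ⁿ subsets (an illustrative brute-force Python sketch):

    from itertools import combinations

    def subset_sum_exists(numbers, target):
        # Tries every one of the 2^n subsets: exponential running time.
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return True
        return False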
C(n): Worst
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
C(n): Best
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or
inputs) of size n for which the algorithm runs the fastest among all the inputs of that size.
RAM model
A model of computation in which each simple operation (arithmetic, comparison, assignment, memory access) takes one step; it is used for counting the number of steps to calculate the time complexity of an algorithm.
O(g(n)): Notation
A function t(n) is said to be in O(g(n)), denoted by t(n) = O(g(n)), if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ cg(n), ∀n ≥ n0
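For example, 100n + 5 = O(n), since 100n + 5 ≤ 105n for all n ≥ 1 (take c = 105 and n0 = 1).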
o(g(n)): Notation
t(n) < cg(n), ∀n ≥ n0, and this must hold for every positive constant c (not merely some c): Strict O Notation
Ω(g(n)): Notation
t(n) ≥ cg(n), ∀n ≥ n0
ω(g(n)): Notation
t(n) > cg(n), ∀n ≥ n0, and this must hold for every positive constant c (not merely some c): Strict Ω Notation
Θ(g(n)): Notation
c1 g(n) ≤ t(n) ≤ c2 g(n), ∀n ≥ n0
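For example, (1/2)n(n − 1) = Θ(n²): it is at most (1/2)n² for all n ≥ 0 and at least (1/4)n² for all n ≥ 2, so c1 = 1/4, c2 = 1/2 and n0 = 2 work.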
Transitivity
The relation between three elements such that if it holds between the first and second and it also holds between the second and third it must necessarily hold between the first and third.
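In asymptotic terms: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)); the same holds for Ω, Θ, o and ω.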
Reflexivity
A function is always a lower and upper bound of itself but never a strict lower or upper bound.
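Concretely: f(n) = O(f(n)), f(n) = Ω(f(n)) and f(n) = Θ(f(n)), but f(n) is never o(f(n)) or ω(f(n)).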
Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transpose Symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n)), f(n) = o(g(n)) if and only if g(n) = ω(f(n))
Recursive Method
The ability to “transform” a problem into one or more smaller instances of the same problem, then solve these instances to find a solution to the larger problem.
Linear Data structure
- There is a unique first element
- There is a unique last element
- Every component has a unique predecessor (except the first)
- Every component has a unique successor (except the last)
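Arrays and singly linked lists are typical examples. A minimal Python sketch of a linked list node (illustrative only; the class and variable names are placeholders):

    class Node:
        # Each node has exactly one successor (next); the last node's next is None.
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    # first -> second -> third: a unique first element, a unique last element,
    # and every node has a unique predecessor and successor where they exist.
    third = Node(3)
    second = Node(2, third)
    first = Node(1, second)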