Algorithms And Complexity Flashcards
What is an algorithm?
- To produce a program that solves all instances of a problem, we must specify a general step-by-step procedure for producing the solution to each instance
- This procedure is an algorithm
- The algorithm solves the problem
What is a parameter?
- variable values used in algorithms
- e.g. for finding a value (x) in a list (s) of size (n) there are three parameters: x, s and n
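The search problem on this card can be sketched in Python; the function name `linear_search` is illustrative, but the parameters x, s and n come straight from the card.

```python
# Sketch of the search problem: parameters are the target value x,
# the list s, and its size n. Returns the index of x in s, or -1
# if x is not present.
def linear_search(x, s, n):
    for i in range(n):
        if s[i] == x:
            return i
    return -1

print(linear_search(7, [3, 7, 1, 9], 4))  # → 1
print(linear_search(5, [3, 7, 1, 9], 4))  # → -1
```

Each call with concrete values for x, s and n is one instance of the problem.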
What is a problem?
A question we seek to answer
What is a solution?
Answer to the question asked in that instance
What is an Instance?
- Each specific assignment of values to the parameters is called an instance of the problem (each time the problem is defined with concrete values)
What are the primary and secondary interests of efficiency?
- Primary = running time and operations on data structures (time complexity)
- Secondary = memory usage (space complexity)
How important are algorithms and choosing the right one?
- the bigger the problem, the more evident the benefit of an efficient algorithm becomes
- algorithms should be considered a technology (the same as hardware)
- the choice of algorithm is as important as the choice of hardware
What is the point of algorithmic analysis?
- measuring the amount of work as it varies with the size of the data affected
- comparing performance against another algorithm on the same task
- predicting the performance needed to solve a task
What is algorithm performance?
- expressed as approximate rate of growth of work as size of data grows
- best, average and worst cases are discovered
- worst case is the primary issue
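Best and worst cases can be made concrete by counting operations; this is an illustrative sketch using linear search, not a method named on the card.

```python
# Instrumented linear search that counts comparisons, to show how
# best and worst cases differ for the same input size.
def search_cost(x, s):
    comparisons = 0
    for item in s:
        comparisons += 1
        if item == x:
            break
    return comparisons

s = [4, 8, 15, 16, 23]
print(search_cost(4, s))   # best case: target is first → 1 comparison
print(search_cost(99, s))  # worst case: target absent → 5 comparisons
```

The worst case (the whole list scanned) is the figure usually quoted, as the card notes.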
What is big-o notation?
- estimates an algorithm's rate of growth rather than the exact time/space resources used
- not exact measurements
What are the rules of Big-o notation?
- discard coefficients, logarithmic bases and constants
- determine how the time required by repetitive work grows as the size of the data does
- nested relationships = multiply
- sequential relationships = add
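The two combination rules above can be seen by counting loop iterations; the operation counter below is an illustrative sketch.

```python
# Sequential loops add their costs; nested loops multiply them.
def count_ops(n):
    ops = 0
    for i in range(n):        # sequential loop: n steps
        ops += 1
    for i in range(n):        # nested loops: n * n steps
        for j in range(n):
            ops += 1
    return ops                # total n + n^2, which is O(n^2)

print(count_ops(10))  # → 110
```

Discarding the lesser term n leaves the dominant n^2, so the whole function is O(n^2).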
What is the order of efficiency measures in big-o notation?
- fastest to slowest:
O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3), O(2^n), O(n!)
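Evaluating each growth function at a single input size makes the ordering visible; n = 10 is an arbitrary illustrative choice.

```python
import math

# Each growth function from the card evaluated at n = 10; the values
# appear in the same fastest-to-slowest order as the card lists them.
n = 10
growth = [
    ("O(1)",       1),
    ("O(log n)",   math.log2(n)),
    ("O(n)",       n),
    ("O(n log n)", n * math.log2(n)),
    ("O(n^2)",     n ** 2),
    ("O(n^3)",     n ** 3),
    ("O(2^n)",     2 ** n),
    ("O(n!)",      math.factorial(n)),
]
for name, value in growth:
    print(f"{name:10} {value:>12.1f}")
```

Note the gap widens dramatically: at n = 10, O(n!) is already over three million while O(n) is 10.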
How do we simplify Big-O expressions?
- split into dominant and lesser terms
- discard lesser terms
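As a worked example (with a hypothetical cost function), the simplification proceeds:

```latex
T(n) = 3n^2 + 5n + 2
\;\Rightarrow\; \text{dominant term: } 3n^2,\ \text{lesser terms: } 5n + 2
\;\Rightarrow\; \text{discard lesser terms and the coefficient } 3
\;\Rightarrow\; T(n) = O(n^2)
```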
What else can be measured using big-o?
- memory requirements
- two algorithms may be equal on time but one may consume more memory
What is big O concerned with, and what does it do to a function?
- concerned with eventual behaviour
- puts asymptotic upper bound on function
What is Omega?
- describes least amount of a resource that an algorithm needs for some class of input
- mainly used to measure the least amount of time required
- lower bound of algorithm
What is theta?
- the intersection of big O and Omega: when the upper and lower bounds match, the bound is tight
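The three notations above have standard textbook definitions (constants c and n0 are the usual witnesses):

```latex
f(n) = O(g(n))      \iff \exists\, c, n_0 > 0 : f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c, n_0 > 0 : f(n) \ge c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
```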
What is a problem with growth rate measuring?
- for most algorithms it is easy to recognise the growth rate
- distinction between upper and lower bound is only worthwhile when you have incomplete knowledge about the thing being measured
What mistake do many people make with upper bounds, lower bounds and best/worst case?
- upper bound does not mean worst case, and lower bound does not mean best case
- best and worst case give us a concrete input instance that we can apply to the algorithm description to get a cost measure
- what is being bounded is the growth rate for the cost not the cost itself
- growth rate applies to change in cost as input size changes
What is a misconception people have with best and worst case?
- the best case does not necessarily occur when the input size is small, and vice versa
- best and worst case instances exist for each possible size of input
What is the issue with a fast algorithm? How do we choose algorithms?
- the faster an algorithm is, the more complicated it tends to be to write
- when choosing we need to consider: the size of the data, the amount of memory available, the amount of time available, and whether we want a fast algorithm that produces an acceptable solution or a slower one that produces a good solution
What is the difference between decomposition and recursive decomposition?
- decomposition divides a problem into dissimilar sub-problems
- recursive decomposition divides a problem into smaller versions of the same problem
What properties must problems have to be solved recursively?
- must show self-similarity
- the structure repeats itself at different levels of scale
- solving the larger problem means solving smaller problems within it
What is the base case?
- simplest version of the problem that can be solved directly
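Recursive decomposition and the base case can be shown together; summing a list this way is an illustrative choice, not an example from the cards.

```python
# Recursive decomposition: summing a list by solving a smaller version
# of the same problem, with the empty list as the base case.
def recursive_sum(s):
    if not s:                           # base case: simplest solvable version
        return 0
    return s[0] + recursive_sum(s[1:])  # smaller instance of the same problem

print(recursive_sum([1, 2, 3, 4]))  # → 10
```

The self-similarity is visible in the recursive call: solving the sum of a list of size n means solving the sum of a list of size n - 1 within it.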