ANALYSIS OF ALGORITHMS Flashcards
What is an algorithm?
An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.
What does analyzing an algorithm entail?
• Developing a formula for predicting how fast an algorithm is, based on the size of the input (time complexity), and/or
• Developing a formula for predicting how much memory an algorithm requires, based on the size of the input (space complexity)
What does “size of the input” mean?
• We choose the “size” to be the parameter that most influences the actual time/space required
• It is usually obvious what this parameter is
• Sometimes we need two or more parameters
Why is it better to use worst case for analysis?
• Average-case time is often difficult to determine and is not well defined (e.g., for sorting an array: how out of order is the “average” unsorted array?)
• The worst case is easier to analyze
• Worst-case guarantees are crucial to applications such as games, finance and robotics
What are the two ways of analyzing the time efficiency of an algorithm?
- Experimental studies
- Theoretical analysis
Describe the experimental studies approach to analysis of time complexity
• Write a program implementing the algorithm
• Run the program with inputs of varying size and composition
• Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time
• Plot the results
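A minimal sketch of such an experiment in Java, assuming the algorithm under test is Arrays.sort and the input sizes are arbitrary placeholders:

    import java.util.Arrays;
    import java.util.Random;

    public class TimingExperiment {
        public static void main(String[] args) {
            Random random = new Random();
            // Run the program with inputs of varying size and composition
            for (int n = 1_000; n <= 1_000_000; n *= 10) {
                int[] input = random.ints(n).toArray();

                long start = System.currentTimeMillis();      // start the clock
                Arrays.sort(input);                           // algorithm under test
                long elapsed = System.currentTimeMillis() - start;

                // Collect the (n, elapsed) pairs and plot them afterwards
                System.out.println(n + "\t" + elapsed + " ms");
            }
        }
    }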
What are the limitations of using the experimental studies approach to analysis of time complexity?
- It is necessary to implement the algorithm, which may be difficult
- Results may not be indicative of the running time on other inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used
- Must run on many data sets to see effects of scaling
Describe the theoretical analysis approach to evaluating time efficiency
• Uses a high-level description of the algorithm instead of an implementation
• Characterizes running time as a function of the input size, n
• Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
How is the theoretical analysis of time efficiency done? Include the formula
Time efficiency is analyzed by determining the number of repetitions of the primitive (basic) operation as a function of input size:

T(n) ≈ c_op × C(n)

where:
n - input size
T(n) - running time
c_op - execution time of the basic operation
C(n) - number of times the basic operation is executed
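As a sketch of how the formula is applied (the sum method below is an illustration, not from the flashcards): summing an array with a single loop executes its basic operation, the addition, once per element, so C(n) = n and the estimated running time is T(n) ≈ c_op × n.

    // Basic operation: the addition inside the loop, executed C(n) = n times,
    // so the estimate is T(n) ≈ c_op * n for this illustrative method.
    static int sum(int[] a) {
        int total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];   // basic operation
        }
        return total;
    }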
What is a primitive/basic operation?
The operation that contributes most towards the running time of the algorithm
What is the problem with counting operations as a way of time complexity analysis?
- Hard to analyze
- Precise information may not be needed (precise details are less relevant than the order of growth)
- We may not know the times (or relative times) of the steps
- Only gives results within a range: aC(n) < T(n) < bC(n)
- We are more interested in growth rates with respect to n (i.e., Big-O)
What is the asymptotic order of growth?
How the number of basic operations grows as n increases.
The asymptotic analysis of an algorithm determines the running time in big-Oh notation:
• To perform the asymptotic analysis, we find the worst-case number of primitive operations executed as a function of the input size
• We then express this function with big-Oh notation
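As a sketch of this process (the arrayMax method is an illustration, not taken from the flashcards): count the worst-case primitive operations, then keep only the order of growth.

    // Worst-case count (roughly): 1 initialization, then for each of the
    // n - 1 remaining elements one comparison, at most one assignment,
    // and constant loop bookkeeping. The total is a linear function of n,
    // so the running time is O(n).
    static int arrayMax(int[] a) {
        int currentMax = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] > currentMax) {
                currentMax = a[i];   // executed at most n - 1 times
            }
        }
        return currentMax;
    }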
Give examples of important functions that often appear in algorithm analysis
- Constant - 1
- Logarithmic - log n
- Linear - n
- N-Log-N - n log n
- Quadratic - n^2
- Cubic - n^3
- Exponential - 2^n
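To see how differently these grow, a rough comparison at n = 1,000 (taking the logarithm to be base 2):

    constant     1
    log n        ~10
    n            1,000
    n log n      ~10,000
    n^2          1,000,000
    n^3          1,000,000,000
    2^n          ~10^301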
Differentiate between best case, average case and worst case
Best case is the function which performs the minimum number of steps on input data of n elements. Worst case is the function which performs the maximum number of steps on input data of size n. Average case is the function which performs an average number of steps on input data of n elements.
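A standard illustration (linear search on an unsorted array; the example is added here, not from the flashcards):

    // Best case:    the key is at index 0              -> 1 comparison
    // Worst case:   the key is last or absent          -> n comparisons
    // Average case: the key is equally likely anywhere -> about n/2 comparisons
    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) {
                return i;    // found
            }
        }
        return -1;           // not found
    }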
Explain any four elements that affect the running time of algorithms
- Size of the input
- The hardware environment
- The software environment
- Number of primitive operations
- Asymptotic order of growth of the algorithm