Big O Flashcards
Algorithm
A specific process, set of rules, or solution to be followed when problem-solving
Efficient
A quality of code that means it is fast, and/or it does not take up much memory
Time Complexity
A measurement of how the amount of time an algorithm takes to run changes as the size of the input changes
Space Complexity
A measurement of how much memory an algorithm uses as the size of the input changes
How could you Compare Efficiency Between Algorithms?
Given two algorithms, we can consider which algorithm is more efficient by comparing the time and space complexity.
We can consider time and space complexity by worst and average case, too. Considering the worst-case scenario allows us to know how the program will do in the most demanding situations. If an algorithm is to be used many times on many different instances, it may be more important to know the average execution time.
An efficient algorithm is one that runs as fast as possible and requires as little computer memory as possible. We often have to make a trade-off between these two goals. For any given algorithm, do we choose to use more memory to make our program faster, or settle for a slower program that uses less memory?
Big O
A notation to describe how the run time or space requirements grow as the input size grows. Big O is a way to express how either the time or space complexity of an algorithm changes as it gets more data sent to it.
How long will it take when you give it more data? And how much memory will it take up when you give it more data?
Big O notation and the common complexity categories: constant
O(1) The algorithm will take the same amount of time to execute regardless of the size of input (THIS IS IDEAL)
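A minimal sketch of a constant-time operation (the function name is illustrative): indexing a Python list takes the same time no matter how long the list is.

```python
def get_first(items):
    # O(1): accessing one element by index does not depend
    # on the size of the list.
    return items[0]
```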
Big O notation and the common complexity categories: logarithmic
O(log n) The algorithm will grow in complexity proportional to the base log of the input size. Logarithmic algorithms increase very slowly as the size of the input increases. They usually involve an algorithm which excludes 1/2 of the input with each iteration of a loop. (THIS IS VERY GOOD)
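A minimal sketch of a logarithmic algorithm: binary search, which excludes half of the remaining input on each loop iteration (function and variable names are illustrative).

```python
def binary_search(sorted_items, target):
    # O(log n): each pass discards half of the remaining
    # range, so the loop runs about log2(n) times.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found
```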
Big O notation and the common complexity categories: linear
O(n) The algorithm will grow in time or space directly proportional to the input size. The complexity increases at the same rate that the input increases. So if you double the size of the input list, the algorithm will take twice as long.
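A minimal sketch of a linear algorithm (function name is illustrative): in the worst case, every element is visited once, so doubling the list doubles the work.

```python
def contains(items, target):
    # O(n): worst case, the loop touches all n elements.
    for item in items:
        if item == target:
            return True
    return False
```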
Big O notation and the common complexity categories: loglinear
O(n log n) A term used to describe an algorithm which will grow in time or space complexity proportional to n log n of the input size. “n log n” means that the input size is multiplied by the base-2 log of the input size. The size of the input times the log of the size of the input… this is not as good as linear but pretty close.
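A classic loglinear example is merge sort, sketched below: the list is halved about log2(n) times, and each level of halving does O(n) work merging.

```python
def merge_sort(items):
    # O(n log n): log2(n) levels of splitting,
    # O(n) merge work per level.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```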
Big O notation and the common complexity categories: quadratic
O(n^2) The algorithm will have a runtime or memory usage proportional to the size of the input squared. This often involves 2 nested loops. The amount of time or space it will use increases with the square of the size of the input. So that gets slow very fast.
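A minimal sketch of a quadratic algorithm (function name is illustrative): two nested loops over the same list mean roughly n × n comparisons in the worst case.

```python
def has_duplicate(items):
    # O(n^2): for each element, compare against every
    # later element.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```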
Big O notation and the common complexity categories: exponential
O(2^n) The algorithm’s complexity doubles each time the input size increases by one.
So even a small increase in the input size makes the algorithm dramatically slower.
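The textbook exponential example is the naive recursive Fibonacci sketched below: each call branches into two more calls, so the number of calls roughly doubles each time n grows by one.

```python
def fib(n):
    # O(2^n): fib(n) spawns fib(n-1) and fib(n-2),
    # so the call tree doubles as n increases.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```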
Time Complexity: Generic Steps
(1) Read through the code, and identify all lists that have a variable size.
Typically, with one list, we say it has n number of elements.
(2) Identify all of the operations in the algorithm
(3) Recognize which operations are related to the lists of size n
Typically, the operations inside of a for loop get multiplied by n
This relationship will change depending on the kind of loop and the logic in the loop. Consider cases where the loop breaks.
(4) Create an equation that represents how many operations there are
This equation can use n as a variable
(5) Drop the constants and find the dominant order (notes below)
(6) Match this Big O to the most relevant complexity
Steps For Time Complexity
- Identify any data structures (like a list) that can have variable size (increase/decrease in size).
- Then identify all the operations (steps, functions, etc.) in the algorithm.
- Then determine which operations depend on the size of the data structure (meaning: how many operations will need to execute an increased number of times as the size of the data structure of size n increases).
- Create an equation that represents how many operations get performed, using n as a variable.
i.e. O(n * 2 + 1) = O(2n + 1)
- Then drop all constants and find the dominant term.
i.e. O(n)
- Match this to the most relevant complexity.
i.e. O(n)
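The steps above can be sketched on a small illustrative function: count the operations, write them as an equation in n, then drop constants to find the dominant term.

```python
def double_and_sum(items):
    # Per element: one multiply and one add = 2 operations.
    # Plus 1 setup operation: O(n * 2 + 1) = O(2n + 1).
    # Dropping constants leaves the dominant term: O(n).
    total = 0               # 1 operation
    for item in items:      # loop body runs n times
        doubled = item * 2  # n operations
        total += doubled    # n operations
    return total
```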
logarithm
a quantity representing the power to which a fixed number (the base) must be raised to produce a given number.
A logarithm answers the question “How many of this number do we multiply to get that number?”
Example How many 2s must we multiply to get 8?
Answer: 2 × 2 × 2 = 8, so we had to multiply 3 of the 2s to get 8
We say the logarithm of 8 with base 2 is 3
2 × 2 × 2 = 8, which is also written log2(8) = 3
For example, the base ten logarithm of 100 is 2, because ten raised to the power of two is 100: log10(100) = 2.
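Both logarithm examples above can be checked with Python's standard math module:

```python
import math

# A logarithm answers: to what power must the base be
# raised to produce the given number?
print(math.log2(8))     # 2 ** 3 == 8
print(math.log10(100))  # 10 ** 2 == 100
```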