Lecture 9: Computational Complexity Flashcards
What is the Big-O notation?
The Big-O notation is a way of describing the efficiency of an algorithm by comparing its complexity to that of other algorithms.
To be precise, it describes mathematically how the time it takes to run a function grows as the size of the input grows.
What is meant by ‘time complexity’?
Time complexity is a way of showing how the runtime of a function increases as the size of the input increases.
Look at the following example:
O(1) = Constant
O(log n) = Logarithmic
O(n) = Linear
O(n!) = Factorial
What is the Big-O? What is the time complexity?
The big-O is the formula; the left side of the table.
The time complexity is the description; the right side of the table.
When trying to find the big-O in a formula, what do you need to do?
- Take out the constants
- Take out the coefficients
–> Find the fastest-growing term
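The steps above can be checked numerically. As a sketch, take a hypothetical cost function f(n) = 4n^2 + 3n + 10 and watch the fastest-growing term dominate:

```python
# Hypothetical cost function: suppose an algorithm performs
# f(n) = 4*n**2 + 3*n + 10 operations for an input of size n.
def f(n):
    return 4 * n**2 + 3 * n + 10

# For large n, the n**2 term dominates: drop the constant 10,
# drop the lower-order term 3*n, drop the coefficient 4 -> O(n^2).
for n in [10, 1000, 100000]:
    print(n, f(n) / n**2)  # the ratio approaches the coefficient 4
```

The ratio f(n)/n^2 settles near 4 as n grows, which is exactly why only the fastest-growing term matters for the big-O.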
Do constants and coefficients matter to the big-O? If yes/no, explain why.
No, because they make too small of a difference for large inputs; compared to the fastest-growing term they are insignificant.
Which are true?
- The MORE operations, the FASTER the algorithm
- The FEWER operations, the FASTER the algorithm
- The MORE operations, the SLOWER the algorithm
- The FEWER operations, the SLOWER the algorithm
Options 2 and 3
the fewer operations, the faster the algorithm
and
the more operations, the slower the algorithm (it takes longer due to the number of tasks it has to complete)
What is O(1)? What is the time complexity?
Constant time, because the runtime stays the same no matter how large the input is.
What happens if you do an O(1) operation 8 times?
The time complexity remains constant. It remains O(1).
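As a minimal sketch (hypothetical function names), indexing a Python list is a constant-time operation, and repeating it 8 times drops the constant 8, so the result is still O(1):

```python
def get_first(items):
    # Indexing a Python list is O(1): it takes the same time
    # whether the list has 10 or 10 million items.
    return items[0]

def get_first_eight_times(items):
    # Doing an O(1) operation 8 times is O(8) = O(1):
    # the constant 8 is dropped, so the complexity stays constant.
    for _ in range(8):
        value = items[0]
    return value
```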
What does ‘n’ mean in the big-O notation formulas?
‘n’ is the size of the input
Explain what ‘non-polynomial’ time is.
- number of operations grows very fast
- the input size appears in the exponent (e.g. 2^n)
- contains problems that are very hard to solve
–> slow
Explain what ‘polynomial time’ is.
- preferred in algorithms
- easy to solve; not a lot of operations required
- O(n^c), where c is some constant –> c should be as low as possible
- –> c is a plain non-negative integer: no divisions, negative exponents, decimal exponents, etc.
What does the NP vs. P debate entail?
If NP = P, this would imply that all hard problems (NP) have relatively easy solutions (P). This would be a massive breakthrough in Computer Science, as it would essentially solve all algorithmic challenges and would allow computers to solve almost any task.
However, this remains unproven to this day –> it is widely believed that NP ≠ P, i.e. that NP (complex) problems cannot be solved in P time.
Why is computational complexity important to security?
You would want encryption to be in P time (easy and quick) and breaking the encryption without the key to be in NP time (hard to solve). As such, big-O is an easy way to talk about the efficiency of algorithms used in security.
What sorting algorithms did we discuss in class?
- Bogo Sort
- Insertion Sort
- Selection Sort
- Bubble Sort
- Merge Sort
What are sorting algorithms?
algorithms that put elements of a list in a specific order; often ascending
Is the bogo sort an example of NP or P? Why?
Non-polynomial time, because this algorithm keeps shuffling until the correct order appears by chance. This can require a huge (potentially unbounded) number of operations. Its average runtime grows faster than any polynomial, so it is highly inefficient and very slow.
Explain how the Bogo Sort works. What is its average runtime in Big-O?
The Bogo Sort keeps shuffling the values until they happen to be in ascending order. Its average runtime is O(n!), because on average a huge number of shuffles is needed before the correct ‘solution’ is found.
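A minimal sketch of the shuffle-until-sorted idea (helper names are my own):

```python
import random

def is_sorted(values):
    # Check ascending order in a single O(n) pass.
    return all(values[i] <= values[i + 1] for i in range(len(values) - 1))

def bogo_sort(values):
    # Keep shuffling until the list happens to be in ascending order.
    # There are n! possible orderings, so on average on the order of
    # n! shuffles are needed -- this is why bogo sort is non-polynomial.
    while not is_sorted(values):
        random.shuffle(values)
    return values
```

Even a list of 10 elements has 10! = 3,628,800 orderings, so this is only safe to run on tiny inputs.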
How does the Insertion Sort work?
It is a bit similar to how you sort cards in your hand. Values from the unsorted part are picked and placed at the correct position in the sorted part, in ascending order. The current value (picked from the unsorted part) is checked against its predecessors in the sorted part to find out where it belongs.
What is the big-O of the Insertion Sort? Why?
O(n^2), because it has two nested for loops.
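The card-sorting idea and the two nested loops can be sketched in Python:

```python
def insertion_sort(values):
    # Outer loop: pick each value from the unsorted part.
    for i in range(1, len(values)):
        current = values[i]
        j = i - 1
        # Inner loop: shift larger sorted predecessors one position
        # to the right until the correct slot for `current` is found.
        while j >= 0 and values[j] > current:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = current
    return values
```

In the worst case (a reversed list) the inner loop runs i times for every outer step, giving the O(n^2) from the flashcard.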
How does the Selection Sort work?
This algorithm looks for the smallest value in the unsorted list and puts it first in the sorted list. Then, it finds the next smallest value and puts it second in the sorted list, etc. This process repeats until everything is in the sorted list in ascending order.
What is the Big-O notation of the Selection Sort algorithm? Why?
O(n^2), because it has two nested for loops.
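A sketch of the find-the-minimum-and-swap process, showing the two nested loops behind the O(n^2):

```python
def selection_sort(values):
    # Outer loop: the boundary between the sorted front and the
    # unsorted rest of the list.
    for i in range(len(values)):
        smallest = i
        # Inner loop: find the smallest value in the unsorted part.
        for j in range(i + 1, len(values)):
            if values[j] < values[smallest]:
                smallest = j
        # Move it to the front of the unsorted part.
        values[i], values[smallest] = values[smallest], values[i]
    return values
```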
How does the Bubble Sort work?
On each step, two adjacent values are compared to each other and put in order, so the biggest value is shifted to the right. This is repeated until the largest value has reached the end. These passes are repeated until no pass is needed anymore and the list is in ascending order.
What is the Big-O notation of the Bubble Sort algorithm? Why?
O(n^2), because in the worst case all values are compared against each other over repeated passes. However, its best case is O(n): thanks to the partial sorting on each pass, an (almost) sorted list needs only one pass to confirm no swaps are required. On average it is still a lengthy and thus slow process.
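A sketch of the passes described above, including the early exit that gives the good behaviour on nearly sorted input:

```python
def bubble_sort(values):
    n = len(values)
    for i in range(n - 1):
        swapped = False
        # One pass: compare adjacent pairs; the largest remaining
        # value "bubbles" to position n - 1 - i.
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
                swapped = True
        # If a full pass made no swaps, the list is already sorted:
        # best case O(n); the worst case stays O(n^2).
        if not swapped:
            break
    return values
```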
How does the Merge Sort algorithm work?
This algorithm divides the array into two halves, repeatedly, until the size becomes 1. Then, it starts merging the arrays back together until the complete array is rearranged in ascending order.
It is space inefficient, as extra memory is needed to hold the split lists apart. However, it is time efficient.
What is the Big-O notation of the Merge Sort algorithm?
Big-O = O(n log n). It is faster than O(n^2).
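The split-then-merge process can be sketched recursively; the O(log n) levels of splitting times O(n) merge work per level give the O(n log n):

```python
def merge_sort(values):
    # Split the array into halves until each piece has size 1.
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Merge the two sorted halves back together in ascending order.
    # Note the extra `merged` list: this is the space inefficiency
    # mentioned above.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```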