Efficiency and Complexity Flashcards
How does selection sort work?
Repeatedly find the smallest item in the unsorted part of the list and swap it to the front of that part, so the sorted part grows by one item per pass
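A minimal Python sketch of this idea (the cards don't fix a language; the names are illustrative):

    def selection_sort(a):
        """Sort the list a in place by repeated selection of the minimum."""
        n = len(a)
        for i in range(n - 1):
            # Find the index of the smallest item in the unsorted part a[i:].
            smallest = i
            for j in range(i + 1, n):
                if a[j] < a[smallest]:
                    smallest = j
            # Swap it to the front of the unsorted part.
            a[i], a[smallest] = a[smallest], a[i]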
How do we analyse how “fast” an algorithm is?
The efficiency of an implementation of an algorithm depends on the language used, the compiler used, and the processor executing the code
Therefore we analyse the ALGORITHM instead, expecting that our analysis will be machine-independent
Why is it not possible to formally define the time taken by an algorithm?
We don’t have a formal definition of pseudocode. We work around this by hiding irrelevant details behind simplifying assumptions
What fundamental assumptions do we make when analysing program efficiency?
Any basic algorithmic operation takes at most “c” units of time
“output A” takes c*len(A) units of time
The cost of following the control flow is 0
Input numbers will always be small enough to not cause overflow
How do we express the worst-case upper bound on an algorithm’s run-time?
As a function f(n) where n is the size of the input
The algorithm terminates in at most f(n) units of time
Fundamental assumption: The size of an input is the number of basic objects it contains
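As an illustration of the cost assumptions and of f(n) (not one of the cards), consider a linear search:

    def linear_search(a, target):
        for x in a:          # the body executes at most len(a) times
            if x == target:  # one comparison: at most c units
                return True  # a basic operation: at most c units
        return False         # at most c units

Each comparison costs at most c units and control flow is free, so on an input of size n the algorithm terminates in at most f(n) = c*n + c units, where the final c covers returning the result.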
If f and g are functions, what does it mean if f is O(g)?
There are constants C and k such that |f(x)| <= C|g(x)| when x >= k
This essentially means that after a certain point, f(x) never goes above Cg(x)
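A worked example (not one of the cards): f(x) = 3x^2 + 5x is O(x^2), because for x >= 5 we have 5x <= x^2, and so 3x^2 + 5x <= 4x^2; take C = 4 and k = 5.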
What is the reasoning behind the use of Big-O notation?
The bound “x >= k” caters for the case where one algorithm is faster than another, but only up to a certain input size
The constant C lets us disregard the constant c from our cost assumptions, and treat all functions that differ only by a multiplicative constant as the same
When is worst-case asymptotic time complexity not a good indication of how practical an algorithm is?
When the hidden constants are large enough to have a significant impact
When the average-case time complexity is much better than the worst-case time complexity (e.g. quicksort)
Some algorithms are more memory-efficient
Some algorithms are “online” while others aren’t
What are efficiently solvable/tractable problems (P)?
Problems that can be solved by an algorithm of time complexity O(n^k) for some constant k
Which well-known problems are not known to be tractable?
The travelling salesman problem (TSP) and graph colouring: no polynomial-time algorithm is known for either
What are NP problems?
A problem is in NP if, given an instance together with a proposed witness, we can check in polynomial time whether the witness shows that the instance is a yes-instance. P problems are a subset of NP problems.
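For instance, the decision TSP (“is there a tour of total cost at most k?”) is in NP because a proposed tour serves as a witness that can be checked quickly. A sketch in Python (the input representation is an assumption):

    def check_tsp_witness(dist, tour, k):
        """Check in polynomial time whether the witness `tour` shows that
        (dist, k) is a yes-instance of the decision TSP."""
        n = len(dist)
        # The tour must visit each of the n cities exactly once.
        if sorted(tour) != list(range(n)):
            return False
        # Total cost of the tour, including the leg back to the start.
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        return cost <= k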
What is a polynomial-time reduction algorithm?
An algorithm that, in polynomial time, takes an instance I of a decision problem X and converts it into an instance J of a second decision problem Y, such that I is a yes-instance of X if and only if J is a yes-instance of Y. We then say that Y is a polynomial-time reduction of X.
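As a concrete illustration (not one of the cards), the classic reduction from the Hamiltonian-cycle problem to the decision TSP can be sketched as:

    def ham_cycle_to_tsp(adj):
        """Convert a graph (vertex -> set of neighbours, vertices 0..n-1)
        into a decision-TSP instance (distance matrix, bound). The graph
        has a Hamiltonian cycle iff there is a tour of cost <= n."""
        n = len(adj)
        # Real edges cost 1, missing edges cost 2, so a tour of n legs
        # costs exactly n iff it uses only real edges.
        dist = [[1 if j in adj[i] else 2 for j in range(n)] for i in range(n)]
        return dist, n

The construction runs in polynomial time (it fills an n x n matrix), so the decision TSP is a polynomial-time reduction of the Hamiltonian-cycle problem.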
What is an NP-complete problem?
An NP problem Y which is a polynomial-time reduction of every NP problem X (e.g. the decision version of the TSP)
How could we prove that P = NP?
By finding a polynomial-time algorithm for an NP-complete problem: since every NP problem reduces to it in polynomial time, every NP problem would then be solvable in polynomial time
What is an NP-hard problem?
A search or optimisation problem whose solution in polynomial time would imply that P = NP (e.g. TSP, colouring, longest cycle in a graph)
Example: if we could solve the TSP in polynomial time, then we could solve its decision version in polynomial time too. Since the decision version of the TSP is NP-complete, that would mean that P = NP