Topic 3 Flashcards
What does the phrase 'analysis of algorithms' cover?
It covers the consideration of algorithms from the point of view of how 'good' they are (with respect to some resource, such as time or memory).
Why do we focus on algorithms rather than the implementations of algorithms?
We work only with algorithms, and not their implementations, in the strong expectation that our analysis will be machine-independent but still closely related to and reflective of the time taken by any implementation of the algorithm on any given processor.
What is one of the fundamental assumptions when defining the time taken by any algorithm described using our pseudo-code?
That variables can be manipulated in a small, fixed number of units of time, as can basic operations involving variables. So, for example, an assignment of some basic value to a variable, or a comparison of two variables, can be done in at most c units of time, where c is some fixed integer.
What is the assumption about the time taken for any basic algorithmic operation?
Any basic algorithmic operation can be undertaken in at most c units of time.
What determines whether an algorithmic operation is basic?
Any operation that can be undertaken by a processor in a small number of units of time (at most c) is deemed to be basic. This is determined by what the algorithmic operation translates to when compiled and executed on a processor.
What assumption is made about the size of the n numbers input to Selection-sort?
The n numbers involved in the input to Selection-sort are all sufficiently small that they can be dealt with by our processor in the fetch-decode-execute cycle. We always assume that our input objects are of manageable size.
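Since Selection-sort is referenced throughout these cards, a minimal sketch in Python may help fix ideas. The function name and the in-place style are illustrative assumptions, not taken from the course's pseudo-code.

    def selection_sort(a):
        """Sort the list a in place: on each pass, select the minimum
        of the unsorted suffix and swap it into position."""
        n = len(a)
        for i in range(n - 1):
            min_idx = i                   # index of smallest element seen so far
            for j in range(i + 1, n):     # scan the unsorted suffix a[i+1:]
                if a[j] < a[min_idx]:
                    min_idx = j
            a[i], a[min_idx] = a[min_idx], a[i]   # now a[:i+1] is sorted
        return a

For example, selection_sort([5, 2, 4, 1]) returns [1, 2, 4, 5]. Each comparison, assignment and swap here is a 'basic operation' in the sense above, costing at most c units of time.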
What is a fundamental assumption about worst-case upper bounds and input size?
The size of an input is the number of basic objects it contains; in general, the time taken by an algorithm is strongly influenced by the size of the input and increases as that size increases.
Why are there only finitely many inputs of any size n to Selection-sort?
We only ever have a finite number of inputs of any size n. Although there are infinitely many potential input numbers (forming the elements of an input list to Selection-sort), at some point these numbers become so large that they violate our assumption that any such number can be 'handled' by the algorithm (and the underlying computer on which it is implemented) within our small number c of time units.
What are two different resources used by an algorithm that we might measure?
The time taken by an algorithm and the amount of memory used by an algorithm.
What is the worst-case time complexity of an algorithm? Why is it expressed as a function on the natural numbers?
The worst-case upper bound is a function that lies above the algorithm's running-time function, even on the inputs that maximise the algorithm's running time. It is expressed as a function on the natural numbers because we group inputs together according to their size, which is defined as a natural number reflecting the natural size of the input, and then express the worst-case upper bound as a function f of these natural numbers, so that any input of size n is completed in at most f(n) units of time.
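Phrased as a formula (a standard formulation; the notation t(I) for the algorithm's running time on input I is mine, not from the cards):

    T(n) \;=\; \max\{\, t(I) : I \text{ is an input of size } n \,\}

and f is a worst-case upper bound precisely when T(n) \le f(n) for every n.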
Why can Selection-sort not have a constant worst-case upper bound of b units of time?
Ideally we would like the worst-case upper bound to be a constant b units of time, but this is impossible: a list of at least (b/c) + 1 numbers takes at least c((b/c) + 1) = b + c units of time, which is greater than b.
What is the time complexity?
Time complexity is the amount of time taken by an algorithm to run, expressed as a function of the length of the input; it measures the time taken to execute each statement of code in the algorithm. The worst-case upper bound function is a time complexity.
What is the constant c meant to represent?
The number of units of processor time needed to complete a basic instruction. Each processor will have a different constant c; it is some small, fixed integer.
What is the cost of following the control flow?
The cost is zero: following the control flow of the algorithm is assumed to take no time.
How has the worst-case upper bound reduced the problem?
The problem is now reduced to counting how many units of time the algorithm takes in the worst case when we restrict all inputs to be of size n.
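As a worked illustration for Selection-sort (my own arithmetic, counting each comparison together with its bookkeeping as a basic operation of cost at most c): on an input of size n the inner loop performs

    \sum_{i=1}^{n-1} i \;=\; \frac{n(n-1)}{2}

comparisons, so the worst case is covered by a bound of the shape f(n) = c \cdot n(n-1)/2, up to lower-order terms; the exact constant depends on how finely the operations are counted.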
What is the main use of the time complexity?
The main use of the time complexity is to compare algorithms with one another, which can be used to decide which algorithm is best.
Why does the constant c become irrelevant when comparing two algorithms?
Since both time complexities are expressed in terms of c, the constant cancels out when the algorithms are compared, so the final decision is independent of c.
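For instance, with illustrative bounds f_A(n) = c n^2 for algorithm A and f_B(n) = c n^3 for algorithm B:

    \frac{f_B(n)}{f_A(n)} \;=\; \frac{c\,n^3}{c\,n^2} \;=\; n

The ratio contains no c, so A is the better algorithm for every n \ge 2, whatever the processor's constant happens to be.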
What is the judgement on which algorithm to use based on?
The dominant term in the time complexity, i.e. the term with the largest power. This is because a larger power grows much faster than lower powers, so when n gets large the algorithm whose dominant term has the smaller power will be asymptotically better.
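A quick numeric check (my own example): compare n^3 against 100 n^2. The cubic is smaller while n < 100, but

    n = 1000: \quad n^3 = 10^9 \;>\; 100\,n^2 = 10^8

so once n is large, the algorithm with the smaller dominant power wins despite its larger constant.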
What is the definition of big-O notation?
For two functions f, g : N → N, we write f = O(g) if there exist some n0 ∈ N and some positive rational k ∈ Q such that f(n) ≤ k·g(n) whenever n ≥ n0.
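A worked instance of the definition (my own example): take f(n) = 3n^2 + 5n + 2 and g(n) = n^2. The witnesses n0 = 1 and k = 10 show f = O(g), since for all n \ge 1:

    f(n) \;=\; 3n^2 + 5n + 2 \;\le\; 3n^2 + 5n^2 + 2n^2 \;=\; 10\,n^2 \;=\; k \, g(n)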