A1 Theory W1 Flashcards
What is expected of an algorithm in general? Following the definitions given in class, can it happen that two algorithms solving the same problem produce non-equivalent outputs? Why?
An algorithm is expected to be a step-by-step procedure, or a well-defined set of instructions, for solving a specific problem or performing a particular task. Main characteristics of good algorithms include: correctness, finiteness, clarity and unambiguity, well-defined input and output, feasibility, and determinism. Two algorithms solving the same problem can produce non-equivalent outputs due to: different approaches (algorithms may take different approaches to solving a problem and return different but equally valid solutions), approximation algorithms (algorithms designed to provide approximate rather than exact solutions), randomised algorithms (algorithms that use randomness as part of their process, which can lead to different outputs across runs), and floating-point arithmetic (for algorithms dealing with real numbers, small rounding differences in calculations can yield different outputs).
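As a small illustration (not from the course), two passes over the same numbers can legitimately disagree because floating-point addition is not associative:

```python
# Both sums process the same multiset of numbers, only in a different order,
# yet float rounding makes the results differ.
a = [1e16, 1.0, -1e16]
b = [1e16, -1e16, 1.0]  # same numbers, different order

print(sum(a))  # 0.0 -- the 1.0 is absorbed when added to 1e16
print(sum(b))  # 1.0 -- the large terms cancel first, so the 1.0 survives
```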
With respect to the definitions given in class, can it happen that an algorithm does not stop for some of the problem instances? Why?
Yes; this situation is known as an 'infinite loop' or 'non-termination'. It may occur due to: a bug or error (an unintentional infinite loop), unbounded iteration (the loop continues because the number of iterations is not controlled), cyclic dependencies (which prevent the algorithm from completing its computation), or recursion without a base case (recursive algorithms need base cases that define when they stop). Note that intractable problems (those with exponential or higher computational complexity) are a different issue: such algorithms do terminate, but may not finish in a reasonable amount of time as the input size grows.
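For example, a sketch of recursion with a missing base case versus a correct one (countdown_bad/countdown_good are hypothetical names for illustration):

```python
def countdown_bad(n):
    print(n)
    countdown_bad(n - 1)   # no base case: never stops
                           # (in practice Python raises RecursionError)

def countdown_good(n):
    if n <= 0:             # base case: defines when the recursion stops
        return
    print(n)
    countdown_good(n - 1)
```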
What is normally meant by the input size if the algorithm's input is an integer, a collection of elements, or a graph/tree? What do we count when calculating the time complexity of an algorithm?
Input size refers to the measure of the input's size or magnitude that influences the algorithm's execution time. Integer input: the input size is usually the number of bits required to represent the integer, e.g. an n-bit integer has input size n. Collection of elements: when the input is an array, list or set, the input size is typically the number of elements in the collection, e.g. an array of n elements has input size n. Graph/tree: the input size is usually given by the number of vertices (nodes) and edges, e.g. a graph with V vertices and E edges has input size V + E (often written O(V + E)).
To calculate the time complexity of an algorithm, count the number of basic operations performed as a function of the input size. These can be arithmetic operations, comparisons, assignments, or any other operation that contributes to the computation.
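As a minimal sketch (sum_elements is a hypothetical example, not from the course), counting the basic operations of a simple function:

```python
def sum_elements(xs):
    total = 0        # 1 assignment
    for x in xs:     # the loop body runs n times for n elements
        total += x   # 1 addition + 1 assignment per iteration
    return total     # 1 return
# Roughly 2n + 2 basic operations in total, which is O(n) in the input size n.
```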
Should algorithmic time complexity be measured with respect to something specific? If yes, what? What does it mean mathematically that f(n)=O(g(n)), in your own words?
Yes, it should be measured with respect to the input size of the algorithm. Time complexity analysis aims to understand how an algorithm's runtime increases as the input size increases, which helps assess efficiency. f(n) = O(g(n)) means that the function f(n) grows no faster than the function g(n) as n becomes large; g(n) serves as an upper bound, or asymptotic upper limit, on the growth of f(n). Mathematically, f(n) = O(g(n)) if and only if there exist positive constants c and n0 such that for all n >= n0: |f(n)| <= c * |g(n)|. Meaning f(n) is always bounded by a constant multiple of g(n).
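Worked example: f(n) = 3n + 5 is O(n). Choose c = 4 and n0 = 5; then for all n >= 5 we have 3n + 5 <= 3n + n = 4n = c * n, so the definition is satisfied.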
Which function grows faster: f(n) or g(n)? Why?
Example 1: f(n) = sqrt(n) and g(n) = log n
Example 2: f(n) = n⋅log n and g(n) = n^2
Example 3: f(n) = 2^n and g(n) = n^2
To compare growth rates, take the limit of f(n)/g(n) as n approaches infinity: if the limit is infinity, f(n) grows faster; if the limit is zero, g(n) grows faster; if the limit is finite and > 0, they grow at the same rate. Example 1: sqrt(n)/log n approaches infinity, since the logarithm grows slower than any positive power of n, so f(n) = sqrt(n) grows faster. Example 2: (n⋅log n)/n^2 = (log n)/n approaches zero, so g(n) = n^2 grows faster. Example 3: the exponential 2^n surpasses the polynomial n^2 as n becomes large, so f(n) = 2^n grows faster.
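A quick numerical sanity check of these limits (a sketch, not a proof) can be done by evaluating the ratio f(n)/g(n) for growing n:

```python
import math

# If the ratio blows up, f grows faster; if it shrinks towards 0, g grows faster.
pairs = [
    ("sqrt(n) vs log n", lambda n: math.sqrt(n),    lambda n: math.log(n)),
    ("n log n vs n^2",   lambda n: n * math.log(n), lambda n: n ** 2),
    ("2^n vs n^2",       lambda n: 2.0 ** n,        lambda n: n ** 2),
]
for name, f, g in pairs:
    print(name, [f(n) / g(n) for n in (10, 100, 1000)])
```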
Assume that given a collection of n elements, our function iterates n times and does a constant number of other elementary operations. What is its complexity? Why?
Within each iteration the function performs a constant number c of elementary operations, so in total it performs n * c operations. In Big-O terms this is O(n): the constant c is dropped, and n is the dominating factor because the growth rate is proportional to n.
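A minimal sketch of such a function (contains is a hypothetical example):

```python
def contains(xs, target):
    # One pass over the n elements, constant work per element -> O(n).
    for x in xs:
        if x == target:
            return True
    return False
```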
Assume that given a collection of n elements, our function iterates 50 times and does a constant number of other elementary operations. What is its complexity? Why?
The function performs a constant number c of elementary operations in each of its 50 iterations, i.e. 50 * c operations in total. Since the function performs a fixed number of operations regardless of the input size, the whole expression is a constant and the complexity is O(1): the running time does not depend on the input size.
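For instance (a hypothetical sketch, assuming a non-empty collection):

```python
def sum_fifty(xs):
    # Exactly 50 iterations, however large xs grows -> 50 * c operations = O(1).
    total = 0
    for i in range(50):
        total += xs[i % len(xs)]  # index wraps so any non-empty xs works
    return total
```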
Assume that given a collection of n elements, our function iterates n^2 times and does a constant number of other elementary operations. What is its complexity? Why?
The function iterates n^2 times and performs a constant number c of other elementary operations per iteration, which can be expressed as n^2 * c. As the input size grows, the function performs on the order of n^2 operations, giving a quadratic time complexity of O(n^2).
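A minimal sketch of a function with this shape (has_duplicate is a hypothetical example):

```python
def has_duplicate(xs):
    # For each of the n elements, scan all n elements: n * n iterations -> O(n^2).
    n = len(xs)
    for i in range(n):
        for j in range(n):
            if i != j and xs[i] == xs[j]:
                return True
    return False
```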
Assume that given a number n, our function iterates n times. What is its complexity with respect to the input size assuming all the other operations are constant-time? Why?
The function iterates n times, and all other operations are constant-time, giving a total of n * c operations; this is linear, O(n), in the value of n. Note, however, that by the earlier definition the input size of an integer is its number of bits b, and a b-bit number can be as large as 2^b, so measured against the input size the running time is actually exponential, O(2^b).
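A sketch of such a function (count_to is a hypothetical name):

```python
def count_to(n):
    # Runs n times: linear in the value of n, but exponential in the number
    # of bits b needed to write n down, since n can be as large as 2^b.
    steps = 0
    for _ in range(n):
        steps += 1
    return steps
```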
How is the constant-time complexity denoted?
O(1); constant-time complexity means the algorithm's run-time does not depend on the input size.
What is the difference between worst-case and best-case complexity? Explain in your own words.
Worst-case complexity represents the maximum time an algorithm takes, over the most challenging input of a given size. Best-case complexity represents the minimum time it takes, over the most favourable input. Practical analysis focuses mainly on worst-case complexity, as it provides a guarantee on how the algorithm will behave for any input. Still, it is useful to consider both cases to understand the algorithm's behaviour under all conditions.
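Linear search illustrates the gap between the two cases (a minimal sketch):

```python
def linear_search(xs, target):
    # Best case: target is the first element -> 1 comparison, O(1).
    # Worst case: target is absent -> n comparisons, O(n).
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1
```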