Algorithms in General and Time Complexity Flashcards

1
Q

What is an algorithm in the context of computer science?

A

An algorithm is a precise and unambiguous set of instructions for performing calculations to solve a specific problem for every possible input instance. It is an abstract mathematical concept, independent of any programming language or machine implementation.

2
Q

Define the terms “problem” and “instance” in algorithm design.

A

A “problem” is a general question to be solved, defined by a set of possible inputs and the desired output for each input. An “instance” is a specific input from the set of possible inputs for the problem.

3
Q

Why is an algorithm not the same as a computer program?

A

An algorithm is an abstract concept outlining the steps to solve a problem, while a program is a concrete implementation of an algorithm in a specific programming language, complete with syntax and language-specific structures.

4
Q

What is the importance of distinguishing between algorithms and programs?

A

Distinguishing between algorithms and programs allows for focusing on the problem-solving process without being distracted by programming details. It enables the design of efficient algorithms that can be implemented in any programming language.

5
Q

Why should algorithm development come before programming?

A

Developing the algorithm first ensures a correct and efficient solution to the problem. Programming without a well-designed algorithm may lead to inefficient or incorrect solutions, as implementation details can distract from underlying problem-solving.

6
Q

What are elementary operations in time complexity analysis?

A

Elementary operations are the basic computations that an algorithm performs, considered to take constant time. Examples include arithmetic operations on single digits, comparisons, and data access operations.

7
Q

How is the size of an instance defined in algorithm analysis?

A

The size of an instance, denoted as n, is typically defined as the amount of input data, such as the number of elements in a list, the number of vertices in a graph, or the number of digits in a number.

8
Q

What is time complexity and why is it crucial in algorithm analysis?

A

Time complexity measures the amount of computational time an algorithm takes relative to the size of the input. It is crucial because it helps predict performance and scalability, guiding the selection of the most efficient algorithm for a problem.

9
Q

What is Big-O notation?

A

Big-O notation describes an upper bound on the time complexity of an algorithm, characterizing its growth rate by focusing on the dominant term and ignoring constant factors and lower-order terms.

10
Q

How do you formally define that a function t(n) is O(f(n))?

A

A function t(n) is O(f(n)) if there exists a positive constant c and a value n₀ such that for all n ≥ n₀, t(n) ≤ c * f(n). This means t(n) does not grow faster than f(n) up to a constant multiple.
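
The definition above can be checked numerically. A minimal Python sketch (the deck itself contains no code; t, f, c, and n₀ here are hypothetical examples): it shows that t(n) = 3n² + 5n is O(n²), witnessed by c = 4 and n₀ = 5.

```python
# Hypothetical example: t(n) = 3n^2 + 5n is O(n^2),
# witnessed by the constants c = 4 and n0 = 5.

def t(n):
    return 3 * n**2 + 5 * n

def f(n):
    return n**2

c, n0 = 4, 5
# The defining inequality t(n) <= c * f(n) must hold for all n >= n0;
# spot-check it over a large range.
ok = all(t(n) <= c * f(n) for n in range(n0, 100_000))
print(ok)  # True
```

Algebraically, 3n² + 5n ≤ 4n² is equivalent to 5n ≤ n², which holds exactly when n ≥ 5.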

11
Q

Why are constants and lower-order terms ignored in Big-O notation?

A

Constants and lower-order terms become negligible for large input sizes compared to the dominant term, so Big-O notation focuses on the most significant factor affecting growth rate.

12
Q

Explain the difference in time complexity between adding and multiplying two n-digit numbers.

A

Adding two n-digit numbers has a time complexity of O(n) because each digit is processed once. Multiplying two n-digit numbers using the standard algorithm has a time complexity of O(n²) because each digit of one number is multiplied by each digit of the other.
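The digit-by-digit picture can be made concrete. A sketch of the grade-school algorithm (not part of the deck itself), instrumented to count single-digit multiplications: multiplying an n-digit by an m-digit number costs exactly n · m of them, i.e. O(n²) when m = n.

```python
# Grade-school multiplication over digit lists, counting the
# elementary (single-digit) multiplications performed.

def schoolbook_multiply(a_digits, b_digits):
    """Multiply numbers given as digit lists, least significant digit first.
    Returns (product_digits, number_of_single_digit_multiplications)."""
    n, m = len(a_digits), len(b_digits)
    result = [0] * (n + m)
    ops = 0
    for i, da in enumerate(a_digits):
        for j, db in enumerate(b_digits):
            ops += 1                      # one elementary multiplication
            result[i + j] += da * db
    carry = 0                             # normalize: propagate carries
    for k in range(n + m):
        total = result[k] + carry
        result[k] = total % 10
        carry = total // 10
    while len(result) > 1 and result[-1] == 0:
        result.pop()                      # drop leading zeros
    return result, ops

def digits(x):
    return [int(d) for d in str(x)][::-1]

product, ops = schoolbook_multiply(digits(1234), digits(5678))
print(int("".join(map(str, product[::-1]))), ops)  # 7006652 16
```

The carry-propagation pass adds only O(n + m) work, so the n · m digit multiplications dominate.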

13
Q

Is it possible to multiply two n-digit numbers faster than O(n²)?

A

Yes. Karatsuba's divide-and-conquer algorithm multiplies two n-digit numbers in O(n^log₂3) ≈ O(n^1.585) time, and FFT-based methods such as the Schönhage–Strassen algorithm achieve O(n log n log log n).
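
Karatsuba's trick can be sketched in a few lines (a simplified illustration, not a production implementation): the naive split needs four half-size products, but one of them can be recovered from a single extra product of sums, giving T(n) = 3T(n/2) + O(n) = O(n^log₂3).

```python
# Karatsuba sketch: three half-size products instead of four.

def karatsuba(x, y):
    if x < 10 or y < 10:                 # base case: a single-digit factor
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    hi = karatsuba(x_hi, y_hi)                            # product 1
    lo = karatsuba(x_lo, y_lo)                            # product 2
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo   # product 3
    return hi * base**2 + mid * base + lo

print(karatsuba(1234, 5678) == 1234 * 5678)  # True
```

The identity behind "product 3" is (x_hi + x_lo)(y_hi + y_lo) = hi + mid + lo, so mid comes for free once hi and lo are known.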

14
Q

What is the Interval Scheduling problem?

A

The Interval Scheduling problem involves selecting the maximum number of non-overlapping intervals from a given set of intervals on the real axis.
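
A sketch of the standard greedy solution (assuming half-open compatibility, i.e. touching endpoints are allowed): sort by finishing time and repeatedly take the earliest-finishing compatible interval, for O(n log n) overall because of the sort.

```python
# Greedy interval scheduling: earliest finishing time first.

def max_non_overlapping(intervals):
    """intervals: (start, end) pairs; returns a maximum-size subset of
    pairwise non-overlapping intervals (touching endpoints allowed)."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:            # compatible with all chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_non_overlapping([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```

The interval that finishes earliest leaves the most room for the remaining choices, which is the exchange argument behind the greedy proof.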

15
Q

How does the Big-O notation handle the sum of two functions?

A

The Big-O of a sum is governed by the faster-growing function: for non-negative functions, O(f(n) + g(n)) = O(max(f(n), g(n))).

16
Q

What does it mean when we say an algorithm has polynomial time complexity?

A

An algorithm has polynomial time complexity if its running time can be expressed as O(n^k) for some constant k, meaning it scales reasonably with input size and is considered efficient.

17
Q

Can you give an example of an algorithm with logarithmic time complexity?

A

Binary search is an example of an algorithm with logarithmic time complexity, specifically O(log n), as it divides the search interval in half with each step.
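
A minimal sketch (assuming a sorted list, since binary search requires sorted input): each comparison halves the remaining range, so at most about log₂(n) + 1 iterations are needed.

```python
# Binary search: halve the candidate range on every comparison.

def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1                 # not present

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```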

18
Q

How does Big-O notation compare the growth rates of n^c and a^n for constants c > 0 and a > 1?

A

Exponential functions a^n grow faster than any polynomial n^c, so n^c = o(a^n), meaning n^c grows significantly slower than a^n.

19
Q

What is the significance of logarithmic functions in algorithm analysis?

A

Logarithmic functions often appear in time complexities of algorithms that divide the problem size by a constant factor at each step, such as divide-and-conquer algorithms.

20
Q

Is it correct to say O(n log n) = O(n) because log n grows slowly compared to n?

A

No, this is incorrect. While log n grows slowly, it is not negligible, and O(n log n) represents a higher growth rate than O(n).

21
Q

Does the base of the logarithm matter in Big-O notation?

A

No, the base of the logarithm does not affect Big-O notation because logarithms with different bases differ by a constant factor, which is ignored in Big-O analysis.

22
Q

What is an example where worst-case time complexity is too pessimistic?

A

Quicksort has a worst-case time complexity of O(n²) but an average-case time complexity of O(n log n). In practice, the average-case is more representative, making worst-case analysis too pessimistic for this algorithm.
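
The gap between the two cases can be measured. A simplified (non-in-place) quicksort sketch with a first-element pivot, counting comparisons: already-sorted input, a worst case for this pivot rule, costs n(n - 1)/2 comparisons, while shuffled input costs roughly n log₂ n.

```python
# Count quicksort comparisons under a first-element pivot rule.

import random

def quicksort_comparisons(a):
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    # partitioning compares the pivot against every other element once
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 500
print(quicksort_comparisons(list(range(n))))              # 124750 = n(n-1)/2
random.seed(0)
print(quicksort_comparisons(random.sample(range(n), n)))  # far smaller, ~ n log2 n
```

Randomized pivot selection makes the quadratic case vanishingly unlikely, which is why average-case analysis is the standard lens for quicksort.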

23
Q

How does O(n + n/2 + n/4 + n/8 + …) simplify to O(n)?

A

The sum forms a geometric series that converges to 2n, so the total complexity is O(n), as the series sums to a constant multiple of n.
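
A quick numeric check of the bound (integer halving used for simplicity): however many levels the halving runs for, the total never reaches 2n.

```python
# The work per level halves, so the total stays below 2n.

n = 1_000_000
total, term = 0, n
while term >= 1:
    total += term
    term //= 2          # next level does half the work
print(total < 2 * n)  # True
```

This pattern is why an algorithm that does n work, then n/2, then n/4, and so on, is O(n) overall rather than O(n log n).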

24
Q

Why might we allow exceptions in the definition of Big-O notation?

A

Allowing finite exceptions simplifies analysis by focusing on large n behavior without worrying about small input sizes where the inequality might not hold.

25
Q

Can we say that O(n + … + n) = O(n) regardless of the number of terms?

A

No, the total complexity depends on the number of terms. If there are k terms, the complexity is O(k * n). Only if k is a constant does it simplify to O(n).

26
Q

Is O(n log n) in O(n log log n), and vice versa?

A

No, n log n grows faster than n log log n, so O(n log log n) ⊆ O(n log n), but not the other way around.

27
Q

How do we use limits to prove O-bounds?

A

If the limit of t(n)/f(n) as n approaches infinity exists and is a finite constant c ≥ 0, then t(n) = O(f(n)); if the limit is infinite, then t(n) is not O(f(n)).

28
Q

What is the practical implication of an algorithm with exponential time complexity?

A

Algorithms with exponential time complexities (like O(2^n)) become impractical for even moderately large input sizes due to the rapid growth in running time.

29
Q

How can we show that O(n^√n) = O(2^n)?

A

Write n^√n = e^(√n · ln n) and 2^n = e^(n · ln 2). Since √n · ln n grows slower than n · ln 2, the exponent of n^√n is eventually smaller, so n^√n = o(2^n) and therefore O(n^√n) ⊆ O(2^n).
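
Comparing the two exponents numerically makes the claim visible (a small illustration, with arbitrarily chosen sample values of n): the ratio (√n · ln n) / (n · ln 2) shrinks toward 0.

```python
# Ratio of the exponents of n^sqrt(n) = e^(sqrt(n) ln n)
# and 2^n = e^(n ln 2); it tends to 0 as n grows.

import math

ratios = []
for n in [10, 100, 10_000, 10**6]:
    ratio = (math.sqrt(n) * math.log(n)) / (n * math.log(2))
    ratios.append(ratio)
    print(n, round(ratio, 4))
```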

30
Q

Why are polynomial time algorithms considered efficient?

A

Because their running times increase at a manageable rate as input size grows, making them practical for a wide range of problem sizes.

31
Q

Explain why algorithms with time complexities of O(n log n) are generally preferable to those with O(n²).

A

Because n log n grows significantly slower than n², especially for large n, resulting in faster algorithms that scale better with increased input sizes.

32
Q

Why is average-case analysis sometimes more appropriate than worst-case?

A

In cases where worst-case scenarios are rare or artificial, average-case analysis provides a more realistic measure of an algorithm’s performance on typical inputs.

33
Q

What is a practical example of logarithmic time complexity in data structures?

A

Operations on balanced binary search trees (e.g., insertion, deletion, search) typically have O(log n) time complexity due to the tree’s height being logarithmic in the number of nodes.

34
Q

How does understanding Big-O notation help in algorithm design?

A

It helps in evaluating and comparing algorithms based on their efficiency and scalability, guiding the selection of the most appropriate algorithm for a given problem.

35
Q

What does it mean for one function to be “smaller” than another in terms of Big-O notation?

A

It means that the first function grows at a slower rate than the second function as the input size increases.

36
Q

Can an algorithm with a higher Big-O classification be faster in practice than one with a lower one?

A

Yes, for small input sizes or due to large constant factors in the lower Big-O algorithm, but for large inputs, the algorithm with the lower Big-O classification will generally be faster.
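
A toy illustration with made-up cost formulas (illustrative numbers, not measurements of real algorithms): an O(n²) algorithm with a small constant beats an O(n) algorithm with a large constant below their crossover point, and loses above it.

```python
# Hypothetical cost models showing a crossover between
# a cheap-constant O(n^2) and an expensive-constant O(n).

def cost_quadratic(n):
    return n * n          # small constant factor

def cost_linear(n):
    return 1000 * n       # large constant factor

print(cost_quadratic(100) < cost_linear(100))       # True: quadratic wins at n = 100
print(cost_quadratic(10**5) < cost_linear(10**5))   # False: linear wins at n = 100000
```

The crossover here sits at n = 1000, where the two hypothetical costs are equal.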

37
Q

Why is Big-O notation considered an “asymptotic” analysis?

A

Because it describes the behavior of functions as the input size approaches infinity, focusing on long-term growth rates rather than exact values for finite n.

38
Q

In the context of Big-O notation, what does “suppressing constant factors” mean?

A

It means ignoring multiplicative constants in the analysis since they do not affect the overall growth rate and are insignificant for large input sizes.