Complexity and sorting Flashcards

1
Q

What would you call a finite sequence of precise instructions for performing a computation or for solving a problem?

A

An algorithm

2
Q

What are the seven properties of an algorithm?

A

Input, output, definiteness, correctness, finiteness, effectiveness and generality.

3
Q

There are several properties that algorithms generally share. What does input mean in this context?

A

An algorithm has input values from a specified set.

4
Q

There are several properties that algorithms generally share. What does output mean in this context?

A

From each set of input values an algorithm produces output values from a specified set.
The output values are the solution to the problem.

5
Q

There are several properties that algorithms generally share. What does definiteness mean in this context?

A

The steps of an algorithm must be defined precisely.

6
Q

There are several properties that algorithms generally share. What does correctness mean in this context?

A

An algorithm should produce the correct output values for each set of input values.

7
Q

There are several properties that algorithms generally share. What does finiteness mean in this context?

A

An algorithm should produce the desired output after a finite (but perhaps large) number of steps for any input in the set.

8
Q

There are several properties that algorithms generally share. What does effectiveness mean in this context?

A

It must be possible to perform each step of an algorithm exactly and in a finite amount of time.

9
Q

There are several properties that algorithms generally share. What does generality mean in this context?

A

The procedure should be applicable for all problems of the desired form, not just for a particular set of input values.

10
Q

What is a greedy algorithm?

A

This approach selects the best choice at each step, instead of considering all sequences of steps that may lead to an optimal solution. Algorithms that make what seems to be the “best” choice at each step are called greedy algorithms.
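For illustration (a minimal Python sketch, not part of the original card; the function name and the coin denominations 25, 10, 5 and 1 are just assumed examples), making change greedily:

def greedy_change(amount, denominations=(25, 10, 5, 1)):
    # At each step, take the largest coin that still fits: the greedy choice.
    coins = []
    for coin in denominations:           # assumed sorted from largest to smallest
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

# greedy_change(67) -> [25, 25, 10, 5, 1, 1]

For these particular denominations the greedy result happens to be optimal, which is exactly the kind of claim the next card says must be verified.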

11
Q

Why do you need to verify the results of a greedy algorithm?

A

Once we know that a greedy algorithm finds a feasible solution, we need to determine whether it has found an optimal solution. (Note that we call the algorithm “greedy” whether or not it finds an optimal solution.) To do this, we either prove that the solution is optimal or we show that there is a counterexample where the algorithm yields a nonoptimal solution.

12
Q

Name different types of algorithms.

A

Common algorithmic paradigms include greedy algorithms, brute-force algorithms, divide-and-conquer algorithms, probabilistic algorithms, backtracking and dynamic programming.

13
Q

What are brute-force algorithms designed for?

A

Brute-force algorithms are designed to solve problems without regard to the computing resources required. For example, in some brute-force algorithms the solution to a problem is found by examining every possible solution, looking for the best possible solution.
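As a minimal sketch (an illustration, not from the original card; the function name is made up), a brute-force approach to finding the closest pair of numbers simply examines every possible pair:

def closest_pair_brute_force(values):
    # Check all pairs and keep the one with the smallest absolute difference.
    best = None
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            pair = (values[i], values[j])
            if best is None or abs(pair[0] - pair[1]) < abs(best[0] - best[1]):
                best = pair
    return best

# closest_pair_brute_force([4, 9, 1, 32, 13]) -> (4, 1)

This uses O(n^2) comparisons but requires no insight into the problem, which is typical of brute force.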

14
Q

What is a randomised algorithm?

A

A randomized algorithm is an algorithm which employs a degree of randomness as part of its logic. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the “average case” over all possible choices of random bits. Formally, the algorithm’s performance will be a random variable determined by the random bits; thus either the running time, or the output (or both) are random variables.

15
Q

What is a nondeterministic algorithm?

A

A nondeterministic algorithm is an algorithm that can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run.

16
Q

Why can a concurrent algorithm perform differently on different runs?

A

A concurrent algorithm can perform differently on different runs due to a race condition.

17
Q

What does a probabilistic algorithm's behavior depend on?

A

A probabilistic algorithm's behavior depends on a random number generator.

18
Q

The time required to solve a problem usually depends on…

A

The number of operations the algorithm uses, as well as the hardware and software used to perform those operations.

19
Q

What is the main advantage of using big-O notation?

A

One of the advantages of using big-O notation is that we can estimate the growth of a function without worrying about constant multipliers or smaller order terms. This means that, using big-O notation, we do not have to worry about the hardware and software used to implement an algorithm.

20
Q

In what way does using big-O notation simplify the analysis?

A

Using big-O notation, we can assume that the different operations used in an algorithm take the same time, which simplifies the analysis considerably. Big-O notation is used extensively to estimate the number of operations an algorithm uses as its input grows. With the help of this notation, we can determine whether it is practical to use a particular algorithm to solve a problem as the size of the input increases.

21
Q

Can you use big-O notation to compare two algorithms?

A

Using big-O notation, we can compare two algorithms to determine which is more efficient as the size of the input grows. For instance, if we have two algorithms for solving a problem, one using 100n^2 + 17n + 4 operations and the other using n^3 operations, big-O notation can help us see that the first algorithm uses considerably fewer operations when n is large, even though it uses more operations for small values of n, such as n = 10.

22
Q

What is a useful approach for finding a pair of witnesses?

A

A useful approach for finding a pair of witnesses is to first select a value of k for which the size of |f(x)| can be readily estimated when x > k and to see whether we can use this estimate to find a value of C for which |f(x)| ≤ C|g(x)| for x > k.
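A standard illustration (not part of the original card): to show that f(x) = x^2 + 2x + 1 is O(x^2), pick k = 1; for x > 1 we have x^2 + 2x + 1 < x^2 + 2x^2 + x^2 = 4x^2, so C = 4 and k = 1 serve as a pair of witnesses.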

23
Q

What are subprocedures and what are they for?

A

Many algorithms are made up of two or more separate subprocedures. The number of steps used by a computer to solve a problem with input of a specified size using such an algorithm is the sum of the number of steps used by these subprocedures. To give a big-O estimate for the number of steps needed, it is necessary to find big-O estimates for the number of steps used by each subprocedure and then combine these estimates.

24
Q

Big-O notation is used extensively to describe the growth of functions, but it has limitations. Name some of the limitations.

A

Big-O notation does not provide a lower bound for the size of f(x) for large x. For this, we use big-Omega (big-Ω) notation. When we want to give both an upper and a lower bound on the size of a function f(x), relative to a reference function g(x), we use big-Theta (big-Θ) notation.

25
Q

What was Donald Knuth's motivation for introducing big-Omega and big-Theta notation?

A

Donald Knuth's motivation for introducing these notations was the common misuse of big-O notation when both an upper and a lower bound on the size of a function are needed.

26
Q

When does an algorithm provide a satisfactory solution to a problem?

A

First, it must always produce the correct answer. This can be guaranteed by applying program verification techniques. Second, it should be efficient. How can the efficiency of an algorithm be analyzed? One measure of efficiency is the time used by a computer to solve a problem using the algorithm, when input values are of a specified size. A second measure is the amount of computer memory required to implement the algorithm when input values are of a specified size.

27
Q

Why is it important to consider time and space complexity when implementing algorithms?

A

Considerations of the time and space complexity of an algorithm are essential when algorithms are implemented. It is obviously important to know whether an algorithm will produce an answer in a microsecond, a minute, or a billion years. Likewise, the required memory must be available to solve a problem, so that space complexity must be taken into account.

28
Q

How is time complexity in an algorithm expressed?

A

The time complexity of an algorithm can be expressed in terms of the number of operations used by the algorithm when the input has a particular size. The operations used to measure time complexity can be the comparison of integers, the addition of integers, the multiplication of integers, the division of integers, or any other basic operation.

29
Q

What does worst-case complexity mean?

A

By the worst-case performance of an algorithm, we mean the largest number of operations needed to solve the given problem using this algorithm on input of specified size. Worst-case analysis tells us how many operations an algorithm requires to guarantee that it will produce a solution.

30
Q

What is average-case complexity?

A

Another important type of complexity analysis, besides worst-case analysis, is called average-case analysis. The average number of operations used to solve the problem over all possible inputs of a given size is found in this type of analysis. Average-case time complexity analysis is usually much more complicated than worst-case analysis.

31
Q

What is O(1) complexity called?

A

Constant complexity

32
Q

What is O(log n) complexity called?

A

Logarithmic complexity

33
Q

What is O(n) complexity called?

A

Linear complexity

34
Q

What is O(n log n) complexity called?

A

Linearithmic complexity

35
Q

What is O(n^b) complexity called?

A

Polynomial complexity

36
Q

What is O(b^n) complexity called, where b > 1?

A

Exponential complexity

37
Q

What is O(n!) complexity called?

A

Factorial complexity

38
Q

When can a binary search be used?

A

The binary search algorithm can be used when the list has terms occurring in order of increasing size (for instance: if the terms are numbers, they are listed from smallest to largest; if they are words, they are listed in lexicographic, or alphabetic, order). It proceeds by comparing the element to be located to the middle term of the list. The list is then split into two smaller sublists of the same size, or where one of these smaller lists has one fewer term than the other. The search continues by restricting it to the appropriate sublist based on the comparison of the element to be located and the middle term. In this way it makes fewer comparisons than searching the list term by term.
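A minimal Python sketch of the procedure (an illustration; the function name and the example list are assumptions):

def binary_search(sorted_list, target):
    # Return the index of target in sorted_list, or -1 if it is absent.
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2          # compare against the middle term
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1                # restrict the search to the upper sublist
        else:
            high = mid - 1               # restrict the search to the lower sublist
    return -1

# binary_search([2, 5, 7, 11, 19], 11) -> 3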

39
Q

What type of algorithm is the bubble sort, in terms of its time complexity?

A

The bubble sort algorithm is a polynomial-time algorithm because it uses O(n^2) comparisons in the worst case.

40
Q

An algorithm has exponential complexity if….

A

…it has time complexity O(b^n), where b > 1.

41
Q

An algorithm has factorial complexity if …

A

…it has O(n!) time complexity.

42
Q

What kind of algorithm can find all orders that a travelling salesperson could use to visit n cities?

A

The algorithm that finds all orders that a traveling salesperson could use to visit n cities has factorial complexity.

43
Q

What does recursion mean?

A

Sometimes it is difficult to define an object explicitly. However, it may be easy to define this object in terms of itself; this process is called recursion. Sometimes we can reduce the solution to a problem with a particular set of input values to the solution of the same problem with smaller input values. An algorithm is called recursive if it solves a problem by reducing it to an instance of the same problem with smaller input.
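A small Python illustration (not from the original card): computing n! by reducing it to the same problem with smaller input:

def factorial(n):
    # Base case stops the recursion; otherwise reduce to factorial(n - 1).
    if n <= 1:
        return 1
    return n * factorial(n - 1)

# factorial(5) -> 120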

44
Q

What does this equation illustrate? f(n) = a·f(n/b) + g(n)

A

This is called a divide-and-conquer recurrence relation.
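For instance, the binary search recurrence in the next card, f(n) = f(n/2) + 2, has this form with a = 1, b = 2 and g(n) = 2; merge sort gives another instance, f(n) = 2f(n/2) + O(n), since it splits the list into two halves and uses roughly n extra comparisons to merge them.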

45
Q

What does the binary search algorithm do?

A

The binary search algorithm reduces the search for an element in a search sequence of size n to the binary search for this element in a search sequence of size n/2, when n is even. Two comparisons are needed to implement this reduction (one to determine which half of the list to use and the other to determine whether any terms of the list remain). Hence, if f(n) is the number of comparisons required to search for an element in a search sequence of size n, then f(n) = f(n/2) + 2 when n is even.

46
Q

What are divide-and-conquer algorithms?

A

Divide-and-conquer algorithms divide a problem into one or more instances of the same problem of smaller size and they conquer the problem by using the solutions of the smaller problems to find a solution of the original problem, perhaps with some additional work.

47
Q

What does the bubble sort do?

A

It puts a list into increasing order by successively comparing adjacent elements, interchanging them if they are in the wrong order. To carry out the bubble sort, we perform the basic operation, that is, interchanging a larger element with a smaller one following it, starting at the beginning of the list, for a full pass. We iterate this procedure until the sort is complete. We can imagine the elements in the list placed in a column. In the bubble sort, the smaller elements bubble to the top as they are interchanged with larger elements. The larger elements sink to the bottom. The bubble sort is one of the simplest sorting algorithms, but not one of the most efficient.
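A minimal Python sketch (an illustration; the function name is made up, and the swapped flag implements the early-exit behaviour discussed in the next card):

def bubble_sort(items):
    # Repeatedly interchange adjacent elements that are out of order.
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # one full pass over the unsorted part
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:                  # no interchanges: the list is already sorted
            break
    return items

# bubble_sort([3, 1, 4, 1, 5]) -> [1, 1, 3, 4, 5]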

48
Q

What is the significant advantage of a bubble sort compared to other sorting methods?

A

The only significant advantage that bubble sort has over most other implementations, even quicksort, but not insertion sort, is that the ability to detect efficiently that the list is already sorted is built into the algorithm. When the list is already sorted (the best case), the complexity of bubble sort is only O(n). By contrast, most other algorithms, even those with better average-case complexity, perform their entire sorting process on the set and thus are more complex. However, not only does insertion sort share this mechanism, but it also performs better on a list that is substantially sorted (having a small number of inversions).

49
Q

In which case should you avoid using bubble sort?

A

Bubble sort should be avoided in the case of large collections. It will not be efficient in the case of a reverse-ordered collection.

50
Q

What kind of complexity does a bubble sort have?

A

Bubble sort has worst-case and average complexity both O(n^2), where n is the number of items being sorted. There exist many sorting algorithms, such as mergesort, with substantially better worst-case or average complexity of O(n log n). Even other O(n^2) sorting algorithms, such as insertion sort, tend to have better performance than bubble sort. Therefore, bubble sort is not a practical sorting algorithm when n is large.

51
Q

What does a selection sort do?

A

The algorithm divides the input list into two parts: the sublist of items already sorted, which is built up from left to right at the front (left) of the list, and the sublist of items remaining to be sorted that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.
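A minimal Python sketch of this procedure (an illustration; sorting into increasing order, with the function name assumed):

def selection_sort(items):
    # Grow the sorted sublist from the left by repeatedly selecting the
    # smallest remaining element and swapping it to the boundary.
    n = len(items)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):        # scan the unsorted sublist
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

# selection_sort([64, 25, 12, 22]) -> [12, 22, 25, 64]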

52
Q

Why is selection sort not difficult to analyse compared to other sorting algorithms?

A

Selection sort is not difficult to analyse compared to other sorting algorithms since none of the loops depend on the data in the array. Selecting the lowest element requires scanning all n elements (this takes n−1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n−1 elements and so on, for (n−1)+(n−2)+…+2+1 = n(n−1)/2 ∈ Θ(n^2) comparisons. Each of these scans requires one swap for n − 1 elements (the final element is already in place).

53
Q

In what case is selection sort greatly outperformed?

A

Selection sort is greatly outperformed on larger arrays by Θ(n log n) divide-and-conquer algorithms such as mergesort. However, insertion sort or selection sort are both typically faster for small arrays (i.e. fewer than 10-20 elements). A useful optimisation in practice for the recursive algorithms is to switch to insertion sort or selection sort for "small enough" sublists.

54
Q

What type of sorting algorithm work as follows: Divides the unsorted list into n sublists, each containing 1 element (a list of 1 element is considered sorted). Repeatedly merge sublists to produce new sorted sublists until there is only 1 sublist remaining. This will be the sorted list.

A

Mergesort
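A short recursive Python sketch of the same idea (an illustration, not a definitive implementation; this top-down version splits the list in half rather than starting from n one-element sublists, but the merging step is identical):

def merge_sort(items):
    # A list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted sublists
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

# merge_sort([5, 2, 9, 1]) -> [1, 2, 5, 9]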

55
Q

What is a quicksort?

A

Quicksort (sometimes called partition-exchange sort) is an efficient sorting algorithm, serving as a systematic method for placing the elements of an array in order. In efficient implementations it is not a stable sort, meaning that the relative order of equal sort items is not preserved. Quicksort can operate in-place on an array, requiring small additional amounts of memory to perform the sorting.

56
Q

What kind of algorithm is a quicksort?

A

Quicksort is a divide-and-conquer algorithm. Quicksort first divides a large array into two smaller sub-arrays: the low elements and the high elements. Quicksort can then recursively sort the sub-arrays.
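A minimal Python sketch of this structure (an illustration only; for clarity it builds new lists rather than operating in-place, unlike the implementations the previous card describes):

def quicksort(items):
    # Partition around a pivot into low and high sub-arrays, then recurse.
    if len(items) <= 1:
        return items
    pivot = items[0]
    low = [x for x in items[1:] if x < pivot]     # elements smaller than the pivot
    high = [x for x in items[1:] if x >= pivot]   # elements at least as large
    return quicksort(low) + [pivot] + quicksort(high)

# quicksort([3, 6, 1, 8, 2]) -> [1, 2, 3, 6, 8]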

57
Q

Which type of sorting has both worst-case and average complexity O(n^2)?

A

Bubble sort has worst-case and average complexity both O(n^2), where n is the number of items being sorted. There exist many sorting algorithms, such as mergesort, with substantially better worst-case or average complexity of O(n log n). Even other O(n^2) sorting algorithms, such as insertion sort, tend to have better performance than bubble sort. Therefore, bubble sort is not a practical sorting algorithm when n is large.