*Sorting, Searching & Algorithm Analysis* Flashcards

1
Q

Bubble sort

A

A simple algorithm for arranging data in ascending or descending order. It starts by comparing the first element in an array to its neighbor: if element 0 is greater than element 1, the two are swapped. This comparison of neighboring values is repeated for all values in the array, over multiple passes, until they are all in order. It’s called a bubble sort because it makes several passes through the elements of the array, and the larger values “bubble” toward the end of the array with each pass.
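A minimal sketch of the pass-and-swap idea described above, written in Java since later cards in the deck reference Java's Comparable interface (the class and method names are illustrative, not from the source):

```java
public class BubbleSortDemo {
    // Repeatedly compare neighboring elements and swap them when they are out
    // of order; after each pass the largest remaining value has "bubbled" to
    // the end of the unsorted portion.
    public static void bubbleSort(int[] array) {
        for (int lastUnsorted = array.length - 1; lastUnsorted > 0; lastUnsorted--) {
            for (int i = 0; i < lastUnsorted; i++) {
                if (array[i] > array[i + 1]) {
                    int temp = array[i];          // swap the neighbors
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] values = {5, 1, 4, 2, 8};
        bubbleSort(values);
        System.out.println(java.util.Arrays.toString(values)); // [1, 2, 4, 5, 8]
    }
}
```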

2
Q

Selection sort

A

Unlike the bubble sort, which begins by comparing elements 0 and 1, this algorithm begins by scanning the entire array to locate the smallest value and moving it to position 0. Then the next smallest value is identified and moved to position 1. This process is repeated until all of the elements have been placed in the proper order.
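A minimal Java sketch of this scan-and-select approach (names are illustrative):

```java
public class SelectionSortDemo {
    // Scan the unsorted portion for the smallest value, then swap it into the
    // next position of the growing sorted portion at the front of the array.
    public static void selectionSort(int[] array) {
        for (int start = 0; start < array.length - 1; start++) {
            int minIndex = start;
            for (int i = start + 1; i < array.length; i++) {
                if (array[i] < array[minIndex]) {
                    minIndex = i;                 // remember the smallest value seen so far
                }
            }
            int temp = array[start];              // move it into position
            array[start] = array[minIndex];
            array[minIndex] = temp;
        }
    }

    public static void main(String[] args) {
        int[] values = {29, 10, 14, 37, 13};
        selectionSort(values);
        System.out.println(java.util.Arrays.toString(values)); // [10, 13, 14, 29, 37]
    }
}
```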

3
Q

Insertion sort

A

Like the bubble sort, the insertion sort begins by comparing elements 0 and 1 and, if necessary, swaps them so that they’re in the proper order. Then the third element is compared to the first two (the sorted part of the array) and is inserted in the appropriate place relative to them. This process is repeated for each remaining element until the array is fully sorted.
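A minimal Java sketch: each new element is shifted left past larger values until it sits in its proper place within the sorted portion (names are illustrative):

```java
public class InsertionSortDemo {
    // Grow a sorted region at the front of the array, inserting each new
    // element into its correct position relative to the elements before it.
    public static void insertionSort(int[] array) {
        for (int next = 1; next < array.length; next++) {
            int value = array[next];
            int i = next;
            while (i > 0 && array[i - 1] > value) {
                array[i] = array[i - 1];          // shift larger elements right
                i--;
            }
            array[i] = value;                     // insert into its proper place
        }
    }

    public static void main(String[] args) {
        int[] values = {7, 3, 9, 1, 5};
        insertionSort(values);
        System.out.println(java.util.Arrays.toString(values)); // [1, 3, 5, 7, 9]
    }
}
```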

4
Q

Quicksort

A

The quicksort algorithm is a popular routine developed in 1960 by C.A.R. Hoare. It divides an array into two sublists with a pivot value between them, arranging the sublists so that all the values less than the pivot are stored in the left sublist and all the values greater than the pivot are stored in the right sublist. The algorithm then recursively sorts each sublist in the same way.
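A minimal Java sketch of the partition-and-recurse idea, assuming the first element of the range is used as the pivot (a common textbook choice; the card itself doesn't specify how the pivot is picked):

```java
public class QuickSortDemo {
    // Sort the elements from index start to end (inclusive) by partitioning
    // around a pivot value and recursively sorting the two sublists.
    public static void quickSort(int[] array, int start, int end) {
        if (start < end) {
            int pivotPoint = partition(array, start, end);
            quickSort(array, start, pivotPoint - 1);   // left sublist: values < pivot
            quickSort(array, pivotPoint + 1, end);     // right sublist: values > pivot
        }
    }

    // Use the first element as the pivot and gather smaller values to its left.
    private static int partition(int[] array, int start, int end) {
        int pivotValue = array[start];
        int endOfLeft = start;
        for (int scan = start + 1; scan <= end; scan++) {
            if (array[scan] < pivotValue) {
                endOfLeft++;
                swap(array, endOfLeft, scan);
            }
        }
        swap(array, start, endOfLeft);                 // place the pivot between the sublists
        return endOfLeft;
    }

    private static void swap(int[] array, int a, int b) {
        int temp = array[a];
        array[a] = array[b];
        array[b] = temp;
    }

    public static void main(String[] args) {
        int[] values = {9, 4, 7, 1, 8, 2};
        quickSort(values, 0, values.length - 1);
        System.out.println(java.util.Arrays.toString(values)); // [1, 2, 4, 7, 8, 9]
    }
}
```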

5
Q

Can any of these sorting algorithms be applied to objects as opposed to integers or doubles?

A

Yes. All of them can be used to sort an array of objects as long as the objects’ class implements the Comparable interface, so that the compareTo method can be used to compare elements instead of the relational operators.
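A minimal Java sketch, using a hypothetical Student class whose natural ordering is defined by compareTo; any of the sorting routines above could call compareTo in place of the < and > operators (the library sort is used here only to show the resulting order):

```java
import java.util.Arrays;

public class ComparableDemo {
    // A simple class whose natural ordering (lowest GPA first) comes from compareTo.
    static class Student implements Comparable<Student> {
        final String name;
        final double gpa;

        Student(String name, double gpa) {
            this.name = name;
            this.gpa = gpa;
        }

        @Override
        public int compareTo(Student other) {
            return Double.compare(this.gpa, other.gpa);   // order by GPA
        }

        @Override
        public String toString() {
            return name + " (" + gpa + ")";
        }
    }

    public static void main(String[] args) {
        Student[] roster = {
            new Student("Ada", 3.9),
            new Student("Grace", 3.5),
            new Student("Alan", 3.7)
        };
        Arrays.sort(roster);                          // uses compareTo to order the objects
        System.out.println(Arrays.toString(roster));  // [Grace (3.5), Alan (3.7), Ada (3.9)]
    }
}
```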

6
Q

Sequential/linear search

A

The sequential search algorithm uses a loop to step through an array, starting with the first element. It compares each element with the target value and stops when the value is found or the end of the array is reached. Its main advantages are its simplicity and flexibility: it can be used on any array, sorted or not. However, this flexibility has a cost. If the array has 20,000 elements, the algorithm may need to look at all 20,000 of them if the target value is the last element or isn’t present at all. In the average case, though, an item is just as likely to be found near the beginning as near the end, so the sequential search will typically locate a target value in about n/2 attempts.
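A minimal Java sketch of the loop described above (names are illustrative):

```java
public class LinearSearchDemo {
    // Examine each element in turn; return the index of the first match,
    // or -1 if the target value is not present.
    public static int sequentialSearch(int[] array, int target) {
        for (int index = 0; index < array.length; index++) {
            if (array[index] == target) {
                return index;                      // found the target
            }
        }
        return -1;                                 // reached the end without finding it
    }

    public static void main(String[] args) {
        int[] values = {14, 3, 77, 20, 8};
        System.out.println(sequentialSearch(values, 20));   // 3
        System.out.println(sequentialSearch(values, 99));   // -1 (worst case: all elements examined)
    }
}
```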

7
Q

Binary search

A

An efficient alternative to the sequential search that can be used as long as the elements of the array are sorted. It begins by testing the middle element against the target value. If they are equal, the search ends. Otherwise the process repeats on the first half or the second half of the array, depending on whether the target value is less than or greater than the middle value. Each time the loop fails to locate the target value, the portion of the array being searched is therefore cut in half. Powers of 2 can be used to calculate the maximum number of comparisons needed to find a target value in an array of any size: simply find the smallest power of 2 that is equal to or greater than the number of elements in the array. For example, for an array of 50,000 elements, a maximum of 16 comparisons is needed (2^16 = 65,536).
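A minimal Java sketch, assuming the array is already sorted in ascending order (names are illustrative):

```java
public class BinarySearchDemo {
    // Repeatedly test the middle element of the remaining range, discarding
    // the half of the array that cannot contain the target.
    public static int binarySearch(int[] array, int target) {
        int first = 0;
        int last = array.length - 1;
        while (first <= last) {
            int middle = (first + last) / 2;
            if (array[middle] == target) {
                return middle;                     // found the target
            } else if (target < array[middle]) {
                last = middle - 1;                 // search the first half
            } else {
                first = middle + 1;                // search the second half
            }
        }
        return -1;                                 // target is not in the array
    }

    public static void main(String[] args) {
        int[] values = {2, 5, 9, 14, 21, 30, 42};
        System.out.println(binarySearch(values, 21));   // 4
        System.out.println(binarySearch(values, 3));    // -1
    }
}
```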

8
Q

What two criteria are most often used to judge the efficiency of an algorithm?

A

Space (the amount of memory required) and time (the length of execution time).

9
Q

Worst case complexity function

A

When an algorithm may perform different amounts of work for different inputs of the same size, it is common to measure its efficiency by the work done on the input of size n that requires the most work. This is the maximum number of steps the algorithm may perform, and it indicates the longest time the algorithm will take to solve an instance of size n. It is a good measure of efficiency when we’re looking for a performance guarantee.

10
Q

What is the worst case complexity of binary search?

A

The worst case occurs when the target value is not found in the array. In this case the algorithm performs L + 1 steps, where L is the number of loop iterations and the extra 1 accounts for initializing variables before the loop. In this situation binary search requires time proportional to log n.

11
Q

What is the worst case complexity of selection sort?

A

O(n^2). The nested loops of selection sort perform on the order of n^2 comparisons regardless of how the data is initially arranged.

12
Q

Average case complexity

A

This gives a good approximation of how an algorithm will perform in situations where the relative frequencies with which different inputs occur are known. The average case complexity function uses these frequencies to form a weighted average of the number of steps performed on each input. Although it yields a good measure of the expected performance of an algorithm, it is difficult to use in practice because reliable estimates of input frequencies are usually not available.

13
Q

Asymptotic complexity

A

The use of asymptotic analysis to estimate the computational complexity of algorithms and computational problems, most commonly expressed using big O notation.

14
Q

Big O notation

A

A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
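The "upper bound" idea can be stated precisely; a standard textbook formulation (not quoted from this deck) is:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
```

For example, 3n^2 + 10n is O(n^2), because 3n^2 + 10n ≤ 13n^2 for all n ≥ 1.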

15
Q

Complexity classes

A

For a given function g(n), the complexity class O(g(n)) is the family of functions that grow no faster than g(n), up to a constant factor.

16
Q

Constant time / O(1)

A

A function f(n) is in this class if there is a constant K > 0 such that f(n) ≤ K for all n ≥ 1. An algorithm whose worst case complexity function is in this class is said to run in constant time.

17
Q

Logarithmic time / O(log n)

A

Because log n grows much more slowly than n, a huge increase in input size results in only a small increase in the running time of the algorithm. This complexity is characteristic of search algorithms that eliminate half of the search space with each operation. Thus, binary search is in this class.

18
Q

Linear time / O(n)

A

Any increase in the input size results in a proportionate increase in the running time of the algorithm. This complexity is characteristic of algorithms, like sequential search, that make a single pass (or a constant number of passes) over their input.

19
Q

“N log n” time / O(n log n)

A

An increase in the size of the problem results in a slightly greater increase in the running time of the algorithm. The average case complexity of Quicksort lies in this class. Two well-known sorting algorithms, Mergesort and Heapsort, have worst case complexity functions that also lie in this class.
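The card names Mergesort as one example; a minimal Java sketch of a top-down merge sort (names are illustrative), where the log n levels of halving combined with the linear-time merges produce the n log n behavior:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursively split the array in half, sort each half, then merge the
    // two sorted halves back together.
    public static void mergeSort(int[] array) {
        if (array.length < 2) {
            return;                                // a single element is already sorted
        }
        int mid = array.length / 2;
        int[] left = Arrays.copyOfRange(array, 0, mid);
        int[] right = Arrays.copyOfRange(array, mid, array.length);
        mergeSort(left);
        mergeSort(right);
        merge(array, left, right);
    }

    // Merge two sorted arrays into the destination array in linear time.
    private static void merge(int[] dest, int[] left, int[] right) {
        int l = 0, r = 0, d = 0;
        while (l < left.length && r < right.length) {
            dest[d++] = (left[l] <= right[r]) ? left[l++] : right[r++];
        }
        while (l < left.length)  dest[d++] = left[l++];
        while (r < right.length) dest[d++] = right[r++];
    }

    public static void main(String[] args) {
        int[] values = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(values);
        System.out.println(Arrays.toString(values));   // [3, 9, 10, 27, 38, 43, 82]
    }
}
```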

20
Q

Quadratic time / O(n^2)

A

This performance is characteristic of algorithms that make multiple passes over input data using two nested loops. An increase in input size causes a much greater increase in the running time of the algorithm. The worst case complexity functions of bubble sort, selection sort, insertion sort, and Quicksort all lie in this class.

21
Q

Polynomial time

A

The union of all the classes O(n^1), O(n^2), O(n^3), etc. forms a class called polynomial time.

22
Q

Basic operation

A

A step executed by an algorithm that always takes the same amount of time (is constant) regardless of the size of the input.