Sorting Flashcards
Best, average, worst time of quicksort
Best: Ω(n log(n))
Average: Θ(n log(n))
Worst: O(n^2)
Worst space complexity of quicksort
O(log(n))
Merge sort best, average, worst time
Best: Ω(n log(n))
Average: Θ(n log(n))
Worst: O(n log(n))
Merge sort worst space
O(n)
Timsort best, average, worst time
Best: Ω(n)
Average: Θ(n log(n))
Worst: O(n log(n))
Timsort worst space
O(n)
Heapsort best, average, worst time
Best: Ω(n log(n))
Average: Θ(n log(n))
Worst: O(n log(n))
Heapsort worst space
O(1)
Bubble sort best, average, worst time
Best: Ω(n)
Average: Θ(n^2)
Worst: O(n^2)
Bubble sort worst space
O(1)
Insertion sort best, average, worst time
Best: Ω(n)
Average: Θ(n^2)
Worst: O(n^2)
Insertion sort worst space
O(1)
Selection sort best, average, worst time
Best: Ω(n^2)
Average: Θ(n^2)
Worst: O(n^2)
Selection sort worst space
O(1)
Tree sort best, average, worst time
Best: Ω(n log(n))
Average: Θ(n log(n))
Worst: O(n^2)
Tree sort worst space complexity
O(n)
Shell sort best, average, worst time
Best: Ω(n log(n))
Average: Θ(n (log(n))^2)
Worst: O(n (log(n))^2)
Shell sort worst space
O(1)
Bucket sort best, average, worst time
Best: Ω(n+k)
Average: Θ(n+k)
Worst: O(n^2)
Bucket sort worst space
O(n)
Radix sort best, average, worst time
Best: Ω(nk)
Average: Θ(nk)
Worst: O(nk)
Radix sort worst space
O(n+k)
Counting sort best, average, worst time
Best: Ω(n+k)
Average: Θ(n+k)
Worst: O(n+k)
Counting sort worst space
O(k)
Cube sort best, average, worst time
Best: Ω(n)
Average: Θ(n log(n))
Worst: O(n log(n))
Cubesort worst space
O(n)
Shellsort description
Shellsort, also known as Shell sort or Shell’s method, is an in-place comparison sort. It can be seen as either a generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort).[3] The method starts by sorting pairs of elements far apart from each other, then progressively reducing the gap between elements to be compared. By starting with far-apart elements, it can move some out-of-place elements into position faster than a simple nearest-neighbor exchange. Donald Shell published the first version of this sort in 1959.[4][5] The running time of Shellsort is heavily dependent on the gap sequence it uses. For many practical variants, determining their time complexity remains an open problem.
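As a concrete illustration, here is a minimal Python sketch of Shellsort. It assumes Shell's original gap sequence (n/2, n/4, ..., 1); as noted above, other gap sequences give different running times.

def shellsort(a):
    """In-place Shellsort using Shell's original gap sequence (n//2, n//4, ..., 1)."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: elements gap positions apart form interleaved sublists.
        for i in range(gap, n):
            temp = a[i]
            j = i
            # Shift earlier gap-sorted elements up until the right spot for temp is found.
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
        gap //= 2
    return a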
Bucket sort description
Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sort algorithm. It is a distribution sort, a generalization of pigeonhole sort, and a cousin of radix sort in its most-to-least significant digit flavor.
Bucket sort works as follows:
Set up an array of initially empty “buckets”.
Scatter: Go over the original array, putting each object in its bucket.
Sort each non-empty bucket.
Gather: Visit the buckets in order and put all elements back into the original array.
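A minimal Python sketch of these four steps, assuming the inputs are floats in [0, 1) and that each bucket is sorted with Python's built-in sort:

def bucket_sort(values, num_buckets=10):
    """Bucket sort for floats in [0, 1)."""
    if not values:
        return values
    buckets = [[] for _ in range(num_buckets)]   # 1. set up empty buckets
    for x in values:                             # 2. scatter: put each value in its bucket
        buckets[int(x * num_buckets)].append(x)
    for b in buckets:                            # 3. sort each non-empty bucket
        b.sort()
    return [x for b in buckets for x in b]       # 4. gather buckets back in order

# Example: bucket_sort([0.42, 0.32, 0.73, 0.12, 0.94]) -> [0.12, 0.32, 0.42, 0.73, 0.94]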
Cubesort description
Cubesort is a parallel sorting algorithm that builds a self-balancing multi-dimensional array from the keys to be sorted. As the axes are of similar length, the structure resembles a cube. After each key is inserted, the cube can be rapidly converted to an array.[1]
Timsort description
Timsort is a hybrid stable sorting algorithm, derived from merge sort and insertion sort, designed to perform well on many kinds of real-world data. It uses techniques from Peter McIlroy’s “Optimistic Sorting and Information Theoretic Complexity”. The algorithm finds subsequences of the data that are already ordered, and uses that knowledge to sort the remainder more efficiently. This is done by merging an identified subsequence, called a run, with existing runs until certain criteria are fulfilled.
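Real Timsort (with minimum run lengths, a run stack with merge invariants, and galloping mode) is considerably more involved; the following Python sketch only illustrates the run-finding-and-merging idea described above and is not the actual algorithm.

def natural_merge_sort(a):
    """Simplified illustration of run-based merging: find already-ordered runs,
    then merge them pairwise until one sorted run remains."""
    if len(a) <= 1:
        return list(a)
    # 1. Split the input into maximal non-decreasing runs.
    runs, run = [], [a[0]]
    for x in a[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    # 2. Stable two-way merge of two sorted runs.
    def merge(left, right):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]
    # 3. Merge runs pairwise until a single run remains.
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0]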
Counting sort description
Counting sort is a sorting technique for keys within a specific range. It works by counting the number of objects with each distinct key value, then using arithmetic on those counts to determine the position of each object in the output sequence.
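A minimal Python sketch of counting sort, assuming integer keys in the range 0..k-1:

def counting_sort(values, k):
    """Counting sort for integer keys in 0..k-1: O(n + k) time, O(k) counts plus O(n) output."""
    counts = [0] * k
    for v in values:                  # count occurrences of each key
        counts[v] += 1
    starts, total = [0] * k, 0
    for key in range(k):              # prefix sums give each key's first output position
        starts[key] = total
        total += counts[key]
    out = [None] * len(values)
    for v in values:                  # place each value at its key's next position (stable)
        out[starts[v]] = v
        starts[v] += 1
    return out

# Example: counting_sort([4, 1, 3, 4, 3], k=5) -> [1, 3, 3, 4, 4]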