Algorithms and Data Structures Flashcards
Linear Search
Linear Search
O(n) Worst Case
Start from the leftmost element of arr[] and one by one compare x with each element of arr[]
for (int i = 0; i < n; i++)
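The line above is only the loop header; a complete sketch (the function name `linearSearch` is assumed here):

```cpp
// Return the index of the first occurrence of x in arr[0..n-1], or -1.
int linearSearch(const int arr[], int n, int x) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}
```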
Binary Search
Binary Search
O(log n)
Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.
int binarySearch(int arr[], int l, int r, int x) {
    if (r >= l) {
        int mid = l + (r - l) / 2;
        if (arr[mid] == x) return mid;
        if (arr[mid] > x) return binarySearch(arr, l, mid - 1, x);
        return binarySearch(arr, mid + 1, r, x);
    }
    return -1; // not present
}
Jump Search
Jump Search
O(√n)
The basic idea is to check fewer elements (than linear search) by jumping ahead by fixed steps or skipping some elements in place of searching all elements. The best step size is m = √n.
int jumpSearch(int arr[], int x, int n) { // x = value to find, n = size
    int step = sqrt(n); // block size to be jumped
    int prev = 0;

    // Jump ahead in blocks until the block's last element >= x
    while (arr[min(step, n) - 1] < x) {
        prev = step;
        step += sqrt(n);
        if (prev >= n)
            return -1;
    }

    // Do a linear search for x in the block beginning at prev
    while (arr[prev] < x) {
        prev++;
        // If we reached the next block or the end of the
        // array, the element is not present
        if (prev == min(step, n))
            return -1;
    }

    // If the element is found
    if (arr[prev] == x)
        return prev;
    return -1;
}
Interpolation Search
Interpolation Search
O(n) worst case
O(log(log n)) on average, if the elements are uniformly distributed
Interpolation Search is an improvement over Binary Search for instances where the values in a sorted array are uniformly distributed. Where Binary Search always probes the middle, Interpolation Search may probe different locations according to the value of the key being searched. For example, if the key is close to the last element, Interpolation Search is likely to start the search toward the end.
To find the position to be searched, it uses the following formula.
// The idea of the formula is to return a higher value of pos
// when the element to be searched is closer to arr[hi], and a
// smaller value when it is closer to arr[lo].
pos = lo + [ (x - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo]) ]
int interpolationSearch(int arr[], int n, int x) {
    // Find indexes of two corners
    int lo = 0, hi = (n - 1);

    // Since the array is sorted, an element present in the
    // array must lie in the range between lo and hi
    while (lo <= hi && x >= arr[lo] && x <= arr[hi]) {
        // Handle the one-element (or empty) range
        if (lo == hi) {
            if (arr[lo] == x) return lo;
            return -1;
        }

        // Probe the position, keeping uniform distribution in mind
        int pos = lo + (((double)(hi - lo) / (arr[hi] - arr[lo])) * (x - arr[lo]));

        // Target found
        if (arr[pos] == x)
            return pos;

        // If arr[pos] is smaller than x, x is in the upper part
        if (arr[pos] < x)
            lo = pos + 1;
        // If x is smaller, x is in the lower part
        else
            hi = pos - 1;
    }
    return -1;
}
Exponential Search
Exponential Search
O(log n) worst case
Exponential search involves two steps:
Find range where element is present
Do Binary Search in above found range.
The idea is to start with subarray size 1, compare its last element with x, then try size 2, then 4, and so on, until the last element of the subarray is greater than x (or the end of the array is reached).
Once we find an index i (after repeated doubling of i), we know that the element must be present between i/2 and i.
int exponentialSearch(int arr[], int n, int x) {
    // If x is present at the first location itself
    if (arr[0] == x)
        return 0;

    // Find range for binary search by repeated doubling
    int i = 1;
    while (i < n && arr[i] <= x)
        i = i * 2;

    // Call binary search for the found range: i/2 is the last
    // iteration's index, and we take the smaller of i and n-1
    return binarySearch(arr, i / 2, min(i, n - 1), x);
}
Ternary Search
Ternary Search, aka 3 part search
Time complexity (c is an arbitrary constant):
Ternary search = 4c*log3(n) + O(1)
Binary search = 2c*log2(n) + O(1)
Since the ratio (4*log3(n)) / (2*log2(n)) = 2/log2(3) > 1, Ternary Search does MORE comparisons than Binary Search in the worst case.
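The card quotes the comparison counts but shows no ternary search code; a minimal recursive sketch (function name assumed):

```cpp
// Recursive ternary search on a sorted array: split [l..r] at two
// midpoints into three parts and recurse into the part that can hold x.
int ternarySearch(const int arr[], int l, int r, int x) {
    if (r < l) return -1;
    int mid1 = l + (r - l) / 3;
    int mid2 = r - (r - l) / 3;
    if (arr[mid1] == x) return mid1;
    if (arr[mid2] == x) return mid2;
    if (x < arr[mid1]) return ternarySearch(arr, l, mid1 - 1, x);
    if (x > arr[mid2]) return ternarySearch(arr, mid2 + 1, r, x);
    return ternarySearch(arr, mid1 + 1, mid2 - 1, x);
}
```

Each call does up to four comparisons to cut the range to a third, which is where the 4c*log3(n) above comes from.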
// A recursive binary search function. It returns the location of x
// in the given array arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x) {
    if (r >= l) {
        int mid = l + (r - l) / 2;

        // If the element is present at the middle itself
        if (arr[mid] == x)
            return mid;

        // If the element is smaller than mid, it can only be
        // present in the left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present in the right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // The element is not present in the array
    return -1;
}
Selection Sort
Selection Sort
O(n**2) worst case, as there are two nested loops.
The good thing about selection sort is it never makes more than O(n) swaps and can be useful when memory write is a costly operation.
Scan the unsorted part of the array linearly to find its minimum element, swap that minimum with the first unsorted element, then advance the boundary of the sorted part by one and repeat.
void swap(int *xp, int *yp) {
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

void selectionSort(int arr[], int n) {
    int i, j, min_idx;

    // One by one move the boundary of the unsorted subarray
    for (i = 0; i < n - 1; i++) {
        // Find the minimum element in the unsorted part
        min_idx = i;
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        // Swap the found minimum element with the first element
        swap(&arr[min_idx], &arr[i]);
    }
}
Bubble Sort
Bubble Sort Worst and Average Case Time Complexity: O(n**2). Best case O(n) if already sorted (this requires the early-exit swapped-flag optimization).
Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements that are in the wrong order.
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), algorithm does not swap them.
// in main: int n = sizeof(arr) / sizeof(arr[0]);
void swap(int *xp, int *yp) {
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++)
        // Last i elements are already in place
        for (int j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1])
                swap(&arr[j], &arr[j + 1]);
}
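The O(n) best case quoted above needs an early-exit check that the basic version omits; a sketch of the optimized variant (the name `bubbleSortOpt` is assumed):

```cpp
#include <utility> // std::swap

// Bubble sort with an early-exit flag: if a full pass makes no swap,
// the array is already sorted, which gives the O(n) best case.
void bubbleSortOpt(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped)
            break; // no swaps in this pass: already sorted
    }
}
```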
Insertion Sort
Insertion Sort Worst and Average Case Time Complexity: O(n**2). Best case O(n) if already sorted. A better variant is Binary Insertion Sort, which cuts the search step at the i-th iteration to O(log i) comparisons.
To sort an array of size n in ascending order:
1: Iterate from arr[1] to arr[n] over the array.
2: Compare the current element (key) to its predecessor.
3: If the key element is smaller than its predecessor, compare it to the elements before. Move the greater elements one position up to make space for the swapped element.
i = 2. 13 will remain at its position, as all elements in A[0..i-1] are smaller than 13.
11, 12, 13, 5, 6
i = 3. 5 will move to the beginning and all other elements from 11 to 13 will move one position ahead of their current position.
5, 11, 12, 13, 6
void insertionSort(int arr[], int n) {
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;

        /* Move elements of arr[0..i-1] that are greater
           than key one position ahead of their current
           position, to make room for key */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
Binary Insertion Sort
Binary Insertion Sort uses binary search to cut comparisons to O(log n) per insertion (O(n log n) comparisons total), but the worst case is still O(n**2) because elements must still be shifted. If already sorted: no shifts are needed, only the comparisons.
do a binary search first, then insertion sort(insert and move all elements down) the element.
int binarySearch(int a[], int item, int low, int high) {
    if (high <= low)
        return (item > a[low]) ? (low + 1) : low;

    int mid = (low + high) / 2;
    if (item == a[mid])
        return mid + 1;

    if (item > a[mid])
        return binarySearch(a, item, mid + 1, high);
    return binarySearch(a, item, low, mid - 1);
}

void insertionSort(int a[], int n) {
    int i, loc, j, selected;

    for (i = 1; i < n; ++i) {
        j = i - 1;
        selected = a[i];

        // Find the location where selected should be inserted
        loc = binarySearch(a, selected, 0, j);

        // Move all elements after the location to create space
        while (j >= loc) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = selected;
    }
}
int main() {
    int a[] = {37, 23, 0, 17, 12, 72, 31};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
}
Merge Sort
Merge Sort
Time complexity of Merge Sort is:
O(n Log n) in all 3 cases (worst, average, and best) as merge sort always divides the array into two halves and takes linear time to merge two halves.
Merge Sort keeps halving the array recursively until each piece has size 1. Then the merge process starts, merging the pieces back together in sorted order until the array is fully merged.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = l+ (r-l)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)
// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1;
    int n2 = r - m;

    // Create temp arrays (VLAs; in standard C++, use vectors instead)
    int L[n1], R[n2];

    // Copy data to temp arrays L[] and R[]
    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temp arrays back into arr[l..r]
    int i = 0; // index into L[]
    int j = 0; // index into R[]
    int k = l; // index into the merged subarray

    // Take the smaller head of L[] and R[] each step
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy the remaining elements of L[], if any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    // Copy the remaining elements of R[], if any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}
// l is the left index and r the right index of the
// subarray of arr[] to be sorted
void mergeSort(int arr[], int l, int r) {
    if (l >= r)
        return;
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    merge(arr, l, m, r);
}
Binary Heap Sort
Binary Heap Sort. Time complexity of heapify: O(log n). Time complexity of building the heap (createAndBuildHeap()): O(n). Overall time complexity of Heap Sort: O(n log n).
The array is viewed as a binary tree: the parent at index i has children at 2i+1 and 2i+2. Build a max-heap by heapifying from the last parent up to the root, so the largest element ends up at index 0. Then repeatedly swap the root with the last element of the heap, shrink the heap by one, and heapify the root again; the sorted portion grows from the back of the array.
Input data: 4, 10, 3, 5, 1
         4(0)
        /    \
    10(1)    3(2)
    /   \
 5(3)   1(4)
The numbers in the bracket represent the indices in the array
representation of data.
Applying heapify procedure to index 1:
         4(0)
        /    \
    10(1)    3(2)
    /   \
 5(3)   1(4)
Applying heapify procedure to index 0:
        10(0)
        /    \
     5(1)    3(2)
    /   \
 4(3)   1(4)

The heapify procedure calls itself recursively to build the heap in a top-down manner.
// To heapify a subtree rooted at node i, where i is
// an index in arr[] and n is the size of the heap
void heapify(int arr[], int n, int i) {
    int largest = i;   // Initialize largest as root
    int l = 2 * i + 1; // left child
    int r = 2 * i + 2; // right child

    // If left child is larger than root
    if (l < n && arr[l] > arr[largest])
        largest = l;

    // If right child is larger than largest so far
    if (r < n && arr[r] > arr[largest])
        largest = r;

    // If largest is not root
    if (largest != i) {
        swap(arr[i], arr[largest]);

        // Recursively heapify the affected subtree
        heapify(arr, n, largest);
    }
}

// Main function to do heap sort
void heapSort(int arr[], int n) {
    // Build heap (rearrange array)
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);

    // One by one extract elements from the heap
    for (int i = n - 1; i > 0; i--) {
        // Move current root to end
        swap(arr[0], arr[i]);

        // Call max heapify on the reduced heap
        heapify(arr, i, 0);
    }
}
Quick Sort/Pivot Swap Sort
Quick Sort/Pivot Swap Sort Worst Case: O(n**2) Average and Best Case: O(n Logn)
Merge Sort is preferred over Quick Sort for Linked Lists
Unlike arrays, linked list nodes need not be adjacent in memory, and we can insert an item in the middle of a list in O(1) time with O(1) extra space. Therefore the merge operation of Merge Sort can be implemented for linked lists without extra space.
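To illustrate the claim, a sketch of the O(1)-extra-space merge step on singly linked lists (the `Node` struct and function name are assumptions, not from the card):

```cpp
// Minimal singly linked list node (name assumed for illustration).
struct Node {
    int data;
    Node* next;
};

// Merge two sorted lists by relinking existing nodes: no temp arrays,
// O(1) extra space, which is why merge sort suits linked lists.
Node* mergeSorted(Node* a, Node* b) {
    Node dummy{0, nullptr};
    Node* tail = &dummy;
    while (a && b) {
        // Pick whichever head is smaller and splice it onto the result
        Node*& smaller = (a->data <= b->data) ? a : b;
        tail->next = smaller;
        tail = smaller;
        smaller = smaller->next;
    }
    tail->next = a ? a : b; // append the leftover list
    return dummy.next;
}
```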
QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot and partitions the given array around the picked pivot. There are many different versions of quickSort that pick pivot in different ways.
Always pick the first element as pivot.
Always pick the last element as the pivot (implemented below)
Pick a random element as a pivot.
Pick the median of three (first, middle, and last elements) as the pivot.
https://www.youtube.com/watch?v=PgBzjlCcFvc
/* This function takes the last element as pivot, places
   the pivot element at its correct position in the sorted
   array, and places all smaller elements to the left of
   the pivot and all greater elements to the right */
int partition(int arr[], int low, int high) {
    int pivot = arr[high]; // pivot
    int i = (low - 1);     // index of smaller element; indicates the
                           // right position of pivot found so far

    for (int j = low; j <= high - 1; j++) {
        // If the current element is smaller than the pivot,
        // increment the index of the smaller element and swap
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}
/* The main function that implements QuickSort
   arr[] --> array to be sorted,
   low   --> starting index,
   high  --> ending index */
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        /* pi is the partitioning index; arr[pi] is now
           at the right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before and after the partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
Radix Sort
Radix Sort
Complexity Analysis:
The worst complexity is:
O((n + b) * logb(maxx))
where b is the base used to represent the numbers (10 when sorting by decimal digits: 10**1, 10**2, 10**3, ...) and maxx is the maximum element of the input array.
This is clearly visible as we make (n + b) iterations logb(maxx) times (number of digits in the maximum element).
If maxx <= n**c, then the complexity can be written as:
O(n * logb(n))
Advantages :
- Fast when the keys are short i.e. when the range of the array elements is less.
- Used in suffix array construction algorithms like Manber's algorithm and the DC3 algorithm.
Disadvantages:
- Since Radix Sort depends on digits or letters, Radix Sort is much less flexible than other sorts. Hence, for every different type of data, it needs to be rewritten.
- The constant for Radix sort is greater compared to other sorting algorithms.
- It takes more space compared to Quicksort, which is in-place sorting.
You look at the ones’ column, and sort the numbers, then the 10’s, 100’s, and so on.
Original, unsorted list:
170, 45, 75, 90, 802, 24, 2, 66
Sorting by the least significant digit (1s place) gives:
[*Notice that we keep 802 before 2, because 802 occurred
before 2 in the original list, and similarly for pairs
170 & 90 and 45 & 75.]
170, 90, 802, 2, 24, 45, 75, 66
Sorting by next digit (10s place) gives:
[*Notice that 802 again comes before 2 as 802 comes before
2 in the previous list.]
802, 2, 24, 45, 66, 170, 75, 90
Sorting by the most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802
// A utility function to get the maximum value in arr[]
int getMax(int arr[], int n) {
    int mx = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > mx)
            mx = arr[i];
    return mx;
}
// A function to do counting sort of arr[] according to
// the digit represented by exp (exp = 10^i for digit i)
void countSort(int arr[], int n, int exp) {
    int output[n]; // output array
    int i, count[10] = { 0 };

    // Store count of occurrences in count[]
    for (i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;

    // Change count[i] so that count[i] now contains the actual
    // position of this digit in output[]
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];

    // Build the output array
    for (i = n - 1; i >= 0; i--) {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }

    // Copy the output array to arr[], so that arr[] now contains
    // the numbers sorted according to the current digit
    for (i = 0; i < n; i++)
        arr[i] = output[i];
}
// The main function that sorts arr[] of size n using Radix Sort
void radixsort(int arr[], int n) {
    // Find the maximum element to know the number of digits
    int m = getMax(arr, n);

    // Do counting sort for every digit. Note that instead of
    // passing the digit number, exp is passed: exp is 10^i,
    // where i is the current digit number
    for (int exp = 1; m / exp > 0; exp *= 10)
        countSort(arr, n, exp);
}
Counting Sort
Counting Sort
O(n+k)
where n is # of elements in array and k is the range.
Counting sort is a sorting technique based on keys within a specific range. It works by counting the number of objects having each distinct key value (a kind of hashing), then taking a running (prefix) sum of the counts, so that each count becomes a placeholder for the position that key will have in the output.
https://www.youtube.com/watch?v=7zuGmKfUt7s
Points to be noted:
- Counting sort is efficient if the range of input data is not significantly greater than the number of objects to be sorted. Consider the situation where the input sequence is between range 1 to 10K and the data is 10, 5, 10K, 5K.
- It is not a comparison-based sort. Its running time is O(n + k), with space proportional to the range of the data.
- It is often used as a sub-routine to another sorting algorithm like radix sort.
- Counting sort uses a partial hashing to count the occurrence of the data object in O(1).
- Counting sort can be extended to work for negative inputs also.
void countSort(vector<int>& arr) {
    int max = *max_element(arr.begin(), arr.end());
    int min = *min_element(arr.begin(), arr.end());
    int range = max - min + 1;

    // arr[i] - min shifts the keys so the smallest maps to bucket 0
    vector<int> count(range), output(arr.size());
    for (int i = 0; i < arr.size(); i++)
        count[arr[i] - min]++;

    // Prefix-sum the counts; count[i] becomes the position just past
    // the last occurrence of key i in the output
    for (int i = 1; i < count.size(); i++)
        count[i] += count[i - 1];

    // Starting from the high side of arr (for stability), place each
    // element at its counted position, then decrement the count
    for (int i = arr.size() - 1; i >= 0; i--) {
        output[count[arr[i] - min] - 1] = arr[i];
        count[arr[i] - min]--;
    }

    for (int i = 0; i < arr.size(); i++)
        arr[i] = output[i];
}
Bucket Sort
Bucket Sort
O(n**2) is worst case
O(n) is average case if uniformly distributed
The worst-case scenario occurs when all the elements are placed in a single bucket. The overall performance would then be dominated by the algorithm used to sort each bucket, which is:
O(n**2) is worst case, similar to insertion sort
making bucket sort less optimal than:
O (n log(n)) comparison sort algorithms, such as merge sort.
If the input is uniformly distributed, it can be done in:
O(n) is average case
bucketSort(arr[], n)
1) Create n empty buckets (or lists).
2) Do the following for every array element arr[i]:
.......a) Insert arr[i] into bucket[n * arr[i]]
3) Concatenate the buckets back into arr[] FIRST (for optimization)
4) THEN sort the complete, nearly sorted array
void bucketSort(float arr[], int n) {
    // 1) Create n empty buckets
    vector<float> b[n];

    // 2) Put array elements in different buckets
    for (int i = 0; i < n; i++) {
        int bi = n * arr[i]; // bucket index for arr[i] in [0, 1)
        b[bi].push_back(arr[i]);
    }

    // 3) Concatenate all buckets into arr[]
    int index = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < b[i].size(); j++)
            arr[index++] = b[i][j];

    // 4) Sort the (nearly sorted) complete array
    sort(arr, arr + n);
}
Shell Sort
Shell Sort
O(n**2) worst case (depends on the gap sequence)
O(n**(5/4)) to O(n**(3/2)) average case
It is NOT stable, because elements are swapped across long gaps, which can reorder equal elements,
but it IS in place.
Shell sort starts with a gap of n/2 and performs a gapped insertion sort: each element is compared with the element one gap behind it and shifted into place if it is smaller, continuing to the end. The gap is then halved and the pass repeated, until the gap is 1.
int shellSort(int arr[], int n) {
    // Start with a big gap, then reduce the gap
    for (int gap = n / 2; gap > 0; gap /= 2) {
        // Do a gapped insertion sort for this gap size. The first
        // gap elements arr[0..gap-1] are already in gapped order;
        // keep adding one more element until the entire array is
        // gap sorted
        for (int i = gap; i < n; i++) {
            // Save arr[i] in temp and make a hole at position i
            int temp = arr[i];

            // Shift earlier gap-sorted elements up until the
            // correct location for arr[i] is found
            int j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];

            // Put temp (the original arr[i]) in its correct location
            arr[j] = temp;
        }
    }
    return 0;
}
Comb Sort
Comb Sort
Comb Sort works better than Bubble Sort on average, but the worst case remains as:
O(n**2)
The gap starts with a large value and shrinks by a factor of 1.3 in every iteration until it reaches the value 1.
gap = (gap*10)/13;
Each pass compares elements one gap apart, left to right, swapping them when out of order. On the next iteration the gap is reduced again by a factor of 1.3, until it reaches 1.
// To find the gap between elements
int getNextGap(int gap) {
    // Shrink gap by the shrink factor 1.3
    gap = (gap * 10) / 13;

    if (gap < 1)
        return 1;
    return gap;
}

// Function to sort a[0..n-1] using Comb Sort
void combSort(int a[], int n) {
    // Initialize gap
    int gap = n;

    // Initialize swapped as true to make sure the loop runs
    bool swapped = true;

    // Keep running while the gap is more than 1 or the last
    // iteration caused a swap
    while (gap != 1 || swapped == true) {
        // Find the next gap
        gap = getNextGap(gap);

        // Reset swapped so we can check whether a swap happens
        swapped = false;

        // Compare all elements with the current gap
        for (int i = 0; i < n - gap; i++) {
            if (a[i] > a[i + gap]) {
                swap(a[i], a[i + gap]);
                swapped = true;
            }
        }
    }
}
Pigeonhole Sort
Pigeonhole Sort
O(n + Range) worst case
Pigeonhole Sort, similar to Bucket Sort
Pigeonhole sorting is a sorting algorithm that is suitable for sorting lists of elements where the number of elements and the number of possible key values/range are approximately the same.
void pigeonholeSort(int arr[], int n) {
    // Find minimum and maximum values in arr[]
    int min = arr[0], max = arr[0];
    for (int i = 1; i < n; i++) {
        if (arr[i] < min) min = arr[i];
        if (arr[i] > max) max = arr[i];
    }
    int range = max - min + 1; // Find range

    // Create an array of vectors of size range. Each vector
    // represents a hole that will contain matching elements.
    vector<int> holes[range];

    // Traverse the input array and put every element in
    // its respective hole
    for (int i = 0; i < n; i++)
        holes[arr[i] - min].push_back(arr[i]);

    // Traverse all holes one by one and copy their
    // elements back into the array
    int index = 0; // index in sorted array
    for (int i = 0; i < range; i++) {
        vector<int>::iterator it;
        for (it = holes[i].begin(); it != holes[i].end(); ++it)
            arr[index++] = *it;
    }
}
Cycle Sort
Cycle Sort
O(n**2) time in all cases
O(n) memory writes: each value is written at most once
Cycle Sort is best suited for situations where memory write or swap operations are costly.
Cycle Sort writes each element directly to its final position: it counts how many elements are smaller than the current item to find its index, places it there, and continues with the displaced element until the cycle returns to the start. It then moves on to the next cycle start.
arr[] = {10, 5, 2, 3}
index = 0 1 2 3
cycle_start = 0
item = 10 = arr[0]
Find the position where we put the item:
    pos = cycle_start
    for i in pos+1 .. n-1: if arr[i] < item: pos += 1
then place the item at pos and rotate the rest of the cycle the same way.

def cycle_sort(array) -> int:
    # Sort the array in place and return the number of writes.
    writes = 0

    # Loop through the array to find cycles to rotate.
    for cycle_start in range(0, len(array) - 1):
        item = array[cycle_start]

        # Find where to put the item.
        pos = cycle_start
        for i in range(cycle_start + 1, len(array)):
            if array[i] < item:
                pos += 1

        # If the item is already there, this is not a cycle.
        if pos == cycle_start:
            continue

        # Otherwise, put the item there or right after any duplicates.
        while item == array[pos]:
            pos += 1
        array[pos], item = item, array[pos]
        writes += 1

        # Rotate the rest of the cycle.
        while pos != cycle_start:
            # Find where to put the item.
            pos = cycle_start
            for i in range(cycle_start + 1, len(array)):
                if array[i] < item:
                    pos += 1

            # Put the item there or right after any duplicates.
            while item == array[pos]:
                pos += 1
            array[pos], item = item, array[pos]
            writes += 1

    return writes
Stability in sorting algorithms
Stability in sorting algorithms
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.
Any comparison-based sorting algorithm which is not stable by nature can be modified to be stable by changing the key comparison operation so that the comparison of two keys considers position as a factor for objects with equal keys.
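As a sketch of that idea: an unstable sort such as std::sort can be made to behave stably by carrying each element's original position and using it as a tiebreaker (the function name and (key, index) pair layout are assumptions for illustration):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Make an unstable comparison-based sort behave stably by breaking
// ties on the element's original position.
void sortStably(std::vector<std::pair<int, int>>& items) {
    // items are (key, original index) pairs
    std::sort(items.begin(), items.end(),
              [](const std::pair<int, int>& a, const std::pair<int, int>& b) {
                  if (a.first != b.first)
                      return a.first < b.first; // compare keys first
                  return a.second < b.second;   // position breaks ties
              });
}
```

Equal keys keep their input order because the comparison never treats two distinct elements as equivalent.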
Which sorting algorithm makes the minimum number of memory writes?
Which sorting algorithm makes the minimum number of memory writes?
Selection Sort makes the least number of writes among common sorts (it makes O(n) swaps). But Cycle Sort almost always makes fewer writes than Selection Sort: in Cycle Sort, each value is either written zero times, if it is already in its correct position, or written one time, to its correct position.
How to Sort a nearly sorted (or K sorted) array?
How to Sort a nearly sorted (or K sorted) array?
We can use Insertion Sort to sort the elements efficiently: each element is at most k positions from its sorted position, so the inner shifting loop runs at most k times, giving O(n*k) overall.
/* Function to sort an array using insertion sort */
void insertionSort(int A[], int size) {
    int i, key, j;
    for (i = 1; i < size; i++) {
        key = A[i];
        j = i - 1;

        /* Move elements of A[0..i-1] that are greater than key
           one position ahead of their current position.
           This loop will run at most k times */
        while (j >= 0 && A[j] > key) {
            A[j + 1] = A[j];
            j = j - 1;
        }
        A[j + 1] = key;
    }
}
BFS Breadth-First Search
BFS Breadth-First Search
O(V + E) time, where V and E are the totals of vertices and edges
O(V) extra space for the queue and discovered labels; E ranges from O(1) to O(V**2) depending on how dense the graph is
It uses a queue (first in, first out). Look at the first node and put all of its neighbors in the queue. Then dequeue the next node and enqueue all of its undiscovered neighbors, and so on, visiting the graph level by level.
procedure BFS(G, root) is
    let Q be a queue
    label root as discovered
    Q.enqueue(root)
    while Q is not empty do
        v := Q.dequeue()
        if v is the goal then
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as discovered then
                label w as discovered
                Q.enqueue(w)
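The pseudocode maps to C++ roughly as follows (adjacency-list representation assumed; this version records the visit order instead of searching for a goal):

```cpp
#include <queue>
#include <vector>

// BFS over an adjacency-list graph; returns vertices in visit order.
std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int root) {
    std::vector<bool> discovered(adj.size(), false);
    std::vector<int> order;
    std::queue<int> q;

    discovered[root] = true;
    q.push(root);
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        order.push_back(v); // "visit" v
        for (int w : adj[v]) {
            if (!discovered[w]) {
                discovered[w] = true; // label before enqueueing,
                q.push(w);            // so each node enters Q once
            }
        }
    }
    return order;
}
```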