Algorithms and Data Structures Flashcards

1
Q

Linear Search

A

Linear Search
O(n) Worst Case

Start from the leftmost element of arr[] and one by one compare x with each element of arr[]

for (int i = 0; i < n; i++)  // n is the element count (an int has no .size())
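As a complete function, a minimal sketch (assuming the caller supplies arr, its length n, and the target x):

```
// Returns the index of x in arr[0..n-1], or -1 if x is not present.
int linearSearch(const int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}
```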

2
Q

Binary Search

A

Binary Search
O(log n)

Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.

int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);
        return binarySearch(arr, mid + 1, r, x);
    }
    // Element is not present in the array
    return -1;
}
3
Q

Jump Search

A

Jump Search
O(√n)
The basic idea is to check fewer elements (than linear search) by jumping ahead by fixed steps or skipping some elements in place of searching all elements. The best step size is m = √n.

// Uses sqrt() from <cmath> and min() from <algorithm>
int jumpSearch(int arr[], int x, int n) // x = value to find, n = array size

{
    int step = sqrt(n);  // Finding block size to be jumped
    int prev = 0;
    while (arr[min(step, n)-1] < x)
    {
        prev = step;
        step += sqrt(n);
        if (prev >= n)
            return -1;
    }
// Doing a linear search for x in block
// beginning with prev.
while (arr[prev] < x)
    {
        prev++;
        // If we reached next block or end of
        // array, element is not present.
        if (prev == min(step, n))
            return -1;
    }
// if element is found
if (arr[prev] == x)
        return prev;
return -1;
}
4
Q

Interpolation Search

A

Interpolation Search
O(n) worst case

O(log(log n)) on average, if the elements are uniformly distributed

int pos = lo + (((double)(hi - lo) / (arr[hi] - arr[lo])) * (x - arr[lo]));

Interpolation Search is an improvement over Binary Search for instances where the values in a sorted array are uniformly distributed. Binary Search always probes the middle element, whereas interpolation search may probe different locations according to the value of the key being searched. For example, if the key's value is closer to the last element, interpolation search is likely to start the search toward the end.
To find the position to be searched, it uses the following formula.

// The idea of formula is to return higher value of pos
// when element to be searched is closer to arr[hi]. And
// smaller value when closer to arr[lo]
pos = lo + [ (x - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo]) ]
int interpolationSearch(int arr[], int n, int x)
{
    // Find indexes of two corners
    int lo = 0, hi = (n - 1);
    // Since array is sorted, an element present
    // in array, x must be in range between lo and hi
    while (lo <= hi && x >= arr[lo] && x <= arr[hi])
    {
        // If the range has narrowed to a single element
        if (lo == hi)
        {
            if (arr[lo] == x) return lo;
            return -1;
        }
        // Probing the position with keeping
        // uniform distribution in mind.
        int pos = lo + (((double)(hi - lo) /
            (arr[hi] - arr[lo])) * (x - arr[lo]));
        // Condition if target found
        if (arr[pos] == x)
            return pos;
        // If element is smaller than x, x is in upper part
        if (arr[pos] < x)
            lo = pos + 1;
        // If x is smaller, x is in the lower part
        else
            hi = pos - 1;
    }
    return -1;
}
5
Q

Exponential Search

A

Exponential Search
O(log n) worst case

Exponential search involves two steps:

Find range where element is present
Do Binary Search in above found range.

The idea is to start with subarray size 1, compare its last element with x, then try size 2, then 4, and so on, until the last element of the subarray is greater than x (or the end of the array is reached).
Once we find such an index i (after repeated doubling of i), we know the element must be present between i/2 and i, because arr[i/2] was not greater than x in the previous iteration.

int exponentialSearch(int arr[], int n, int x)
{
    // If x is present at first location itself
    if (arr[0] == x)
        return 0;
    // Find range for binary search by
    // repeated doubling
    int i = 1;
    while (i < n && arr[i] <= x)
        i = i*2;
    // Call binary search on the found range:
    // from i/2 (last size whose end was <= x) to the smaller of i and n-1
    return binarySearch(arr, i/2, min(i, n-1), x);
}
6
Q

Ternary Search

A

Ternary Search, aka 3 part search

Time complexity (c is the cost of one comparison):
Ternary search: 4c·log3(n) + O(1)
Binary search: 2c·log2(n) + O(1)

Since 4c·log3(n) = (2 / log2(3)) · 2c·log2(n), and 2 / log2(3) > 1, Ternary Search does MORE comparisons than Binary Search in the worst case.

// A recursive binary search function (shown for comparison). It returns
// the location of x in arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
   if (r >= l)
   {
        int mid = l + (r - l)/2;
        // If the element is present at the middle itself
        if (arr[mid] == x)  return mid;
        // If element is smaller than mid, then it can only be present
        // in left subarray
        if (arr[mid] > x) return binarySearch(arr, l, mid-1, x);
        // Else the element can only be present in right subarray
        return binarySearch(arr, mid+1, r, x);
   }
   // when the element is not present in the array
   return -1;
}
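The card above shows binary search for comparison; here is a minimal recursive ternary search sketch (an added example, following the same conventions): it probes two midpoints and recurses into one of the three parts.

```
// Recursive ternary search: returns the index of x in arr[l..r], or -1.
int ternarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid1 = l + (r - l) / 3;
        int mid2 = r - (r - l) / 3;
        if (arr[mid1] == x) return mid1;
        if (arr[mid2] == x) return mid2;
        if (x < arr[mid1])
            return ternarySearch(arr, l, mid1 - 1, x);    // left third
        if (x > arr[mid2])
            return ternarySearch(arr, mid2 + 1, r, x);    // right third
        return ternarySearch(arr, mid1 + 1, mid2 - 1, x); // middle third
    }
    return -1;
}
```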
7
Q

Selection Sort

A

Selection Sort
O(n**2) worst case, as there are two nested loops.
The good thing about selection sort is it never makes more than O(n) swaps and can be useful when memory write is a costly operation.

Starting from the front, linearly scan the unsorted part of the array for the minimum element, then swap it with the first unsorted position. Repeat, moving the boundary of the sorted part one element to the right each pass.

void swap(int *xp, int *yp) 
{ 
// three-step swap through a temporary: temp <- x, x <- y, y <- temp
    int temp = *xp; 
    *xp = *yp; 
    *yp = temp; 
} 
void selectionSort(int arr[], int n) 
{ 
    int i, j, min_idx; 
    // linear search of unsorted subarray 
    for (i = 0; i < n-1; i++) 
    { 
        // Find the minimum element in unsorted array 
        min_idx = i; 
        // scan the rest of the unsorted part for a smaller element
        for (j = i+1; j < n; j++) 
        if (arr[j] < arr[min_idx]) 
            min_idx = j; 
        // Swap the found minimum element with the first element 
        swap(&arr[min_idx], &arr[i]); 
    } 
}
8
Q

Bubble Sort

A
Bubble Sort
Worst and average case time complexity:
O(n**2)
Best case, if it was already sorted:
O(n) (requires the optimized variant that stops once a pass makes no swaps; the code below always does O(n**2) comparisons)

Bubble Sort is the simplest sorting algorithm: it repeatedly swaps adjacent elements that are in the wrong order.

First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), algorithm does not swap them.

// The caller computes the element count: int n = sizeof(arr)/sizeof(arr[0]);
void swap(int *xp, int *yp) 
{ 
    int temp = *xp; 
    *xp = *yp; 
    *yp = temp; 
} 
void bubbleSort(int arr[], int n) 
{ 
    for (int i = 0; i < n-1; i++)     
    // Last i elements are already in place 
    for (int j = 0; j < n-i-1; j++) 
        if (arr[j] > arr[j+1]) 
            swap(&arr[j], &arr[j+1]); 
}
9
Q

Insertion Sort

A
Insertion Sort
Worst and average case time complexity:
O(n**2)
Best case O(n) if it was already sorted.
Binary Insertion Sort improves this by finding each insertion point with binary search in O(log i) comparisons at the i-th iteration (the shifting still costs O(n**2) in the worst case).

To sort an array of size n in ascending order:

1: Iterate from arr[1] to arr[n-1] over the array.
2: Compare the current element (key) to its predecessor.
3: If the key element is smaller than its predecessor, compare it to the elements before. Move the greater elements one position up to make space for the swapped element.

Example: 12, 11, 13, 5, 6 (after i = 1 it is 11, 12, 13, 5, 6)
i = 2. 13 remains at its position, as all elements in A[0..i-1] are smaller than 13.
11, 12, 13, 5, 6
i = 3. 5 moves to the beginning, and all other elements from 11 to 13 move one position ahead of their current position.
5, 11, 12, 13, 6

void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++)
    {
        key = arr[i];
        j = i - 1;
        /* Move elements of arr[0..i-1], that are
        greater than key, to one position ahead
        of their current position */
// j indexes the element before the key.
// While arr[j] is greater than the key,
// shift arr[j] one slot to the right and decrement j.
// Afterwards, the key goes into the gap at j + 1.
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
10
Q

Binary Insertion Sort

A
Binary Insertion Sort
Comparisons per insertion drop to O(log i) at the i-th iteration, thanks to binary search,
but the worst and average case overall remain O(n**2) because of element shifting.
Best case, if it was already sorted:
O(n)

Do a binary search to find the insertion point, then insert the element, moving all later elements down one slot.

int binarySearch(int a[], int item, int low, int high)
{
    if (high <= low)
        return (item > a[low]) ? (low + 1) : low;

    int mid = (low + high) / 2;

    if (item == a[mid])
        return mid + 1;
    if (item > a[mid])
        return binarySearch(a, item, mid + 1, high);
    return binarySearch(a, item, low, mid - 1);
}
void insertionSort(int a[], int n)
{
    int i, loc, j, k, selected;
    for (i = 1; i < n; ++i)
    {
        j = i - 1;
        selected = a[i];
        // find location where selected should be inserted
        loc = binarySearch(a, selected, 0, j);
        // Move all elements after location to create space
        while (j >= loc)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = selected;
    }
}
int main()
{
    int a[] = { 37, 23, 0, 17, 12, 72, 31 }; // sample input
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    return 0;
}
11
Q

Merge Sort

A

Merge Sort
Time complexity of Merge Sort is:
O(n Log n) in all 3 cases (worst, average, and best) as merge sort always divides the array into two halves and takes linear time to merge two halves.

Merge Sort keeps cutting the array in half recursively until each piece has size 1. Then the merge process begins, merging the pieces back together in sorted order until the array is fully merged.

MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = l+ (r-l)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)

// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int n1 = m - l + 1;
    int n2 = r - m;
    // Create temp arrays. Note: variable-length arrays are not standard
    // C++ (some compilers report "expression must have constant value");
    // use std::vector or new[] in portable code.
    int L[n1], R[n2];
    // Copy data to temp arrays L[] and R[]
    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];
// Merge the temp arrays back into arr[l..r]
    // Initial index of first subarray
    int i = 0;
    // Initial index of second subarray
    int j = 0;
    // Initial index of merged subarray (k starts at l, lowercase L)
    int k = l;
// n1 and n2 are the sizes of the left and right halves.
// While both halves still have elements, copy the smaller front
// element (L[i] or R[j]) into arr[k], advancing i or j, then k.
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        }
        else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    // Only one of the two loops below runs: it drains whichever
    // half still has elements remaining.
    // Copy the remaining elements of
    // L[], if there are any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    // Copy the remaining elements of
    // R[], if there are any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}
// l is for left index and r is
// right index of the sub-array
// of arr to be sorted */
void mergeSort(int arr[], int l, int r)
{
    if (l >= r) {
        return; // base case: zero or one element
    }
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    merge(arr, l, m, r);
}
12
Q

Binary Heap Sort

A
Binary Heap Sort
Time complexity of heapify: O(log n)
Time complexity of createAndBuildHeap(): O(n)
Overall time complexity of Heap Sort: O(n log n)

Treat the array as a complete binary tree: for the node at index i, the children are at indices 2i+1 and 2i+2. First build a max-heap bottom-up, sifting each internal node down (from the last parent to the root) so that every parent is larger than its children; afterwards arr[0] holds the largest element. Then repeatedly swap the root with the last element of the heap, shrink the heap by one, and heapify the root again, so the array fills from the back with the largest values and ends up sorted smallest to largest.

Input data: 4, 10, 3, 5, 1
         4(0)
        /   \
     10(1)   3(2)
    /   \
 5(3)    1(4)

The numbers in brackets represent the indices in the array representation of the data.

Applying heapify procedure to index 1:
         4(0)
        /   \
    10(1)    3(2)
    /   \
5(3)    1(4)
Applying heapify procedure to index 0:
        10(0)
        /  \
     5(1)  3(2)
    /   \
 4(3)    1(4)
The heapify procedure calls itself recursively to build the heap
 in a top-down manner.
// To heapify a subtree rooted with node i which is
// an index in arr[]. n is size of heap
void heapify(int arr[], int n, int i)
{
    int largest = i; // Initialize largest as root
    int l = 2 * i + 1; // left = 2*i + 1
    int r = 2 * i + 2; // right = 2*i + 2
    // If left child is larger than root
    if (l < n && arr[l] > arr[largest])
        largest = l;
    // If right child is larger than largest so far
    if (r < n && arr[r] > arr[largest])
        largest = r;
    // If largest is not root
    if (largest != i) {
        swap(arr[i], arr[largest]);
        // Recursively heapify the affected sub-tree
        heapify(arr, n, largest);
    }
}
// main function to do heap sort
void heapSort(int arr[], int n)
{
    // Build heap (rearrange array)
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    // One by one extract an element from heap
    for (int i = n - 1; i > 0; i--) {
        // Move current root to end
        swap(arr[0], arr[i]);
        // call max heapify on the reduced heap
        heapify(arr, i, 0);
    }
}
13
Q

Quick Sort/Pivot Swap Sort

A
Quick Sort/Pivot Swap Sort
Worst Case: 
O(n**2)
Average and Best Case:
O(n Logn)

Merge Sort is preferred over Quick Sort for linked lists.
Unlike arrays, linked list nodes may not be adjacent in memory, and unlike an array, a linked list lets us insert an item in the middle in O(1) extra space and O(1) time (given a pointer to the insertion point).
Therefore, the merge operation of merge sort can be implemented for linked lists without extra space.

QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot and partitions the given array around the picked pivot. There are many different versions of quickSort that pick pivot in different ways.

Always pick the first element as pivot.
Always pick the last element as the pivot (implemented below)
Pick a random element as a pivot.
Pick the median of three (the elements at indices 0, n/2, and n-1) as the pivot.

https://www.youtube.com/watch?v=PgBzjlCcFvc
/* This function takes last element as pivot, places
the pivot element at its correct position in sorted
array, and places all smaller (smaller than pivot)
to left of pivot and all greater elements to right
of pivot */
int partition (int arr[], int low, int high)
{
int pivot = arr[high]; // pivot
int i = (low - 1); // Index of smaller element and indicates the right position of pivot found so far

    // Scan arr[low..high-1], growing the region of elements
    // smaller than the pivot at the front
    for (int j = low; j <= high - 1; j++)
    {
        // If the current element is smaller than the pivot,
        // increment the index of the smaller region,
        // then swap the current element into it
        if (arr[j] < pivot)
        {
            i++; 
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}

/* The main function that implements QuickSort
arr[] --> Array to be sorted,
low --> Starting index,
high --> Ending index */
void quickSort(int arr[], int low, int high)
{
    if (low < high)
    {
        /* pi is the partitioning index; arr[pi] is now
        at the right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
14
Q

Radix Sort

A

Radix Sort
Complexity Analysis:
The worst complexity is:
O((n + b) * logb(maxx))
where b is the base used to represent the numbers (b = 10 for decimal digits) and maxx is the maximum element of the input array.
This is clearly visible, as we make (n + b) iterations logb(maxx) times (the number of digits in the maximum element).

If maxx <= n**c, then the complexity can be written as:
O(n * logb(n))

Advantages :

  1. Fast when the keys are short i.e. when the range of the array elements is less.
  2. Used in suffix array construction algorithms like Manber's algorithm and the DC3 algorithm.

Disadvantages:

  1. Since Radix Sort depends on digits or letters, Radix Sort is much less flexible than other sorts. Hence, for every different type of data, it needs to be rewritten.
  2. The constant for Radix sort is greater compared to other sorting algorithms.
  3. It takes more space compared to Quicksort, which is in-place sorting.

You look at the ones’ column, and sort the numbers, then the 10’s, 100’s, and so on.

Original, unsorted list:
170, 45, 75, 90, 802, 24, 2, 66

Sorting by the least significant digit (1s place) gives:
[*Notice that we keep 802 before 2, because 802 occurred
before 2 in the original list, and similarly for pairs
170 & 90 and 45 & 75.]

170, 90, 802, 2, 24, 45, 75, 66

Sorting by next digit (10s place) gives:
[*Notice that 802 again comes before 2 as 802 comes before
2 in the previous list.]

802, 2, 24, 45, 66, 170, 75, 90

Sorting by the most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802

// A utility function to get maximum value in arr[]
int getMax(int arr[], int n)
{
    int mx = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > mx)
            mx = arr[i];
    return mx;
}
// A function to do counting sort of arr[] according to
// the digit represented by exp.
void countSort(int arr[], int n, int exp)
{
    int output[n]; // output array (VLA; use std::vector in portable C++)
    int i, count[10] = { 0 }; // one counter per possible digit 0..9
    // Store count of occurrences in count[]
    // exp is exponent for 10**i
    for (i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    // Change count[i] so that count[i] now contains actual
    //  position of this digit in output[]
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];
    // Build the output array
    for (i = n - 1; i >= 0; i--) {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }
    // Copy the output array to arr[], so that arr[] now
    // contains sorted numbers according to current digit
    for (i = 0; i < n; i++)
        arr[i] = output[i];
}
// The main function to that sorts arr[] of size n using
// Radix Sort
void radixsort(int arr[], int n)
{
    // Find the maximum number to know the number of digits
    int m = getMax(arr, n);
    // Do counting sort for every digit. Note that instead
    // of passing digit number, exp is passed. exp is 10^i
    // where i is current digit number
    // Loop while m / exp > 0, i.e. while the maximum still has a digit
    // at this position; exp grows by a factor of 10 each pass
    for (int exp = 1; m / exp > 0; exp *= 10)
        countSort(arr, n, exp);
}
15
Q

Counting Sort

A

Counting Sort
O(n+k)
where n is # of elements in array and k is the range.

Counting sort is a sorting technique based on keys between a specific range. It works by counting the number of objects having each distinct key value (a kind of hashing). The counts can then either be consumed directly, emitting each key count times from smallest to largest, or turned into running totals so that each count becomes a placeholder for the position the element will take in the output (the variant used below).
https://www.youtube.com/watch?v=7zuGmKfUt7s

Points to be noted:

  1. Counting sort is efficient if the range of input data is not significantly greater than the number of objects to be sorted. Consider the situation where the input sequence is between range 1 to 10K and the data is 10, 5, 10K, 5K.
  2. It is not a comparison-based sort. Its running time complexity is O(n), with space proportional to the range of the data.
  3. It is often used as a sub-routine to another sorting algorithm like radix sort.
  4. Counting sort uses a partial hashing to count the occurrence of the data object in O(1).
  5. Counting sort can be extended to work for negative inputs also.
void countSort(vector<int>& arr)
{
    int max = *max_element(arr.begin(), arr.end());
    int min = *min_element(arr.begin(), arr.end());
    int range = max - min + 1;

    // arr[i] - min maps the smallest value to bucket 0;
    // each bucket counts one distinct key value
    vector<int> count(range), output(arr.size());
    for (int i = 0; i < arr.size(); i++)
        count[arr[i] - min]++;

    // cumulatively add up the counts start to finish; each count
    // now marks the position just past that key's last output slot
    for (int i = 1; i < count.size(); i++)
        count[i] += count[i - 1];

    // starting from the high side of arr (this keeps the sort stable),
    // place each value at its counted position, then decrement the count
    for (int i = arr.size() - 1; i >= 0; i--) {
        output[count[arr[i] - min] - 1] = arr[i];
        count[arr[i] - min]--;
    }

    for (int i = 0; i < arr.size(); i++)
        arr[i] = output[i];
}
16
Q

Bucket Sort

A

Bucket Sort

O(n**2) worst case
O(n) average case if the input is uniformly distributed

The worst-case scenario occurs when all the elements land in a single bucket. The overall performance is then dominated by the algorithm used to sort each bucket, which is:
O(n**2) worst case, similar to insertion sort

making bucket sort less optimal than:
O(n log n) comparison sort algorithms, such as merge sort.
If the input is uniformly distributed, it runs in:
O(n) average case

bucketSort(arr[], n)
1) Create n empty buckets (Or lists).
2) Do following for every array element arr[i].
…….a) Insert arr[i] into bucket[n*array[i]]
3) Put buckets back into array FIRST (for optimization)
4) THEN run insertion sort over complete array

void bucketSort(float arr[], int n)
{
    // 1) Create n empty buckets
    vector<float> b[n]; // VLA of vectors: non-standard, common in examples

    // 2) Put array elements in different buckets
    for (int i = 0; i < n; i++) {
        int bi = n * arr[i];     // bucket index (assumes 0 <= arr[i] < 1)
        b[bi].push_back(arr[i]); // push the value into that bucket
    }

    // 3) Concatenate all buckets back into arr[]
    int index = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < b[i].size(); j++)
            arr[index++] = b[i][j];

    // 4) Sort the (now nearly grouped) array;
    // sorting each bucket individually also works
    sort(arr, arr + n);
}

17
Q

Shell Sort

A

Shell Sort
O(n**2) worst case

Average case depends on the gap sequence:
O(n^(5/4)) to O(n^(3/2))

It is NOT stable, because elements jump across long gaps, so equal keys can pass each other,
but it IS in place.

Shell sort starts with a gap of n/2 and does a gapped insertion sort: each element is compared with the element one gap behind it and moved back while it is smaller, repeating to the end of the array. The gap is then halved and the pass repeated, continuing until the gap is 1 (a plain insertion sort on an almost-sorted array).

int shellSort(int arr[], int n)
{
    // Start with a big gap, then reduce the gap
    for (int gap = n/2; gap > 0; gap /= 2)
    {
        // Do a gapped insertion sort for this gap size.
        // The first gap elements a[0..gap-1] are already in gapped order
        // keep adding one more element until the entire array is
        // gap sorted 
        for (int i = gap; i < n; i++)
        {
            // add a[i] to the elements that have been gap sorted
            // save a[i] in temp and make a hole at position i
            int temp = arr[i];
            // shift earlier gap-sorted elements up until the correct 
            // location for a[i] is found
            int j;            
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];
            //  put temp (the original a[i]) in its correct location
            arr[j] = temp;
        }
    }
    return 0;
}
18
Q

Comb Sort

A

Comb Sort
Comb Sort works better than Bubble Sort on average, but the worst case remains as:
O(n**2)

The gap starts with a large value and shrinks by a factor of 1.3 in every iteration until it reaches the value 1.
gap = (gap*10)/13;

The gap starts at n. In each pass, every element is compared with the element one gap ahead and swapped if it is larger, like bubble sort with a wider reach. After each pass the gap is shrunk again by a ratio of 1.3, and passes continue until the gap is 1 and no swaps occur.

// To find gap between elements
int getNextGap(int gap)
{
    // Shrink gap by the shrink factor
    gap = (gap * 10) / 13;

    if (gap < 1)
        return 1;
    return gap;
}
// Function to sort a[0..n-1] using Comb Sort
void combSort(int a[], int n)
{
    // Initialize gap
    int gap = n;
    // Initialize swapped as true to make sure that
    // loop runs
    bool swapped = true;
    // Keep running while gap is more than 1 and last
    // iteration caused a swap
    while (gap != 1 || swapped == true)
    {
        // Find next gap (shrink by a factor of ~1.3)
        gap = getNextGap(gap);
        // Initialize swapped as false so that we can
        // check if a swap happened or not
        swapped = false;
        // Compare all elements with current gap
        for (int i = 0; i < n - gap; i++)
        {
            if (a[i] > a[i+gap])
            {
                swap(a[i], a[i+gap]);
                swapped = true;
            }
        }
    }
}
19
Q

Pigeonhole Sort

A

Pigeonhole Sort
O(n + Range) worst case

Pigeonhole Sort, similar to Bucket Sort
Pigeonhole sorting is a sorting algorithm that is suitable for sorting lists of elements where the number of elements and the number of possible key values/range are approximately the same.

void pigeonholeSort(int arr[], int n)
{
    // Find minimum and maximum values in arr[]
    int min = arr[0], max = arr[0];
    for (int i = 1; i < n; i++)
    {
        if (arr[i] < min)
            min = arr[i];
        if (arr[i] > max)
            max = arr[i];
    }
    int range = max - min + 1; // Find range
    // Create an array of vectors. Size of array
    // range. Each vector represents a hole that
    // is going to contain matching elements.
    vector<int> holes[range];
    // Traverse through input array and put every
    // element in its respective hole
    for (int i = 0; i < n; i++)
        holes[arr[i]-min].push_back(arr[i]);
    // Traverse through all holes one by one. For
    // every hole, take its elements and put in
    // array.
    int index = 0;  // index in sorted array
    for (int i = 0; i < range; i++)
    {
       vector<int>::iterator it;
       for (it = holes[i].begin(); it != holes[i].end(); ++it)
            arr[index++]  = *it;
    }
}
20
Q

Cycle Sort

A

Cycle Sort
O(n**2) comparisons in all cases,
but only O(n) memory writes, since each element is written at most once, directly into its final position.

Cycle Sort is best suited for situations where memory write or swap operations are costly.

Cycle Sort takes the element at the start of a cycle, counts how many elements are smaller than it to find its final index, and places it there; the displaced element is then placed the same way, and so on until the cycle closes back at the start. It then moves on to the next cycle start.

arr[] = {10, 5, 2, 3}
index =   0   1  2  3
cycle_start = 0
item = 10 = arr[0]

Find the position where we put the item:
pos = cycle_start
i = pos + 1
while i < n: if arr[i] < item, then pos += 1
Here 5, 2 and 3 are all smaller than 10, so pos becomes 3 and 10 is written to index 3.

def cycle_sort(array) -> int:
    """Sort an array in place and return the number of writes."""
    writes = 0
    # Loop through the array to find cycles to rotate.
    for cycle_start in range(0, len(array) - 1):
        item = array[cycle_start]
        # Find where to put the item.
        pos = cycle_start
        for i in range(cycle_start + 1, len(array)):
            if array[i] < item:
                pos += 1
        # If the item is already there, this is not a cycle.
        if pos == cycle_start:
            continue
        # Otherwise, put the item there or right after any duplicates.
        while item == array[pos]:
            pos += 1

        array[pos], item = item, array[pos]
        writes += 1

        # Rotate the rest of the cycle.
        while pos != cycle_start:
            # Find where to put the item.
            pos = cycle_start
            for i in range(cycle_start + 1, len(array)):
                if array[i] < item:
                    pos += 1

            # Put the item there or right after any duplicates.
            while item == array[pos]:
                pos += 1
            array[pos], item = item, array[pos]
            writes += 1

    return writes
21
Q

Stability in sorting algorithms

A

Stability in sorting algorithms
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.

Any comparison-based sorting algorithm which is not stable by nature can be modified to be stable by changing the key comparison operation so that the comparison of two keys considers position as a factor for objects with equal keys.
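A small illustration of the idea (an added sketch, not from the original card): sorting (key, tag) pairs with std::stable_sort preserves the input order of equal keys.

```
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

int main()
{
    // Two elements share the key 2; a stable sort keeps 'a' before 'b'.
    vector<pair<int, char>> v = { {2, 'a'}, {1, 'c'}, {2, 'b'} };
    stable_sort(v.begin(), v.end(),
                [](auto& x, auto& y) { return x.first < y.first; });
    for (auto& p : v)
        cout << p.first << p.second << " "; // prints: 1c 2a 2b
}
```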

22
Q

Which sorting algorithm makes the minimum number of memory writes?

A

Which sorting algorithm makes the minimum number of memory writes?

Selection Sort makes the fewest writes among the common sorts (it makes O(n) swaps). But Cycle Sort almost always makes fewer writes than Selection Sort: in Cycle Sort, each value is either written zero times, if it's already in its correct position, or written one time, directly to its correct position.

23
Q

How to Sort a nearly sorted (or K sorted) array?

A

How to Sort a nearly sorted (or K sorted) array?

We can use Insertion Sort to sort the elements efficiently: since every element is at most k positions from its target, the inner loop runs at most k times, giving O(n*k) overall.

/* Function to sort an array using insertion sort*/
void insertionSort(int A[], int size)
{
   int i, key, j;
   for (i = 1; i < size; i++)
   {
       key = A[i];
       j = i-1;
       /* Move elements of A[0..i-1], that are
          greater than key, to one
          position ahead of their current position.
          This loop will run at most k times */
       while (j >= 0 && A[j] > key)
       {
           A[j+1] = A[j];
           j = j-1;
       }
       A[j+1] = key;
   }
}
24
Q

BFS Breadth-First Search

A

BFS Breadth-First Search

O(V + E) with an adjacency list, where V and E are the total numbers of vertices and edges.
O(1) to O(V**2) space, depending on how dense the graph is.

BFS uses a queue (first in, first out). It visits the start node and puts all of its neighbors in the queue. It then dequeues the next node, enqueues that node's undiscovered neighbors, and so on, finishing each level of neighbors before moving to the next.

procedure BFS(G, root) is
    let Q be a queue
    label root as discovered
    Q.enqueue(root)
    while Q is not empty do
        v := Q.dequeue()
        if v is the goal then
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as discovered then
                label w as discovered
                Q.enqueue(w)
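A minimal C++ version of the same idea (an added sketch; assumes the graph is given as an adjacency list vector<vector<int>>):

```
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

// Breadth-first traversal from `root` over an adjacency list.
void bfs(const vector<vector<int>>& adj, int root)
{
    vector<bool> discovered(adj.size(), false);
    queue<int> q;
    discovered[root] = true;
    q.push(root);
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        cout << v << " "; // process v
        for (int w : adj[v]) {
            if (!discovered[w]) {
                discovered[w] = true;
                q.push(w);
            }
        }
    }
}
```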
25
Q

DFS Depth-First Search

A

DFS Depth-First Search

O(V + E), where V and E are the total numbers of vertices and edges.
O(1) to O(V**2) space, depending on how dense the graph is.

You start at the root and explore as far as possible along each branch before backtracking. It uses a stack instead of a queue.

```
// recursive implementation of preorder traversal
procedure preorder(treeNode v)
{
    visit(v);
    for each child u of v
        preorder(u);
}
```

```
// to check neighbors first instead of children
procedure dfs(vertex v)
{
    visit(v);
    for each neighbor u of v
        if u is undiscovered
            call dfs(u);
}
```

```
// Data structure to store a graph edge
struct Edge {
    int src, dest;
};

class Graph
{
public:
    // a vector of vectors to represent an adjacency list
    vector<vector<int>> adjList;

    // Graph Constructor
    Graph(vector<Edge> const &edges, int N)
    {
        // resize the vector to hold N elements of type vector<int>
        adjList.resize(N);

        // add edges to the undirected graph
        for (auto &edge: edges)
        {
            adjList[edge.src].push_back(edge.dest);
            adjList[edge.dest].push_back(edge.src);
        }
    }
};

// Function to perform DFS traversal on the graph
void DFS(Graph const &graph, int v, vector<bool> &discovered)
{
    // mark the current node as discovered
    discovered[v] = true;

    // print the current node
    cout << v << " ";

    // do for every edge v --> u
    for (int u: graph.adjList[v])
    {
        // if u is not yet discovered
        if (!discovered[u])
        {
            DFS(graph, u, discovered);
        }
    }
}
```
26
Q

Inorder, Preorder, Postorder Tree Traversals

A

Inorder, Preorder, Postorder Tree Traversals

All are O(n).

Inorder (LNR): traverse the Left subtree, visit the Node, then traverse the Right subtree, recursively.
Preorder (NLR): visit the Node, traverse the Left subtree, then the Right subtree, recursively.
Postorder (LRN): traverse the Left subtree, then the Right subtree, then visit the Node, recursively.
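A minimal recursive sketch of all three orders (an added example; assumes a simple Node struct):

```
#include <iostream>
using namespace std;

struct Node {
    int data;
    Node* left;
    Node* right;
};

void inorder(Node* n)   // LNR
{
    if (!n) return;
    inorder(n->left);
    cout << n->data << " ";
    inorder(n->right);
}

void preorder(Node* n)  // NLR
{
    if (!n) return;
    cout << n->data << " ";
    preorder(n->left);
    preorder(n->right);
}

void postorder(Node* n) // LRN
{
    if (!n) return;
    postorder(n->left);
    postorder(n->right);
    cout << n->data << " ";
}
```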
27
Q

Kruskal’s Minimum Spanning Tree Algorithm

A

Kruskal’s Minimum Spanning Tree Algorithm

Given an undirected, connected, and weighted graph, construct a minimum spanning tree: a greedy algorithm finds a subset of the edges covering every vertex such that the sum of the edge weights is minimal. Each edge is included unless it would create a cycle (A > B > C > A), in which case it is discarded.

```
#include <iostream>
#include <vector>
#include <algorithm>
#include <unordered_map>
using namespace std;

// Data structure to store a graph edge
struct Edge {
    int src, dest, weight;
};

// Comparison object to be used to order the edges
struct compare
{
    bool operator() (Edge const &a, Edge const &b) const {
        return a.weight > b.weight;
    }
};

// A class to represent a disjoint set
class DisjointSet
{
    unordered_map<int, int> parent;

public:
    // perform MakeSet operation
    void makeSet(int N)
    {
        // create `N` disjoint sets (one for each vertex)
        for (int i = 0; i < N; i++) {
            parent[i] = i;
        }
    }

    // Find the root of the set in which element `k` belongs
    int Find(int k)
    {
        // if `k` is root
        if (parent[k] == k) {
            return k;
        }
        // recur for the parent until we find the root
        return Find(parent[k]);
    }

    // Perform Union of two subsets
    void Union(int a, int b)
    {
        // find the roots of the sets in which elements
        // `a` and `b` belong
        int x = Find(a);
        int y = Find(b);

        parent[x] = y;
    }
};

// Function to construct MST using Kruskal's algorithm
vector<Edge> kruskalAlgo(vector<Edge> edges, int N) // no-ref, no-const
{
    // stores the edges present in MST
    vector<Edge> MST;

    // initialize `DisjointSet` class
    DisjointSet ds;

    // create a singleton set for each element of the universe
    ds.makeSet(N);

    // sort edges by decreasing weight so the cheapest edge is at the back
    sort(edges.begin(), edges.end(), compare());

    // MST contains exactly `N-1` edges
    while (MST.size() != N - 1)
    {
        // consider the next edge with minimum weight from the graph
        Edge next_edge = edges.back();
        edges.pop_back();

        // find the roots of the sets to which the two endpoint
        // vertices of the next edge belong
        int x = ds.Find(next_edge.src);
        int y = ds.Find(next_edge.dest);

        // if both endpoints have different parents, they belong to
        // different connected components and can be included in MST
        if (x != y) {
            MST.push_back(next_edge);
            ds.Union(x, y);
        }
    }
    return MST;
}
```
28
Q

Floyd-Warshall's All-Pairs Shortest Path Algorithm

A

Floyd-Warshall's All-Pairs Shortest Path Algorithm

O(|V|**3), where V is the set of vertices.

For a graph whose edge weights w(u, v) may be negative (but with no negative cycles), it finds the shortest directed path weights d(u, v) between every pair of vertices.

```
let dist be a V × V matrix of minimum distances, initialized to INFINITY
for each vertex v
    dist[v][v] = 0
for each edge (u, v)
    dist[u][v] = weight(u, v)

for k from 0 to |V| - 1
    for i from 0 to |V| - 1
        for j from 0 to |V| - 1
            if (dist[i][k] + dist[k][j]) < dist[i][j] then
                dist[i][j] = dist[i][k] + dist[k][j]
```

The adjacency matrix containing the shortest distances is:

 0  -1  -2   0
 4   0   2   4
 5   1   0   2
 3  -1   1   0

The shortest path from:
- vertex 0 to vertex 1 is (0 2 3 1)
- vertex 0 to vertex 2 is (0 2)
- vertex 0 to vertex 3 is (0 2 3)
- vertex 1 to vertex 0 is (1 0)
- vertex 1 to vertex 2 is (1 0 2)
- vertex 1 to vertex 3 is (1 0 2 3)
- vertex 2 to vertex 0 is (2 3 1 0)
- vertex 2 to vertex 1 is (2 3 1)
- vertex 2 to vertex 3 is (2 3)
- vertex 3 to vertex 0 is (3 1 0)
- vertex 3 to vertex 1 is (3 1)
- vertex 3 to vertex 2 is (3 1 0 2)
29
Q

Dijkstra’s Algorithm

A

Dijkstra’s Algorithm

Breadth-first search (BFS) can find the shortest path in an unweighted graph, or in a weighted graph where every edge has the same cost. When edges are weighted differently, BFS generalizes to Uniform-Cost Search: instead of expanding nodes in order of their depth from the root, UCS expands them in order of their cost from the root. A variant of this is Dijkstra's Algorithm: it finds the shortest path between a source node and every other node, and can be stopped early once the fastest route to a particular destination node has been determined.
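A compact sketch of Dijkstra with a min-priority queue (an added example; assumes the graph is an adjacency list of (neighbor, weight) pairs with non-negative weights):

```
#include <limits>
#include <queue>
#include <vector>
using namespace std;

// Returns the shortest distance from `src` to every vertex.
// adj[u] holds pairs (v, w): an edge u -> v of weight w >= 0.
vector<long long> dijkstra(const vector<vector<pair<int,int>>>& adj, int src)
{
    const long long INF = numeric_limits<long long>::max();
    vector<long long> dist(adj.size(), INF);
    // min-heap ordered by tentative distance
    priority_queue<pair<long long,int>,
                   vector<pair<long long,int>>,
                   greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue; // stale queue entry, skip
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) { // found a shorter route to v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```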
30
Q

Linked List

A

Linked List

In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory; instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data and a reference (in other words, a link) to the next node in the sequence. This structure allows for efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that access time is linear (and difficult to pipeline).

```
// C++ code for linked list merge sort
class Node {
public:
    int data;
    Node* next;
};

/* function prototypes */
Node* SortedMerge(Node* a, Node* b);
void FrontBackSplit(Node* source, Node** frontRef, Node** backRef);

/* sorts the linked list by changing next pointers (not data) */
void MergeSort(Node** headRef)
{
    Node* head = *headRef;
    Node* a;
    Node* b;

    /* Base case -- length 0 or 1 */
    if ((head == NULL) || (head->next == NULL)) {
        return;
    }

    /* Split head into 'a' and 'b' sublists */
    FrontBackSplit(head, &a, &b);

    /* Recursively sort the sublists */
    MergeSort(&a);
    MergeSort(&b);

    /* answer = merge the two sorted lists together */
    *headRef = SortedMerge(a, b);
}

/* See https://www.geeksforgeeks.org/?p=3622 for details of this function */
Node* SortedMerge(Node* a, Node* b)
{
    Node* result = NULL;

    /* Base cases: if one list is empty, return the other */
    if (a == NULL)
        return (b);
    else if (b == NULL)
        return (a);

    /* Pick either a or b, and recurse */
    if (a->data <= b->data) {
        result = a; // '->' accesses a member through a pointer
        result->next = SortedMerge(a->next, b);
    }
    else {
        result = b;
        result->next = SortedMerge(a, b->next);
    }
    return (result);
}

/* UTILITY FUNCTIONS */
/* Split the nodes of the given list into front and back halves,
and return the two lists using the reference parameters.
If the length is odd, the extra node should go in the front list.
Uses the fast/slow pointer strategy. */
void FrontBackSplit(Node* source, Node** frontRef, Node** backRef)
{
    Node* fast;
    Node* slow;
    slow = source;
    fast = source->next;

    /* Advance 'fast' two nodes, and 'slow' one node */
    while (fast != NULL) {
        fast = fast->next;
        if (fast != NULL) {
            slow = slow->next;
            fast = fast->next;
        }
    }

    /* 'slow' is before the midpoint in the list, so split it in two
    at that point: frontRef keeps the first half starting at source,
    backRef gets the second half starting after slow, and slow's next
    pointer is cut to terminate the front list. */
    *frontRef = source;
    *backRef = slow->next;
    slow->next = NULL;
}

/* Function to print nodes in a given linked list */
void printList(Node* node)
{
    while (node != NULL) {
        cout << node->data << " ";
        node = node->next;
    }
}

/* Function to insert a node at the beginning of the linked list */
void push(Node** head_ref, int new_data)
{
    // allocates a new node, puts in the data, links the list to the
    // node, then moves the head to point to the new node
    Node* new_node = new Node();
    new_node->data = new_data;
    new_node->next = (*head_ref);
    (*head_ref) = new_node;
}
```
31
Q

Singly Linked List vs Doubly Linked List

A

Singly Linked List vs Doubly Linked List

A singly-linked list node contains some data and a link to the next element, which is enough to maintain the structure. In a doubly-linked list, every node also contains a link to the previous node, allowing traversal in both directions.
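A minimal sketch of the two node types (an added example):

```
// Singly-linked: one forward link per node.
struct SNode {
    int data;
    SNode* next;
};

// Doubly-linked: forward and backward links per node.
struct DNode {
    int data;
    DNode* next;
    DNode* prev;
};
```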
32
Q

Quick Sort for Singly Linked List

A

Quick Sort for Singly Linked List

```
// here the sorting happens exclusively at the end node;
// partition() and getTail() are helpers from the full listing
struct Node* quickSortRecur(struct Node* head, struct Node* end)
{
    // base condition:
    // if the list is empty or head is already the end, return head
    if (!head || head == end)
        return head;

    Node *newHead = NULL, *newEnd = NULL;

    // Partition the list; newHead and newEnd will be
    // updated by the partition function
    struct Node* pivot = partition(head, end, &newHead, &newEnd);

    // If pivot is the smallest element - no need to recur
    // for the left part.
    if (newHead != pivot) {
        // Set the node before the pivot node as NULL
        struct Node* tmp = newHead;
        while (tmp->next != pivot)
            tmp = tmp->next;
        tmp->next = NULL;

        // Recur for the list before pivot
        newHead = quickSortRecur(newHead, tmp);

        // Change next of last node of the left half to pivot
        tmp = getTail(newHead);
        tmp->next = pivot;
    }

    // Recur for the list after the pivot element
    pivot->next = quickSortRecur(pivot->next, newEnd);

    return newHead;
}

// The main function for quick sort. This is a wrapper over
// the recursive function quickSortRecur()
void quickSort(struct Node** headRef)
{
    (*headRef) = quickSortRecur(*headRef, getTail(*headRef));
    return;
}
```
33
Q

Trie aka Digital Tree or Prefix Tree

A

Trie aka Digital Tree or Prefix Tree

A type of search tree: a tree data structure used for locating specific keys from within a set. These keys are most often strings, with links between nodes defined not by the entire key but by individual characters. In order to access a key (to recover its value, change it, or remove it), the trie is traversed depth-first, following the links between nodes, which represent each character in the key.

Example: a trie for the keys "A", "to", "tea", "ted", "ten", "i", "in", and "inn", where each complete English word has an arbitrary integer value associated with it.
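A minimal lowercase-only trie sketch with insert and lookup (an added example, not from the original card):

```
#include <string>
using namespace std;

struct TrieNode {
    TrieNode* children[26] = { nullptr }; // one link per letter 'a'..'z'
    bool isWord = false;                  // true if a key ends here
};

void insert(TrieNode* root, const string& key)
{
    TrieNode* node = root;
    for (char c : key) {
        int i = c - 'a';
        if (!node->children[i])
            node->children[i] = new TrieNode();
        node = node->children[i];
    }
    node->isWord = true;
}

bool search(TrieNode* root, const string& key)
{
    TrieNode* node = root;
    for (char c : key) {
        node = node->children[c - 'a'];
        if (!node)
            return false; // path for this character does not exist
    }
    return node->isWord;
}
```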
34
Q

Graph

A

Graph

A graph is a data structure that consists of the following two components:
1. A finite set of vertices, also called nodes.
2. A finite set of ordered pairs of the form (u, v), called edges. The pair is ordered because (u, v) is not the same as (v, u) in the case of a directed graph (di-graph). The pair (u, v) indicates that there is an edge from vertex u to vertex v. The edges may carry a weight/value/cost.

```
// A utility function to add an edge in an
// undirected graph.
void addEdge(vector<int> adj[], int u, int v)
{
    adj[u].push_back(v);
    adj[v].push_back(u);
}

// A utility function to print the adjacency list
// representation of the graph
void printGraph(vector<int> adj[], int V)
{
    for (int v = 0; v < V; ++v) {
        cout << "\n Adjacency list of vertex " << v << "\n head ";
        for (auto x : adj[v])
            cout << "-> " << x;
        printf("\n");
    }
}

// Driver code
int main()
{
    int V = 5;
    vector<int> adj[V];
    addEdge(adj, 0, 1);
    addEdge(adj, 0, 4);
    addEdge(adj, 1, 2);
    addEdge(adj, 1, 3);
    addEdge(adj, 1, 4);
    addEdge(adj, 2, 3);
    addEdge(adj, 3, 4);
    printGraph(adj, V);
    return 0;
}
```
35
Q

Hashing

A

Hashing

Input goes into a hash function, which produces a hash value that is (ideally) unique to that input. Blockchains build on this: the first block is called the Genesis block, and its hash is included with the new transactions in the next block, and so on, making the chain an unbreakably dependent, secure, immutable, and transparent network.
https://www.youtube.com/watch?v=2BldESGZKB8
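A tiny illustration using the standard library's hash function (an added example; std::hash is not cryptographic, it just shows input in, fixed-size value out):

```
#include <functional>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    hash<string> h;
    // Similar inputs produce very different hash values.
    cout << h("block #1: Alice pays Bob 5") << "\n";
    cout << h("block #1: Alice pays Bob 6") << "\n";
}
```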
36
Q

Bit Manipulation

A

Bit Manipulation

```
AND
0 AND 0 = 0
1 AND 0 = 0
0 AND 1 = 0
1 AND 1 = 1
AND is represented by the ampersand: 1 & 1 = 1
```

```
OR
0 OR 0 = 0
1 OR 0 = 1
0 OR 1 = 1
1 OR 1 = 1
OR is represented by the vertical bar (pipe): 1 | 1 = 1
```

```
XOR
0 XOR 0 = 0
1 XOR 0 = 1
0 XOR 1 = 1
1 XOR 1 = 0
XOR is represented by the upwards caret: 1 ^ 1 = 0
```

```
// XOR swap
int main()
{
    int x = 10, y = 5;
    // Code to swap 'x' (1010) and 'y' (0101)
    x = x ^ y; // x now becomes 15 (1111)
    y = x ^ y; // y becomes 10 (1010)
    x = x ^ y; // x becomes 5 (0101)
    cout << "After Swapping: x = " << x << ", y = " << y;
    return 0;
}
```

```
void swap(int* xp, int* yp)
{
    *xp = *xp ^ *yp;
    *yp = *xp ^ *yp;
    *xp = *xp ^ *yp;
}
```
37
Q

Stack vs Heap

A

Stack vs Heap

The stack holds function calls and local variables/primitives, but not heap-allocated objects; it can hold a pointer such as Object* myObject that points to a part of the heap.

In C, heap memory is used via: malloc, calloc, realloc, free
In C++, via: new, delete

Heap/dynamic memory is a large space, and the runtime determines whether memory is still referenced or has become garbage that can be reclaimed for new allocations. You need the heap when you need state to stay around beyond the current call.
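A small C++ illustration of the difference (an added example):

```
#include <iostream>

struct Point { int x, y; };

int main()
{
    Point onStack{1, 2};             // lives in the current stack frame,
                                     // destroyed automatically on return
    Point* onHeap = new Point{3, 4}; // lives in the heap until freed;
                                     // only the pointer is on the stack
    std::cout << onStack.x << " " << onHeap->y << "\n";
    delete onHeap;                   // manual deallocation (C++: new/delete)
    return 0;
}
```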
38
Q

Recursion

A

Recursion

Recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem (as opposed to iteration). Such problems can generally be solved by iteration, but this requires identifying and indexing the smaller instances at programming time. Recursion solves such recursive problems by using functions that call themselves from within their own code.
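The classic minimal example (an added sketch): factorial defined in terms of a smaller instance of itself, with a base case to stop the recursion.

```
// n! = n * (n-1)!, with the base case 0! = 1
long long factorial(int n)
{
    if (n <= 0)                  // base case: stops the recursion
        return 1;
    return n * factorial(n - 1); // smaller instance of the same problem
}
```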
39
Q

Dynamic Programming

A

Dynamic Programming

Simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. If a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, it is said to have optimal substructure. If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems; in the optimization literature, this relationship is called the Bellman equation.

```
Examples:
Dijkstra's Algo for the shortest path
Fibonacci sequence
a type of balanced 0-1 matrix
checkerboard
sequence alignment
tower of Hanoi
egg dropping puzzle
matrix chain multiplication
```
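A minimal sketch of the memoization idea (an added example): naive recursive Fibonacci recomputes sub-problems exponentially often; caching each sub-problem's value makes it linear.

```
#include <vector>
using namespace std;

// Fibonacci with memoization: each sub-problem F(k) is solved once,
// stored, and reused, turning O(2^n) recursion into O(n).
long long fib(int n, vector<long long>& memo)
{
    if (n <= 1)
        return n;           // base cases F(0) = 0, F(1) = 1
    if (memo[n] != -1)
        return memo[n];     // sub-problem already solved, reuse it
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

long long fib(int n)
{
    vector<long long> memo(n + 1, -1); // -1 marks "not yet computed"
    return fib(n, memo);
}
```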