All kinds of sorts (insertion, merge, heap, counting, radix) Flashcards
what are in-place algorithms, and what are examples of them?
algorithms that rearrange the elements within the array, with at most a constant number of elements stored outside the array at any time
selection sort
bubble sort
insertion sort
heap sort
For searching on sorted data by comparing keys, optimal solutions require
O(log n)
searching on unsorted data by comparing keys, optimal solutions require big-O of
O(n)
what are stable algorithms?
algorithms in which elements with equal keys appear in the output array in the same order as they do in the input array
examples of algorithms that are both in-place and stable
bubble sort and insertion sort
display how insertion sort would work on these numbers: {6, 3, 9, 8, 24, 8}
write pseudocode for insertion sort and write its running time
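A runnable sketch of insertion sort (in Python, my choice of language since the deck only gives pseudocode), using the array {6, 3, 9, 8, 24, 8} from the previous card as the demo input:

```python
def insertion_sort(a):
    """Sort list a in place: O(n) best case (sorted input), O(n^2) worst case."""
    for j in range(1, len(a)):        # a[0..j-1] is already sorted
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:  # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                # drop key into its correct position
    return a

print(insertion_sort([6, 3, 9, 8, 24, 8]))  # [3, 6, 8, 8, 9, 24]
```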
what is insertion sort's running time in the best case and the worst case,
and when does each case happen?
Best case: O(n), a linear function an + b
Worst case: O(n²)
The best case happens when the input is already sorted;
the worst case when it is in reverse (opposite) order
write running time for insertion sort pseudocode
represent insertion sort's worst case mathematically
T or F: In insertion sort, the average case is way better than the worst case.
F
the average case is almost as bad as the worst case
on average, what is the situation in insertion sort?
On average, half the elements in A[1 .. j-1] are less than A[j], and half the elements are greater.
So t_j = j/2
The resulting average-case running time for insertion sort turns out to be
a quadratic function of the input size.
to sort an array, merge sort usually does the following:
To sort an array A[p . . r]:
Divide
Divide the n-element sequence to be sorted into two subsequences of n/2 elements each
Conquer
Sort the subsequences recursively using merge sort.
When the size of the sequences is 1 there is nothing more to do.
Combine
Merge the two sorted subsequences.
merge sort pseudocode
what's the output of merging?
One single sorted subarray A[p . . r]
Idea for merging:
Two piles of sorted cards
Choose the smaller of the two top cards
Remove it and place it in the output pile
Repeat the process until one pile is empty
Take the remaining input pile and place it face-down onto the output pile
MERGE(A, 0, 3, 7)
A={1,6,43,8,0,4,6}
what is the merge pseudocode?
it's okay to look at the code while figuring that out
MERGE(A, p, q, r)
  Compute n1 ← q − p + 1 and n2 ← r − q
  Copy the first n1 elements into L[1 .. n1 + 1]
  and the next n2 elements into R[1 .. n2 + 1]
  L[n1 + 1] ← ∞; R[n2 + 1] ← ∞
  i ← 1; j ← 1
  for k ← p to r
      do if L[i] ≤ R[j]
            then A[k] ← L[i]
                 i ← i + 1
            else A[k] ← R[j]
                 j ← j + 1
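The divide/conquer/combine steps and the MERGE procedure above can be sketched as runnable Python (my translation, not the deck's code): `float('inf')` plays the role of the ∞ sentinels, and 0-based indices replace the 1-based ones.

```python
def merge(a, p, q, r):
    """Merge the sorted subarrays a[p..q] and a[q+1..r] (inclusive, 0-based)."""
    left = a[p:q + 1] + [float('inf')]    # sentinel removes bounds checks
    right = a[q + 1:r + 1] + [float('inf')]
    i = j = 0
    for k in range(p, r + 1):
        if left[i] <= right[j]:           # <= keeps the merge stable
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

def merge_sort(a, p, r):
    """Divide a[p..r] in half, sort each half recursively, then merge."""
    if p < r:
        q = (p + r) // 2
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 2, 3, 4, 5, 6, 7]
```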
Running Time of Merge
Initialization (copying into temporary arrays):
Θ(n1 + n2) = Θ(n)
Adding the elements to the final array:
n iterations, each taking constant time ⇒ Θ(n)
Total time for Merge:
Θ(n)
MERGE-SORT Running Time
Divide:
compute q as the average of p and r: D(n) = Θ(1)
Conquer:
recursively solve 2 sub-problems, each of size n/2 ⇒ 2T (n/2)
Combine:
MERGE on an n-element subarray takes Θ(n) time ⇒ C(n) = Θ(n)
The (binary) heap data structure is an …….. that we can view as a nearly complete binary tree.
array object
Heap data structure is used for:
Heap sort
Priority queue
T or F: The binary heap tree is filled on all levels
F
It is completely filled on all levels except possibly the lowest.
T or F: Each node of the tree corresponds to an element of the array
T
An array A that represents a heap is an object with two attributes:
A.length: number of elements in the array.
A.heap-size: represents how many elements in the heap are stored within array A.
what are the valid elements of the heap?
A[1 .. A.heap-size]
Although A[1..A.length] may contain numbers, only the elements in A[1..A.heap-size], where
0 <= A.heap-size <= A.length, are valid elements of the heap
how to get the index of the:
parent
left child
right child
parent: return ⌊i/2⌋, where i is the index of the element
left: 2i
right: 2i+1
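These index formulas can be checked with a tiny sketch; note they assume 1-based indexing (in a 0-based language the equivalents would be (i-1)//2, 2i+1, and 2i+2):

```python
def parent(i):
    return i // 2      # 1-based: floor(i / 2)

def left(i):
    return 2 * i       # 1-based left child

def right(i):
    return 2 * i + 1   # 1-based right child

# node 3 in a 1-based heap: parent is node 1, children are nodes 6 and 7
print(parent(3), left(3), right(3))  # 1 6 7
```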
Min heap is commonly used to implement
priority queues
The height of a node in a heap is
the number of edges on the longest simple downward path from the node to a leaf.
The height of the heap
is the height of its root
what's the height of a heap tree as a function of n?
Θ(log n)
write pseudocode for MAX-HEAPIFY
The children’s subtrees each have size at most
2n/3
what is the MAX-HEAPIFY recurrence, why is it like this, and what is the big-O of MAX-HEAPIFY?
T(n) ≤ T(2n/3) + Θ(1)
the recurrence uses 2n/3 because when MAX-HEAPIFY calls itself, the subtree it recurses on has size at most 2n/3
big-O of log n
Alternatively, we can characterize the running time of MaxHeapify()
on a node of height h as O(h).
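A runnable MAX-HEAPIFY sketch (0-based Python, my translation of the 1-based pseudocode the deck asks for): it assumes both children's subtrees are already max-heaps and floats A[i] down until the max-heap property holds, costing O(h) = O(log n).

```python
def max_heapify(a, i, heap_size):
    """Restore the max-heap property at index i (0-based)."""
    l, r = 2 * i + 1, 2 * i + 2       # 0-based child indices
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]  # swap, then recurse downward
        max_heapify(a, largest, heap_size)

h = [4, 14, 7, 2, 8, 1]   # both subtrees of the root are already max-heaps
max_heapify(h, 0, len(h))
print(h)  # [14, 8, 7, 2, 4, 1]
```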
Number of nodes in the tree and worst case
Number of nodes in the tree = 1 + (Number of nodes in Left Subtree) + (Number of nodes in Right Subtree)
For our case, where the tree has its last level half full, if we assume that the right subtree is of height h, then the left subtree is of height (h+1):
Number of nodes in Left Subtree = 1 + 2 + 4 + 8 + … + 2^(h+1) = 2^(h+2) − 1 …(i)
Number of nodes in Right Subtree = 1 + 2 + 4 + 8 + … + 2^h = 2^(h+1) − 1 …(ii)
Thus, plugging into:
Number of nodes in the tree = 1 + (Number of nodes in Left Subtree) + (Number of nodes in Right Subtree)
=> N = 1 + (2^(h+2) − 1) + (2^(h+1) − 1)
=> N = 1 + 3(2^(h+1)) − 2
=> N = 3(2^(h+1)) − 1
=> 2^(h+1) = (N + 1)/3
Plugging this value into equation (i), we get:
Number of nodes in Left Subtree = 2^(h+2) − 1 = 2(N+1)/3 − 1 = (2N − 1)/3 < 2N/3
The elements in the subarray ……………. are all leaves of the tree, and so each is a 1-element heap to begin with.
what does the procedure BUILD-MAX-HEAP do?
it goes through the remaining nodes of the tree (other than the leaves A[(⌊n/2⌋+1) .. n]), i.e. i from ⌊n/2⌋ down to 1, and runs MAX-HEAPIFY on each one
implement MAX-HEAPIFY pseudocode
Each call to MAX-HEAPIFY costs O(………) time and
BUILD-MAX-HEAP makes O(………….) calls.
Thus, the running time is O(…………).
thus ………..
Each call to MAX-HEAPIFY costs O(log n) time and
BUILD-MAX-HEAP makes O(n) calls.
Thus, the running time is O(n log n).
Upper bound is not asymptotically tight.
how to know the number of nodes at any height h
at most ⌈n / 2^(h+1)⌉, i.e. n / 2^(h+1) rounded up
describe the heapsort algorithm
The Heapsort algorithm starts by using BuildMaxHeap()
then loops:
Swap A[1] (max element) with the element at A[n] to put the maximum element into its correct position
then
decrement heap-size[A] to discard node n from the heap
But
the children of the root remain max-heaps, while the new root element may violate the max-heap property,
so call MAX-HEAPIFY and repeat, with i going from A.length down to 2
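Putting the pieces together, a self-contained heapsort sketch in Python (0-based; max_heapify and build_max_heap are restated here so the block runs on its own):

```python
def max_heapify(a, i, heap_size):
    """Float a[i] down until the max-heap property holds below it."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def build_max_heap(a):
    # Nodes len(a)//2 .. len(a)-1 are leaves, so start just above them.
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))

def heapsort(a):
    """O(n log n), in place: build a max-heap, then repeatedly extract the max."""
    build_max_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # move the max into its final position
        max_heapify(a, 0, end)        # heap shrank by one; fix the root
    return a

print(heapsort([16, 4, 10, 14, 7, 9, 3, 2, 8, 1]))
```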
heapsort pseudocode
The running time of Heapsort algorithm is
O(n log n):
Build-max-heap takes O(…) time. Why?
O(n log n):
the loop in BUILD-MAX-HEAP runs n times,
each call to MAX-HEAPIFY costing O(log n)
What sorting algorithm combines the better attributes of merge sort (speed) and insertion sort (in place)?
Heapsort,
because it has:
Running time O(n log n)
Sorts in-place
difference between A.length
and A.heap-size
A.length: number of elements in the array.
A.heap-size: represents how many elements in the heap are stored within array A.
the height of a heap can be expressed as
Θ(log n)
T or F: Although A[1..A.length] may contain numbers, only the elements in A[1..A.heap-size] are part of the heap
T
mention the linear sorting algorithms
Counting Sort
Radix Sort
Bucket sort
in merge sort and insertion sort, the sorted order they determine is based on…
comparisons between the input elements
Selection Sort, Insertion Sort: O (….)
Heap Sort, Merge sort: O (……….)
Quicksort: O (……..) [ best and average case] ,
O (……….) [worst case]
Selection Sort, Insertion Sort: O(n²)
Heap Sort, Merge Sort: O(n log n)
Quicksort: O(n log n) [best and average case],
O(n²) [worst case]
what's the issue with comparison algorithms, and what's a solution?
comparison sorts must make Ω(n log n) comparisons in the worst case.
Instead, we can choose a linear sorting algorithm rather than a comparison sorting algorithm for a shorter running time.
Linear sorts make some…
assumptions about the data.
T or F: Linear sorts are the same as comparison sorts
F
they are not the same
counting sort assumptions
1-each of the n input elements is an integer in the range 0 to k
2-assumes that the input consists of integers in a small range.
mention all the arrays counting sort uses and how many loops are needed
arrays:
Input: A[1 . . n], where A[ j]∈{1, 2, …, k} .
Output: B[1 . . n], sorted.
Auxiliary storage: C[1 . . k] .
loops:
for i ← 1 to k
    C[i] ← 0
for i ← 1 to n
    C[A[i]] ← C[A[i]] + 1
for i ← 2 to k
    C[i] ← C[i] + C[i−1]
for i ← n downto 1
    B[C[A[i]]] ← A[i]
    C[A[i]] ← C[A[i]] − 1
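The four loops above can be written as a runnable Python sketch (0-based indexing, with keys in 0..k as in the earlier assumption card rather than 1..k):

```python
def counting_sort(a, k):
    """Stable sort of a, whose elements are integers in 0..k; Theta(n + k)."""
    c = [0] * (k + 1)
    for x in a:                 # count occurrences of each key
        c[x] += 1
    for i in range(1, k + 1):   # now c[i] = number of elements <= i
        c[i] += c[i - 1]
    b = [0] * len(a)
    for x in reversed(a):       # right-to-left keeps the sort stable
        c[x] -= 1
        b[c[x]] = x
    return b

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]
```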
what does counting sort determine?
Counting sort determines, for each input element x, the number of elements less than x.
It uses this information to place element x directly into its position in the output array.
counting sort running time
Θ(k + n)
if k = O(n),
then it's Θ(n), which beats the Ω(n log n) worst-case bound of comparison sorts
T or F: Heap Sort and Quick Sort are considered stable algorithms
F, not stable.
Stable algorithms include Insertion Sort, Merge Sort, and Bubble Sort.
What does “stable” mean?
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.
in radix sort, numbers are sorted from …… to ……
The numbers are sorted from least significant digit to the most significant digit.
how to sort numbers using Radix sort | STEPS
- Find the number of digits in the biggest number.
- If there are d digits in the biggest number, it will need to perform d passes.
- The remaining numbers must be padded with zeros so they all have d digits.
- Then we take 10 buckets labeled 0 – 9.
- Then sort the numbers starting from the least significant digit.
- After the sorting is complete, remove the leading zeros.
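The steps above can be sketched as LSD radix sort in Python (my sketch: lists stand in for the physical buckets, and the zero-padding/unpadding is implicit in integer arithmetic):

```python
def radix_sort(nums):
    """LSD radix sort of non-negative integers; O(d(n + k)) with k = 10."""
    if not nums:
        return nums
    d = len(str(max(nums)))         # number of digits in the biggest number
    exp = 1
    for _ in range(d):              # one stable pass per digit, LSD first
        buckets = [[] for _ in range(10)]
        for x in nums:
            buckets[(x // exp) % 10].append(x)
        nums = [x for b in buckets for x in b]  # concatenate buckets 0..9
        exp *= 10
    return nums

print(radix_sort([329, 457, 657, 839, 436, 720, 355]))
```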
based on what should you choose a sorting algorithm?
Running time (best, worst, and average).
Implementation
Stable or not
Space complexity (in-place or not)
Input size
Format of the input list
Nearly sorted list
One pass of radix sorting takes…… assuming that…… is used
O(n + k), assuming counting sort is used on each digit (one pass means d = 1)
after how many passes should you stop radix sort?
after d passes, where d is the number of digits
so for 0382,
stop after 4 passes since there are 4 digits
what is radix sort's running time, and based on what do we evaluate it?
O(d(n+k))
In Radix, assuming d = O(1) and k = O(n), the running time is
O(n)
radix sort these digits
remember to remove the leading zeros when finished