All Kinds of Sorts: Insertion, Merge, Heap, Counting, Radix Flashcards

1
Q

What are in-place algorithms? Mention examples of them.

A

Algorithms that rearrange the elements within the array, storing at most a constant number of elements outside the array at any time.

selection sort
bubble sort
insertion sort
heap sort

2
Q

For searching on sorted data by comparing keys, optimal solutions require

A

O(log n) (e.g., binary search)
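
For intuition, a minimal binary search sketch in Python (the array is assumed sorted; the function name is illustrative):

def binary_search(a, target):
    # Halve the search range on every comparison: O(log n) comparisons total.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 4, 6, 8, 9, 11], 6))  # 3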

2
Q

For searching on unsorted data by comparing keys, optimal solutions require a big O of

A

O(n)

3
Q

What are stable algorithms?

A

Algorithms in which elements with equal keys appear in the output array in the same relative order as they do in the input array.

4
Q

Examples of algorithms that are both in-place and stable

A

bubble sort and insertion sort

5
Q

Show how insertion sort would work on these numbers: {6, 3, 9, 8, 24, 8}

A
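
A worked trace, assuming standard insertion sort (array shown after each key A[j] is inserted into the sorted prefix to its left):

{6, 3, 9, 8, 24, 8}   start
{3, 6, 9, 8, 24, 8}   insert 3
{3, 6, 9, 8, 24, 8}   insert 9 (already in place)
{3, 6, 8, 9, 24, 8}   insert 8
{3, 6, 8, 9, 24, 8}   insert 24 (already in place)
{3, 6, 8, 8, 9, 24}   insert the second 8 (stability keeps it after the first 8)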
6
Q

Write pseudocode for insertion sort and give its running time.

A
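
A minimal runnable sketch in Python, mirroring the standard pseudocode but adapted to 0-based indexing; worst-case running time is O(n^2), best case O(n):

def insertion_sort(a):
    # Grow a sorted prefix a[0..j-1]; insert a[j] into it on each pass.
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([6, 3, 9, 8, 24, 8]))  # [3, 6, 8, 8, 9, 24]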
6
Q

What is insertion sort's running time in the best case and in the worst case?

When does each case happen?

A

Best case: O(n), a linear function an + b.
Worst case: O(n^2).

The best case happens when the array is already sorted;
the worst case happens when it is sorted in reverse order.

7
Q

Write the running time analysis for the insertion sort pseudocode.

A
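
A condensed sketch of the usual line-by-line analysis: T(n) = the sum over pseudocode lines of (cost of the line × number of times it executes). The for loop and the assignments around it each run Θ(n) times, so they contribute only linear terms; the dominant term comes from the while loop, whose test runs Σ_{j=2}^{n} t_j times, where t_j is the number of comparisons needed to insert A[j].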
8
Q

Represent insertion sort's worst case mathematically.

A
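
A sketch of the sum: in the worst case every insertion scans the entire sorted prefix, so t_j = j and

T(n) ≈ Σ_{j=2}^{n} j = n(n + 1)/2 − 1

which has the form a·n^2 + b·n + c for constants a, b, c, i.e. T(n) = Θ(n^2).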
9
Q

T or F: In insertion sort, the average case is way better than the worst case.

A

F.
It is almost as bad: still a quadratic function of n.

10
Q

On average, what assumption is made about insertion sort's input?

A

On average, half the elements in A[1..j−1] are less than A[j], and half the elements are greater.
t_j = j/2

11
Q

The resulting average-case running time for insertion sort turns out to be

A

a quadratic function of the input size.

12
Q

To sort an array, merge sort usually does the following:

A

To sort an array A[p . . r]:
Divide
Divide the n-element sequence to be sorted into two subsequences of n/2 elements each
Conquer
Sort the subsequences recursively using merge sort.
When the size of the sequences is 1 there is nothing more to do.
Combine
Merge the two sorted subsequences.

13
Q

merge sort

A
14
Q

conquer merge sort

A
16
Q

conquer

A
17
Q

Merge sort pseudocode

A
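
A runnable Python sketch of the divide / conquer / combine structure (the merge helper here uses bounds checks instead of the ∞ sentinels shown on a later card):

def merge_sort(a, p=0, r=None):
    # Sort a[p..r] in place by divide and conquer.
    if r is None:
        r = len(a) - 1
    if p < r:
        q = (p + r) // 2          # Divide
        merge_sort(a, p, q)       # Conquer: sort the left half
        merge_sort(a, q + 1, r)   # Conquer: sort the right half
        merge(a, p, q, r)         # Combine: merge the two sorted halves
    return a

def merge(a, p, q, r):
    # Merge the sorted subarrays a[p..q] and a[q+1..r].
    left, right = a[p:q + 1], a[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]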
18
Q

What is the output of merging?

A

One single sorted subarray A[p . . r]

19
Q

Idea for merging:

A

Two piles of sorted cards
Choose the smaller of the two top cards
Remove it and place it in the output pile
Repeat the process until one pile is empty
Take the remaining input pile and place it face-down onto the output pile

20
Q

MERGE(A, 0, 3, 7)

A={1,6,43,8,0,4,6}

A


20
Q

What is the MERGE pseudocode?
(It's okay to look at the code while figuring that out.)

A

MERGE(A, p, q, r)
    n1 ← q − p + 1; n2 ← r − q
    copy A[p . . q] into L[1 . . n1] and A[q + 1 . . r] into R[1 . . n2]
    L[n1 + 1] ← ∞; R[n2 + 1] ← ∞
    i ← 1; j ← 1
    for k ← p to r
        do if L[i] ≤ R[j]
               then A[k] ← L[i]
                    i ← i + 1
               else A[k] ← R[j]
                    j ← j + 1

21
Q

Running Time of Merge

A

Initialization (copying into temporary arrays):
Θ(n1 + n2) = Θ(n)
Adding the elements to the final array:
n iterations, each taking constant time ⇒ Θ(n)
Total time for Merge:
Θ(n)

22
Q

MERGE-SORT Running Time

A

Divide:
compute q as the average of p and r: D(n) = Θ(1)
Conquer:
recursively solve 2 sub-problems, each of size n/2 ⇒ 2T (n/2)
Combine:
MERGE on an n-element subarray takes Θ(n) time ⇒ C(n) = Θ(n)
Total: T(n) = 2T(n/2) + Θ(n), which solves to Θ(n log n).

23
Q

The (binary) heap data structure is an …….. that we can view as a nearly complete binary tree.

A

array object

23
Q

Heap data structure is used for:

A

Heap sort
Priority queue

24
Q

T or F: The binary heap tree is filled on all levels.

A

F.
It is completely filled on all levels except possibly the lowest (which is filled from the left).

25
Q

T or F: Each node of the tree corresponds to an element of the array

A

T

26
Q

An array A that represents a heap is an object with two attributes:

A

A.length: number of elements in the array.
A.heap-size: represents how many elements in the heap are stored within array A.

27
Q

What are the valid elements of the heap?

A

A[1 . . A.heap-size].
Although A[1 . . A.length] may contain numbers, only the elements in A[1 . . A.heap-size], where
0 ≤ A.heap-size ≤ A.length, are valid elements of the heap.

28
Q

How do you get the index of the
parent
left child
right child

A

Parent: ⌊i/2⌋, where i is the index of the element (1-based).

Left child: 2i
Right child: 2i + 1
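
These formulas assume 1-based indexing; in a 0-indexed Python list they shift by one, as in this small sketch:

def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2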

29
Q

A min-heap is commonly used to implement

A

priority queues

30
Q

The height of a node in a heap is

A

the number of edges on the longest simple downward path from the node to a leaf.

31
Q

The height of the heap

A

is the height of its root

32
Q

What is the height of a heap tree as a function of n?

A

Θ(log n)

33
Q

Write pseudocode for MAX-HEAPIFY.

A
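
A runnable Python version of the standard MAX-HEAPIFY logic (0-indexed, with heap_size passed explicitly; names are illustrative):

def max_heapify(a, i, heap_size):
    # Float a[i] down until the subtree rooted at i satisfies the max-heap
    # property, assuming the subtrees rooted at its children already do.
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)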
34
Q

The children’s subtrees each have size at most

A

2n/3

35
Q

What is MAX-HEAPIFY's recurrence, why does it look like that, and what is the big O of MAX-HEAPIFY?

A

T(n) ≤ T(2n/3) + Θ(1)

The recurrence uses 2n/3 because when MAX-HEAPIFY calls itself, the subtree it recurses on has size at most 2n/3.
The big O of MAX-HEAPIFY is O(log n).

36
Q

Alternatively, we can characterize the running time of MaxHeapify()

A

on a node of height h as O(h).

37
Q

Number of nodes in the tree, and why in the worst case a child's subtree has at most 2n/3 of them

A

Number of nodes in the tree = 1 + (number of nodes in left subtree) + (number of nodes in right subtree).
The worst case is when the tree's last level is half full. If the right subtree has height h, then the left subtree has height h + 1:
Number of nodes in left subtree = 1 + 2 + 4 + 8 + … + 2^(h+1) = 2^(h+2) − 1 …(i)
Number of nodes in right subtree = 1 + 2 + 4 + 8 + … + 2^h = 2^(h+1) − 1 …(ii)
Plugging these into the first equation:
N = 1 + (2^(h+2) − 1) + (2^(h+1) − 1)
=> N = 3·2^(h+1) − 1
=> 2^(h+1) = (N + 1)/3
Plugging this value into equation (i):
Number of nodes in left subtree = 2^(h+2) − 1 = 2(N + 1)/3 − 1 = (2N − 1)/3 < 2N/3

38
Q

The elements in the subarray ……………. are all leaves of the tree, and so each is a 1-element heap to begin with.

A

A[⌊n/2⌋ + 1 . . n]
39
Q

What does the procedure BUILD-MAX-HEAP do?

A

It goes through the remaining nodes of the tree (all nodes other than the leaves A[⌊n/2⌋ + 1 . . n], so basically indices ⌊n/2⌋ down to 1) and runs MAX-HEAPIFY on each one.

41
Q

Implement the MAX-HEAPIFY pseudocode.

A
42
Q

Each call to MAX-HEAPIFY costs O(………) time and
BUILD-MAX-HEAP makes O(………….) calls.
Thus, the running time is O(…………).

thus ………..

A

Each call to MAX-HEAPIFY costs O(log n) time and
BUILD-MAX-HEAP makes O(n) calls.
Thus, the running time is O(n log n).
This upper bound is not asymptotically tight (a tighter analysis shows BUILD-MAX-HEAP runs in O(n)).

43
Q

How do you know the number of nodes at any height h?

A

At most ⌈n / 2^(h+1)⌉ nodes (rounded up) have height h.

44
Q

Describe the heapsort algorithm.

A

The heapsort algorithm starts by using BUILD-MAX-HEAP(),
then loops:
swap A[1] (the max element) with the element at A[n] to put the maximum element into its correct position,
then decrement A.heap-size to discard node n from the heap.
But the children of the root remain max-heaps, while the new root element may violate the max-heap property,
so call MAX-HEAPIFY and repeat, with the loop going from A.length down to 2.

45
Q

Heapsort pseudocode

A
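
A self-contained runnable Python sketch; the iterative helper _sift_down plays the role of MAX-HEAPIFY (names are illustrative):

def heap_sort(a):
    n = len(a)
    # Build a max-heap: heapify every internal node, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        _sift_down(a, i, n)
    # Repeatedly move the max (root) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        _sift_down(a, 0, end)
    return a

def _sift_down(a, i, heap_size):
    # Float a[i] down until the max-heap property holds for its subtree.
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < heap_size and a[left] > a[largest]:
            largest = left
        if right < heap_size and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

print(heap_sort([16, 4, 10, 14, 7, 9, 3, 2, 8, 1]))
# [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]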
46
Q

The running time of Heapsort algorithm is

A

O(n log n):

47
Q

BUILD-MAX-HEAP takes O(…) time. Why?

A

O(n log n):
the BUILD-MAX-HEAP loop runs O(n) times,
and each iteration calls MAX-HEAPIFY, which costs O(log n).

48
Q

What sorting algorithm combines the better attributes of merge sort (speed) and insertion sort (in-place)?

A

Heapsort, because it:
runs in O(n log n) time
sorts in place

49
Q

Difference between A.length and A.heap-size

A

A.length: number of elements in the array.
A.heap-size: represents how many elements in the heap are stored within array A.

50
Q

The height of a heap can be expressed as

A

Θ(log n)

51
Q

T or F: Although A[1 . . A.length] may contain numbers, only the elements in A[1 . . A.heap-size] are part of the heap array.

A

T

52
Q

Mention the linear-time sorting algorithms.

A

Counting Sort
Radix Sort
Bucket sort

53
Q

For merge sort and insertion sort, the sorted order they determine is based on……

A

comparisons between the input elements

53
Q

Selection Sort, Insertion Sort: O (….)
Heap Sort, Merge sort: O (……….)
Quicksort: O (……..) [ best and average case] ,
O (……….) [worst case]

A

Selection Sort, Insertion Sort: O(n^2)
Heap Sort, Merge Sort: O(n log n)
Quicksort: O(n log n) [best and average case],
O(n^2) [worst case]

53
Q

What's the issue with comparison sorting algorithms, and what is a solution?

A

Comparison sorts must make Ω(n log n) comparisons in the worst case.

Instead, we can choose a linear-time sorting algorithm rather than a comparison sorting algorithm for a shorter running time.

53
Q

Linear sorts make some……

A

assumptions about the data.

53
Q

T or F: Linear sorts are the same as comparison sorts

A

F
they are not the same

54
Q

Counting sort assumptions

A

1. Each of the n input elements is an integer in the range 0 to k.

2. It assumes that the input consists of integers in a small range.

55
Q

Mention all the arrays counting sort uses and how many loops are needed.

A

Arrays:
Input: A[1 . . n], where A[j] ∈ {1, 2, …, k}.
Output: B[1 . . n], sorted.
Auxiliary storage: C[1 . . k].

Loops (4 in total):
for i ← 1 to k
    C[i] ← 0
for j ← 1 to n
    C[A[j]] ← C[A[j]] + 1
for i ← 2 to k
    C[i] ← C[i] + C[i − 1]

for j ← n downto 1
    B[C[A[j]]] ← A[j]
    C[A[j]] ← C[A[j]] − 1
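
A runnable Python sketch of the same four loops (0-indexed, assuming the keys are integers in 0..k):

def counting_sort(a, k):
    c = [0] * (k + 1)
    for key in a:
        c[key] += 1                 # count occurrences of each key
    for v in range(1, k + 1):
        c[v] += c[v - 1]            # running sum: number of elements <= v
    b = [0] * len(a)
    for key in reversed(a):         # reverse scan keeps the sort stable
        c[key] -= 1
        b[c[key]] = key
    return b

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], k=5))  # [0, 0, 2, 2, 3, 3, 3, 5]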

55
Q

What does counting sort determine?

A

Counting sort determines, for each input element x, the number of elements less than x.
It uses this information to place element x directly into its position in the output array.

56
Q

counting sort running time

A

O(k + n).
If k = n,
then it is O(n), which is better than comparison sorts, whose running time is O(n log n).

57
Q

T or F: Heap sort and quicksort are considered stable algorithms.

A

F, not stable.

Stable algorithms include insertion sort, merge sort, and bubble sort.

58
Q

What does "stable" mean?

A

A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.

60
Q

In radix sort, numbers are sorted from …… to ……

A

The numbers are sorted from least significant digit to the most significant digit.

61
Q

How to sort numbers using radix sort (steps)

A

- Find the number of digits in the biggest number.
- If there are d digits in the biggest number, it will need d passes.
- The remaining numbers must be padded with leading zeros so they all have d digits.
- Then take 10 buckets labeled 0 – 9.
- Then sort the numbers starting from the least significant digit.
- After the sorting is complete, remove the leading zeros.
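
A runnable Python sketch of LSD radix sort for non-negative integers, using one stable bucket pass per decimal digit (function name illustrative):

def radix_sort(a):
    if not a:
        return a
    d = len(str(max(a)))                 # number of digits in the biggest number
    for p in range(d):                   # one pass per digit, least significant first
        buckets = [[] for _ in range(10)]
        for x in a:
            digit = (x // 10 ** p) % 10  # p-th decimal digit of x
            buckets[digit].append(x)     # appending keeps each pass stable
        a = [x for bucket in buckets for x in bucket]
    return a

print(radix_sort([329, 457, 657, 839, 436, 720, 355]))
# [329, 355, 436, 457, 657, 720, 839]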

62
Q

Based on what should you choose a sorting algorithm?

A

Running time (best, worst, and average).
Implementation
Stable or not
Space complexity (in-place or not)
Input size
Format of the input list
Nearly sorted list

62
Q

One pass of radix sorting takes …… assuming that …… and we use counting sort.

A

O(n + k), since a single pass is a counting sort (d = 1).

63
Q

After how many passes should you stop radix sort?

A

After d passes, where d is the number of digits.

So for 0382, stop after 4 passes, since there are 4 digits.

64
Q

What is radix sort's running time, and on what did we base that evaluation?

A

O(d(n + k)): d passes, and each pass is a counting sort over n numbers whose digits lie in a range of size k.

65
Q

In radix sort, assuming d = O(1) (a constant number of digits) and k = O(n), the running time is

A

O(n)

66
Q

radix sort these digits

A

Remember to remove the leading zeros when finished.