223 test 2 Flashcards
declare a normal variable vs. a pointer
int* xptr = &x; // & is the “address of” operator
Dynamically allocating memory at runtime
int* yptr = new int; // allocate new memory on the fly
*yptr = 96; // dereference to store 96 in the newly allocated int
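A hedged follow-up sketch: memory allocated with new should eventually be released with delete (yptr as above):
delete yptr;      // free the dynamically allocated int
yptr = nullptr;   // avoid leaving a dangling pointer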
memory leaks
you have allocated memory that nothing can access (so it can never be freed)
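For example, a minimal sketch of a leak (pointer names are just for illustration): if the only pointer to an allocation is overwritten before delete, that memory can never be freed:
int* p = new int(5);   // allocate an int holding 5
p = new int(7);        // the int holding 5 is now unreachable: leaked
delete p;              // this only frees the second allocation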
rule of three
A class that defines a destructor, assignment operator, or a copy constructor should define all three
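A minimal sketch of the rule of three for a hypothetical class that owns heap memory (not from the original card):
class IntBox {
public:
    IntBox(int v) : data(new int(v)) { }
    ~IntBox() { delete data; }                        // destructor
    IntBox(const IntBox& other)                       // copy constructor
        : data(new int(*other.data)) { }
    IntBox& operator=(const IntBox& other) {          // assignment operator
        if (this != &other) { *data = *other.data; }
        return *this;
    }
private:
    int* data;   // owned resource that makes all three necessary
};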
Dereferencing
a pointer accesses value at the pointer’s stored address
*xptr = 57; // * here is the “dereference” operator
binary search
Iteratively search for a value v in a non-empty sorted list
- Pick the middle item in a list
- If middle item > v, then search in left half of list
- If middle item < v, then search in right half of list
- If middle item == v, then found match
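A sketch of the iterative version on a sorted std::vector (returns the index of v, or -1 if not found; the exact signature is an assumption, not from the cards):
#include <vector>

int binarySearch(const std::vector<int>& a, int v) {
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;     // pick the middle item
        if (a[mid] == v) return mid;          // found match
        else if (a[mid] > v) high = mid - 1;  // search left half
        else low = mid + 1;                   // search right half
    }
    return -1;                                // not found
}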
What is the worst case for binary search?
Keep searching (halving the list) until only one element is left: about log2 n halvings, so O(log n)
Assume T(n) = 27, which is O(1)
• Pick n0 to be 0 (any value will work in this case)
• Pick k to be 27, giving:
0 ≤ 27 ≤ 27 · 1 for k = 27 and all n ≥ 0
Assume T(n) = 3n + 2, which is O(n)
There are lots, but let’s pick k = 5 and n0 = 1, giving:
• 3n + 2 ≤ 5n for all n ≥ 1
• e.g., 3 · 1 + 2 ≤ 5, 3 · 2 + 2 ≤ 10, 3 · 3 + 2 ≤ 15, etc.
Assume T(n) = n^2 − 5n + 10,
What values of n0 and k do we pick to solve this?
• There are lots, but let’s pick n0 = 2 and k = 4, giving:
• n^2 − 5n + 10 ≤ 4n^2 for all n ≥ 2
• Note: T(2) = 4 − 10 + 10 = 4 ≤ 4 · 2^2 = 16
Selection Sort
The basic idea:
• Select (i.e., “find”) the largest item
• Swap it with the last item in the list
• Repeat with items 0 to n − 2
• Stop when only one item is left
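A sketch of this “find the largest, swap with last” version (function name is just for illustration):
#include <vector>
#include <utility>

void selectionSort(std::vector<int>& a) {
    // shrink the unsorted region [0..last] by one item per pass
    for (int last = (int)a.size() - 1; last > 0; --last) {
        int largest = 0;
        for (int i = 1; i <= last; ++i)
            if (a[i] > a[largest]) largest = i;   // select the largest item
        std::swap(a[largest], a[last]);           // swap it with the last item
    }
}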
selection sort best and worst case
best: O(n^2)
worst: O(n^2)
bubble sort
The basic idea:
• Compare adjacent items
• Swap if out of order
• Repeat with items 0 to n − 2
• Stop when only one item is left
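A sketch with an early-exit check, which is what gives the O(n) best case on an already sorted list:
#include <vector>
#include <utility>

void bubbleSort(std::vector<int>& a) {
    for (int last = (int)a.size() - 1; last > 0; --last) {
        bool swapped = false;
        for (int i = 0; i < last; ++i) {
            if (a[i] > a[i + 1]) {            // adjacent items out of order
                std::swap(a[i], a[i + 1]);
                swapped = true;
            }
        }
        if (!swapped) return;                 // no swaps: list already sorted
    }
}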
bubble best and worst
best O(n) worst O(n^2)
(Linear) Insertion Sort
The basic idea (note: beginning of list is sorted, rest is unsorted)
• Select first item (in unsorted region)
• Insert item into correct position in sorted region (shifting larger items over)
• Stop when unsorted region is empty
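A sketch of the shifting version described above:
#include <vector>

void insertionSort(std::vector<int>& a) {
    for (size_t i = 1; i < a.size(); ++i) {
        int item = a[i];                  // first item in the unsorted region
        size_t j = i;
        while (j > 0 && a[j - 1] > item) {
            a[j] = a[j - 1];              // shift larger items over
            --j;
        }
        a[j] = item;                      // insert into the sorted region
    }
}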
best and worst insertion sort
Best case: List already sorted (no shifts, just n-1 passes)
• Worst case: List in reverse order (shift max number of times each pass)
best O(n) worst O(n^2)
merge sort
The basic idea (note: recursive “divide and conquer” approach)
• divide list into two halves
• sort each half (recursive step)
• merge the sorted halves into sorted list
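A sketch of the recursive version; note that the merge step needs a temporary vector (the extra space mentioned in the quicksort comparison below):
#include <vector>

void merge(std::vector<int>& a, int lo, int mid, int hi) {
    std::vector<int> tmp;                    // extra space for merging
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi)  tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k)
        a[lo + k] = tmp[k];                  // copy the merged result back
}

void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;                    // 0 or 1 items: already sorted
    int mid = lo + (hi - lo) / 2;            // divide list into two halves
    mergeSort(a, lo, mid);                   // sort left half
    mergeSort(a, mid + 1, hi);               // sort right half
    merge(a, lo, mid, hi);                   // merge the sorted halves
}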
merge sort best and worst case
best O(n log n) worst O(n log n)
quick sort
The basic idea (also a recursive “divide and conquer” approach)
• pick a “pivot” element (e.g., first element in list)
• put values smaller than pivot on left
• put values larger than pivot on right
• put pivot value in the middle
• sort the left and right halves (using quick sort)
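A sketch using the first element as the pivot (the simple choice mentioned above); partition puts smaller values on the left and returns the pivot’s final position:
#include <vector>
#include <utility>

int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo];                                  // pick first element as pivot
    int left = lo;
    for (int i = lo + 1; i <= hi; ++i)
        if (a[i] < pivot) std::swap(a[++left], a[i]);   // smaller values go left
    std::swap(a[lo], a[left]);                          // pivot value in the middle
    return left;
}

void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);   // sort the left half
    quickSort(a, p + 1, hi);   // sort the right half
}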
quick sort best and worst
best O(n log n) worst O(n^2)
Mergesort versus Quicksort:
• quicksort has no “merge” step (which requires extra space)
• each level in quicksort splits the list into pieces of about (n − 1)/2 items (the pivot is excluded) as opposed to n/2 halves
• these make quicksort faster in practice
• also can improve worst case performance by picking a “better” pivot value
– takes a bit of extra time, but worth the trade-off in practice
“collision”
Two or more keys map to the same index
“Selecting Digits”
• Select specific digits of the key to use as the hash value
• Let’s say keys are 9-digit employee numbers
– h(k) = 4th and 9th digit of k
– For example: h(001364825) = 35
• Here we insert (find) entry with key 001364825 at table[35]
• This is a fast and simple approach, but may not evenly distribute the data (Q: why not?)
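A tiny sketch of the 4th-and-9th-digit idea, assuming the key is kept as a 9-character string:
#include <string>

// e.g. digitHash("001364825") returns 35 (4th digit '3', 9th digit '5')
int digitHash(const std::string& key) {
    int d4 = key[3] - '0';    // 4th digit
    int d9 = key[8] - '0';    // 9th digit
    return d4 * 10 + d9;
}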
“Folding”
Add (sum) digits in the key
• Let’s say keys are again 9-digit employee numbers
– h(k) = i1 + i2 + · · · + i9, where the digits of k are i1 i2 . . . i9
– For example: h(001364825) = 29
– Store (retrieve) entry with key 001364825 at table[29]
• This is also fast, but also may not evenly distribute data
• In this example, the hash only hits values in the range 0 to 81
• Can pick different (similar) schemes … e.g., i1i2i3 + i4i5i6 + i7i8i9
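A sketch of the digit-summing version for the same 9-digit string keys:
#include <string>

// e.g. foldHash("001364825") returns 0+0+1+3+6+4+8+2+5 = 29
int foldHash(const std::string& key) {
    int sum = 0;
    for (char c : key)
        sum += c - '0';       // add each digit of the key
    return sum;
}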
Modular Arithmetic
h(k) = f(k) mod table size
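For example, combined with the foldHash sketch above (table_size stands for the table’s capacity):
int hashIndex(const std::string& key, int table_size) {
    return foldHash(key) % table_size;   // maps f(k) into [0, table_size - 1]
}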
a “weighted” hash function
• Use the Java-based approach where weight is:
i0 · 31^(n−1) + i1 · 31^(n−2) + · · · + i(n−2) · 31^1 + i(n−1)
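A sketch of that weighted sum for a string key; accumulating with Horner’s rule gives the same powers of 31 without computing them explicitly:
#include <string>

int weightedHash(const std::string& s) {
    int h = 0;
    for (char c : s)
        h = 31 * h + c;   // builds i0*31^(n-1) + i1*31^(n-2) + ... + i(n-1)
    return h;
}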
linear probing
Insert:
• Linear probe (sequentially) until first empty bucket found (either type)
• Insert at first empty bucket
• Note: The table could be full! (which means we can’t insert)
Removal:
• Use similar approach to find key item
• Stop at empty-since-start bucket (item not in table)
• Mark item as empty-after-removal
Search:
• Go to hashed index
• Start searching sequentially for key
• Stop when an empty-since-start bucket is found or the entire table has been searched
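A compact sketch of linear-probe insert and find with the two kinds of empty buckets (struct and function names are made up for illustration):
#include <vector>

enum State { EMPTY_SINCE_START, EMPTY_AFTER_REMOVAL, OCCUPIED };
struct Bucket { State state = EMPTY_SINCE_START; int key = 0; };

bool insert(std::vector<Bucket>& table, int key, int hashed) {
    for (size_t probe = 0; probe < table.size(); ++probe) {
        size_t i = (hashed + probe) % table.size();
        if (table[i].state != OCCUPIED) {                    // first empty bucket (either type)
            table[i].state = OCCUPIED;
            table[i].key = key;
            return true;
        }
    }
    return false;                                            // table is full: can't insert
}

int find(const std::vector<Bucket>& table, int key, int hashed) {
    for (size_t probe = 0; probe < table.size(); ++probe) {
        size_t i = (hashed + probe) % table.size();
        if (table[i].state == EMPTY_SINCE_START) return -1;  // item not in table
        if (table[i].state == OCCUPIED && table[i].key == key) return (int)i;
    }
    return -1;                                               // searched entire table
}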
Quadratic Probing
similar to linear probing
• but probe “quadratic” sequences
• i + 1, i + 2^2, i + 3^2, i + 4^2, and so on
• helps “spread out” primary clusters
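In the probing sketch above, the only change is the index calculation (probe = 0, 1, 2, 3, ...):
size_t i = (hashed + probe * probe) % table.size();   // i, i + 1^2, i + 2^2, i + 3^2, ...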
“Separate Chaining”
- Each array index has its own linked list of elements (the “chain”)
- The list grows and shrinks with collisions and removals
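A sketch of the layout, assuming non-negative int keys: each array index owns its own std::list chain:
#include <vector>
#include <list>

struct ChainedTable {
    std::vector<std::list<int>> buckets;   // one chain per index
    ChainedTable(int capacity) : buckets(capacity) { }

    void insert(int key) {
        buckets[key % buckets.size()].push_back(key);   // chain grows on collision
    }

    bool contains(int key) const {
        for (int k : buckets[key % buckets.size()])     // search only this chain
            if (k == key) return true;
        return false;
    }
};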
load_factor
load_factor = n / table_capacity, where n is the number of items stored
root
a node with no parent
leaf node
a node without children
internal node
node with children
path
is a sequence of nodes ni, nj, . . . , nk where the next node after n in the sequence is a child of n
a path typically starts at the root and traverses children (possibly ending at a leaf)
a tree has many paths (like many linked lists)
ancestor
is its parent and its parent’s ancestors
descendant
its children and its children’s descendants
siblings
if they have the same parent
height
is the length of the longest root-to-leaf path
Binary Trees
In a binary tree, each node has at most 2 children
“full” binary tree
- all paths from the root to a leaf have the same length h; and
- all internal nodes have two children
“complete” binary tree
- the tree is full at height h − 1, and
- the leaf nodes are filled in from left to right
A balanced binary tree
• all left and right subtrees differ in height by at most 1
Searching based on key in a BST
Is the key of the root node equal to the search key?
– if yes, found a match!
• Is the search key less than the key of the root node?
– if yes, search left subtree of the root
• Is the search key greater than the key of the root node?
– if yes, search the right subtree of the root
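A recursive sketch of that key search, assuming a bare-bones node struct with int keys:
struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
};

Node* search(Node* root, int key) {
    if (root == nullptr) return nullptr;                    // not found
    if (key == root->key) return root;                      // found a match!
    if (key < root->key) return search(root->left, key);    // search left subtree
    return search(root->right, key);                        // search right subtree
}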
Inserting into a BST
Basic Idea:
• Find where the node should be (key search)
• Then insert a new node at that location
• The new node always ends up as a leaf!
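A sketch of insert using the same Node struct as in the search sketch; notice the new node always ends up as a leaf:
Node* insert(Node* root, int key) {
    if (root == nullptr) return new Node{key};    // empty spot found: new leaf node
    if (key < root->key)
        root->left = insert(root->left, key);     // belongs in the left subtree
    else
        root->right = insert(root->right, key);   // belongs in the right subtree
    return root;
}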