223 test 2 Flashcards

1
Q

declare a normal variable vs ptr

A

int x = 5;       // a normal variable
int* xptr = &x;  // & is the "address of" operator

2
Q

Dynamically allocating memory at runtime

A

int* yptr = new int; // allocate new memory on the fly

  • *yptr = 96; // dereference to store a value in the new memory
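Putting the card's two lines together, a minimal sketch of the full allocate/use/free cycle (the function name is illustrative):

```cpp
#include <cassert>

// Allocate an int on the heap, use it, then free it.
int heap_demo() {
    int* yptr = new int;   // allocate new memory on the fly
    *yptr = 96;            // dereference to store a value
    int result = *yptr;    // read it back
    delete yptr;           // free the memory to avoid a leak
    yptr = nullptr;        // avoid a dangling pointer
    return result;
}
```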
3
Q

memory leaks

A

you have allocated memory that nothing can access (no pointer refers to it), so it can never be freed

4
Q

rule of three

A
A class that defines a destructor, copy assignment operator, or copy constructor
should define all three
5
Q

Dereferencing

A

Dereferencing a pointer accesses the value at the pointer's stored address

*xptr = 57; // * here is the "dereference" operator

6
Q

binary search

A

Iteratively search for a value v in a non-empty sorted list

  1. Pick the middle item in a list
  2. If middle item > v, then search in left half of list
  3. If middle item < v, then search in right half of list
  4. If middle item == v, then found match
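The steps above can be sketched iteratively in C++ (the function name is illustrative; it returns the index of the match, or -1 if not found):

```cpp
#include <cassert>
#include <vector>

// Iterative binary search over a sorted list: returns the index of v,
// or -1 if v is not present.
int binary_search_sorted(const std::vector<int>& list, int v) {
    int low = 0;
    int high = static_cast<int>(list.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // pick the middle item
        if (list[mid] > v)
            high = mid - 1;                // search left half
        else if (list[mid] < v)
            low = mid + 1;                 // search right half
        else
            return mid;                    // found match
    }
    return -1;                             // not in list
}
```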
7
Q

What is the worst case for binary search?

A

Keep searching (halving the list) until one element is left, i.e., about log2(n) halvings, so O(log n)

8
Q

Assume T(n) = 27, which is O(1)

A

• In this case, pick n0 to be 0 (any value will work in this case)
• Pick k to be 27, giving:
0 ≤ 27 ≤ 27 · 1 for k = 27 and all n ≥ 0

9
Q

Assume T(n) = 3n + 2, which is O(n)

A

There are lots of choices, but let's pick k = 5 and n0 = 1, giving:
• 3n + 2 ≤ 5n for all n ≥ 1
• e.g., 3 · 1 + 2 ≤ 5, 3 · 2 + 2 ≤ 10, 3 · 3 + 2 ≤ 15, etc.

10
Q

Assume T(n) = n^2 − 5n + 10, which is O(n^2)

A

What values of n0 and k do we pick to solve this?
• There are lots of choices, but let's pick n0 = 2 and k = 4, giving:
• n^2 − 5n + 10 ≤ 4n^2 for all n ≥ 2
• Note: T(2) = 4 − 10 + 10 = 4 ≤ 4 · 2^2 = 16

11
Q

Selection Sort

A
The basic idea
• Select (i.e., “find”) the largest item
• Swap it with the last item in the list
• Repeat with items 0 to n − 2
• Stop when only one item left
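A minimal C++ sketch of these steps, matching the card's "largest to the end" variant (the function name is illustrative):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Selection sort: repeatedly find the largest item in the unsorted
// region and swap it with the last item of that region.
void selection_sort(std::vector<int>& a) {
    for (int end = static_cast<int>(a.size()) - 1; end > 0; --end) {
        int largest = 0;                  // index of largest so far
        for (int i = 1; i <= end; ++i)
            if (a[i] > a[largest])
                largest = i;
        std::swap(a[largest], a[end]);    // move largest to the end
    }
}
```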
12
Q

selection sort best and worst case

A

best: O(n^2)
worst: O(n^2)

13
Q

bubble sort

A
The basic idea
• Compare adjacent items
• Swap if out of order
• Repeat with items 0 to n − 2
• Stop when only one item left
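A sketch of the steps above (the function name is illustrative); the early-exit flag is what gives the O(n) best case on already-sorted input:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Bubble sort: compare adjacent items and swap if out of order.
// If a pass makes no swaps, the list is sorted and we stop early.
void bubble_sort(std::vector<int>& a) {
    bool swapped = true;
    for (int end = static_cast<int>(a.size()) - 1; end > 0 && swapped; --end) {
        swapped = false;
        for (int i = 0; i < end; ++i) {
            if (a[i] > a[i + 1]) {
                std::swap(a[i], a[i + 1]);
                swapped = true;
            }
        }
    }
}
```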
14
Q

bubble best and worst

A
best O(n)
worst O(n^2)
15
Q

(Linear) Insertion Sort

A

The basic idea (note: beginning of list is sorted, rest is unsorted)
• Select first item (in unsorted region)
• Insert item into correct position in sorted region (shifting larger items over)
• Stop when unsorted region is empty

16
Q

best and worst insertion sort

A

• Best case: list already sorted (no shifts, just n − 1 passes)
• Worst case: list in reverse order (shift max number of times each pass)

best O(n)
worst O(n^2)
17
Q

merge sort

A

The basic idea (note: recursive “divide and conquer” approach)
• divide list into two halves
• sort each half (recursive step)
• merge the sorted halves into sorted list
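A sketch of the recursive divide-and-conquer idea above (function names are illustrative):

```cpp
#include <cassert>
#include <vector>

// Merge two sorted halves a[lo..mid] and a[mid+1..hi] into sorted order.
static void merge(std::vector<int>& a, int lo, int mid, int hi) {
    std::vector<int> tmp;  // extra space for the merge step
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi) tmp.push_back(a[j++]);
    for (int k = 0; k < static_cast<int>(tmp.size()); ++k)
        a[lo + k] = tmp[k];
}

// Merge sort: divide the list, recursively sort each half, then merge.
void merge_sort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;                 // 0 or 1 items: already sorted
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);               // sort left half
    merge_sort(a, mid + 1, hi);           // sort right half
    merge(a, lo, mid, hi);                // merge the sorted halves
}
```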

18
Q

merge sort best and worst case

A
best O(n log n) 
worst O(n log n)
19
Q

quick sort

A

The basic idea (also a recursive “divide and conquer” approach)
• pick a “pivot” element (e.g., first element in list)
• put values smaller than pivot on left
• put values larger than pivot on right
• put pivot value in the middle
• sort the left and right halves (using quick sort)
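A sketch of the steps above with a first-element pivot, as the card suggests (function names are illustrative):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Partition a[lo..hi] around the pivot (first element): values smaller
// than the pivot end up on its left, larger on its right, pivot in the
// middle. Returns the pivot's final index.
static int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo];
    int left = lo + 1, right = hi;
    while (left <= right) {
        if (a[left] <= pivot) ++left;
        else if (a[right] > pivot) --right;
        else std::swap(a[left], a[right]);
    }
    std::swap(a[lo], a[right]);  // put pivot value in the middle
    return right;
}

// Quick sort: partition, then recursively sort the left and right halves.
void quick_sort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quick_sort(a, lo, p - 1);
    quick_sort(a, p + 1, hi);
}
```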

20
Q

quick sort best and worst

A
best O(n log n)
worst O(n^2) (e.g., an already-sorted list with a first-element pivot)
21
Q

Mergesort versus Quicksort:

A

• quicksort has no "merge" step (which requires extra space)
• each level in quicksort divides the list into pieces of roughly (n − 1)/2, as opposed to exact n/2 halves
• these make quicksort faster in practice
• also can improve worst-case performance by picking a "better" pivot value
– takes a bit of extra time, but worth the trade-off in practice

22
Q

“collision”

A

When a hash function maps two or more keys to the same index

23
Q

“Selecting Digits”

A

• Select specific digits of the key to use as the hash value
• Let's say keys are 9-digit employee numbers
– h(k) = 4th and 9th digits of k
– For example: h(001364825) = 35
• Here we insert (find) the entry with key 001364825 at table[35]
• This is a fast and simple approach, but
• may not evenly distribute data (Q: why not?)

24
Q

“Folding”

A

Add (sum) the digits in the key
• Let's say keys are again 9-digit employee numbers
– h(k) = i1 + i2 + · · · + i9 where k = i1 i2 . . . i9
– For example: h(001364825) = 0 + 0 + 1 + 3 + 6 + 4 + 8 + 2 + 5 = 29
– Store (retrieve) the entry with key 001364825 at table[29]
• This is also fast, but also may not evenly distribute data
• In this example, hash values only hit the range 0 to 81
• Can pick different (similar) schemes … e.g., i1i2i3 + i4i5i6 + i7i8i9
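The digit-summing scheme above, sketched for keys stored as digit strings (the function name is illustrative):

```cpp
#include <cassert>
#include <string>

// Folding hash for a key of decimal digits: sum the digits.
// For example, "001364825" folds to 0+0+1+3+6+4+8+2+5 = 29.
int fold_hash(const std::string& key) {
    int sum = 0;
    for (char c : key)
        sum += c - '0';   // digit character to its numeric value
    return sum;
}
```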

25
Q

Modular Arithmetic

A

h(k) = f(k) mod table size

26
Q

a “weighted” hash function

A

• Use the Java-based approach where the weight is:

i0 · 31^(n−1) + i1 · 31^(n−2) + · · · + i(n−2) · 31^1 + i(n−1)

27
Q

linear probing

A

Insert:
• Linear probe (sequentially) until first empty bucket found (either type)
• Insert at first empty bucket
• Note: The table could be full! (which means we can’t insert)
Removal:
• Use similar approach to find key item
• Stop at empty-since-start bucket (item not in table)
• Mark item as empty-after-removal
Search:
• Go to hashed index
• Start searching sequentially for key
• Stop when an empty-since-start bucket is found or searched entire table
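The insert/remove/search rules above can be sketched as a toy table of non-negative int keys; the sentinel values and all names are illustrative, chosen only to mark the card's two kinds of empty bucket:

```cpp
#include <cassert>
#include <vector>

const int EMPTY_SINCE_START = -1;   // never held an item
const int EMPTY_AFTER_REMOVAL = -2; // held an item that was removed

struct ProbeTable {
    std::vector<int> buckets;
    ProbeTable(int capacity) : buckets(capacity, EMPTY_SINCE_START) {}

    int hash(int key) const { return key % static_cast<int>(buckets.size()); }

    // Linear probe sequentially until the first empty bucket (either type).
    bool insert(int key) {
        int n = static_cast<int>(buckets.size());
        for (int step = 0; step < n; ++step) {
            int i = (hash(key) + step) % n;
            if (buckets[i] < 0) { buckets[i] = key; return true; }
        }
        return false;  // the table is full: can't insert
    }

    // Search sequentially from the hashed index; stop at an
    // empty-since-start bucket or after scanning the whole table.
    bool contains(int key) const {
        int n = static_cast<int>(buckets.size());
        for (int step = 0; step < n; ++step) {
            int i = (hash(key) + step) % n;
            if (buckets[i] == key) return true;
            if (buckets[i] == EMPTY_SINCE_START) return false;
        }
        return false;
    }

    // Find the item as in search, then mark it empty-after-removal
    // so later searches still probe past this bucket.
    void remove(int key) {
        int n = static_cast<int>(buckets.size());
        for (int step = 0; step < n; ++step) {
            int i = (hash(key) + step) % n;
            if (buckets[i] == key) { buckets[i] = EMPTY_AFTER_REMOVAL; return; }
            if (buckets[i] == EMPTY_SINCE_START) return;
        }
    }
};
```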

28
Q

Quadratic Probing

A
similar to linear probing
• but probe “quadratic” sequences
• i + 1^2, i + 2^2, i + 3^2, i + 4^2, and so on
• helps “spread out” primary clusters
29
Q

“Separate Chaining”

A
  • Each array index has its own linked list of elements (the “chain”)
  • The list grows and shrinks with collisions and removals
30
Q

load_factor

A

load_factor = n / table_capacity

31
Q

root

A

the node with no parent

32
Q

leaf node

A

a node without children

33
Q

internal node

A

node with children

34
Q

path

A

a path is a sequence of nodes ni nj . . . nk where
• the next node after n in the sequence is a child of n
• a path starts at the root and traverses children (possibly ending at a leaf)
• a tree has many paths (like many linked lists)

35
Q

ancestor

A

a node's ancestors are its parent and its parent's ancestors

36
Q

descendant

A

a node's descendants are its children and its children's descendants

37
Q

siblings

A

nodes are siblings if they have the same parent

38
Q

height

A

is the length of the longest root-to-leaf path

39
Q

Binary Trees

A

In a binary tree, each node has at most 2 children

40
Q

“full” binary tree

A
  • all paths from the root to a leaf have the same length h; and
  • all internal nodes have two children
41
Q

“complete” binary tree

A
  • the tree is full at height h − 1, and
  • the leaf nodes are filled in from left to right

42
Q

A balanced binary tree

A

• every node's left and right subtrees differ in height by at most 1

43
Q

Searching based on key in a BST

A

• Is the key of the root node equal to the search key?
– if yes, found a match!
• Is the search key less than the key of the root node?
– if yes, search left subtree of the root
• Is the search key greater than the key of the root node?
– if yes, search the right subtree of the root
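The three questions above map directly onto a recursive search (struct and function names are illustrative):

```cpp
#include <cassert>

// A minimal BST node.
struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
    Node(int k) : key(k) {}
};

// Recursive key search in a BST.
bool bst_contains(const Node* root, int key) {
    if (root == nullptr) return false;         // ran off the tree: not found
    if (key == root->key) return true;         // found a match!
    if (key < root->key)
        return bst_contains(root->left, key);  // search left subtree
    return bst_contains(root->right, key);     // search right subtree
}
```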

44
Q

Inserting into a BST

A

Basic Idea:
• Find where the node should be (key search)
• And then insert into that location
• Will always insert as a new leaf node!
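A sketch of that idea: search for where the key belongs, and attach the new node at the empty spot found, so it always becomes a leaf (struct and function names are illustrative):

```cpp
#include <cassert>

// A minimal BST node.
struct BstNode {
    int key;
    BstNode* left = nullptr;
    BstNode* right = nullptr;
    BstNode(int k) : key(k) {}
};

// Insert by key search: recurse to where the key should be, then
// create the new node at that (leaf) position.
BstNode* bst_insert(BstNode* root, int key) {
    if (root == nullptr)
        return new BstNode(key);              // empty spot found: new leaf
    if (key < root->key)
        root->left = bst_insert(root->left, key);
    else
        root->right = bst_insert(root->right, key);
    return root;
}
```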