Mod 8 Flashcards

1
Q

When is it optimal to use a map data type?

A

A map is optimal when insertion, lookup, and removal are the **only** operations we need.

2
Q

What is a hash table?

A

A hash table is similar to an array, where the key is transformed into an index through a hash function. A hash function is a function that takes values of some type (e.g. string, struct, double, etc.) and maps them to an integer index value. We can then use this value to both store and retrieve data from an actual array.

3
Q

What are the two steps to compute an index in a hash function?

A
  1. hash = hash_function(key)

This computes the hash.

  2. index = hash % array_size

This ensures the hash fits within the array size. It is better to use a prime number for the array size, since remainders modulo a prime are more uniformly distributed, and hence there will be fewer collisions.
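A minimal sketch of the two steps in Python; `hash_function` and `array_size` below are illustrative choices, not something the card prescribes:

```python
# Toy hash: sum of character codes. Deterministic but not very uniform;
# it exists only to illustrate the two-step recipe.
def hash_function(key):
    return sum(ord(c) for c in key)

array_size = 13  # a prime, for more uniform remainders

# Step 1: find the hash.
hash_value = hash_function("apple")

# Step 2: ensure the index fits within the array.
index = hash_value % array_size

print(hash_value, index)
```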

4
Q

What are the three important properties of a hash function?

A
  1. Determinism – a given input should always map to the same hash value.
  2. Uniformity – the inputs should be mapped as evenly as possible over the output range. A non-uniform function can result in many collisions, where multiple elements are hashed to the same array index. We’ll look more at this later.
  3. Speed – the function should have a low computational burden.
5
Q

What is DJB2?

A
  • A widely-used hash function:

def hash_djb2(s):
    hash = 5381
    for x in s:
        # hash * 33 + c (bit-shift left 5 places = * 32, plus hash = * 33)
        hash = ((hash << 5) + hash) + ord(x)
    return hash & 0xFFFFFFFF
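As a quick sanity check, here is a self-contained run of DJB2; the expected value is hand-computed (5381 * 33 + 97 = 177670), not taken from the cards:

```python
# Self-contained DJB2 with a hand-checked value.
def hash_djb2(s):
    h = 5381
    for x in s:
        h = ((h << 5) + h) + ord(x)  # h * 33 + ord(x)
    return h & 0xFFFFFFFF

# For "a": 5381 * 33 + 97 = 177670 (below the 32-bit mask, so unchanged)
print(hash_djb2("a"))  # 177670
```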

6
Q

How is a minimally perfect hash function different from a perfect hash function?

A

Both perfect and minimally perfect hash functions map all keys to a distinct integer.

Minimally perfect hash functions take this one step further: they map N keys to exactly the integers 0 to N-1, with each key getting precisely one value. Hence the output of a minimally perfect hash function differs in that the distinct integers are consecutive.

7
Q

How can we handle hash table collisions without probing?

A

We can eliminate collisions entirely if we allow multiple keys to share the same table entry (i.e., array index). To accommodate multiple keys, linked lists can be used to store the individual keys that map to the same entry. The linked lists are commonly referred to as buckets or chains, and this technique of collision resolution is known as chaining.
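A minimal sketch of chaining, assuming Python lists stand in for the linked-list buckets (the class and method names are illustrative):

```python
class ChainedHashTable:
    def __init__(self, m=11):
        self.m = m
        self.buckets = [[] for _ in range(m)]  # one chain per slot

    def _index(self, key):
        return hash(key) % self.m

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collisions simply share the bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.put("apple", 1)
table.put("banana", 2)
print(table.get("apple"))  # 1
```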

8
Q

What is the load factor of a hash table?

A

The load factor of a hash table is the average number of elements in each bucket:

𝝺 = n/m

  • n is the total number of elements stored in the table
  • m is the number of buckets
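A tiny worked example (the numbers are illustrative):

```python
n = 150              # elements stored in the table
m = 100              # buckets
load_factor = n / m  # lambda = n/m
print(load_factor)   # 1.5 -- greater than 1 is possible with chaining
```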
9
Q

Can the load factor be greater than 1 in a hash table?

A

Only in a chained hash table, where multiple items can go in one bucket.

10
Q

In a chained table, what is the average number of links traversed for searches?

A

For a linked list-based chained table, the average number of links traversed for successful searches is 𝝺 / 2. For unsuccessful searches, the average number of links traversed is equal to 𝝺.

11
Q

What is the average-case complexity of a linked list-based chained hash table?

What about worst-case?

A

Assuming good distribution:

The average case for all operations is O(𝝺).
If the number of buckets is adjusted according to the load factor, then the number of elements is at most a constant factor of the number of buckets, i.e. n = O(m), so:

𝝺 = n/m = O(m)/m = O(1).

The worst-case complexity is O(n), since all of the elements might end up in the same bucket.

12
Q

What is probing?

A

Probing occurs in a hash table with open addressing when the slot at the index produced by the hash function is already filled. A new index must then be computed, probing repeatedly until an empty slot is found.

13
Q

Is linear probing good?

A
  • Linear probing: i = (i_initial + j) % m (where j = 1, 2, 3, …)

Linear probing can be problematic due to clustering. As a cluster becomes bigger, collisions become more likely and it takes longer and longer to find an empty slot. Other probing methods may save time by looking further ahead; the % m wraps the index back to the beginning once it passes the end of the array.
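A minimal sketch of linear probing, with illustrative slot contents:

```python
# Scan successive slots until an empty one (None) is found,
# wrapping around with % m.
def linear_probe(table, i_initial):
    m = len(table)
    for j in range(m):
        i = (i_initial + j) % m
        if table[i] is None:
            return i
    return None  # table is full

table = [None, "a", "b", "c", None, None]
print(linear_probe(table, 1))  # probes 1, 2, 3, then finds 4
```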

14
Q

What is quadratic probing? Does it have any drawbacks?

A

Quadratic probing: i = (i_initial + j²) % m (where j = 1, 2, 3, …)

Quadratic probing can be problematic because there may be empty slots in the array that the probe sequence never reaches, so an insertion can fail even when the table is not full.
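The drawback can be demonstrated directly: with m = 8 (not prime), the offsets j² % m cycle through only a few values, so most slots can never be probed from a given starting index:

```python
m = 8
# Offsets reachable by quadratic probing: j*j % m for j = 1, 2, 3, ...
offsets = sorted({(j * j) % m for j in range(1, 2 * m)})
print(offsets)  # [0, 1, 4]: only 3 of the 8 slots are ever probed
```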

15
Q

What is double hashing?

A

A form of probing in which the step size is given by a second hash function, h2, applied to the key. It helps to reduce clustering (which bloats insertion times):

i = ( i_initial + j * h2(key) ) % m (where j = 1, 2, 3, …)
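A minimal sketch; the choice of h2 below (1 + hash % (m − 1)) is a common illustrative step function, not one the card specifies:

```python
def h2(key, m):
    # Never returns 0, so the probe sequence always advances.
    return 1 + (hash(key) % (m - 1))

def double_hash_probe(table, key, i_initial):
    m = len(table)
    step = h2(key, m)
    for j in range(m):
        i = (i_initial + j * step) % m
        if table[i] is None:
            return i
    return None  # table is full
```

With m prime, every step value is coprime to m, so the sequence visits every slot.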

16
Q

What is a tombstone?

A

When an element is removed from an open-addressing table, we store a special tombstone value in its slot rather than simply emptying it.

This prevents removals from breaking the probe sequences of elements that were inserted after a collision at that slot.

The tombstone value can be overwritten when adding a new entry.
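A minimal sketch of tombstones in a linearly probed table; the class, sentinel, and method names are illustrative (a production table would also keep scanning for an existing key before reusing a tombstone):

```python
TOMBSTONE = object()  # sentinel distinct from "empty" (None)

class OpenTable:
    def __init__(self, m=8):
        self.m = m
        self.slots = [None] * m

    def _probe(self, key):
        for j in range(self.m):
            yield (hash(key) + j) % self.m

    def put(self, key, value):
        for i in self._probe(key):
            s = self.slots[i]
            # Empty slots and tombstones can both be (re)used.
            if s is None or s is TOMBSTONE or s[0] == key:
                self.slots[i] = (key, value)
                return

    def remove(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is None:
                return  # not present
            if s is not TOMBSTONE and s[0] == key:
                self.slots[i] = TOMBSTONE  # keep probe chains intact
                return

    def get(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is None:
                return None  # truly empty: stop searching
            if s is not TOMBSTONE and s[0] == key:
                return s[1]
        return None
```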

17
Q

What should be done when a hash table is getting close to full?

A

Just as a dynamic array is doubled in size when necessary, a common solution to a full hash table is to move all values into a new and larger table when the load factor becomes greater than some threshold, such as 0.75. A new table is created, and every entry in the old table is rehashed, this time taking the hash modulo the new table size to find the correct index in the new table.
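A sketch of the resize step for a chained table; the 0.75 threshold matches the card, while the growth factor and function name are assumptions:

```python
def resize_if_needed(buckets, n, threshold=0.75, growth=2):
    m = len(buckets)
    if n / m <= threshold:
        return buckets  # load factor acceptable, no resize
    new_m = m * growth
    new_buckets = [[] for _ in range(new_m)]
    for bucket in buckets:
        for key, value in bucket:
            # Rehash: same hash, but modulo the NEW table size.
            new_buckets[hash(key) % new_m].append((key, value))
    return new_buckets
```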

18
Q

What is the probability that the first probe (index for insertion) is successful?

A

p = (m−n)/m

There are m total slots and n filled slots, so m − n open spots.

19
Q

What is the probability that the second probe (index for insertion) is successful?

A

If the first probe fails, the probability that the second probe succeeds is (m−n)/(m−1).

There are still m − n remaining open slots, but now we only have a total of m − 1 slots to look at, since we have examined one already.

20
Q

Why do hash tables have O(1) operations on average?

A

For each probe, the probability of success is at least p, because (m−n)/(m−c) >= (m−n)/m = p, where c is the number of slots already examined.

The expected number of probes until success is therefore at most:

1/p = 1/((m−n)/m) = 1/(1−n/m) = 1/(1−𝝺)

Thus, the expected number of probes for any given operation is O(1/(1−𝝺)).

If we limit the load factor to a constant and reasonably small number, our operations will be O(1) on average. For example, with 𝝺 = 0.75, the expected number of probes is at most 1/(1 − 0.75) = 4.

21
Q

In chaining, which data structure is appropriate?

A

A singly linked list: chains only require insertion and forward traversal, so a doubly linked list’s extra pointers are unnecessary.

22
Q

Does every key in a hash table need to be unique?

A

Yes. Unique keys are the point of storing key/value pairs: each key must identify exactly one value.

23
Q

Does every key in a hash table need to hash to a unique value?

A

No, but we should use a hash function with as few collisions as possible.

24
Q

Does hash table performance increase or decrease as the number of buckets increases?

A

It should increase. An increase in the number of buckets will more evenly distribute the elements within the indices of the hash table. Also, recall that accessing a bucket in a hash table runs in O(1) time, since it simply needs to run the hash function on a key to find out which bucket it belongs in.

However, unless it is a perfect hash function without any collisions (multiple elements stored at one index), you will still need to traverse all of the elements within that bucket for certain methods like “contains”. This is where having more buckets and a smaller table load factor will increase performance, since more buckets generally mean fewer elements stored at any one index.

25
Q

In open addressing, if the first probe fails, is the probability that the second probe succeeds (m-n)/(n-1)?

A

No.

If the first probe fails, the probability that the second probe succeeds is **(m-n)/(m-1)**.

26
Q

What is the worst-case time complexity to retrieve a value from a hash table with chaining?

A

In the worst case scenario, all keys have hashed to the same bucket and, therefore, all keys are chained together in a linked list. Further, the element you’re searching for may be at the end of the linked list, meaning you have to traverse the full list from head to tail, which is O(n).