HTat Flashcards

1
Q

-1: Constant time growth

A

The normal amount of time an instruction takes under the RAM model

2
Q

-Log n: logarithmic

A

Occurs in algorithms that transform a bigger problem into a smaller problem where the input size is a ratio of the original problem; common in searching and trees

3
Q

-n: Linear

A

Algorithms that are forced to pass through all elements of the input (n) a constant number of times yield linear running times

4
Q

n^k log n: polylogarithmic

A

Algorithms that break a problem into smaller parts, solve the smaller problems and combine them into a solution; k >= 1

5
Q

n^2: Quadratic

A

A subset of polynomial solutions. Relatively efficient for small- or medium-scale problems. Typical of algorithms that have to analyse all pairs of elements of the input

6
Q

n^3(+): Cubic

A

Not very efficient but still polynomial. An example of an algorithm in this class is matrix multiplication

7
Q

2^n: exponential

A

This is as bad as testing all possible answers to a problem; when algorithms fall into this category, designers look for approximation algorithms

8
Q

Worst case of an algorithm

A

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size

9
Q

Advantages of finding the worst case of an algorithm

A

-Bounds the running time from above
-Ensures that, no matter the input of size n, the running time can't be worse than C_worst(n)

10
Q

Best case of an algorithm

A

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs the fastest among all the inputs of that size

11
Q

Advantages of finding best case

A

If the best case is not good enough the approach may need to be revisited

12
Q

How can we measure the time efficiency of an algorithm

A

The standard approach is to identify the basic operation(s) and count how many times it is executed.
As a rule of thumb, it is the most time-consuming operation in the innermost loop of the algorithm
The established framework is:
Count the number of times the algorithm’s basic operation is executed for inputs of size n

13
Q

Why can’t we use a standard unit of time to measure the efficiency of an algorithm

A

We can't have influence from external factors such as:
* Dependence on the speed of a particular computer
* Dependence on the quality of the program implementing the algorithm
* Dependence on the quality of the compiler used to generate the executable
* Difficulty of clocking the time precisely

14
Q

What does RAM stand for in the RAM model of computation

A

Random access machine

15
Q

Do we have to take constants into account when talking about growth functions

A

No, for example in a function that looks like 2x^3 + 10x^2 + 3, the only important term for growth is x^3, according to asymptotic notation

16
Q

What is Asymptotic notation?

A

It provides the basic vocabulary for discussing the design and analysis of algorithms

17
Q

Why is Asymptotic notation used

A

It's ubiquitous because it identifies the key points for reasoning about algorithms:
-It's coarse enough to suppress all the details that should be ignored
-It's precise enough to make useful comparisons between different algorithms

18
Q

Big O notation

A

f(n) is O(g(n)) if there exist positive constants c and n0 such that f(n) <= c x g(n) for all n >= n0; informally, O gives an asymptotic upper bound on the growth of a function

19
Q

What is the formula to find the average case of an algorithm

A

(1 x p/n + 2 x p/n + ... + n x p/n) + n x (1 - p)
Where p is the probability of a successful search and p/n is the probability of the first match occurring in the ith position, for every i
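
As a quick check (a standard simplification, assuming the uniform p/n probabilities above), the sum collapses to a closed form:

C_avg(n) = (p/n)(1 + 2 + ... + n) + n(1 - p) = (p/n) x n(n + 1)/2 + n(1 - p) = p(n + 1)/2 + n(1 - p)

For example, with p = 1 (the search always succeeds) this gives (n + 1)/2 comparisons on average, and with p = 0 it gives n.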

20
Q

Did you look over week 4 again?

A

Do it

21
Q

look over lecture 3

A

<

22
Q

look over all of week 3

A

<

23
Q

Data Structures

A

Data structures are the building blocks of programming
-define how data is organised and allocated

24
Q

Definition of ADT?

A

Abstract data types

25
Q

Do data structures have operations

A

Yes they have operations to manipulate data

26
Q

What must be true for a structure to be linear ?

A

-A unique first element
-A unique last element
-Each component has a unique predecessor (bar the first)
-Each component has a unique successor (bar the last)

27
Q

What are the two methods for accessing data in linear data structures

A

Sequential access - elements must all be accessed sequentially
Direct access - any element can be accessed directly, no prior access required

28
Q

Give advantages and disadvantages of using arrays

A

Advantages
-Normally pre-defined in most programming languages
-Elements in arrays can be directly accessed
Disadvantages
-Fixed size

29
Q

Give the definition of a Bernoulli trial

A

An experiment or random process that has exactly two possible outcomes

30
Q

What is a Bernoulli process?

A

A series of executions of Bernoulli trials

31
Q

What is a list?

A

A linear structure that only provides sequential access. It has:
-Two special elements: head and tail
-A homogeneous structure
-Easy insertion and deletion of nodes

32
Q

What are advantages of using lists over arrays

A

-They don't have a fixed size
-They can be used as the basis of several other structures, e.g. queues and stacks
-It is easier to insert and delete nodes

33
Q

How does a linked list work?

A

It consists of a collection of records called nodes, each containing a link member pointing to the next node in the list (the link is implicit)
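
A minimal sketch of that idea in Python (the names Node, data and next_node are illustrative, not from the lectures):

# A node stores its data plus a link (pointer) to the next node.
class Node:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next_node = next_node

# Build the list c -> a -> t by linking nodes together.
head = Node("c", Node("a", Node("t")))

# Traversal follows the links until the implicit end (None) is reached.
node = head
while node is not None:
    print(node.data)
    node = node.next_node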

34
Q

What is a better alternative to lists and why?

A

Lists using dynamic allocation (e.g. linked lists)
-Because each node has a link member, nodes can be stored anywhere in memory without worrying about keeping them contiguous

35
Q

Advantages of linked lists

A

-Fair use of memory
-The size of the list doesn't need to be declared in advance
-Common operations (e.g. insertion, deletion) are cheaper

36
Q

Disadvantages of linked lists?

A

-Algorithms are more complex, therefore harder to understand and debug
-Allocation and deallocation of memory space can cause some overhead

37
Q

What do we have to consider when examining a node in a linked list

A

-The node is the first node
-The node is an interior node(any node not first or last)
-The node is the last node

38
Q

What is a sentinel node?

A

-A placeholder node containing no actual data, designed to simplify operations on the linked list
-For example, if the list is being searched and a trailing sentinel node is reached, the algorithm knows it can stop searching
-If a sentinel node is added at the beginning and the end, then every insertion can be treated as a general insertion, as there will never be a need to insert before the first or after the last node

39
Q

What are circular lists?

A

A variation of a singly linked list wherein the last node points to the first node instead of to nothing

40
Q

When are circular lists useful/used?

A

-They are useful in problems where we aren't interested in which node is the first or the last
-Some problems are better described using circular lists, e.g. the Josephus election
-In this structure the head can point to any node and isn't needed to keep nodes from being lost
-Algorithms using circular lists are simpler

41
Q

What is the Josephus algorithm

A

An algorithm that makes use of circular lists; it navigates a circular list eliminating every kth person until only one remains
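
A small illustrative sketch, assuming every kth person counted around the circle is eliminated (the deque here stands in for a circular list; the names are made up):

# Josephus election simulated with a deque rotated like a circular list.
from collections import deque

def josephus(people, k):
    circle = deque(people)
    while len(circle) > 1:
        circle.rotate(-(k - 1))   # skip k-1 people, so the kth is at the front
        circle.popleft()          # eliminate the kth person
    return circle[0]              # the survivor

print(josephus(["A", "B", "C", "D", "E"], 3))  # eliminates C, A, E, B -> D survives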

42
Q

How can an array be used to implement linked lists?

A

The standard way to do this involves:
-An array of nodes to simulate the memory
-An array of booleans to keep track of free space in the memory

43
Q

What is a doubly linked list?

A

It's similar to a linked list, however each node also has a link to the previous node, so each node is linked to both the node in front of it and the node behind it. In addition to the head, a pointer called the tail is also usually maintained

44
Q

One advantage and disadvantage of doubly linked lists

A

Operations are made easier to understand
More overhead due to increased number of pointers

45
Q

What is a stack?

A

A restricted form of list

46
Q

What are some properties of a stack

A

-Follows a Last In, First Out (LIFO) structure
-Can only refer to the top of the stack
-Only has two operations: one to add an item to the top of the stack (push) and one to remove the item from the top of the stack (pop)
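
A minimal sketch of push and pop using a Python list as the underlying storage (illustrative only):

# Push adds to the top of the stack; pop removes from the top (LIFO).
stack = []
stack.append(1)     # push 1
stack.append(2)     # push 2
stack.append(3)     # push 3
print(stack.pop())  # -> 3, the last item pushed comes off first
print(stack.pop())  # -> 2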

47
Q

Applications of stacks

A

-Expression evaluation
-Runtime memory management
-Search evaluation

48
Q

What is a queue

A

A restricted form of a list

49
Q

What are the properties of a queue?

A

-Implements a first-in, first-out (FIFO) methodology
-Has a reference to the front and the end of the queue
-The most important operations are enqueue and dequeue
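
A minimal FIFO sketch using collections.deque (the names are illustrative):

# Enqueue adds at the back; dequeue removes from the front (FIFO).
from collections import deque

queue = deque()
queue.append("first")    # enqueue
queue.append("second")   # enqueue
print(queue.popleft())   # dequeue -> "first", the earliest item leaves first
print(queue.popleft())   # dequeue -> "second"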

50
Q

What are some application of a queue?

A

-Job scheduling
-Printer spooling
-Multitasking operating systems (timesharing)
-Search problems (keeping track of what part of the search space hasn't been explored yet)

51
Q

What are some special purpose queues and stacks

A

-Double ended queue
-There are modifications that remove duplicates

52
Q

What are the elementary sorting algorithms?

A

-Insertion sort
-Bubble sort
-Selection sort

53
Q

When is a sorting method said to be stable?

A

It preserves the relative order of items with duplicate keys in the list. E.g. when sorting by name and then by age, if we want the primary sorting parameter to be age, sorting by name first ensures that items with the same age value remain sorted alphabetically

54
Q

Why is stability important?

A

-More efficient, as keys aren't swapped if they don't need to be
-Easier to parallelise
-Allows multi-key sorts to be easily implemented via multiple runs of the sorting algorithm

55
Q

How does selection sort work?

A

Finds the index of the maximum element and puts that element in its place in the list (the highest position), repeating for the remaining unsorted part
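
A short sketch of this idea as a max-based, in-place selection sort (an assumption about the exact variant taught; illustrative only):

def selection_sort(lst):
    # Shrink the unsorted region from the right: place the maximum of
    # lst[0..end] into position end on each pass.
    for end in range(len(lst) - 1, 0, -1):
        max_index = 0
        for i in range(1, end + 1):
            if lst[i] > lst[max_index]:
                max_index = i
        lst[max_index], lst[end] = lst[end], lst[max_index]
    return lst

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]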

56
Q

Properties of selection sort

A

-Simple algorithm
-Inefficient for large values of n
-Easily implemented on linked lists

57
Q

What is the worst case of selection sort?

A

The main loop is executed n-1 times, where n is the size of the array:
Number of steps given by n^2/2 - n/2

58
Q

How does Insertion sort work?

A

-We take each element and insert it into the position where it belongs; some elements may have to be shifted to open space
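
A short in-place sketch of that shifting idea (illustrative only):

def insertion_sort(lst):
    for i in range(1, len(lst)):
        key = lst[i]
        j = i - 1
        # Shift larger elements one place to the right to open space for key.
        while j >= 0 and lst[j] > key:
            lst[j + 1] = lst[j]
            j -= 1
        lst[j + 1] = key   # insert key into the position where it belongs
    return lst

print(insertion_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]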

59
Q

When is Insertion sort used

A

-Commonly used when we want to generate another list with the items sorted and keep the original list
-Also useful because it can be done in place without requiring an additional list

60
Q

What is the worst case of insertion sort?

A

The main loop is executed n-1 times, and in the worst case the inner loop is executed up to n-1 times for each iteration of the outer loop. This gives an efficiency of O(n^2)

61
Q

Properties of bubble sort?

A

-One of the simplest and slowest sorting algorithms
-In theory has the same efficiency as selection sort
-After each outer loop iteration one element is placed in its final location
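
A short sketch in which each outer pass bubbles one element into its final location (illustrative only):

def bubble_sort(lst):
    n = len(lst)
    for end in range(n - 1, 0, -1):
        # After this inner pass the largest remaining element sits at index end.
        for i in range(end):
            if lst[i] > lst[i + 1]:
                lst[i], lst[i + 1] = lst[i + 1], lst[i]
    return lst

print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]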

62
Q

Advantages of quicksort

A

The average case is O(n log n)
It only uses a small stack as an auxiliary structure

63
Q

Disadvantages of quicksort

A

Recursive and costly
Worst case O(n^2)
fragile

64
Q

What is the basic idea of quicksort

A

1. Pick one item from the list and call it the pivot
2. Partition the items in the list using the pivot. Reorganise the list so that all the elements that are smaller than the pivot are in the left partition and the right partition contains only elements that are greater than the pivot. You can put equal elements in either partition
3. Use recursion to sort the partitions
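
A compact sketch of those three steps; this version copies elements into new partition lists rather than partitioning in place, which is a simplification rather than necessarily the lecture's version:

def quicksort(lst):
    if len(lst) <= 1:
        return lst
    pivot = lst[0]                                # 1. pick a pivot
    smaller = [x for x in lst[1:] if x <= pivot]  # 2. partition around it
    greater = [x for x in lst[1:] if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(greater)  # 3. recurse

print(quicksort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]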

65
Q

In quicksort, what conditions must be true for the sublists to be considered rearranged

A

-The element chosen as the pivot must be in its final place in the list
-All the elements in list[first], ..., list[i-1] are less than or equal to list[i]
-All the elements in list[i+1], ..., list[last] are greater than list[i]

66
Q

What is the worst case of quicksort

A

While the performance of quicksort depends on how balanced the recursion tree is, the worst case of quicksort yields O(n^2)

67
Q

Worst case of quicksort using recurrence

A

T(n) = T(n-1) + O(n)

68
Q

Best case of quicksort

A

-The depth of recursion is O(log n)

69
Q

How can we avoid the worst case for quicksort

A

We can use the median-of-three rule: choose the median of the first, middle and last elements of the list as the pivot, which makes a badly unbalanced partition much less likely
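
A small sketch of that rule (illustrative only):

def median_of_three(lst, first, last):
    mid = (first + last) // 2
    # The pivot is the median value of the three sampled positions.
    return sorted([lst[first], lst[mid], lst[last]])[1]

data = [8, 3, 5, 9, 1, 7, 2]
print(median_of_three(data, 0, len(data) - 1))  # median of 8, 9 and 2 -> 8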

70
Q

Was mergesort proposed by von Neumann?

A

There is some evidence saying it was, but it's inconclusive, so there is no hard evidence

71
Q

What is the difference between quicksort and mergesort?

A

Quicksort works on the concept of selection, which divides a larger list into two smaller lists, whereas mergesort works on the concept of merging, which is joining two smaller sorted lists together into one larger sorted list
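
A small sketch of the merging step mergesort is built on, joining two already-sorted lists into one (illustrative only):

def merge(left, right):
    merged = []
    i = j = 0
    # Repeatedly take the smaller front element of the two sorted lists.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One list is exhausted; append whatever remains of the other.
    return merged + left[i:] + right[j:]

print(merge([1, 4, 9], [2, 3, 7]))  # [1, 2, 3, 4, 7, 9]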

72
Q

Advantages of mergesort

A

-Worst case is O(n log n)
-Stable and robust
-Works easily with linked lists

73
Q

Drawbacks of mergesort

A

-May require an additional array up to the size of the original list

74
Q

What efficiency does mergesort guarantee

A

Mergesort guarantees O(n log n)

75
Q

What is a radix sort algorithm

A

Radixsort is used when the comparison keys we are using are more complicated.
It's useful because in several applications consideration of the whole key is not necessary; for example, when comparing names alphabetically the whole name is often not relevant

76
Q

What are the main ideas behind radixsort

A

-We could represent keys in binary and work with the individual bits
-We can consider the number in decimal base and work with the individual digits
-We can consider strings as sequences of characters and work with the individual characters

77
Q

What is an MSD radix sort?

A

Stands for most significant digit radix sort; it compares digits from left to right
-MSD sorts tend to be preferred because they are normally able to sort the list by examining just a partial number of the digits
-More complex than LSD sorts

78
Q

What is an LSD radix sort?

A

Least significant digit radix sort; it compares digits from right to left
-Less complex than MSD sorts
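
A small sketch of an LSD radix sort on non-negative integers, handling one decimal digit per pass from right to left (the digit count is passed in explicitly; illustrative only):

def lsd_radix_sort(nums, digits):
    for d in range(digits):
        # Ten buckets, one per decimal digit value; each per-digit pass is stable.
        buckets = [[] for _ in range(10)]
        for x in nums:
            buckets[(x // 10 ** d) % 10].append(x)
        nums = [x for bucket in buckets for x in bucket]
    return nums

print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66], digits=3))
# [2, 24, 45, 66, 75, 90, 170, 802]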

79
Q

How can we represent the keys in radix sort?

A

For smaller and simpler keys, extraction of digits is enough.
For large and non-uniform keys it may be a better idea to use the binary representation of the key.

80
Q

What is the performance of radixsort?

A

For sorting n records with k digits, the running time of radixsort is equivalent to k*n = O(n) for a fixed k
-Clearly this performance depends on the sorting algorithm used to sort the elements based on digit k
-For large values of n, radixsort is comparable to O(n log n) algorithms

81
Q

What is counting search?

A

Counting sort is a sorting algorithm that can be used when we know the range of values in the array

82
Q

What are the basic concepts behind counting search?

A

For each element x, count how many other elements are less than x
* This information tells us the position of x in the sorted list
* If there are 20 elements less than x, then x is in position 21 in the list
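
A tiny sketch of that counting idea, assuming distinct keys so each count gives a unique position (illustrative only):

def counting_positions(lst):
    out = [None] * len(lst)
    for x in lst:
        # The number of elements smaller than x is exactly x's sorted position.
        smaller = sum(1 for y in lst if y < x)
        out[smaller] = x
    return out

print(counting_positions([40, 10, 30, 20]))  # [10, 20, 30, 40]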

83
Q

What is the performance of counting search

A

Counting sort has a performance of O(n + k), where k is the range of the key values; for a fixed range this is O(n)

84
Q

Why might we want to avoid direct access structures?

A

We have to set the size in advance and as we have to overestimate the size we end up with a sparse structure

85
Q

Why might we want to use a hash table?

A

When the number of keys to be stored is much smaller than the total universe of keys

86
Q

What are some advantages of hash tables

A

-They have the advantage of direct access
-We can access elements directly even when the keys are non-trivial

87
Q

What makes a good hash function?

A

It is one that satisfies the assumption of uniform hashing.
This means that each key is equally likely to hash to any of the m slots in the table.

88
Q

What are two common hash functions?

A

The division method: h(k) = k mod m, where m is the size of the hash table; good values of m are crucial
-For instance, if we want to store 4000 numbers and we don't mind doing 4 probes, choose m to be a prime close to 1000

The multiplication method: h(k) = floor(m x (kA mod 1)), where A is a constant between 0 and 1
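
A small sketch of both methods; the value of A below follows Knuth's common suggestion (sqrt(5) - 1)/2, which is an assumption rather than something from the card:

import math

def division_hash(k, m):
    # Division method: good when m is a prime not too close to a power of 2.
    return k % m

def multiplication_hash(k, m, A=(math.sqrt(5) - 1) / 2):
    # Multiplication method: h(k) = floor(m * (k*A mod 1)).
    return math.floor(m * ((k * A) % 1))

print(division_hash(123456, 1009))        # slot in a table of size 1009
print(multiplication_hash(123456, 1024))  # slot in a table of size 1024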

89
Q

What are two types of collision resolutions that we can use for hash tables?

A

Separate chaining
-Each slot of the hash table is a pointer to a dynamic structure (e.g. a linked list)

Open addressing
-When a collision occurs the data is stored in an alternative location in the hash table, determined by which probing method is used
-This is best used when we know the maximum number of entries the hash table has to sustain

90
Q

What are the different methods we can use to redirect data collisions with open addressing?

A

-Linear probing
-Quadratic probing
-Random probing
-Rehashing

91
Q

Linear probing

A

Do a linear search of the table until you find an empty spot
-suffers from primary clustering
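
A small sketch of open addressing with linear probing into a fixed-size table, assuming the table is never full (illustrative only):

def linear_probe_insert(table, key):
    m = len(table)
    slot = key % m
    # Walk forward (wrapping around) until an empty slot is found.
    # (Assumes the table is not full.)
    while table[slot] is not None:
        slot = (slot + 1) % m
    table[slot] = key

table = [None] * 7
for k in (10, 17, 3):        # 10 and 17 both hash to slot 3, forcing a probe
    linear_probe_insert(table, k)
print(table)                 # [None, None, None, 10, 17, 3, None]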

92
Q

Quadratic probing

A

A variant of linear probing where the term being added to the hash result is squared: h(key) + c^2, where c is the probe attempt number
-Suffers from secondary clustering, which is milder than primary clustering

93
Q

Random probing

A

A variant of linear probing where the term added to the hash result is a random number: h(key) + random()

94
Q

Rehashing

A

A series of hashing functions is defined, and if a collision occurs they are used in order until it is resolved

95
Q

How does searching in a hash table work?

A

Searching is an important function in a hash table:
-Hash the target
-Take the value of the hash of the target and go to that slot. If the target exists it must be in that slot
-Search the list in the current slot using a linear search (assuming linked lists)
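
A small sketch of that search under separate chaining, using a list of Python lists as the table (illustrative only):

def chained_search(table, target):
    slot = hash(target) % len(table)   # hash the target to find its slot
    return target in table[slot]       # linear search within that slot's list

# Tiny chained table built by hand for the example.
table = [[] for _ in range(5)]
for key in (12, 7, 22):
    table[hash(key) % 5].append(key)
print(chained_search(table, 22))   # True
print(chained_search(table, 99))   # False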

96
Q

What is the worst case for searching a hash table

A

O(n)

97
Q

What is the best case for searching a hash table

A

O(1)

98
Q

What is the average case for searching a hash table

A

O(1)

99
Q

Vertices in a graph are objects that have properties attached to them, while edges do not normally have properties attached. What is it called if they do?

A

A weighted graph

100
Q

How can we represent a path of nodes in a graph?

A

A path from node A to node B in a graph is a list of nodes where successive nodes are connected in the graph. E.g. a path from C to F is normally represented as (c,a),(a,d),(d,f)

101
Q

When is a graph considered connected?

A

When all nodes are connected to each other via a path

102
Q

How does a non connected graph work?

A

A non connected graph is made up of smaller graphs called components

103
Q

What is a simple path

A

A path in which no node is repeated

104
Q

What is a cycle

A

A simple path with the same first and last node

105
Q

What is a spanning tree?

A

A connected subgraph that contains all the nodes of the graph but has no cycles is called a spanning tree

106
Q

How many edges can be in a graph

A

The number of edges in a graph can range from 0 (zero) to V(V-1)/2, where V is the number of vertices

107
Q

What is a sparse graph

A

We call a sparse graph a graph in which E is much less than V^2
* In general we say that in a sparse graph E = O(V)

108
Q

What is a dense graph

A

We call a dense graph one in which E is close to V^2
* Or we say that in a dense graph E = Θ(V^2)

109
Q

What is a complete graph

A

We call a complete graph a graph in which E is exactly V(V-1)/2
* A complete graph with k nodes is also called a k-clique.

110
Q

When is adjacency list representation best used?

A

Best used when representing sparse graphs

111
Q

What is BFS

A

Breadth first search, a type of graph search

112
Q

What are some properties of breadth first search

A

Similar idea to level order traversals for trees
The algorithm is very simple and is important for other graph algorithms
* for instance Dijkstra’s and Prim’s algorithms use ideas similar to BFS
The breadth-first search in graphs will make use of colors to identify nodes that have been visited
* White nodes are nodes not yet visited
* Grey nodes have been discovered but not visited
* Black nodes have been visited
At the end of the breadth first search, a tree is formed. This tree is called BFS tree.

113
Q

How does breadth first search work?

A

After the search's starting node is discovered, all the nodes attached to it are added to a queue, with the lowest subnodes given the highest priority
The search moves to the next item in the queue and adds all the newly discovered subnodes to the end of the queue, again with the lowest subnodes having the highest priority
The search will move through every node in this manner
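
A small sketch of BFS over an adjacency-list graph; sorting the neighbours stands in for the "lowest subnode first" priority mentioned above (illustrative only):

from collections import deque

def bfs(graph, start):
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()                 # take the next discovered node
        for neighbour in sorted(graph[node]):  # lowest-labelled neighbours first
            if neighbour not in visited:
                visited.append(neighbour)      # discovered: add to the queue
                queue.append(neighbour)
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']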

114
Q

How does a depth first search work?

A

Traverse left subtrees first and then traverse right subtrees

115
Q

How can we use a depth first search on graphs

A

The algorithm is also very similar to the BFS algorithm
* By replacing a queue with a stack we get depth-first search
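
A small sketch making that point literally: the same traversal loop with a stack in place of the queue (illustrative only):

def dfs(graph, start):
    visited = []
    stack = [start]
    while stack:
        node = stack.pop()              # LIFO: the most recently found node first
        if node not in visited:
            visited.append(node)
            for neighbour in sorted(graph[node], reverse=True):
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']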

116
Q

What is a connected undirected graph considered?

A

A tree, provided it contains no cycles

117
Q

What do BFS and DFS tell us about distances in spanning trees

A

BFS: Shortest path – distance from the starting vertex (unweighted graph)
DFS: Traverse through all possible options
Both produce spanning trees

118
Q

What is a weighted graph

A

A graph with a cost attached to the edges

119
Q

What is the minimum spanning tree of a graph?

A

A collection of edges connecting all nodes such that the sum of the weights of the edges is minimum.

120
Q

How does Prim’s algorithm work?

A

1. Start with an arbitrary vertex v as the root of the tree
2. Create a set of visited vertices and add v to it
3. Create a set of unvisited vertices and add all the other vertices to it
4. While the set of unvisited vertices is not empty:
a. Find the edge with the smallest weight that connects a visited vertex to an unvisited vertex
b. Add the unvisited vertex to the set of visited vertices
c. Add the selected edge to the minimum spanning tree
5. Return the minimum spanning tree
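
A compact sketch of those steps on a small weighted adjacency-list graph; the linear scan for the cheapest crossing edge keeps the code short, whereas a priority queue would be the usual optimisation (illustrative only):

def prim(graph, root):
    visited = {root}
    unvisited = set(graph) - visited
    mst = []
    while unvisited:
        # Find the smallest-weight edge from a visited to an unvisited vertex.
        weight, u, v = min(
            (w, a, b)
            for a in visited
            for b, w in graph[a].items()
            if b in unvisited
        )
        visited.add(v)
        unvisited.remove(v)
        mst.append((u, v, weight))
    return mst

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 5, "C": 3},
}
print(prim(graph, "A"))  # [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3)]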

121
Q

what does mst stand for?

A

Minimum spanning tree

122
Q

What is a priority queue

A

A queue where the next item to be removed is the item with the highest priority (in the case of Dijkstra's, that would be the lowest distance)

123
Q

How does Dijkstras algorithm work?

A

-A distance array is created to keep track of how far each vertex is from the start, and a visited array is created to keep track of which vertices have been visited
-The starting vertex is selected
-For every neighbour of the selected vertex, calculate the distance from the starting vertex and add it to a priority queue
-Select the vertex with the lowest priority in the queue and visit its neighbours, calculating the distance from the starting vertex and adding them to the queue
-If a vertex is already in the queue then only replace it if the newly calculated value is smaller
-Repeat
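
A compact sketch using Python's heapq as the priority queue; stale queue entries are skipped rather than replaced in place, which is a common simplification of the step above (illustrative only):

import heapq

def dijkstra(graph, start):
    dist = {v: float("inf") for v in graph}    # distance array
    dist[start] = 0
    visited = set()                            # visited set
    pq = [(0, start)]                          # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)               # vertex with the lowest distance
        if u in visited:
            continue                           # skip stale entries
        visited.add(u)
        for v, w in graph[u].items():
            if d + w < dist[v]:                # found a shorter path to v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 5, "C": 3},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}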

124
Q

What is a directed graph?

A

A more general form of graph
-Edges of the graph now have directions, which means traversal is more difficult
-Directed graphs can represent undirected graphs but not vice versa

125
Q

What is a weakly connected directed graph

A

There is a path (direction is irrelevant) between every pair of vertices

126
Q

What is a strongly connected directed graph?

A

There must be a path (directions included) between every pair of nodes

127
Q

Does depth first search account for directed graphs?

A

No; the algorithm must be repeated starting at a different node if there are still undiscovered nodes at the end of its operation