Algorithmics Flashcards

1
Q

Stack Operations

A

push
peek
pop
isEmpty
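
These operations can be sketched over a Python list (the `Stack` class and list backing are illustrative, not from the cards):

```python
class Stack:
    """LIFO stack backed by a Python list; the top is the end of the list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def peek(self):
        return self._items[-1]  # look at the top without removing it

    def pop(self):
        return self._items.pop()  # remove and return the top

    def isEmpty(self):
        return len(self._items) == 0
```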

2
Q

Queue Operations

A

enqueue
peek
dequeue
isEmpty
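
A minimal sketch using `collections.deque`, which gives O(1) operations at both ends (the `Queue` wrapper is an illustrative assumption):

```python
from collections import deque


class Queue:
    """FIFO queue backed by collections.deque."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)  # add to the back

    def peek(self):
        return self._items[0]  # look at the front without removing it

    def dequeue(self):
        return self._items.popleft()  # remove and return the front

    def isEmpty(self):
        return len(self._items) == 0
```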

3
Q

Priority Queues Additional Operations

A

insert (giving a priority)
findMin
deleteMin/removeMin - removes and returns the lowest element
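
A sketch of these operations over Python's `heapq` module, which maintains a min-heap in a list (the `PriorityQueue` wrapper is illustrative):

```python
import heapq


class PriorityQueue:
    """Min-priority queue: (priority, item) pairs, lowest priority first."""

    def __init__(self):
        self._heap = []

    def insert(self, item, priority):
        heapq.heappush(self._heap, (priority, item))

    def findMin(self):
        return self._heap[0][1]  # item with the lowest priority value

    def removeMin(self):
        return heapq.heappop(self._heap)[1]  # remove and return it
```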

4
Q

List Operations

A

add - shifts elements to make space
remove
set - replaces element at position
get

5
Q

Set Operations

A

add
remove
contains
size
isEmpty

6
Q

Map Operations

A

put
get
remove
size

7
Q

Skip List

Data Structure

A

A hierarchy of linked lists that allows a binary-search-like descent, giving Θ(log(n)) expected search. Better suited to concurrent access than binary trees

8
Q

Binary Tree

A

A tree (a connected, undirected graph with no cycles) where each node has at most 2 children

9
Q

Binary Search Tree

A

A binary tree where every element in the left subtree is smaller and each element in the right subtree is larger for every parent

10
Q

Successor

Binary Search Tree

A

The next largest element. Found by:
1. If the node has a right child, move right once then left as far as possible
2. Otherwise move up while the node is a right child, then up once more
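
The two rules can be sketched with parent pointers (the `Node` class is an illustrative assumption):

```python
class Node:
    """BST node with a parent pointer, so we can walk upwards."""

    def __init__(self, key, parent=None):
        self.key = key
        self.left = self.right = None
        self.parent = parent


def successor(node):
    """Next-largest node in a BST, following the two rules on the card."""
    if node.right:                      # rule 1: right once, then left as far as possible
        node = node.right
        while node.left:
            node = node.left
        return node
    while node.parent and node is node.parent.right:
        node = node.parent              # rule 2: climb while we are a right child
    return node.parent                  # one step up-right; None if node was the maximum
```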

11
Q

Deletion

Binary Search Tree with 2 children

A

Replace the element with its successor, then remove the successor

12
Q

Single rotation

Binary Search Tree

A

Move the root of the taller subtree up to the top and reattach the middle subtree to the old top element

13
Q

Double Rotation

Binary Search Tree

A

Needed if the large subtree is in the middle. First rotate the large subtree to the outside, then do a normal single rotation

14
Q

AVL trees

A

Binary search tree where, for every node, the heights of the left and right subtrees differ by at most 1

15
Q

AVL tree implementation

A

Each node has a balance factor. If its absolute value exceeds 1, do rotations at the deepest unbalanced node.

16
Q

AVL additions

A

Iterate up the tree updating balance factors (increase if coming from the left). Stop when a balance factor's absolute value exceeds 1 (rotate) or it becomes 0.

17
Q

AVL deletion

A

Requires updating balance factors and rotating, but may need repeated rotations up the tree. Worst case O(log(n)) rotations

18
Q

Red-Black Trees

A
  • Black root
  • Children of red nodes are black
  • Every route from the root to a node with fewer than 2 children passes through the same number of black nodes
19
Q

Complete Tree

A

The lowest level is filled from the left; all other levels are full.

20
Q

Heap

A

A complete tree where every child is >= its parent, so the root is the smallest element (a min-heap)

21
Q

Adding to Heap

A

Add the element to the next free space in the tree and swap it with its parent until the heap order is correct (percolate up)

22
Q

removeMin

heap priority queue

A
  1. Pop root
  2. Replace it with last node in heap
  3. Swap it with its smaller child until heap order is restored (percolate down)
23
Q

Heap Array Implementation

A

Root stored at 0. Children of node at k are at 2k + 1 and 2k + 2
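
A sketch of this array layout with percolate up (adding, card 21) and percolate down (removeMin, card 22), assuming a min-heap; the `MinHeap` class is illustrative:

```python
class MinHeap:
    """Array-backed min-heap: root at 0, children of node k at 2k+1 and 2k+2."""

    def __init__(self):
        self.a = []

    def add(self, x):
        """Append at the next free slot, then percolate up."""
        self.a.append(x)
        k = len(self.a) - 1
        while k > 0 and self.a[(k - 1) // 2] > self.a[k]:
            self.a[k], self.a[(k - 1) // 2] = self.a[(k - 1) // 2], self.a[k]
            k = (k - 1) // 2            # parent of k is at (k-1)//2

    def removeMin(self):
        """Pop the root, move the last element to the top, percolate down."""
        root, last = self.a[0], self.a.pop()
        if self.a:
            self.a[0] = last
            k = 0
            while True:
                c = 2 * k + 1           # left child; pick the smaller child
                if c + 1 < len(self.a) and self.a[c + 1] < self.a[c]:
                    c += 1
                if c >= len(self.a) or self.a[k] <= self.a[c]:
                    break               # no children, or heap order restored
                self.a[k], self.a[c] = self.a[c], self.a[k]
                k = c
        return root
```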

24
Q

Heap Sort

A

Add all elements to a heap, then remove them in order. Worst case O(nlog(n)): 2n add/remove operations, each O(log(n))
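
This add-everything-then-remove-everything scheme can be sketched with `heapq` (a standalone sketch, not an in-place heapsort):

```python
import heapq


def heap_sort(items):
    """Push all n elements onto a min-heap, then pop them off in sorted
    order: 2n operations at O(log n) each, so O(n log n) overall."""
    heap = []
    for x in items:
        heapq.heappush(heap, x)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```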

25
Q

B-tree

A

Balanced trees that keep related data close together. Good for disk access times

26
Q

M-way tree

A

Allows nodes to have many children, greatly reducing the depth of the tree and so the access time

27
Q

B+ Tree

A

All data is stored in the leaves. Non-leaf nodes store between M/2 and M-1 keys, each key corresponding to a child subtree (plus one extra, keyless child at the leftmost position). Each key is the smallest value in its corresponding subtree. All leaves hold between L/2 and L data entries.

28
Q

Trie

A

Multiway tree for storing sets of words. All words end with $. Uses a lot of memory

29
Q

Internal one-way branching

Trie

A

Chains of single-child edges between non-leaf nodes; can be removed by condensing

30
Q

External one-way branching

Trie

A

Edges that reach the leaf nodes, can be removed by condensing

31
Q

Suffix Tree

A

Trie of all suffixes of a string, so every substring is included. Good for finding whether a string occurs in a text

32
Q

Content-addressable memory

A

Where we want to access data based on an object's contents (use a hash table)

33
Q

Turn Hash into address

A

Hash % tableSize

34
Q

Hash Collision

A

Two objects map to the same address

35
Q

Separate Chaining

Hashing

A

Make a linked list at each address to store any hash collisions

36
Q

Open Addressing

A

Finding a new space in the hash table to put hash collisions

37
Q

Linear Probing

A

Hash collisions are moved to the next available space. Can cause clustering, making lookups slow
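
A minimal sketch of the insert step, using a Python list as the table (function name and the not-full assumption are mine):

```python
def probe_insert(table, key):
    """Linear probing: from the hashed slot, step forward one slot at a
    time (wrapping round) until a free slot is found. Assumes the table
    is not full, otherwise this loops forever."""
    i = hash(key) % len(table)          # hash % tableSize, as on card 33
    while table[i] is not None:
        i = (i + 1) % len(table)        # next available space
    table[i] = key
    return i
```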

38
Q

Load Factor

hashing

A

Proportion of full entries in the table

39
Q

Quadratic Probing

hashing

A

Hash collisions are placed in the first available space 1, 4, 9, … spaces away (i^2). Prevents clustering, but may fail to place an element even when the table is not full

40
Q

Double Hashing

A

Hash collisions are placed in the first available space at multiples of a second hash value away (computed by a different hash function). The table size should be prime so that all spaces can be checked.

41
Q

Element Removal when open addressing

Hash Table

A

Can be done by replacing removed elements with a dummy entry (tombstone), counted as full when searching but empty when inserting

42
Q

Union-Find

A

Groups elements together. Operations:
* union (A, B) - Combines the groups containing A and B
* isConnected (A, B) - Checks if A and B are in the same group
* find (A) - Checks what group A is in

43
Q

Dynamic Connectivity

Data Structures

A

A property of a data structure such that it stores objects where some of them are connected. These connections can change. A Union-Find has dynamic connectivity

44
Q

Equivalence class properties

A

A relation on a set of elements such that it is:
* Reflexive - A ∼ A
* Symmetric - if A ∼ B then B ∼ A
* Transitive - if A ∼ B and B ∼ C then A ∼ C
A property of a union-find

45
Q

Quick-Find

A

Uses an array of ids (groups), one entry per element. Elements are in the same class (group) if their ids are the same. Allows O(1) find but O(n) union
* isConnected (A, B) - checks if A and B have the same id
* find (A) - returns A’s id
* union (A, B) - changes id of all elements in group A to the id of group B

46
Q

Quick-Union

A

Elements are stored in trees; a root's parent is itself. The id of an element is its parent, so that is what the array of ids stores. Bad: worst case O(n) for all operations. Operations:
* find (A) - return the id of A’s root
* isConnected (A, B) - true if A and B’s roots are the same
* union (A, B) - Makes A’s root the child of B’s root

47
Q

Weighted (Balanced) Quick-Union

A

Store an additional size[] array holding the number of elements in each tree. For union (A, B), make the root of the smaller tree the child of the root of the larger tree. This bounds tree height, giving O(log(n)) find and union; path compression improves this further
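
Cards 45-48 combine into a short sketch (the `UnionFind` class name is illustrative; path compression here uses the path-halving variant):

```python
class UnionFind:
    """Weighted quick-union with path compression."""

    def __init__(self, n):
        self.parent = list(range(n))    # each element starts as its own root
        self.size = [1] * n             # number of elements in each tree

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path compression
            a = self.parent[a]
        return a

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra             # ra is now the larger tree's root
        self.parent[rb] = ra            # smaller tree goes under larger
        self.size[ra] += self.size[rb]

    def isConnected(self, a, b):
        return self.find(a) == self.find(b)
```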

48
Q

Path Compression

Quick-Union

A

Improvement to quick union where elements visited when finding a root have their id set to the root

49
Q

Stability

Sorting Algorithm

A

Stable if the order of elements with the same value does not change

50
Q

In-Place

Sorting Algorithm

A

If memory used is O(1)

51
Q

Insertion Sort Properties

A

It is stable and in-place. But O(n^2) worst case time complexity. Very good for small arrays

52
Q

Selection Sort

A

Search for the smallest unsorted element and swap it with the element in the first unsorted position

53
Q

Selection Sort Properties

A

In-place but not stable, as swaps can change the order of equal elements. O(n^2) time complexity in all cases, but performs few swaps

54
Q

Loop Invariant

Algorithms

A

Property of a loop that helps show correctness. Must show:
* Initialisation - It is true before the first iteration
* Maintenance - If it is true before an iteration, it remains true after it
* Termination - At loop exit, it can be used to show the algorithm is correct

55
Q

Merge Sort

A

Divide the array into two halves until only 1-element arrays remain, then merge the sorted arrays together until there is 1 array. Stable, O(n) extra space and O(nlog(n)) time
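
A sketch of the divide-and-merge idea (returns a new list rather than sorting in place, matching the O(n) extra space on the card):

```python
def merge_sort(a):
    """Split down to single elements, then merge sorted halves back up."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # one side still has leftovers
```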

56
Q

Merge Sort Improvement

A

Can use insertion sort on small sub-arrays, due to it being fast on small arrays

57
Q

Quicksort

A

Choose a pivot and put smaller elements on its left and larger elements on its right, then recursively repeat on each side until every element has been a pivot. Not stable but in-place; O(n^2) worst case but O(nlog(n)) average

58
Q

Choosing pivot well

quicksort

A
  • Choose randomly to avoid worst case (can be done by shuffling array at start)
  • Choose the median of the first, middle, and last elements
59
Q

Quicksort improvement

A

Use insertion sort on the small arrays

60
Q

Comparison-Based time complexity

A

A lower bound of Ω(nlog(n)), since distinguishing all n! orderings requires at least log2(n!) = Θ(nlog(n)) comparisons

61
Q

Quicksort Partitioning

A

Algorithm for splitting elements lower and higher than pivot

62
Q

Lomuto Partitioning Scheme

Quicksort Partitioning

A

2 sections for lower and higher, with each element sorted into one of them in a single pass. O(n) time complexity
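
A sketch of the scheme (with a small quicksort driver for context; names are illustrative):

```python
def lomuto_partition(a, lo, hi):
    """One O(n) pass: grow the 'lower' section at the front; pivot is a[hi]."""
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:                # element belongs in the lower section
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]           # pivot lands between the two sections
    return i


def quicksort(a, lo=0, hi=None):
    """In-place quicksort using the Lomuto scheme."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = lomuto_partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
```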

63
Q

Hoare Partitioning Scheme

Quicksort Partitioning

A

2 pointers at the ends of the array move towards each other. If the element at a pointer is not correct in relation to the pivot, it is swapped with the equivalent element at the other pointer. Stop when the pointers cross; the pointer that started at the right is the partition point. O(n) time complexity

64
Q

Knuth Shuffle

Quicksort

A

Starting from the last element and working backwards, swap each element with a random one in the unshuffled section. O(n)
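
A sketch of the shuffle (also known as the Fisher-Yates shuffle); the unshuffled section is the prefix `a[0..i]`:

```python
import random


def knuth_shuffle(a):
    """Walk backwards, swapping each element with a uniformly random one
    from the still-unshuffled prefix (including itself): O(n) in total."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)        # random index in the unshuffled section
        a[i], a[j] = a[j], a[i]
```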

65
Q

Multi-pivot Quicksort Benefits

A

Better performance due to reduced cache misses.

66
Q

Radix Sort

A

Every element must be d characters long, with k possible characters. Sort the array by each character, starting from the last, using a stable sort. Θ(d(n + k)) when using counting sort, with d the number of digits and k the base

67
Q

Counting Sort

A

Count the occurrences of each element, then turn the counts into cumulative counts. Loop backwards through the unordered array, placing each element in the position given by the cumulative array, then decrease that count by 1.
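
The counting, cumulative, and backwards-placement steps can be sketched as (assuming non-negative integer keys below k):

```python
def counting_sort(a, k):
    """Sort non-negative ints < k. The backwards pass keeps it stable."""
    count = [0] * k
    for x in a:                         # count occurrences of each value
        count[x] += 1
    for i in range(1, k):               # cumulative: count[x] is now the
        count[i] += count[i - 1]        # number of elements <= x
    out = [None] * len(a)
    for x in reversed(a):               # backwards keeps equal keys in order
        count[x] -= 1
        out[count[x]] = x
    return out
```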

68
Q

Counting Sort Properties

A

Stable but not in-place. O(n + k) time and O(n + k) space, with k the base. Good when n >> k; if k > n then bad, as a large counting structure must be built

69
Q

Adjacency Matrix

Graph

A

Represents a graph as a matrix with a row and column per node. The entry for a pair of nodes is 1 if an edge is present and 0 if not

70
Q

Adjacency List

Graph

A

Represents a graph. Each node stores a list of the nodes it is connected to

71
Q

Euler Trail

Graph

A

Route through a graph where each edge is traversed exactly once. Requires 0 or 2 odd-degree nodes

72
Q

Euler Circuit

Graph

A

An Euler trail whose end node is its start node. Requires 0 odd-degree nodes

73
Q

Euler Circuit Construction

Graph

A
  1. Travel along any adjacent unused edge
  2. Repeat step one until the start node for the loop is reached
  3. Mark the loop and move along it until there is an adjacent unused edge, then go to step 1. If the start is reached first (every edge used), the circuit is done
74
Q

Minimum Spanning Tree

Graph

A

A subgraph that spans all nodes of the parent graph while minimising the total weight of the edges used.

75
Q

Kruskal’s Algorithm

Graph

A

Method for finding a minimum spanning tree. Maintains a forest (set of trees); at each step the safe edge with the lowest weight is added to the forest

76
Q

Safe Edge

Graph

A

An edge that, when added, does not form a cycle and does not prevent a minimum spanning tree from being completed

77
Q

Kruskal’s Implementation

Graph

A

All nodes are initially stored as their own tree in a union-find. When an edge connects two subtrees, a union is done on them. O(E log(E))
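
A sketch combining the sorted-edge loop with an inline union-find (edge format `(weight, u, v)` is an assumption of this sketch):

```python
def kruskal(n, edges):
    """Minimum spanning forest of n nodes: take edges in weight order,
    keeping those whose endpoints lie in different union-find trees."""
    parent = list(range(n))             # each node starts as its own tree

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges):       # the sort dominates: O(E log E)
        ru, rv = find(u), find(v)
        if ru != rv:                    # safe edge: adds no cycle
            parent[ru] = rv             # union the two subtrees
            mst.append((w, u, v))
    return mst
```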

78
Q

Prim’s Algorithm

Graph

A

Method for finding a minimum spanning tree. Maintains a single tree; at each step the lowest-weight safe edge connected to the tree is added to it

79
Q

Prim’s Implementation

Graph

A

Two arrays for the connected nodes and the unconnected nodes. When a node is connected to the tree it is added to the connected array. O(E log(E))

80
Q

Dijkstra’s

Graph

A

Method for finding the shortest route from a root node to every other node
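
A sketch using a binary heap, with the graph as an adjacency dict `{node: [(neighbour, weight), ...]}` and non-negative weights (representation and names are assumptions of this sketch):

```python
import heapq


def dijkstra(graph, source):
    """Shortest distance from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]                # (distance so far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                    # stale entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd            # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist
```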

81
Q

Hamiltonian Path

Graph

A

A path that goes through each node exactly once

82
Q

Dynamic Programming

A

The idea of combining smaller overlapping subproblems to solve a given problem. The subproblems are created recursively.

83
Q

Dynamic Programming Steps

A
  1. Find a recursive algorithm to solve the problem
  2. Remove recalculation of subproblems using memoization/bottom-up
  3. Store the actual solution, not just the values
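
These steps can be sketched on the Fibonacci recurrence, with both de-duplication strategies from the cards:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib_memo(n):
    """Steps 1+2: the recursive algorithm, with memoization bolted on."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)


def fib_bottom_up(n):
    """Same recurrence computed iteratively from the base cases upwards."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```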
84
Q

Memoization

Dynamic Programming

A

The storage of results to sub-problems to prevent their future calculation

85
Q

Bottom-up

Dynamic Programming

A

The calculation of subproblems starting at the base case then working up to the full problem, replacing the recursive algorithm with an iterative one

86
Q

P

Complexity Theory

A

Problems that can be solved in polynomial time

87
Q

NP

Complexity Theory

A

Problems that can have solutions verified in polynomial time

88
Q

NP to P relationship

Complexity Theory

A

P ⊆ NP, since finding a solution in polynomial time also verifies it. For our uses, assume P ⊂ NP (i.e. P ≠ NP)

89
Q

NP-Hard

Complexity Theory

A

Problems in NP-Hard are at least as hard as all problems in NP

90
Q

NP-Complete

Complexity Theory

A

Problems that are in NP-Hard and in NP. i.e. the hardest problems in NP

91
Q

α-approximation algorithm

A

An algorithm that runs in polynomial time and produces a solution that is at most a factor of α of the optimal solution

92
Q

Satisfiability

Algorithms

A

Given a boolean formula of the structure A ∧ B ∧ C ∧ … where each letter is a clause containing A1 ∨ A2 ∨ A3 ∨ …, is there an assignment making the formula true? It is NP-Complete. SAT if you reach an empty formula

93
Q

DPLL Algorithm

Algorithms

A

Two rules:
* If a clause has 1 variable (a unit clause), that variable must be set to satisfy it
* If a variable is pure (its negation does not appear), it can be set to true

94
Q

Branch and Bound

Algorithms

A

When backtracking, instead of trying all possibilities, exclude branches that cannot possibly lead to a solution.

95
Q

Local Search

Algorithms

A

Used in optimisation problems. At each step, move to a neighbouring output that improves the objective. A greedy strategy, so it gets stuck in local optima

96
Q

Simulated Annealing

Algorithms

A

At each step randomly choose a new location. If better, take it; if worse, take it with a probability that is higher when the change is less bad and when the temperature is higher. Probability given by e^(−∆/T)
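
The acceptance rule (the Metropolis criterion) can be sketched as a single function; `delta` is how much worse the new location is:

```python
import math
import random


def accept(delta, temperature):
    """Always take improvements (delta <= 0); take a worsening move of
    size delta with probability e^(-delta/T)."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```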

97
Q

Chromosome

Genetic Algorithms

A

A representation of an individual

98
Q

Gene

Genetic Algorithms

A

A position in a chromosome

99
Q

Allele

Genetic Algorithms

A

A possible value for a gene

100
Q

Steps

Genetic Algorithms

A
  1. Compute fitness of all individuals
  2. Keep some individuals, favouring higher fitness
  3. Mutate the individuals
  4. Crossover individuals to replace dead ones