Algorithms And Complexity Flashcards

1
Q

What is an algorithm?

A
  • To produce a program that solves all instances of a problem, we must specify a general step-by-step procedure for producing the solution to each instance
  • This procedure is an algorithm
  • The algorithm solves the problem
2
Q

What is a parameter?

A
  • variable values used in algorithms

- e.g. for finding a value (x) in a list (s) of size (n), there would be 3 parameters: x, s and n

3
Q

What is a problem?

A

A question we seek to answer

4
Q

What is a solution?

A

Answer to the question asked in that instance

5
Q

What is an Instance?

A

-Each specific assignment of values to the parameters is called an instance of the problem (each time the problem is defined with concrete values)

6
Q

What are the primary and secondary interests of efficiency?

A
  • Primary = running time and operations on data structures (time complexity)
  • Secondary = memory usage (space complexity)
7
Q

How important are algorithms and choosing the right one?

A
  • the bigger the problem, the more evident the need for an efficient algorithm becomes
  • should be considered a technology (same as hardware)
  • choice of algorithm is as important as choice of hardware
8
Q

What is the point of algorithmic analysis?

A
  • measuring amount of work as it varies with the size of data affected
  • measuring performance against another algorithm at the same task
  • predicting performance to solve a task
9
Q

What is algorithm performance?

A
  • expressed as approximate rate of growth of work as size of data grows
  • best, average and worst cases are discovered
  • worst case is the primary issue
10
Q

What is big-o notation?

A
  • estimates rate of algorithm growth rather than time/space resources used
  • not exact measurements
11
Q

What are the rules of Big-o notation?

A
  • discard coefficients, logarithmic bases and constants
  • determine how the time required by repetitive work grows as the size of the data does
  • nested relationships = multiply
  • sequential relationships = add (see the sketch below)
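
A minimal sketch of the multiply/add rules, using two hypothetical functions (the names are illustrative, not from the cards) that loop over a list of size n:

```python
def sequential_work(data):
    """Two loops one after the other: O(n) + O(n), which is still O(n)."""
    total = 0
    for x in data:          # O(n)
        total += x
    for x in data:          # another O(n), sequential -> add
        total -= x
    return total

def nested_work(data):
    """A loop inside a loop: O(n) * O(n) = O(n^2)."""
    pairs = 0
    for _ in data:          # O(n)
        for _ in data:      # O(n) per outer iteration, nested -> multiply
            pairs += 1
    return pairs
```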
12
Q

What is the order of efficiency measures in big-o notation?

A

-fastest to slowest:

O(1), O(logN), O(N), O(NlogN), O(N^2), O(N^3), O(2^N), O(N!)

13
Q

How do we simplify Big-O expressions?

A
  • split into dominant and lesser terms

- discard lesser terms
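
For example, if the total work comes to 3N^2 + 10N + 5, the dominant term is 3N^2; discarding the lesser terms (10N and 5) and the coefficient 3 gives O(N^2).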

14
Q

What else can be measured using big-o?

A
  • memory requirements

- some algorithms may be equal on time but may consume more memory

15
Q

What is big O concerned with and what does it do to a function?

A
  • concerned with eventual behaviour

- puts asymptotic upper bound on function

16
Q

What is Omega?

A
  • describes the least amount of a resource that an algorithm needs for some class of input
  • mainly used to measure the least amount of time
  • lower bound of algorithm
17
Q

What is theta?

A

-intersection between omega and big O

18
Q

What is a problem with growth rate measuring?

A
  • for the average problem it is easy to recognise the growth rate of the algorithm
  • the distinction between upper and lower bound is only worthwhile when you have incomplete knowledge about the thing being measured
19
Q

What is a mistake many people confuse with UB, LB and best/worst case?

A
  • upper bound does not mean worst case and lower bound does not mean best case
  • best and worst case give us a concrete input instance that we can apply to the algorithm description to get a cost measure
  • what is being bounded is the growth rate for the cost, not the cost itself
  • the growth rate applies to the change in cost as the input size changes
20
Q

What is a misconception people have with best and worst case?

A
  • best case does not occur when the input size is small, and worst case does not occur when it is large

- best and worst case instances exist for each possible size of input

21
Q

What is the issue with a fast algorithm? How do we choose algorithms?

A
  • the faster an algorithm is, the more complicated it tends to be to write
  • when choosing we need to consider: the size of the data, the amount of memory you have, the amount of time you have, and whether we want a fast algorithm that produces an OK solution or a slow one that produces a good solution
22
Q

What is the difference between decomposition and recursive decomposition?

A
  • decomposition divides a problem into dissimilar sub-problems

- Recursive decomposition divides the problem into smaller versions of the same problem

23
Q

What properties must problems have to be solved recursively?

A
  • must show self-similarity
  • the structure repeats itself at different levels of scale
  • solving larger problems means solving smaller problems within them
24
Q

What is the base case?

A

-the simplest version of the problem, one that can be solved directly

25
Q

What is recursive case?

A
  • make calls to self to get results for smaller, simpler versions
  • recursive calls must advance to base case
  • results of recursive calls combine to solve larger version
26
Q

What are some problems with recursion?

A
  • can require same resources as alternative approach
  • may be much more or much less efficient
  • for problems with simple iterative solution, iteration is likely best
27
Q

Why use recursion?

A
  • can express problems with clear, elegant, direct code
  • can intuitively model a task that is recursive in nature
  • solution may require recursion
  • iteration may not be an option
28
Q

How do we do recursive decomposition?

A
  • find the recursive sub-structure
  • solve the problem using results from the smaller problems
  • identify the base case
  • the simplest possible case, directly solvable, which the recursion advances to
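
As a small illustration of these steps, here is a hypothetical recursive sum of a list (the function name is illustrative, not from the cards): the base case is the empty list, and the recursive case solves the smaller problem of summing the rest of the list.

```python
def recursive_sum(values):
    # Base case: simplest possible version, directly solvable.
    if not values:
        return 0
    # Recursive case: handle the first element, recur on the remaining,
    # and combine the results to solve the larger version.
    return values[0] + recursive_sum(values[1:])

print(recursive_sum([3, 1, 4, 1, 5]))  # 14
```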
29
Q

What are common patterns in recursion?

A
  • handle the first and/or last element, recur on the remaining
  • divide in half, recur on one or both halves
  • make a choice among options and recur on the updated state
30
Q

What is the difference between functional and procedural recursion?

A
  • functional recursion: the function returns a result

- procedural recursion: the procedure returns void
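
A minimal sketch of the distinction, with hypothetical function names not taken from the cards:

```python
def functional_count(values):
    """Functional recursion: returns a result to the caller."""
    if not values:
        return 0
    return 1 + functional_count(values[1:])

def procedural_print(values):
    """Procedural recursion: returns nothing, acts only through side effects."""
    if not values:
        return
    print(values[0])
    procedural_print(values[1:])
```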

31
Q

What are the time complexities of binary and linear search?

A
  • Binary = O(logN)

- Linear = O(N)
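
A sketch of both searches for a value x in a list s (parameter names follow the earlier card; the binary version assumes s is sorted):

```python
def linear_search(s, x):
    """O(N): check every element in turn."""
    for i, value in enumerate(s):
        if value == x:
            return i
    return -1

def binary_search(s, x):
    """O(logN): halve the search range at each step (s must be sorted)."""
    low, high = 0, len(s) - 1
    while low <= high:
        mid = (low + high) // 2
        if s[mid] == x:
            return mid
        if s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```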

32
Q

What is interpolation?

A
  • exploits additional knowledge about the values
  • similar to binary search but uses a different method to find the midpoint
  • mid = low + ((x - s(low)) / (s(high) - s(low))) * (high - low)
  • average case is O(log(logN))
  • worst case is O(N)
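
A sketch of interpolation search using the midpoint formula above, assuming s is sorted and the values are roughly uniformly distributed:

```python
def interpolation_search(s, x):
    low, high = 0, len(s) - 1
    while low <= high and s[low] <= x <= s[high]:
        if s[high] == s[low]:              # avoid division by zero
            return low if s[low] == x else -1
        # mid = low + ((x - s[low]) / (s[high] - s[low])) * (high - low)
        mid = low + (x - s[low]) * (high - low) // (s[high] - s[low])
        if s[mid] == x:
            return mid
        if s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```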
33
Q

What is robust interpolation search?

A
  • in the worst case normal interpolation search is the same as sequential search
  • uses a variable gap
  • gap = (high - low + 1)^(1/2)
  • mid is then calculated using the normal formula
  • after that, mid is adjusted using a special formula
  • O(log(logN)) average case
  • O((logN)^2) worst case, which is worse than binary search but better than interpolation search
34
Q

What is the search tree algorithm?

A
  • in a binary tree every node has at most 2 children
  • go down the tree comparing the value to each node: if the value you are after is smaller go to the left branch, and vice versa
  • average is O(logN)
  • worst is O(N)
  • a balanced binary search tree guarantees O(logN)
  • keeps the data sorted automatically
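
A minimal binary search tree lookup, assuming a simple node class (the class is not defined in the cards and is only for illustration):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def bst_search(node, x):
    """Go left if x is smaller than the current key, right if larger."""
    while node is not None:
        if x == node.key:
            return node
        node = node.left if x < node.key else node.right
    return None
```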
35
Q

What are bubble and insertion sort?

A
  • bubble sort repeatedly swaps adjacent out-of-order elements; easy but slow
  • insertion sort is based on sequential comparison, similar to sorting playing cards into order; easy to implement, cheap on space but not on time
  • insertion sort is used when the data is nearly ordered
  • best case is O(N)
  • worst case is O(N^2)
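
A sketch of insertion sort: like sorting cards, each element is slid left into place among the already-sorted prefix.

```python
def insertion_sort(a):
    """Best case O(N) on nearly ordered data, worst case O(N^2)."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one step right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```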
36
Q

What is merge sort?

A

-D&C
-very fast: O(nlog(n))
-split -> merge
-easy split, hard join
-memory hungry
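
A sketch of merge sort: the split is trivial (cut the list in half), the work is in the join (merge), and the extra lists built during the merge are what make it memory hungry.

```python
def merge_sort(a):
    """O(nlog(n)); uses extra memory for the merged lists."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # easy split
    right = merge_sort(a[mid:])
    # hard join: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```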

37
Q

What is quick sort?

A
  • D&C
  • Hard split, easy join
  • Pivot
  • Best = O(nlog(n))
  • worst = O(n^2)
  • pivot choice decided in many ways: can be the first or last item, random, etc.
  • can be the median value (the algorithm to calculate this takes O(n) but will give quick sort a guaranteed O(nlog(n)))
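
A sketch of quick sort using the last item as the pivot (one of the simple pivot choices mentioned above): the split (partition) is the hard part, and the join is trivial because the halves end up in place.

```python
def quick_sort(a, low=0, high=None):
    """Best/average O(nlog(n)); worst O(n^2) on unlucky pivot choices."""
    if high is None:
        high = len(a) - 1
    if low < high:
        pivot = a[high]                  # pivot: last item
        i = low
        for j in range(low, high):       # hard split: partition around the pivot
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[high] = a[high], a[i]    # pivot lands in its final position
        quick_sort(a, low, i - 1)        # easy join: recur on each side
        quick_sort(a, i + 1, high)
    return a
```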
38
Q

Merge vs quicksort

A
  • QS is not as memory hungry
  • QS moves elements into position more quickly
  • Merge will always be O(nlog(n)) but quick sort can be as slow as O(n^2)
39
Q

What is a priority queue?

A
  • a restricted list of items arranged by priority values
  • items enter the queue based on their priority level; they won't simply be stuck on the back
  • the item removed must be the highest priority of all values in the queue; if more than one item has the same priority, the one that has been in the queue the longest is removed
40
Q

What do these operations mean;

1) Insert(S,x)
2) max(s)
3) Extract_max(s)
4) increase_key(s,x,k)

A

1) insert value x into set s
2) return element of S with largest key
3) return element of S with largest key and remove it from S
4) increase the value of element X’s key to new value k
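
These operations map loosely onto Python's heapq module, which provides a min-heap; a sketch of a max-priority queue built on top of it (negating the keys is an assumption of this sketch, and increase_key has no direct heapq equivalent, so it is omitted):

```python
import heapq

class MaxPriorityQueue:
    def __init__(self):
        self._heap = []                   # stores negated numeric keys

    def insert(self, x):                  # Insert(S, x)
        heapq.heappush(self._heap, -x)

    def max(self):                        # max(S): largest key without removing it
        return -self._heap[0]

    def extract_max(self):                # Extract_max(S): largest key, removed
        return -heapq.heappop(self._heap)
```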

41
Q

What is a complete binary tree (CBT)

A
  • a binary tree with leaves on either a single level or on two adjacent levels such that the leaves on the bottommost level are placed as far left as possible
  • and such that nodes appear in every possible position in each level above the bottom level
42
Q

What is a heap?

A
  • implementation of a priority queue
  • an array visualised as a nearly CBT
  • max heap property = the key of a node is greater than or equal to the keys of its children
  • min heap = opposite
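
In the array view of the nearly complete binary tree, parent and child positions are pure index arithmetic; a sketch assuming 0-based indices (with 1-based indices the children would be at 2i and 2i+1):

```python
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2
```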
43
Q

What do the following operations do?

1) build_max_heap
2) max_heapify
3) insert
4) extract_max
5) heapsort

A

1) produce a max_heap from an unordered array
2) correct a single violation of the heap property in a sub tree of its root
3) inserts a new element
4) extracts the element with the highest key
5) sorts an array in place

44
Q

How do we build a heap?

A
  • built in a bottom-up manner
  • the elements in the second half of the array are all leaves of the tree; each one is a one-element heap to begin with
  • build_max_heap goes through the remaining nodes of the tree and runs max_heapify on each one
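
A sketch of max_heapify and the bottom-up build_max_heap described above (0-based list indices and these exact signatures are assumptions of this sketch):

```python
def max_heapify(a, i, n):
    """Correct a single violation of the max-heap property at index i."""
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)        # the violation may move down a level

def build_max_heap(a):
    """Bottom-up build: the second half of the array are already one-element heaps."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # remaining (non-leaf) nodes
        max_heapify(a, i, n)
```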
45
Q

How do we use heaps to sort?

A
  • the max element is at A[1]
  • swap elements A[n] and A[1]; now the max element is at the end of the array
  • discard node n from the heap
  • the new root may violate the max heap property, so use max_heapify to fix it
  • go back to step 2 unless the heap is empty
  • O(nlog(n))
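
A self-contained heapsort sketch following the steps above, using 0-based list indices (an assumption of this sketch; the card numbers nodes from 1). It repeats the heapify idea from the previous card so that it runs on its own.

```python
def heapsort(a):
    """O(nlog(n)) in-place sort using a max heap."""
    def sift_down(i, n):
        # Push the value at i down until the max-heap property holds.
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < n and a[left] > a[largest]:
                largest = left
            if right < n and a[right] > a[largest]:
                largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]       # move the current max to the end
        sift_down(0, end)                 # fix the heap on the shrunken range
    return a
```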
46
Q

How do we add values to a heap?

A
  • create new leaf

- values on the path from the new leaf to the root are copied down until a place for the new value is found