DATA STRUCTURES 8 Flashcards

1
Q

O( g(n) )

A

a ≤ b; f(n) ≤ c·g(n) for some constant c > 0 and all sufficiently large n

2
Q

Ω( g(n) )

A

a ≥ b; f(n) ≥ c·g(n) for some constant c > 0 and all sufficiently large n

3
Q

Θ( g(n) )

A

a = b; c₁·g(n) ≤ f(n) ≤ c₂·g(n) for some constants c₁, c₂ > 0 and all sufficiently large n

4
Q

ω( g(n) )

A

a > b; f(n) > c·g(n) for every constant c > 0 and all sufficiently large n

5
Q

o( g(n) )

A

a < b; f(n) < c·g(n) for every constant c > 0 and all sufficiently large n

6
Q

O( n² )

A

bubble sort; Shell sort; quicksort; selection sort; insertion sort

7
Q

O( log n )

A

binary search

8
Q

When parameters are passed ______, the formal parameters of a function refer to _______ of the actual parameters which are discarded as soon as the function call returns.

A

by-value; copies

9
Q

void addOne( long x )

A

passed by-value

10
Q

void addOneByRef( long& x )

A

passed by-reference
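
A minimal C++ sketch (not part of the original cards) contrasting the two signatures from cards 9 and 10: addOne receives a copy, so the caller's variable is unchanged; addOneByRef receives a reference, so the change is visible to the caller.

#include <iostream>

void addOne(long x) { x = x + 1; }        // by-value: modifies a discarded copy
void addOneByRef(long& x) { x = x + 1; }  // by-reference: modifies the caller's variable

int main() {
    long n = 5;
    addOne(n);                 // n is still 5
    addOneByRef(n);            // n is now 6
    std::cout << n << '\n';    // prints 6
    return 0;
}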

11
Q

Function overloading

A

Overloaded functions must differ in their parameter lists (number and/or types of parameters), e.g., one version with 2 parameters and one with 3. They may also return different value types, but a different return type alone is not enough to distinguish overloads.
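
A small illustrative sketch (function name and parameters are hypothetical, not from the card):

// Two overloads of the same name, distinguished by their parameter lists
// (one takes 2 parameters, the other 3; the return types differ too).
int    add(int a, int b)                 { return a + b; }
double add(double a, double b, double c) { return a + b + c; }

// add(1, 2)           -> calls the 2-parameter version
// add(1.0, 2.0, 3.0)  -> calls the 3-parameter version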

12
Q

*p = ?

A

value of variable pointed to

13
Q

p = ?

A

address of variable pointed to

14
Q

&x = ?

A

address of variable
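
A short sketch tying cards 12–14 together (variable names are illustrative):

#include <iostream>

int main() {
    int x = 42;
    int* p = &x;               // &x : the address of x
    std::cout << p  << '\n';   // p  : the address of the variable pointed to (same as &x)
    std::cout << *p << '\n';   // *p : the value of the variable pointed to (42)
    *p = 7;                    // writing through the pointer changes x itself
    std::cout << x << '\n';    // prints 7
    return 0;
}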

15
Q

N + (N - 1) + … + 2 + 1 =

A

N(N+1)/2
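
A quick way to see this (a standard pairing argument, added as an aside): pair the first term with the last, the second with the second-to-last, and so on; each pair sums to N + 1 and there are N/2 pairs, giving

N + (N − 1) + … + 2 + 1 = (N/2)·(N + 1) = N(N+1)/2   (e.g., N = 4: 4 + 3 + 2 + 1 = 10 = 4·5/2)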

16
Q

Time for searching a sorted array (binary search)?

A

O( log n )

17
Q

Time for Traveling Salesman?

A

O( n! )

18
Q

Time for one loop?

A

O( n )

19
Q

Time for a loop within a loop (nested loops)?

A

O( n² )
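
An illustrative sketch (not from the cards) of cards 18 and 19: the body of a single loop runs n times, while the inner body of a nested loop runs n · n times.

// O(n): the body executes n times.
long countSingle(int n) {
    long count = 0;
    for (int i = 0; i < n; ++i)
        ++count;
    return count;              // == n
}

// O(n²): the inner body executes n * n times.
long countNested(int n) {
    long count = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            ++count;
    return count;              // == n * n
}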

20
Q

Static memory is created on the _____?

A

Stack

21
Q

Dynamic memory is created on the ____?

A

Heap

22
Q

Dynamic memory must be ____ via _____?

A

released; delete

23
Q

Static memory is freed when it goes out of ____?

A

Scope
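
A sketch (assumed example, not from the cards) showing the ideas of cards 20–23 together:

void example() {
    int a = 10;               // static (automatic) storage: lives on the stack,
                              // freed automatically when it goes out of scope
    int* p = new int(20);     // dynamic storage: allocated on the heap
    // ... use a and *p ...
    delete p;                 // dynamic memory must be released via delete
}                             // leaving the scope frees a; forgetting delete would leak *p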

24
Q

______ is a data structure that uses a ____ function to map identifying values, known as keys (e.g., a person’s name), to their associated values (e.g., their telephone number).

A

Hash Table; Hash
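
For illustration (not part of the card): C++'s std::unordered_map is a hash table mapping keys to values, e.g., names to telephone numbers (the entries here are made up).

#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // A hash function maps each key (name) to a bucket holding its value (phone number).
    std::unordered_map<std::string, std::string> phoneBook;
    phoneBook["Alice"] = "555-1234";
    phoneBook["Bob"]   = "555-9876";
    std::cout << phoneBook["Alice"] << '\n';   // average-case O(1) lookup
    return 0;
}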

25
Q

What are hash tables good for?

A

The main advantage of hash tables over other table data structures is speed. This advantage is more apparent when the number of entries is large (thousands or more). Hash tables are particularly efficient when the maximum number of entries can be predicted in advance, so that the bucket array can be allocated once with the optimum size and never resized.

26
Q

In what situations are hash tables not likely to be useful?

A

The main issue is the flexibility of the table in terms of size. Once you have chosen a particular size m for the array, you will also commit to a particular hash function h whose range is exactly {0,…,m-1}. If you do not have a reasonably tight estimate for the size of the set S that you’d like to store in your hash table ahead of time, hash tables may not be the best answer. Hash tables become quite inefficient when there are many collisions.

27
Q

What are the worst-case running times of the hash table algorithms for insertion, removal and search?

A

For the worst choice of hash function (e.g., one that sends every key to the same bucket), every insertion causes a collision and the hash table degenerates to a linear search: inserting k keys takes Ω(k) amortized comparisons per insertion, and a successful lookup can take up to k comparisons. In other words, insertion, removal, and search are all O(n) in the worst case.

28
Q

What are the average-case running times for hash tables, and what are the caveats regarding the "average case"?

A

The average-case time for searching (as well as insertion and removal) is still Θ(1), but only under assumptions about the hash function h (e.g., simple uniform hashing) and provided the load factor is kept bounded.

29
Q

What is a universal hashing family?

A

A family H of hash functions is a universal hash family if, for any two distinct keys x ≠ y in the universe, the probability that they collide (h(x) = h(y)) is at most 1/m when the hash function h is drawn uniformly at random from H (m being the table size).
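
A sketch of one classic universal family, h_{a,b}(x) = ((a·x + b) mod p) mod m with p prime and larger than any key; the types and function names below are illustrative assumptions, not from the card.

#include <random>

// One member h_{a,b} of the family: h(x) = ((a*x + b) mod p) mod m,
// where p is a prime larger than any key, 1 <= a < p, 0 <= b < p.
// (Keys are assumed small enough that a*x does not overflow.)
struct UniversalHash {
    unsigned long long a, b, p, m;
    unsigned long long operator()(unsigned long long x) const { return ((a * x + b) % p) % m; }
};

// Drawing a and b at random is what "h is drawn randomly from H" means.
UniversalHash drawHash(unsigned long long p, unsigned long long m, std::mt19937_64& rng) {
    std::uniform_int_distribution<unsigned long long> pickA(1, p - 1), pickB(0, p - 1);
    return UniversalHash{pickA(rng), pickB(rng), p, m};
}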

30
Q

In the context of hash tables, what problem does universal hashing families solve?

A

Guarantees a low number of collisions in expectation, even if the data is chosen by an adversary.

31
Q

What is recursion?

A

See: What is recursion? In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: (1) a simple base case (or cases), and (2) a set of rules that reduce all other cases toward the base case.

32
Q

Recursive solution to the Towers of Hanoi?

A

To move n discs from peg A to peg C:
(1) Move n−1 discs from A to B. This leaves disc n alone on peg A.
(2) Move disc n from A to C.
(3) Move n−1 discs from B to C so they sit on disc n.
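
A direct C++ transcription of this recursion (peg names are just labels; the sketch assumes the three steps above):

#include <iostream>

// Move n discs from peg 'from' to peg 'to', using peg 'via' as the spare.
void hanoi(int n, char from, char to, char via) {
    if (n == 0) return;                  // base case: nothing to move
    hanoi(n - 1, from, via, to);         // (1) move n-1 discs out of the way
    std::cout << "move disc " << n << " from " << from << " to " << to << '\n';  // (2)
    hanoi(n - 1, via, to, from);         // (3) move the n-1 discs back on top of disc n
}

int main() {
    hanoi(3, 'A', 'C', 'B');             // move 3 discs from A to C
    return 0;
}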

33
Q

What is a Binary Search Tree (BST)?

A

A binary search tree (BST), which may sometimes also be called an ordered or sorted binary tree, is a node-based binary tree data structure which has the following properties: (1) The left subtree of a node contains only nodes with keys less than the node’s key. (2) The right subtree of a node contains only nodes with keys greater than the node’s key. (3) Both the left and right subtrees must also be binary search trees.
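
A minimal node structure and search sketch consistent with these properties (names are illustrative):

struct Node {
    int   key;
    Node* left;    // keys in the left subtree are  < key
    Node* right;   // keys in the right subtree are > key
};

// Search: the ordering property lets us discard one subtree at each step.
Node* search(Node* root, int k) {
    if (root == nullptr || root->key == k) return root;
    return (k < root->key) ? search(root->left, k) : search(root->right, k);
}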

34
Q

What are the traversals for a BST?

A

Pre-Order; In-Order; Post-Order; Breadth First; Depth First;
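
The three depth-first orders differ only in where the node itself is visited relative to its subtrees; a sketch, assuming the Node struct shown under card 33:

#include <iostream>

void preOrder(Node* n)  { if (!n) return; std::cout << n->key << ' '; preOrder(n->left); preOrder(n->right); }
void inOrder(Node* n)   { if (!n) return; inOrder(n->left); std::cout << n->key << ' '; inOrder(n->right); }   // visits keys in sorted order
void postOrder(Node* n) { if (!n) return; postOrder(n->left); postOrder(n->right); std::cout << n->key << ' '; }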

35
Q

Worst case running time for pre/in/post order traversal?

A

It takes O(n) time to walk (in-order, pre-order and post-order) a tree of n nodes.

36
Q

Worst case running time for deletion for a BST?

A

The procedure runs in O(h) time on a tree of height h.

37
Q

Worst case running time for insertion for a BST?

A

It runs in O(h) time on a tree of height h.

38
Q

Worst case running time for Breadth First Traversal of a BST?

A

The total running time for breadth-first search is O(V + E), where V is the number of vertices and E is the number of edges.

39
Q

Worst case running time for Depth First Traversal of a BST?

A

The running time of DFS is Θ(V + E).

40
Q

For graphs, in an ______ ____ representation, we keep, for each vertex in the graph, a list of all other vertices which it has an edge to.

A

Adjacency List

41
Q

For graphs, an ______ ____ is a means of representing which vertices (or nodes) of a graph are adjacent to which other vertices.

A

Adjacency Matrix

42
Q

When might an adjacency list be better than an adjacency matrix?

A

It is easy to find all vertices adjacent to a given vertex in an adjacency list representation: you simply read its adjacency list. With an adjacency matrix you must instead scan an entire row, taking O(n) time. If you instead want to perform a neighbor test on two vertices (i.e., determine whether they have an edge between them), an adjacency matrix provides the answer in constant time. For a sparse graph, an adjacency list representation occupies less space, because it does not use any space to represent edges that are not present; the graph must be sparse for an adjacency list to be more memory-efficient than an adjacency matrix.
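
An illustrative sketch of the two representations (hypothetical, using std::vector):

#include <vector>

void buildRepresentations(int n) {                // n = number of vertices
    // Adjacency list: for each vertex u, the list of vertices u has an edge to.
    std::vector<std::vector<int>> adjList(n);
    adjList[0].push_back(1);                      // add edge 0 -> 1: space grows with the number of edges

    // Adjacency matrix: adjMatrix[u][v] is true iff there is an edge u -> v.
    std::vector<std::vector<bool>> adjMatrix(n, std::vector<bool>(n, false));
    adjMatrix[0][1] = true;                       // O(1) neighbor test, but Θ(n²) space
}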

43
Q

How does BFS work?

A

Breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then, for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal. BFS uses a QUEUE. See: http://www.youtube.com/watch?v=or9xlA3YYzo&feature=player_detailpage#t=271s
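
A BFS sketch over an adjacency list, using a queue as described (illustrative code, not from the card):

#include <queue>
#include <vector>

// Returns the order in which vertices are first visited, starting from 'source'.
std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int source) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> order;
    std::queue<int> q;                  // BFS uses a QUEUE
    visited[source] = true;
    q.push(source);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : adj[u])            // explore all neighbors before going deeper
            if (!visited[v]) { visited[v] = true; q.push(v); }
    }
    return order;
}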

44
Q

How does DFS work?

A

Depth-first search (DFS) is an algorithm for traversing or searching a tree, tree structure, or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking. DFS uses a STACK. See: https://www.youtube.com/watch?v=or9xlA3YYzo
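
A DFS sketch with an explicit stack (the recursive version uses the call stack instead); illustrative code:

#include <stack>
#include <vector>

std::vector<int> dfs(const std::vector<std::vector<int>>& adj, int source) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> order;
    std::stack<int> s;                  // DFS uses a STACK
    s.push(source);
    while (!s.empty()) {
        int u = s.top(); s.pop();
        if (visited[u]) continue;
        visited[u] = true;
        order.push_back(u);
        for (int v : adj[u])            // go as deep as possible before backtracking
            if (!visited[v]) s.push(v);
    }
    return order;
}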

45
Q

What is BFS good for?

A

(1) Testing bipartiteness: start the search at any vertex and give alternating labels to the vertices visited during the search; that is, give label 0 to the starting vertex, 1 to all its neighbours, 0 to those neighbours' neighbours, and so on. If at any step a vertex has (visited) neighbours with the same label as itself, then the graph is not bipartite; if the search ends without such a situation occurring, then the graph is bipartite.
(2) Finding the shortest path (fewest edges) between two nodes u and v.
(3) The set of nodes reached by a BFS forms the connected component containing the starting node.
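
A sketch of the bipartiteness test in (1), assuming a connected graph given as an adjacency list (illustrative code):

#include <queue>
#include <vector>

// Returns true if the (connected) graph is bipartite.
bool isBipartite(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<int> label(adj.size(), -1);   // -1 = unvisited, otherwise 0 or 1
    std::queue<int> q;
    label[start] = 0;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u]) {
            if (label[v] == -1) { label[v] = 1 - label[u]; q.push(v); }
            else if (label[v] == label[u]) return false;   // neighbour with the same label
        }
    }
    return true;
}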

46
Q

When is a graph useful?

A

Graphs are useful for representing relationships between objects, places, or concepts. Because graphs can be either directed or undirected, they are a flexible way of showing connections. For instance, you can describe who knows whom in a room as a collection of nodes, each representing a person, with directed edges, each representing that one person knows another.

47
Q

What is abstraction in C++?

A

In object-oriented programming languages such as C++, abstraction is expressed declaratively using the virtual keyword; after such a declaration, it is the programmer's responsibility to implement a derived class before objects of that type can be instantiated. An abstract class is a class designed specifically to be used as a base class. An abstract class contains at least one pure virtual function, which you declare by using a pure specifier (= 0) in the declaration of a virtual member function in the class declaration:

class AB {
public:
    virtual void f() = 0;
};
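
A brief usage sketch (the class name Concrete is illustrative): AB itself cannot be instantiated; a derived class must implement f() before objects can be created.

#include <iostream>

class Concrete : public AB {                        // AB is the abstract class declared above
public:
    void f() { std::cout << "Concrete::f()\n"; }    // implements the pure virtual function
};

int main() {
    // AB a;            // error: cannot create an object of an abstract class
    Concrete c;         // fine: Concrete implements f()
    AB& ref = c;        // use it through the abstract interface
    ref.f();            // dispatches to Concrete::f()
    return 0;
}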

48
Q

Example of Polymorphism in C++?

A

class CPolygon {
protected:
    int width, height;
public:
    void set_values(int a, int b) { width = a; height = b; }
    virtual int area() { return 0; }      // virtual, so derived classes can override it
};

class CRectangle : public CPolygon {
public:
    int area() { return width * height; }
};

Calling area() through a CPolygon pointer or reference that actually refers to a CRectangle invokes CRectangle::area() at run time; that run-time dispatch is the polymorphism.

49
Q

How does one declare one class as a derived class of another?

A

Use of the "public" keyword:

class Derived : public Base { /* ... */ };

50
Q

Can you think of an instance where a virtual destructor would be a good idea?

A

It is a good idea to use a virtual destructor in any class that will be used as an interface (i.e., that has any other virtual method). If your class has no virtual methods, you may not want to declare the destructor virtual because doing so will require the addition of a virtual table pointer.
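
A sketch of the classic case (class names are illustrative): deleting a derived object through a base-class pointer only works correctly when the base destructor is virtual.

#include <iostream>

struct Base {
    virtual void doWork() {}
    virtual ~Base() { std::cout << "~Base\n"; }   // virtual destructor: Base is an interface
};

struct Derived : Base {
    ~Derived() { std::cout << "~Derived\n"; }     // runs only because ~Base is virtual
};

int main() {
    Base* p = new Derived;
    delete p;    // calls ~Derived then ~Base; without a virtual ~Base this is undefined behaviour
    return 0;
}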

51
Q

How does insertion sort work?

A

Every repetition of insertion sort removes an element from the input data, inserting it into the correct position in the already-sorted list, until no input elements remain. The choice of which element to remove from the input is arbitrary, and can be made using almost any choice algorithm.
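
A standard in-place array sketch of this process (illustrative, not from the card):

#include <vector>

void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];                        // next element to insert
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {      // shift larger sorted elements right
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;                            // insert into its correct position
    }
}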

52
Q

How does selection sort work?

A

(1) Find the minimum value in the list.
(2) Swap it with the value in the first position.
(3) Repeat the steps above for the remainder of the list (starting at the second position and advancing each time).
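
The same three steps as code (illustrative sketch):

#include <utility>
#include <vector>

void selectionSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        std::size_t minIdx = i;
        for (std::size_t j = i + 1; j < a.size(); ++j)   // (1) find the minimum of the remainder
            if (a[j] < a[minIdx]) minIdx = j;
        std::swap(a[i], a[minIdx]);                      // (2) swap it into position i
    }                                                    // (3) repeat for the rest of the list
}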

53
Q

How does merge sort work?

A

(1) If the list is of length 0 or 1, then it is already sorted. Otherwise:
(2) Divide the unsorted list into two sublists of about half the size.
(3) Sort each sublist recursively by re-applying the merge sort.
(4) Merge the two sublists back into one sorted list.
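
A sketch of these steps (illustrative; uses an auxiliary vector for the merge):

#include <vector>

static void merge(std::vector<int>& a, int lo, int mid, int hi) {
    std::vector<int> tmp;
    int i = lo, j = mid;
    while (i < mid && j < hi) tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) tmp.push_back(a[i++]);
    while (j < hi)  tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k) a[lo + k] = tmp[k];   // copy the merged run back
}

void mergeSort(std::vector<int>& a, int lo, int hi) {   // sorts a[lo, hi)
    if (hi - lo <= 1) return;                           // (1) length 0 or 1: already sorted
    int mid = lo + (hi - lo) / 2;                       // (2) split in half
    mergeSort(a, lo, mid);                              // (3) sort each half recursively
    mergeSort(a, mid, hi);
    merge(a, lo, mid, hi);                              // (4) merge the sorted halves
}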

54
Q

How does quick sort work?

A

Quicksort is a divide-and-conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists. The steps are:
(1) Pick an element, called a pivot, from the list.
(2) Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
(3) Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
The base case of the recursion is lists of size zero or one, which never need to be sorted.
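
A sketch of these steps using a Lomuto-style partition with the last element as the pivot (an illustrative choice, not prescribed by the card):

#include <utility>
#include <vector>

// (2) Partition a[lo..hi]: smaller elements to the left, pivot ends up in its final slot.
static int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];                         // (1) pick a pivot (here: the last element)
    int i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);                    // place the pivot in its final position
    return i;
}

void quickSort(std::vector<int>& a, int lo, int hi) {   // sorts a[lo..hi] inclusive
    if (lo >= hi) return;                               // base case: size 0 or 1
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);                            // (3) recursively sort the lesser...
    quickSort(a, p + 1, hi);                            // ...and the greater sub-lists
}

// Usage: quickSort(v, 0, (int)v.size() - 1);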

55
Q

What is a heap?

A

In computer science, a heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B). This implies that an element with the greatest key is always in the root node, and so such a heap is sometimes called a max-heap. (Alternatively, if the comparison is reversed, the smallest element is always in the root node, which results in a min-heap.)

56
Q

How does heap sort work?

A

Build a max-heap from the input; then repeatedly swap the root (the maximum) with the last element of the heap, shrink the heap by one, and sift the new root down to restore the heap property. See: https://www.youtube.com/watch?v=WYII2Oau_VY
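
A compact sketch using the standard library's heap algorithms, which implement exactly this build-then-extract idea:

#include <algorithm>
#include <vector>

void heapSort(std::vector<int>& a) {
    std::make_heap(a.begin(), a.end());   // O(n): arrange the vector as a max-heap
    std::sort_heap(a.begin(), a.end());   // O(n log n): repeatedly move the maximum to the end
}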

57
Q

How does bucket sort work?

A

Distribute the elements into a number of buckets according to their value, sort each bucket individually (e.g., with insertion sort), then concatenate the buckets in order. See: https://www.youtube.com/watch?v=Iiuqrns0Gwk

58
Q

Insertion sort has ___ worst case and ___ average case run time.

A

O(n²); O(n²)

59
Q

Selection sort has ___ worst case and ___ average case run time.

A

O(n²); O(n²)

60
Q

Merge sort has ___ worst case and ___ average case run time.

A

O(n log n); O(n log n)

61
Q

Quick sort has ___ worst case and ___ average case run time.

A

O(n²); O(n log n)

62
Q

Heap sort has ___ worst case and ___ average case run time.

A

O(n log n); O(n log n)

63
Q

Bucket sort has ___ worst case and ___ average case run time.

A

O(n²); O(n)

64
Q

Why might one prefer quick sort to merge sort in spite of the O(n²) worst case running time?

A

Merge sort's worst case is O(n log n), while quicksort's is O(n²); in the other cases (average, best) both are O(n log n). However, quicksort sorts in place, needing only O(log n) extra space for the recursion, whereas merge sort on arrays requires O(n) auxiliary space for merging. Quicksort also has small constant factors and good cache behaviour, and with a reasonable pivot choice the O(n²) worst case is rare in practice.

65
Q

Be able to prove Ω(n log n) bounds on the running time for comparison sorts.

A

Proof: Recall that the sorting algorithm must output a permutation of the input [a1, a2, ..., an]. The key to the argument is that (a) there are n! different possible permutations the algorithm might output, and (b) for each of these permutations, there exists an input for which that permutation is the only correct answer. For instance, the permutation [a3, a1, a4, a2] is the only correct answer for sorting the input [2, 4, 1, 3]. In fact, if you fix a set of n distinct elements, then there will be a 1-1 correspondence between the different orderings the elements might be in and the permutations needed to sort them.

Given (a) and (b) above, this means we can fix some set of n! inputs (e.g., all orderings of {1, 2, ..., n}), one for each of the n! output permutations. Let S be the set of these inputs that are consistent with the answers to all comparisons made so far (so, initially, |S| = n!). We can think of a new comparison as splitting S into two groups: those inputs for which the answer would be YES and those for which the answer would be NO.

Now, suppose an adversary always gives the answer to each comparison corresponding to the larger group. Then each comparison will cut down the size of S by at most a factor of 2. Since S initially has size n!, and by construction the algorithm at the end must have reduced |S| down to 1 in order to know which output to produce, the algorithm must make at least log2(n!) comparisons before it can halt. We can then solve:

log2(n!) = log2(n) + log2(n − 1) + ... + log2(2) = Ω(n log n)

(the largest n/2 terms are each at least log2(n/2), so the sum is at least (n/2)·log2(n/2) = Ω(n log n)).

Notice that our proof is like a game of 20 Questions in which the responder (the adversary) doesn't actually decide what he is thinking of until there is only one option left. This is legitimate because we just need to show that there is some input that would cause the algorithm to take a long time. In other words, since the sorting algorithm is deterministic, we can take that final remaining option and then re-run the algorithm on that specific input, and the algorithm will make the same exact sequence of operations.

http://www.cs.cmu.edu/~avrim/451/lectures/lect0908.pdf