ECM 1414 Workshop Mix Flashcards
What does the term ‘Algorithm’ derive from?
A Persian mathematician’s name, Al-Khwarizmi
In computer science, an algorithm is best defined as:
A problem-solving and computational method
Why is the study of algorithms fundamental in computer science?
They provide a basis for writing efficient and effective computer programs
What is an example of a simple algorithm discussed in class, for which several
versions were provided?
Consecutive integer checking for finding the greatest common divisor (GCD)
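A minimal Python sketch of the consecutive-integer-checking approach (an illustration, not necessarily the exact version shown in class): try candidates from min(m, n) downward and return the first common divisor.
    def gcd_consecutive(m, n):
        # Assumes positive integers m and n.
        t = min(m, n)
        while t > 0:
            if m % t == 0 and n % t == 0:
                return t          # first common divisor found is the GCD
            t -= 1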
Which statement best describes a characteristic of Euclid’s algorithm for computing
the greatest common divisor (GCD)?
It involves division and finding the remainder repetitively
Why is the concept of abstraction important in studying algorithms?
It is more intuitive for humans than programming language code
What characterizes the Fibonacci sequence?
Each number is the sum of the two preceding ones
In the context of algorithms, how is the Fibonacci sequence typically generated leading
to a very inefficient solution?
Through a recursive algorithm
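A minimal sketch of the naive recursive version (illustrative): each call spawns two further calls and the same subproblems are recomputed over and over, so the running time grows exponentially with n.
    def fib(n):
        # Naive recursion: roughly O(2^n) calls because subproblems repeat.
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)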
How efficient is Euclid’s algorithm for computing the GCD of large numbers?
It is efficient and converges quickly, taking about O(k) steps where k is the number
of digits in the minimum of m and n
In Euclid’s algorithm, what happens when one of the numbers (n) becomes zero?
The other number (m) is returned as the GCD
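A minimal sketch of Euclid's algorithm (illustrative): repeatedly replace (m, n) with (n, m mod n); when n reaches zero, m holds the GCD.
    def gcd_euclid(m, n):
        # Repeated division with remainder.
        while n != 0:
            m, n = n, m % n
        return m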
What notable aspect of Euclid’s algorithm is highlighted in the text regarding its
historical context?
It was developed in the 3rd century B.C. by Euclid and appeared in his book
‘Elements’ (Book 7)
How is an algorithm defined in computer science according to Sedgewick (2011) — a
definition that is probably not as precise as the one that follows?
A finite sequence of instructions, each with a clear meaning, performed in a finite
amount of time
According to Levitin (2003), what characterizes an algorithm (and better definition
than the one above)?
It is a finite sequence of unambiguous instructions for solving a problem, obtaining
the required output for any legitimate input in a finite amount of time
What is the primary focus in designing algorithms?
Developing specific instructions for solving problems
What is the first step in the process of designing algorithms?
Understanding the problem
What does the equation ‘Algorithms + Data Structures = Programs’ illustrate?
The fundamental relationship between algorithms, data structures, and
programs
What is a major consideration in the analysis of algorithms?
The efficiency of the algorithm with respect to running time and memory space
How is the basic operation of an algorithm identified?
The most time-consuming operation in the innermost loop
What is the RAM model in algorithm design?
A hypothetical computer model used for machine-independent algorithm design
What does the ‘worst-case’ scenario of an algorithm refer to?
The efficiency of the algorithm for the most challenging input of a given size
What does the O notation fundamentally represent in asymptotic analysis?
The upper bound of a function’s growth rate
In the context of Big O notation, what does the statement ‘f(n) = O(g(n))’ imply?
f(n) grows at most as fast as g(n)
What best describes the Ω (Omega) notation?
It represents the lower bound of a function’s growth rate
What is the primary use of Θ (Theta) notation in algorithm analysis?
To represent the tight bound of a function’s growth rate
When a function f(n) is said to be in o (little-o) notation with respect to g(n), it
means:
f(n) grows strictly slower than g(n)
What does the statement ‘f(n) = Θ(g(n))’ imply about f(n) and g(n)?
f(n) grows at the same rate as g(n)
f(n) and g(n) have equivalent upper and lower bounds
How does transitivity apply in the comparison of functions in asymptotic analysis?
If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n))
If f(n) = Ω(g(n)) and g(n) = Ω(h(n)), then f(n) = Ω(h(n))
If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n))
In asymptotic analysis, what does the usage of limits indicate when comparing the
growth of functions?
A method to compare the relative growth rates of functions
What does it mean if an algorithm has a time complexity of Θ(n^2)?
The algorithm’s running time is exactly proportional to the square of the input
size
In the context of asymptotic notation, what does ‘tight bound’ mean?
The upper and lower bounds of the algorithm’s running time are equivalent
How does the Ω (Omega) notation differ from the O (Big O) notation?
Ω denotes a lower bound, while O denotes an upper bound
Consider two functions f(n) = n^2 and g(n) = n^3. What can be written about the two functions?
f(n) = O(g(n))
For the functions f(n) = 100n + log n and g(n) = n, what is the relationship between
f(n) and g(n)?
f(n) = O(g(n))
g(n) = O(f(n))
f(n) = Ω(g(n))
f(n) = Θ(g(n))
g(n) = Θ(f(n))
All of the statements above apply
Considering f(n) = n! (n factorial) and g(n) = 2^n, how do these functions compare?
g(n) = O(f(n))
f(n) = Ω(g(n))
For f(n) = n^0.5 and g(n) = n^2, which statement is correct?
f(n) = O(g(n))
g(n) = Ω(f(n))
If f(n) = 2^n and g(n) = n^10, how do f(n) and g(n) compare asymptotically?
g(n) = O(f(n))
f(n) = Ω(g(n))
What does the notation f(n) = o(g(n)) indicate about the functions f(n) and g(n)?
f(n) grows slower than g(n) and becomes insignificant as n approaches infinity
If f(n) = o(n), what is true about this statement?
f(n) is dominated by n as n approaches infinity
What is the primary difference between little o (o) and little omega (ω) notations?
o notation signifies a non-tight (strict) upper bound, while ω signifies a non-tight
(strict) lower bound
In the context of recurrences, what does ‘solving a recurrence relation’ typically
involve?
Finding a non-recursive expression that describes the relationship
How is the recurrence relation for the ‘Two-Flips Method’ in the pancake-flipping
problem defined? Assume the cost is measured as a function of the number of
pancakes.
T(n) = T(n-1) + n
The binary search algorithm is an example of which kind of problem-solving
approach?
Divide and conquer
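A minimal iterative binary search sketch (illustrative): the sorted array is halved at each step, a typical divide-and-conquer pattern.
    def binary_search(a, target):
        # a must be sorted; returns an index of target, or -1 if absent.
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1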
What does the recurrence relation T(n) = 2T(n-1) + 1 with T(1) = 1 describe?
The time complexity of the towers of Hanoi
What represents the closed-form solution for T(n) = 2T(n-1) + 1,
where T(1) = 1?
T(n) = 2^n – 1
For the recurrence relation T(n) = 2T(n-1) + 1, what is the value of T(4)?
15
Considering the solution to the recurrence T(n) = 2T(n-1) + 1, T(1) = 1, how does
T(n) grow with respect to n?
Exponentially with 2^n
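A small sketch (illustrative) that evaluates the recurrence directly and checks it against the closed form 2^n - 1; for example T(4) = 2·T(3) + 1 = 2·7 + 1 = 15.
    def hanoi_moves(n):
        # Direct evaluation of T(n) = 2*T(n-1) + 1 with T(1) = 1.
        if n == 1:
            return 1
        return 2 * hanoi_moves(n - 1) + 1

    for n in range(1, 6):
        assert hanoi_moves(n) == 2**n - 1   # matches the closed form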
When analysing the solution to a recurrence, why might we consider the limit as n
approaches infinity (n → ∞)?
To understand the asymptotic behaviour
In solving recurrence relations, what principle is applied when dealing with a
geometric series?
Summation of terms
What is the main goal of utilizing data structures in computer programming?
Organizing and managing data effectively
Identify a key feature of linear data structures.
Each element (except the first and last) has a single predecessor and a single successor
What distinguishes direct access from sequential access in the context of data
structures?
Immediate accessibility to any data element
Which advantage is associated with the use of arrays?
Direct access to individual elements is provided
The Sieve of Eratosthenes algorithm finds application in:
Identifying prime numbers within a specified range
Which data structure is exemplified by the Sieve of Eratosthenes program?
array
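A minimal sieve sketch (illustrative), using a boolean array indexed by the candidate numbers:
    def sieve(limit):
        # is_prime[i] stays True while i is still a candidate prime.
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if is_prime[i]:
                for j in range(i * i, limit + 1, i):
                    is_prime[j] = False     # cross out multiples of i
        return [i for i, p in enumerate(is_prime) if p]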
The “if (Math.random() < 0.5)” condition in the Coin Flipping Simulation’s heads method is significant for:
Simulating the flip of a coin yielding heads
What is the coin flipping simulation code (in Python)?
import random

def heads():
    # One coin flip: True (heads) with probability 0.5
    return random.random() < 0.5

def main(n, m):
    # result[cnt] counts the trials (out of m) that produced exactly cnt heads
    result = [0] * (n + 1)
    for _ in range(m):
        cnt = sum(heads() for _ in range(n))
        result[cnt] += 1
    # Print one histogram line per possible count: '.' for empty, one '*' per 10 trials
    for j in range(n + 1):
        if result[j] == 0:
            print('.', end='')
        else:
            print('*' * (result[j] // 10), end='')
        print()
Which of the following best describes the principle of a stack?
Last-In-First-Out (LIFO)
In a queue data structure, how are elements removed?
In the same order they were added
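A small Python sketch (illustrative) contrasting the two disciplines: a list used as a stack removes the most recently added element, while a deque used as a queue removes the oldest.
    from collections import deque

    stack = []
    stack.append('a'); stack.append('b'); stack.append('c')
    print(stack.pop())        # 'c' — Last-In-First-Out

    queue = deque()
    queue.append('a'); queue.append('b'); queue.append('c')
    print(queue.popleft())    # 'a' — First-In-First-Out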
What is the primary characteristic of Bubble Sort?
It repeatedly steps through the list, compares adjacent elements, and swaps
them if they are in the wrong order.
Which sorting algorithm is considered the most efficient in the best-case scenario?
(note that the best-case scenario of each may be a different configuration of the
elements)
bubble sort
In Insertion Sort, what is true about the initial state of the ‘sorted’ portion of the
array?
It contains only the first element of the array
Which algorithm performs sorting by repeatedly choosing the minimum element
from the unsorted portion and moving it to the end of the sorted portion?
Selection Sort
Which of the following is a disadvantage of using a linked list over an array?
Lack of direct access to elements (they must be reached by traversing the list)
Which of the following data structures would be most appropriate for
implementing a browser’s back button functionality?
stack
Which data structure is ideally suited for managing tasks in a printer queue?
queue
In a circular queue, what happens when the back pointer reaches the end of the
array?
It moves to the front of the array.
Which sorting algorithm conceptually works by dividing the unsorted list into two
halves, sorts each with recursion, and then merges them into a complete sorted
list?
merge sort
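A minimal merge sort sketch (illustrative): split the list, sort each half recursively, then merge the sorted halves.
    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        # Merge the two sorted halves.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]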
Which Java code snippet correctly demonstrates the operation of
pushing an element onto a stack?
stack[++top] = element;
How would you correctly implement the enqueue operation for a queue in Java?
Assume front tracks the first element of the queue and back tracks the last
element of the queue.
queue[++back] = element;
Considering a bubble sort algorithm, what pseudocode snippet
accurately represents a single pass for sorting an array in ascending order?
for i from 0 to n-2:
    if array[i] > array[i+1]:
        swap(array[i], array[i+1])
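For context, a complete bubble sort sketch in Python (illustrative), repeating the pass above until no swaps occur:
    def bubble_sort(a):
        n = len(a)
        for end in range(n - 1, 0, -1):
            swapped = False
            for i in range(end):               # one pass, as in the pseudocode above
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            if not swapped:                    # already sorted: best case O(n)
                break
        return a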
Which algorithmic paradigm does quicksort exemplify?
divide and conquer
What is the worst-case time complexity of quicksort?
O(n^2)
How does the choice of pivot affect the performance of quicksort?
Choosing the median of a constant number of values as pivot leads to the best
worst-case performance
What is the average-case time complexity of Quicksort?
O(n log n)
The depth of recursion for Quicksort in the best case is:
O(log n)
Which case does the quicksort algorithm perform poorly in?
when the pivot is the smallest or largest element of the array
when the array is already sorted
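A minimal quicksort sketch (illustrative, not in-place) showing the divide-and-conquer structure; a poor pivot, e.g. always the first element of an already sorted array, degrades it to O(n^2).
    def quicksort(a):
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]                     # middle element as pivot
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)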
What is a key advantage of MergeSort over QuickSort in certain scenarios?
Better worst-case time complexity
Stable sorting
What is NOT a characteristic of MergeSort?
In-place sorting algorithm
RadixSort is particularly efficient when:
The range of the input is significantly less than the number of items
How does RadixSort handle sorting?
By grouping elements based on their individual digits or letters
Radix Sort’s time complexity depends on two key factors: the number of elements
(n) and what other factor?
The number of digits in the largest number (k)
In the context of Radix Sort, what is the significance of using a stable sorting
algorithm as the subroutine for sorting digits?
It maintains the relative order of records with equal keys (digits).
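A minimal LSD radix sort sketch for non-negative integers (illustrative), using a stable bucketing pass per decimal digit so that ties keep their relative order:
    def radix_sort(a):
        if not a:
            return a
        digit = 1
        while max(a) // digit > 0:
            buckets = [[] for _ in range(10)]
            for x in a:                      # stable: equal digits keep their order
                buckets[(x // digit) % 10].append(x)
            a = [x for bucket in buckets for x in bucket]
            digit *= 10
        return a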
A tree is a data structure where:
Exactly one path exists between any two nodes
In a binary tree:
A node can have at most two children
A leaf node in a tree is defined as a node that:
Has no children
Pre-order traversal in a tree means:
Visiting the root before the subtrees
An M-ary tree is a tree in which:
Each node can have at most M children
The main advantage of a binary search tree (BST) is:
Efficient search, insert, and delete operations
Which traversal is used to get nodes in non-decreasing order in a BST?
in-order
What is not a direct application of trees?
Performing arithmetic operations
What does it mean for a tree to be ‘balanced’?
The heights of the left and right subtrees of every node differ by at most one
In post-order traversal, in what order are the nodes processed?
Left subtree, right subtree, root
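A small sketch (illustrative) of the three depth-first traversals on a minimal node class; on a BST, the in-order version yields the keys in non-decreasing order.
    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def in_order(node):
        if node:
            in_order(node.left); print(node.key, end=' '); in_order(node.right)

    def pre_order(node):
        if node:
            print(node.key, end=' '); pre_order(node.left); pre_order(node.right)

    def post_order(node):
        if node:
            post_order(node.left); post_order(node.right); print(node.key, end=' ')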
What statements below are true?
a) There is no binary tree that has the same in-order, pre-order, post-order, and
level-order traversals.
b) In a full binary tree, every node other than the leaves has two children.
c) A balanced binary tree guarantees an O(log n) time complexity for insertion.
d) Every binary tree can be considered a binary search tree.
e) A binary tree can be traversed in O(n) time complexity.
b, c, e
What statements below are true?
a) You can uniquely reconstruct a binary tree from a given inorder traversal
b) You can uniquely reconstruct a binary tree from a given pre-order traversal
c) You can uniquely reconstruct a binary tree from a given post-order traversal
d) You can uniquely reconstruct a binary tree from a given inorder traversal and
either the postorder or pre-order
e) You can uniquely reconstruct a binary tree from a given pre-order and the
postorder traversals
d
What is the main advantage of balanced binary search trees?
O(log n) operation time for most actions
What defines a balanced tree?
No leaf is ‘much farther’ from the root than any other
Why isn’t a binary search tree always kept balanced?
Rebalancing can be costly
Who introduced AVL trees?
Georgii Adelson-Velskii and Evgenii Landis
How does the AVL tree determine the rotations needed for rebalancing after an
insertion?
By using the balance factor of the nodes from the insertion point up to the root
Considering a perfectly balanced AVL tree, what is the maximum number of nodes
it can contain at a height of h?
2^(h+1) - 1
What is a characteristic of AVL trees?
Height of subtrees differ by at most one
What is the balance factor in AVL trees?
Difference between heights of left and right subtrees
What rotations are used to maintain balance in AVL trees?
single rotation
double rotation
When is a single-right rotation applied in AVL trees?
When a tree is left-heavy
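A minimal sketch of a single right rotation (illustrative; the node class and height bookkeeping are assumed): the left child becomes the new subtree root when the subtree is left-heavy.
    def rotate_right(y):
        # y is left-heavy; its left child x becomes the new subtree root.
        x = y.left
        y.left = x.right      # x's right subtree is re-attached under y
        x.right = y
        return x              # caller re-links x in place of y and updates heights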
What is the balance factor of all nodes in a perfectly balanced AVL tree?
0
Can AVL trees guarantee O(log n) search time?
Yes, due to their balanced nature
What is the purpose of double rotations in AVL trees?
To correct imbalances that single rotations cannot
How does an AVL tree ensure that operations remain efficient?
By maintaining balance through rotations
In an AVL tree, after how many consecutive right insertions starting from an empty
tree will the first rebalancing occur?
After the third insertion, causing a right-right case.
Which operation does not guarantee O(log n) time complexity in an AVL tree,
assuming arbitrary sequences of operations?
Level-order traversal
What is the balance factor of a node that triggers a double rotation during AVL tree
rebalancing?
2 or -2
What is a characteristic feature of a max-heap?
No child has a value bigger than the value of its parent
In a max-heap, where is the maximum element found?
at the root
What operations are used to maintain the heap property in a binary heap?
sink and swim
Given a binary max-heap implemented as an array, where the array indices start at
1, what conditions must always be true for any valid index i
(where i > 1) in the heap?
The parent of the node at index i is at index i/2.
The left child of the node at index i is at index 2i.
The right child of the node at index i is at index 2i + 1.
What is the result of performing a ‘sink’ operation in a complete tree in reverse
level order?
The heap property is restored by moving down elements that are out of place
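A minimal sketch (illustrative) of sink on a 1-indexed max-heap array, using the parent/child index relations from the earlier card (a[0] is unused, heap size is n):
    def sink(a, i, n):
        # Move a[i] down until no child is larger.
        while 2 * i <= n:
            j = 2 * i                          # left child
            if j < n and a[j + 1] > a[j]:      # pick the larger child
                j += 1
            if a[i] >= a[j]:
                break
            a[i], a[j] = a[j], a[i]
            i = j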
Which of the following best describes heapify operation?
Adjusting a binary tree to maintain the heap property
What is the time complexity of building a heap from an arbitrary array of
elements?
O(n log n) if built by repeated insertion; the bottom-up construction (sink in reverse level order) achieves O(n)
How does HeapSort sort an array in place?
By constructing a heap and then performing a series of swap and sink operations
What is the primary advantage of HeapSort compared to MergeSort?
It sorts in-place, using only a constant amount of extra space
Why might job starvation occur in a priority queue implemented with a heap?
Without precautions, continuously arriving high-priority jobs may prevent low-priority jobs from being processed
How would you describe a graph?
A set of vertices connected by edges
What does an undirected edge in a graph represent?
A bidirectional relationship between two nodes
Which graph representation is most space-efficient for sparse graphs?
adjacency-list representation
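A minimal sketch of an adjacency-list representation (illustrative): each vertex maps to the list of its neighbours, so space is proportional to V + E for an undirected graph.
    from collections import defaultdict

    def build_adjacency_list(edges):
        # edges is a list of (u, v) pairs for an undirected graph.
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj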
In the context of BFS, what does a ‘white’ node represent?
A node that has not been visited
Given a directed graph G with V vertices and E edges, and assuming that the graph
may contain cycles, which of the following statements correctly compares the
complexities of DFS and BFS?
BFS can discover the shortest path to all nodes from the start node in O(V + E)
time for unweighted graphs, while DFS cannot guarantee the shortest path.
What is a characteristic of BFS?
It can be used to find the shortest path in unweighted graphs
How does DFS differ from BFS in terms of exploration?
DFS explores as far as possible along a branch before backtracking
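A minimal BFS sketch (illustrative) that records distances from the start vertex; for unweighted graphs these distances are shortest-path lengths, found in O(V + E) time.
    from collections import deque

    def bfs_distances(adj, start):
        # adj maps each vertex to its list of neighbours; the unvisited ('white')
        # vertices are exactly those not yet in dist.
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist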
What is a cycle in a graph?
A path that starts and ends at the same vertex
What is true about directed graphs?
Edges have a direction from one vertex to another
What is true about dense graphs?
They have a number of edges close to the maximal number of edges
Consider a graph G that is a tree (i.e., a connected, acyclic, undirected graph). If
you perform DFS and BFS starting from the same node in G, which of the following
statements is true?
a) DFS will always finish before BFS.
b) BFS will visit the nodes in the exact opposite order of DFS.
c) The order in which leaf nodes are visited is the same in both DFS and BFS.
d) DFS will visit exactly half the number of nodes that BFS visits.
e) The BFS and DFS traversals will produce identical sequences of visited nodes.
c
What does a complete graph mean?
A graph where each vertex is connected to every other vertex