Data Structures & Algorithms Flashcards
What two factors contribute to an algorithm’s efficiency?
Running Time & Memory Space
What are some reasons we don’t use standard time units for measuring the time efficiency of an algorithm?
- Dependence on the speed of the computer
- Dependence on the quality of the program implementing the algorithm
- Dependence on the quality of the compiler used to generate the executable
- Difficulty of clocking the time precisely
What is the worst case scenario for an algorithm?
The longest an algorithm could possibly take to run. For example, in a linear search the worst case is when the desired item is in the last position.
What is the best case scenario for an algorithm?
The shortest time an algorithm could possibly take to run. For example, insertion sort is in its best case when the list is already sorted.
What is the average case scenario for an algorithm?
The average case scenario of an algorithm refers to the expected performance of the algorithm when it is run on typical inputs. It is more realistic than the best and worst case scenarios for estimating an algorithm’s efficiency.
What are the three types of asymptotic analysis?
O notation, omega notation and theta notation
What notation uses t(n) ≤ cg(n), ∀n ≥ n0?
O notation
What notation uses t(n) ≥ cg(n), ∀n ≥ n0?
Ω notation (omega)
What notation uses t(n) > cg(n), ∀n ≥ n0?
ω notation (small omega)
What notation uses c1 g(n) ≤ t(n) ≤ c2 g(n), ∀n ≥ n0?
Θ notation (theta)
What is O notation?
O notation uses mathematical notation like O(f(n)) to represent the upper bound of the growth rate of the function as the input size changes.
What is Omega Notation?
Omega notation uses mathematical notation like Ω(f(n)) to represent the lower bound of the growth rate of the function as the input size changes.
What is Theta Notation?
Theta notation uses mathematical notation like Θ(f(n)) to represent the tight bound of the growth rate of the function as the input size changes.
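A quick worked example tying the three definitions together: t(n) = 3n + 2 is O(n), since 3n + 2 ≤ 4n for all n ≥ 2 (c = 4, n0 = 2); it is also Ω(n), since 3n + 2 ≥ 3n for all n ≥ 1; therefore it is Θ(n).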
What theta notation does T(n) = T(n-1) + x have?
A theta notation of Θ(n · x); for example, T(n) = T(n-1) + 1 is Θ(n), and T(n) = T(n-1) + log n is Θ(n log n).
What theta notation does T(n) = 2T(n-1) + x have?
A theta notation of Θ(2^n · x); for example, T(n) = 2T(n-1) + 1 is Θ(2^n), and T(n) = 2T(n-1) + log n is Θ(2^n log n).
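As a check of the constant case, unrolling T(n) = 2T(n-1) + 1 gives T(n) = 2^n · T(0) + (2^n − 1), which is Θ(2^n), matching the pattern above.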
What is a linear and nonlinear structure?
Linear structures have a unique first and last element; every component apart from the first has a unique predecessor, and every component apart from the last has a unique successor. A structure lacking any of these properties is nonlinear.
What are Sequential and Direct Access structures?
In a linear data structure, a sequential structure is one where, to access an item, you have to access all items before it; a direct access structure is one where you can access any item directly.
What are the properties of an array?
A linear data structure that provides direct access but has a fixed size. An array is a collection of components of the same type, and a set of indexes is used to access the data stored in it.
What are the properties of a list?
A list is a linear structure that provides sequential access to its elements. Lists have a head and a tail that point to the first and last elements of the list. They don’t have a fixed size and only store elements of the same type.
What are linked lists?
Linked lists use dynamic allocation: a linked list consists of a collection of records called nodes, each containing an item of the list and the location of the next node, meaning the list doesn’t have to be stored contiguously in memory.
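A minimal sketch of this idea in Python (the Node class and field names are illustrative, not from the flashcards):

```python
class Node:
    """One record of a singly linked list: an item plus the location of the next node."""
    def __init__(self, item, next_node=None):
        self.item = item
        self.next = next_node

# Build the list 1 -> 2 -> 3; the nodes need not be contiguous in memory.
head = Node(1, Node(2, Node(3)))

# Traverse by following each node's 'next' reference.
node = head
while node is not None:
    print(node.item)
    node = node.next
```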
What are the advantages and disadvantages of linked lists?
Pos:
- Efficient use of memory: space is only allocated as it is needed
- Size does not need to be declared in advance
- Common operations such as insertion and deletion are cheaper, since no elements need to be shifted
Neg:
- Algorithms are more complex, harder to read and harder to debug
- Allocation and de-allocation of memory space will slightly decrease the performance of the algorithm
What are circular linked lists?
A variation of a linked list where the last node points to the first node instead of to null. This means the initial pointer can point to any node.
How can linked lists be stored in arrays?
Using an array of nodes to simulate memory, together with a Boolean array that keeps track of which positions in that memory are free.
What are doubly linked lists?
In a doubly linked list each node stores a pointer to its predecessor as well as to its successor, and a tail pointer is usually kept alongside the head. This makes some operations, such as traversing backwards or deleting a given node, easier at the cost of extra overhead.
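A minimal sketch of a doubly linked node in Python (illustrative names, assuming head and tail references are kept alongside the nodes):

```python
class DNode:
    """Node of a doubly linked list: an item plus links to both neighbours."""
    def __init__(self, item):
        self.item = item
        self.prev = None  # predecessor (None at the head)
        self.next = None  # successor (None at the tail)

# Link two nodes in both directions and keep head and tail references.
head = DNode("a")
tail = DNode("b")
head.next = tail
tail.prev = head
```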
What are the properties of stacks?
Stacks follow a last in, first out (LIFO) rule, meaning there are only two operations for editing the stack: push, which adds an item to the top of the stack, and pop, which removes the top item. A stack is a linear, sequential list of values of the same type.
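A quick sketch of push and pop, here using a Python list as the backing store (one possible implementation, not the only one):

```python
stack = []           # empty stack

stack.append(1)      # push 1
stack.append(2)      # push 2
top = stack.pop()    # pop removes the most recently pushed item (last in, first out)
print(top, stack)    # 2 [1]
```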
What are the properties of queues?
Queues follow a first in, first out (FIFO) rule, meaning they have two main operations: enqueue, which adds an item at the back, and dequeue, which removes the item at the front. A queue requires two references, one to the front and one to the back. It is a linear, sequential list of values of the same type.
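A quick sketch of enqueue and dequeue, here using Python’s collections.deque as the backing store (a convenient choice, not the only one):

```python
from collections import deque

queue = deque()          # empty queue
queue.append("a")        # enqueue at the back
queue.append("b")
front = queue.popleft()  # dequeue from the front (first in, first out)
print(front, list(queue))  # a ['b']
```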
What is a stable sorting method?
A sorting method is said to be stable if it preserves the relative order of items with duplicate keys in the list. For example, if we sort a list of records by one field, records that compare equal on that field keep the relative order they had before the sort.
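A small illustration of stability using Python’s built-in sort, which is stable (the sample records are made up for the example):

```python
# Records are (name, grade); sort by grade only.
records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]

# Because sorted() is stable, records with equal grades keep their original
# relative order: bob stays before dave, and alice stays before carol.
print(sorted(records, key=lambda r: r[1]))
# [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]
```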
How does selection sort work?
It finds the biggest element in the list and places it at the end of the list, then finds the second biggest element and places it second from last, and so on. It is a simple algorithm, but inefficient for large values of n. Its worst case is Θ(n^2), and it is an unstable sorting algorithm.
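A minimal sketch of the largest-to-the-end variant described above (illustrative Python; the function name is my own):

```python
def selection_sort(a):
    """Selection sort: on each pass, find the largest item in a[0..end]
    and swap it into position 'end'."""
    for end in range(len(a) - 1, 0, -1):
        biggest = 0
        for i in range(1, end + 1):
            if a[i] > a[biggest]:
                biggest = i
        a[end], a[biggest] = a[biggest], a[end]
    return a

print(selection_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```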
How does insertion sort work?
It works by growing a sorted subarray at the front of the array. It starts with the first element, compares the second element against it and swaps if needed, then inserts the third element into its correct place among the first two, and repeats this for every element until the array is fully sorted. Its worst case is Θ(n^2), and it is a stable sorting algorithm.
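A minimal sketch of the idea (illustrative Python; the function name is my own):

```python
def insertion_sort(a):
    """Insertion sort: grow a sorted subarray at the front by inserting each
    element into its correct place among the already-sorted elements."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one place right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([4, 3, 1, 2]))  # [1, 2, 3, 4]
```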
How does bubble sort work?
Bubble sort works by making repeated passes through the array, comparing each element with the one next to it and swapping them if they are out of order. Once the array is fully sorted it makes one more pass, with no swaps, to confirm it is done.
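A minimal sketch matching that description (illustrative Python; the function name is my own):

```python
def bubble_sort(a):
    """Bubble sort: repeatedly swap adjacent out-of-order pairs; stop after a
    pass in which no swaps were needed."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return a

print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
```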
What are the 3 cases of the master theorem for dividing recurrences of the form T(n) = aT(n/b) + Θ(n^k · log^p n)?
Case 1: if log_b(a) > k then T(n) = Θ(n^(log_b(a)))
Case 2: if log_b(a) = k then T(n) = Θ(n^k · log^(p+1) n)
Case 3: if log_b(a) < k then T(n) = Θ(n^k · log^p n)
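A worked example: merge sort’s recurrence T(n) = 2T(n/2) + Θ(n) has a = 2, b = 2, k = 1, p = 0. Since log_2(2) = 1 = k, case 2 applies and T(n) = Θ(n^1 · log^1 n) = Θ(n log n).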
What are the elementary sorting algorithms?
Selection, Insertion, Bubble
How does Quicksort work?
Quicksort is a divide and conquer algorithm: it divides a list into two sublists around a pivot, where all items lower than the pivot go in one sublist and all higher items go in the other. It then recursively applies the same step to each sublist until every sublist has at most one item, at which point the list is fully sorted. Its worst case is O(n^2), although its average case is O(n log n).
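A minimal, non-in-place sketch of the idea (real implementations usually partition in place; this version is just for clarity, and the names are my own):

```python
def quicksort(a):
    """Quicksort: partition around a pivot, then recursively sort the two sublists."""
    if len(a) <= 1:                  # a list of 0 or 1 items is already sorted
        return a
    pivot = a[0]
    lower = [x for x in a[1:] if x <= pivot]   # items not greater than the pivot
    higher = [x for x in a[1:] if x > pivot]   # items greater than the pivot
    return quicksort(lower) + [pivot] + quicksort(higher)

print(quicksort([3, 6, 1, 8, 2, 9]))  # [1, 2, 3, 6, 8, 9]
```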
How does Radix Sort work?
Radix sort is a sorting algorithm that works by sorting the elements of an array of numbers based on their individual digits. It can be implemented using either a least significant digit (LSD) or most significant digit (MSD) approach: LSD radix sort starts from the rightmost digit and moves towards the left, while MSD radix sort starts from the leftmost digit and moves towards the right. Its time complexity is O(d · n), where d is the number of digits; this behaves like O(n) when d is constant and like O(n log n) when the keys must grow with n (since d is then roughly log n).
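A minimal sketch of the LSD variant for non-negative integers, using a stable bucket pass per digit (illustrative Python; the names are my own):

```python
def radix_sort_lsd(nums, base=10):
    """LSD radix sort: sort by the least significant digit first, then the next
    digit, and so on, using a stable distribution into buckets each pass."""
    if not nums:
        return nums
    place = 1
    while max(nums) // place > 0:
        buckets = [[] for _ in range(base)]
        for x in nums:                          # stable: equal digits keep order
            buckets[(x // place) % base].append(x)
        nums = [x for bucket in buckets for x in bucket]
        place *= base
    return nums

print(radix_sort_lsd([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```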