Algorithms and Data Structures Flashcards
What is an algorithm?
In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of inputs and produces the desired output.
https://www.programiz.com/dsa/algorithm
What are the qualities of a good algorithm?
- Input and output should be defined precisely.
- Each step in the algorithm should be clear and unambiguous.
- An algorithm should be the most effective among the many different ways to solve a problem.
- An algorithm shouldn’t include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.
https://www.programiz.com/dsa/algorithm
What are data structures?
A data structure is a way of storing and organizing data on a computer so that it can be accessed and updated efficiently.
Depending on your requirements, it is important to choose the right data structure for your project.
https://www.programiz.com/dsa/data-structure-types
What are the types of data structures?
Data structures are broadly divided into two categories:
- Linear data structure
- Non-linear data structure
https://www.programiz.com/dsa/data-structure-types
What is Asymptotic Analysis?
Asymptotic analysis is the study of how the performance of an algorithm changes with the order of the input size.
https://www.programiz.com/dsa/asymptotic-notations
What is Asymptotic Notation?
Asymptotic notations are the mathematical notations used to describe the running time of an algorithm when the input tends towards a particular value or a limiting value.
https://www.programiz.com/dsa/asymptotic-notations
What is Big-O notation?
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.
O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
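For example, a simple linear search has an O(n) worst case. A minimal Python sketch (the function name linear_search is an illustrative choice):

def linear_search(items, target):
    # Scan items left to right; the worst case checks every element.
    for i, value in enumerate(items):
        if value == target:
            return i   # found at index i
    return -1          # absent: n comparisons were made, so O(n)

print(linear_search([4, 2, 7, 1], 1))  # prints 3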
https://www.programiz.com/dsa/asymptotic-notations
What is Big-Omega notation?
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best case complexity of an algorithm.
Ω(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
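For example, the best case for the linear search sketched above occurs when the target is at the very first position: one comparison regardless of n, so its running time is Ω(1).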
https://www.programiz.com/dsa/asymptotic-notations
What is Big-Theta notation?
Theta notation encloses the function from above and below. Since it represents the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
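For example, f(n) = 3n + 2 is Θ(n): since 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, the constants c1 = 3, c2 = 4 and n0 = 2 satisfy the definition.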
https://www.programiz.com/dsa/asymptotic-notations
What is little-o notation?
Big-O is used as a tight upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. Little-o (o()) notation is used to describe an upper bound that cannot be tight.
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c·g(n) for every integer n ≥ n0.
Thus, little-o gives a loose upper bound on f(n): it is a rough estimate of the maximum order of growth, whereas Big-O may be the actual order of growth.
In mathematical terms, f(n) = o(g(n)) means lim (n→∞) f(n)/g(n) = 0.
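For example, f(n) = n is o(n^2), since lim (n→∞) n/n^2 = 0; but f(n) = 2n is not o(n), since lim (n→∞) 2n/n = 2 ≠ 0.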
https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/
What is little-omega notation?
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c·g(n) ≥ 0 for every integer n ≥ n0.
Here f(n) has a strictly higher growth rate than g(n). The main difference between Big Omega (Ω) and little omega (ω) lies in their definitions: in the case of Big Omega, f(n) = Ω(g(n)) requires the bound 0 ≤ c·g(n) ≤ f(n) to hold for some constant c > 0, whereas in the case of little omega, the strict bound 0 ≤ c·g(n) < f(n) must hold for every constant c > 0.
The relationship between Big Omega (Ω) and little omega (ω) is similar to that of Big-O and little-o, except that now we are looking at lower bounds. Little omega (ω) is a rough estimate of the order of growth, whereas Big Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight. And, f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).
In mathematical terms, if f(n) ∈ ω(g(n)), then lim (n→∞) f(n)/g(n) = ∞.
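For example, f(n) = n^2 is ω(n), since lim (n→∞) n^2/n = ∞; but f(n) = 2n is not ω(n), since lim (n→∞) 2n/n = 2, which is finite.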
https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/
What is the Master Theorem?
The master method is a formula for solving recurrence relations of the form:
T(n) = aT(n/b) + f(n),
where,
n = size of the input
a = number of subproblems in the recursion
n/b = size of each subproblem (all subproblems are assumed to have the same size)
f(n) = cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions
Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function. Under these conditions, T(n) has the following asymptotic bounds:
1. If f(n) = O(n^(log_b a − ϵ)), then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ϵ)), then T(n) = Θ(f(n)).
Here, ϵ > 0 is a constant.
Each of the above conditions can be interpreted as:
- If the cost of solving the sub-problems at each level increases by a certain factor, the value of f(n) will become polynomially smaller than n^(log_b a). Thus, the time complexity is dominated by the cost of the last level, i.e. n^(log_b a).
- If the cost of solving the sub-problems at each level is nearly equal, then the value of f(n) will be n^(log_b a). Thus, the time complexity will be f(n) times the total number of levels, i.e. n^(log_b a) · log n.
- If the cost of solving the sub-problems at each level decreases by a certain factor, the value of f(n) will become polynomially larger than n^(log_b a). Thus, the time complexity is dominated by the cost of f(n).
The master theorem cannot be used if:
- T(n) is not monotone, e.g. T(n) = sin n
- f(n) is not a polynomial, e.g. f(n) = 2^n
- a is not a constant, e.g. a = 2n
- a < 1
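When the theorem does apply, the three cases can be checked mechanically. A minimal Python sketch, assuming f(n) is a plain polynomial Θ(n^k) (the function name master_theorem and the polynomial-only restriction are this card's simplifications, not part of the theorem itself):

import math

def master_theorem(a, b, k):
    # Classify T(n) = a*T(n/b) + Θ(n^k) by comparing k with log_b(a).
    crit = math.log(a, b)                # critical exponent log_b(a)
    if math.isclose(k, crit):
        return f"Θ(n^{crit:g} * log n)"  # case 2: work balanced across levels
    if k < crit:
        return f"Θ(n^{crit:g})"          # case 1: the leaves dominate
    return f"Θ(n^{k:g})"                 # case 3: the root dominates

# Merge sort, T(n) = 2T(n/2) + Θ(n): case 2 gives Θ(n log n)
print(master_theorem(2, 2, 1))   # Θ(n^1 * log n)
# Binary search, T(n) = T(n/2) + Θ(1): case 2 gives Θ(log n)
print(master_theorem(1, 2, 0))   # Θ(n^0 * log n) = Θ(log n)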
https://www.programiz.com/dsa/master-theorem
What is a Divide and Conquer algorithm?
A divide and conquer algorithm is a strategy of solving a large problem by
- breaking the problem into smaller sub-problems
- solving the sub-problems, and
- combining them to get the desired output.
Divide and conquer algorithms are typically implemented using recursion, as in the sketch below.
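Merge sort is the classic example. A minimal Python sketch, assuming lists as input (the function name merge_sort is an illustrative choice):

def merge_sort(arr):
    if len(arr) <= 1:                  # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # divide and solve the left half
    right = merge_sort(arr[mid:])      # divide and solve the right half
    merged = []                        # combine: merge the sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]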
https://www.programiz.com/dsa/divide-and-conquer
What is the time complexity of a Divide and Conquer algorithm?
The complexity of the divide and conquer algorithm is calculated using the master theorem.
https://www.programiz.com/dsa/divide-and-conquer
When do you use divide and conquer vs dynamic programming?
The divide and conquer approach divides a problem into smaller subproblems, which are then solved recursively. The result of each subproblem is not stored for future reference, whereas in the dynamic programming approach, the result of each subproblem is stored for future reference.
Use the divide and conquer approach when the same subproblem is not solved multiple times. Use the dynamic approach when the result of a subproblem is to be used multiple times in the future.
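Computing Fibonacci numbers shows the difference. A minimal Python sketch (the function names fib_dc and fib_dp are illustrative choices):

from functools import lru_cache

def fib_dc(n):
    # Plain divide and conquer: the same subproblems are recomputed
    # over and over, so the running time is exponential in n.
    if n < 2:
        return n
    return fib_dc(n - 1) + fib_dc(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    # Dynamic programming (memoization): the same recursion, but each
    # subproblem's result is cached and reused, so the time is O(n).
    if n < 2:
        return n
    return fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(30))  # fast
print(fib_dc(30))  # same answer, noticeably slower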
https://www.programiz.com/dsa/divide-and-conquer
What are the advantages of the divide and conquer approach?
- The complexity of multiplying two matrices using the naive method is O(n^3), whereas with the divide and conquer approach (i.e. Strassen's matrix multiplication) it is O(n^2.8074). This approach also simplifies other problems, such as the Tower of Hanoi.
- This approach is suitable for multiprocessing systems.
- It makes efficient use of memory caches.
https://www.programiz.com/dsa/divide-and-conquer
What is a stack?
A stack is a linear data structure that follows the principle of Last In First Out (LIFO). This means the last element inserted inside the stack is removed first.
You can think of the stack data structure as a pile of plates stacked on top of one another.
https://www.programiz.com/dsa/stack
What is LIFO?
Last In First Out. In programming terms, putting an item on top of the stack is called push and removing an item is called pop.
https://www.programiz.com/dsa/stack
What are the basic operations of a stack?
There are some basic operations that allow us to perform different actions on a stack.
Push: Add an element to the top of a stack
Pop: Remove an element from the top of a stack
IsEmpty: Check if the stack is empty
IsFull: Check if the stack is full
Peek: Get the value of the top element without removing it
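A minimal Python sketch of these operations, using a list as the underlying array (the class name Stack and the capacity parameter are illustrative choices):

class Stack:
    def __init__(self, capacity=10):
        self._items = []
        self._capacity = capacity

    def push(self, item):              # add to the top
        if self.is_full():
            raise OverflowError("stack is full")
        self._items.append(item)

    def pop(self):                     # remove from the top
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):                    # read the top without removing it
        if self.is_empty():
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def is_empty(self):
        return len(self._items) == 0

    def is_full(self):
        return len(self._items) == self._capacity

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop())   # 3 -- last in, first out
print(s.peek())  # 2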
https://www.programiz.com/dsa/stack
What is the time complexity of a stack?
For the array-based implementation of a stack, the push and pop operations take constant time, i.e. O(1).
https://www.programiz.com/dsa/stack
What are some applications of the stack data structure?
To reverse a word - Put all the letters in a stack and pop them out. Because of the LIFO order of the stack, you will get the letters in reverse order (see the sketch after this list).
In compilers - Compilers use the stack to calculate the value of expressions like 2 + 4 / 5 * (7 - 9) by converting the expression to prefix or postfix form.
In browsers - The back button in a browser saves all the URLs you have visited previously in a stack. Each time you visit a new page, it is added on top of the stack. When you press the back button, the current URL is removed from the stack, and the previous URL is accessed.
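A minimal Python sketch of the word-reversal example (the function name reverse_word is an illustrative choice):

def reverse_word(word):
    stack = []
    for letter in word:
        stack.append(letter)       # push each letter
    out = []
    while stack:
        out.append(stack.pop())    # pop in LIFO order
    return "".join(out)

print(reverse_word("hello"))  # "olleh"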
https://www.programiz.com/dsa/stack
What is a queue?
A queue is a useful data structure in programming. It is similar to the ticket queue outside a cinema hall, where the first person entering the queue is the first person who gets the ticket.
Queue follows the First In First Out (FIFO) rule - the item that goes in first is the item that comes out first.
https://www.programiz.com/dsa/queue
What is FIFO?
First In First Out (FIFO) rule - the item that goes in first is the item that comes out first.
In programming terms, putting items in the queue is called enqueue, and removing items from the queue is called dequeue.
https://www.programiz.com/dsa/queue
What are the basic operations of a queue?
A queue is an object (an abstract data type, or ADT) that allows the following operations:
Enqueue: Add an element to the end of the queue
Dequeue: Remove an element from the front of the queue
IsEmpty: Check if the queue is empty
IsFull: Check if the queue is full
Peek: Get the value of the front of the queue without removing it
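A minimal Python sketch of these operations, built on collections.deque so that removing from the front takes O(1) (the class name Queue and the capacity parameter are illustrative choices):

from collections import deque

class Queue:
    def __init__(self, capacity=10):
        self._items = deque()
        self._capacity = capacity

    def enqueue(self, item):             # add to the rear
        if self.is_full():
            raise OverflowError("queue is full")
        self._items.append(item)

    def dequeue(self):                   # remove from the front
        if self.is_empty():
            raise IndexError("dequeue from empty queue")
        return self._items.popleft()

    def peek(self):                      # read the front without removing it
        if self.is_empty():
            raise IndexError("peek at empty queue")
        return self._items[0]

    def is_empty(self):
        return len(self._items) == 0

    def is_full(self):
        return len(self._items) == self._capacity

q = Queue()
q.enqueue("a"); q.enqueue("b"); q.enqueue("c")
print(q.dequeue())  # "a" -- first in, first out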
https://www.programiz.com/dsa/queue