DSA Flashcards
What is an Array
An array is a collection of homogeneous elements or values that can be identified and accessed using their index or position within the array.
Array Operations
- Accessing an Element: This operation involves retrieving the value of an element at a specific index in the array.
- Insertion: Insertion refers to adding an element to the array at a specific position or appending it to the end of the array.
Example: Inserting the value 60 at the end of an integer array.
- Deletion: Deletion involves removing an element from the array, either by specifying its index or its value.
Example: Deleting the second element (value 20) from an integer array arr.
- Search: Search involves finding the index or position of a specific element within the array.
Example: Searching for the value 30 within an integer array.
- Update: Updating an element means modifying its value at a specific index in the array.
Example: Updating the fourth element (value 40) to 45 in an integer array arr in Java:

```java
int[] arr = {10, 20, 30, 40, 50};
arr[3] = 45;
```
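The insertion, deletion, and search operations above can be sketched in Java as well. This is a minimal sketch using plain arrays; the helper names (`insertAtEnd`, `deleteAt`, `indexOf`) are illustrative, not standard library methods:

```java
import java.util.Arrays;

public class ArrayOps {
    // Insertion: returns a new array with value appended at the end.
    static int[] insertAtEnd(int[] arr, int value) {
        int[] result = Arrays.copyOf(arr, arr.length + 1);
        result[arr.length] = value;
        return result;
    }

    // Deletion: returns a new array with the element at the given index removed.
    static int[] deleteAt(int[] arr, int index) {
        int[] result = new int[arr.length - 1];
        System.arraycopy(arr, 0, result, 0, index);
        System.arraycopy(arr, index + 1, result, index, arr.length - index - 1);
        return result;
    }

    // Search: linear scan; returns the index of target, or -1 if absent.
    static int indexOf(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {10, 20, 30, 40, 50};
        System.out.println(Arrays.toString(insertAtEnd(arr, 60))); // [10, 20, 30, 40, 50, 60]
        System.out.println(Arrays.toString(deleteAt(arr, 1)));     // [10, 30, 40, 50]
        System.out.println(indexOf(arr, 30));                      // 2
    }
}
```

Note that because plain Java arrays are fixed-size, insertion and deletion here allocate a new array and copy elements, which is why these operations cost O(n).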
Divide and Conquer
Divide and conquer is a problem-solving technique in computer science and mathematics that involves breaking a problem into smaller, more manageable subproblems, solving the subproblems independently, and then combining their solutions to solve the original problem.
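Merge sort is a classic instance of this technique; a minimal sketch in Java:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Divide: split the array in half. Conquer: sort each half recursively.
    // Combine: merge the two sorted halves into one sorted array.
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a; // base case: trivially sorted
        int mid = a.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    static int[] merge(int[] left, int[] right) {
        int[] out = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 9, 1, 7})));
        // [1, 2, 5, 7, 9]
    }
}
```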
Greedy Algorithms
A greedy algorithm makes the locally optimal choice at each step in the hope of finding a globally optimal solution. In other words, it makes the best choice available at each step without considering the consequences of that choice for future steps.
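Coin change with a canonical coin set is a standard illustration; a sketch in Java (greedy is optimal for denominations like {25, 10, 5, 1}, but not for every coin set, which illustrates the "no look-ahead" caveat above):

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyCoins {
    // Greedy coin change: at each step take the largest coin that still fits.
    // coins must be sorted in descending order.
    static List<Integer> makeChange(int amount, int[] coins) {
        List<Integer> used = new ArrayList<>();
        for (int coin : coins) {
            while (amount >= coin) { // locally optimal choice, never reconsidered
                amount -= coin;
                used.add(coin);
            }
        }
        return used;
    }

    public static void main(String[] args) {
        System.out.println(makeChange(63, new int[]{25, 10, 5, 1}));
        // [25, 25, 10, 1, 1, 1]
    }
}
```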
Dynamic Programming
Dynamic programming is a programming technique in which an algorithmic problem is first broken down into sub-problems, the results of those sub-problems are saved, and the saved results are reused to build the overall solution. It is typically applied to optimization problems, such as finding a maximum or minimum value.
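Computing Fibonacci numbers bottom-up is the standard first example of saving and reusing sub-problem results:

```java
public class FibDP {
    // Bottom-up dynamic programming: solve the sub-problems fib(0..n) once,
    // save each result in an array, and reuse it instead of recomputing.
    static long fib(int n) {
        if (n < 2) return n;
        long[] memo = new long[n + 1]; // saved sub-problem results
        memo[1] = 1;
        for (int i = 2; i <= n; i++)
            memo[i] = memo[i - 1] + memo[i - 2]; // reuse saved answers
        return memo[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(fib(50)); // 12586269025
    }
}
```

Without the saved results, the naive recursive version recomputes the same sub-problems exponentially many times; with them, the running time drops to O(n).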
Bubble Sort Algorithm
Bubble Sort is a simple comparison-based sorting algorithm. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, indicating that the list is sorted.
Steps:
1. Start from the first element (index 0) and compare it with the next element (index 1).
2. If the first element is greater than the second element, swap them.
3. Continue this process for all adjacent pairs in the list, moving from left to right.
4. After the first pass, the largest element will have “bubbled up” to the end of the list.
5. Repeat steps 1-4 for the remaining unsorted portion of the list (excluding the elements already in place at the end).
6. Continue these passes until no more swaps are needed.
Complexity:
Time Complexity: O(n^2) in the worst case.
Space Complexity: O(1).
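The steps above can be sketched directly in Java; the early exit when no swaps occur is what detects a sorted list:

```java
import java.util.Arrays;

public class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        int unsortedEnd = a.length - 1; // elements past this point are already in place
        while (swapped) {
            swapped = false;
            for (int i = 0; i < unsortedEnd; i++) {
                if (a[i] > a[i + 1]) { // adjacent pair out of order: swap
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
            unsortedEnd--; // the largest remaining element has bubbled to the end
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 1, 4, 2, 8};
        bubbleSort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 4, 5, 8]
    }
}
```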
Quick Sort
Algorithm:
Quick Sort is a divide-and-conquer sorting algorithm. It works by selecting a “pivot” element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. The sub-arrays are then recursively sorted.
Steps:
1. Choose a pivot element from the array (e.g., the last element).
2. Partition the array into two sub-arrays: elements less than the pivot and elements greater than the pivot.
3. Recursively apply Quick Sort to the sub-arrays.
4. Combine the sorted sub-arrays and the pivot to produce the sorted array.
Complexity:
Time Complexity: Average-case time complexity is O(n log n), while the worst-case time complexity is O(n^2) (if a bad pivot choice is consistently made).
Space Complexity: O(log n) in the average case for the recursive call stack.
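A minimal in-place sketch of these steps, using the Lomuto partition scheme with the last element as pivot:

```java
import java.util.Arrays;

public class QuickSortDemo {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return; // sub-array of 0 or 1 elements is sorted
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);  // elements less than the pivot
        quickSort(a, p + 1, hi);  // elements greater than the pivot
    }

    // Lomuto partition: pivot is a[hi]; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo; // boundary of the "less than pivot" region
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                swap(a, i, j);
                i++;
            }
        }
        swap(a, i, hi); // place pivot between the two partitions
        return i;
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] arr = {7, 2, 9, 4, 1};
        quickSort(arr, 0, arr.length - 1);
        System.out.println(Arrays.toString(arr)); // [1, 2, 4, 7, 9]
    }
}
```

Sorting in place is why no explicit "combine" code is needed: once both partitions are sorted around the pivot, the whole array is sorted.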
Queue Structure
A Queue is a linear data structure that follows the First-In-First-Out (FIFO) principle, meaning that the first element added to the queue will be the first one to be removed.
Key characteristics and operations of a Queue:
Enqueue (Insertion): This operation is used to add an element to the back or end of the queue. It is also referred to as “push” in some contexts.
Dequeue (Removal): This operation is used to remove and return the element from the front or head of the queue. It is also referred to as “pop” in some contexts.
Front: The front of the queue refers to the element that will be removed next (the oldest element).
Rear (or Back): The rear of the queue refers to the element that was most recently added (the newest element).
Empty Queue: A queue is considered empty when it contains no elements.
Full Queue: Depending on the implementation, a queue can be bounded (with a maximum capacity) or unbounded (with dynamic resizing). A queue is considered full when it reaches its maximum capacity in bounded implementations.
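These operations map directly onto Java's `Queue` interface (here backed by an unbounded `ArrayDeque`):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>(); // unbounded implementation

        queue.offer("first");  // enqueue at the rear
        queue.offer("second");
        queue.offer("third");

        System.out.println(queue.peek());    // first  (front: oldest element, not removed)
        System.out.println(queue.poll());    // first  (dequeue removes from the front)
        System.out.println(queue.poll());    // second (FIFO order preserved)
        System.out.println(queue.isEmpty()); // false  ("third" is still queued)
    }
}
```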
Queue vs Stack
- Order of Elements:
  Queue: Follows the FIFO (First-In-First-Out) order, meaning the first element added is the first one removed.
  Stack: Follows the LIFO (Last-In-First-Out) order, meaning the last element added is the first one removed.
- Insertion and Removal Points:
  Queue: Elements are inserted at the rear (end) and removed from the front (head).
  Stack: Elements are pushed onto the top and popped from the top.
- Usage:
  Queue: Used in scenarios where order preservation matters, such as task scheduling, print job management, and breadth-first search algorithms.
  Stack: Used when the order of processing or undo operations is important, such as the function call stack in recursion and backtracking algorithms.
- Applications:
  Queue: Used in scenarios where multiple entities need to access a resource in a fair and ordered manner.
  Stack: Used in scenarios where the most recent item should be accessed first, such as keeping track of function calls and their local variables in programming languages.
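The ordering difference is easy to see side by side; `ArrayDeque` can act as either structure:

```java
import java.util.ArrayDeque;

public class QueueVsStack {
    public static void main(String[] args) {
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        ArrayDeque<Integer> stack = new ArrayDeque<>();

        for (int i = 1; i <= 3; i++) {
            queue.addLast(i); // enqueue at the rear
            stack.push(i);    // push onto the top
        }

        System.out.print("Queue (FIFO): ");
        while (!queue.isEmpty()) System.out.print(queue.pollFirst() + " "); // 1 2 3
        System.out.println();

        System.out.print("Stack (LIFO): ");
        while (!stack.isEmpty()) System.out.print(stack.pop() + " ");       // 3 2 1
        System.out.println();
    }
}
```

The same three insertions come back in opposite orders: the queue returns them oldest-first, the stack newest-first.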
Hashing
Hashing is a technique that uses a mathematical function to convert input data into a fixed-length output. That output is used to map data to specific locations or indices within a data structure, typically an array or hash table.
The goal of hashing is to quickly and efficiently retrieve data based on its key, making it a common choice for data retrieval operations.
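Java's `HashMap` is the standard example: each key is hashed to a bucket index in an internal array, giving average O(1) insertion and lookup by key:

```java
import java.util.HashMap;
import java.util.Map;

public class HashingDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30); // key is hashed to a bucket index
        ages.put("bob", 25);

        System.out.println(ages.get("alice"));         // 30 (average O(1) retrieval by key)
        System.out.println(ages.containsKey("carol")); // false

        // Equal keys always produce equal hash codes, which is what makes lookup work.
        System.out.println("alice".hashCode() == "alice".hashCode()); // true
    }
}
```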
Asymptotic notations
Asymptotic notations are mathematical notations used to analyze and describe the running time (time complexity) and space usage (space complexity) of algorithms in computer science. They provide a way to describe the growth rate of resource consumption as the input size of a problem approaches infinity.
- Big O notation (O()),
- Omega notation (Ω()), and
- Theta notation (Θ()).
1. Big O Notation (O()):
Concept: Big O notation represents the upper bound or worst-case scenario of an algorithm’s time or space complexity. For example, if an algorithm has a time complexity of O(n), it means that its running time grows at most linearly with the input size.
2. Omega Notation (Ω()):
Concept: Omega notation represents the lower bound or best-case scenario of an algorithm’s time or space complexity. It provides a lower limit on the growth rate of resource usage as the input size increases.
For example, if an algorithm has a time complexity of Ω(n^2), it means that its running time grows at least as fast as n^2.
3. Theta Notation (Θ()):
Concept: Theta notation represents both the upper and lower bounds of an algorithm’s time or space complexity. It provides a tight bound on the growth rate of resource usage, indicating that it neither overestimates nor underestimates the actual complexity.
Usage: It is used to express the tight bound on the time complexity of an algorithm. For example, if an algorithm has a time complexity of Θ(n), it means that its running time grows exactly linearly with the input size.
Advantages of array
- Contiguous Memory Allocation: Elements in an array are stored in contiguous memory locations, which improves cache locality.
- Efficient Random Access: Any element can be read or written in O(1) time using its index.
- Easy to Implement: Arrays are a simple, built-in structure in most languages.
- Predictable Performance: Access and update costs do not depend on how many elements are stored.
Disadvantages of Array
- Fixed Size: The size of a static array is set at creation and cannot grow.
- Wasted Memory: Arrays allocate memory for the maximum number of elements they can hold, even if you don’t use that capacity.
- Cannot Handle Heterogeneous Data: All elements must be of the same type.
- Not Suitable for Key-Value Storage: Elements are addressed by integer index, not by arbitrary keys.
Why do we need Linked Lists?
- Dynamic sizing: the list grows and shrinks at runtime, with no fixed capacity
- Memory efficiency: nodes are allocated only as they are needed
- Constant-time insertion and deletion at the head, with no shifting of elements
- Flexible insertion and deletion at any position, given a reference to the node
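The constant-time head insertion claim above is easy to demonstrate with a minimal singly linked list sketch:

```java
public class SinglyLinkedList {
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    Node head;

    // Constant-time insertion: the new node simply becomes the head.
    // No elements are shifted, unlike insertion at the front of an array.
    void addFirst(int value) {
        Node node = new Node(value);
        node.next = head;
        head = node;
    }

    // Walks the chain of next pointers to render the list.
    String toDisplay() {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next)
            sb.append(n.value).append(n.next != null ? " -> " : "");
        return sb.toString();
    }

    public static void main(String[] args) {
        SinglyLinkedList list = new SinglyLinkedList();
        list.addFirst(3);
        list.addFirst(2);
        list.addFirst(1);
        System.out.println(list.toDisplay()); // 1 -> 2 -> 3
    }
}
```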
Types of Linked Lists
- Hash Linked List: used within a hash table to handle collisions
- Sparse Linked Lists: used to represent sparse data structures where most of the elements are empty or have default values.
- Self-adjusting Lists (e.g., Move-to-Front List): Lists where accessed elements are moved to the front (head) of the list, improving access times for frequently accessed items.
- Skip List: a more complex data structure that uses multiple layers of linked lists to speed up search.
- Circular Linked List: similar to singly or doubly linked lists, but with the last node pointing back to the first node (in a circular fashion).