Algorithms and Data Structures Flashcards
What is an algorithm?
In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of input(s) and produces the desired output.
https://www.programiz.com/dsa/algorithm
What are the qualities of a good algorithm?
- Input and output should be defined precisely.
- Each step in the algorithm should be clear and unambiguous.
- An algorithm should be the most effective among the many different ways of solving a problem.
- An algorithm shouldn’t include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.
https://www.programiz.com/dsa/algorithm
What are data structures?
A data structure is a way of storing and organizing data on a computer so that it can be accessed and updated efficiently.
It is important to choose the right data structure for your requirements and your project.
https://www.programiz.com/dsa/data-structure-types
What are the types of data structures?
Basically, data structures are divided into two categories:
- Linear data structure
- Non-linear data structure
https://www.programiz.com/dsa/data-structure-types
What is Asymptotic Analysis?
Asymptotic analysis is the study of how an algorithm's performance changes as the order of the input size changes.
https://www.programiz.com/dsa/asymptotic-notations
What is Asymptotic Notation?
Asymptotic notations are the mathematical notations used to describe the running time of an algorithm when the input tends towards a particular value or a limiting value.
https://www.programiz.com/dsa/asymptotic-notations
What is Big-O notation?
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.
O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
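As a worked instance of this definition (my own example, not from the source article), the function f(n) = 3n + 2 is O(n):

```latex
f(n) = 3n + 2 \le 3n + n = 4n \quad \text{for all } n \ge 2
```

so the constants c = 4 and n0 = 2 witness f(n) ∈ O(g(n)) with g(n) = n.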
https://www.programiz.com/dsa/asymptotic-notations
What is Big-Omega notation?
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best case complexity of an algorithm.
Ω(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
https://www.programiz.com/dsa/asymptotic-notations
What is Big-Theta notation?
Theta notation encloses the function from above and below. Since it represents the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
https://www.programiz.com/dsa/asymptotic-notations
What is little-o notation?
Big-O is used as a tight upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. Little-o (o()) notation is used to describe an upper bound that cannot be tight.
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c·g(n) for every integer n ≥ n0.
Thus, little-o means a loose upper bound on f(n): it is a rough estimate of the maximum order of growth, whereas Big-O may be the actual order of growth.
In mathematical terms, f(n) = o(g(n)) means
lim (n→∞) f(n)/g(n) = 0
https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/
What is little-omega notation?
Definition : Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c * g(n) ≥ 0 for every integer n ≥ n0.
The main difference between Big-Omega (Ω) and little-omega (ω) lies in their definitions: in the case of Big-Omega, f(n) = Ω(g(n)) and the bound is 0 ≤ c·g(n) ≤ f(n) for some constant c > 0, but in the case of little-omega, the strict bound 0 ≤ c·g(n) < f(n) must hold for every constant c > 0. So f(n) ∈ ω(g(n)) means f(n) has a strictly higher growth rate than g(n).
The relationship between Big-Omega (Ω) and little-omega (ω) is similar to that of Big-O and little-o, except that now we are looking at lower bounds. Little-omega (ω) is a rough estimate of the order of growth, whereas Big-Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight. And, f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).
In mathematical terms, if f(n) ∈ ω(g(n)) then
lim (n→∞) f(n)/g(n) = ∞
https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-notations/
What is the Master Theorem?
The master method is a formula for solving recurrence relations of the form:
T(n) = aT(n/b) + f(n),
where,
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem. All subproblems are assumed
to have the same size.
f(n) = cost of the work done outside the recursive call,
which includes the cost of dividing the problem and
cost of merging the solutions
Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.
Under these conditions, T(n) has the following asymptotic bounds:
1. If f(n) = O(n^(log_b a − ϵ)), then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ϵ)), then T(n) = Θ(f(n)).
Here, ϵ > 0 is a constant.
Each of the above conditions can be interpreted as:
- If the cost of solving the sub-problems at each level increases by a certain factor, then f(n) is polynomially smaller than n^(log_b a). Thus, the time complexity is dominated by the cost of the last level, i.e. n^(log_b a).
- If the cost of solving the sub-problems at each level is nearly equal, then f(n) is on the order of n^(log_b a). Thus, the time complexity is f(n) times the total number of levels, i.e. n^(log_b a) · log n.
- If the cost of solving the sub-problems at each level decreases by a certain factor, then f(n) is polynomially larger than n^(log_b a). Thus, the time complexity is dominated by the cost of f(n).
The master theorem cannot be used if:
- T(n) is not monotone, e.g. T(n) = sin n
- f(n) is not a polynomial, e.g. f(n) = 2^n
- a is not a constant, e.g. a = 2^n
- a < 1
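For the common situation where f(n) = Θ(n^k), the three cases can be mechanized. The sketch below is my own illustration (the function name and its output format are assumptions, not from the article):

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the master theorem.

    Assumes a >= 1, b > 1, and that f(n) is a plain polynomial n^k
    (so the regularity condition of case 3 holds automatically).
    Returns a string describing the asymptotic bound on T(n).
    """
    crit = math.log(a, b)                    # critical exponent log_b(a)
    if math.isclose(k, crit):
        return f"Theta(n^{k:g} * log n)"     # case 2: work is balanced
    elif k < crit:
        return f"Theta(n^{crit:g})"          # case 1: leaves dominate
    else:
        return f"Theta(n^{k:g})"             # case 3: root dominates

# Merge sort: T(n) = 2T(n/2) + Theta(n) falls under case 2.
print(master_theorem(2, 2, 1))  # Theta(n^1 * log n)
```

`math.isclose` guards against floating-point error in log_b(a); for irrational critical exponents (e.g. Strassen's a = 7, b = 2) the returned exponent is only a decimal approximation.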
https://www.programiz.com/dsa/master-theorem
What is a Divide and Conquer algorithm?
A divide and conquer algorithm is a strategy of solving a large problem by
- breaking the problem into smaller sub-problems
- solving the sub-problems, and
- combining them to get the desired output.
To use the divide and conquer algorithm, recursion is used.
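A classic illustration of the three steps is merge sort. This Python sketch (my own, not code from the article) divides the array, recursively sorts the halves, and combines them by merging:

```python
def merge_sort(arr):
    """Divide and conquer: split, recursively sort halves, merge."""
    if len(arr) <= 1:               # base case: trivially sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide + conquer the left half
    right = merge_sort(arr[mid:])   # divide + conquer the right half
    # combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])         # append any leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```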
https://www.programiz.com/dsa/divide-and-conquer
What is the time complexity of a Divide and Conquer algorithm?
The complexity of the divide and conquer algorithm is calculated using the master theorem.
https://www.programiz.com/dsa/divide-and-conquer
When do you use divide and conquer vs dynamic programming?
The divide and conquer approach divides a problem into smaller subproblems; these subproblems are further solved recursively. The result of each subproblem is not stored for future reference, whereas, in a dynamic approach, the result of each subproblem is stored for future reference.
Use the divide and conquer approach when the same subproblem is not solved multiple times. Use the dynamic approach when the result of a subproblem is to be used multiple times in the future.
https://www.programiz.com/dsa/divide-and-conquer
What are the advantages of the divide and conquer approach?
- The complexity of multiplying two matrices using the naive method is O(n³), whereas using the divide and conquer approach (i.e. Strassen's matrix multiplication) it is O(n^2.8074). This approach also simplifies other problems, such as the Tower of Hanoi.
- This approach is suitable for multiprocessing systems.
- It makes efficient use of memory caches.
https://www.programiz.com/dsa/divide-and-conquer
What is a stack?
A stack is a linear data structure that follows the principle of Last In First Out (LIFO). This means the last element inserted inside the stack is removed first.
You can think of the stack data structure as a pile of plates stacked on top of one another.
https://www.programiz.com/dsa/stack
What is LIFO?
Last In First Out. In programming terms, putting an item on top of the stack is called push and removing an item is called pop.
https://www.programiz.com/dsa/stack
What are the basic operations of a stack?
There are some basic operations that allow us to perform different actions on a stack.
- Push: Add an element to the top of a stack
- Pop: Remove an element from the top of a stack
- IsEmpty: Check if the stack is empty
- IsFull: Check if the stack is full
- Peek: Get the value of the top element without removing it
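These operations can be sketched in Python. The class below is illustrative (mine, not the article's); the fixed `capacity` parameter is an assumption that makes IsFull meaningful for a list-backed stack:

```python
class Stack:
    """A minimal array-backed stack sketch with a fixed capacity."""

    def __init__(self, capacity=10):
        self._items = []
        self._capacity = capacity

    def push(self, item):
        if self.is_full():
            raise OverflowError("stack is full")
        self._items.append(item)      # add to the top

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()      # remove from the top (LIFO)

    def peek(self):
        return self._items[-1]        # top element, not removed

    def is_empty(self):
        return len(self._items) == 0

    def is_full(self):
        return len(self._items) == self._capacity
```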
https://www.programiz.com/dsa/stack
What is the time complexity of a stack?
For the array-based implementation of a stack, the push and pop operations take constant time, i.e. O(1).
https://www.programiz.com/dsa/stack
What are some applications of the stack data structure?
To reverse a word - Put all the letters in a stack and pop them out. Because of the LIFO order of stack, you will get the letters in reverse order.
In compilers - Compilers use the stack to calculate the value of expressions like 2 + 4 / 5 * (7 - 9) by converting the expression to prefix or postfix form.
In browsers - The back button in a browser saves all the URLs you have visited previously in a stack. Each time you visit a new page, it is added on top of the stack. When you press the back button, the current URL is removed from the stack, and the previous URL is accessed.
https://www.programiz.com/dsa/stack
What is a queue?
A queue is a useful data structure in programming. It is similar to the ticket queue outside a cinema hall, where the first person entering the queue is the first person who gets the ticket.
Queue follows the First In First Out (FIFO) rule - the item that goes in first is the item that comes out first.
https://www.programiz.com/dsa/queue
What is FIFO?
First In First Out (FIFO) rule - the item that goes in first is the item that comes out first.
In programming terms, putting items in the queue is called enqueue, and removing items from the queue is called dequeue.
https://www.programiz.com/dsa/queue
What are the basic operations of a queue?
A queue is an object (an abstract data structure - ADT) that allows the following operations:
- Enqueue: Add an element to the end of the queue
- Dequeue: Remove an element from the front of the queue
- IsEmpty: Check if the queue is empty
- IsFull: Check if the queue is full
- Peek: Get the value of the front of the queue without removing it
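As a sketch of these operations (my own, not from the article), the class below wraps `collections.deque`, which gives O(1) work at both ends; the `capacity` parameter is an assumption to make IsFull meaningful:

```python
from collections import deque

class Queue:
    """A minimal FIFO queue sketch built on collections.deque."""

    def __init__(self, capacity=10):
        self._items = deque()
        self._capacity = capacity

    def enqueue(self, item):
        if self.is_full():
            raise OverflowError("queue is full")
        self._items.append(item)        # add at the rear

    def dequeue(self):
        if self.is_empty():
            raise IndexError("dequeue from empty queue")
        return self._items.popleft()    # remove from the front (FIFO)

    def peek(self):
        return self._items[0]           # front element, not removed

    def is_empty(self):
        return len(self._items) == 0

    def is_full(self):
        return len(self._items) == self._capacity
```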
https://www.programiz.com/dsa/queue
What is a limitation of a queue?
In a simple array implementation, after a bit of enqueuing and dequeuing, the usable size of the queue is reduced: the freed front indexes (e.g. 0 and 1) can only be used again once the queue is reset, i.e. when all the elements have been dequeued.
After REAR reaches the last index, if we can store extra elements in the empty spaces (0 and 1), we can make use of those spaces. This is implemented by a modified queue called the circular queue.
https://www.programiz.com/dsa/queue
What is the complexity of a queue?
The complexity of enqueue and dequeue operations in a queue using an array is O(1). If you use list.pop(i) in Python code, the complexity may be O(n), depending on the position of the item to be popped.
https://www.programiz.com/dsa/queue
What are the applications of a queue?
- CPU scheduling, disk scheduling
- When data is transferred asynchronously between two processes, the queue is used for synchronization, for example in IO buffers, pipes, and file IO
- Handling of interrupts in real-time systems
- Call center phone systems use queues to hold callers in the order they arrive
https://www.programiz.com/dsa/queue
What are the types of queues?
There are four different types of queues:
- Simple Queue
- Circular Queue
- Priority Queue
- Double Ended Queue (Deque)
https://www.programiz.com/dsa/types-of-queue
What is a simple queue?
In a simple queue, insertion takes place at the rear and removal occurs at the front. It strictly follows the FIFO (First in First out) rule.
https://www.programiz.com/dsa/types-of-queue
What is a circular queue?
In a circular queue, the last element points to the first element making a circular link.
The main advantage of a circular queue over a simple queue is better memory utilization. If the last position is full and the first position is empty, we can insert an element in the first position. This action is not possible in a simple queue.
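A minimal fixed-size circular queue can be sketched with a plain list and modular indexing; this implementation is my own illustration, not code from the article:

```python
class CircularQueue:
    """Fixed-size circular queue sketch using modular index arithmetic."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._capacity = capacity
        self._front = 0     # index of the front element
        self._size = 0      # number of stored elements

    def enqueue(self, item):
        if self._size == self._capacity:
            raise OverflowError("queue is full")
        rear = (self._front + self._size) % self._capacity  # wraps around
        self._buf[rear] = item
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue is empty")
        item = self._buf[self._front]
        self._front = (self._front + 1) % self._capacity    # wraps around
        self._size -= 1
        return item
```

Because the rear index wraps around with `% capacity`, slots freed at the front are reused, which is exactly the advantage over a simple array queue.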
https://www.programiz.com/dsa/types-of-queue
What is a priority queue?
A priority queue is a special type of queue in which each element is associated with a priority and is served according to its priority. If elements with the same priority occur, they are served according to their order in the queue.
Insertion occurs based on the arrival of the values and removal occurs based on priority.
https://www.programiz.com/dsa/types-of-queue
What is a deque?
In a double-ended queue, insertion and removal of elements can be performed from either the front or the rear. Thus, it does not follow the FIFO (First In First Out) rule.
https://www.programiz.com/dsa/types-of-queue
What is the complexity of a circular queue?
The complexity of the enqueue and dequeue operations of a circular queue is O(1) for array implementations.
https://www.programiz.com/dsa/circular-queue
What are the applications of a circular queue?
CPU scheduling
Memory management
Traffic Management
https://www.programiz.com/dsa/circular-queue
What’s the difference between a priority queue and a normal queue?
In a queue, the first-in-first-out rule is implemented whereas, in a priority queue, the values are removed on the basis of priority. The element with the highest priority is removed first.
https://www.programiz.com/dsa/priority-queue
How is a priority queue implemented?
A priority queue can be implemented using an array, a linked list, a heap data structure, or a binary search tree. Among these, the heap data structure provides an efficient implementation of priority queues.
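In Python, the standard-library heapq module provides the heap-backed implementation directly: it maintains a binary min-heap on a plain list, so the smallest value is treated as the highest priority. A small demo (mine, not the article's) using (priority, item) pairs:

```python
import heapq

# heapq keeps a plain list ordered as a binary min-heap,
# so the pair with the smallest priority number pops first.
pq = []
heapq.heappush(pq, (2, "medium"))
heapq.heappush(pq, (1, "high"))
heapq.heappush(pq, (3, "low"))

# Items come out by priority, not by insertion order.
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)  # ['high', 'medium', 'low']
```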
https://www.programiz.com/dsa/priority-queue
What is the complexity of a priority queue?
A comparative analysis of different implementations of priority queue is given below.
Operation   Linked List   Binary Heap   Binary Search Tree
peek        O(1)          O(1)          O(1)
insert      O(n)          O(log n)      O(log n)
delete      O(1)          O(log n)      O(log n)
https://www.programiz.com/dsa/priority-queue
What are the applications of a priority queue?
Some of the applications of a priority queue are:
- Dijkstra's algorithm
- For implementing a stack
- For load balancing and interrupt handling in an operating system
- For data compression in Huffman coding
https://www.programiz.com/dsa/priority-queue
What are the types of deque?
Input Restricted Deque
In this deque, input is restricted to a single end, but deletion is allowed at both ends.
Output Restricted Deque
In this deque, output is restricted to a single end, but insertion is allowed at both ends.
https://www.programiz.com/dsa/deque
What is the complexity of a deque?
The time complexity of all deque operations (insertion and removal at either end) is constant, i.e. O(1).
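Python ships a ready-made deque in the standard library; this short demo (my own) shows insertion and removal at both ends:

```python
from collections import deque

d = deque()
d.append("a")        # insert at the rear
d.appendleft("b")    # insert at the front
d.append("c")
print(list(d))       # ['b', 'a', 'c']

front = d.popleft()  # remove from the front -> 'b'
rear = d.pop()       # remove from the rear  -> 'c'
```

`collections.deque` guarantees O(1) appends and pops at both ends, matching the complexity stated above.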
https://www.programiz.com/dsa/deque
What are the applications of a deque?
In undo operations on software.
To store history in browsers.
For implementing both stacks and queues.
https://www.programiz.com/dsa/deque
What is a linked list?
A linked list is a linear data structure that includes a series of connected nodes. Here, each node stores the data and the address of the next node.
The power of a linked list comes from the ability to break the chain and rejoin it.
Doing something similar in an array would have required shifting the positions of all the subsequent elements.
https://www.programiz.com/dsa/linked-list
What is the complexity of linked lists?
Time Complexity
            Worst case   Average case
Search      O(n)         O(n)
Insert      O(1)         O(1)
Deletion    O(1)         O(1)
Space Complexity: O(n)
https://www.programiz.com/dsa/linked-list
What are the applications of linked lists?
Dynamic memory allocation
Implemented in stack and queue
In undo functionality of software
Hash tables, Graphs
https://www.programiz.com/dsa/linked-list
What are the basic linked list operations?
Here’s a list of basic linked list operations:
- Traversal - access each element of the linked list
- Insertion - add a new element to the linked list
- Deletion - remove existing elements
- Search - find a node in the linked list
- Sort - sort the nodes of the linked list
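A minimal singly linked list covering traversal, insertion, search-and-delete can be sketched as follows (my own illustration; the method names are assumptions, not from the article):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None                 # address of the next node

class LinkedList:
    """Singly linked list sketch with a few of the basic operations."""

    def __init__(self):
        self.head = None

    def insert_front(self, data):        # O(1): just relink the head
        node = Node(data)
        node.next = self.head
        self.head = node

    def traverse(self):                  # O(n): visit each node in order
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

    def delete(self, data):              # O(n) search, O(1) relink
        cur, prev = self.head, None
        while cur:
            if cur.data == data:
                if prev:
                    prev.next = cur.next
                else:
                    self.head = cur.next
                return True
            prev, cur = cur, cur.next
        return False
```

Note how deletion only rewires one pointer once the node is found; this is the "break the chain and rejoin it" power mentioned earlier.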
https://www.programiz.com/dsa/linked-list-operations
What are the types of linked lists?
There are three common types of Linked List.
- Singly Linked List
- Doubly Linked List
- Circular Linked List
https://www.programiz.com/dsa/linked-list-types
What is a doubly linked list?
A doubly linked list is a type of linked list in which each node consists of 3 components:
- *prev - address of the previous node
- data - data item
- *next - address of the next node
https://www.programiz.com/dsa/doubly-linked-list
What is the complexity of a doubly linked list?
Doubly Linked List Complexity
                      Time Complexity   Space Complexity
Insertion Operation   O(1) or O(n)      O(1)
Deletion Operation    O(1)              O(1)
- Complexity of the insertion operation: insertions that do not require traversal have a time complexity of O(1), while an insertion that requires traversal has a time complexity of O(n). The space complexity is O(1).
- Complexity of the deletion operation: all deletion operations run with a time complexity of O(1), and the space complexity is O(1).
https://www.programiz.com/dsa/doubly-linked-list
What are the applications of a doubly linked list?
Redo and undo functionality in software.
Forward and backward navigation in browsers.
For navigation systems where forward and backward navigation is required.
https://www.programiz.com/dsa/doubly-linked-list
Compare singly and doubly linked lists.
Singly Linked List:
- Each node consists of a data value and a pointer to the next node.
- Traversal can occur in one way only (forward direction).
- It requires less space.
- It can be implemented on the stack.
Doubly Linked List:
- Each node consists of a data value, a pointer to the next node, and a pointer to the previous node.
- Traversal can occur in both ways.
- It requires more space because of an extra pointer.
- It has multiple usages; it can be implemented on the stack, heap, and binary tree.
https://www.programiz.com/dsa/doubly-linked-list
What is a circular linked list?
A circular linked list is a type of linked list in which the first and the last nodes are also connected to each other to form a circle.
You can have these be single linked or double linked.
https://www.programiz.com/dsa/circular-linked-list
What is the complexity of a circular linked list?
Circular Linked List Complexity
                      Time Complexity   Space Complexity
Insertion Operation   O(1) or O(n)      O(1)
Deletion Operation    O(1)              O(1)
- Complexity of the insertion operation: insertions that do not require traversal have a time complexity of O(1), while an insertion that requires traversal has a time complexity of O(n). The space complexity is O(1).
- Complexity of the deletion operation: all deletion operations run with a time complexity of O(1), and the space complexity is O(1).
https://www.programiz.com/dsa/circular-linked-list
What are the applications of circular linked lists?
It is used in multiplayer games to give a chance to each player to play the game.
Multiple running applications can be placed in a circular linked list on an operating system. The OS keeps iterating over these applications.
https://www.programiz.com/dsa/circular-linked-list
What is a hash table?
The Hash table data structure stores elements in key-value pairs where
- Key - a unique integer that is used for indexing the values
- Value - the data associated with the keys
https://www.programiz.com/dsa/hash-table
What is a hash function?
In a hash table, a new index is computed using the key, and the element corresponding to that key is stored at that index. This process is called hashing.
Let k be a key and h(x) be a hash function.
Here, h(k) will give us a new index to store the element linked with k.
https://www.programiz.com/dsa/hash-table
What is a hash collision?
When the hash function generates the same index for multiple keys, there will be a conflict (what value to be stored in that index). This is called a hash collision.
We can resolve the hash collision using one of the following techniques.
- Collision resolution by chaining
- Open addressing: linear/quadratic probing and double hashing
https://www.programiz.com/dsa/hash-table
How do you resolve hash collisions via chaining?
In chaining, if a hash function produces the same index for multiple elements, these elements are stored in the same index by using a doubly-linked list.
If j is the slot for multiple elements, it contains a pointer to the head of the list of elements. If no element is present, j contains NIL.
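A chaining hash table can be sketched briefly in Python. This is my own illustration: it uses plain Python lists for the chains instead of the doubly linked list the article describes, and the division-method hash is an assumption:

```python
class ChainedHashTable:
    """Hash table with collision resolution by chaining (sketch)."""

    def __init__(self, size=10):
        self.size = size
        self.slots = [[] for _ in range(size)]   # one chain per slot

    def _index(self, key):
        return key % self.size                   # division-method hash

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for pair in chain:
            if pair[0] == key:                   # key exists: update value
                pair[1] = value
                return
        chain.append([key, value])               # collision: same chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None                              # key absent
```

Keys 1 and 11 both hash to slot 1 here, so they end up in the same chain rather than overwriting each other.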
https://www.programiz.com/dsa/hash-table
How do you resolve hash collisions via open addressing?
Unlike chaining, open addressing doesn’t store multiple elements into the same slot. Here, each slot is either filled with a single key or left NIL.
There are a few options:
- linear probing
- quadratic probing
- double hashing
https://www.programiz.com/dsa/hash-table
What is linear probing?
In linear probing, hashing collision is resolved by checking the next slot.
h(k, i) = (h′(k) + i) mod m
where
- i = {0, 1, 2, …}
- h′(k) is a new hash function
If a collision occurs at h(k, 0), then h(k, 1) is checked. In this way, the value of i is incremented linearly.
The problem with linear probing is that a cluster of adjacent slots is filled. When inserting a new element, the entire cluster must be traversed. This adds to the time required to perform operations on the hash table.
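Linear probing can be sketched as below. This is my own illustration of the relation h(k, i) = (h′(k) + i) mod m, with h′(k) = k mod m as an assumed base hash; deletion and resizing are not handled:

```python
class LinearProbingTable:
    """Open addressing with linear probing: h(k, i) = (h'(k) + i) mod m."""

    def __init__(self, m=11):
        self.m = m
        self.keys = [None] * m
        self.values = [None] * m

    def put(self, key, value):
        for i in range(self.m):                      # probe successive slots
            j = (key + i) % self.m                   # h(k, i)
            if self.keys[j] is None or self.keys[j] == key:
                self.keys[j], self.values[j] = key, value
                return
        raise OverflowError("hash table is full")

    def get(self, key):
        for i in range(self.m):
            j = (key + i) % self.m
            if self.keys[j] is None:
                return None                          # empty slot ends the probe
            if self.keys[j] == key:
                return self.values[j]
        return None
```

With m = 5, the keys 1 and 6 both map to slot 1, so key 6 lands in the next slot; this is exactly how clusters of adjacent filled slots begin to form.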
https://www.programiz.com/dsa/hash-table
What is quadratic probing?
Quadratic probing is another technique for resolving hash collisions. It works similarly to linear probing, but the spacing between probed slots is increased (to greater than one) by using the following relation:
h(k, i) = (h′(k) + c1·i + c2·i²) mod m
where
- c1 and c2 are positive auxiliary constants
- i = {0, 1, 2, …}
https://www.programiz.com/dsa/hash-table
What is double hashing?
If a collision occurs after applying a hash function h(k), then another hash function is calculated for finding the next slot.
h(k, i) = (h1(k) + i·h2(k)) mod m
https://www.programiz.com/dsa/hash-table
What makes a good hash function?
A good hash function may not prevent collisions completely; however, it can reduce the number of collisions.
https://www.programiz.com/dsa/hash-table
What are the different methods to find a good hash function?
- division method
- multiplication method
- universal hashing
https://www.programiz.com/dsa/hash-table
What is the division method?
If k is a key and m is the size of the hash table, the hash function h() is calculated as:
h(k) = k mod m
For example, if the size of a hash table is 10 and k = 112, then h(k) = 112 mod 10 = 2. The value of m must not be a power of 2. This is because the powers of 2 in binary format are 10, 100, 1000, …. When we compute k mod m with m = 2^p, we always get just the p lower-order bits of k:
if m = 2² = 4, k = 17, then h(k) = 17 mod 4 = 1 (10001 mod 100 = 01)
if m = 2³ = 8, k = 17, then h(k) = 17 mod 8 = 1 (10001 mod 1000 = 001)
if m = 2⁴ = 16, k = 17, then h(k) = 17 mod 16 = 1 (10001 mod 10000 = 0001)
if m = 2^p, then h(k) = the p lower-order bits of k
https://www.programiz.com/dsa/hash-table
What is the multiplication method?
h(k) = ⌊m(kA mod 1)⌋
where,
- kA mod 1 gives the fractional part of kA
- ⌊ ⌋ gives the floor value
- A is any constant with 0 < A < 1; an optimal choice, suggested by Knuth, is A ≈ (√5 − 1)/2
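The formula translates to Python almost directly; this small sketch (my own, with Knuth's constant as the default) shows each part of the computation:

```python
import math

def hash_multiplication(k, m, A=(math.sqrt(5) - 1) / 2):
    """Multiplication method: h(k) = floor(m * (k*A mod 1)).

    A defaults to Knuth's suggested (sqrt(5) - 1) / 2, about 0.618.
    """
    frac = (k * A) % 1          # fractional part of k*A, in [0, 1)
    return math.floor(m * frac) # scaled to a slot index in [0, m)

print(hash_multiplication(123456, 1024))
```

Because `frac` is always in [0, 1), the result is always a valid index in [0, m), regardless of how large k is.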
https://www.programiz.com/dsa/hash-table
What is universal hashing?
In Universal hashing, the hash function is chosen at random independent of keys.
https://www.programiz.com/dsa/hash-table
What are the applications of hash tables?
Hash tables are implemented where
- constant-time lookup and insertion are required
- cryptographic applications
- indexing of data is required
https://www.programiz.com/dsa/hash-table
Why is hashing needed?
After storing a large amount of data, we need to perform various operations on it, and lookups are inevitable for these datasets. Linear search and binary search perform lookups with time complexities of O(n) and O(log n) respectively. As the size of the dataset increases, these costs become significantly high, which is not acceptable.
We need a technique that does not depend on the size of data. Hashing allows lookups to occur in constant time i.e. O(1).
https://www.programiz.com/dsa/hashing
What is a heap?
A heap is a complete binary tree that satisfies the heap property, where any given node is
- always greater than its child node(s), and the key of the root node is the largest among all other nodes; this property is called the max-heap property, or
- always smaller than its child node(s), and the key of the root node is the smallest among all other nodes; this property is called the min-heap property.
https://www.programiz.com/dsa/heap-data-structure
What is “heapify”?
Heapify is the process of creating a heap data structure from a binary tree. It is used to create a Min-Heap or a Max-Heap.
It is used after insertion or deletion to restore the heap property.
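The standard sift-down version of heapify, and building a max-heap from an arbitrary array with it, can be sketched as follows (my own illustration, not code from the article):

```python
def heapify(arr, n, i):
    """Sift the element at index i down until the max-heap property
    holds for the subtree rooted at i (heap size n)."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2      # children in the array layout
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]  # lift the larger child
        heapify(arr, n, largest)                     # keep sifting down

def build_max_heap(arr):
    """Heapify every internal node, bottom-up."""
    for i in range(len(arr) // 2 - 1, -1, -1):
        heapify(arr, len(arr), i)
    return arr

print(build_max_heap([3, 9, 2, 1, 4, 5]))  # [9, 4, 5, 1, 3, 2]
```

After building, every parent at index (i − 1) // 2 is at least as large as its child at index i.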
https://www.programiz.com/dsa/heap-data-structure
What are the applications of heaps?
Heap is used while implementing a priority queue.
Dijkstra’s Algorithm
Heap Sort
https://www.programiz.com/dsa/heap-data-structure
What is a Fibonacci heap?
A Fibonacci heap is a data structure that consists of a collection of trees which follow the min-heap or max-heap property (discussed in the heap cards above). These two properties are the characteristics of the trees present in a Fibonacci heap.
In a fibonacci heap, a node can have more than two children or no children at all. Also, it has more efficient heap operations than that supported by the binomial and binary heaps.
The Fibonacci heap is called a Fibonacci heap because the trees are constructed in such a way that a tree of order n has at least F(n+2) nodes in it, where F(n+2) is the (n + 2)th Fibonacci number.
https://www.programiz.com/dsa/fibonacci-heap
What are the properties of a Fibonnaci heap?
Important properties of a Fibonacci heap are:
- It is a set of min heap-ordered trees (i.e. the parent is always smaller than its children).
- A pointer is maintained at the minimum element node.
- It consists of a set of marked nodes (used by the decrease-key operation).
- The trees within a Fibonacci heap are unordered but rooted.
https://www.programiz.com/dsa/fibonacci-heap
What is the memory representation of the nodes in a Fibonacci heap?
The roots of all the trees are linked together for faster access. The child nodes of a parent node are connected to each other through a circular doubly linked list.
There are two main advantages of using a circular doubly linked list.
- Deleting a node from the tree takes O(1) time.
- The concatenation of two such lists takes O(1) time.
https://www.programiz.com/dsa/fibonacci-heap
What is the complexity of a Fibonnaci heap?
Operation      Complexity
Insertion      O(1)
Find Min       O(1)
Union          O(1)
Extract Min    O(log n)
Decrease Key   O(1)
Delete Node    O(log n)
https://www.programiz.com/dsa/fibonacci-heap
What are the applications of Fibonnaci heaps?
To improve the asymptotic running time of Dijkstra’s algorithm.
https://www.programiz.com/dsa/fibonacci-heap
What is a tree?
A tree is a nonlinear hierarchical data structure that consists of nodes connected by edges.
Other data structures such as arrays, linked lists, stacks, and queues are linear data structures that store data sequentially. In a linear data structure, the time complexity of operations increases with the data size, which is not acceptable in today’s computational world.
Tree data structures, being non-linear, allow quicker and easier access to the data.
https://www.programiz.com/dsa/trees
What is a tree node?
A node is an entity that contains a key or value and pointers to its child nodes.
The last node of each path is called a leaf node or external node; leaf nodes do not contain a link/pointer to child nodes.
A node having at least one child node is called an internal node.
https://www.programiz.com/dsa/trees
What is a tree edge?
It is the link between any two nodes.
https://www.programiz.com/dsa/trees
What is a tree root?
It is the topmost node of a tree.
https://www.programiz.com/dsa/trees
What is the height of a tree node?
The height of a node is the number of edges from the node to the deepest leaf (i.e. the longest path from the node to a leaf node).
https://www.programiz.com/dsa/trees
What is the depth of a tree node?
The depth of a node is the number of edges from the root to the node.
https://www.programiz.com/dsa/trees
What is the height of a tree?
The height of a Tree is the height of the root node or the depth of the deepest node.
https://www.programiz.com/dsa/trees
What is the degree of a tree node?
The degree of a node is the total number of branches of that node.
https://www.programiz.com/dsa/trees
What is a forest?
A collection of disjoint trees is called a forest.
You can create a forest by cutting the root of a tree.
https://www.programiz.com/dsa/trees
What are some types of trees?
Binary Tree
Binary Search Tree
AVL Tree
B-Tree
https://www.programiz.com/dsa/trees
What is tree traversal?
In order to perform any operation on a tree, you need to reach the specific node. A tree traversal algorithm helps in visiting the required nodes in the tree.
https://www.programiz.com/dsa/trees
What are the applications of trees?
Binary Search Trees(BSTs) are used to quickly check whether an element is present in a set or not.
Heap is a kind of tree that is used for heap sort.
A modified version of a tree called a trie is used in modern routers to store routing information.
Most popular databases use B-Trees and T-Trees, which are variants of the tree structure, to store their data.
Compilers use a syntax tree to validate the syntax of every program you write.
https://www.programiz.com/dsa/trees
What are the ways to traverse a tree?
inorder
preorder
postorder
https://www.programiz.com/dsa/tree-traversal
What is inorder traversal?
First, visit all the nodes in the left subtree
Then the root node
Visit all the nodes in the right subtree
https://www.programiz.com/dsa/tree-traversal
What is preorder traversal?
Visit root node
Visit all the nodes in the left subtree
Visit all the nodes in the right subtree
https://www.programiz.com/dsa/tree-traversal
What is postorder traversal?
Visit all the nodes in the left subtree
Visit all the nodes in the right subtree
Visit the root node
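The three traversal orders above can be sketched recursively in Python (my own illustration; the `Node` class and the small example tree are assumptions):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def inorder(node):
    """left subtree -> root -> right subtree"""
    if node is None:
        return []
    return inorder(node.left) + [node.data] + inorder(node.right)

def preorder(node):
    """root -> left subtree -> right subtree"""
    if node is None:
        return []
    return [node.data] + preorder(node.left) + preorder(node.right)

def postorder(node):
    """left subtree -> right subtree -> root"""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.data]

#       1
#      / \
#     2   3
root = Node(1, Node(2), Node(3))
print(inorder(root))    # [2, 1, 3]
print(preorder(root))   # [1, 2, 3]
print(postorder(root))  # [2, 3, 1]
```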
https://www.programiz.com/dsa/tree-traversal
What is a binary tree?
A binary tree is a tree data structure in which each parent node can have at most two children. Each node of a binary tree consists of three items:
- data item
- address of left child
- address of right child
https://www.programiz.com/dsa/binary-tree
What are the types of binary tree?
full
perfect
complete
degenerate/pathological
skewed
balanced
https://www.programiz.com/dsa/binary-tree
What is a full/proper binary tree?
A full binary tree is a special type of binary tree in which every parent node/internal node has either two or no children.
https://www.programiz.com/dsa/binary-tree
What is a perfect binary tree?
A perfect binary tree is a type of binary tree in which every internal node has exactly two child nodes and all the leaf nodes are at the same level.
https://www.programiz.com/dsa/binary-tree
What is a complete binary tree?
A complete binary tree is just like a full binary tree, but with two major differences:
- Every level, except possibly the last, must be completely filled.
- All the leaf elements must lean towards the left.
The last leaf element might not have a right sibling, i.e. a complete binary tree doesn't have to be a full binary tree.
https://www.programiz.com/dsa/binary-tree
What is a degenerate/pathological binary tree?
A degenerate or pathological tree is a tree in which every parent node has only a single child, either left or right.
https://www.programiz.com/dsa/binary-tree
What is a skewed binary tree?
A skewed binary tree is a pathological/degenerate tree in which the tree is either dominated by the left nodes or the right nodes. Thus, there are two types of skewed binary tree: left-skewed binary tree and right-skewed binary tree.
https://www.programiz.com/dsa/binary-tree
What is a balanced binary tree?
It is a type of binary tree in which the difference between the height of the left and the right subtree for each node is either 0 or 1.
https://www.programiz.com/dsa/binary-tree
What is the application of binary trees?
For easy and quick access to data
In router algorithms
To implement heap data structure
Syntax tree
https://www.programiz.com/dsa/binary-tree
What are the full binary tree theorems?
Let
i = the number of internal nodes
n = the total number of nodes
l = the number of leaves
λ = the number of levels
Then:
- The number of leaves is i + 1.
- The total number of nodes is 2i + 1.
- The number of internal nodes is (n − 1) / 2.
- The number of leaves is (n + 1) / 2.
- The total number of nodes is 2l − 1.
- The number of internal nodes is l − 1.
- The number of leaves is at most 2^(λ − 1).
https://www.programiz.com/dsa/full-binary-tree
What are the perfect binary tree theorems?
- A perfect binary tree of height h has 2^(h + 1) − 1 nodes.
- A perfect binary tree with n nodes has height log(n + 1) − 1 = Θ(ln n).
- A perfect binary tree of height h has 2^h leaf nodes.
- The average depth of a node in a perfect binary tree is Θ(ln n).
https://www.programiz.com/dsa/perfect-binary-tree