Data Structures Flashcards

1
Q

trees

A
  • data structures that consist of nodes which are linked together in a certain way
  • Nodes in a tree have a parent-child relationship
  • Each node is linked to 0 or more child nodes and has at most 1 parent (see the sketch below)
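
A minimal sketch of how such a node might be represented (the TreeNode class and its field names are illustrative, not taken from the source):

```python
class TreeNode:
    """A node holding a value, a link to its parent, and links to its children."""
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent      # at most 1 parent (None for the root)
        self.children = []        # 0 or more child nodes

    def add_child(self, value):
        child = TreeNode(value, parent=self)
        self.children.append(child)
        return child
```
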
2
Q

root

A
  • special node at the top of the tree from which all other nodes descend.
  • The root has no parent
3
Q

leaf

A

Nodes without any children

4
Q

binary tree

A

a tree in which each node can only have 0, 1, or 2 children (at most 2 children)
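
A minimal sketch of a binary tree node under that constraint (the BinaryTreeNode name and fields are illustrative):

```python
class BinaryTreeNode:
    """A node with at most 2 children, conventionally called left and right."""
    def __init__(self, value):
        self.value = value
        self.left = None     # left child, or None
        self.right = None    # right child, or None
```
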

5
Q

branch

A
  • signifies a decision path, a choice that connects 1 node to another
  • binary tree may have a left branch and a right branch
6
Q

subtree

A

mini tree within a binary tree; its root can be any node, and it consists of that node together with all of the descendants rooted at it.

7
Q

Binary search trees (BST)

A
  1. All of the nodes in the left-hand branch of a node are guaranteed to have lower values than the node itself, and
  2. all of the nodes in the right-hand branch of a node are guaranteed to have a higher value than the node itself.
  3. Each node holds a key, a value, and left and right pointers (see the sketch below).
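
A minimal sketch of such a node (the BSTNode name is illustrative; the source only states that a node holds a key, a value, and two pointers):

```python
class BSTNode:
    """A binary search tree node: a key, a value, and left/right pointers."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.left = None     # subtree of keys lower than self.key
        self.right = None    # subtree of keys higher than self.key
```
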
8
Q

balanced tree

A
  • nodes are inserted roughly equally on the left and right branches
  • the width of each level grows exponentially as you go down the tree, which means the height grows only logarithmically with the number of nodes. So the average case for insertion is O(log(n)) (see the sketch below).
  • a skewed binary search tree is basically a linked list, so you have to iterate through each of its nodes to get to the bottom of the tree before you can insert anything. This makes the worst case O(n).
  • an example of the best case would be inserting into an empty tree, where the new node simply becomes the root; this is O(1).
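
A sketch of recursive insertion that follows the left/right rule, assuming the hypothetical BSTNode class sketched under the previous card; each recursive call descends 1 level, so the work is proportional to the height of the tree:

```python
def insert(node, key, value):
    """Insert key/value into the subtree rooted at node; returns the subtree root."""
    if node is None:
        return BSTNode(key, value)          # empty spot found (O(1) best case)
    if key < node.key:
        node.left = insert(node.left, key, value)
    elif key > node.key:
        node.right = insert(node.right, key, value)
    else:
        node.value = value                  # key already present: update the value
    return node
```
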
9
Q

retrieval

A

follows the same pattern as insertion: you compare the key you are looking for against the key stored in a node of the BST and recursively follow the left or right branch accordingly (see the sketch below).
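
A sketch of that retrieval pattern, again assuming the hypothetical BSTNode class from earlier:

```python
def find(node, key):
    """Return the value stored under key, or None if the key is not in the tree."""
    if node is None:
        return None                   # ran off the bottom: key not present
    if key < node.key:
        return find(node.left, key)   # search the left branch
    if key > node.key:
        return find(node.right, key)  # search the right branch
    return node.value                 # keys match
```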

10
Q

retrieval time complexity

A
  • On average, finding something in a BST requires traversing the height of a balanced tree, so it is O(log(n)).
  • Just like insertion, the worst case occurs when the tree is skewed left or right (everything was inserted to 1 side) and you are searching for the node at the bottom, so it is O(n).
  • The best case is that the node you are trying to find is the root node, which is O(1).
11
Q

removal

A
  • After you find the item that you want to remove using the find process described above, there are 3 scenarios that you may encounter.
    1. no children (a leaf node)
    2. 1 child (left or right, doesn’t matter)
    3. 2 children (left and right)
12
Q

removal-with no children

A

After you find the node, if it has no children, you remove it from the BST by breaking the link to its parent.

13
Q

removal-with 1 child

A
  • you make the parent of the node you are removing point to its 1 child, and then detach the node you want to remove from that parent.
  • For example, if 28 (the right child of 19) is removed and its only child is 21, you make 21 the right child of 19 and detach 28; the child of the removed node (21) now has a new parent (19), the parent of the node you just removed.
14
Q

removal-with 2 children

A
  1. find the minimum value in the right subtree
  2. replace the value of the node to be removed with the found minimum - now, the right subtree contains a duplicate
  3. apply remove to the right subtree to remove the duplicate (see the sketch below)
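
A sketch of the whole removal routine, covering the 3 scenarios from the previous cards; it assumes the hypothetical BSTNode class and remove/find_min names used in these notes, and the 2-children case follows the 3 steps above:

```python
def find_min(node):
    """Walk left until there is no smaller key."""
    while node.left is not None:
        node = node.left
    return node

def remove(node, key):
    """Remove key from the subtree rooted at node; returns the new subtree root."""
    if node is None:
        return None                                   # key was not found
    if key < node.key:
        node.left = remove(node.left, key)
    elif key > node.key:
        node.right = remove(node.right, key)
    else:
        # scenario 1: no children - just drop the node
        if node.left is None and node.right is None:
            return None
        # scenario 2: 1 child - the parent adopts that child
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # scenario 3: 2 children - copy the minimum of the right subtree here,
        # then remove the duplicate from the right subtree
        successor = find_min(node.right)
        node.key, node.value = successor.key, successor.value
        node.right = remove(node.right, successor.key)
    return node
```
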
15
Q

removal time complexity

A

you can use similar logic to insertion and retrieval to show that the best case is O(1), the average case is O(log(n)), and the worst case is O(n)

16
Q

linear search

A

you look through an array 1-by-1 until you find what you are looking for
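
A minimal sketch of linear search (the linear_search name is illustrative):

```python
def linear_search(items, target):
    """Scan the array 1-by-1; returns the index of target, or -1 if it is absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```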

17
Q

Binary search

A

a faster search method, which only works on sorted arrays; at each step you compare the target against the middle of the remaining range and discard the half that cannot contain it.
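
A minimal sketch of binary search on a sorted array (the binary_search name is illustrative):

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range; returns the index of target, or -1."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1        # target can only be in the upper half
        else:
            high = mid - 1       # target can only be in the lower half
    return -1
```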

18
Q

breadth-first search (BFS)

A

works across the rows of a tree: the tree is visited level by level. In order to carry out a BFS, you need a “first-in, first-out” (FIFO) queue so you can store all of the siblings in the queue and process them in the correct order.
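
A sketch of BFS over a binary tree using a FIFO queue; it assumes nodes with value/left/right fields like the hypothetical BinaryTreeNode sketched earlier:

```python
from collections import deque

def bfs(root):
    """Visit a binary tree level by level; returns the values in visit order."""
    if root is None:
        return []
    visited = []
    queue = deque([root])           # FIFO queue of nodes waiting to be processed
    while queue:
        node = queue.popleft()      # first in, first out
        visited.append(node.value)
        if node.left is not None:
            queue.append(node.left)
        if node.right is not None:
            queue.append(node.right)
    return visited
```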

19
Q

in-order traversal

A

the left branch is visited first, then the node itself is handled, then the right branch is visited
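
A sketch of in-order traversal, assuming nodes with value/left/right fields; handle stands for whatever per-node processing you want to do:

```python
def in_order(node, handle):
    """Visit the left branch, handle the node, then visit the right branch."""
    if node is None:
        return
    in_order(node.left, handle)
    handle(node.value)
    in_order(node.right, handle)
```

On a binary search tree, in-order traversal handles the values in sorted order.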

20
Q

pre-order traversal

A

If the node is handled before the branches are visited, then this is known as pre-order traversal
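
The same sketch with the handling moved before the branches (pre-order):

```python
def pre_order(node, handle):
    """Handle the node first, then visit the left and right branches."""
    if node is None:
        return
    handle(node.value)
    pre_order(node.left, handle)
    pre_order(node.right, handle)
```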

21
Q

post-order traversal

A

if the node is handled after the branches have been visited, then this is known as post-order traversal
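
And with the handling moved after the branches (post-order):

```python
def post_order(node, handle):
    """Visit the left and right branches first, then handle the node."""
    if node is None:
        return
    post_order(node.left, handle)
    post_order(node.right, handle)
    handle(node.value)
```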

22
Q

bubble sort

A
  • classic “terrible” sorting algorithm
  • you keep looping through an array to find out whether adjacent values need swapping, and you keep going until there are no more values that need swapping
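
A minimal bubble sort sketch matching that description (the bubble_sort name is illustrative):

```python
def bubble_sort(items):
    """Keep sweeping the array, swapping adjacent values, until no swaps are needed."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items
```
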
23
Q

Merge sort

A
  • you slice the array into 2 halves and sort each half by recursively calling mergeSort. The 2 sorted halves are then merged together in the correct order using the merge step (see the sketch below).
  • To merge the 2 lists, you just keep choosing the lowest value from the left or right array that hasn’t already been added to the output array.
  • When 1 of the arrays is empty, you add all of the remaining values from the other array to the output.
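
A sketch of the merge step and of merge sort itself; merge_sort and merge here stand in for the mergeSort and merge functions the card refers to:

```python
def merge(left, right):
    """Repeatedly take the lowest remaining value from either half."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # 1 of the halves is now empty: add everything left over from the other half
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def merge_sort(items):
    """Slice the array into 2 halves, sort each recursively, then merge them."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))
```
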
24
Q

Quicksort

A
  • sorting algorithm with O(n log(n)) best and average-case performance, although it is O(n^2) in the worst case
  • quicksort is more commonly used than merge sort, as it is more cache-efficient and can easily be performed in place
  • partition the array into 2 halves around a pivot value. All of the values which are less than the pivot value go to 1 half of the array, and all of the values which are greater than the pivot go to the other half of the array. You then recursively sort the 2 halves of the array until the halves are of length 0 or 1.
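
A sketch of the recursion described above; this version works on copies purely to show the idea, and the in-place, Lomuto-partition version is sketched under the next card (the quicksort name is illustrative):

```python
def quicksort(items):
    """Partition around a pivot, then recursively sort the 2 halves."""
    if len(items) <= 1:                 # halves of length 0 or 1 are already sorted
        return items
    pivot = items[-1]
    less = [x for x in items[:-1] if x < pivot]
    greater_or_equal = [x for x in items[:-1] if x >= pivot]
    return quicksort(less) + [pivot] + quicksort(greater_or_equal)
```
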
25
Q

partitioning algorithms

A

a common in-place partitioning algorithm is Lomuto’s algorithm
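
A sketch of Lomuto’s partition scheme plus an in-place quicksort driver that uses it (the function names are illustrative):

```python
def lomuto_partition(items, low, high):
    """Partition items[low..high] in place around items[high] (the pivot).
    Returns the pivot's final index; smaller values end up to its left."""
    pivot = items[high]
    i = low                            # boundary of the "less than pivot" region
    for j in range(low, high):
        if items[j] < pivot:
            items[i], items[j] = items[j], items[i]
            i += 1
    items[i], items[high] = items[high], items[i]   # put the pivot between the halves
    return i

def quicksort_in_place(items, low=0, high=None):
    """In-place quicksort built on Lomuto's partition scheme."""
    if high is None:
        high = len(items) - 1
    if low < high:
        p = lomuto_partition(items, low, high)
        quicksort_in_place(items, low, p - 1)     # values less than the pivot
        quicksort_in_place(items, p + 1, high)    # values greater than the pivot
    return items
```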