Leetcode: Basic Algorithms Flashcards

1
Q

Invert (reverse) a binary tree

A
# recursively
def invertTree1(self, root):
    if root:
        root.left, root.right = self.invertTree1(root.right), self.invertTree1(root.left)
        return root

# BFS (uses collections.deque)
def invertTree2(self, root):
    queue = collections.deque([root])
    while queue:
        node = queue.popleft()
        if node:
            node.left, node.right = node.right, node.left
            queue.append(node.left)
            queue.append(node.right)
    return root

# DFS
def invertTree3(self, root):
    stack = [root]
    while stack:
        node = stack.pop()
        if node:
            node.left, node.right = node.right, node.left
            stack.extend([node.right, node.left])
    return root
2
Q

Inorder traversal of a binary tree

A

DFS
left, root, right

def inorder(self, node):
    if node:
        self.inorder(node.left)
        print(node.value, end=" ")
        self.inorder(node.right)

def inorder_iterative(self, node):
    stack = []
    current = node
    while True:
        if current:
            stack.append(current)
            current = current.left   # go as far left as possible
        elif stack:
            current = stack.pop()
            print(current.value, end=" ")
            current = current.right  # then visit the right subtree
        else:
            break
3
Q

Preorder traversal of a binary tree

A

DFS
root, left, right

def preorder(self, node):
    if node:
        print(node.value, end=" ")
        self.preorder(node.left)
        self.preorder(node.right)

def preorder_iterative(self, node):
    stack = [node]
    while stack:
        current = stack.pop()
        if current:
            print(current.value, end=" ")
            stack.append(current.right)  # push right first so left is popped first
            stack.append(current.left)
4
Q

Postorder traversal of a binary tree

A

DFS
left, right, root

def postorder(self, node):
    if node:
        self.postorder(node.left)
        self.postorder(node.right)
        print(node.value, end=" ")

def postorder_iterative(self, node):
    if not node:
        return
    s1 = [node]
    s2 = []
    while s1:
        current = s1.pop()
        s2.append(current)  # s2 collects nodes in root-right-left order
        if current.left:
            s1.append(current.left)
        if current.right:
            s1.append(current.right)
    while s2:  # popping s2 reverses that into left-right-root
        print(s2.pop().value, end=" ")
5
Q

What are collisions in a hash map

A

Two unequal keys can produce the same hash code — in Java, the equals/hashCode contract explicitly allows unequal objects to share a hash code.

Collisions also arise from the finite size of the underlying array: before resizing, the smaller this array, the higher the chance that two hashes map to the same bucket.

It is the hash value of the key that determines which bucket an entry is stored in. So if the hash codes of two keys collide, both entries are stored in the same bucket.

By default, Java's HashMap resolves collisions with a linked list as the bucket implementation (since Java 8, a bucket converts to a balanced tree once it grows past a threshold).

The normally constant-time O(1) put and get operations degrade to linear time O(n) under collisions: after the final hash value selects the bucket, each key stored at that location is compared with the provided key using equals until a match is found.
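The same separate-chaining idea can be sketched in Python (a toy illustration of the mechanism, not Java's actual HashMap — class and method names here are invented):

```python
class ChainedHashMap:
    """Toy hash map using separate chaining: each bucket is a list of (key, value)."""

    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # empty bucket or collision: append

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:              # linear scan within the bucket: the O(n) case
            if k == key:
                return v
        return None
```

With capacity=1 every key collides into a single bucket, which simulates the worst case where get degrades to a linear scan.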

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

level order traversal of a binary tree

A

BFS

  1. use a queue (FIFO) — here a plain list, popping index 0
  2. append the root node to the queue
  3. while the queue is non-empty, pop the front, print its value, and enqueue its children left to right
  4. pop(0) pops the first index instead of the last; note list.pop(0) is O(n), so collections.deque is the usual O(1) alternative

def tree_level(self, node):
    queue = [node]
    while queue:
        node = queue.pop(0)  # dequeue from the front
        if node:
            print(node.value)
            queue.append(node.left)
            queue.append(node.right)
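A deque-based variant avoids the O(n) cost of pop(0) on a list; a minimal sketch (the Node class here is an assumed stand-in for the tree node used elsewhere in these cards):

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def tree_level_order(root):
    """Return node values in level order using a FIFO queue with O(1) dequeue."""
    output = []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node:
            output.append(node.value)
            queue.append(node.left)
            queue.append(node.right)
    return output
```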
7
Q

bubble_sort

A
# O(n^2) time, constant space
# https://www.youtube.com/watch?v=xli_FI7CuzA
def bubble_sort(self, arr):
    n = len(arr)
    for i in range(n):
        for j in range(n - i - 1):  # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
8
Q

insertion sort

A

O(n^2) time, constant space

def insertion_sort(self, arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:  # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
9
Q

selection sort

A

O(n^2) time, constant space

def selection_sort(self, arr):
    n = len(arr)
    for i in range(n):
        min_index = i
        for j in range(i + 1, n):  # find the smallest remaining element
            if arr[min_index] > arr[j]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
10
Q

merge sort

A

time: O(n log n)
space: O(n). With 16 elements the splits go 16 -> 8 -> 4 -> 2 -> 1, but the recursion finishes one branch before starting the next, so at most O(n) auxiliary space is live at any moment; it would be O(n log n) only if every branch were evaluated at the same time. https://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity

def merge_sort(self, arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left = arr[:mid]
        right = arr[mid:]
        self.merge_sort(left)
        self.merge_sort(right)
        i = j = k = 0
        # merge the two sorted halves back into arr
        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                arr[k] = left[i]
                i += 1
            else:
                arr[k] = right[j]
                j += 1
            k += 1
        while i < len(left):   # copy any leftovers
            arr[k] = left[i]
            k += 1
            i += 1
        while j < len(right):
            arr[k] = right[j]
            j += 1
            k += 1
    return arr
11
Q

heap sort

A

time: O(n log n) in all cases — building the heap is O(n), and each of the n extractions costs O(log n)
space: constant, since we only rearrange the initial list (the heap) in place

def heap_sort(self, arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):  # build a max-heap bottom-up
        self.__heapify(arr, n, i)
    for i in range(n - 1, 0, -1):
        arr[i], arr[0] = arr[0], arr[i]  # move the current max to the end
        self.__heapify(arr, i, 0)
    return arr

def __heapify(self, arr, n, i):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[largest] < arr[left]:
        largest = left
    if right < n and arr[largest] < arr[right]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        self.__heapify(arr, n, largest)
12
Q

quick sort

A

time: O n*log(n)
space = log(n)

time: O(n log n) on average, O(n^2) worst case (e.g. already-sorted input with this last-element pivot)
space: O(log n) for the recursion stack

def quick_sort(self, arr, low, high):
    if low < high:
        pi = self.__partition(arr, low, high)
        self.quick_sort(arr, low, pi - 1)
        self.quick_sort(arr, pi + 1, high)
    return arr

def __partition(self, arr, low, high):
    pivot = arr[high]  # Lomuto scheme: last element as pivot
    for j in range(low, high):
        if arr[j] <= pivot:
            arr[low], arr[j] = arr[j], arr[low]
            low += 1
    arr[low], arr[high] = arr[high], arr[low]
    return low
13
Q

reverse a linked list

A

def reverse_iterative(self):
    current = self.head
    prev = None
    while current:
        nxt = current.next
        current.next = prev  # flip the pointer
        prev = current
        current = nxt
    self.head = prev
14
Q

add, insert after a node, delete, find a node in a linked list

A

class Node(object):
    def __init__(self, val, next=None) -> None:
        self.val = val
        self.next = next

def find(target, node):
    if not node:
        return False
    return node.val == target or find(target, node.next)

def insert_after(after, new_val, node):
    if not node:
        return
    if node.val == after:
        node.next = Node(new_val, node.next)  # splice the new node in
    else:
        insert_after(after, new_val, node.next)

class LL:
    def __init__(self, node):
        self.head = node

need a reference to head to delete a node, since we could delete the first node!

    def delete_node(self, target):
        current = self.head
        if not current:
            return None
        if current.val == target:  # deleting the head
            self.head = current.next
            return None
        return self.delete_helper(target, current.next, current)

    def delete_helper(self, target, current, prev):
        if not current:
            return None
        if current.val == target:
            prev.next = current.next  # unlink current
            return True
        return self.delete_helper(target, current.next, current)
15
Q

add, insert after a node, delete, find a node in a doubly-linked list

A
  1. make sure to have helper (sentinel) head/tail nodes; it becomes very difficult without them
  2. don't forget that indexing for get/delete differs from adding: with 5 elements you can still add at index 5 — it just goes on the end

class ListNode:
    def __init__(self, x, next=None, prev=None):
        self.val = x
        self.next = next
        self.prev = prev

class MyLinkedList:
    def __init__(self):
        self.size = 0
        self.head, self.tail = ListNode(0), ListNode(0)  # sentinels
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, index: int) -> int:
        if index > self.size - 1:
            return -1
        if index > self.size // 2:  # closer to the tail: walk backwards
            cur = self.tail
            for _ in range(self.size - index):
                cur = cur.prev
            return cur.val
        else:
            cur = self.head.next
            for _ in range(index):
                cur = cur.next
            return cur.val

    def addAtHead(self, val: int) -> None:
        self.addAtIndex(0, val)

    def addAtTail(self, val: int) -> None:
        self.addAtIndex(self.size, val)

    def addAtIndex(self, index: int, val: int) -> None:
        if index > self.size:
            return
        if index > self.size // 2:
            cur = self.tail
            for _ in range(self.size - index):
                cur = cur.prev
        else:
            cur = self.head.next
            for _ in range(index):
                cur = cur.next
        prev = cur.prev          # splice the new node in just before cur
        new = ListNode(val)
        prev.next = new
        new.prev = prev
        new.next = cur
        cur.prev = new
        self.size += 1

    def deleteAtIndex(self, index: int) -> None:
        if index > self.size - 1:
            return
        if index > self.size // 2:
            cur = self.tail
            for _ in range(self.size - index):
                cur = cur.prev
        else:
            cur = self.head.next
            for _ in range(index):
                cur = cur.next
        cur.prev.next = cur.next  # unlink cur
        cur.next.prev = cur.prev
        self.size -= 1
16
Q

add, insert after a node, delete, find a node in a circular-linked list

A

use a sentinel head node whose next points at the first real node — a 0-value helper node, like in the other list problems, makes the edge cases much easier

instead of `while cur.next`, loop with `while cur.next != head` to know when you've come full circle

insert/add indexes can equal the length of the list: with a list of length 3, inserting at index 3 is fine because it just appends to the end.
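The tips above can be sketched as a minimal circular singly linked list with a sentinel (class and method names here are illustrative, not from any particular problem):

```python
class Node:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

class CircularList:
    """Singly linked circular list with a 0-value sentinel head node."""

    def __init__(self):
        self.head = Node(0)           # sentinel; real data starts at head.next
        self.head.next = self.head    # an empty list loops back to the sentinel

    def append(self, val):
        cur = self.head
        while cur.next != self.head:  # stop just before we come full circle
            cur = cur.next
        cur.next = Node(val, self.head)

    def to_list(self):
        out, cur = [], self.head.next
        while cur != self.head:       # cur == head means we made a full loop
            out.append(cur.val)
            cur = cur.next
        return out
```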

17
Q

subset, subarray, subsequence

A

base: [1, 2, 3, 4]

A subarray is a CONTIGUOUS part of an array — an array inside another array. For [1, 2, 3, 4] there are 10 non-empty subarrays: (1), (2), (3), (4), (1,2), (2,3), (3,4), (1,2,3), (2,3,4) and (1,2,3,4). In general, an array/string of size n has n*(n+1)/2 non-empty subarrays/substrings.

A substring is a subarray of a string, also CONTIGUOUS.
s="apple"
substrings include: appl, ple, pple, a, le

A subsequence is a sequence derived from another by deleting some (or no) elements without changing the order of the remaining elements. It does NOT need to be contiguous, but the elements must keep their relative order. For the same example there are 15 non-empty subsequences: (1), (2), (3), (4), (1,2), (1,3), (1,4), (2,3), (2,4), (3,4), (1,2,3), (1,2,4), (1,3,4), (2,3,4), (1,2,3,4).

a subset is any combination of elements of the array, in any order: [3,1] and [3,1,2] are subsets of the base array
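The n*(n+1)/2 subarray count can be checked directly with a short enumeration (sketch; the function name is my own):

```python
def all_subarrays(arr):
    """Enumerate every non-empty contiguous subarray of arr."""
    return [arr[i:j]
            for i in range(len(arr))            # each start index
            for j in range(i + 1, len(arr) + 1)]  # each end index past the start
```

For a 4-element array this yields 4*5/2 = 10 subarrays, matching the formula.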

18
Q

Does this successfully loop through the list without throwing out of index errors?

l = 0
li = [1, 2, 3]
while l < len(li): ...

A

yes — if l starts at 0 and you loop while l < len(li), incrementing l each pass, the condition fails once l == len(li), so the index never goes out of range

19
Q

iterate backwards through a list

A

for i in range(len(list)-1, -1, -1)

if the list is [1,2,3,4], len(list)-1 = 3, so we start at index 3, which is the end.
we then use the third parameter, step, set to -1 to move backwards.
the stop value -1 is exclusive (like the end of any range()), so the loop's final index is 0 — the front of the list.
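Both the index form and Python's built-in reversed() walk the list back to front; a quick sketch:

```python
items = [1, 2, 3, 4]

# index-based: i takes the values 3, 2, 1, 0
backwards = [items[i] for i in range(len(items) - 1, -1, -1)]

# equivalent, without the index arithmetic
also_backwards = list(reversed(items))
```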

20
Q

return K largest values in an unsorted list

A
  1. build a heap
  2. pop k times off the heap, return those values

Pushing onto a heap of k elements costs O(log k) — the heap's height. Scanning all n elements of the array while keeping the heap capped at size k therefore takes O(n log k) total to find the k largest values.

def kthLargest(nums, k):
    import heapq
    return heapq.nlargest(k, nums)[-1]

heapq.nlargest returns the k largest elements in descending order; the last of them is the kth largest (drop the [-1] to return all k).
time is above; space is O(k) since the heap only ever holds k elements

TODO: cover quickselect too (average O(n))
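The fixed-size-heap argument can be sketched directly — a hand-rolled version of what heapq.nlargest does under the hood (function name is my own):

```python
import heapq

def k_largest(nums, k):
    """Keep a min-heap of at most k elements; it ends up holding the k largest."""
    heap = []
    for x in nums:
        heapq.heappush(heap, x)      # O(log k)
        if len(heap) > k:
            heapq.heappop(heap)      # evict the smallest of the k+1
    return sorted(heap, reverse=True)  # largest first
```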

21
Q

heap as an array

A

index 0 is the root
left child = 2n+1
right child = 2n+2, where n is the index in the array (and parent = (n-1)//2)

[0, 3, 6, 5, 9, 7, 8]

        0
      /   \
     3     6
    / \   / \
   5   9 7   8
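The index arithmetic can be sketched and checked against the example array (helper names are my own):

```python
def children(i):
    """Array indices of the left and right child of index i."""
    return 2 * i + 1, 2 * i + 2

def parent(i):
    """Inverse of the child formulas."""
    return (i - 1) // 2

def is_min_heap(arr):
    """Every parent must be <= each child that exists in the array."""
    for i in range(len(arr)):
        for c in children(i):
            if c < len(arr) and arr[i] > arr[c]:
                return False
    return True
```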
22
Q

what is a binary tree, binary search tree, n-ary tree

A

a binary tree is a tree in which every node has at most 2 children

a binary search tree is a binary tree in which, for every node, all values in the left subtree are less than the node and all values in the right subtree are greater. For equal values you can add a count field to the node, or allow duplicates on one side (e.g. less-than-or-equal on the left)

an n-ary tree is a tree in which each node can have up to n children; nothing is implied about ordering

23
Q

preorder traversal of n-ary tree

A

Time and space are O(N), because of the explicit stack or the recursion stack

class Solution:
    def preorder(self, root):
        return self.helper(root, [])

    def helper(self, root, output):
        if not root:
            return output
        output.append(root.val)
        for x in root.children:
            self.helper(x, output)
        return output

    def iterative(self, root):
        if not root:
            return []
        output = []
        stack = [root]
        while stack:
            current = stack.pop()
            output.append(current.val)
            for i in range(len(current.children) - 1, -1, -1):
                stack.append(current.children[i])  # push right-to-left
        return output
public static List<TreeNode> preorderIterative(TreeNode node){
    List<TreeNode> nodes = new ArrayList<>();
    Stack<TreeNode> stack = new Stack<>();
    stack.push(node);
    while (!stack.isEmpty()){
        TreeNode n = stack.pop();
        nodes.add(n);
        for (int i = n.children.size() - 1; i > -1; i--){
            stack.push(n.children.get(i)); // push right-to-left
        }
    }
    return nodes;
}

public static List<TreeNode> preorder(TreeNode node){
    List<TreeNode> nodes = new ArrayList<>();
    preorderHelper(node, nodes);
    return nodes;
}

private static void preorderHelper(TreeNode node, List<TreeNode> nodes){
    nodes.add(node);
    for (TreeNode tn : node.children){
        preorderHelper(tn, nodes);
    }
}
24
Q

postorder traversal of an n-ary tree

A

class Solution:

class Solution:
    def postorder(self, root):
        return self.helper(root, [])

    def helper(self, root, output):
        if not root:
            return output
        for x in root.children:
            self.helper(x, output)
        output.append(root.val)
        return output

    def iterative(self, root):
        if not root:
            return []
        output = []
        stack = [root]
        while stack:
            current = stack.pop()
            output.insert(0, current.val)  # prepend, so the root ends up last
            for x in current.children:
                stack.append(x)
        return output
25
Q

inorder traversal of an n-ary tree

A
def inorder(self, node):
    if not node:
        return None
    children = len(node.children)
    for i in range(children - 1):   # visit every child but the last
        self.inorder(node.children[i])
    print(node.value)               # then the node itself
    if children:                    # then the last child, if any
        self.inorder(node.children[-1])

26
Q

stack and heap memory

A

stack memory is allocated for function calls and the local variables inside them; recursive calls occupy stack memory. stack memory is accessed LIFO

heap memory (dynamic memory allocation) stores objects and other dynamically created data. heap memory can be accessed randomly, and objects can live at arbitrary locations relative to each other

accessing stack memory is typically faster than heap memory because of its simple structure and LIFO access

27
Q

what is recursion “tree depth” and “tree breadth”

A

tree depth is the depth of the recursion tree, and it determines the space the algorithm uses: the longest chain of calls from the initial call down to a base case is the maximum recursion-stack depth, hence the space.

tree breadth is the total number of recursive calls made; this determines the time complexity.

28
Q

What is topological sort? Can you do it for an undirected graph? can you perform a topological sort for a cyclic graph?

A

Topological sort does not work for undirected graphs or for directed cyclic graphs. An undirected graph has no edge directions, so there is no ordering to recover; and a standard topological sort returns nothing (None here) when it detects a cycle, so cyclic graphs fail by design.

Topological sorting for Directed Acyclic Graph (DAG) is a linear ordering of vertices such that for every directed edge u v, vertex u comes before v in the ordering. Topological Sorting for a graph is not possible if the graph is not a DAG.

For example, a topological sorting of the following graph is “5 4 2 3 1 0”. There can be more than one topological sorting for a graph. For example, another topological sorting of the following graph is “4 5 2 3 1 0”. The first vertex in topological sorting is always a vertex with in-degree as 0 (a vertex with no incoming edges).

graph = {
    5: [0, 2],
    4: [0, 1],
    3: [1],
    2: [3],
    1: [],
    0: [],
}

from collections import defaultdict

class TopologicalSort(object):
    def answer(self, graph):
        visited = defaultdict(lambda: False)
        on_stack = defaultdict(lambda: False)  # vertices on the current DFS path
        stack = []
        for vertex in graph.keys():
            if not visited[vertex]:
                if self.answer_helper(vertex, graph, visited, on_stack, stack):
                    return None  # cycle detected
        return stack[::-1]  # reverse the DFS postorder to get the topological order

    def answer_helper(self, vertex, graph, visited, on_stack, stack):
        visited[vertex] = True
        on_stack[vertex] = True
        for vertex2 in graph[vertex]:
            if not visited[vertex2]:
                if self.answer_helper(vertex2, graph, visited, on_stack, stack):
                    return True
            elif on_stack[vertex2]:
                return True  # back edge: cycle
        on_stack[vertex] = False
        stack.append(vertex)
        return False
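An alternative iterative formulation is Kahn's algorithm, which repeatedly removes vertices of in-degree 0; a sketch (function name is my own):

```python
from collections import deque

def kahn_topological_sort(graph):
    """Return a topological order of a dict {vertex: [neighbors]}, or None on a cycle."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)  # start with no-incoming-edge vertices
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    # if a cycle exists, some vertices never reach in-degree 0
    return order if len(order) == len(graph) else None
```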
29
Q

Binary search

if you want to find the index of the largest number that is less than your search, how do you modify it?

If you want to find the index of the first number that is greater than your search, how do you modify it?

A

Returning left on a failed find gives you the index of the first number greater than your search; this can be as large as len(array) if nothing is greater.

Returning right on a failed find gives you the index of the largest number that is less than your search; this can be -1 if nothing is smaller, so check both edge cases.

a=[1,3,4,6]
right:
inputs -5, 2, 5, 17 return -1, 0, 2, 3
left:
inputs -5, 2, 5, 17 return 0, 1, 3, 4

left is always right + 1 on a failed find, so you can use either and adjust by one.

class Solution:

class Solution:
    def binary_search(self, nums, left, right, target):
        if left <= right:
            mid = (right + left) // 2
            if target > nums[mid]:
                return self.binary_search(nums, mid + 1, right, target)
            elif target < nums[mid]:
                return self.binary_search(nums, left, mid - 1, target)
            else:
                return mid
        return left  # failed find: index of the first element greater than target

    def binary_search_iterative(self, nums, target):
        left = 0
        right = len(nums) - 1
        # on a failed find, left is the index of the first value greater than
        # target: with [2,5,8,12,19] and target 9 it points at 12. It can be
        # len(nums). Returning right instead gives the last value less than
        # target, which can be -1.
        while left <= right:
            mid = (right + left) // 2
            if target > nums[mid]:
                left = mid + 1
            elif target < nums[mid]:
                right = mid - 1
            else:
                return mid
        return left
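Python's bisect module gives the same two failure behaviors directly; a sketch of the correspondence (wrapper names are my own):

```python
import bisect

def first_greater_index(a, target):
    """Index of the first element > target; may be len(a). Same as 'return left'."""
    return bisect.bisect_right(a, target)

def last_smaller_index(a, target):
    """Index of the last element < target; may be -1. Same as 'return right'."""
    return bisect.bisect_left(a, target) - 1

a = [1, 3, 4, 6]
first_greater_index(a, 5)   # 3 (element 6)
last_smaller_index(a, 2)    # 0 (element 1)
```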
30
Q

What is a bottom-up solution, top-down in dynamic programming and recursion?

A

a bottom-up solution in dynamic programming solves the base cases first and builds up from them to the larger answers. The coin change problem is typically solved bottom-up

a top-down solution starts from the target case and recursively breaks it into the smaller subproblems that feed it (usually with memoization). Integer break and most plain recursion problems are written top-down
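The bottom-up style can be sketched with the coin change problem mentioned above (a standard DP, written here from scratch):

```python
def coin_change(coins, amount):
    """Bottom-up DP: dp[a] = fewest coins summing to a, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount          # base case: amount 0 needs 0 coins
    for a in range(1, amount + 1):     # solve small amounts before large ones
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```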

31
Q

return all possible subsequences of a string or list

A

from itertools import combinations

def all_subsequences(self, s):
    out = set()
    for r in range(1, len(s) + 1):
        for c in combinations(s, r):  # combinations preserve relative order
            out.add(''.join(c))
    return out

def printSubsequence(input, output="", response=None):
    if response is None:  # avoid a shared mutable default argument
        response = set()
    # base case: input exhausted, record the built-up output string
    if len(input) == 0:
        if output:
            response.add(output)
        return
    # branch 1: include the first character of input
    printSubsequence(input[1:], output + input[0], response)
    # branch 2: exclude the first character of input
    printSubsequence(input[1:], output, response)
    return sorted(response)
32
Q

explain how memoizing reduces time complexity

standard fib is O(2^n); memoized it is reduced to O(n)

A

Back to the Fibonacci example. With memoization, we save the result of the Fibonacci number for each index n, so the calculation for each Fibonacci number occurs only once. From the recurrence relation, f(n) depends on all n-1 preceding Fibonacci numbers, so the recursion to calculate f(n) is invoked n-1 times to compute the predecessors it depends on.

Applying the usual formula (cost per call times number of calls): O(1) * n = O(n). Memoization not only optimizes the time complexity of the algorithm, it also simplifies the complexity analysis itself.
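The argument can be seen concretely with a memoized Fibonacci, here using the standard library's cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each fib(n) is computed once and cached, so the whole call tree is O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the decorator, fib(30) makes over a million calls; with it, at most 31 distinct computations.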