InterviewCake Flashcards
(18 cards)
Greedy algorithms
A greedy algorithm builds up a solution by choosing the option that looks best at every step. Sometimes a greedy algorithm doesn’t give you an optimal solution! (e.g. making change by always taking the biggest coin first…)
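The making-change example can be sketched in Python (the denominations and amount below are made-up values chosen to show where greedy falls short):

```python
def greedy_change(amount, coins):
    """Greedy: always take the biggest coin that still fits."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

# With coins [1, 3, 4] and amount 6, greedy picks 4 + 1 + 1 (3 coins),
# but the optimal answer is 3 + 3 (2 coins).
print(greedy_change(6, [1, 3, 4]))  # [4, 1, 1]
```

With "nice" denominations like 25/10/5/1 the greedy choice happens to be optimal, which is why the pitfall is easy to miss.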
Bottom-Up Algorithms
Avoid recursion (top-down), saving the memory cost that recursion incurs as it builds up the call stack, as well as the risk of a stack overflow error. (e.g. computing the product from 1 to n)
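The product-from-1-to-n example, sketched bottom-up in Python:

```python
def product_1_to_n(n):
    # Bottom-up: a simple loop instead of recursion, so no call stack
    # builds up and there is no stack-overflow risk for large n.
    result = 1
    for num in range(1, n + 1):
        result *= num
    return result

print(product_1_to_n(5))  # 120
```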
Overlapping Subproblems and Memoization
A problem has overlapping subproblems if finding its solution involves solving the same subproblem multiple times. (eg Fibonacci) Memoization ensures that a function doesn’t run for the same inputs more than once by keeping a record of the results for the given inputs (usually in a dictionary).
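The Fibonacci example with a memo dictionary, sketched in Python (class name is my own):

```python
class Fibber(object):
    def __init__(self):
        self.memo = {}  # record of results for inputs already seen

    def fib(self, n):
        if n in self.memo:
            return self.memo[n]  # skip the repeated subproblem
        if n in (0, 1):
            return n
        result = self.fib(n - 1) + self.fib(n - 2)
        self.memo[n] = result
        return result
```

Without the memo, the same subproblems (fib(n-2), fib(n-3), …) are recomputed exponentially many times; with it, each input is computed once.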
Short Circuit Evaluation
Stops evaluating a logical expression as soon as the overall result is known, so part of the condition is never evaluated. We can use this to our advantage to avoid unnecessary work or guard against errors.
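A small Python sketch of using short circuiting as a guard (the function is an illustrative example, not from the card):

```python
def starts_with_vowel(word):
    # If word is empty, `len(word) > 0` is False, so Python never
    # evaluates `word[0]` -- short circuiting prevents an IndexError.
    return len(word) > 0 and word[0].lower() in 'aeiou'

print(starts_with_vowel(''))       # False, and no crash
print(starts_with_vowel('apple'))  # True
```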
Garbage Collection
A garbage collector automatically frees up memory that a program isn’t using anymore. Main approaches: tracing garbage collection vs. reference counting vs. manual memory management.
Closures
A closure is a function that accesses a variable “outside” its own scope (common in JavaScript…)
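A minimal closure sketch in Python (the counter example is my own, not from the card):

```python
def make_counter():
    count = 0  # lives "outside" the inner function

    def counter():
        nonlocal count  # needed in Python to reassign the outer variable
        count += 1
        return count

    return counter

tick = make_counter()
print(tick())  # 1
print(tick())  # 2 -- the closure remembers count between calls
```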
Array Slicing
Careful: slicing allocates a new list and copies the elements from the original list into it, costing O(n) time and O(n) space, where n is the number of elements in the resulting slice. This happens even if you never assign the slice to a variable.
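A quick Python sketch of the hidden cost:

```python
my_list = [1, 2, 3, 4, 5]

# Slicing builds a brand-new list and copies the elements: O(n).
tail = my_list[1:]  # tail == [2, 3, 4, 5], a separate list object

# The copy still happens even when the slice isn't saved anywhere:
for item in my_list[1:]:  # allocates and copies a 4-element list first
    pass
```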
In-Place Algorithm
Operates directly on its input and changes it (destructive). Working in-place is a good way to save space. An in-place algorithm will generally have O(1) space cost.
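A common in-place illustration (reversing a list; the example is my own, not from the card):

```python
def reverse_in_place(items):
    # Swap elements from the outside in. The input list is mutated
    # (destructive), and only two index variables are used: O(1) space.
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1

nums = [1, 2, 3, 4]
reverse_in_place(nums)
print(nums)  # [4, 3, 2, 1]
```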
Arrays - space, lookup, append, insert, delete
O(n), O(1), O(1), O(n), O(n). Fixed size, one after another in memory.
Dynamic Array - space, lookup, append, insert, delete
O(n), O(1), O(1) - amortized, O(n), O(n)
Hash Tables - space, lookup, append, insert, delete
O(n), O(1) - amortized, O(1) - amortized, O(1) - amortized. Flexible keys (any hashable datatype) and fast lookups, but: unordered; single-directional lookups (O(n) to get the key of a value); not cache-friendly (often built on linked lists, so data isn’t next to each other in memory).
Linked List - space, prepend, append, lookup, insert, delete
O(n), O(1), O(1), O(n), O(n), O(n)
An item in a linked list is called a node. The first node is called the head. The last node is called the tail.
Fast operations on the ends and flexible size, but costly lookups (often used for stacks and queues)
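Following the deck’s BinaryTreeNode style, a minimal node class (names are my own, not from the card):

```python
class LinkedListNode(object):
    def __init__(self, value):
        self.value = value
        self.next = None  # pointer to the next node, or None at the tail

# Build a tiny list: head -> 'a' -> 'b' (tail)
head = LinkedListNode('a')
head.next = LinkedListNode('b')
```

Prepending is O(1) (point a new node at the current head); looking up the i-th item means walking the pointers from the head, hence O(n).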
Queue - space, enqueue, dequeue, peek
O(n), O(1), O(1), O(1)
A queue stores items in a first-in, first-out (FIFO) order. Fast operations (BFS, processes…)
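A sketch using Python’s collections.deque, which gives O(1) operations on both ends:

```python
from collections import deque

queue = deque()
queue.append('first')    # enqueue: O(1)
queue.append('second')

item = queue.popleft()   # dequeue: O(1), FIFO order
print(item)  # 'first'
```

(A plain Python list works too, but `list.pop(0)` is O(n) because every remaining element shifts over.)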
Stack - space, push, pop, peek
O(n), O(1), O(1), O(1)
A stack stores items in a last-in, first-out (LIFO) order. Fast operations (DFS…)
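A plain Python list already behaves as a stack when you only touch the end:

```python
stack = []
stack.append('first')   # push: O(1)
stack.append('second')

top = stack.pop()       # pop: O(1), LIFO order
print(top)  # 'second'
```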
Binary Tree
class BinaryTreeNode(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
perfect: every level of the tree is completely full (about half of our nodes are on the last level)
n = 2^h - 1
h = log2(n + 1)
(where h is the number of levels; levels start at 0 with 2^0 = 1 node…)
Bitwise AND, OR, XOR, NOT, bitshifting
5 & 6 → 4
5 | 6 → 7
5 ^ 6 → 3
~5 → -6
0010 << 1 → 0100
1011010 >> 3 → 0001011
Shifting left multiplies by 2. Shifting right divides by 2, throwing out any remainders.
Two’s complement encoding
The leftmost bit has a negative weight, and the rest of the bits have positive weights. ex: 101 = -4 + 0 + 1 = -3
Binary heap - space, get min, insert, remove min
O(n), O(1), O(lg(n)), O(lg(n))
A binary heap is a binary tree where the nodes are organized so that the smallest value is always at the top (in a min-heap). Quickly access the smallest item; space efficient. One way to efficiently sort items is to insert them all into a heap and then remove them one at a time.
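A sketch with Python’s heapq module (a min-heap stored in a plain list), including the sort-via-heap idea from the card:

```python
import heapq

heap = []
for value in [5, 2, 8, 1]:
    heapq.heappush(heap, value)  # insert: O(lg n)

print(heap[0])  # 1 -- the smallest item is always at the top: O(1)

# Insert everything, then remove min repeatedly -> sorted order.
sorted_values = [heapq.heappop(heap) for _ in range(len(heap))]
print(sorted_values)  # [1, 2, 5, 8]
```

Each of the n pops costs O(lg n), so sorting this way is O(n lg n) overall.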