Week 1 Flashcards

(89 cards)

1
Q

What is an algorithm?

A

A step-by-step procedure for solving a problem, which can be implemented as a computer program.

2
Q

What is a procedure?

A

A method for solving the problem.

3
Q

Is a programming language a type of procedure?

A

No. A programming language is a way of expressing a procedure, not a procedure itself.

4
Q

Why do we analyze algorithms?

A

To predict performance, provide guarantees, and understand the theoretical basis of how algorithms behave.

5
Q

What are the limitations of run-time analysis?

A
  • Implementing the algorithm may be difficult.
  • Results do not cover all possible inputs.
  • Results are only valid on the same hardware and software environment.

6
Q

How much time do primitive operations take?

A

Constant time.

7
Q

If we perform a primitive operation N times, what is the time taken?

A

N, where N is the number of repetitions.

8
Q

Define O(1), O(log N), O(N), O(N log N)

A

O(1) = constant time, regardless of input size.
O(log N) = logarithmic time, proportional to the logarithm of the input size.
O(N) = linear time, directly proportional to the input size.
O(N log N) = log-linear time, proportional to the input size times the logarithm of the input size.

9
Q

Why is Big O notation desirable?

A

It abstracts away implementation details: we expect an algorithm's running time to grow in proportion to its Big O bound on any hardware or OS.

10
Q

What is the order of O(N) + O(N^2)?

A

O(N^2), since the fastest-growing term dominates.

11
Q

How can we find the value of n from which a Big-Oh bound holds? Use 11n + 5 as an example.

A

Use the definition: f(n) = O(g(n)) if there are constants c and n0 such that f(n) <= c * g(n) for all n >= n0. For 11n + 5 = O(n), choose c = 12: then 12n >= 11n + 5 holds whenever n >= 5, so n0 = 5.

12
Q

What do Big Ω, Θ, and O describe?

A

Omega: The algorithm will take at least this time.
O: The algorithm will never take more than this time.
Theta: The algorithm takes exactly this time (within constants)

13
Q

Summarize the difference between Big Θ, Big O, and Big Ω using N^2 time.

A

Θ(N^2): takes exactly N^2 (within constants)
O(N^2): takes at most N^2
Ω(N^2): takes at least N^2

14
Q

Briefly explain selection sort.

A

Iterate through the list from index i to find the smallest entry and swap it with the entry at i. Then increment i and repeat until the end of the list.
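
For concreteness, a minimal Java sketch of this idea (the method and array names are illustrative, not from the cards):

    // Selection sort: find the smallest remaining entry and swap it into position i.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;               // smallest entry from i onwards
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;  // swap it with a[i]
        }
    }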

15
Q

List the sorting algorithms that are in-place.

A

Selection Sort
Insertion Sort
Bubble Sort
Shell Sort
Quick Sort

16
Q

List sorting algorithms that are non-recursive.

A

Selection Sort
Bubble Sort
Insertion Sort
Shell Sort
Counting Sort
Radix Sort

17
Q

List sorting algorithms that utilize comparisons.

A

Selection Sort
Insertion Sort
Bubble Sort
Shell Sort
Merge Sort
Quick Sort

18
Q

List sorting algorithms that are internal.

A

Selection Sort
Insertion Sort.
Bubble Sort
Shell Sort
Quick Sort

19
Q

List the sorting algorithms that are external.

A

Merge Sort

20
Q

List sorting algorithms that are unstable.

A

Selection Sort
Shell Sort
Quick Sort

21
Q

Explain stability in relation to sorting algorithms.

A

Stable sorting algorithms preserve the existing relative order of elements with equal keys.

22
Q

Explain internal vs external in relation to sorting algorithms.

A
  • An internal sort is one where the items being sorted can be kept in main memory / RAM.
  • An external sort is one where the items being sorted need to use external memory.

23
Q

Explain insertion sort.

A

In iteration i, swap a[i] with each larger entry to its left. Repeat for every i until each element has been inserted in its correct place.
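
A minimal Java sketch of the same idea (names are illustrative):

    // Insertion sort: swap a[i] left past every larger entry.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++)
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            }
    }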

24
Q

List sorting algorithms that are stable.

A

Insertion Sort
Bubble Sort
Merge Sort
Counting Sort
Radix Sort

25
Q

List the best case of all the sorting algorithms.

A

Selection Sort: O(N^2)
Insertion Sort: O(N)
Bubble Sort: O(N)
Shell Sort (3x+1): O(N log N)
Merge Sort: ~1/2 N lg N
Quick Sort: ~N lg N
Counting Sort: O(n + k)
Radix Sort: O(nd)
(n = number of elements, k = range of element values, d = number of digits of the largest number)

26
Q

List the worst case of all the sorting algorithms.

A

Selection Sort: O(N^2)
Insertion Sort: O(N^2)
Bubble Sort: O(N^2)
Shell Sort (3x+1): O(N^(3/2))
Merge Sort: N lg N
Quick Sort: ~1/2 N^2
Counting Sort: O(n + k)
Radix Sort: O(nd)
(n = number of elements, k = range of element values, d = number of digits of the largest number)

27
Q

List the average case of all the sorting algorithms.

A

Selection Sort: O(N^2)
Insertion Sort: O(N^2)
Bubble Sort: O(N^2)
Shell Sort (3x+1): O(N^(3/2))
Merge Sort: N lg N
Quick Sort: ~2 N ln N
Counting Sort: O(n + k)
Radix Sort: O(nd)
(n = number of elements, k = range of element values, d = number of digits of the largest number)

28
Q

Explain shell sort.

A

Compare elements a fixed gap apart and swap them if they are out of order, sweeping this comparison through the list. Then shrink the gap and repeat. Once the gap reaches 1, a standard insertion sort pass finishes the job.

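A minimal Java sketch using the 3x+1 gap sequence from the complexity cards (names are illustrative):

    // Shell sort: gapped insertion passes with shrinking gaps ..., 13, 4, 1.
    static void shellSort(int[] a) {
        int n = a.length, h = 1;
        while (h < n / 3) h = 3 * h + 1;   // largest 3x+1 gap below n
        for (; h >= 1; h /= 3)             // shrink the gap each pass
            for (int i = h; i < n; i++)
                for (int j = i; j >= h && a[j] < a[j - h]; j -= h) {
                    int tmp = a[j]; a[j] = a[j - h]; a[j - h] = tmp;
                }
    }
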
29
Q

Explain merge sort.

A

Divide the array into two halves, recursively sort each half, then merge the two halves by repeatedly comparing the front elements of each half and copying the smaller into a temporary array.

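A minimal Java sketch of this divide-and-merge step; tmp is a caller-allocated scratch array of the same length as a (an assumption of this sketch, as are the names):

    // Merge sort: recursively sort halves, then merge front-to-front.
    static void mergeSort(int[] a, int[] tmp, int lo, int hi) {
        if (hi <= lo) return;
        int mid = lo + (hi - lo) / 2;
        mergeSort(a, tmp, lo, mid);
        mergeSort(a, tmp, mid + 1, hi);
        for (int k = lo; k <= hi; k++) tmp[k] = a[k];  // copy before merging
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++)                 // take the smaller front element
            if      (i > mid)         a[k] = tmp[j++];
            else if (j > hi)          a[k] = tmp[i++];
            else if (tmp[j] < tmp[i]) a[k] = tmp[j++];
            else                      a[k] = tmp[i++];
    }
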
30
Q

List all sorting algorithms that are recursive.

A

Merge Sort
Quick Sort

31
Q

List all sorting algorithms that are out-of-place.

A

Merge Sort
Counting Sort
Radix Sort

32
Q

Explain quick sort.

A

Take the first / middle element of the array as the pivot. Place all elements greater than the pivot to its right and all elements less than it to its left, then quicksort these two subarrays. In practice: the pivot is set to the first element, left-mark to the 2nd element, and right-mark to the last element. Advance left-mark until it finds an element > pivot, then decrement right-mark until it finds an element < pivot. Swap these two elements and continue with left-mark again. When left-mark and right-mark cross, swap the pivot with right-mark.

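A minimal Java sketch of the left-mark / right-mark scheme described above (names are illustrative):

    // Quick sort: partition around a[lo], then sort the two subarrays.
    static void quickSort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int pivot = a[lo], left = lo + 1, right = hi;
        while (true) {
            while (left <= right && a[left] <= pivot) left++;    // find entry > pivot
            while (right >= left && a[right] >= pivot) right--;  // find entry < pivot
            if (left > right) break;                             // marks have crossed
            int tmp = a[left]; a[left] = a[right]; a[right] = tmp;
        }
        int tmp = a[lo]; a[lo] = a[right]; a[right] = tmp;       // pivot into place
        quickSort(a, lo, right - 1);
        quickSort(a, right + 1, hi);
    }
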
33
Q

Explain key-indexed counting sort.

A

Find the max value in the array. Make a frequency array of size max + 1. Reconstruct the array starting with the smallest value, appending each value as many times as it occurs, working up to the biggest value.

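A minimal Java sketch for non-negative integer keys (names are illustrative):

    // Counting sort: tally frequencies, then rebuild from the smallest key up.
    static void countingSort(int[] a) {
        int max = 0;
        for (int x : a) max = Math.max(max, x);
        int[] count = new int[max + 1];
        for (int x : a) count[x]++;
        int k = 0;
        for (int v = 0; v <= max; v++)
            while (count[v]-- > 0) a[k++] = v;  // append v as often as it occurred
    }
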
34
Q

Explain radix sort.

A

Perform a stable sort on the units place of the numbers, then the tens, and so on until the array is sorted. The number of passes is the number of digits of the largest number.

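A minimal Java sketch of LSD radix sort on decimal digits, using a stable counting pass per digit (names are illustrative):

    static void radixSort(int[] a) {
        int n = a.length, max = 0;
        for (int x : a) max = Math.max(max, x);
        for (int exp = 1; max / exp > 0; exp *= 10) {  // units, tens, hundreds, ...
            int[] count = new int[10], out = new int[n];
            for (int x : a) count[(x / exp) % 10]++;
            for (int d = 1; d < 10; d++) count[d] += count[d - 1];  // prefix sums
            for (int i = n - 1; i >= 0; i--)           // right-to-left keeps it stable
                out[--count[(a[i] / exp) % 10]] = a[i];
            System.arraycopy(out, 0, a, 0, n);
        }
    }
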
35
Q

What is linear search?

A

A brute-force approach: look at every element in turn until you find the target.

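A minimal Java sketch (names are illustrative):

    // Linear search: scan every element in turn; works on unsorted input.
    static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++)
            if (a[i] == target) return i;
        return -1;  // not found
    }
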
36
Q

What is the advantage of linear search?

A

It can find an element in unsorted input.

37
Q

What is the worst-case time complexity of linear search?

A

O(N)

38
Q

Explain binary search.

A

Select the middle element of a sorted array. If the target equals it, return it; if the target is smaller, search the lower half of the array; if larger, search the upper half.

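A minimal Java sketch over a sorted int[] (names are illustrative):

    // Binary search: halve the interval around the middle element.
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;            // written to avoid int overflow
            if      (a[mid] == target) return mid;
            else if (target < a[mid])  hi = mid - 1; // search lower half
            else                       lo = mid + 1; // search upper half
        }
        return -1;  // not found
    }
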
39
Q

What is the worst-case time complexity of binary search?

A

O(log N)

40
Q

Explain Knuth-Morris-Pratt creation of the LPS array.

A

matching = 0; lps[0] = 0; int i = 1;
while (i < pattern length)
    if (p[i] == p[matching]) { matching++; lps[i] = matching; i++; }
    else if (matching != 0) matching = lps[matching - 1];
        // no i++ here, as we must check shorter prefix-suffixes
    else { lps[i] = 0; i++; }

41
Q

Explain the Knuth-Morris-Pratt algorithm, assuming the LPS array is already formed.

A

i is the text index, j is the pattern index.
if (text[i] == pattern[j]) increment both;
if (j == pattern.length()) the pattern is found; set j = lps[j - 1];
else on a mismatch: if (j != 0) j = lps[j - 1];
                    else i++;

42
Q

What is the time complexity of the Knuth-Morris-Pratt algorithm?

A

O(length of pattern + length of text)

43
Q

Time and space complexity of counting sort vs radix sort.

A

Time: Counting O(n + k), Radix O(d * (n + k))
Space: Counting O(n + k), Radix O(n + k)
where n = number of elements, k = range of input values (the base, e.g. 10 for decimal, in radix sort), d = number of digits in the largest number

44
Q

Explain KMP with DFA-style code, assuming the DFA is already produced.

A

j = 0;  // current state
for (i = 0; i < n; i++)
    j = dfa[text.charAt(i)][j];
    if (j == m) { pattern found at i - m + 1; j = 0; }
where n is the length of the text and m is the length of the pattern.

45
Q

Explain how to create the DFA array.

A

The table is dfa[character_input][state], with all values initialised to 0.
int x = 0;
dfa[pattern.charAt(0)][0] = 1;
for (j = 1; j < m; j++)
    copy the values of state x into state j (the mismatch cases);
    dfa[pattern.charAt(j)][j] = j + 1;  // matching input progresses to the next state
    x = dfa[pattern.charAt(j)][x];      // update the restart state

46
Q

Explain how run-length encoding reduces redundancy in a bitstream.

A

Use 4-bit counts to represent alternating runs of 0s and 1s. (The bits per count must increase if there are ever more than 15 consecutive 0s or 1s.)

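A minimal Java sketch of this scheme, assuming the bitstream arrives as a boolean[] and the 4-bit counts are collected in a list (both assumptions, not from the card):

    import java.util.List;

    // Emit alternating run lengths (max 15), starting by convention with a run of 0s.
    static void runLengthEncode(boolean[] bits, List<Integer> counts) {
        boolean current = false;  // assume the stream starts with a run of 0s
        int run = 0;
        for (boolean b : bits) {
            if (b == current && run < 15) run++;
            else if (b == current) {            // run longer than 15: split it
                counts.add(15); counts.add(0);  // 15, then a zero-length opposite run
                run = 1;
            } else {
                counts.add(run);                // run ended; count the other bit value
                current = b; run = 1;
            }
        }
        counts.add(run);
    }
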
47
Q

Explain the LZW compression algorithm.

A

Put all single ASCII characters into the dictionary, then:
String w = "";
for (char c : input)
    String wc = w + c;
    if (dictionary.contains(wc)) w = wc;
    else {
        result.add(dictionary.get(w));
        dictionary.put(wc, dictSize++);
        w = "" + c;
    }
if (!w.equals("")) result.add(dictionary.get(w));

48
Q

Explain the LZW decompression algorithm.

A

Read codes and rebuild the same dictionary the compressor built: output the string for each code, and after each output add (previous string + first character of the current string) as a new dictionary entry.

49
Q

Explain Huffman encoding.

A

Calculate the frequency of every character and make each character a leaf node. Repeatedly combine the two nodes of least frequency into a parent whose frequency is their sum, until a single tree remains. Going left in the tree is a 0, going right is a 1. Reading a character's path from its leaf up gives the code starting with the LSB; reading from the root down gives it starting with the MSB, which is how decoding proceeds. Note: the trie is a full binary tree, and by convention actual leaves are used before internal nodes when frequencies tie.

50
Q

Huffman code, assuming the frequency array is done.

A

MinPQ<Node> pq = new MinPQ<>();
for (char i = 0; i < R; i++)
    if (freq[i] > 0) pq.insert(new Node(i, freq[i], null, null));
while (pq.size() > 1) {
    Node x = pq.delMin();
    Node y = pq.delMin();
    Node parent = new Node('\0', x.freq + y.freq, x, y);
    pq.insert(parent);
}
return pq.delMin();

51
Q

What is fixed-length encoding?

A

Assigning each character a binary codeword of the same fixed width by simple increment: 000, 001, 010, and so on.

52
Q

Knapsack recursive with memoization code.

A

// n = number of items, C = remaining capacity,
// w[] = item weights, v[] = item values, arr[][] = memo table
knapSack(n, C):
    if (arr[n][C] != null) return arr[n][C];
    if (n == 0 || C == 0) return 0;
    if (w[n] > C)
        result = knapSack(n - 1, C);
    else {
        var1 = knapSack(n - 1, C);
        var2 = v[n] + knapSack(n - 1, C - w[n]);
        result = max(var1, var2);
    }
    arr[n][C] = result;
    return result;

53
Q

What is the time complexity of knapsack recursive with memoization?

A

O(nC), where n is the number of items and C is the knapsack capacity.

54
Q

Distinguish between memoization and tabulation.

A

Memoization: top-down, recursive; natural for recursive problems, and only solves the subproblems actually needed.
Tabulation: bottom-up, iterative; same asymptotic complexity, but less overhead because it avoids recursion.

55
Q

Knapsack iterative with tabulation code.

A

for (i = 1; i <= n; i++)
    for (c = 0; c <= C; c++)
        if (w[i - 1] > c)
            tab[i][c] = tab[i - 1][c];
        else {
            int var1 = tab[i - 1][c];
            int var2 = v[i - 1] + tab[i - 1][c - w[i - 1]];
            tab[i][c] = Math.max(var1, var2);
        }
return tab[n][C];

56
Q

Basic knapsack recurrence.

A

B[k, w] = B[k - 1, w] if item k is overweight (w_k > w),
otherwise B[k, w] = max{ B[k - 1, w], B[k - 1, w - w_k] + v_k }
where w_k and v_k are the weight and value of item k.

57
Q

Time complexity of the greedy algorithm vs dynamic programming for the fractional knapsack problem.

A

Greedy: O(n log n)
Dynamic programming: O(n * W * m)

58
Q

What is the greedy algorithm that solves the FRACTIONAL knapsack problem efficiently?

A

Repeatedly choose the item with the best value-to-weight ratio.

59
Q

Why is a greedy algorithm ineffective for the 0/1 knapsack problem?

A

Future decisions depend on past choices. Greedy sacrifices global optimality for the locally best choice and short-term gain.

60
Q

What is the general time complexity of double recursion?

A

O(2^n)

61
Q

What is the total time if the work forms a geometric series 1, 2, 4, ..., 2^k with 2^k < n?

A

O(n), since the sum of the series is less than 2n.

62
Q

Time complexity of computing a factorial recursively.

A

O(n)

63
Q

Time complexity of this for loop: for (int i = 1; i < Math.pow(2, n); i++)

A

O(2^n)

64
Q

What is the time complexity of these nested loops? for (int i = 0; i < 10; i++) for (int j = 0; j < 10; j++)

A

O(1), since both bounds are constants: always 100 iterations regardless of input size.

65
Q

What makes radix sort inefficient?

A

Keys with a very large number of digits.

66
Q

What makes counting sort inefficient?

A

When the range of input values is much larger than the number of elements (range >> n).

67
Q

In Huffman coding, what property ensures that no code is a prefix of another?

A

The prefix-free property.

68
Q

Why is quick sort considered an in-place algorithm?

A

It performs all sorting with only limited auxiliary space.

69
Q

What is the best-case scenario for Quick Sort?
A) Pivot always splits array equally
B) Pivot is always largest element
C) Array is already sorted
D) All elements are equal
E) Array is in descending order

A

A) Pivot always splits array equally

70
Q

What is quick sort's worst case?

A

When the pivot is always the largest or smallest element.

71
Q

Explain the fractional knapsack problem (assuming a 1 increment).

A

Sort items by their value-per-weight ratio. Take whole items in decreasing ratio order while they fit; for the first item that does not fit, take just the fraction that fills the remaining capacity.

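A minimal Java sketch of this greedy strategy (method and parameter names are illustrative):

    // Fractional knapsack: best value-per-weight ratio first.
    static double fractionalKnapsack(double[] w, double[] v, double capacity) {
        Integer[] idx = new Integer[w.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // sort item indices by decreasing value-per-weight ratio
        java.util.Arrays.sort(idx, (a, b) -> Double.compare(v[b] / w[b], v[a] / w[a]));
        double total = 0;
        for (int i : idx) {
            if (capacity <= 0) break;
            double take = Math.min(w[i], capacity);  // whole item, or the last fraction
            total += v[i] * (take / w[i]);
            capacity -= take;
        }
        return total;
    }
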
72
Q

Which Big-O class represents the fastest order of growth for large N?

A

O(2^N)

73
Q

What does the constant factor in algorithm analysis refer to?

A

The constant multiplier hidden by Big O notation, e.g. the 10 in 10N^2 where the complexity is O(N^2).

74
Q

Define the adaptive property for sorting algorithms. Give the obvious example.

A

The ability of the algorithm to take advantage of existing order in the input data to improve efficiency. E.g. Insertion Sort.

75
Q

Define distributive for sorting algorithms. Give an example.

A

Non-comparison based. E.g. Counting Sort / Radix Sort.

76
Q

What is the acceptable space complexity for a recursive sorting algorithm to still be considered in-place?

A

O(log n)

77
Q

What are the differences and similarities between LSD and MSD radix sort?

A

MSD radix sort is recursive (LSD is iterative).
Their time and space complexities are the same.
MSD has higher overhead but can be more efficient.

78
Q

What potential performance issue can arise with MSD radix sort, especially for strings?

A

It can create a very large number of small subarrays, leading to high overhead, especially when many strings share the same prefix.

79
Q

Dynamic programming is most suitable for problems exhibiting which two properties?

A

Optimal substructure and overlapping subproblems.

80
Q

What does optimal substructure mean in the context of DP?

A

A globally optimal solution can be constructed from optimal solutions to its subproblems.

81
Q

What does overlapping subproblems mean in DP?

A

The same subproblems are encountered and solved multiple times in a naive recursive approach.

82
Q

Which technique is generally preferred for implementing DP if all subproblems must be solved anyway?

A

Tabulation.

83
Q

Which technique might be more memory efficient if not all subproblems need to be solved?

A

Memoization.

84
Q

The space complexity of the standard tabulation approach for 0/1 knapsack is O(N × C). How can this be optimized if only the final maximum value is needed?

A

It can be optimized to O(C) space by keeping only two rows (or even one row) of the DP table, namely the current state and the previous one, as nothing else is ever read again.

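A minimal Java sketch of the one-row version (names are illustrative). Iterating capacities right-to-left means tab[c - w[i]] still holds the previous item's row when it is read:

    // 0/1 knapsack tabulation in O(C) space.
    static int knapsackValue(int[] w, int[] v, int C) {
        int[] tab = new int[C + 1];
        for (int i = 0; i < w.length; i++)
            for (int c = C; c >= w[i]; c--)  // right-to-left preserves row i-1 values
                tab[c] = Math.max(tab[c], v[i] + tab[c - w[i]]);
        return tab[C];
    }
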
85
Q

Name the five types of compression model. Which one is Huffman and which one(s) is LZW?

A

Adaptive Model
Static/Dynamic Model
Lossy Model
Dictionary Model
Predictive Model
Huffman -> Static/Dynamic
LZW -> Adaptive / Dictionary

86
Q

What does RLE stand for?

A

Run-Length Encoding

87
Q

What are GIF, JPEG, MP3, and DEFLATE mixtures of?

A

GIF: an LZW variant
JPEG: Huffman and several other techniques
MP3: Huffman and several other techniques
DEFLATE: Huffman and an LZ77 (LZW-family) variant

88
Q

Define interpolation search. Give best, average, and worst case.

A

Estimate the likely position of the target value within the current interval based on the values at the interval ends. Best: O(1). Average: O(log log N) on uniformly distributed data. Worst: O(N).

89
Q

Define jump search. Give best, average, and worst case.

A

Jump ahead by fixed steps (e.g. √N) and then perform a linear search in the block where the element might be. Best: O(1). Average: O(√N). Worst: O(√N).