Week 1 Flashcards

1
Q

What is an algorithm?

A

A step-by-step procedure for solving a problem using a computer program.

2
Q

What is a procedure?

A

A method for solving a problem.

3
Q

Is a programming language a type of procedure?

A

No. A programming language is a means of expressing a procedure, not a procedure itself.

4
Q

Why do we analyze algorithms?

A

To predict performance, provide guarantees, and understand the theoretical basis of an algorithm's behaviour.

5
Q

What are the limitations of run-time analysis?

A
  • Implementing the algorithm may be difficult.
  • Results don't cover all possible inputs.
  • Results are only valid for the same hardware and software environment.

6
Q

How much time do primitive operations take?

A

Constant time.

7
Q

If I perform a primitive operation many times, what is the time taken?

A

N, where N is the number of times the operation is performed.

8
Q

Define O(1), O(log N), O(N), O(N log N)

A

O(1) = constant time, regardless of input size.
O(log N) = logarithmic time, proportional to the logarithm of the input size.
O(N) = linear time, directly proportional to the input size.
O(N log N) = log-linear time, proportional to the input size times its logarithm.

9
Q

Why is Big O notation desirable?

A

It abstracts away implementation details: we expect an algorithm's running time to grow in proportion to its Big O bound on any hardware or operating system.

10
Q

What is O of O(N) + O(N^2)?

A

O(N^2)

11
Q

How can I find the value of n from which a Big-Oh bound holds? Use the example 11n + 5 to explain.

A

Find constants c and n0 such that c * g(n) >= f(n) for all n >= n0. For 11n + 5 = O(n), take c = 12: 12n >= 11n + 5 holds whenever n >= 5, so the bound holds from n0 = 5.

12
Q

What does big Ω do?

A

Tells us that the time taken grows at least as fast as this function: a lower bound on running time.

13
Q

What does Θ notation do?

A

A tight bound: for Θ(N) it gives both a minimum running time k1 * N and a maximum running time k2 * N.

14
Q

Summarize the difference between Big Θ, Big O and Big Ω.

A

Θ(N^2): the dominant term must be exactly N^2.
O(N^2): the dominant term must be N^2 or lower.
Ω(N^2): the dominant term must be N^2 or higher.

15
Q

Explain briefly selection sort.

A

In iteration i, scan the list from position i to find the smallest remaining entry and swap it with the entry at i. Then increment i and repeat until the end of the list.
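The scan-and-swap loop above can be sketched in Java (a minimal illustration; the class and method names are my own, not from the course):

```java
import java.util.Arrays;

public class SelectionSortDemo {
    // In iteration i, find the smallest entry in a[i..n-1] and swap it with a[i].
    public static int[] sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 2, 4, 1}))); // prints [1, 2, 4, 5]
    }
}
```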

16
Q

List sorting algorithms that are in-place.

A

Selection Sort
Insertion Sort
Shell Sort
Quick Sort

17
Q

List sorting algorithms that are non-recursive.

A

Selection Sort
Insertion Sort
Shell Sort

18
Q

List sorting algorithms that utilize comparisons.

A

Selection Sort
Insertion Sort
Shell Sort
Merge Sort
Quick Sort

19
Q

List sorting algorithms that are internal.

A

Selection Sort
Insertion Sort
Shell Sort
Quick Sort

20
Q

List sorting algorithms that are external.

A

Merge Sort

21
Q

List sorting algorithms that are unstable.

A

Selection Sort
Shell Sort
Quick Sort

22
Q

Explain stability in relation to sorting algorithms.

A

Stable sorting algorithms preserve the existing relative order of elements with equal keys.

23
Q

Explain internal vs external in relation to sorting algorithms.

A
  • Internal sort is one where the items being sorted can be kept in main memory / RAM.
  • External sort is one where the items being sorted need to use external memory.
24
Q

Explain insertion sort.

A

In iteration i, swap a[i] leftwards past each larger entry until it reaches its correct position. Repeat for each i until every element has been inserted in the correct place.
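The leftward-swapping step above can be sketched in Java (names are illustrative, not from the course):

```java
import java.util.Arrays;

public class InsertionSortDemo {
    // In iteration i, slide a[i] left past every larger entry.
    public static int[] sort(int[] a) {
        for (int i = 1; i < a.length; i++)
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{3, 1, 2}))); // prints [1, 2, 3]
    }
}
```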

25
Q

List sorting algorithms that are stable.

A

Insertion Sort
Merge Sort

26
Q

List the best case of each sorting algorithm.

A

Selection Sort: O(N^2)
Insertion Sort: O(N)
Shell Sort (3x + 1): O(N log N)
Merge Sort: 1/2 N lg N
Quick Sort: N lg N

27
Q

List the worst case of each sorting algorithm.

A

Selection Sort: O(N^2)
Insertion Sort: O(N^2)
Shell Sort (3x + 1): O(N^(3/2))
Merge Sort: N lg N
Quick Sort: 1/2 N^2

28
Q

List the average case of each sorting algorithm.

A

Selection Sort: O(N^2)
Insertion Sort: O(N^2)
Shell Sort (3x + 1): O(N^(3/2))
Merge Sort: N lg N
Quick Sort: 2 N lg N

29
Q

Explain shell sort.

A

Elements a certain gap apart are compared and swapped if they are out of order. This comparison moves along the list, and then the gap is reduced. The process repeats until the gap is 1, at which point a standard insertion sort is performed.

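A minimal Java sketch of the gap-based process described above, using the 3x + 1 gap sequence (1, 4, 13, 40, ...) the tables mention; names are illustrative:

```java
import java.util.Arrays;

public class ShellSortDemo {
    // h-sort with shrinking gaps; the final pass at h = 1 is a plain insertion sort.
    public static int[] sort(int[] a) {
        int n = a.length, h = 1;
        while (h < n / 3) h = 3 * h + 1;   // largest 3x + 1 gap below n
        while (h >= 1) {
            for (int i = h; i < n; i++)
                for (int j = i; j >= h && a[j] < a[j - h]; j -= h) {
                    int tmp = a[j]; a[j] = a[j - h]; a[j - h] = tmp;
                }
            h /= 3;                        // shrink the gap
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{9, 1, 8, 2, 7, 3})));
    }
}
```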
30
Q

Explain merge sort.

A

Divide the array into two halves, recursively sort each half, then merge the two halves by comparing elements sequentially from each half and inserting them into a new temp array.

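The divide-and-merge steps above can be sketched in Java (a minimal illustration; names are my own):

```java
import java.util.Arrays;

public class MergeSortDemo {
    public static int[] sort(int[] a) {
        sort(a, new int[a.length], 0, a.length - 1);
        return a;
    }

    private static void sort(int[] a, int[] aux, int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        sort(a, aux, lo, mid);        // sort left half
        sort(a, aux, mid + 1, hi);    // sort right half
        // merge: copy the range, then repeatedly take the smaller leading element
        for (int k = lo; k <= hi; k++) aux[k] = a[k];
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if (i > mid) a[k] = aux[j++];
            else if (j > hi) a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else a[k] = aux[i++];
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{4, 1, 3, 2}))); // prints [1, 2, 3, 4]
    }
}
```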
31
Q

List all sorting algorithms that are recursive.

A

Merge Sort
Quick Sort

32
Q

List all sorting algorithms that are out-of-place.

A

Merge Sort

33
Q

Explain quick sort.

A

Take the first / middle element of the array as the pivot. Place all elements less than the pivot to its left and all elements greater than it to its right, then quicksort these two subarrays. In practice: set the pivot to the first element, a left-mark to the second element and a right-mark to the last element. Advance the left-mark until it finds an element > pivot, then decrement the right-mark until it finds an element < pivot. Swap these two elements and continue with the left-mark again. This continues until the marks cross, and then we swap the pivot with the right-mark.

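The two-pointer partitioning described above can be sketched in Java (pivot taken as the first element; names are illustrative):

```java
import java.util.Arrays;

public class QuickSortDemo {
    public static int[] sort(int[] a) {
        sort(a, 0, a.length - 1);
        return a;
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        sort(a, lo, p - 1);       // quicksort left subarray
        sort(a, p + 1, hi);       // quicksort right subarray
    }

    // Advance i past elements < pivot, retreat j past elements > pivot,
    // swap, and finally swap the pivot into place at j when the marks cross.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo], i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < pivot) if (i == hi) break;
            while (a[--j] > pivot) if (j == lo) break;
            if (i >= j) break;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
        int t = a[lo]; a[lo] = a[j]; a[j] = t;
        return j;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 2, 4, 1, 3}))); // prints [1, 2, 3, 4, 5]
    }
}
```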
34
Q

Explain Key-Indexed Counting Sort.

A

Find the max value in the array. Make an array of frequencies of size max + 1. Reconstruct the array starting with the smallest value, writing it out as many times as it occurs, and working up to the biggest value.

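The frequency-count-and-reconstruct steps above can be sketched in Java (assumes non-negative integer keys; names are my own):

```java
import java.util.Arrays;

public class CountingSortDemo {
    // Key-indexed counting for small non-negative integer keys.
    public static int[] sort(int[] a) {
        int max = 0;
        for (int x : a) if (x > max) max = x;
        int[] count = new int[max + 1];     // frequency of each value
        for (int x : a) count[x]++;
        int[] out = new int[a.length];
        int k = 0;
        for (int v = 0; v <= max; v++)      // smallest value upwards
            for (int c = 0; c < count[v]; c++)
                out[k++] = v;               // write v out count[v] times
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{3, 0, 3, 1}))); // prints [0, 1, 3, 3]
    }
}
```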
35
Q

Explain radix sort.

A

Sort on the units place of the numbers, then the tens, and so on until the array is sorted. The number of passes is the number of digits ('places') in the largest number.

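A minimal Java sketch of the digit-by-digit passes described above (LSD radix sort on base-10 digits of non-negative integers; names are illustrative):

```java
import java.util.Arrays;

public class RadixSortDemo {
    public static int[] sort(int[] a) {
        int max = 0;
        for (int x : a) if (x > max) max = x;   // max decides the number of passes
        for (int exp = 1; max / exp > 0; exp *= 10) {
            int[] count = new int[10];
            for (int x : a) count[(x / exp) % 10]++;              // digit frequencies
            for (int d = 1; d < 10; d++) count[d] += count[d - 1]; // prefix sums
            int[] out = new int[a.length];
            for (int i = a.length - 1; i >= 0; i--)                // right-to-left keeps it stable
                out[--count[(a[i] / exp) % 10]] = a[i];
            System.arraycopy(out, 0, a, 0, a.length);
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{170, 45, 75, 2}))); // prints [2, 45, 75, 170]
    }
}
```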
36
Q

What is linear search?

A

A brute-force approach: look at every element in turn until you find the target.

37
Q

What is the advantage of linear search?

A

It works on unsorted input.

38
Q

What is the worst case time complexity of linear search?

A

O(N)

39
Q

Explain binary search.

A

Select the middle element of a sorted array. If the target equals that element, return it; if the target is smaller, search the lower half of the array; if the target is larger, search the upper half.

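The halving steps above can be sketched in Java (iterative version; names are my own):

```java
public class BinarySearchDemo {
    // Returns the index of target in the sorted array a, or -1 if absent.
    public static int search(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;      // avoids overflow of (lo + hi)
            if (target == a[mid]) return mid;
            else if (target < a[mid]) hi = mid - 1;  // search lower half
            else lo = mid + 1;                       // search upper half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(search(a, 7));  // prints 3
        System.out.println(search(a, 4));  // prints -1
    }
}
```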
40
Q

What is the worst case time complexity for binary search?

A

O(log N)

41
Q

Explain the Knuth-Morris-Pratt creation of the lps array.

A

matching = 0, i = 1 (pattern index), lps[0] = 0.
While i < pattern length:
  if p[i] == p[matching]: matching++, lps[i] = matching, i++
  else if matching != 0: matching = lps[matching - 1] (no i++ here, as we must check shorter suffixes)
  else (matching == 0): lps[i] = 0, i++

42
Q

Explain the Knuth-Morris-Pratt algorithm, assuming the lps array is already formed.

A

i is the text index, j is the pattern index.
If text[i] == pattern[j]: increment both.
If j == pattern.length(): pattern found; set j = lps[j - 1].
Else, on a mismatch: if j != 0 then j = lps[j - 1], else i++.

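The lps construction and the matching loop above can be sketched together in Java (returns the first match index; names are my own):

```java
public class KMPDemo {
    // Builds the lps table: lps[i] = length of the longest proper prefix of
    // p[0..i] that is also a suffix of it.
    static int[] lps(String p) {
        int[] lps = new int[p.length()];
        int matching = 0, i = 1;
        while (i < p.length()) {
            if (p.charAt(i) == p.charAt(matching)) {
                lps[i++] = ++matching;
            } else if (matching != 0) {
                matching = lps[matching - 1];  // try a shorter prefix; no i++
            } else {
                lps[i++] = 0;
            }
        }
        return lps;
    }

    // Returns the index of the first occurrence of pattern in text, or -1.
    public static int search(String text, String pattern) {
        int[] lps = lps(pattern);
        int i = 0, j = 0;                      // text index, pattern index
        while (i < text.length()) {
            if (text.charAt(i) == pattern.charAt(j)) {
                i++; j++;
                if (j == pattern.length()) return i - j;  // pattern found
            } else if (j != 0) {
                j = lps[j - 1];                // fall back in the pattern
            } else {
                i++;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(search("ababcabab", "cab")); // prints 4
    }
}
```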
43
Q

What is the time complexity of the Knuth-Morris-Pratt algorithm?

A

O(length of pattern + length of text)

44
Q

Time complexity of counting sort vs radix sort.

A

Counting sort: O(n + range of input values)
Radix sort: O(d * (n + k)), where n = number of elements, d = number of digits in the largest number, k = base (10 for decimal)

45
Q

Explain KMP with DFA-style code, assuming the dfa is already produced.

A

j = 0 (current state)
for i = 0 to n - 1:
  j = dfa[text.charAt(i)][j]
  if j == m: pattern found at i - m + 1; j = 0
where n is the length of the text and m is the length of the pattern.

46
Q

Explain how to create the dfa array.

A

dfa[character_input][state], with all values initialised to 0.
dfa[pattern.charAt(0)][0] = 1
int x = 0 (restart state)
for j = 1 to m - 1:
  copy the values of state x into state j (mismatch cases)
  set the matching input to progress to the next state: dfa[pattern.charAt(j)][j] = j + 1
  x = dfa[pattern.charAt(j)][x]

47
Q

Explain how run length encoding reduces redundancy in a bitstream.

A

It uses 4-bit counts to represent alternating runs of 0s and 1s. (The bits per count will have to increase if there are ever more than 15 consecutive 0s or 1s.)

48
Q

Explain the LZW compression algorithm.

A

Put the ASCII values into the dictionary.
String w = ""
for (char c : input):
  String wc = w + c
  if dictionary.contains(wc): w = wc
  else:
    result.add(dictionary.get(w))
    dictionary.put(wc, dictSize++)
    w = "" + c
if (!w.equals("")) result.add(dictionary.get(w))

49
Q

Explain the LZW decompression algorithm.

A

Rebuild the dictionary produced by the compression algorithm while reading the codes, and output the corresponding strings.

50
Q

Explain Huffman encoding.

A

Calculate the frequency of every character and make each character a leaf node. Repeatedly combine the two nodes with the least frequency into a parent node whose value is their combined frequency, until all leaves are used. Going left in the tree is 0, going right is 1; construct each character's binary encoding from the root, starting with the MSB.

51
Q

Huffman code, assuming the frequency array is done.

A

MinPQ<Node> pq
for (char i = 0; i < R; i++)
  if (freq[i] > 0) pq.insert(new Node(i, freq[i], null, null))
while (pq.size() > 1):
  Node x = pq.delMin()
  Node y = pq.delMin()
  Node parent = new Node('_', x.freq + y.freq, x, y)
  pq.insert(parent)
return pq.delMin()