# Sorts Flashcards

1
Q

Is quick sort in place? Stable?

A

Quick sort is in place. It is not stable by default, but can be made stable (while still fast) with some neat optimizations.

2
Q

What is quick sort’s time complexity (consider all cases)?

A

Θ(n log n) in the best and average case. Θ(n^2) in the worst case.

3
Q

What leads to a worst case performance of quick sort?

A

When the chosen pivots are consistently greater than or less than all the remaining values, so that one partition has length 0 at every step.

4
Q

How long does each of quicksort’s partitioning steps take?

A

Each partition step takes Θ(n) operations.

5
Q

Name the two common quick sort partitioning schemes.

A

Hoare and Lomuto.

6
Q

Describe the Hoare partitioning scheme.

A

Two scanning indices move towards each other until they swap or cross.
A swap occurs when both point to an element on the wrong side (inversion).

7
Q

Describe the Lomuto partitioning scheme.

A

Two indices, but only one scans; out-of-place items are swapped to the front of the partition.
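
As a concrete illustration, here is a minimal Java sketch of a Lomuto-style partition (class and method names are my own; using a[hi] as the pivot is one common convention):

```java
public class LomutoDemo {
    // Lomuto partition on a[lo..hi], using a[hi] as the pivot.
    // Only j scans; i marks the boundary of items smaller than the pivot.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;                         // next free slot at the front
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {             // out-of-place item: swap to front
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;  // place the pivot
        return i;                           // pivot's final index
    }

    public static void main(String[] args) {
        int[] a = {3, 8, 2, 5, 1, 4};       // pivot = 4
        int p = partition(a, 0, a.length - 1);
        System.out.println(p + " " + java.util.Arrays.toString(a));
    }
}
```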

8
Q

Which partitioning scheme is more efficient?

A

Hoare.

9
Q

Quicksort performance is primarily driven by what?

A

Pivot selection.

10
Q

Besides pivot selection, how else can quicksort be made more efficient?

A

Diverting to insertion sort for smaller partitions is generally more efficient.
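
A sketch of how that diversion might look in Java, assuming a Lomuto-style partition and a cutoff of 10 (the exact threshold is an assumption; small values in that ballpark are typical):

```java
public class CutoffQuicksort {
    static final int CUTOFF = 10;  // assumed small-partition threshold

    static void sort(int[] a, int lo, int hi) {
        if (hi - lo < CUTOFF) {            // small partition: divert
            insertionSort(a, lo, hi);
            return;
        }
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; i++)
            for (int j = i; j > lo && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
            }
    }

    // Lomuto-style partition, pivot = a[hi] (illustrative choice)
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 8, 2, 6, 3, 5, 0, 11, 10};
        sort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```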

11
Q

What are the divide and conquer sorting algorithms (that we are studying)?

A

Quick sort and merge sort.

12
Q

Write pseudo code for quick sort (assume partitioning is handled by a separate function).

A

void sort(int[] a, int lo, int hi) {
    if (hi <= lo) return;
    int j = partition(a, lo, hi);
    sort(a, lo, j-1);
    sort(a, j+1, hi);
}

13
Q

Write pseudo code for quick sort’s partitioning.

A

int partition(int[] a, int lo, int hi) {
    int i = lo, j = hi + 1;
    int v = a[lo];
    while (true) {
        while (a[++i] < v) if (i == hi) break;
        while (v < a[--j]) if (j == lo) break;
        if (i >= j) break;
        swap(a, i, j);
    }
    swap(a, lo, j);
    return j;
}
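
Putting the two cards together, a runnable version of this Hoare-style quicksort (the swap helper and class name are filled in by me; the logic otherwise follows the cards):

```java
public class Quick {
    static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    // Hoare-style partition, pivot = a[lo]; indices i and j scan
    // toward each other and swap inverted pairs until they cross.
    static int partition(int[] a, int lo, int hi) {
        int i = lo, j = hi + 1;
        int v = a[lo];
        while (true) {
            while (a[++i] < v) if (i == hi) break;
            while (v < a[--j]) if (j == lo) break;
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, lo, j);   // put the pivot into its final position
        return j;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7};
        sort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a));  // [1, 2, 3, 5, 7, 8, 9]
    }
}
```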

14
Q

What is merge sort’s time complexity? (consider all cases)

A

Θ(n log n) in all cases.

15
Q

Is merge sort in place? Stable?

A

Stable, but not in place.

16
Q

What is merge sort’s space complexity?

A

Θ(n)

17
Q

Can merge sort readily be an in-place algorithm using arrays?

A

No.

18
Q

Can merge sort be implemented with lists?

A

Yes.

19
Q

How does merge sort find the halfway point of a list in n time, rather than 2n time?

A

Using the “alternate and skip” method: a fast index skips two nodes for every one node the slow index moves, so when the fast index reaches the end, the slow index is at the midpoint after a single traversal.
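
A minimal Java sketch of the idea with a slow/fast pointer pair (the Node class and names are my own):

```java
public class Midpoint {
    static class Node {
        int val; Node next;
        Node(int v, Node n) { val = v; next = n; }
    }

    // "Alternate and skip": fast advances two nodes per step, slow one.
    // When fast runs off the end, slow sits at the midpoint --
    // one traversal of the list, not two.
    static Node middle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        return slow;
    }

    public static void main(String[] args) {
        Node list = null;
        for (int v = 5; v >= 1; v--) list = new Node(v, list);  // 1..5
        System.out.println(middle(list).val);  // midpoint of 1..5 is 3
    }
}
```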

20
Q

Can merge sort be made to work on a list of lists? (nested lists)

A

Yes, by merging depth first.

21
Q

What’s the cost of finding the halfway point of an array in merge sort?

A

Θ(1)

22
Q

What are the benefits of identifying already sorted runs in merge sort?

A

Fewer comparisons and swaps overall, and fewer merge operations.

23
Q

Why is it possible to implement logarithmic-time insert and remove-the-maximum operations on a heap?

A

Because the height of a complete binary tree of size n is floor(lg n).

24
Q

What are the two approaches to heapifying?

A

Top-down (swim-based) or bottom-up (sink-based).

25
Q

Bottom-up heapifying uses how many compares and how many exchanges to construct a heap from N items?

A

Fewer than 2N compares and fewer than N exchanges.

26
Q

Write the pseudo code for merge sort. (assume merging is handled by a separate function).

A

list mergesort(list a) {
    if (a's size <= 1) return a;
    l1 = a[0 to half];
    l2 = a[half to end];
    return merge(mergesort(l1), mergesort(l2));
}

27
Q

Write a function that merges two arrays together.

A

list merge(list l1, list l2) {
    L = new List();
    while (neither l1 nor l2 is empty) {
        if (l1.get(0) <= l2.get(0)) L.add(l1.remove(0));
        else L.add(l2.remove(0));
    }
    add any remaining items of l1, then of l2, to L;
    return L;
}
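
A runnable array-based Java version of the merge (names are illustrative; taking from the left array on ties with <= keeps the merge stable):

```java
public class Merge {
    // Merge two sorted arrays into one sorted array.
    static int[] merge(int[] l1, int[] l2) {
        int[] out = new int[l1.length + l2.length];
        int i = 0, j = 0, k = 0;
        while (i < l1.length && j < l2.length) {
            if (l1[i] <= l2[j]) out[k++] = l1[i++];  // ties go left: stable
            else                out[k++] = l2[j++];
        }
        while (i < l1.length) out[k++] = l1[i++];    // drain leftovers
        while (j < l2.length) out[k++] = l2[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
            merge(new int[]{1, 4, 6}, new int[]{2, 3, 5})));  // [1, 2, 3, 4, 5, 6]
    }
}
```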

28
Q

Write the pseudo code for heapsort.

A

void heapsort(int[] a) {
    int N = a.length;
    for (int i = N/2; i >= 1; i--)
        sink(a, i, N);
    while (N > 1) {
        exch(a, 1, N--);
        sink(a, 1, N);
    }
}
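
The heapsort pseudocode relies on sink and exch helpers; here is a runnable sketch with those helpers filled in, mapping the 1-indexed heap onto a 0-indexed Java array (a common convention in Sedgewick-style code):

```java
public class Heapsort {
    static void sort(int[] a) {
        int N = a.length;
        for (int i = N / 2; i >= 1; i--)   // bottom-up (sink-based) heapify
            sink(a, i, N);
        while (N > 1) {
            exch(a, 1, N--);               // move the max to the end
            sink(a, 1, N);                 // restore heap order
        }
    }

    // Sink a[k] down until heap order holds within the first N items.
    static void sink(int[] a, int k, int N) {
        while (2 * k <= N) {
            int j = 2 * k;
            if (j < N && less(a, j, j + 1)) j++;   // pick the larger child
            if (!less(a, k, j)) break;
            exch(a, k, j);
            k = j;
        }
    }

    // 1-indexed helpers over a 0-indexed array.
    static boolean less(int[] a, int i, int j) { return a[i - 1] < a[j - 1]; }
    static void exch(int[] a, int i, int j) {
        int t = a[i - 1]; a[i - 1] = a[j - 1]; a[j - 1] = t;
    }

    public static void main(String[] args) {
        int[] a = {7, 2, 9, 4, 1, 8};
        sort(a);
        System.out.println(java.util.Arrays.toString(a));  // [1, 2, 4, 7, 8, 9]
    }
}
```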