Quizzes Flashcards

1
Q

We apply insertion sort to the input-array A = [7,3,5,1,6,2,4]. What is A after two iterations of the crucial for-loop?

A) A = [1,3,5,7,6,2,4]

B) A = [3,7,5,1,6,2,4]

C) A = [1,2,3,7,5,6,4]

D) A = [3,5,7,1,6,2,4]

A

D) A = [3,5,7,1,6,2,4]
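
The answer can be reproduced with a minimal Python sketch (0-based indices; `insertion_sort_steps` is an illustrative helper that stops after a given number of passes of the outer loop):

```python
def insertion_sort_steps(a, passes):
    """Run only the first `passes` iterations of insertion sort's outer loop."""
    a = list(a)
    for j in range(1, 1 + passes):  # pass j inserts a[j] into the sorted prefix
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_steps([7, 3, 5, 1, 6, 2, 4], 2))  # → [3, 5, 7, 1, 6, 2, 4]
```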

2
Q

What is the worst-case running time of insertion sort, for inputs of size n?

A) Theta (n log(n))

B) O (n log(n))

C) Theta (n^2)

D) O (log n)

A

C) Theta (n^2)

3
Q

What is the best-case running time of insertion sort, for inputs of size n?

A) Theta (n)

B) Theta (n log(n))

C) Theta (n^2)

D) Theta (log(n))

A

A) Theta (n)

4
Q

We consider insertion sort. What happens if we swap the order of the tests for the while-loop in line 5?

A) We may then possibly compare A[0] with key, but A[0] does not exist.

B) The algorithm then sorts in reverse order.

C) It does not matter.

D) The program then does not terminate.

A

A) We may then possibly compare A[0] with key, but A[0] does not exist.

5
Q

We annotate the following code fragment with the number of times each line is executed in the worst case. What do we get?

for j:=2 to n do

i := j-1

while i>0 do

i:= i-1

A) For the first line n, for the second line n-1, for the third line the summation from j=2 to n of j, for the fourth line the summation from j=2 to n of (j-1).

B) For the first line n-1, for the second line n-1, for the third line the summation from j=1 to n of j, for the fourth line the summation from j=1 to n of (j-1).

C) For the first line n, for the second line n-1 for the third line j-1, for the fourth line n.

D) For the first line n-1, for the second line n-1, for the third line j-1, for the fourth line n.

A

A) For the first line n, for the second line n-1, for the third line the summation from j=2 to n of j, for the fourth line the summation from j=2 to n of (j-1).

6
Q

What is the result of applying the first two iterations of the algorithm for selection sort to the input array [7, 2, 9, 6, 1, 3, 5, 4]?

A) 1,2,9,6,7,3,5,4

B) 1,3,9,6,7,2,5,4

C) 1,2,9,6,7,3,4,5

D) 1,3,9,6,7,2,4,5

A

A) 1,2,9,6,7,3,5,4
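
A short Python sketch confirms the trace (0-based indices; `selection_sort_steps` is an illustrative helper that performs only the first few passes):

```python
def selection_sort_steps(a, passes):
    """Run only the first `passes` iterations of selection sort's outer loop."""
    a = list(a)
    n = len(a)
    for i in range(passes):
        m = min(range(i, n), key=a.__getitem__)  # index of smallest remaining key
        a[i], a[m] = a[m], a[i]                  # fix position i
    return a

print(selection_sort_steps([7, 2, 9, 6, 1, 3, 5, 4], 2))  # → [1, 2, 9, 6, 7, 3, 5, 4]
```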

7
Q

What is the best-case running time of selection sort, for inputs of size n?

A) O(1) because we immediately find the smallest element of the unordered part.

B) O(n) because we just check whether the array is ordered.

C) O(n log(n)) because that is the best-case for comparison-based sorting.

D) O(n^2) because we always have to completely traverse the unordered part.

A

D) O(n^2) because we always have to completely traverse the unordered part.

8
Q

What is the worst-case running time of selection sort, for inputs of size n?

A) Theta(n)

B) Theta(n^2)

C) Theta(n log(n))

D) Theta(log(n))

A

B) Theta(n^2)

9
Q

In the recurrence tree for merge sort

A) the height is log(n) and the work per layer is the linear work to split.

B) the height is log(n) and the work per layer is the linear work to merge.

C) the height is n and the work per layer is the linear work to split.

D) the height is n and the work per layer is the linear work to merge.

A

B) the height is log(n) and the work per layer is the linear work to merge.

10
Q

Which of the following is(are) correct?

A) n^2 in O(n^3)

B) n^2 in O(n^2)

C) n^2 in Theta (n^2)

D) n^2 in Theta (n^3)

E) n^3 in Theta (n^2)

F) n^3 in O(n^2)

A

A) n^2 in O(n^3)

B) n^2 in O(n^2)

C) n^2 in Theta (n^2)

11
Q

An algorithm with worst-case time complexity Theta (n log(n)) is always, for any input, faster than an algorithm with worst-case time complexity in Theta (n^2). Is this statement true or false, and why?

A) True, because n log(n) has a lower rate of growth than n^2

B) True, because Theta gives a strict bound

C) False, because for a small input or for a special input an algorithm with worst-case time complexity in Theta (n^2) may perform better than an algorithm with worst-case time complexity in Theta (n log(n))

D) False, because n log(n) and n^2 have the same rate of growth anyway.

A

C) False, because for a small input or for a special input an algorithm with worst-case time complexity in Theta (n^2) may perform better than an algorithm with worst-case time complexity in Theta (n log(n))

12
Q

We consider two statements.

Statement A is: 2^(n+1) is in O(2^n).
Statement B is: 2^(2n) is in O(2^n).

Which of the following holds?

A) A and B are both false

B) A and B are both true

C) A is true but B is false

D) B is true but A is false

A

C) A is true but B is false
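
The reasoning behind the answer, written out: 2^(n+1) exceeds 2^n only by a constant factor, while 2^(2n) exceeds it by an unbounded factor:

```latex
2^{n+1} = 2 \cdot 2^{n} \in O(2^{n}), \qquad
2^{2n} = 4^{n}, \quad \frac{4^{n}}{2^{n}} = 2^{n} \to \infty,
\ \text{so } 2^{2n} \notin O(2^{n}).
```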

13
Q

Which of the following ordering of functions is(are) in increasing order of rate of growth?

A) 2^7, n log(n), n^2, n!

B) 3n, 2^3, n log(n), n^3

C) n^2, n^3, n log(n), 2^n

D) n log(n), n^2, n^7, 2^n

E) n log(n), n^2, 2^7, n!

A

A) 2^7, n log(n), n^2, n!

D) n log(n), n^2, n^7, 2^n

14
Q

Which of the following ordering of functions is in increasing order of rate of growth?

A) 2^{10}, n (log(n))^2, n^2 log(n), 2^{log(n)}

B) n (log(n))^2, n^2 log(n), 2^{log(n)}, 2^{10}

C) n (log(n))^2, n^2 log(n), 2^{10}, 2^{log(n)}

D) 2^{10}, 2^{log(n)}, n (log(n))^2, n^2 log(n)

A

D) 2^{10}, 2^{log(n)}, n (log(n))^2, n^2 log(n)

15
Q

We have T(n) in O(n^2) with T a function for the worst-case time complexity of algorithm A. Do we have T(n) in Theta(n^2)?

A) Yes, because an upper bound for the worst-case gives a lower bound.

B) No, T is not necessarily bound from below by n^2.

A

B) No, T is not necessarily bound from below by n^2.

16
Q

We consider an array A that contains n keys. What is the asymptotic worst-case running time of the best possible algorithm that checks whether the keys are in increasing sorted order?

A) Theta (1)

B) Theta (log (n))

C) Theta (n)

D) Theta (n log (n))

A

C) Theta (n)

17
Q

When is a sorting algorithm said to be stable?

A) If the algorithm is linear time on an array that is already sorted.

B) If sorting the array twice in a row does not affect the result.

C) If it does not need extra memory next to the input array.

D) If it respects the order of equal keys in the array.

A

D) If it respects the order of equal keys in the array.

18
Q

consider the following pseudo-code:

Algorithm Loop(n):

s := 0

for i := 1 to n^2 do

for j : = 1 to i do

s := s+ i

What is the worst-case running time of Loop?

A) O(n).

B) O(n^2).

C) O(n^3).

D) O(n^4).

A

D) O(n^4).
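
The count can be verified by simulation (`loop_count` is an illustrative helper that counts how often the innermost line runs):

```python
def loop_count(n):
    """Number of times `s := s + i` is executed in Loop(n)."""
    count = 0
    for i in range(1, n * n + 1):   # outer loop: n^2 iterations
        count += i                   # inner loop body runs i times
    return count                     # = n^2 (n^2 + 1) / 2, i.e. Theta(n^4)

print(loop_count(10))  # → 5050, matching 100 * 101 / 2
```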

19
Q

We consider pseudo-code for finding a maximum element in an array of integers. What is an invariant useful for proving correctness?

FindMax(A)

max := A[1]

for i := 2 to A.length do

if A[i] > max then

max := A[i]

A) max is the largest value in A[1..i]

B) max is the largest value in A[1..(i-1)]

C) max is the largest value in A[1..(i+1)]

D) max is the largest value in A

A

B) max is the largest value in A[1..(i-1)]
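
The invariant can be checked mechanically — a 0-based Python translation of FindMax with the invariant asserted at the top of each pass (names are illustrative):

```python
def find_max(a):
    """FindMax with the loop invariant asserted each iteration."""
    best = a[0]
    for i in range(1, len(a)):
        # invariant: best is the largest value in a[0..i-1]
        assert best == max(a[:i])
        if a[i] > best:
            best = a[i]
    return best

print(find_max([3, 9, 2, 7]))  # → 9
```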

20
Q

We consider an algorithm that computes for input n >= 1 the sum 1+…+n. What is an invariant that can be used to prove correctness?

ComputeSum(n)

sum := 0

for i := 1 to n

sum := sum + i

return sum

A) sum is 1 + … + (i-1)

B) sum is 1 + … + i

C) sum is i + … + n

D) sum is 1 + … + n

A

A) sum is 1 + … + (i-1)
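
A sketch of ComputeSum with the invariant asserted at the top of each iteration confirms that option A holds before the body runs (names are illustrative):

```python
def compute_sum(n):
    """ComputeSum(n); the invariant is checked before each addition."""
    total = 0
    for i in range(1, n + 1):
        assert total == sum(range(1, i))  # invariant: total = 1 + ... + (i-1)
        total += i
    return total

print(compute_sum(100))  # → 5050
```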

21
Q

Merge sort uses the procedure merge as subroutine. What is the running time of merge on inputs of total size n?

A) It is in Theta (1).

B) It is in Theta (n^2).

C) It is in Theta (n log(n)).

D) It is in Theta (n).

A

D) It is in Theta (n).
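
A minimal merge sketch makes the linear bound visible: every element of the two inputs is moved to the output exactly once (the function name is illustrative):

```python
def merge(left, right):
    """Merge two sorted lists; each element is copied exactly once -> Theta(n)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # at most one of these two tails is non-empty
    out.extend(right[j:])
    return out

print(merge([1, 3, 5], [2, 4, 6]))  # → [1, 2, 3, 4, 5, 6]
```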

22
Q

We apply merge sort to already sorted input arrays of size n. Then the running time is

A) in Theta (n) because we just need to check that the array is sorted.

B) in Theta (n log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure has linear running time.

C) in Theta (log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure is trivial.

D) in Theta (n log(n)) because the height of the recursion tree is n log(n) + 1 and the merge procedure is trivial.

A

B) in Theta (n log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure has linear running time.

23
Q

What is one of the main drawbacks of merge sort?

A) It is a recursive algorithm.

B) It uses additional memory, proportional to the input size.

C) It uses an additional subroutine, merge.

D) Its recursion tree has height approximately the log of the input size.

A

B) It uses additional memory, proportional to the input size.

24
Q

What is a correct clause for the case n>1 for a recurrence equation describing the worst-case running time of merge sort, for inputs of size n?

A) T(n) = T(n-1) +cn, with c a constant

B) T(n) = T(n/2) + c, with c a constant

C) T(n) = 2T(n/2) + c, with c a constant

D) T(n) = 2T(n/2) + cn, with c a constant

A

D) T(n) = 2T(n/2) + cn, with c a constant

25
Q

Consider the following recurrence equation: T(0) = 1, and T(n) = T(n-1) + n for n >= 1. What is the big-O of T?

A) T(n) in O(n^2)

B) T(n) in O(n log(n))

C) T(n) in O(n)

D) T(n) in O(2^n)

A

A) T(n) in O(n^2)
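
Unfolding the recurrence confirms the quadratic bound: T(n) = 1 + 1 + 2 + ... + n = 1 + n(n+1)/2 (the helper name is illustrative):

```python
def t(n):
    """T(0) = 1, T(n) = T(n-1) + n, unfolded iteratively."""
    val = 1
    for k in range(1, n + 1):
        val += k
    return val  # closed form: 1 + n(n+1)/2, so T(n) is in O(n^2)

print(t(10))  # → 56, i.e. 1 + 10*11/2
```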

26
Q

What is a correct clause for the case n>1 for a recurrence equation describing the worst-case running time of max-heapify, for inputs of size n?

A) T(n) <= T((2n)/3) + c, with c a constant

B) T(n) <= T(n-1) + c, with c a constant

C) T(n) <= 2T(n/2) + cn, with c a constant

D) T(n) <= T(n/2) + c, with c a constant

A

A) T(n) <= T((2n)/3) + c, with c a constant

27
Q

Which one of the following arrays is(are) a max-heap?

A) [7,6,3,5,4,2,1]

B) [7,6,2,5,4,3,1]

C) [1,2,3,4,5,6,7]

D) [7,6,4,1,2,5,3]

E) [7,4,6,3,1,5,2]

A

A) [7,6,3,5,4,2,1]

E) [7,4,6,3,1,5,2]
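
A small checker confirms which options satisfy the max-heap property (array index 0 is the root; the name is illustrative):

```python
def is_max_heap(a):
    """0-based heap layout: children of a[i] are a[2i+1] and a[2i+2]."""
    n = len(a)
    for i in range(n):
        for c in (2 * i + 1, 2 * i + 2):
            if c < n and a[c] > a[i]:
                return False
    return True

print(is_max_heap([7, 6, 3, 5, 4, 2, 1]))  # → True
print(is_max_heap([7, 6, 2, 5, 4, 3, 1]))  # → False (3 is a child of 2)
```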

28
Q

We consider a max-heap containing n different keys. Why do we have that the height of such a max-heap is approximately log(n)?

A) Because the keys in a max-heap are partially ordered.

B) Because a max-heap is a binary tree.

C) Because a max-heap is an almost-complete binary tree.

D) Because all elements are different

A

C) Because a max-heap is an almost-complete binary tree.

29
Q

How many different max-heaps with keys 1, 2, 3, 4 exist?

A) 1

B) 2

C) 3

D) 4

A

C) 3

30
Q

What is the least and the largest number of elements that an almost complete binary tree of height h can have?

A) At least 2^h and at most 2^(h+1).

B) At least 2^(h-1) and at most 2^(h+1).

C) At least 2^(h-1) and at most 2^(h+1) - 1.

D) At least 2^h and at most 2^(h+1) - 1.

A

D) At least 2^h and at most 2^(h+1) - 1.

31
Q

We represent a heap of size n as an array. At what indices of the array are the leaves of the heap? (We use floor(x) for the largest integer that is less than or equal to x, and ceiling(x) for the smallest integer that is larger than or equal to x.)

A) At indices floor(n/2)+1, floor(n/2)+2, … , n.

B) At indices floor(n/2), floor(n/2)+1, …, n.

C) At indices 1, 2, …, floor(n/2).

D) At indices 1, 2, …, floor(n/2)-1.

A

A) At indices floor(n/2)+1, floor(n/2)+2, … , n.
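
This can be sanity-checked mechanically: with 1-based indexing, node i is a leaf exactly when its left child 2i falls outside the heap (the helper name is illustrative):

```python
def leaf_indices(n):
    """1-based indices of the leaves of a heap of size n."""
    return [i for i in range(1, n + 1) if 2 * i > n]

print(leaf_indices(7))  # → [4, 5, 6, 7], i.e. floor(7/2)+1 up to 7
```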

32
Q

We consider executions of MaxHeapify on input an array A of length n, and an index i which is at least n/2. What is intuitively the running time of such executions?

A) Elementary time.

B) Approximately log(n) time, with n the length of the array A.

C) Approximately n time, with n the length of the array A.

D) Approximately n/2 time with n the length of the array A.

A

A) Elementary time.

33
Q

We consider the worst-case running time of MaxHeapify. Which of the following (one or more) options are correct?

A) T(h) is in O(h) for inputs with height h.

B) T(h) is in O (log(h)) for inputs with height h.

C) T(n) is in O(n) for inputs with n keys.

D) T(n) is in O(log(n)) for inputs with n keys.

E) T(n) is in O(n/2) for inputs with n keys.

A

A) T(h) is in O(h) for inputs with height h.

C) T(n) is in O(n) for inputs with n keys.

D) T(n) is in O(log(n)) for inputs with n keys.

E) T(n) is in O(n/2) for inputs with n keys.

34
Q

What is the result of performing the first iteration of the bottom-up procedure buildMaxHeap to [1, 2, 3, 4, 5, 6, 7, 8]?

A) [1, 2, 7, 4, 5, 6, 3, 8]

B) [1, 2, 7, 8, 5, 6, 3, 4]

C) [1, 2, 3, 8, 5, 6, 7, 4]

D) [1, 5, 3, 8, 2, 6, 7, 4]

A

C) [1, 2, 3, 8, 5, 6, 7, 4]

35
Q

Which of the following is a description of heapsort?

A) We sort the array such that in the corresponding tree on every path from the root to a leaf the labels are increasing.

B) First we turn the input-array into a max-heap. Then we repeatedly swap the first element with the last element from the max-heap, disconnect the last element, and reconstruct the heap.

C) First we turn the input-array into a max-heap. Then we repeatedly perform the operation remove-max and store the so found max at the end of the output-array.

D) We split in the input-array into two more or less equal parts. The second half is already a max-heap. We sort the first half and merge it with the second one to obtain a max-heap.

A

B) First we turn the input-array into a max-heap. Then we repeatedly swap the first element with the last element from the max-heap, disconnect the last element, and reconstruct the heap.

36
Q

We apply heapsort to the input-array [5, 8, 1, 3, 7, 2, 4, 6]. What is the result of the first part (apply BuildMaxHeap) and what is the result of performing the first iteration of the second part?

A) [8, 7, 4, 6, 5, 2, 1, 3] and [7, 5, 6, 4, 3, 2, 1, 8]

B) [8, 7, 6, 4, 5, 2, 1, 3] and [7, 5, 6, 4, 3, 2, 1, 8]

C) [8, 7, 6, 4, 5, 2, 1, 3] and [7, 6, 4, 3, 5, 2, 1, 8]

D) [8, 7, 4, 6, 5, 2, 1, 3] and [7, 6, 4, 3, 5, 2, 1, 8]

A

D) [8, 7, 4, 6, 5, 2, 1, 3] and [7, 6, 4, 3, 5, 2, 1, 8]
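
Both stages can be traced with a 0-based Python sketch of the standard MaxHeapify/BuildMaxHeap pseudocode (names are illustrative):

```python
def max_heapify(a, i, heapsize):
    """Sift a[i] down within a[0..heapsize-1] (0-based indices)."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heapsize and a[l] > a[largest]:
        largest = l
    if r < heapsize and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heapsize)

def build_max_heap(a):
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))

a = [5, 8, 1, 3, 7, 2, 4, 6]
build_max_heap(a)        # → [8, 7, 4, 6, 5, 2, 1, 3]
a[0], a[7] = a[7], a[0]  # first iteration: move the max to the back
max_heapify(a, 0, 7)     # → [7, 6, 4, 3, 5, 2, 1, 8]
print(a)
```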

37
Q

What is the running time of heapsort if the input is an array that is already sorted (in increasing order)?

A) The running time is then in O (n log(n)).

B) The running time is then in O(n).

C) The running time is then in O(n^2).

D) The running time is then in O (log(n)).

A

A) The running time is then in O (n log(n)).

38
Q

We consider the algorithm for the bottom-up max-heap construction. What happens if we let the loop index increase from 1 up to floor(A.length/2), instead of decreasing?

A) Then the algorithm is no longer correct.

B) It works equally well.

C) Then we need to take the ceiling of A.length / 2 instead of the floor in order for the algorithm to be correct.

D) Then we should do the recursive call of MaxHeapify on A and i-1 instead of on A and i in order for the algorithm to be correct.

A

A) Then the algorithm is no longer correct.

39
Q

We apply ExtractMax to a max-heap; the result is output 10 and the max-heap [6,3,5,2,1]. Which of the following could have been the input?

A) [10,6,5,3,2,1]

B) [10,3,6,2,1,5]

C) [10,6,5,2,3,1]

D) [10,5,6,2,1,3]

E) [10,5,1,2,3,6]

A

B) [10,3,6,2,1,5]

C) [10,6,5,2,3,1]

40
Q

What is the effect of applying the ExtractMax procedure on the max-heap [8,7,4,6,1,2,3,5] ?

A) [7,6,4,5,1,2,3,5] with heapsize 7

B) [7,6,4,5,1,2,3,5] with heapsize 8

C) [7,5,4,6,1,2,3,5] with heapsize 7

D) [7,6,5,4,3,2,1,5] with heapsize 7

A

A) [7,6,4,5,1,2,3,5] with heapsize 7

41
Q

What is the result of adding the key 10 to the max-heap [7,6,5,4,3,2,1,0,0,0] with initially heapsize 7?

A) [10,7,6,5,4,3,2,1,0,0] with heapsize 8.

B) [10,7,6,5,2,3,4,1,0,0] with heapsize 10.

C) [10,7,5,6,3,2,1,4,0,0] with heapsize 8.

D) [7,5,6,1,2,3,4,10,0,0] with heapsize 8.

A

C) [10,7,5,6,3,2,1,4,0,0] with heapsize 8.
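
The insertion can be traced with a 0-based sketch of heap insert (append at the end, then bubble up past smaller parents; names are illustrative):

```python
def heap_insert(a, heapsize, key):
    """Place key at index `heapsize` and bubble it up; returns the new size."""
    a[heapsize] = key
    i = heapsize
    while i > 0 and a[(i - 1) // 2] < a[i]:
        a[(i - 1) // 2], a[i] = a[i], a[(i - 1) // 2]  # swap with parent
        i = (i - 1) // 2
    return heapsize + 1

a = [7, 6, 5, 4, 3, 2, 1, 0, 0, 0]
size = heap_insert(a, 7, 10)
print(a, size)  # → [10, 7, 5, 6, 3, 2, 1, 4, 0, 0] 8
```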

42
Q

What is the result of executing the first four iterations (j = 1,2,3,4) of the algorithm partition to the input array A = [3,7,6,1,8,5,2,4] and indices p=1 and r=8? Note that the pivot is A[r].

A) A = [3,1,6,7,8,5,2,4]

B) A = [3,7,6,4,8,5,2,1]

C) A = [3,1,7,6,8,5,2,4]

D) A = [1,3,6,7,8,5,2,4]

A

A) A = [3,1,6,7,8,5,2,4]
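
The first four iterations can be replayed with a 0-based sketch of Lomuto partition (the helper name and the `iterations` cutoff are illustrative):

```python
def partition_steps(a, p, r, iterations):
    """Run only the first `iterations` loop passes of partition; pivot is a[r]."""
    a = list(a)
    pivot = a[r]
    i = p - 1
    for j in range(p, p + iterations):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]  # grow the <= pivot region
    return a

print(partition_steps([3, 7, 6, 1, 8, 5, 2, 4], 0, 7, 4))  # → [3, 1, 6, 7, 8, 5, 2, 4]
```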

43
Q

We run the procedure partition (used in quicksort) to completion. The resulting array is [2, 3, 1, 4, 5, 8, 7, 9, 10]. Which are all the keys that could have been the pivot?

A) 5, 9, 10

B) 4, 5, 8, 10

C) 4, 5, 9, 10

D) 4, 5, 8, 9, 10

A

C) 4, 5, 9, 10

44
Q

What is the worst-case running time of the algorithm partition used in quicksort, applied to an array A of n elements, and indices p and r in A?

A) The worst-case running time is in Theta(n).

B) The worst-case running time is in Theta (log(n)).

C) The worst-case running time is in Theta(n^2).

D) The worst-case running time is in Theta(n log(n)).

A

A) The worst-case running time is in Theta(n).

45
Q

Why is quicksort a good sorting algorithm?

A) Because it has good worst-case time complexity.

B) Because it is rather efficient, despite the fact that it does not have an optimal worst case time complexity.

C) Because it is a recursive algorithm.

D) Because it is the default sorting algorithm in the libraries of major programming languages.

A

B) Because it is rather efficient, despite the fact that it does not have an optimal worst case time complexity.

46
Q

Which recurrence equation correctly describes, next to T(0)=1, the worst-case running time of quicksort?

A) T(n) = 2T(n/2) + cn

B) T(n) = 2T(n/2)

C) T(n) = T(n) + T(0) + cn

D) T(n) = T(n-1) + T(0) + cn

A

D) T(n) = T(n-1) + T(0) + cn

47
Q

What is the running time in terms of Theta of quicksort on an array of size n in which all keys are identical?

A) Theta(n^2), because when partitioning one side will always be empty.

B) Theta(n), because the array is already sorted.

C) Theta(n log(n)), because the array is already sorted, but you still have the recursive calls.

D) Theta(1), because the array is trivial.

A

A) Theta(n^2), because when partitioning one side will always be empty.

48
Q

Suppose we use quicksort with partition splitting the array always in an 11-to-1 proportional split, so one part has length 1/12 and the other part has length 11/12. Then we have the following:

A) The recurrence for the running time of quicksort is T(n) = 2T(n/2) +cn with c a constant, and quicksort runs hence in O(n log n) time.

B) The recurrence for the running time of quicksort is T(n) = 12T(n/12) + cn with c a constant, and quicksort runs hence in O(n log n) time.

C) The recurrence for the running time of quicksort is T(n) = T(n/12) + T(11n/12) + cn with c a constant, and quicksort runs hence in O(n^2) time.

D) The recurrence for the running time of quicksort is T(n)= T(n/12) + T(11n/12) + cn with c a constant, and quicksort runs hence in O(n log n) time.

A

D) The recurrence for the running time of quicksort is T(n)= T(n/12) + T(11n/12) + cn with c a constant, and quicksort runs hence in O(n log n) time.

49
Q

We apply the first step of counting sort to the input array A = [0, 6, 4, 3, 4, 2, 1, 0] where the range of elements to be sorted is 0 up to and including 6. What is the counting array C that we get, and what is the counting-up-to array?

A) [0, 2, 1, 1, 1, 2, 1] and [0, 2, 3, 4, 5, 7, 8]

B) [2, 1, 1, 1, 2, 1, 0] and [2, 3, 4, 5, 7, 7, 8]

C) [2, 1, 1, 1, 2, 0, 1] and [2, 3, 4, 5, 7, 7, 8]

D) [0, 1, 2, 3, 4, 4, 6] and [0, 1, 3, 6, 7, 7, 13]

A

C) [2, 1, 1, 1, 2, 0, 1] and [2, 3, 4, 5, 7, 7, 8]
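
Both arrays can be computed with a sketch of counting sort's first two phases (the helper name is illustrative):

```python
def counting_arrays(a, k):
    """Per-key counts and running totals, for keys in 0..k."""
    c = [0] * (k + 1)
    for x in a:
        c[x] += 1
    counts = list(c)                # counting array C
    for v in range(1, k + 1):       # prefix sums: c[v] = number of elements <= v
        c[v] += c[v - 1]
    return counts, c

counts, totals = counting_arrays([0, 6, 4, 3, 4, 2, 1, 0], 6)
print(counts, totals)  # → [2, 1, 1, 1, 2, 0, 1] [2, 3, 4, 5, 7, 7, 8]
```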

50
Q

Why does the linear time complexity of counting sort not contradict the lower bound on the worst-case complexity of comparison-based sorting?

A) Because the lower bound is on worst-case and the linear time complexity of counting sort is best-case.

B) Because counting sort is not comparison-based.

C) Because the lower bound is linear time.

D) Because counting sort uses additional memory.

A

B) Because counting sort is not comparison-based.

51
Q

What is the worst-case running time of the following algorithm in terms of big-Oh? The input is an array A of size n.

for i = 0 to n - 1 do

for j = i + 1 to n - 1 do

if A[i] == A[j] then

  return i

A) O(n)

B) O(1)

C) O(log(n))

D) O(n^2)

A

D) O(n^2)

52
Q

Which of the following statements are true?

1) Swapping two elements of an array can be done in constant time.

2) The smallest key of a max-heap is at the leftmost leaf.

3) All algorithms for turning an array into a max-heap are in Omega (n log(n)).

4) For all comparison-based sorting algorithms the worst-case time complexity is in Omega (n log(n)).

A) Only 1, 4 are true.

B) Only 1 is true.

C) Only 2 and 3 are true.

D) Only 1, 3, 4 are true.

E) Only 2, 3, 4 are true.

A

A) Only 1, 4 are true.

53
Q

Which of the following is(are) in increasing rate of growth?

A) 2^7, n log(n), n^2, n!

B) 3n, 2^3, n log(n), n^3

C) n^2, n^3, n log(n), 2^n

D) n log(n), n^2, n^7, 2^n

E) n log(n), n^2, 2^7, n!

A

A) 2^7, n log(n), n^2, n!

D) n log(n), n^2, n^7, 2^n

54
Q

Which of the following sorting algorithms has/have a worst-case time complexity in Theta (n log(n) )?

A) insertion sort

B) selection sort

C) merge sort

D) heapsort

E) quicksort

F) counting sort

A

C) merge sort

D) heapsort

55
Q

What is the advantage of implementing a min-priority queue using a heap?

A) Removal is in constant time.

B) Adding is in constant time.

C) Adding and deleting are equally expensive.

D) Adding and deleting are both faster than if we use an array.

A

C) Adding and deleting are equally expensive.

56
Q

What is the worst-case running time of bucket sort ?

A) It is in Theta (n^2), because we may call insertion sort, which is quadratic, on an input of size n, namely if floor(n * A[i]) is the same for all indices i.

B) It is in Theta (n^3), because we may call insertion sort which is quadratic n times.

C) It is in Theta (n), because the for-loops are not nested.

D) It is in Theta (n^2), because worst-case sorting is always in n^2.

A

A) It is in Theta (n^2), because we may call insertion sort, which is quadratic, on an input of size n, namely if floor(n * A[i]) is the same for all indices i.

57
Q

What is a reason to use insertion sort as subroutine in bucket sort?

A) It has a good worst-case time complexity.

B) It has a good best-case time complexity.

C) It performs relatively well on small inputs.

D) It performs relatively well on lists.

A

C) It performs relatively well on small inputs.

58
Q

We consider counting sort. What happens if we change the last for-loop by letting the index j go from 1 up to A.length?

A) The algorithm still sorts correctly, but is no longer stable.

B) The algorithm sorts in reverse order.

C) The algorithm is no longer correct.

D) The change has no effect at all

A

A) The algorithm still sorts correctly, but is no longer stable.

59
Q

What is the minimal number of leaves that a decision tree for some sorting algorithm for an input of size n has?

A) n!

B) n log(n)

C) 2^n

D) 2^(n-1)

A

A) n!

60
Q

We consider a decision tree for a comparison-based sorting algorithm for sorting an array of size n. The smallest possible depth of a leaf is

A) log(n)

B) n

C) n!

D) n log(n)

A

B) n

61
Q

Which of the following statements are true?

1) Swapping two elements of an array can be done in constant time.

2) The smallest key of a max-heap is at the leftmost leaf.

3) All algorithms for turning an array into a max-heap are in Omega (n log(n)).

4) For all comparison-based sorting algorithms the worst-case time complexity is in Omega (n log(n)).

A) Only 1, 4 are true.

B) Only 1 is true.

C) Only 2 and 3 are true.

D) Only 1, 3, 4 are true.

E) Only 2, 3, 4 are true.

A

A) Only 1, 4 are true.

62
Q

What does it mean that the lower-bound on the worst-case time complexity for comparison-based sorting is Omega (n log(n))?

A) It means that using comparison-based sorting we cannot sort faster than in n log(n).

B) It means that the worst-case of a comparison-based sorting algorithm is at least n log(n).

C) It means that the worst-case of a comparison-based sorting algorithm is at most n log(n).

D) It means that using comparison-based sorting we cannot sort slower than in n log(n).

A

B) It means that the worst-case of a comparison-based sorting algorithm is at least n log(n).

63
Q

Which of the following statements about arrays and linked lists is FALSE?

A) An array stores a set of items in a consecutive chunk of memory, but a linked list might store each item of a set of items in a different location of memory.

B) Both arrays and linked lists have the same performance when it comes to a search operation.

C) A linked list requires more space than an array, for storing a given set of items, due to the pointers associated with each node of the linked list.

D) A linked list and an array require the same amount of space, for storing a given set of items, as pointers associated with each node of the linked list do not require additional space.

A

D) A linked list and an array require the same amount of space, for storing a given set of items, as pointers associated with each node of the linked list do not require additional space.

64
Q

Given two singly linked lists A and B each storing n keys, how fast can someone compute the number of keys that are present in both A and B (i.e., the number of common keys in A and B), in the worst-case?

A) Theta (1)

B) Theta (n)

C) Theta (n log n)

D) Theta(n^2)

A

C) Theta (n log n)

65
Q

Given two sorted singly linked lists A and B each storing n keys, how fast can someone compute the number of keys that are present in both A and B (i.e., the number of common keys in A and B), in the worst-case?

A) Theta (1)

B) Theta(n)

C) Theta (n log n)

D) Theta (n^2)

A

B) Theta(n)
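
The linear bound comes from a merge-style two-pointer scan: each step advances at least one pointer, so there are at most 2n steps. A sketch (assuming the keys within each list are distinct; the name is illustrative):

```python
def common_count(a, b):
    """Count keys present in both sorted sequences via a two-pointer scan."""
    i = j = count = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            count += 1
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return count

print(common_count([1, 2, 4, 7], [2, 3, 4, 8]))  # → 2
```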

66
Q

Suppose we have access to a node that we wish to remove from a linked list consisting of n nodes.

A) In a singly linked list this can be done in constant time, and in a doubly linked list this can be done in constant time.

B) In a singly linked list this can be done in constant time, in a doubly linked lists this is in the worst case linear in n.

C) In a singly linked list this is in the worst case linear in n, in a doubly linked list this can be done in constant time.

D) In a singly linked list this is in the worst case linear in n, in a doubly linked list this is in the worst case linear in n.

A

C) In a singly linked list this is in the worst case linear in n, in a doubly linked list this can be done in constant time.

67
Q

We do hashing and solve collisions using open addressing. What is a drawback of this approach?

A) Searching is difficult.

B) Deletion is difficult.

C) Insertion is difficult.

D) This approach has no drawbacks.

A

B) Deletion is difficult.

68
Q

What is the performance of hashing n items in a table of size m, where we solve collisions using chaining, under the assumption of simple uniform hashing?

A) In the worst case in Theta (1 + (n/m)) and in the average case in Theta (1 + (n/m)).

B) In the worst case in Theta (n), and in the average case in Theta (n).

C) In the worst case in Theta (n), and in the average case in Theta (1).

D) In the worst case in Theta (n), and in the average case in Theta (1 + (n/m)).

A

D) In the worst case in Theta (n), and in the average case in Theta (1 + (n/m)).

69
Q

What is the worst-case running time of binary search, on sorted arrays of length n?

A) Theta (log n)

B) Omega (n)

C) O(1)

D) Theta (1)

A

A) Theta (log n)

70
Q

We build a binary search tree by inserting the keys from a finite set one by one. The result depends on the order in which we insert the keys.

True

False

A

True

71
Q

We build a binary search tree by inserting the keys from a finite set one by one. The height depends on the order in which we insert the keys.

True

False

A

True

72
Q

Why is it possible that using a binary search tree for implementing a dynamic set is not an improvement over using linked lists?

A) Because they use pointers.

B) Because a binary search tree may have multiple occurrences of the same key.

C) Because the performance of search depends on the height which could be n.

D) Because when searching in a tree we may have to consider both the left- and the right subtree.

A

C) Because the performance of search depends on the height which could be n.

73
Q

We consider a binary search tree whose keys are the numbers 1, … , 1000; no key occurs more than once. We search for a node with key 363. Which of the following sequences cannot be the keys of the nodes we visit while searching?

A) 2, 252, 401, 398, 330, 344, 397, 363.

B) 924, 220, 911, 244, 898, 258, 362, 363.

C) 925, 202, 911, 240, 912, 245, 363.

D) 2, 399, 387, 219, 266, 382, 381, 278, 363.

E) 935, 278, 347, 621, 399, 392, 358, 363.

A

C) 925, 202, 911, 240, 912, 245, 363.

74
Q

Which of the following recurrence equations with T(0) = 1 gives T(n) in Theta(n^2)?

A) T(n) = 2T(n/2) + n

B) T(n) = T(n-1) + 1

C) T(n) = T(n-1) + n

D) T(n) = T(n-1)

A

C) T(n) = T(n-1) + n

75
Q

We have T(n) in O(n^2) with T a function for the worst-case time complexity of algorithm A. Do we have T(n) in Theta(n^2)?

A) Yes, because an upper bound for the worst-case gives a lower bound.

B) No, T is not necessarily bound from below by n^2.

A

B) No, T is not necessarily bound from below by n^2.

76
Q

What is the worst-case running time of the fastest algorithm for checking whether the items in an array of length n are in sorted order?

A) in Theta (1)

B) in Theta (n log(n))

C) in Theta (log(n))

D) in Theta (n)

A

D) in Theta (n)

77
Q

What is the result of applying the first round of radix sort to the input sequence 189, 44, 84, 69, 511, 33, 2, 96?

A) 44, 84, 69, 33, 2, 96, 189, 511

B) 511, 2, 33, 44, 84, 96, 189, 69

C) 2, 33, 44, 84, 96, 69, 511, 189

D) 511, 189, 33, 44, 84, 96, 69, 2

A

B) 511, 2, 33, 44, 84, 96, 189, 69
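
One round of LSD radix sort is a stable sort on the last digit; Python's `sorted` is documented to be stable, so it can serve as the stable subroutine in a sketch (the helper name is illustrative):

```python
def radix_round(nums, digit):
    """One stable pass of LSD radix sort on the given decimal digit (0 = last)."""
    return sorted(nums, key=lambda x: (x // 10 ** digit) % 10)

print(radix_round([189, 44, 84, 69, 511, 33, 2, 96], 0))
# → [511, 2, 33, 44, 84, 96, 189, 69]
```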

78
Q

We consider singly linked lists. Which of the following operations in general cannot be done in elementary time, with a node x given?

A) Adding a node after x.

B) Returning the key of x.

C) Adding a node before x.

D) Removing the node after x (assuming there is such a node).

A

C) Adding a node before x.

79
Q

What traversal is done using the following pseudo-code?

Algorithm WhichTraversal(v):
    new Stack S
    push(S, v)
    while the stack S is not empty do
        x := pop(S)
        visit(x)
        if x.right != nil then
            push(S, x.right)
        if x.left != nil then
            push(S, x.left)

A) preorder traversal

B) postorder traversal

C) inorder traversal

D) level order traversal

A

A) preorder traversal
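The pseudo-code translates directly to Python; pushing the right child before the left one is exactly what makes the pop order a preorder traversal:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def which_traversal(v):
    """Stack-based traversal from the pseudo-code: visit on pop,
    push right before left, so the left subtree is popped first."""
    out, stack = [], [v]
    while stack:
        x = stack.pop()
        out.append(x.key)
        if x.right is not None:
            stack.append(x.right)
        if x.left is not None:
            stack.append(x.left)
    return out
```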

80
Q

What is the worst-case running time for searching a key in a Binary Search Tree (BST) or in an AVL tree, consisting of n nodes?

A) In O(log(n)) for a BST and in O(log(n)) for an AVL-tree.

B) In O(n) for a BST and in O(log(n)) for an AVL-tree.

C) In O(log(n)) for a BST and in O(n) for an AVL-tree.

D) In O(n) for a BST and in O(n) for an AVL-tree.

A

B) In O(n) for a BST and in O(log(n)) for an AVL-tree.

81
Q

We consider a binary search tree T with all keys different, and a node x in T that has two children. Which of the following is correct?

A) The successor of x has no left child and the predecessor of x has no right child.

B) The successor of x has no right child and the predecessor of x has no left child.

C) The successor of x has no left child and the predecessor of x has no left child.

D) The successor of x has no right child and the predecessor of x has no right child.

A

A) The successor of x has no left child and the predecessor of x has no right child.

82
Q

We consider adding a fresh key k to a binary search tree containing n different keys. Which of the following is correct?

A) We perform the search procedure to find the place where we should add the new node.

B) We search for the place where the new node should be, using a second pointer to keep track of the parent.

C) We add the node at the last leaf and restructure if necessary to make sure that the requirements on binary search trees are satisfied.

D) We add the node as new root, and restructure if necessary to make sure that the requirements on binary search trees are satisfied.

A

B) We search for the place where the new node should be, using a second pointer to keep track of the parent.

83
Q

We consider a binary search tree consisting of n nodes. We do the following: first we find the tree-minimum, and then we repeatedly call the procedure for successor. What is correct?

A) This gives an inorder traversal.

B) This gives a preorder traversal.

C) This gives a postorder traversal.

D) We do not see all nodes of the binary search tree in this way.

A

A) This gives an inorder traversal.

84
Q

We consider an AVL-tree consisting of a root with label 3 and one leaf with label 5. Which of the following additions does not cause an imbalance?

A) Adding a node with key 2.

B) Adding a node with key 4.

C) Adding a node with key 6.

D) Adding a node with key 8.

A

A) Adding a node with key 2.

85
Q

Deletion from an AVL-tree is done as follows:

A) Deletion as from binary search trees, followed by at most Theta (log(n)) rebalance steps.

B) Deletion as from binary search trees, followed by at most one rebalance step.

C) Deletion as from binary search trees.

D) Deletion of the maximal element at the top.

A

A) Deletion as from binary search trees, followed by at most Theta (log(n)) rebalance steps.

86
Q

We consider the recursion tree for the naive recursive program for computing the Fibonacci numbers applied to input n. The Fibonacci numbers are defined by F(1) = F(2) = 1 and for n ≥ 3 we have F(n) = F(n−1) + F(n−2).

A) The recursion tree has height approximately n, and internal nodes have two successors, therefore the running time is quadratic.

B) The recursion tree is an almost complete binary tree, therefore the running time is logarithmic.

C) The recursion tree has height approximately log(n), and internal nodes have two successors, therefore the running time is quadratic.

D) The recursion tree has height approximately n, and internal nodes have two successors, therefore the running time is exponential.

A

D) The recursion tree has height approximately n, and internal nodes have two successors, therefore the running time is exponential.
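The exponential behaviour can be observed directly by counting calls; a Python sketch with an explicit call counter (names are illustrative):

```python
def fib_counted(n, counter):
    """Naive recursive Fibonacci; counter[0] records the number of
    calls. The recursion tree has height about n with two children
    per internal node, so the call count grows exponentially in n."""
    counter[0] += 1
    if n <= 2:
        return 1
    return fib_counted(n - 1, counter) + fib_counted(n - 2, counter)
```

The call count C(n) satisfies C(n) = C(n−1) + C(n−2) + 1, which itself grows like the Fibonacci numbers, i.e. exponentially.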

87
Q

Which of the following statements describes the bottom-up dynamic programming approach?

A) We solve subproblems starting from the largest one, and use solutions for previously solved larger subproblems.

B) We solve subproblems starting from the smallest one, and use solutions for previously solved smaller subproblems.

C) We solve the original problem recursively, and store solutions to subproblems in an array to avoid solving a problem more than once.

D) We solve the original problem recursively, and store the solution to the original problem in an array to avoid solving a problem more than once.

A

B) We solve subproblems starting from the smallest one, and use solutions for previously solved smaller subproblems.

88
Q

What is the worst-case time complexity of the naive recursive algorithm for rod-cutting, and what is the worst-case time complexity of the dynamic programming algorithm, for an input of length n?

A) The recursive algorithm is in Theta(2^n) and the dynamic programming algorithm is in Theta(n log(n)).

B) The recursive algorithm is in Theta(n^2) and the dynamic programming algorithm is in Theta(n log(n)).

C) The recursive algorithm is in Theta(2^n) and the dynamic programming algorithm is in Theta(n^2).

D) The recursive algorithm is in Theta(n^2) and the dynamic programming algorithm is in Theta(n).

A

C) The recursive algorithm is in Theta(2^n) and the dynamic programming algorithm is in Theta(n^2).

89
Q

What is, together with the clause T(0)=1, the recurrence equation for the recursive algorithm for rod cutting, and what is hence its worst-case running time?

A) T(n) = 1 + T(0) + T(1) + … + T(n-1), and hence the worst-case running time is in Theta (2^n).

B) T(n) = T(n-1) + T(n-2) + 1, and hence the worst-case running time is in Theta (2^n).

C) T(n) = T(n-1) + 1, and hence the worst-case time complexity is in Theta (n).

D) T(n) = T(n-1) + Theta (n), and hence the worst-case time complexity is in Theta (n^2).

A

A) T(n) = 1 + T(0) + T(1) + … + T(n-1), and hence the worst-case running time is in Theta (2^n).
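The bottom-up dynamic programming version avoids this recurrence by filling a table of optimal revenues for lengths 1..n; the two nested loops give the Theta(n^2) bound from the previous question. A Python sketch, using the sample price table from CLRS in the test (assumed here for illustration):

```python
def cut_rod(prices, n):
    """Bottom-up rod cutting: r[j] is the best revenue for a rod of
    length j; prices[i] is the price of a piece of length i."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):           # first piece has length i
            best = max(best, prices[i] + r[j - i])
        r[j] = best
    return r[n]
```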

90
Q

What is the space complexity of the dynamic programming algorithm computing a longest common subsequence of two sequences, for inputs of size n?

A) in Theta (log(n))

B) in Theta (n)

C) in Theta (n^2)

D) in Theta (2^n)

A

C) in Theta (n^2)
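The Theta(n^2) space is the (m+1) by (n+1) table c itself; a Python sketch of the standard length-only algorithm:

```python
def lcs_length(x, y):
    """Classic DP: c[i][j] is the LCS length of x[:i] and y[:j].
    The table dominates the space use: Theta(m * n)."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]
```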

91
Q

We consider the activity selection problem. Which of the following is a correct alternative approach?

A) We use a greedy choice for the last activity to start.

B) We use a greedy choice for the first activity to start.

C) We use a greedy choice for the last activity to end.

D) We use a greedy choice for the activity with the shortest duration.

A

A) We use a greedy choice for the last activity to start.

92
Q

We consider the activity selection problem. We aim to show correctness of the greedy choice for a smallest finish time. How do we proceed?

A) We show by induction on the number of activities that the activity with smallest finish time is in the solution.

B) Assume a solution. If an activity a with smallest finish time is in the solution then done, if not, then we adapt the solution to contain a.

C) We show that a solution cannot exist if there is no smallest finish time.

D) We show by induction on the duration of the activities that the activity with smallest finish time is in the solution.

A

B) Assume a solution. If an activity a with smallest finish time is in the solution then done, if not, then we adapt the solution to contain a.

93
Q

We consider the greedy algorithm for activity selection. What requirement should the input satisfy?

A) The activities with start-time s and finish-time f are ordered with increasing finish time.

B) The activities with start-time s and finish-time f are ordered with increasing duration.

C) The activities with start-time s and finish-time f are ordered with increasing start-time.

D) The activities with start-time s and finish-time f are mutually compatible.

A

A) The activities with start-time s and finish-time f are ordered with increasing finish time.
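Given that ordering, the greedy algorithm is a single linear scan that keeps an activity whenever it is compatible with the last one chosen; a Python sketch (names are illustrative):

```python
def select_activities(activities):
    """Greedy choice for smallest finish time. The input is assumed
    sorted by increasing finish time, as the question requires."""
    chosen, last_finish = [], float("-inf")
    for start, finish in activities:
        if start >= last_finish:   # compatible with the last pick
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```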

94
Q

Consider the recurrence (16.2) from the book (given on an exam) for activity selection. What is the worst-case running time of a dynamic programming algorithm based on that recurrence, for inputs of size n?

A) in O(n^3) because we make an nxn table and for every cell go over all n possibilities for a_k.

B) in O(n^2) because we consider for every start time all possible end times.

C) in O(n^2) because we make an nxn table to find all mutually compatible activities.

D) in O(2^n) because we consider for every activity a_k all possible compatible choices from the remaining activities.

A

A) in O(n^3) because we make an nxn table and for every cell go over all n possibilities for a_k.

95
Q

We consider the making change problem where the aim is to make a given amount with a minimal number of coins; for every coin value we have an unlimited number of coins available. Which of the following is correct?

A) In some cases depending on the coin values we can solve this problem using dynamic programming, and we can never use a greedy algorithm.

B) In all cases we can solve this problem using dynamic programming, and in some cases depending on the coin values we can use a greedy algorithm.

C) In all cases we can solve this problem using dynamic programming, and in all cases we can use a greedy algorithm.

D) In some cases depending on the coin values we can solve this problem using dynamic programming, and in all cases we can use a greedy algorithm.

A

B) In all cases we can solve this problem using dynamic programming, and in some cases depending on the coin values we can use a greedy algorithm.

96
Q

We consider the fractional knapsack problem: take a part of each of the n items that each have a weight and a benefit, with the aim to maximize the total benefit under the constraint of not exceeding the given maximal total weight. What is the underlying idea for an algorithm to solve this problem?

A) A greedy choice for the highest benefit.

B) Dynamic programming.

C) A greedy choice for the highest value (benefit over weight, b/w).

D) A greedy choice for the lowest weight.

A

C) A greedy choice for the highest value (benefit over weight, b/w).
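A Python sketch of this greedy choice; sorting by density b/w dominates the running time (names are illustrative):

```python
def fractional_knapsack(items, capacity):
    """Greedy on value density b/w: take as much as fits of the
    highest-density item first. items is a list of (weight, benefit)."""
    total = 0.0
    for w, b in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        take = min(w, capacity)    # take the whole item, or the rest
        total += take * (b / w)
        capacity -= take
        if capacity == 0:
            break
    return total
```

Applied to the input of the next question (items (w=3, b=9), (w=2, b=2), (w=3, b=6), capacity 5), this takes all of item 1 and 2 units of item 3.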

97
Q

Consider the algorithm for fractional knapsack. What is the worst-case time complexity of the algorithm, for S consisting of n elements and W the maximal capacity?

A) It depends. If S is implemented using a heap, giving a priority queue, then in n log(n).

B) In n+W.

C) In nW.

D) In n^2.

A

A) It depends. If S is implemented using a heap, giving a priority queue, then in n log(n).

98
Q

We consider the algorithm for fractional knapsack, and the following input: item 1 with w1=3 and b1=9, item 2 with w2=2 and b2=2, item 3 with w3=3 and b3=6, and total capacity 5. Our choice is x1=3, x2=1, x3=1. What is the 'mistake part'?

A) 3

B) 9

C) 1

D) 2

A

C) 1

99
Q

We consider the knapsack01 problem in the special setting that every item has weight 1. Can we simplify the dynamic programming approach?

A) No, we can only use the dynamic programming algorithm.

B) Yes, we can use a greedy choice for an item with smallest benefit.

C) Yes, we can use a greedy choice for an item with largest benefit.

D) Yes, we can use the dynamic programming algorithm but using less space.

A

C) Yes, we can use a greedy choice for an item with largest benefit.

100
Q

Consider the space-optimized version of the algorithm for knapsack01 (pseudo-code on the slides). What is the result of applying this algorithm to the set consisting of item 1 with w1=1 and b1=3, and item 2 with w2=1 and b2=2, and total capacity W=3?

A) B = [0, 3, 3, 5]

B) B = [0, 5, 5, 5]

C) B = [0, 3, 5, 5]

D) B = [0, 3, 5, 0]

A

C) B = [0, 3, 5, 5]

101
Q

(Follow-up to the previous question.) Suppose that we change the space-optimized version of the algorithm for knapsack01 by letting w increase from w_k to W (instead of decrease). What is the result of applying the modified version of the algorithm to the set consisting of item 1 with w1=1 and b1=3, and item 2 with w2=1 and b2=2, and total capacity W=3?

A) B = [0, 3, 5, 5]

B) B = [0, 5, 7, 7]

C) B = [0, 3, 3, 5]

D) B = [0, 3, 6, 9]

A

D) B = [0, 3, 6, 9]
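Both this question and the previous one can be reproduced with a single 1-D array; the loop direction is the only difference (function name and flag are illustrative):

```python
def knapsack01(items, W, decreasing=True):
    """Space-optimized 0/1 knapsack over one array B of size W+1.
    Iterating w downwards gives the correct 0/1 answer; iterating
    upwards lets an item be taken more than once (unbounded knapsack
    behaviour), which is what the follow-up question exhibits."""
    B = [0] * (W + 1)
    for w_k, b_k in items:
        rng = range(W, w_k - 1, -1) if decreasing else range(w_k, W + 1)
        for w in rng:
            B[w] = max(B[w], B[w - w_k] + b_k)
    return B
```

With items (w=1, b=3) and (w=1, b=2) and W=3, the decreasing loop yields [0, 3, 5, 5], while the increasing loop stacks copies of item 1 and yields [0, 3, 6, 9].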

102
Q

Why can the algorithm for Huffman codes be in O(n log(n)) for input sets of size n?

A) Because we can sort the frequencies in O(n log(n)).

B) Because if we implement the min-priority queue using a min-heap, then extract and insert are in O(log(n)), and hence the for-loop is in total in O(n log(n)).

C) Because the coding tree is a full binary tree, hence of height log(n), and hence the for-loop is in total in O(n log(n)).

D) Because if we implement the min-priority queue using an array, then extract and insert are in O(log(n)), and hence the for-loop is in total in O(n log(n)).

A

B) Because if we implement the min-priority queue using a min-heap, then extract and insert are in O(log(n)), and hence the for-loop is in total in O(n log(n)).
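A Python sketch of the merge loop using the standard-library heapq as min-priority queue; for simplicity it returns only the total cost of the optimal code, i.e. the weighted path length of the tree (function name is illustrative):

```python
import heapq

def huffman_cost(freqs):
    """Huffman's algorithm on a min-heap of frequencies: each of the
    n-1 merges does two extract-mins and one insert, each O(log n),
    so the loop is in O(n log n) overall."""
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b              # every merge adds one level of depth
        heapq.heappush(heap, a + b)
    return cost
```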

103
Q

Which of the following lists is (asymptotically) in increasing order of growth?

A) 2^7, 7n, n^7, n log(n), 7^n

B) 7n, n log(n), 2^7, 7n, n^7

C) 2^7, 7n, n log(n), n^7, 7^n

D) 7n, n log(n), 7^n, 2^7, n^7

A

C) 2^7, 7n, n log(n), n^7, 7^n

104
Q

What is the minimum number of leaves that a decision tree for some sorting algorithm for inputs of size n has?

A) n log(n)

B) n!

C) 2^{n−1}

D) 2^n

A

B) n!

105
Q

What is correct for T defined by the recurrence equations T(0) = 1, and for n>0 by T(n) = 1 + Sigma_{j=0}^{n-1} T(j)?

A) T (n) in O(n).

B) T(n) in O(2^n).

C) T(n) in O(n^2).

D) T(n) in O(n log(n))

A

B) T(n) in O(2^n).

106
Q

Suppose T is an almost complete binary tree with 32 internal nodes (that is: nodes that are not a leaf). How many nodes does T have in total?

A) 63

B) 33

C) 65

D) 31

A

C) 65

107
Q

We consider non-empty trees. Which of the following statements is correct?

A. There exists a binary search tree that is also a min-heap.

B. There exists a binary search tree that is also a max-heap.

C. Every binary search tree is also a min-heap.

D. Every min-heap is also a binary search tree.

A and B.

A and B and C.

B and D.

A and C.

A

A and B.

108
Q

What is the worst-case running time of counting sort?

A) Theta(n log n) with n the number of elements in the input-array.

B) Theta(n + k) with n the number of elements in the input-array and k the range of those numbers.

C) Theta(n^2) with n the number of elements in the input-array.

D) Theta(1 + k) with k the range of the numbers in the input-array.

A

B) Theta(n + k) with n the number of elements in the input-array and k the range of those numbers.
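A Python sketch of the stable counting sort; the three loops run over n, k and n items respectively, giving the Theta(n + k) bound:

```python
def counting_sort(a, k):
    """Stable counting sort for keys in 0..k-1, in Theta(n + k).
    The prefix sums in `count` give each key its output position."""
    count = [0] * k
    for x in a:                    # histogram: Theta(n)
        count[x] += 1
    for v in range(1, k):          # prefix sums: Theta(k)
        count[v] += count[v - 1]
    out = [0] * len(a)
    for x in reversed(a):          # backwards pass keeps it stable
        count[x] -= 1
        out[count[x]] = x
    return out
```

The stability of this pass is exactly what radix sort (question 77) relies on.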

109
Q

Bucket sort has an average-case time complexity of Θ(n), with n the size of the input-array. Which assumption is needed for this?

A) We use a sorting algorithm in Θ(n log n) as subroutine.

B) The elements of the input-array are uniformly distributed over [0, 1).

C) We use insertion sort as subroutine which performs well on small inputs.

D) The elements of the input-array come from a fixed range of 0,…,k−1 for some k.

A

B) The elements of the input-array are uniformly distributed over [0, 1).

110
Q

We implement a queue using a 'circular' array with indices 1, …, n; see the slides for the pseudo-code (would be given on an exam). How do we test whether the queue is full or empty?

A) Test on empty queue: Q.head = 0. Test on full queue: (Q.tail = n) and (Q.head = 1).

B) Test on empty queue: Q.head = Q.tail. Test on full queue: (Q.head = Q.tail + 1) or (Q.head = 1 and Q.tail = n).

C) Test on empty queue: Q.head = Q.tail. Test on full queue: (Q.tail = n) and (Q.head = 1).

D) Test on empty queue: Q.head = 0. Test on full queue: (Q.head = Q.tail + 1) or (Q.head = 1 and Q.tail = n).

A

B) Test on empty queue: Q.head = Q.tail. Test on full queue: (Q.head = Q.tail + 1) or (Q.head = 1 and Q.tail = n).
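A Python sketch of these tests, keeping indices 1..n as in the question (class name is illustrative; one array slot is sacrificed so that full and empty remain distinguishable):

```python
class CircularQueue:
    """Queue in a circular array with indices 1..n. Empty when
    head == tail; full when head == tail + 1, or head == 1 and
    tail == n, so at most n-1 elements fit."""
    def __init__(self, n):
        self.n = n
        self.Q = [None] * (n + 1)   # slot 0 unused to keep 1..n indexing
        self.head = self.tail = 1

    def is_empty(self):
        return self.head == self.tail

    def is_full(self):
        return (self.head == self.tail + 1 or
                (self.head == 1 and self.tail == self.n))

    def enqueue(self, x):
        assert not self.is_full()
        self.Q[self.tail] = x
        self.tail = 1 if self.tail == self.n else self.tail + 1

    def dequeue(self):
        assert not self.is_empty()
        x = self.Q[self.head]
        self.head = 1 if self.head == self.n else self.head + 1
        return x
```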

111
Q

We consider a part of an AVL-tree: a node y with on the left a subtree A, on the right a node x. That node x has on the left a subtree B and on the right a subtree C. This is an imbalance of the form right-right caused by an insertion, with y the lowest unbalanced node. The subtree A has height h both before and after the insertion. What is the height of subtree C before and after the insertion, and how is the balance restored?

A) Before insertion the height of C is h-1 and after insertion it is h; balance is restored by a double rotation.

B) Before insertion the height of C is h-1 and after insertion it is h; balance is restored by a single rotation.

C) Before insertion the height of C is h and after insertion it is h+1; balance is restored by a double rotation.

D) Before insertion the height of C is h and after insertion it is h+1; balance is restored by a single rotation.

A

D) Before insertion the height of C is h and after insertion it is h+1; balance is restored by a single rotation.