Practice Final Flashcards

1
Q

What is a reason for using insertion sort as a subroutine in bucket sort?

A) Because insertion sort, although quadratic, performs well on small inputs.

B) Because insertion sort is quadratic.

C) Because insertion sort is linear.

D) Because it works well on lists.

A

A) Because insertion sort, although quadratic, performs well on small inputs.
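
A minimal sketch (not the course's reference code) of the idea: insertion sort is quadratic in the size of a bucket, but each bucket is expected to be tiny, so the quadratic subroutine is cheap in practice. The bucket count and the assumption of keys uniformly distributed in [0, 1) below are illustrative choices.

def insertion_sort(a):
    # Quadratic in len(a), but very fast when len(a) is small.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def bucket_sort(values, n_buckets=10):
    # Assumes the values are uniformly distributed in [0, 1).
    buckets = [[] for _ in range(n_buckets)]
    for v in values:
        buckets[int(v * n_buckets)].append(v)
    out = []
    for b in buckets:
        out.extend(insertion_sort(b))   # each bucket is expected to be small
    return out

print(bucket_sort([0.42, 0.12, 0.99, 0.40, 0.30, 0.31]))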

2
Q

Which of the following statements are true?

1) Swapping two elements of an array can be done in constant time.

2) The smallest key of a max-heap is at the leftmost leaf.

3) All algorithms for turning an array into a max-heap are in Omega(n log(n)).

4) For all comparison-based sorting algorithms the worst-case time complexity is in Omega(n log(n)).

A) Only 1, 4 are true.

B) Only 1 is true.

C) Only 2 and 3 are true.

D) Only 1, 3, 4 are true.

A

A) Only 1, 4 are true.

3
Q

What is a drawback of implementing a stack using an array?

A) The size of an array is fixed and the size of a stack is not fixed a priori.

B) An array is less efficient than a linked structure.

C) A stack is vertical and an array is horizontal.

D) Adding and removing elements from an array is not efficient.

A

A) The size of an array is fixed and the size of a stack is not fixed a priori.

4
Q

We implement a priority queue of n elements using a heap. What is the worst-case time complexity of the operations for adding and deleting?

A) Adding is in O(log(n)) and deleting is in O(log(n)).

B) Adding is in O(n) and deleting is in O(log(n)).

C) Adding is in O(log(n)) and deleting is in O(n).

D) Adding is O(n) and deleting is in O(n).

A

A) Adding is in O(log(n)) and deleting is in O(log(n)).
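
A small illustration (an assumed setup using Python's heapq module, which maintains a binary min-heap inside a list): both adding and deleting sift an element along a single root-to-leaf path of the almost complete binary tree, which is why each operation is O(log(n)) in the worst case.

import heapq

pq = []                       # the heap lives in a plain list
for key in [7, 2, 9, 4]:
    heapq.heappush(pq, key)   # sift-up along one path: O(log n)

while pq:
    print(heapq.heappop(pq))  # remove the minimum, sift-down: O(log n); prints 2, 4, 7, 9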

5
Q

What is the worst-case time complexity of searching for a key in a hash table of size m storing n elements, if collisions are resolved by chaining?

A) O(n)

B) O(n+m)

C) O(m log(n))

D) O(n log(n))

A

A) O(n)
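
A search only ever inspects the single chain that the key hashes to, and in the worst case all n keys collide into that one chain, so the scan is linear in n regardless of the table size m. A minimal sketch, with a deliberately bad (illustrative) hash function that forces this worst case:

m = 8
table = [[] for _ in range(m)]
bad_hash = lambda k: 0            # adversarial choice: every key collides

for k in range(100):              # n = 100 keys, all land in chain 0
    table[bad_hash(k)].append(k)

def search(key):
    # One hash evaluation, then a scan of a single chain; in the worst
    # case that chain holds all n keys -> O(n), independent of m.
    return key in table[bad_hash(key)]

print(search(99), len(table[0]))  # True 100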

6
Q

We use hashing and resolve collisions by linear probing. We use a hash table of size 9 with indices starting at 0. We use the hash function h(k) = k mod 9. We add the keys 5, 28, 19, 15, 20, 33, 12, 17, 10 in this order to the initially empty hash table. At which index is the key 10 stored?

A) At index 0.

B) At index 1.

C) At index 2.

D) At index 8.

A

A) At index 0.
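
A short script (an illustrative sketch, not the exam's pseudocode) that replays the insertions and prints where each key ends up; key 10 hashes to index 1, probes the occupied indices 1 through 8, and wraps around to index 0.

m = 9
table = [None] * m

def insert(k):
    i = k % m                    # h(k) = k mod 9
    while table[i] is not None:  # linear probing: step to the next index
        i = (i + 1) % m
    table[i] = k
    return i

for k in [5, 28, 19, 15, 20, 33, 12, 17, 10]:
    print(k, '->', insert(k))    # 10 ends up at index 0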

7
Q

Suppose T is an almost complete binary tree with 32 internal nodes (that is, nodes that are not leaves). How many nodes does T have in total?

A) 33

B) 65

C) 63

D) 31

A

B) 65
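
Sketch of the reasoning: in the array (heap) layout of an almost complete binary tree with n nodes, node i is internal exactly when its left child index 2i + 1 is still inside the array, which gives floor(n/2) internal nodes. Both n = 64 and n = 65 yield 32 internal nodes; 65 is the only such total among the options. A quick illustrative check:

def internal_nodes(n):
    # Heap indexing: node i has a left child iff 2*i + 1 < n.
    return sum(1 for i in range(n) if 2 * i + 1 < n)

for n in (63, 64, 65):
    print(n, internal_nodes(n))   # 63 -> 31, 64 -> 32, 65 -> 32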

8
Q

We search for key 365 in a binary search tree with keys from 1 to 1000; no key occurs more than once. Which of the following sequence(s) cannot be the keys of the nodes visited in our search?

A) 4, 254, 403, 400, 332, 346, 399, 365.

B) 926, 222, 913, 246, 900, 260, 364, 365.

C) 927, 204, 913, 242, 914, 247, 365.

D) 4, 401, 389, 221, 268, 284, 383, 290, 365.

E) 937, 280, 349, 623, 301, 394, 360, 365.

A

C) 927, 204, 913, 242, 914, 247, 365.

E) 937, 280, 349, 623, 301, 394, 360, 365.
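
A sequence is a feasible search path exactly when every visited key stays inside the open interval (lower, upper) implied by the turns taken so far. A small checker along those lines (a sketch, not the exam's notation) flags C, where 914 appears after the search went left of 913, and E, where 301 appears after the search went right of 349.

def feasible_search_path(path, target):
    low, high = float('-inf'), float('inf')
    for key in path:
        if not (low < key < high):
            return False        # key falls outside the interval allowed so far
        if key == target:
            return True
        if target < key:
            high = key          # we go left: every later key must be < key
        else:
            low = key           # we go right: every later key must be > key
    return False

paths = {
    'A': [4, 254, 403, 400, 332, 346, 399, 365],
    'B': [926, 222, 913, 246, 900, 260, 364, 365],
    'C': [927, 204, 913, 242, 914, 247, 365],
    'D': [4, 401, 389, 221, 268, 284, 383, 290, 365],
    'E': [937, 280, 349, 623, 301, 394, 360, 365],
}
for name, p in paths.items():
    print(name, feasible_search_path(p, 365))   # only C and E are infeasible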

9
Q

We consider non-empty trees. Which of the following statements is correct?

A. There exists a binary search tree that is also a min-heap.

B. There exists a binary search tree that is also a max-heap.

C. Every binary search tree is also a min-heap.

D. Every min-heap is also a binary search tree.

The answer options are:

A and B.

A and C.

B and D.

A and B and C.

A

A and B.

10
Q

Deletion from an AVL-tree of size n is done as follows:

A) Deletion as from binary search trees, followed by at most O(log(n)) rebalance steps.

B) Deletion as from binary search trees, followed by at most one rebalance step.

C) Deletion as from binary search trees.

D) Deletion of the maximal element at the top.

A

A) Deletion as from binary search trees, followed by at most O(log(n)) rebalance steps.

11
Q

We start from an empty AVL-tree. We add a node with label 4, and then a node with label 2. The AVL-tree gets unbalanced in the following (zero, one or more) case(s):

A) We add a node with label 1.

B) We add a node with label 3.

C) We add a node with label 5.

D) We add a node with label 6.

E) We add a node with label 0.

A

A) We add a node with label 1.

B) We add a node with label 3.

E) We add a node with label 0.
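
A quick way to verify this (an illustrative sketch that uses a plain BST insert plus a balance check, not a full AVL implementation): build the tree with 4 at the root and 2 as its left child, insert the candidate key as into an ordinary binary search tree, and test whether some node's subtree heights now differ by more than 1.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):              # ordinary BST insert, no rebalancing
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(t):
    return 0 if t is None else 1 + max(height(t.left), height(t.right))

def balanced(t):
    if t is None:
        return True
    return (abs(height(t.left) - height(t.right)) <= 1
            and balanced(t.left) and balanced(t.right))

for key in [1, 3, 5, 6, 0]:
    t = insert(insert(None, 4), 2)          # the AVL-tree holding 4 and 2
    print(key, balanced(insert(t, key)))    # 1, 3, 0 -> False; 5, 6 -> True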

12
Q

We remove a node from an AVL-tree with n nodes according to the removal procedure for binary search trees. Then

A) we are done.

B) we need to perform at most O(log(n)) rebalance steps.

C) we need to perform at most one rebalance step.

D) we need to perform at most O(n) rebalance steps.

A

B) we need to perform at most O(log(n)) rebalance steps.

13
Q

We consider the algorithm for knapsack01, applied to a set consisting of n items and with maximal capacity W. What is its worst-case time complexity?

A) In O(nW).

B) In O(n).

C) In O(n^2).

D) In O(n W^2).

A

A) In O(nW).
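
The O(nW) bound comes from filling an (n+1) x (W+1) table with constant work per cell. A compact sketch of that table-filling step (the item weights, benefits and capacity below are made-up values for illustration):

def knapsack01(weights, benefits, W):
    n = len(weights)
    # B[i][w] = best benefit using the first i items with capacity w
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):               # n * (W + 1) cells, O(1) work each
            B[i][w] = B[i - 1][w]            # skip item i
            if weights[i - 1] <= w:          # or take item i if it fits
                B[i][w] = max(B[i][w],
                              B[i - 1][w - weights[i - 1]] + benefits[i - 1])
    return B[n][W]

print(knapsack01([2, 3, 4], [3, 4, 6], 6))   # 9: take the items of weight 2 and 4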

14
Q

Consider the algorithm for knapsack01; S contains n elements. What is the space complexity of the algorithm and how can we improve on that?

A) The space complexity is in Theta(nW). We can improve the space complexity to Theta(W).

B) The space complexity is in Theta(nW). We cannot improve the space complexity.

C) The space complexity is in Theta(n+W). We cannot improve the space complexity.

D) The space complexity is in Theta(n+W). We can improve the space complexity to Theta(W).

A

A) The space complexity is in Theta(nW). We can improve the space complexity to Theta(W).
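
Row i of the table only reads row i-1, so a single array of size W+1 suffices if the capacities are swept from high to low. A sketch of that Theta(W)-space variant (same made-up items as in the previous card's sketch):

def knapsack01_1d(weights, benefits, W):
    B = [0] * (W + 1)                    # Theta(W) space instead of Theta(nW)
    for wt, b in zip(weights, benefits):
        for w in range(W, wt - 1, -1):   # high-to-low so each item is used at most once
            B[w] = max(B[w], B[w - wt] + b)
    return B[W]

print(knapsack01_1d([2, 3, 4], [3, 4, 6], 6))   # 9, as before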

15
Q

We consider the making change problem where the aim is to make a given amount with a minimal number of coins; for every coin value we have an unlimited number of coins available. We have coins with values 1, 3, 7, 8 and we aim to make the amount 14. For this example,

A) a greedy choice for a coin of largest value is not correct.

B) a greedy choice for a coin of lowest value is correct.

C) a greedy choice for a coin of largest value is correct.

A

A) a greedy choice for a coin of largest value is not correct.
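
A short check of the arithmetic (the optimum below is computed with an illustrative dynamic-programming helper, not part of the exam): the largest-coin greedy pays 14 as 8 + 3 + 3, i.e. three coins, while 7 + 7 needs only two.

coins, amount = [1, 3, 7, 8], 14

def greedy_largest(coins, amount):
    used = 0
    for c in sorted(coins, reverse=True):   # always grab the largest coin that fits
        used += amount // c
        amount %= c
    return used

def min_coins(coins, amount):
    # best[a] = fewest coins that make amount a (dynamic programming)
    best = [0] + [float('inf')] * amount
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in coins if c <= a)
    return best[amount]

print(greedy_largest(coins, amount), min_coins(coins, amount))   # 3 2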

16
Q

We consider the naive recursive program for computing the Fibonacci numbers. Consider the recursion tree for this procedure.

A) The recursion tree has height approximately n, and internal nodes have two successors, therefore the running time is exponential.

B) The recursion tree has height approximately log n, and internal nodes have two successors, therefore the running time is quadratic.

C) The recursion tree is an almost complete binary tree, therefore the running time is logarithmic.

D) The recursion tree has height approximately n, and internal nodes have two successors, therefore the running time is quadratic.

A

A) The recursion tree has height approximately n, and internal nodes have two successors, therefore the running time is exponential.
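
For reference, the naive procedure with a call counter added for illustration: the recursion tree has height about n, every internal node spawns two further calls, and the total number of calls therefore grows exponentially in n.

calls = 0

def fib(n):
    # Naive recursion: two recursive calls per internal node of the recursion tree.
    global calls
    calls += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

for n in (10, 20, 30):
    calls = 0
    fib(n)
    print(n, calls)   # 177, 21891, 2692537: the count explodes as n grows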

17
Q

What is, together with the clause T(0) = 1, the recurrence equation for the recursive algorithm for rod cutting, and hence what is its worst-case time complexity?

A) T(n) = 1 + T(0) + T(1) + … + T(n-1), and hence the worst-case time complexity is in O(2^n).

B) T(n) = T(n-1) + T(n-2) + 1, and hence the worst-case time complexity is in O(2^n).

C) T(n) = T(n-1) + 1 and hence the worst-case time complexity is in O(n).

D) T(n) = T(n-1) + Theta(n), and hence the worst-case time complexity is in O(n^2).

A

A) T(n) = 1 + T(0) + T(1) + … + T(n-1), and hence the worst-case time complexity is in O(2^n).
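
A sketch of the naive recursive procedure that this recurrence describes (the price list is an illustrative example): the call for length n tries every length i for the first piece and recurses on n - i, giving one call each on lengths 0 through n-1, i.e. T(n) = 1 + T(0) + ... + T(n-1), which unrolls to T(n) = 2^n.

def cut_rod(prices, n):
    # prices[i] = price of a piece of length i + 1
    if n == 0:
        return 0
    best = 0
    for i in range(1, n + 1):          # try every length for the first piece
        best = max(best, prices[i - 1] + cut_rod(prices, n - i))
    return best

prices = [1, 5, 8, 9, 10, 17, 17, 20]  # illustrative prices for lengths 1..8
print(cut_rod(prices, 8))              # 22 (cut into pieces of length 2 and 6)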

18
Q

The knapsack01 problem can be solved using a dynamic programming algorithm. Alternatively, we could use

A) A greedy choice for an item with lowest weight.

B) A greedy choice for an item with largest benefit.

C) A greedy choice for an item with the largest quotient of benefit over weight (b/w).

D) None of the possibilities mentioned in the other potential answers.

A

D) None of the possibilities mentioned in the other potential answers.
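
A small counterexample (made-up items) illustrating why even the most promising greedy rule, picking by benefit over weight, can fail for knapsack01, which is what rules out options A-C:

from itertools import combinations

weights, benefits, W = [10, 20, 30], [60, 100, 120], 50

def greedy_by_ratio(weights, benefits, W):
    order = sorted(range(len(weights)),
                   key=lambda i: benefits[i] / weights[i], reverse=True)
    total_w = total_b = 0
    for i in order:                      # take whole items in decreasing b/w order
        if total_w + weights[i] <= W:
            total_w += weights[i]
            total_b += benefits[i]
    return total_b

def optimum(weights, benefits, W):
    # Brute force over all subsets; fine for this tiny illustrative instance.
    best = 0
    for r in range(len(weights) + 1):
        for picked in combinations(range(len(weights)), r):
            if sum(weights[i] for i in picked) <= W:
                best = max(best, sum(benefits[i] for i in picked))
    return best

print(greedy_by_ratio(weights, benefits, W), optimum(weights, benefits, W))   # 160 220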

19
Q

Consider the algorithm for fractional knapsack. What is the worst-case time complexity of the algorithm, for S consisting of n elements and W the maximal capacity?

A) It depends. If S is implemented using a heap, giving a priority queue, then it is in O(n log(n)).

B) In O(n^2).

C) In O(nW).

D) In O(n+W).

A

A) It depends. If S is implemented using a heap, giving a priority queue, then it is in O(n log(n)).

20
Q

We consider the fractional knapsack problem: we may take a fraction of each of the n items, each of which has a weight and a benefit, with the aim of maximizing the total benefit without exceeding the given maximal total weight. What is the underlying idea for an algorithm to solve this problem?

A) A greedy choice for the highest value (benefit over weight, b/w).

B) A greedy choice for the lowest weight.

C) Dynamic programming.

D) A greedy choice for the highest benefit.

A

A) A greedy choice for the highest value (benefit over weight, b/w).
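
A sketch of that greedy (the item list is illustrative): sort once by benefit over weight, then take whole items, and finally a fraction of the next item, until the capacity W is used up. The sort is also where the O(n log(n)) bound from the previous card comes from.

def fractional_knapsack(items, W):
    # items: list of (benefit, weight) pairs; take greedily by decreasing b/w.
    items = sorted(items, key=lambda bw: bw[0] / bw[1], reverse=True)   # O(n log n)
    total = 0.0
    for b, w in items:
        if W <= 0:
            break
        take = min(w, W)            # the whole item, or the fraction that still fits
        total += b * take / w
        W -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0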