Quizzes Flashcards
We apply insertion sort to the input array A = [7,3,5,1,6,2,4]. What is A after two iterations of the outer for-loop?
A) A = [1,3,5,7,6,2,4]
B) A = [3,7,5,1,6,2,4]
C) A = [1,2,3,7,5,6,4]
D) A = [3,5,7,1,6,2,4]
D) A = [3,5,7,1,6,2,4]
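One way to check this by hand is to run only the first few passes in code. The sketch below is a minimal Python version of insertion sort (the helper name `insertion_sort_partial` and the `iterations` parameter are illustrative, not part of the quiz):

```python
def insertion_sort_partial(a, iterations):
    """Run only the first `iterations` passes of the outer for-loop."""
    a = list(a)
    for j in range(1, iterations + 1):   # pseudocode's j = 2..n, 0-indexed
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:     # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_partial([7, 3, 5, 1, 6, 2, 4], 2))  # [3, 5, 7, 1, 6, 2, 4]
```

After pass 1 the first two elements are sorted ([3,7,...]), and after pass 2 the first three are ([3,5,7,...]), matching answer D.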
What is the worst-case running time of insertion sort, for inputs of size n?
A) Theta (n log(n))
B) O (n log(n))
C) Theta (n^2)
D) O (log n)
C) Theta (n^2)
What is the best-case running time of insertion sort, for inputs of size n?
A) Theta (n)
B) Theta (n log(n))
C) Theta (n^2)
D) Theta (log(n))
A) Theta (n)
We consider insertion sort. What happens if we swap the order of the tests for the while-loop in line 5?
A) We may then possibly compare A[0] with key, but A[0] does not exist.
B) The algorithm then sorts in reverse order.
C) It does not matter.
D) The program then does not terminate.
A) We may then possibly compare A[0] with key, but A[0] does not exist.
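The point is short-circuit evaluation: in `i > 0 and A[i] > key`, the bound check guards the array access. The Python sketch below simulates 1-indexed pseudocode access with an illustrative helper `at` (my own, not from the quiz) so that reading A[0] fails loudly, as it conceptually does in the pseudocode:

```python
def at(a, i):
    """1-indexed access that fails for i = 0, like pseudocode A[i]."""
    if i < 1:
        raise IndexError("A[0] does not exist in 1-indexed pseudocode")
    return a[i - 1]

a = [1, 7]   # key is smaller than every element, so i runs all the way down
key = 0

# Correct order: "i > 0" is tested first and short-circuits,
# so at(a, 0) is never evaluated.
i = 1
while i > 0 and at(a, i) > key:
    i -= 1

# Swapped order: at(a, i) is evaluated when i = 0, before "i > 0" can stop it.
i = 1
swapped_failed = False
try:
    while at(a, i) > key and i > 0:
        i -= 1
except IndexError:
    swapped_failed = True
print("swapped order raised IndexError:", swapped_failed)  # True
```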
We annotate the following code fragment with the number of times each line is executed in the worst case. What do we get?
for j:=2 to n do
i := j-1
while i>0 do
i:= i-1
A) For the first line n, for the second line n-1, for the third line the summation from j=2 to n of j, for the fourth line the summation from j=2 to n of (j-1).
B) For the first line n-1, for the second line n-1, for the third line the summation from j=1 to n of j, for the fourth line the summation from j=1 to n of (j-1).
C) For the first line n, for the second line n-1 for the third line j-1, for the fourth line n.
D) For the first line n-1, for the second line n-1, for the third line j-1, for the fourth line n.
A) For the first line n, for the second line n-1, for the third line the summation from j=2 to n of j, for the fourth line the summation from j=2 to n of (j-1).
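The counts in option A can be verified empirically for small n. The sketch below (the helper `count_lines` is illustrative) counts one execution of a loop header per test of its condition, including the final failing test, which is why the for-line runs n times while its body runs n-1 times:

```python
def count_lines(n):
    """Count executions of the four lines of the fragment above."""
    c = [0, 0, 0, 0]
    j = 2
    while True:
        c[0] += 1                 # "for j := 2 to n" header test
        if j > n:
            break
        c[1] += 1                 # i := j - 1
        i = j - 1
        while True:
            c[2] += 1             # "while i > 0" header test
            if not i > 0:
                break
            c[3] += 1             # i := i - 1
            i -= 1
        j += 1
    return c

print(count_lines(4))  # [4, 3, 9, 6]
```

For n = 4 this gives n = 4, n-1 = 3, 2+3+4 = 9, and 1+2+3 = 6, matching option A.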
What is the result of applying the first two iterations of the algorithm for selection sort to the input array [7, 2, 9, 6, 1, 3, 5, 4]?
A) 1,2,9,6,7,3,5,4
B) 1,3,9,6,7,2,5,4
C) 1,2,9,6,7,3,4,5
D) 1,3,9,6,7,2,4,5
A) 1,2,9,6,7,3,5,4
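As with insertion sort, the first two passes can be replayed in code. This minimal selection-sort sketch (the helper name `selection_sort_partial` is illustrative) swaps the minimum of the unsorted suffix into position on each pass:

```python
def selection_sort_partial(a, iterations):
    """Run only the first `iterations` passes of selection sort."""
    a = list(a)
    for i in range(iterations):
        m = min(range(i, len(a)), key=a.__getitem__)  # index of the minimum
        a[i], a[m] = a[m], a[i]
    return a

print(selection_sort_partial([7, 2, 9, 6, 1, 3, 5, 4], 2))  # [1, 2, 9, 6, 7, 3, 5, 4]
```

Pass 1 swaps 1 to the front; pass 2 finds that 2 is already in place, so the array is unchanged, matching answer A.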
What is the best-case running time of selection sort, for inputs of size n?
A) O(1) because we immediately find the smallest element of the unordered part.
B) O(n) because we just check whether the array is ordered.
C) O(n log(n)) because that is the best-case for comparison-based sorting.
D) O(n^2) because we always have to completely traverse the unordered part.
D) O(n^2) because we always have to completely traverse the unordered part.
What is the worst-case running time of selection sort, for inputs of size n?
A) Theta(n)
B) Theta(n^2)
C) Theta(n log(n))
D) Theta(log(n))
B) Theta(n^2)
In the recurrence tree for merge sort
A) the height is log(n) and the work per layer is the linear work to split.
B) the height is log(n) and the work per layer is the linear work to merge.
C) the height is n and the work per layer is the linear work to split.
D) the height is n and the work per layer is the linear work to merge.
B) the height is log(n) and the work per layer is the linear work to merge.
Which of the following is(are) correct?
A) n^2 in O(n^3)
B) n^2 in O(n^2)
C) n^2 in Theta (n^2)
D) n^2 in Theta (n^3)
E) n^3 in Theta (n^2)
F) n^3 in O(n^2)
A) n^2 in O(n^3)
B) n^2 in O(n^2)
C) n^2 in Theta (n^2)
An algorithm with worst-case time complexity Theta (n log(n)) is always, for any input, faster than an algorithm with worst-case time complexity in Theta (n^2). Is this statement true or false, and why?
A) True, because n log(n) has a lower rate of growth than n^2
B) True, because Theta gives a strict bound
C) False, because for a small input or for a special input an algorithm with worst-case time complexity in Theta (n^2) may perform better than an algorithm with worst-case time complexity in Theta (n log(n))
D) False, because n log(n) and n^2 have the same rate of growth anyway.
C) False, because for a small input or for a special input an algorithm with worst-case time complexity in Theta (n^2) may perform better than an algorithm with worst-case time complexity in Theta (n log(n))
We consider two statements.
Statement A is: 2^(n+1) is in O(2^n).
Statement B is: 2^(2n) is in O(2^n).
Which of the following holds?
A) A and B are both false
B) A and B are both true
C) A is true but B is false
D) B is true but A is false
C) A is true but B is false
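The reasoning: 2^(n+1) = 2 * 2^n differs from 2^n only by a constant factor, while 2^(2n) = (2^n)^2 outgrows any constant multiple of 2^n. A quick numeric check of the two ratios:

```python
# 2^(n+1) / 2^n is the constant 2, so 2^(n+1) is in O(2^n).
# 2^(2n) / 2^n = 2^n, which is unbounded, so 2^(2n) is not in O(2^n).
for n in (10, 20, 30):
    print(n, 2 ** (n + 1) // 2 ** n, 2 ** (2 * n) // 2 ** n)
```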
Which of the following orderings of functions is (are) in increasing order of rate of growth?
A) 2^7, n log(n), n^2, n!
B) 3n, 2^3, n log(n), n^3
C) n^2, n^3, n log(n), 2^n
D) n log(n), n^2, n^7, 2^n
E) n log(n), n^2, 2^7, n!
A) 2^7, n log(n), n^2, n!
D) n log(n), n^2, n^7, 2^n
Which of the following orderings of functions is in increasing order of rate of growth?
A) 2^{10}, n (log(n))^2, n^2 log(n), 2^{log(n)}
B) n (log(n))^2, n^2 log(n), 2^{log(n)}, 2^{10}
C) n (log(n))^2, n^2 log(n), 2^{10}, 2^{log(n)}
D) 2^{10}, 2^{log(n)}, n (log(n))^2, n^2 log(n)
D) 2^{10}, 2^{log(n)}, n (log(n))^2, n^2 log(n)
We have T(n) in O(n^2) with T a function for the worst-case time complexity of algorithm A. Do we have T(n) in Theta(n^2)?
A) Yes, because an upper bound for the worst-case gives a lower bound.
B) No, T is not necessarily bounded from below by n^2.
B) No, T is not necessarily bounded from below by n^2.
We consider an array A that contains n keys. What is the asymptotic worst-case running time of the best algorithm that checks whether the keys are sorted in increasing order?
A) Theta (1)
B) Theta (log (n))
C) Theta (n)
D) Theta (n log (n))
C) Theta (n)
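A single linear pass over adjacent pairs suffices, and no algorithm can do better asymptotically, since every key must be examined at least once. A minimal sketch (the helper name `is_sorted` is illustrative):

```python
def is_sorted(a):
    """One Theta(n) pass: check every adjacent pair is in order."""
    return all(a[k] <= a[k + 1] for k in range(len(a) - 1))

print(is_sorted([1, 2, 2, 5]), is_sorted([1, 3, 2]))  # True False
```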
When is a sorting algorithm said to be stable?
A) If the algorithm is linear time on an array that is already sorted.
B) If sorting the array twice in a row does not affect the result.
C) If it does not need extra memory next to the input array.
D) If it respects the order of equal keys in the array.
D) If it respects the order of equal keys in the array.
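A small demonstration of what stability means in practice: sort (key, tag) records by key only, and check that records with equal keys keep their original relative order. Python's built-in `sorted` is guaranteed stable, so it serves as the example here:

```python
# Records share keys 1 and 2; tags record the original order.
records = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]

# A stable sort keeps 'b' before 'd' and 'a' before 'c'.
stable = sorted(records, key=lambda r: r[0])
print(stable)  # [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```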
Consider the following pseudo-code:
Algorithm Loop(n):
s := 0
for i := 1 to n^2 do
for j := 1 to i do
s := s + i
What is the worst-case running time of Loop?
A) O(n).
B) O(n^2).
C) O(n^3).
D) O(n^4).
D) O(n^4).
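The outer loop runs n^2 times and the inner loop runs i times on iteration i, so the innermost line executes 1 + 2 + ... + n^2 = n^2(n^2 + 1)/2 times, which is Theta(n^4). A direct count (the helper name `loop_steps` is illustrative) confirms the closed form:

```python
def loop_steps(n):
    """Count executions of the innermost line of Loop(n)."""
    s = steps = 0
    for i in range(1, n * n + 1):
        for j in range(1, i + 1):
            s += i
            steps += 1
    return steps

print(loop_steps(3))  # 45, which equals 9 * 10 / 2
```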
We consider pseudo-code for finding a maximum element in an array of integers. What is an invariant useful for proving correctness?
FindMax(A)
max := A[1]
for i := 2 to A.length do
if A[i] > max then
max := A[i]
A) max is the largest value in A[1..i]
B) max is the largest value in A[1..(i-1)]
C) max is the largest value in A[1..(i+1)]
D) max is the largest value in A
B) max is the largest value in A[1..(i-1)]
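The invariant can be checked mechanically by asserting it at the top of every iteration. In the 0-indexed Python sketch below, `mx == max(a[:i])` before examining `a[i]` is the counterpart of option B's "max is the largest value in A[1..(i-1)]" (the assert is for illustration only; it would be dropped in real code):

```python
def find_max(a):
    mx = a[0]
    for i in range(1, len(a)):
        # Invariant (option B, translated to 0-indexing): before a[i]
        # is examined, mx is the largest value among a[0..i-1].
        assert mx == max(a[:i])
        if a[i] > mx:
            mx = a[i]
    return mx

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```

At termination the invariant plus the exit condition give: mx is the largest value in the whole array, which is the postcondition of FindMax.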
We consider an algorithm that computes for input n >= 1 the sum 1+…+n. What is an invariant that can be used to prove correctness?
ComputeSum(n)
sum := 0
for i := 1 to n
sum := sum + i
return sum
A) sum is 1 + … + (i-1)
B) sum is 1 + … + i
C) sum is i + … + n
D) sum is 1 + … + n
A) sum is 1 + … + (i-1)
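The same technique works here: assert the invariant at the top of each iteration. Before the body adds i, the running total is exactly 1 + ... + (i-1), which is option A (again, the assert is purely illustrative):

```python
def compute_sum(n):
    total = 0
    for i in range(1, n + 1):
        # Invariant (option A): at the top of the body, total = 1 + ... + (i-1).
        assert total == sum(range(1, i))
        total += i
    return total

print(compute_sum(10))  # 55
```

After the final iteration, i has covered 1..n, so the invariant yields total = 1 + ... + n, as required.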
Merge sort uses the procedure merge as a subroutine. What is the running time of merge on inputs of total size n?
A) It is in Theta (1).
B) It is in Theta (n^2).
C) It is in Theta (n log(n)).
D) It is in Theta (n).
D) It is in Theta (n).
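Merge is linear because every element of the two inputs is moved to the output exactly once. A minimal Python sketch of the standard two-pointer merge:

```python
def merge(left, right):
    """Merge two sorted lists; each element is copied exactly once,
    so the running time is Theta(n) for total input size n."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the merge stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                  # one side is exhausted;
    out.extend(right[j:])                 # copy the rest of the other
    return out

print(merge([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```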
We apply merge sort to already sorted input arrays of size n. Then the running time is
A) in Theta (n) because we just need to check that the array is sorted.
B) in Theta (n log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure has linear running time.
C) in Theta (log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure is trivial.
D) in Theta (n log(n)) because the height of the recursion tree is n log(n) + 1 and the merge procedure is trivial.
B) in Theta (n log(n)) because the height of the recursion tree is log(n)+1 and the merge procedure has linear running time.
What is one of the main drawbacks of merge sort?
A) It is a recursive algorithm.
B) It uses additional memory, proportional to the input size.
C) It uses an additional subroutine, merge.
D) Its recursion tree has height approximately the log of the input size.
B) It uses additional memory, proportional to the input size.
What is a correct clause for the case n>1 for a recurrence equation describing the worst-case running time of merge sort, for inputs of size n?
A) T(n) = T(n-1) +cn, with c a constant
B) T(n) = T(n/2) + c, with c a constant
C) T(n) = 2T(n/2) + c, with c a constant
D) T(n) = 2T(n/2) + cn, with c a constant
D) T(n) = 2T(n/2) + cn, with c a constant
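Unrolling this recurrence gives cn work on each of the log2(n)+1 levels of the recursion tree, hence the closed form T(n) = cn(log2(n) + 1) for n a power of two. The sketch below evaluates the recurrence directly (with base case T(1) = c, an assumption for this illustration) and checks it against the closed form:

```python
import math

def T(n, c=1):
    """T(1) = c and T(n) = 2*T(n/2) + c*n, for n a power of two."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# cn per level, log2(n) + 1 levels: T(n) = c * n * (log2(n) + 1).
for k in range(10):
    n = 2 ** k
    assert T(n) == n * (math.log2(n) + 1)
print("closed form verified up to n =", 2 ** 9)
```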