Full Course Flashcards

1
Q

Explain informally why f(n) = Θ(n) implies f(n) = Ω(n) and f(n) = O(n).

A

f(n) = Θ(n) means that f(n) has growth rate exactly n. But then f(n) also has growth rate at least n (i.e., f(n) = Ω(n)) and at most n (i.e., f(n) = O(n)).

2
Q

Give definitions of the big-O, big-Ω, and big-Θ notations.

A

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0}
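The definitions above say that membership in O(g(n)) is witnessed by concrete constants c and n0. As an illustrative sanity check (not a proof, since it only tests finitely many n), one can test candidate constants numerically; the functions and constants below are hypothetical examples.

```python
# Numerically check whether candidate constants c and n0 witness f(n) = O(g(n)),
# i.e. 0 <= f(n) <= c*g(n) for all n in [n0, n_max].
def witnesses_big_o(f, g, c, n0, n_max=10_000):
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Example: f(n) = 3n + 5 is O(n) with c = 4, n0 = 5, since 3n + 5 <= 4n for n >= 5.
print(witnesses_big_o(lambda n: 3 * n + 5, lambda n: n, c=4, n0=5))  # True
print(witnesses_big_o(lambda n: 3 * n + 5, lambda n: n, c=4, n0=1))  # False (fails at n = 1)
```

A passing check only shows the inequality holds on the tested range; a failing check pinpoints a bad choice of constants.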

3
Q

Explain using the definitions why f(n) = Θ(n) when f(n) = Ω(n) and f(n) = O(n).

A

Let f(n) = O(g(n)) and f(n) = Ω(g(n)). Then, by definition of the big-O notation, there exist positive constants c1 and n1 such that 0 ≤ f(n) ≤ c1g(n) for all n ≥ n1. On the other hand, by definition of the big-Ω notation, there exist positive constants c2 and n2 such that 0 ≤ c2g(n) ≤ f(n) for all n ≥ n2.

Thus, there exist positive constants c1, c2, and n0 such that 0 ≤ c2g(n) ≤ f(n) ≤ c1g(n) for all n ≥ n0, where we choose n0 = max(n1, n2). Then, by definition of the big-Θ notation, f(n) = Θ(g(n)).

4
Q

If x occurs in A several times, which index is returned?

A: Both return the index of the first occurrence of x
B: Linear-Search returns the index of the last occurrence of x, Better-Linear-Search the first
C: Linear-Search returns all indices of x, Better-Linear-Search only the first
D: Both return all indices

A

Linear-Search returns the index of the last occurrence of x, Better-Linear-Search the first
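The behaviour in the answer can be seen from a sketch of the two procedures (a Python rendering with 0-based indices, assuming the usual pseudocode where Linear-Search scans the whole array and Better-Linear-Search returns early):

```python
def linear_search(A, x):
    """Scans the whole array, overwriting the answer on every match,
    so it ends up returning the index of the LAST occurrence of x."""
    answer = None  # NOT-FOUND
    for i in range(len(A)):
        if A[i] == x:
            answer = i
    return answer

def better_linear_search(A, x):
    """Returns as soon as a match is found: the index of the FIRST occurrence."""
    for i in range(len(A)):
        if A[i] == x:
            return i
    return None  # NOT-FOUND

A = [7, 3, 7, 9]
print(linear_search(A, 7), better_linear_search(A, 7))  # 2 0
```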

5
Q

Big-O notation – Upper bounding function
Theta-Θ notation – Order function
Omega-Ω notation – Lower bounding function

A

The running time is never worse than the given function; asymptotically smaller than or equal.

The rates of growth in the best case and the worst case are the same; asymptotically equal.

The running time is never better than the given function; asymptotically greater than or equal.

6
Q

Claim: 19n^3 + 17n^2 - 3n = Θ(n^3)

If we choose n0 = 1, what is a good choice for c2?
A: c2 = 1
B: c2 = 17
C: c2 = 19
D: c2 = 36

A

D: c2 = 36, since 19n^3 + 17n^2 - 3n ≤ 19n^3 + 17n^3 = 36n^3 for all n ≥ 1.
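A quick numeric spot-check (again, not a proof, since it only covers a finite range) confirms that c2 = 36 works with n0 = 1 while the smaller option c2 = 19 already fails at n = 1:

```python
# f(n) = 19n^3 + 17n^2 - 3n, the left-hand side of the claim.
f = lambda n: 19 * n**3 + 17 * n**2 - 3 * n

# c2 = 36 upper-bounds f(n)/n^3 for every tested n >= n0 = 1.
assert all(f(n) <= 36 * n**3 for n in range(1, 10_001))

# c2 = 19 is too small already at n = 1: f(1) = 33 > 19.
print(f(1))  # 33
```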

7
Q

500 n + 150 = O(n^2)

A

True

8
Q

4 n log^2 (n) + 20 n = Θ(n log n)

A

False; 4n log^2(n) + 20n = Θ(n log^2(n)), and log^2(n) grows strictly faster than log(n).

9
Q

n log n = Ω(n)

A

True

10
Q

n log^2(n) = O(n^(4/3))

A

True

11
Q

57 n^2 + 17 log n = Θ(n^2)

A

True

12
Q

O(n) ⊆ O(n^4)

A

True

13
Q

An algorithm with worst case running time O(n log n)
is always slower than an algorithm with worst case
running time O(n) if n is sufficiently large.

A

False; O(·) gives only an upper bound, so the first algorithm's actual running time could be O(n) or better, and even for tight bounds the hidden constant factors are not compared.

14
Q

Loop invariant

A

A property that holds before and after each iteration of the loop.

15
Q

Properties of a Loop Invariant

A
  1. Initialisation: it is true before the first iteration of the loop.
  2. Maintenance: if it is true before an iteration, it remains true before the next iteration.
  3. Termination: when the loop finishes, the invariant provides a useful property that helps demonstrate that the algorithm is correct.
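The three properties can be illustrated with a hypothetical summation loop whose invariant is "total equals the sum of the elements processed so far":

```python
def array_sum(A):
    total = 0
    i = 0
    # Initialisation: before the loop, total == sum(A[:0]) == 0.
    while i < len(A):
        assert total == sum(A[:i])  # invariant holds before each iteration
        total += A[i]
        i += 1
        # Maintenance: total == sum(A[:i]) holds again at this point.
    # Termination: i == len(A), so the invariant gives total == sum(A),
    # which is exactly the correctness claim for the function.
    return total

print(array_sum([4, 1, 7]))  # 12
```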
16
Q

Does binary search need a sorted array or an unsorted array?

A