Complexity and Big O Flashcards

1
Q

The execution tree of a recursive function forms an n-ary tree, where n is the number of branches, i.e., the number of times recursion appears in the recurrence relation.

When you have a recursive function that makes multiple calls, the runtime will often look like:

A

O(branches ^ depth)

As mentioned in the question, branches denotes the number of times recursion appears in the recurrence relation; the depth varies per problem. For instance, the Fibonacci recurrence [f(n) = f(n-1) + f(n-2)] yields a binary tree that goes all the way down to the 0th element, so the depth is n, the number of elements.
This gives us O(2^n) time complexity, since we have 2 branches, f(n-1) and f(n-2).
Strictly speaking, deriving it in a purely mathematical way gives a tighter bound of O(φ^n), where φ ≈ 1.618 is the golden ratio, often quoted as O(1.6^n).
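
A minimal Java sketch of the Fibonacci example (a hypothetical fib helper with a call counter, not part of the card): two recursive calls per node (branches = 2) to depth n give the exponential blow-up described above.

static long calls = 0; // counts nodes in the execution tree

// Naive Fibonacci: 2 branches per call, depth ~ n,
// so the execution tree has O(2^n) nodes.
static long fib(int n) {
    calls++;
    if (n <= 1) return n;           // base cases: f(0) = 0, f(1) = 1
    return fib(n - 1) + fib(n - 2); // branches = 2
}

Calling fib(20) and printing calls shows the count growing roughly like 1.6^n as n increases.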

2
Q

Sum of 1 through (N-1)

A

N(N-1)/2
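
This sum is exactly why a doubly nested loop whose inner loop runs i times is O(N^2). A minimal sketch (hypothetical countPairs helper):

// Inner loop runs j = 0..i-1, so total iterations are
// 0 + 1 + ... + (N-1) = N(N-1)/2, i.e., O(N^2).
static int countPairs(int[] a) {
    int count = 0;
    for (int i = 0; i < a.length; i++)
        for (int j = 0; j < i; j++)
            if (a[i] == a[j]) count++;
    return count;
}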

3
Q

The sum of a sequence of powers of two is roughly equal to ___?:

A

The next value in the sequence.

So, 2^0 + 2^1 + 2^2 + … + 2^N = 2^(N+1) - 1 ≈ 2^(N+1), the next power in the sequence.

4
Q

The number of times we can halve n until we get 1 is ___?:

A

log n
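
A quick sketch that makes this concrete, assuming integer division (hypothetical halvings helper):

// The loop body runs floor(log2 n) times before n reaches 1.
static int halvings(int n) {
    int steps = 0;
    while (n > 1) {
        n /= 2;
        steps++;
    }
    return steps; // e.g., halvings(16) == 4
}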

5
Q

When you see a problem where the number of elements in the problem space gets halved each time, that will likely be a ___ runtime:

A

O(log N)
e.g.
1. Binary search on an N-element sorted array (see the sketch after this list)
2. Finding an element in a balanced binary search tree
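
A minimal iterative binary search sketch: the search space halves every iteration, so at most about log2 N comparisons are made.

// Returns the index of target in a sorted array, or -1 if absent.
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of lo + hi
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // discard left half
        else hi = mid - 1;                      // discard right half
    }
    return -1;
}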

6
Q

The term linearithmic is used to describe programs whose running time for a problem of size N has order of growth of ___:

A

N log N

7
Q

Binary Search Running Time:

A

O(log N)

8
Q

Merge sort runtime complexity:

A

O(N log N)
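
A compact merge sort sketch: the array splits in half about log N times, and each level of recursion does O(N) merge work, giving O(N log N). (Uses java.util.Arrays.)

// Sorts by splitting in half, sorting each half, then merging in linear time.
static int[] mergeSort(int[] a) {
    if (a.length <= 1) return a;
    int mid = a.length / 2;
    int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
    int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
    int[] out = new int[a.length];
    int i = 0, j = 0, k = 0;
    while (i < left.length && j < right.length)
        out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    while (i < left.length) out[k++] = left[i++];
    while (j < right.length) out[k++] = right[j++];
    return out;
}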

9
Q

Quick Sort runtime complexity:

A

O(N log N) on average. The worst case, e.g., an already-sorted input with a naive pivot choice, degrades to O(N^2).
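
A minimal quicksort sketch using the Lomuto partition scheme (one of several common partition strategies):

// Average O(N log N); worst case O(N^2) when partitions are maximally unbalanced.
static void quickSort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi], p = lo;
    for (int i = lo; i < hi; i++)
        if (a[i] < pivot) { int t = a[i]; a[i] = a[p]; a[p] = t; p++; }
    int t = a[p]; a[p] = a[hi]; a[hi] = t; // place pivot at its final index
    quickSort(a, lo, p - 1);  // sort elements smaller than the pivot
    quickSort(a, p + 1, hi);  // sort elements larger than the pivot
}

Initial call: quickSort(a, 0, a.length - 1).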

10
Q

Quadratic runtime complexity examples:

A

Selection sort and Insertion sort

11
Q

Selection sort runtime complexity:

A

O(N^2)
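
A minimal selection sort sketch: for each position, scan the remainder for the minimum, so comparisons total (N-1) + (N-2) + … + 1 = N(N-1)/2, i.e., O(N^2).

static void selectionSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        int min = i;
        for (int j = i + 1; j < a.length; j++)
            if (a[j] < a[min]) min = j; // find smallest remaining element
        int t = a[i]; a[i] = a[min]; a[min] = t; // swap it into position i
    }
}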

12
Q

Insertion sort runtime complexity:

A

O(N^2)
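
A minimal insertion sort sketch: in the worst case (reverse-sorted input), element i shifts i positions, and the shifts again sum to N(N-1)/2, i.e., O(N^2).

static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j]; // shift larger elements one slot right
            j--;
        }
        a[j + 1] = key; // drop key into its sorted position
    }
}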

13
Q

How would you find the maximum number of digits a number can have? For example, how many digits does 999 have? This is good to know when you have to iterate through the digits of a given number:

A

digits ≈ log Z

Say Z is a 2-digit number; then the maximum 2-digit value is 99 = 10^2 - 1 ≈ 10^2. Similarly, if Z is a d-digit number, its maximum value looks like:
Z ≈ 10^d
Taking log (base 10) on each side: log Z = d log 10 = d
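
A small sketch of both approaches, assuming a positive int n (hypothetical digitCount helpers):

// Strip one decimal digit per iteration: runs once per digit.
static int digitCount(int n) {
    int digits = 0;
    do { digits++; n /= 10; } while (n > 0);
    return digits; // digitCount(999) == 3
}

// Equivalent closed form from digits ~ log10(n):
static int digitCountLog(int n) {
    return (int) Math.floor(Math.log10(n)) + 1;
}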

14
Q
What is the order of dominance for:
O(X log X)
O(X^2)
O(2^X)
O(X)
O(X!)
O(log X):
A

O(X!) > O(2^X) > O(X^2) > O(X log X) > O(X) > O(log X)

15
Q

Which is dominant, 2^N or N^2, when deciding the big O for an algorithm that has complexity O(2^N + N^2)?:

A

2^N is dominant, since an exponential term grows faster than any polynomial; hence the complexity boils down to O(2^N).

16
Q

What will be the complexity of an algorithm that runs two different for loops each for N and M terms?:

A

O(N + M). Since there is no established relationship between N and M, we need to account for both.
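
A minimal sketch, with two unrelated arrays standing in for N and M (hypothetical sumBoth helper):

// Two sequential loops: O(N) + O(M) = O(N + M).
static int sumBoth(int[] a, int[] b) {
    int total = 0;
    for (int x : a) total += x; // N iterations
    for (int y : b) total += y; // M iterations
    return total;
}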

17
Q

How would you determine the time complexity of two nested for loops that run for N and M terms respectively:

A

O(N * M)
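
A minimal sketch: for each of the N outer iterations, the inner loop runs M times, so the work multiplies (hypothetical countCommon helper).

// Nested loops: N * M iterations total -> O(N * M).
static int countCommon(int[] a, int[] b) {
    int count = 0;
    for (int x : a)     // N iterations
        for (int y : b) // M iterations for each x
            if (x == y) count++;
    return count;
}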

18
Q

How do we describe the runtime of dynamically adding elements to an ArrayList:

A

Using the amortized time concept.

For instance, the cost of adding an element to an array that is full has an amortized runtime of O(1).
You create an array of double the size and copy over the elements, which is O(N). Every time we run out of space we double, so the copy costs across X adds are:
1 + 2 + 4 + 8 + … + X
or, read right to left, X + X/2 + X/4 + X/8 + … + 1 ≈ 2X total work.
So X adds take O(X) time, and O(X) / X ≈ O(1) per add, i.e., each add operation takes constant time on average, which is how we describe amortized time.
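
A minimal sketch of the doubling strategy (a hypothetical DynamicArray, not the actual java.util.ArrayList source, which grows by 1.5x rather than 2x):

static class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            // Full: allocate double the capacity and copy everything over.
            // This O(size) step is rare enough that adds average out to O(1).
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value; // the common case: a plain O(1) write
    }
}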