Complexity & Big-Oh Notation Flashcards

(35 cards)

1
Q

How do you compare data structures?

A

By their algorithmic / time complexity, which is evaluated as the number of operations that need to be performed in the WORST CASE scenario, as a function of the number of elements in the collection. We use the worst case for comparison since most algorithms perform the same, or similarly, in the best or average cases.

2
Q

Algorithm complexity

A

This gives us an idea of how the number of operations in an algorithm relates to the number of elements / size of the data structures used (e.g. constant, linear, exponential, etc.). However, it does not necessarily give an indication of the exact computation time, which depends on the hardware. That is, complexity is not the same as performance!

3
Q

Constant complexity

A

If the number of operations is independent of the number of elements, then an algorithm is said to exhibit constant complexity. Therefore, if you double the input, the number of operations is unchanged. This is denoted as O(1) complexity in Big-Oh notation.
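
A minimal Java sketch of a constant-complexity operation (the method name and parameter are just illustrative):

    // Constant complexity: one array access and one return,
    // no matter how many elements the array holds.
    static int firstElement(int[] values) {
        return values[0];   // O(1): independent of values.length
    }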

4
Q

Linear complexity

A

If the number of operations is linearly proportional to the number of elements, an algorithm is said to exhibit linear complexity. That is, if you double the input then you double the number of operations. This is denoted as O(n) complexity in Big-Oh notation.
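
A minimal Java sketch of a linear-complexity operation (illustrative method making a single pass over the input):

    // Linear complexity: one loop iteration per element,
    // so doubling the array doubles the number of operations.
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {   // n iterations
            total += v;          // O(1) work per iteration
        }
        return total;            // overall: O(n)
    }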

5
Q

What type of complexity does accessing an element in an ArrayList by its index demonstrate?

A

Constant complexity, O(1), because the number of operations is independent of the number of elements: the backing array supports direct access by index.

6
Q

What type of complexity does accessing an element in a LinkedList by its index demonstrate?

A

Linear complexity, O(n), because in the worst case the list must be traversed node by node from one end, so the number of operations is roughly proportional to the number of elements.
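
A sketch of why index lookup in a linked structure is linear (hypothetical Node class; Java's real LinkedList is more elaborate but follows the same idea):

    // Hypothetical singly linked node used only for this sketch.
    static class Node<T> {
        T value;
        Node<T> next;
    }

    // Reaching index i requires i hops from the head, so in the worst
    // case (the last element) this is O(n).
    static <T> T get(Node<T> head, int index) {
        Node<T> current = head;
        for (int i = 0; i < index; i++) {
            current = current.next;   // one hop per preceding element
        }
        return current.value;
    }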

7
Q

What is Big-Oh notation?

A

A mathematical expression of algorithmic complexity. Big-Oh notation is always expressed as O(g(n)), where g(n) is a general function (e.g. n for linear, n^2 for quadratic, etc.). This allows us to meaningfully compare algorithm complexities, while ignoring whatever effects our hardware or other factors may have on execution time.

8
Q

Asymptotic notation

A

When describing complexity asymptotically, we choose a general function g(n) and describe how the number of operations behaves as the input size n grows: for Big-Oh, the number of operations is ultimately bounded above by a constant multiple of g(n) (Big-Omega and Big-Theta give lower and tight bounds, respectively). This is known as asymptotic notation, and it is what lets us simplify complexities by ignoring constant factors and lower-order terms.

9
Q

If a function A is O(n) and function B runs function A n times, then what is the complexity of function B?

A

The complexity of function B is O(n × n) = O(n^2)
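
A sketch of this pattern (hypothetical methods: sumOf plays the role of A, repeatedSums the role of B):

    // Function A: a single pass over the array, O(n).
    static int sumOf(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    // Function B: calls A once per element, i.e. n times.
    // Total work: n calls x O(n) each = O(n^2).
    static int repeatedSums(int[] values) {
        int result = 0;
        for (int i = 0; i < values.length; i++) {
            result += sumOf(values);
        }
        return result;
    }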

10
Q

If a function A is O(n^2) and function B is O(n) and function C runs function A followed by function B, then what is the complexity of function C?

A

The complexity of C is O(n^2 + n), but since with Big-Oh notation we only care about the highest-order term, this actually means that the complexity of C is simply O(n^2).

Note that ignoring the lower-order terms only works if the variables are the same. For complexities with multiple variables, for example O(n^2 + m), we cannot just discard the m because it may or may not be larger than n^2.
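
A sketch of this situation (hypothetical method running an O(n^2) step followed by an O(n) step):

    // Step 1: nested loops, O(n^2). Step 2: single loop, O(n).
    // Sequential steps add: O(n^2 + n), which simplifies to O(n^2).
    static int runBoth(int[] values) {
        int pairs = 0;
        for (int i = 0; i < values.length; i++) {
            for (int j = 0; j < values.length; j++) {
                pairs++;                 // O(n^2) in total
            }
        }
        int total = 0;
        for (int v : values) {
            total += v;                  // O(n) in total
        }
        return pairs + total;
    }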

11
Q

How do you evaluate the complexity of a function / algorithm?

A

Go through the code line by line, and account for the amount of time taken by each line.

  • Typically single operations are O(1), whereas lines that call other functions will have the complexity of that function.
  • Look for the use of loops and what they are conditioned on (i.e. is the number of loop iterations a function of the input size or some constant?). See the annotated sketch below.
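
For example, a hypothetical linear search annotated line by line:

    static boolean contains(int[] values, int target) {
        for (int i = 0; i < values.length; i++) {   // loop: up to n iterations
            if (values[i] == target) {              // O(1) comparison per iteration
                return true;                        // O(1)
            }
        }
        return false;                               // O(1)
    }
    // Up to n iterations x O(1) work each => O(n) overall.
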
12
Q

Amortized analysis

A

A technique we use to determine an algorithm's complexity by averaging its cost over a sequence of operations, rather than looking at a single operation in isolation. For example, appending to an ArrayList is amortized O(1), even though an individual append that triggers a resize costs O(n).
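
A sketch of the classic example (a simplified growable array, the idea behind ArrayList.add):

    // Most appends write one slot: O(1). Occasionally the array is full and
    // every element is copied into a larger array: O(n). Because the capacity
    // doubles, such copies are rare, and the cost averaged over a long sequence
    // of appends (the amortized cost) is O(1) per append.
    static class GrowableIntArray {
        private int[] data = new int[4];
        private int size = 0;

        void add(int value) {
            if (size == data.length) {
                data = java.util.Arrays.copyOf(data, data.length * 2);  // O(n) copy
            }
            data[size++] = value;   // O(1) write
        }
    }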

13
Q

(The original question showed a code snippet, not included here, and asked for its time complexity.)

A

In this example the two loops are not executed sequentially, but instead one loop runs within the other so their complexities should be multiplied, resulting in an overall complexity of O(n^2).
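
The original snippet is not reproduced here, but a minimal sketch of the pattern described above (one loop nested inside another) might look like:

    static int countPairs(int[] values) {
        int count = 0;
        for (int i = 0; i < values.length; i++) {       // outer loop: n iterations
            for (int j = 0; j < values.length; j++) {   // inner loop: n iterations each
                count++;                                 // O(1) work
            }
        }
        return count;   // n * n operations => O(n^2)
    }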

14
Q

Suppose we have an algorithm that is made of two methods, J and K, where we know that the complexity of J is O(n^3) and the complexity of K is O(n). The algorithm runs function J three times in a row, then runs function K once, and finally runs function J one last time. What is the overall complexity of this algorithm?

A

O(n^3), since the sequential runs add up to O(3n^3 + n + n^3) = O(4n^3 + n), and dropping the constant factor and the lower-order term leaves O(n^3).

15
Q

Suppose a method A has a complexity of O(n^3), and A just calls another method C n times. What must the Big-Oh complexity of the method C be?

A

O(n^2) since O(n^3) / n = O(n^2)

16
Q

Why is determining the average-case complexity of an algorithm challenging?

A

It requires us to define a probability distribution on the set of inputs, which involves sophisticated probability theory. For this reason we focus on worst-case analysis. This approach also typically leads to better algorithms, since requiring an algorithm to perform well in the worst-case scenario means that it will perform well in every scenario.

17
Q

Seven functions for analyzing algorithms: Constant function

A

This is the simplest function, f(n) = c, where c is some fixed constant such as c = 5. As simple as it is, the constant function is useful in algorithm analysis because it characterizes the number of steps needed for a primitive operation.

18
Q

Primitive operations

A

Primitive operations correspond to low-level instructions with an execution time that is constant. Examples include:

  • Assigning a value to a variable
  • Following an object reference
  • Performing an arithmetic operation
  • Comparing two numbers
  • Accessing a single element of an array by an index
  • Calling a method
  • Returning from a method

Instead of trying to determine the specific execution time of each primitive operation, we simply count how many are executed and use this number t as a measure of the running time of the algorithm. The implicit assumption in this approach is that the running times of different primitive operations will be fairly similar.
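
For instance, counting primitive operations in a simple (illustrative) method that finds the largest element of an array:

    static int arrayMax(int[] data) {
        int currentMax = data[0];                  // 2 ops: index access + assignment
        for (int i = 1; i < data.length; i++) {    // ~n increments and comparisons
            if (data[i] > currentMax) {            // index access + comparison
                currentMax = data[i];              // at most 2 ops, when a new max is found
            }
        }
        return currentMax;                         // 1 op
    }
    // The exact count matters less than the conclusion: it grows linearly with n.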

19
Q

Seven functions for analyzing algorithms: Logarithm function

A

This function is defined as the inverse of a power, as follows:

x = log_b n if and only if b^x = n.

The value b is known as the base of the logarithm. Note that by the above definition, for any base b > 1, we have that log_b 1 = 0.

The most common base for a logarithm in computer science is 2, as computers store integers in binary. In fact, this base is so common that we will typically omit it from the notation when it is 2: log n = log_2 n.
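
The logarithm shows up whenever an algorithm repeatedly halves its input. A minimal sketch (binary search over a sorted array):

    // Each iteration halves the remaining search range, so the loop runs
    // about log_2(n) times: O(log n).
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;   // not found
    }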

20
Q

Seven functions for analyzing algorithms: Linear function

A

This is another simple yet important function:
f(n) = n.

This function arises anytime we have to perform an operation for each of n elements. The linear function also represents the best running time we can hope to achieve for any algorithm that processes each of n objects that are not already in the computer’s memory, because reading in the n objects already requires n operations.

21
Q

Seven functions for analyzing algorithms: N-Log-N function

A

This function assigns to an input n the value of n times the logarithm base-two of n:

f(n) = n log n

This function grows a little more rapidly than the linear function and a lot less rapidly than the quadratic function; therefore, we would greatly prefer an algorithm with a running time proportional to n log n to one with quadratic running time. There are several important algorithms that exhibit this complexity. For example, the fastest possible algorithms for sorting n arbitrary values require time proportional to n log n.
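
As a usage illustration, java.util.Arrays.sort on an object array runs in time proportional to n log n in the worst case (it uses a merge-sort variant):

    import java.util.Arrays;

    public class SortDemo {
        public static void main(String[] args) {
            Integer[] values = {5, 3, 9, 1, 7};
            Arrays.sort(values);                         // O(n log n) comparisons
            System.out.println(Arrays.toString(values)); // [1, 3, 5, 7, 9]
        }
    }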

22
Q

Seven functions for analyzing algorithms: Quadratic function

A

Given the input value n, the function f assigns the product of n with itself (in other words, “n squared”):

f(n) = n^2

The main reason why the quadratic function appears in the analysis of algorithms is that many algorithms have nested loops, where the inner loop performs a linear number of operations and the outer loop is performed a linear number of times. Thus, the algorithm performs n * n operations, which is n^2.

The quadratic function can also arise in the context of nested loops for which the first iteration of the inner loop uses one operation, the second uses two operations, the third uses three operations, and so on. The total number of operations is then 1 + 2 + ... + n = n(n+1)/2, which is quadratic in the number of times, n, we perform the outer loop.
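
A sketch of the second pattern, where the inner loop runs 1, 2, 3, ..., n times across the outer iterations:

    static int triangularCount(int n) {
        int ops = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 0; j < i; j++) {   // i iterations on the i-th outer pass
                ops++;
            }
        }
        return ops;   // 1 + 2 + ... + n = n(n+1)/2 operations => O(n^2)
    }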

23
Q

Seven functions for analyzing algorithms: Cubic function

A

The cubic function assigns to an input value n the product of n with itself three times: f(n) = n^3.

This function appears less frequently in algorithm analysis than the constant, linear, and quadratic functions.

24
Q

Seven functions for analyzing algorithms: Exponential function

A

This function assigns to the input argument n the value obtained by multiplying the base b by itself n times:
f(n) = b^n.
As was the case with the logarithm function, the most common base for the exponential function in algorithm analysis is b = 2.

25
Q

What is the ideal run time for data structure operations and algorithms?

A

Ideally, we would like data structure operations to run in time proportional to the constant or logarithm function, and we would like our algorithms to run in linear or n-log-n time.

26
Q

Big-Omega

A

This is an asymptotic way of saying that a function grows at a rate that is greater than or equal to that of another.

27
Q

Big-Theta

A

This is an asymptotic way of saying that two functions grow at the same rate, up to constant factors.
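
For reference, the standard formal definitions behind these notations (f(n) is the operation count, c is some positive constant, and n0 is a threshold input size):

  • f(n) is O(g(n)) if f(n) <= c * g(n) for all n >= n0 (upper bound).
  • f(n) is Omega(g(n)) if f(n) >= c * g(n) for all n >= n0 (lower bound).
  • f(n) is Theta(g(n)) if it is both O(g(n)) and Omega(g(n)) (tight bound).
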
28
Q

Order these seven functions by their asymptotic growth rate (smallest to largest)

A

1. Constant: 1
2. Logarithmic: log n
3. Linear: n
4. N-Log-N: n log n
5. Quadratic: n^2
6. Cubic: n^3
7. Exponential: 2^n

29
Q

Compare the complexities of LinkedLists and ArrayLists. Which data structure is better?

A

The better data structure depends on the use case. For example, if the use case requires frequent insertions and removals from the beginning of a list, then you should leverage a LinkedList. However, if the use case requires frequent retrieval of items by their index, then you should leverage an ArrayList.

30
Q

How do you combine Big-Oh complexities?

A

If two pieces of code run one after the other (sequentially), add their complexities and keep only the highest-order term. If one piece runs inside the other (nested, such as a loop within a loop), multiply their complexities.

31
Q

What is the time complexity of the following algorithm?

32
Q

What is the time complexity of the following algorithm?

33
Q

What is the time complexity of the following algorithm?

34
Q

What is the time complexity of the following algorithm?

35
Q

What is the time complexity of the following algorithm?