Complexity & Big-Oh Notation Flashcards
(35 cards)
How do you compare data structures?
By their algorithmic (time) complexity, which is evaluated as the number of operations that need to be performed in the WORST CASE scenario, expressed as a function of the number of elements in the collection. We use the worst case scenario for comparison since most algorithms perform the same, or similarly, in the best and average cases.
Algorithm complexity
This gives us an idea of how the number of operations in an algorithm relates to the number of elements/size of the data structures used (e.g. constant, linear, exponential). However, it does not necessarily give an indication of the exact computation time, which is dependent on hardware. That is, complexity is not the same as performance!
Constant complexity
If the number of operations is independent of the number of elements, then an algorithm is said to exhibit constant complexity. Therefore, if you double the input, the number of operations stays the same. This is denoted as O(1) complexity in Big-Oh notation.
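As a minimal illustration (a hypothetical Java sketch, not from the original cards), returning one element of an array costs the same few operations no matter how large the array is:

// Hypothetical sketch of constant complexity: one indexed access and one
// return, regardless of data.length, so the method is O(1).
public static int firstElement(int[] data) {
    return data[0];
}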
Linear complexity
If the number of operations is linearly proportional to the number of elements, an algorithm is said to exhibit linear complexity. That is, if you double the input then you double the number of operations. This is denoted as O(n) complexity in Big-Oh notation.
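As a minimal illustration (again a hypothetical Java sketch), a linear search visits each element at most once, so in the worst case the work grows in direct proportion to the number of elements:

// Hypothetical sketch of linear complexity: in the worst case (target absent)
// the loop body runs once per element, so the method is O(n).
public static boolean contains(int[] data, int target) {
    for (int value : data) {
        if (value == target) {
            return true;
        }
    }
    return false;
}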
What type of complexity does accessing an element in an ArrayList by its index demonstrate?
Constant complexity because the number of operations is independent of the number of elements
What type of complexity does accessing an element in a LinkedList by its index demonstrate?
Linear complexity, because in the worst case the list must be traversed node by node, so the number of operations is roughly proportional to the number of elements.
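A small demonstration using the standard java.util classes (a sketch to show the contrast, not a benchmark): both calls look identical, but ArrayList.get(i) reads one slot of its backing array in O(1) time, while LinkedList.get(i) has to walk the chain of nodes, which is O(n) in the worst case.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class IndexAccessDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        int fromArray = arrayList.get(50_000);   // O(1): direct lookup in the backing array
        int fromLinked = linkedList.get(50_000); // O(n): hops through roughly half the nodes
        System.out.println(fromArray + " " + fromLinked);
    }
}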
What is Big-Oh notation?
A mathematical notation for algorithmic complexity. Big-Oh notation is always expressed as O(g(n)), where g(n) is a general function (e.g. n if linear, n^2 if quadratic, etc.). This allows us to meaningfully compare algorithm complexities while ignoring whatever effects our hardware or other factors may have on execution time.
Asymptotic notation
When describing Big-Oh complexity, we choose a general function g(n) such that, as the input size n increases, the number of operations is ultimately bounded above by a constant multiple of g(n). This is known as asymptotic notation, and it is what lets Big-Oh complexity ignore constant factors and behaviour on small inputs.
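Written out precisely (the standard formal definition, stated here in LaTeX for reference):

% f(n) is O(g(n)) when some constant multiple of g(n)
% eventually dominates f(n) for all sufficiently large n.
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0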

If a function A is O(n) and function B runs operation A n times then what is the complexity of function B?
The complexity of function B is O(n × n) = O(n^2)
If a function A is O(n^2) and function B is O(n) and function C runs function A followed by function B, then what is the complexity of function C?
The complexity of C is O(n^2 + n), but since with Big-Oh notation we only care about the highest-order term, the complexity of C is simply O(n^2).
Note that ignoring the lower-order terms only works if the variables are the same. For complexities with multiple variables, for example O(n^2 + m), we cannot just discard the m because it may or may not be larger than n^2.
How do you evaluate the complexity of a function / algorithm?
Go through the code line by line, and account for the amount of time taken by each line.
- Typically single operations are O(1), whereas lines that call other functions will have the complexity of that function.
- Look for the use of loops and what they are conditioned on (i.e. whether the number of loop iterations is a function of the number of inputs or some constant number), as in the sketch below.
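For example, a hypothetical walk-through (the method and its annotations are illustrative, not from the cards): an O(1) statement, a constant-bounded loop, an input-dependent loop, and a call to a library sort, with only the dominant term kept at the end.

// Hypothetical walk-through: annotate each line, then keep only the dominant term.
public static double walkThrough(double[] data) {
    double total = 0.0;                      // single assignment: O(1)
    for (int i = 0; i < 10; i++) {           // bounded by a constant, not by n: O(1) overall
        total += 1.0;                        // O(1) per iteration
    }
    for (double value : data) {              // conditioned on input size: n iterations
        total += value;                      // O(1) per iteration, so the loop is O(n)
    }
    java.util.Arrays.sort(data);             // call to another function: contributes that
                                             // function's complexity, typically O(n log n)
    return total;                            // O(1)
}
// Overall: O(1) + O(1) + O(n) + O(n log n) = O(n log n).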
Amortized analysis
A technique we use to determine an algorithm’s complexity by averaging its total cost over a whole sequence of operations, rather than charging the worst case to every individual operation.
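The classic example is appending to a dynamic array (a hand-rolled sketch of the usual doubling strategy, assumed here rather than taken from the cards): an individual append occasionally costs O(n) for a resize, but over n appends the total cost is O(n), i.e. O(1) amortized per append.

// Minimal dynamic-array sketch to illustrate amortized O(1) appends.
public class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;

    public void append(int value) {
        if (size == data.length) {
            // Occasional O(n) step: copy everything into a twice-as-large array.
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;  // usual O(1) step
    }
}
// Over n appends the copies cost at most 1 + 2 + 4 + ... + n < 2n operations in total,
// so the whole sequence is O(n) and the amortized cost per append is O(1).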

What is the overall complexity when one loop runs inside another (nested loops)?
When the two loops are not executed sequentially but instead one loop runs within the other, their complexities are multiplied rather than added; two nested loops that each run n times therefore give an overall complexity of O(n^2).
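A hypothetical sketch of that pattern:

// The inner loop runs n times for each of the n outer iterations,
// so the body executes n * n times and the method is O(n^2).
public static int countPairs(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            count++;   // executed n * n times in total
        }
    }
    return count;
}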
Suppose we have an algorithm made up of two methods, J and K, where we know that the complexity of J is O(n^3) and the complexity of K is O(n). The algorithm runs function J three times in a row, then runs function K once, and finally runs function J one last time. What is the overall complexity of this algorithm?
O(n^3), since the total is 4 × O(n^3) + O(n) and Big-Oh notation discards both the constant factor and the lower-order term.
Suppose a method A has a complexity of O(n^3), and that all A does is call another method C n times. What must the Big-Oh complexity of the method C be?
O(n^2) since O(n^3) / n = O(n^2)
Why is determining the average-case complexity of an algorithm challenging?
It requires us to define a probability distribution on the set of inputs, which involves sophisticated probability theory. For this reason we focus on worst-case analysis. This approach also typically leads to better algorithms, since requiring an algorithm to perform well in the worst-case scenario means that it will do well in every scenario.
Seven functions for analyzing algorithms: Constant function
This is the simplest function, f(n) = c, where c is some fixed constant such as c = 5. As simple as it is, the constant function is useful in algorithm analysis because it characterizes the number of steps needed to perform a primitive operation.
Primitive operations
Primitive operations correspond to low-level instructions with an execution time that is constant. Examples include:
- Assigning a value to a variable
- Following an object reference
- Performing an arithmetic operation
- Comparing two numbers
- Accessing a single element of an array by an index
- Calling a method
- Returning from a method
Instead of trying to determine the specific execution time of each primitive operation, we simply count how many are executed and use this number t as a measure of the running time of the algorithm. The implicit assumption in this approach is that the running times of different primitive operations will be fairly similar.
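For instance (a hypothetical count, where the exact constants do not matter): each marked line of the method below contributes a small, fixed number of primitive operations, so t grows in proportion to the length of the input array.

// Finding the maximum: each commented line is a handful of primitive operations.
public static int arrayMax(int[] data) {
    int currentMax = data[0];                 // index access + assignment
    for (int i = 1; i < data.length; i++) {   // initialization once; comparison + increment per pass
        if (data[i] > currentMax) {           // index access + comparison
            currentMax = data[i];             // index access + assignment
        }
    }
    return currentMax;                        // return statement
}
// The precise totals are unimportant; counting shows t is proportional to data.length.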
Seven functions for analyzing algorithms: Logarithm function
This function is defined as the inverse of a power, as follows:
x = log_b n if and only if b^x = n.
The value b is known as the base of the logarithm. Note that by the above definition, for any base b > 1, we have that log_b 1 = 0.
The most common base for a logarithm in computer science is 2, as computers store integers in binary. In fact, this base is so common that we will typically omit it from the notation when it is 2: log n = log_2 n.
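The logarithm typically appears when an algorithm repeatedly halves its input. A small hypothetical sketch that just counts the halvings:

// Counts how many times n can be halved before reaching 1: about log_2(n) iterations.
public static int halvings(int n) {
    int count = 0;
    while (n > 1) {
        n = n / 2;   // the remaining input shrinks by half on every pass
        count++;
    }
    return count;
}
// halvings(8) == 3 and halvings(1_000_000) == 19: the growth is logarithmic, not linear.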
Seven functions for analyzing algorithms: Linear function
This is another simple yet important function:
f(n) = n.
This function arises anytime we have to perform an operation for each of n elements. The linear function also represents the best running time we can hope to achieve for any algorithm that processes each of n objects that are not already in the computer’s memory, because reading in the n objects already requires n operations.
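For example (a minimal sketch): summing n numbers has to read every one of them once, so its running time is proportional to n, and no algorithm for this task can do better.

// O(n): every element must be read exactly once.
public static long sum(int[] data) {
    long total = 0;
    for (int value : data) {
        total += value;
    }
    return total;
}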
Seven functions for analyzing algorithms: N-Log-N function
This function assigns to an input n the value of n times the logarithm base-two of n:
f(n) = n log n
This function grows a little more rapidly than the linear function and a lot less rapidly than the quadratic function; therefore, we would greatly prefer an algorithm with a running time proportional to n log n over one with quadratic running time. There are several important algorithms that exhibit this complexity. For example, the fastest possible algorithms for sorting n arbitrary values require time proportional to n log n.
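Merge sort is one such algorithm (sketched below as an illustration, not quoted from the cards): the array is halved about log n times, and each level of halving does O(n) work in the merges, giving O(n log n) overall.

import java.util.Arrays;

public class MergeSortSketch {
    // Sorts the array in O(n log n): about log n levels of recursion, O(n) merging per level.
    public static int[] mergeSort(int[] data) {
        if (data.length <= 1) {
            return data;
        }
        int mid = data.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(data, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(data, mid, data.length));
        return merge(left, right);
    }

    // Merging two sorted halves is linear in their combined length.
    private static int[] merge(int[] left, int[] right) {
        int[] merged = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            merged[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) {
            merged[k++] = left[i++];
        }
        while (j < right.length) {
            merged[k++] = right[j++];
        }
        return merged;
    }
}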
Seven functions for analyzing algorithms: Quadratic function
Given the input value n, the function f assigns the product of n with itself (in other words, “n squared”):
f(n) = n^2
The main reason why the quadratic function appears in the analysis of algorithms is that many algorithms have nested loops, where the inner loop performs a linear number of operations and the outer loop is performed a linear number of times. Thus, the algorithm performs n * n operations, which is n^2.
The quadratic function can also arise in the context of nested loops for which the first iteration uses one operation, the second uses two operations, the third uses three operations, and so on. That is, the number of operations in the inner loop increases by one each time, so that the total number of operations is quadratic in the number of times, n, we perform the outer loop.
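A hypothetical sketch of this second pattern: the inner loop does i units of work on the i-th pass, so the total is 1 + 2 + ... + n = n(n + 1)/2, which is still O(n^2).

// Inner loop grows with the outer index: total work is 1 + 2 + ... + n = n(n + 1)/2, i.e. O(n^2).
public static int triangularCount(int n) {
    int count = 0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= i; j++) {
            count++;   // executed n(n + 1)/2 times in total
        }
    }
    return count;
}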
Seven functions for analyzing algorithms: Cubic function
The cubic function assigns to an input value n the product of n with itself three times: f(n) = n^3.
This function appears less frequently in algorithm analysis than the constant, linear, and quadratic functions.
Seven functions for analyzing algorithms: Exponential function
This function assigns to the input argument n the value obtained by multiplying the base b by itself n times:
f(n) = b^n.
As was the case with the logarithm function, the most common base for the exponential function in algorithm analysis is b = 2.
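For instance (a hedged sketch, not from the cards): a set of n items has 2^n subsets, so any code that enumerates all of them, like the bitmask loop below, must perform an exponential number of iterations no matter how cleverly it is written.

// A set of n items has 2^n subsets, so the outer loop runs 2^n times (one bitmask per subset).
public static void printAllSubsets(int n) {
    for (long mask = 0; mask < (1L << n); mask++) {
        StringBuilder subset = new StringBuilder("{");
        for (int i = 0; i < n; i++) {
            if ((mask & (1L << i)) != 0) {
                subset.append(i).append(' ');
            }
        }
        System.out.println(subset.append('}'));
    }
}
// Even ignoring the inner loop, the total work is at least proportional to 2^n.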












