Asymptotic Notation Flashcards

1
Q

Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on the input size. It is commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows. The three most commonly used notations are:

A

Big O, Omega, and Theta.

2
Q

Big O notation

A

This notation provides an upper bound on the growth rate of an algorithm’s running time or space usage. It is commonly used to describe the worst-case scenario, i.e., the maximum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm’s running time is O(n), then the running time grows at most linearly with the input size n.
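The idea can be sketched with a hypothetical example: summing an array does a constant amount of work per element, so its running time is O(n).

```java
// Hypothetical sketch: summing an array performs one addition per element,
// so the total work grows at most linearly with n -- the loop is O(n).
public class SumArray {
    public static long sum(int[] a) {
        long total = 0;
        for (int x : a) {   // executes exactly a.length times
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // 10
    }
}
```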

3
Q

Omega notation

A

This notation provides a lower bound on the growth rate of an algorithm’s running time or space usage. It is commonly used to describe the best-case scenario, i.e., the minimum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm’s running time is Ω(n), then the running time grows at least linearly with the input size n.

4
Q

Theta notation

A

This notation provides both an upper and a lower bound on the growth rate of an algorithm’s running time or space usage, i.e., a tight bound. It is often used to describe the average-case scenario, i.e., the amount of time or space an algorithm typically needs to solve a problem. For example, if an algorithm’s running time is Θ(n), then the running time grows linearly with the input size n.

5
Q

Usually, the analysis of an algorithm is done based on three cases:

A

Best Case (Omega Notation (Ω))
Average Case (Theta Notation (Θ))
Worst Case (Big-O Notation (O))

6
Q

Omega (Ω) notation specifies the asymptotic lower bound for a function f(n). For a given function g(n), Ω(g(n)) is denoted by:

A

Ω (g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0}.

This means that f(n) = Ω(g(n)) if there exist positive constants c and n0 such that, to the right of n0, f(n) always lies on or above c*g(n).
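The definition can be checked numerically for a concrete pair of functions. The values below are chosen purely for illustration: f(n) = 3n + 2 is Ω(n), witnessed by c = 3 and n0 = 1, since 0 ≤ 3n ≤ 3n + 2 for every n ≥ 1.

```java
// Illustrative check (constants chosen for this example): f(n) = 3n + 2
// is Omega(n) with c = 3 and n0 = 1, because 0 <= 3n <= 3n + 2 holds
// for every n >= 1.
public class OmegaCheck {
    public static boolean holds(long n) {
        long f = 3 * n + 2;   // f(n)
        long cg = 3 * n;      // c * g(n) with c = 3, g(n) = n
        return 0 <= cg && cg <= f;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n *= 10) {
            System.out.println("n = " + n + ", inequality holds: " + holds(n));
        }
    }
}
```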

7
Q

Calculating Omega notation for a program

A

Break the program into smaller segments.

Find the number of operations performed for each segment (in terms of the input size), assuming the given input is such that the program takes the least amount of time.

Add up all the operations and simplify it, let’s say it is f(n).

Remove all the constants and choose the term having the least order, or any other function that is always less than f(n) as n tends to infinity; let’s say it is g(n). Then the Omega of f(n) is Ω(g(n)).
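The steps above can be sketched by counting operations in the best case. For linear search (used here only as an example), the least work occurs when the target sits at index 0, so the lower bound is Ω(1).

```java
// Hedged sketch of best-case operation counting: for linear search, the
// fewest comparisons happen when the target is the first element, giving
// a lower bound of Omega(1) for the program as a whole.
public class BestCaseCount {
    // Returns the number of comparisons linear search performs.
    public static int comparisons(int[] a, int target) {
        int count = 0;
        for (int x : a) {
            count++;
            if (x == target) break;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 1};
        System.out.println(comparisons(a, 7)); // best case: 1 comparison
        System.out.println(comparisons(a, 1)); // worst case: 4 comparisons
    }
}
```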

8
Q

Big-Theta (Θ) notation specifies an asymptotically tight bound for a function f(n). For a given function g(n), Θ(g(n)) is denoted by:

A

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0}.

This means that f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that, to the right of n0, f(n) always lies on or above c1*g(n) and on or below c2*g(n).
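As with Omega, the two-constant definition can be verified numerically for a concrete case. The constants below are chosen only for illustration: f(n) = 3n + 2 is Θ(n) with c1 = 1, c2 = 5, and n0 = 1, since n ≤ 3n + 2 ≤ 5n for every n ≥ 1.

```java
// Illustrative check (constants chosen for this example): f(n) = 3n + 2
// is Theta(n) with c1 = 1, c2 = 5, n0 = 1, because
// 1*n <= 3n + 2 <= 5*n holds for every n >= 1.
public class ThetaCheck {
    public static boolean holds(long n) {
        long f = 3 * n + 2;
        return n <= f && f <= 5 * n;  // c1*g(n) <= f(n) <= c2*g(n)
    }

    public static void main(String[] args) {
        System.out.println(holds(1) && holds(100) && holds(1_000_000));
    }
}
```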

9
Q

Calculate theta notation

A

Break the program into smaller segments.

Find all types of inputs and calculate the number of operations they take to be executed. Make sure that the input cases are equally distributed.

Find the sum of all the calculated values and divide the sum by the total number of inputs. Let’s say the function of n obtained, after removing all the constants, is g(n); then in Θ notation it is represented as Θ(g(n)).
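The averaging step above can be sketched for linear search, assuming (purely for illustration) that the target is equally likely to be at each of the n positions. Searching for the element at position i costs i + 1 comparisons, so the average is (1 + 2 + … + n) / n = (n + 1) / 2, which simplifies to Θ(n).

```java
// Sketch of the averaging step for linear search, assuming each of the
// n positions is equally likely to hold the target. The average number
// of comparisons is (1 + 2 + ... + n) / n = (n + 1) / 2, i.e. Theta(n).
public class AverageCase {
    public static double averageComparisons(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;            // cost when the target is at position i
        }
        return (double) total / n; // divide by the number of input cases
    }

    public static void main(String[] args) {
        System.out.println(averageComparisons(100)); // (100 + 1) / 2 = 50.5
    }
}
```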

10
Q

Big-O (O) notation specifies the asymptotic upper bound for a function f(n). For a given function g(n), O(g(n)) is denoted by:

A

O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0}.

This means that f(n) = O(g(n)) if there exist positive constants c and n0 such that, to the right of n0, f(n) always lies on or below c*g(n).

11
Q

Calculating Big O notation for a program:

A

Break the program into smaller segments.

Find the number of operations performed for each segment (in terms of the input size), assuming the given input is such that the program takes the maximum time, i.e., the worst-case scenario.

Add up all the operations and simplify it, let’s say it is f(n).

Remove all the constants and choose the term having the highest order, because as n tends to infinity the constants and the lower-order terms in f(n) become insignificant; let’s say the resulting function is g(n). Then the big-O notation is O(g(n)).
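The steps above can be sketched with a hypothetical program whose operation count is f(n) = n² + 2n + 1. For large n the n² term dominates, so after dropping constants and lower-order terms the bound is O(n²).

```java
// Worked sketch of dominant-term analysis: the counter below tallies
// f(n) = n^2 + 2n + 1 operations. Dropping the lower-order terms and
// constants leaves the dominant term, giving O(n^2).
public class DominantTerm {
    public static long operations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)        // n iterations
            for (int j = 0; j < n; j++)    // n iterations each -> n^2 ops
                count++;
        for (int i = 0; i < 2 * n; i++)    // 2n ops
            count++;
        count++;                           // 1 op
        return count;                      // total: n^2 + 2n + 1
    }

    public static void main(String[] args) {
        System.out.println(operations(10)); // 100 + 20 + 1 = 121
    }
}
```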

12
Q

Why perform analysis based on input size in complexity analysis of algorithms?

A

There are many important things that should be taken care of, like user-friendliness, modularity, security, maintainability, etc. Why worry about performance? The answer is simple: we can have all of the above only if we have performance. Performance is like a currency through which we can buy all those other things. Another reason for studying performance is that speed is fun!

To summarize, performance == scale. Imagine a text editor that can load 1000 pages but can spell-check only one page per minute, or an image editor that takes an hour to rotate your image 90 degrees left. If a software feature cannot cope with the scale of tasks users need to perform, it is as good as dead.

13
Q

How to study efficiency of algorithms?
The way to study the efficiency of an algorithm is to implement it and experiment by running the program on various test inputs while recording the time spent during each execution. A simple mechanism in Java is based on the use of the currentTimeMillis() method of the System class for collecting such running times. That method reports the number of milliseconds that have passed since a benchmark time known as the epoch (January 1, 1970 UTC).

A

The key is to record the time immediately before executing the algorithm and again immediately after it; the difference between the two is the elapsed running time.

long start = System.currentTimeMillis(); // record the starting time
/* (run the algorithm) */
long end = System.currentTimeMillis();   // record the ending time
long elapsed = end - start;              // total time elapsed

Measuring elapsed time provides a reasonable reflection of an algorithm’s efficiency.
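The fragment above can be turned into a complete, runnable program. The algorithm being measured here (a simple array sum) is only an illustrative stand-in.

```java
// Complete, runnable version of the timing pattern, applied to a simple
// array sum. Any algorithm could be substituted between the two readings.
public class TimingDemo {
    static long sum(long[] a) {
        long s = 0;
        for (long x : a) s += x;
        return s;
    }

    public static void main(String[] args) {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        long start = System.currentTimeMillis(); // record the starting time
        long result = sum(data);                 // run the algorithm
        long end = System.currentTimeMillis();   // record the ending time

        System.out.println("sum = " + result
                + ", elapsed = " + (end - start) + " ms");
    }
}
```

Note that wall-clock measurements like this are sensitive to the machine, the JVM warm-up, and other running processes, which is one motivation for the asymptotic approach discussed in the following cards.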

14
Q

Given two algorithms for a task, how do we find out which one is better?

A

Asymptotic analysis: compare how their running times grow with input size, ignoring machine-dependent constants.

15
Q

linear search

A

order of growth is linear
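A minimal sketch of linear search: it scans each element once, so the number of comparisons grows linearly with the array length, i.e., O(n).

```java
// Minimal linear search sketch: one comparison per element in the worst
// case, so the order of growth is linear -- O(n).
public class LinearSearch {
    public static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i; // found at position i
        }
        return -1; // not present
    }

    public static void main(String[] args) {
        int[] a = {4, 8, 15, 16, 23, 42};
        System.out.println(indexOf(a, 16)); // 3
        System.out.println(indexOf(a, 99)); // -1
    }
}
```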

16
Q

binary search

A

order of growth is logarithmic
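A minimal sketch of binary search over a sorted array: each step halves the remaining range, so it takes O(log n) comparisons.

```java
// Minimal binary search sketch over a sorted array: the search range is
// halved at every step, so the order of growth is logarithmic -- O(log n).
public class BinarySearch {
    public static int indexOf(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids integer overflow
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not present
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(indexOf(a, 7));  // 3
        System.out.println(indexOf(a, 4));  // -1
    }
}
```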

17
Q

Challenges of Experimental Analysis:

A

Experimental running times of two algorithms are difficult to compare directly unless the experiments are performed in the same hardware and software environments.

Experiments can be done only on a limited set of test inputs; hence, they leave out the running times of inputs not included in the experiment (and these inputs may be important).

To overcome the challenges in the Experimental analysis Asymptotic Analysis is used.

18
Q

Does Asymptotic Analysis always work?
Asymptotic analysis is not perfect, but it is the best way available for analyzing algorithms. For example, say there are two sorting algorithms that take 1000nLogn and 2nLogn time respectively on a machine. Both of these algorithms are asymptotically the same (the order of growth is nLogn). So, with asymptotic analysis, we cannot judge which one is better, as we ignore constants.

A

Also, in asymptotic analysis, we always talk about input sizes larger than a constant value. It might be possible that those large inputs are never given to your software, and an asymptotically slower algorithm always performs better for your particular situation. So, you may end up choosing an algorithm that is asymptotically slower but faster for your software.

19
Q

Advantages of asymptotic analysis

A

Asymptotic analysis provides a high-level understanding of how an algorithm performs with respect to input size.

It is a useful tool for comparing the efficiency of different algorithms and selecting the best one for a specific problem.

It helps in predicting how an algorithm will perform on larger input sizes, which is essential for real-world applications.

Asymptotic analysis is relatively easy to perform and requires only basic mathematical skills.

20
Q

Disadvantages of asymptotic analysis

A

Asymptotic analysis does not provide an accurate running time or space usage of an algorithm.

It assumes that the input size is the only factor that affects an algorithm’s performance, which is not always the case in practice.

Asymptotic analysis can sometimes be misleading, as two algorithms with the same asymptotic complexity may have different actual running times or space usage.

It is not always straightforward to determine the best asymptotic complexity for an algorithm, as there may be trade-offs between time and space complexity.

21
Q

Applications of asymptotic analysis

A

In applied mathematics, asymptotic analysis is used to build numerical methods to approximate equation solutions.

In mathematical statistics and probability theory, asymptotics are used in analysis of long-run or large-sample behaviour of random variables and estimators.

In computer science in the analysis of algorithms, considering the performance of algorithms.

In analyzing the behavior of physical systems, an example being statistical mechanics.

In accident analysis, when identifying the causes of crashes through count modeling with large numbers of crash counts in a given time and space.