HTat Flashcards
1: Constant
The normal amount of time an instruction takes under the RAM model
Log n: Logarithmic
Occurs in algorithms that transform a bigger problem into a smaller one whose input size is a fraction of the original problem; common in searching and trees
n: Linear
Algorithms that are forced to pass through all n elements of the input a constant number of times yield linear running times
n^k log n (k >= 1): Polylogarithmic
Algorithms that break a problem into smaller parts, solve the smaller problems, and combine the results into a solution (e.g., divide-and-conquer algorithms such as merge sort, which runs in n log n time)
n^2: Quadratic
A subset of the polynomial solutions. Relatively efficient for small or medium-scale problems; typical of algorithms that have to analyse all pairs of elements of the input
n^3 (and higher): Cubic
Not very efficient but still polynomial. An example of an algorithm in this class is standard matrix multiplication
2^n: Exponential
This is as bad as testing all possible answers to a problem; when algorithms fall into this category, designers look for approximation algorithms
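As a rough sketch of where a few of these growth classes come from in code (illustrative Python; the function names and examples are assumptions, not taken from the course):

```python
# Illustrative sketch only: the loop structure of each function shows where
# the named growth class typically comes from. Function names are made up.

def constant(items):
    # O(1): a single instruction, independent of the input size n
    return items[0]

def linear_search(items, target):
    # O(n): one pass over all n elements
    for x in items:
        if x == target:
            return True
    return False

def binary_search(sorted_items, target):
    # O(log n): each step discards half of the remaining input
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def all_pairs(items):
    # O(n^2): examines every pair of elements
    return [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
```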
Worst case of an algorithm
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size
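For sequential search, for example, the worst-case inputs are those with no match (or with the match in the last position), so C_worst(n) = n comparisons.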
Advantages of finding the worst case of an algorithm
-Bounds the running time from above
-Ensures that, no matter the input of size n, the running time cannot be worse than C_worst(n)
Best case of an algorithm
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs the fastest among all the inputs of that size
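For sequential search, for example, the best-case input has the match in the first position, so C_best(n) = 1.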
Advantages of finding best case
If even the best case is not good enough, the approach may need to be revisited
How can we measure the time efficiency of an algorithm
The standard approach is to identify the basic operation(s) and count how many times it is executed.
As a rule of thumb, it is the most time-consuming operation in the innermost loop of the algorithm
The established framework is:
Count the number of times the algorithm’s basic operation is executed for inputs of size n
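A minimal sketch of this framework (an assumed example, not from the cards): for finding the largest element, the basic operation is the comparison inside the loop, and it executes n - 1 times for an input of size n.

```python
def find_max(items):
    # Basic operation: the comparison "x > largest" in the innermost loop
    comparisons = 0
    largest = items[0]
    for x in items[1:]:
        comparisons += 1          # count one execution of the basic operation
        if x > largest:
            largest = x
    return largest, comparisons   # comparisons == len(items) - 1
```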
Why can’t we use a standard unit of time to measure the efficiency of an algorithm
Because the measurement would be influenced by external factors such as:
* Dependence on the speed of a particular computer
* Dependence on the quality of the program implementing the algorithm
* Dependence on the quality of the compiler used to generate the executable
* Difficulty of clocking the time precisely
What does RAM stand for in the RAM model of computation
Random access machine
Do we have to take constants into account when talking about growth functions
No; for example, in a function such as 2x^3 + 10x^2 + 3, the only term that matters for growth is x^3 according to asymptotic notation
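Worked out explicitly: for x >= 1, 2x^3 + 10x^2 + 3 <= 2x^3 + 10x^3 + 3x^3 = 15x^3, so the whole expression grows no faster than a constant multiple of x^3.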
What is Asymptotic notation?
It provides the basic vocabulary for discussing the design and analysis of algorithms
Why is Asymptotic notation used
It’s ubiquitous because it identifies the right level of detail for reasoning about algorithms:
-Coarse enough to suppress all the details that can safely be ignored
-Precise enough to make useful comparisons between different algorithms
Big O notation
A function t(n) is in O(g(n)) if t(n) is bounded above by some constant multiple of g(n) for all sufficiently large n; that is, there exist a positive constant c and a non-negative integer n0 such that t(n) <= c * g(n) for all n >= n0
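For example, 100n + 5 is in O(n^2): for all n >= 1, 100n + 5 <= 100n + 5n = 105n <= 105n^2, so c = 105 and n0 = 1 witness the bound.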
What is the formula to find the average case of an algorithm?
For sequential search: C_avg(n) = (1 x p/n + 2 x p/n + ... + i x p/n + ... + n x p/n) + n x (1 - p)
where p is the probability of a successful search and p/n is the probability that the first match occurs in the ith position, for every i
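Worked through, the sum simplifies to C_avg(n) = p/n x (1 + 2 + ... + n) + n x (1 - p) = p(n + 1)/2 + n x (1 - p); so a guaranteed successful search (p = 1) averages (n + 1)/2 comparisons, and a guaranteed unsuccessful one (p = 0) takes n.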
Did you look over week 4 again?
Do it
Look over lecture 3
<
Look over all of week 3
<
Data Structures
Data structures are the building blocks of programming
-Define how data is organised and allocated
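A minimal sketch (illustrative Python, assumed example) contrasting two ways the same values can be organised:

```python
# Contiguous organisation: a dynamic array (Python list)
numbers = [3, 1, 4, 1, 5]

# Linked organisation: each node holds a value plus a reference to the next node
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

head = Node(3, Node(1, Node(4, Node(1, Node(5)))))
```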
Definition of ADT?
Abstract data type: a data type defined by the operations it supports (its behaviour), independently of any particular implementation
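A minimal sketch (assumed example): a stack treated as an abstract data type, defined by its operations rather than by how the data is stored.

```python
class Stack:
    """Users rely only on push/pop/is_empty; the list underneath is an implementation detail."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()   # raises IndexError if the stack is empty

    def is_empty(self):
        return len(self._items) == 0
```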