Flashcards in Algorithms And Complexity Deck (46):
What is an algorithm?
-To solve all instances of a problem with one program, we must specify a general step-by-step procedure that produces the solution to each instance
-This procedure is an algorithm
-The algorithm solves the problem
What is a parameter?
-variable values used in algorithms
-e.g. for finding a value (x) in a list (s) of size (n) there would be the 3 parameters x, s and n
What is a problem?
A question we seek to answer
What is a solution?
Answer to the question asked in that instance
What is an Instance?
-Each specific assignment of values to the parameters is called an instance of the problem (each time problem is defined with values)
What are the primary and secondary interests of efficiency?
-Primary = running time and operations on data structures (time complexity)
-Secondary = memory usage (space complexity)
How important are algorithms and choosing the right one?
-bigger the problem the more evident an efficient algorithm becomes
-should be considered a technology (same as hardware)
-choice of algorithm is as important as choice of hardware
What is the point of algorithmic analysis?
-measuring amount of work as it varies with the size of data affected
-measuring performance against another algorithm at the same task
-predicting performance to solve a task
What is algorithm performance?
-expressed as approximate rate of growth of work as size of data grows
-best, average and worst cases are discovered
-worst case is the primary issue
What is big-o notation?
-estimates rate of algorithm growth rather than time/space resources used
-not exact measurements
What are the rules of Big-o notation?
-discard coefficients, logarithmic bases and constants
-determine how the time required by repetitive work grows as the size of the data does
-nested relationships = multiply
-sequential relationships = add
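A minimal Python sketch of the nested-vs-sequential rules (function names are illustrative, not from the deck):

```python
def sequential(n):
    # Two loops one after the other: O(n) + O(n) = O(2n) -> O(n)
    # after discarding the coefficient
    count = 0
    for _ in range(n):
        count += 1
    for _ in range(n):
        count += 1
    return count

def nested(n):
    # A loop inside a loop: O(n) * O(n) = O(n^2)
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count
```

For n = 10, `sequential` does 20 steps (linear) while `nested` does 100 (quadratic) — the multiply rule dominates as n grows.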
What is the order of efficiency measures in big-o notation?
-fastest to slowest:
O(1), O(logN), O(N), O(NlogN), O(N^2), O(N^3), O(2^N), O(N!)
How do we simplify Big-O expressions?
-split into dominant and lesser terms
-discard lesser terms
What else can be measured using big-o?
-two algorithms may be equal on time but one may consume more memory
What is big O concerned with and what does it do to a function?
-concerned with eventual behaviour
-puts asymptotic upper bound on function
What is Omega?
-describes least amount of a resource that an algorithm needs for some class of input
-measures least amount of time mainly
-lower bound of algorithm
What is theta?
-intersection between omega and big O
What is a problem with growth rate measuring?
-for the typical algorithm it is easy to recognise the growth rate
-distinction between upper and lower bound is only worthwhile when you have incomplete knowledge about the thing being measured
What mistake do people often make with upper bound, lower bound and best/worst case?
-upper bound does not mean worst case and lower bound does not mean best case
-best and worst case give us a concrete input instance that we can apply to the algorithm description to get a cost measure
-what is being bounded is the growth rate for the cost not the cost itself
-growth rate applies to change in cost as input size changes
What is a misconception people have with best and worst case?
-best case does not occur when size is small, and vice versa
-best and worst case instances exist for each possible size of input
What is the issue with a fast algorithm? How do we choose algorithms?
-the faster an algorithm is, the more complicated it tends to be to write
-when choosing we need to consider: the size of the data, the amount of memory you have, the amount of time you have, and whether we want a fast algorithm that produces an OK solution or a slower one that produces a better solution
What is the difference between decomposition and recursive decomposition?
-decomposition divides a problem into dissimilar sub-problems
-recursive decomposition divides a problem into smaller versions of the same problem
What properties must problems have to be solved recursively?
-must show self similarity
-structure repeats itself at different levels in scale
-solve larger problems means solving smaller problems within
What is the base case?
-simplest version of problem that can be solved
What is recursive case?
-make calls to self to get results for smaller, simpler versions
-recursive calls must advance to base case
-results of recursive calls combine to solve larger version
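A minimal sketch of a base case and a recursive case (factorial is an assumed example, not from the deck):

```python
def factorial(n):
    # Base case: simplest version of the problem, directly solvable
    if n <= 1:
        return 1
    # Recursive case: call self on a smaller version (advances toward
    # the base case) and combine its result to solve the larger version
    return n * factorial(n - 1)
```

Each call shrinks n by one, so the recursion is guaranteed to reach the base case.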
What are some problems with recursion?
-can require same resources as alternative approach
-may be much more or much less efficient
-for problems with simple iterative solution, iteration is likely best
Why use recursion?
-can express problems with clear, elegant, direct code
-can intuitively model a task that is recursive in nature
-solution may require recursion
- iteration may not be an option
How do we do recursive decomposition?
-find recursive sub structure
-solve problem using result from smaller problems
-identify base case
-simplest possible case, directly solvable, recursion advances to it
What are common patterns in recursion?
-handle first and or last, recur on remaining
-divide in half, recur on one/both halves
-make a choice on options and recur on updated state
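Two of the patterns above, sketched in Python (example problems are assumptions, not from the deck):

```python
def total(items):
    # Pattern: handle first, recur on the remaining
    if not items:            # base case: empty list
        return 0
    return items[0] + total(items[1:])

def largest(items):
    # Pattern: divide in half, recur on both halves
    if len(items) == 1:      # base case: single element
        return items[0]
    mid = len(items) // 2
    return max(largest(items[:mid]), largest(items[mid:]))
```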
What is the difference between functional and procedural recursion?
-function returns result
-procedural returns void
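A small sketch of the distinction (function names are illustrative):

```python
def sum_to(n):
    # Functional recursion: each call returns a result to its caller
    if n == 0:
        return 0
    return n + sum_to(n - 1)

def countdown(n):
    # Procedural recursion: performs an action and returns nothing (void)
    if n == 0:
        return
    print(n)
    countdown(n - 1)
```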
What are the time complexities of binary and linear search?
-Binary = O(logN)
-linear = O(n)
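Both searches sketched in Python for comparison (returning the index, or -1 if absent, is an assumed convention):

```python
def linear_search(s, x):
    # O(n): check each element in turn; works on unsorted data
    for i, value in enumerate(s):
        if value == x:
            return i
    return -1

def binary_search(s, x):
    # O(logN): s must be sorted; halve the search range each step
    low, high = 0, len(s) - 1
    while low <= high:
        mid = (low + high) // 2
        if s[mid] == x:
            return mid
        elif s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```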
What is interpolation search?
-exploits additional knowledge about values
-similar to binary but uses different method to find mid point
-mid = low + ((x - s[low]) / (s[high] - s[low])) * (high - low)
-average case is O(log(logN))
-worst case is O(n)
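A sketch of interpolation search using the mid-point formula above (integer division stands in for rounding; this is one common variant, not the deck's exact version):

```python
def interpolation_search(s, x):
    # Assumes s is sorted; exploits knowledge of the value distribution
    low, high = 0, len(s) - 1
    while low <= high and s[low] <= x <= s[high]:
        if s[high] == s[low]:
            mid = low                # avoid division by zero
        else:
            # Estimate the position by linear interpolation between
            # the values at low and high
            mid = low + (x - s[low]) * (high - low) // (s[high] - s[low])
        if s[mid] == x:
            return mid
        elif s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```

On uniformly distributed data the estimate lands near the target, giving the O(log(logN)) average case.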
What is robust interpolation search?
-in the worst case normal interpolation search degrades to sequential search
-uses a variable gap
-gap = (high - low + 1)^(1/2)
-mid then calculated using normal formula
-after that mid is changed with special formula
-O(log(logN)) average case
-O((logN)^2) worst which is worse than binary but better than interpolation
What is the search tree algorithm?
-in a binary search tree every node has at most 2 children
-go down the tree comparing the value to each node: if the value you are after is smaller go to the left branch, and vice versa
-average is O(logN)
-worst is O(n)
-a balanced binary search tree is guaranteed O(logN)
-auto sorts data
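A minimal BST search sketch (the `Node` class is an assumed representation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def bst_search(node, x):
    # Smaller values live in the left subtree, larger in the right
    if node is None:
        return False              # fell off the tree: not present
    if x == node.key:
        return True
    if x < node.key:
        return bst_search(node.left, x)
    return bst_search(node.right, x)
```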
What are bubble and insertion sort?
-bubble sort repeatedly swaps adjacent out-of-order elements; easy but slow
-insertion sort is based on sequential search, similar to sorting playing cards in order; easy to implement and cheap on space but not on time
-used when data is nearly ordered
-best case is O(N)
-worst case is O(n^2)
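Insertion sort sketched in Python (in-place, as the card's space remark suggests):

```python
def insertion_sort(a):
    # Like sorting cards: insert each item into the sorted prefix
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right to make room for key;
        # on nearly ordered data this inner loop barely runs -> O(N) best case
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```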
What is merge sort?
-split —> merge
-easy split, hard join
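The "easy split, hard join" shape, sketched in Python:

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    # Easy split: just cut the list in half
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Hard join: merge the two sorted halves back together
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```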
What is quick sort?
-Hard split, easy-join
-Best = O(nlog(n))
-worst = O(n^2)
-pivot choice decided in many ways, can be first or last item, random etc
-can be the median value (the algorithm to calculate this takes O(n) but will give quick sort a guaranteed O(nlog(n)))
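The "hard split, easy join" shape, sketched in Python (first-item pivot, one of the choices listed; a non-in-place variant for clarity):

```python
def quick_sort(a):
    if len(a) <= 1:
        return a
    # Hard split: partition around a pivot (here the first item)
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    # Easy join: sorted smaller + pivot + sorted larger
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```

A sorted input with a first-item pivot gives the O(n^2) worst case, which is why random or median pivots are used.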
Merge vs quicksort
-QS is not as memory hungry
-QS moves elements quicker to position
-Merge will always be O(nlog(n)) but quick can be as slow as O(n^2)
What is a priority queue?
-restricted list of items arranged by priority values
-items enter the queue based on priority level, so they won't be stuck at the back
-items removed from it must be highest priority of all values in queue. If more than one item with same priority then the one that has been in there the longest is removed
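A sketch of these rules using Python's `heapq` (the entry counter is the standard trick for the "longest in the queue wins ties" rule; names are illustrative):

```python
import heapq
import itertools

counter = itertools.count()   # tie-breaker: earlier insertions get lower numbers
pq = []

def pq_insert(item, priority):
    # heapq is a min-heap, so negate the priority to pop highest first
    heapq.heappush(pq, (-priority, next(counter), item))

def pq_remove():
    # Pops the highest-priority item; on equal priority the item
    # that has been in the queue longest comes out first
    return heapq.heappop(pq)[2]
```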
What do these operations mean?
1) insert value x into set S
2) return the element of S with the largest key
3) return the element of S with the largest key and remove it from S
4) increase the value of element x's key to new value k
What is a complete binary tree (CBT)
-a binary tree with leaves on either a single level or on two adjacent levels such that the leaves on the bottommost level are placed as far left as possible
-and such that nodes appear in every possible position in each level above the bottom level
What is a heap?
-implementation of a priority queue
-an array visualised as a nearly CBT
-max heap property = key of a node is larger or equal that of the keys of its children
-min heap = opposite
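The array-as-tree view can be sketched as index arithmetic (0-based indexing assumed here; a 1-based presentation uses 2i and 2i+1 instead):

```python
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

def is_max_heap(a):
    # Max-heap property: every node's key >= the keys of its children,
    # i.e. every non-root element is <= its parent
    return all(a[parent(i)] >= a[i] for i in range(1, len(a)))
```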
What do the following operations do?
1) produce a max_heap from an unordered array
2) correct a single violation of the heap property in a sub tree of its root
3) inserts a new element
4) extracts the element with the highest key
5) sorts an array in place
How do we build a heap?
-built in a bottom up manner
-elements in the subarray A[(n/2 + 1) .. n] are all leaves of the tree; each one is a one-element heap to begin with
-build_max_heap goes through the remaining nodes of the tree and runs max_heapify on each one
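The bottom-up build, sketched in Python (0-based arrays, so the leaves are a[n//2:] and the internal nodes run from n//2 - 1 down to 0):

```python
def max_heapify(a, i, n):
    # Fix a single violation of the max-heap property at node i,
    # assuming the subtrees below it are already max-heaps
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)   # the violation may move down the tree

def build_max_heap(a):
    # Leaves are already one-element heaps, so heapify only the
    # internal nodes, last-first (bottom-up)
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)
    return a
```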
How do we use heaps to sort?
-find the max element of A (the root of the max heap)
-swap elements A[n] and A[1]; the max element is now at the end of the array
-discard node n from the heap
-the new root may violate the max heap property, so use max_heapify to fix it
-go back to the swap step unless the heap is empty
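The steps above, sketched as an in-place heap sort (0-based arrays, so A[1]/A[n] become a[0]/a[end]; names are illustrative):

```python
def heap_sort(a):
    def sift_down(i, n):
        # Restore the max-heap property at node i within a[0:n]
        while True:
            l, r, largest = 2 * i + 1, 2 * i + 2, i
            if l < n and a[l] > a[largest]:
                largest = l
            if r < n and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    n = len(a)
    # Build the max heap bottom-up
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Repeatedly swap the max (root) to the end, shrink the heap,
    # and fix the new root's violation
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```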