Algorithms And Complexity Flashcards Preview


Flashcards in Algorithms And Complexity Deck (46):
1

What is an algorithm?

-To produce a program that solves all instances of a problem, we must specify a general step-by-step procedure for producing the solution to each instance
-This procedure is an algorithm
-The algorithm solves the problem

2

What is a parameter?

-variable values used in algorithms
-e.g. for finding a value (x) in a list (s) of size (n) there would be the 3 parameters x, s and n

3

What is a problem?

A question we seek to answer

4

What is a solution?

Answer to the question asked in that instance

5

What is an Instance?

-Each specific assignment of values to the parameters is called an instance of the problem (each time problem is defined with values)

6

What are the primary and secondary interests of efficiency?

-Primary = running time and operations on data structures (time complexity)
-Secondary = memory usage (space complexity)

7

How important are algorithms and choosing the right one?

-the bigger the problem, the more evident the benefit of an efficient algorithm becomes
-algorithms should be considered a technology (the same as hardware)
-the choice of algorithm is as important as the choice of hardware

8

What is the point of algorithmic analysis?

-measuring amount of work as it varies with the size of data affected
-measuring performance against another algorithm at the same task
-predicting performance to solve a task

9

What is algorithm performance?

-expressed as approximate rate of growth of work as size of data grows
-best, average and worst cases are discovered
-worst case is the primary issue

10

What is big-o notation?

-estimates rate of algorithm growth rather than time/space resources used
-not exact measurements

11

What are the rules of Big-o notation?

-discard coefficients, logarithmic bases and constants
-determine how the time required by repetitive work grows as the size of the data grows
-nested relationships = multiply
-sequential relationships = add
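The multiply/add rules can be sketched in Python (hypothetical helper functions, for illustration only):

```python
def sequential(data):
    # Two loops one after the other: O(N) + O(N) = O(N).
    total = 0
    for x in data:          # O(N)
        total += x
    for x in data:          # O(N)
        total -= x
    return total

def nested(data):
    # A loop inside a loop: O(N) * O(N) = O(N^2).
    pairs = 0
    for x in data:          # O(N) outer iterations
        for y in data:      # O(N) inner iterations each
            pairs += 1
    return pairs
```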

12

What is the order of efficiency measures in big-o notation?

-fastest to slowest:
O(1), O(logN), O(N), O(NlogN), O(N^2), O(N^3), O(2^N), O(N!)

13

How do we simplify Big-O expressions?

-split into dominant and lesser terms
-discard lesser terms
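For instance, with a hypothetical cost function the dominant term alone determines the Big-O class:

```python
def cost(n):
    # Dominant term: 3n^2. Lesser terms: 10n + 5.
    return 3 * n**2 + 10 * n + 5

# As n grows the n^2 term swamps the rest, so cost(n) is O(n^2):
# of cost(1000) = 3,010,005, the 3n^2 term alone contributes 3,000,000.
```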

14

What else can be measured using big-o?

-memory requirements
-some algorithms may be equal on time but may consume more memory

15

What is big O concerned with and what does it do to a function?

-concerned with eventual behaviour
-puts asymptotic upper bound on function

16

What is Omega?

-describes the least amount of a resource that an algorithm needs for some class of input
-mainly used to measure the least amount of time
-lower bound of the algorithm

17

What is theta?

-the intersection between omega and big O: when the upper and lower bounds coincide, the bound is tight

18

What is a problem with measuring growth rates?

-for an average problem it is easy to recognise the growth rate of its algorithm
-the distinction between upper and lower bound is only worthwhile when you have incomplete knowledge about the thing being measured

19

What mistake do many people make with UB, LB and best/worst case?

-upper bound does not mean worst case, and lower bound does not mean best case
-best and worst case give us a concrete input instance that we can apply to the algorithm description to get a cost measure
-what is being bounded is the growth rate of the cost, not the cost itself
-the growth rate applies to the change in cost as the input size changes

20

What is a misconception people have with best and worst case?

-the best case does not necessarily occur when the input size is small, and vice versa
-best and worst case instances exist for each possible size of input

21

What is the issue with a fast algorithm? How do we choose algorithms?

-the faster an algorithm is, the more complicated it tends to be to write
-when choosing we need to consider: the size of the data, the amount of memory you have, the amount of time you have, and whether we want a fast algorithm that produces an OK solution or a slow one that produces a good solution

22

What is the difference between decomposition and recursive decomposition?

-decomposition divides a problem into dissimilar sub-problems
-recursive decomposition divides a problem into smaller versions of the same problem

23

What properties must problems have to be solved recursively?

-must show self similarity
-structure repeats itself at different levels in scale
-solving a larger problem means solving smaller problems within it

24

What is the base case?

-simplest version of problem that can be solved

25

What is recursive case?

-make calls to self to get results for smaller, simpler versions
-recursive calls must advance to base case
-results of recursive calls combine to solve larger version

26

What are some problems with recursion?

-can require same resources as alternative approach
-may be much more or much less efficient
-for problems with simple iterative solution, iteration is likely best

27

Why use recursion?

-can express problems with clear, elegant, direct code
-can intuitively model a task that is recursive in nature
-solution may require recursion
- iteration may not be an option

28

How do we do recursive decomposition?

-find recursive sub structure
-solve problem using result from smaller problems
-identify base case
-simplest possible case, directly solvable, recursion advances to it
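The steps above can be sketched with a simple recursive sum (illustrative only):

```python
def rec_sum(items):
    # Base case: the simplest version, directly solvable (empty list).
    if not items:
        return 0
    # Recursive case: handle the first element, recur on the rest.
    # Each call shrinks the list, so recursion advances to the base case.
    return items[0] + rec_sum(items[1:])
```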

29

What are common patterns in recursion?

-handle first and or last, recur on remaining
-divide in half, recur on one/both halves
-make a choice on options and recur on updated state

30

What is the difference between functional and procedural recursion?

-function returns result
-procedural returns void
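A minimal sketch of the two styles (hypothetical examples):

```python
def factorial(n):
    # Functional recursion: every call returns a result to its caller.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def countdown(n):
    # Procedural recursion: performs an action (printing) and returns nothing.
    if n == 0:
        return
    print(n)
    countdown(n - 1)
```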

31

What are the time complexities of binary and linear search?

-binary = O(logN)
-linear = O(N)
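Both searches can be sketched in Python (binary search assumes the list is sorted):

```python
def linear_search(s, x):
    # O(N): check every element in turn.
    for i, value in enumerate(s):
        if value == x:
            return i
    return -1

def binary_search(s, x):
    # O(logN): halve the search range each step; s must be sorted.
    low, high = 0, len(s) - 1
    while low <= high:
        mid = (low + high) // 2
        if s[mid] == x:
            return mid
        elif s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```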

32

What is interpolation?

-exploits additional knowledge about the values
-similar to binary search but uses a different method to find the mid point
-mid = low + ((x - s(low)) / (s(high) - s(low))) * (high - low)
-average case is O(log(logN))
-worst case is O(n)
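A sketch of interpolation search using the mid-point formula above (assumes a sorted, roughly uniformly distributed list):

```python
def interpolation_search(s, x):
    low, high = 0, len(s) - 1
    # Only continue while x could plausibly lie between the endpoints.
    while low <= high and s[low] <= x <= s[high]:
        if s[high] == s[low]:
            mid = low                  # all remaining values equal: avoid /0
        else:
            # Interpolate x's likely position between the endpoint values.
            mid = low + ((x - s[low]) * (high - low)) // (s[high] - s[low])
        if s[mid] == x:
            return mid
        elif s[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```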

33

What is robust interpolation search?

-in the worst case normal interpolation is the same as sequential search
-uses a variable gap
-gap = (high - low + 1)^(1/2)
-mid is then calculated using the normal formula
-after that mid is adjusted with a special formula
-O(log(logN)) average case
-O((logN)^2) worst case, which is worse than binary but better than interpolation

34

What is the search tree algorithm?

-in a binary tree every node has at most 2 children
-go down the tree comparing the target value at each node: if the value you are after is smaller go to the left branch, and vice versa
-average is O(logN)
-worst is O(n) (a degenerate, list-like tree)
-a balanced binary search tree guarantees O(logN)
-auto sorts data
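A minimal binary search tree sketch (unbalanced, so the worst case is O(n); hypothetical class/function names):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    # Smaller keys go to the left subtree, larger or equal to the right.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def bst_search(root, key):
    # Walk down the tree: left for smaller, right for larger.
    if root is None:
        return False
    if key == root.key:
        return True
    if key < root.key:
        return bst_search(root.left, key)
    return bst_search(root.right, key)
```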

35

What are bubble and insertion sort?

-bubble sort repeatedly swaps adjacent out-of-order elements. Easy but slow
-insertion sort builds a sorted prefix one element at a time, similar to sorting cards in your hand; easy to implement, efficient in space but not time
-used when data is nearly ordered
-best case is O(N)
-worst case is O(n^2)
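Insertion sort can be sketched as:

```python
def insertion_sort(a):
    # Grow a sorted prefix a[0..i-1]; shift larger elements right
    # to make room for each new item, like ordering a hand of cards.
    # Nearly ordered input does almost no shifting: best case O(N).
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]     # shift right
            j -= 1
        a[j + 1] = key          # drop the item into its slot
    return a
```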

36

What is merge sort?

-D&C
-very fast
-O(nlog(n))
-split —> merge
-easy split, hard join
-memory hungry
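A sketch of merge sort; the split is trivial and the real work (and the extra memory) is in the merge:

```python
def merge_sort(a):
    # Easy split: halve the list.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Hard join: merge two sorted halves, repeatedly taking
    # the smaller front element. Needs O(n) extra memory.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```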

37

What is quick sort?

-D&C
-Hard split, easy-join
-Pivot
-Best = O(nlog(n))
-worst = O(n^2)
-pivot choice decided in many ways, can be first or last item, random etc
-can be the median value (the algorithm to calculate this takes O(n) but generates a quick sort with guaranteed O(nlog(n)))
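A sketch of quick sort using the first element as pivot (a list-comprehension version for clarity; real implementations usually partition in place):

```python
def quick_sort(a):
    # Hard split: partition around a pivot. Easy join: concatenate.
    if len(a) <= 1:
        return a
    pivot = a[0]                                  # simple pivot choice
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```

With a first-element pivot, already-sorted input degenerates to the O(n^2) worst case, which is why random or median pivots are preferred.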

38

Merge vs quicksort

-QS is not as memory hungry
-QS moves elements to their final positions more quickly
-Merge will always be O(nlog(n)) but quick can be as slow as O(n^2)

39

What is a priority queue?

-restricted list of items arranged by priority values
-items enter the queue based on priority level; they won't simply be stuck on the back
-the item removed must have the highest priority of all values in the queue. If more than one item has the same priority, the one that has been in there the longest is removed

40

What do these operations mean:
1) insert(S,x)
2) max(S)
3) extract_max(S)
4) increase_key(S,x,k)

1) insert value x into set S
2) return the element of S with the largest key
3) return the element of S with the largest key and remove it from S
4) increase the value of element x's key to new value k
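These operations can be sketched with Python's heapq module; heapq is a min-heap, so keys are negated to get max behaviour (increase_key has no direct heapq equivalent):

```python
import heapq

def insert(s, x):
    # Store the negated key so the min-heap behaves as a max-heap.
    heapq.heappush(s, -x)

def maximum(s):
    # Largest key sits at the root (index 0).
    return -s[0]

def extract_max(s):
    # Pop the root and restore the heap property.
    return -heapq.heappop(s)

s = []
for x in [3, 9, 1]:
    insert(s, x)
```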

41

What is a complete binary tree (CBT)?

-a binary tree with leaves on either a single level or on two adjacent levels such that the leaves on the bottommost level are placed as far left as possible
-and such that nodes appear in every possible position in each level above the bottom level

42

What is a heap?

-an implementation of a priority queue
-an array visualised as a nearly complete binary tree
-max heap property = the key of a node is larger than or equal to the keys of its children
-min heap = the opposite

43

What do the following operations do:
1) build_max_heap
2) max_heapify
3) insert
4) extract_max
5) heapsort

1) produce a max_heap from an unordered array
2) correct a single violation of the heap property in a sub tree of its root
3) inserts a new element
4) extracts the element with the highest key
5) sorts an array in place

44

How do we build a heap?

-built in a bottom-up manner
-the elements in the subarray A[n/2 + 1 .. n] are all leaves of the tree, so each one is a one-element heap to begin with
-build_max_heap goes through the remaining (internal) nodes of the tree and runs max_heapify on each one
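A sketch of bottom-up heap construction (0-based indexing, so the leaves occupy indices n//2 .. n-1 and the children of i are 2i+1 and 2i+2):

```python
def max_heapify(a, i, n):
    # Correct a single violation of the max-heap property at index i,
    # assuming the subtrees below i are already valid max-heaps.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)      # the swap may push a violation down

def build_max_heap(a):
    # Leaves are already one-element heaps; heapify the
    # remaining internal nodes bottom-up.
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)
    return a
```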

45

How do we use heaps to sort?

-after build_max_heap the max element is at A[1]
-swap elements A[n] and A[1]: now the max element is at the end of the array
-discard node n from the heap
-the new root may violate the max heap property, so use max_heapify to fix it
-repeat from the swap step until the heap is empty
-O(nlog(N))
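The steps above can be sketched as (0-based indexing, so the max element sits at A[0]; sift_down is an iterative form of max_heapify):

```python
def sift_down(a, i, n):
    # Restore the max-heap property below index i, heap size n.
    while True:
        largest, left, right = i, 2 * i + 1, 2 * i + 2
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heapsort(a):
    n = len(a)
    # Build a max-heap bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    # Repeatedly move the max to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # max element to its final place
        sift_down(a, 0, end)          # fix the new root in the smaller heap
    return a
```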

46

How do we add values to a heap?

-create new leaf
-values on the path from the new leaf to the root are copied down until a place for the new value is found
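A sketch of this copy-down insertion for a max heap (0-based array, parent of i at (i-1)//2):

```python
def heap_insert(heap, value):
    # Append a new leaf, then copy parent values down the path
    # towards the root until the right slot for the new value is found.
    heap.append(value)                  # new leaf acts as a hole
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] < value:
        heap[i] = heap[(i - 1) // 2]    # copy the smaller parent down
        i = (i - 1) // 2
    heap[i] = value                     # place the new value in the hole
    return heap
```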