Unit 12 - Algorithms Flashcards

1
Q

define big O notation

A

an approximation of how the running time or memory use of an algorithm grows as the size of the data set increases

2
Q

two main parts of big O

A
  • time complexity
  • space (memory) complexity
3
Q

description of O(1)

A

constant

4
Q

description of O(log n)

A

logarithmic

5
Q

description of O(n)

A

linear

6
Q

description of O(n log n)

A

linearithmic

7
Q

description of O(n^2)

A

polynomial (quadratic)

8
Q

description of O(2^n)

A

exponential

9
Q

description of O(n!)

A

factorial

10
Q

best to worst big O scenarios

A

O(1)
O(log n)
O(n)
O(n log n)
O(n^2)
O(2^n)
O(n!)

11
Q

best, average and worst case scenarios of time complexity for linear search

A

best - O(1)
average - O(n)
worst - O(n)

12
Q

best, average and worst case scenarios of time complexity for binary search on an array

A

best - O(1)
average - O(log n)
worst - O(log n)

13
Q

best, average and worst case scenarios of time complexity for a binary search tree

A

best - O(1)
average - O(log n)
worst - O(n)

14
Q

best, average and worst case scenarios of time complexity for hashing

A

best - O(1)
average - O(1)
worst - O(n)

15
Q

best, average and worst case scenarios of time complexity for breadth/depth-first traversal of a graph

A

best - O(1)
average - O(V + E)
worst - O(V^2)

V = vertices
E = edges

16
Q

best, average and worst case scenarios of time complexity for bubble sort

A

best - O(n)
average - O(n^2)
worst - O(n^2)

17
Q

best, average and worst case scenarios of time complexity for insertion sort

A

best - O(n)
average - O(n^2)
worst - O(n^2)

18
Q

best, average and worst case scenarios of time complexity for merge sort

A

best - O(n log n)
average - O(n log n)
worst - O(n log n)

19
Q

best, average and worst case scenarios of time complexity for quick sort

A

best - O(n log n)
average - O(n log n)
worst - O(n^2)

20
Q

space complexity for bubble sort

A

O(1)

21
Q

space complexity for insertion sort

A

O(1)

22
Q

space complexity for merge sort

A

O(n)

23
Q

space complexity for quick sort

A

O(log n)

24
Q

what makes a good algorithm

A
  • gives the correct output for any set of valid inputs
  • has a finite number of steps
  • accomplishes an identifiable task
  • must always terminate
  • readable code
  • minimal memory overhead
  • interoperability between algorithms
25
Q

pseudocode for bubble sort

A
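The answer for this card is missing from the export. A minimal Python sketch of bubble sort (the name bubble_sort is illustrative), matching the complexities given earlier: best case O(n) via the early-exit flag, worst case O(n^2), space O(1):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    swapped = True
    while swapped:                  # stop early once a full pass makes no swaps (best case O(n))
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        n -= 1                      # the largest unsorted item has bubbled into place
    return items
```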
26
Q

similarity between linear and binary search

A
  1. same best case time complexity = O(1)
  2. both find the item being searched for if it is in the list
27
Q

differences between binary and linear search

A
  1. different average and worst case time complexities = linear: O(n), binary: O(log n)
  2. binary search must be performed on a sorted list
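To make the contrast concrete, a minimal binary search sketch in Python (names illustrative; the sorted-list requirement is exactly what gives the O(log n) behaviour):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if it is absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # halve the search range each pass: O(log n)
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1            # target can only be in the right half
        else:
            high = mid - 1           # target can only be in the left half
    return -1
```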
28
Q

similarities between bubble and insertion sort

A
  1. both sort the list
  2. same best, average and worst case time complexities - O(n) and O(n^2)
  3. same space complexity = O(1)
29
Q

differences between bubble and insertion sort

A
  1. in one pass, bubble sort puts one item into its correct final place, whereas insertion sort keeps all the items seen so far in the correct order
30
Q

pseudocode for insertion sort

A
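The answer for this card is missing from the export. A minimal Python sketch of insertion sort (name illustrative); the best case is O(n) on an already-sorted list because the inner while loop never runs:

```python
def insertion_sort(items):
    """Grow a sorted prefix, inserting each new item into its correct position."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:   # shift larger items one place right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key                 # drop the item into the gap
    return items
```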
31
Q

pseudocode for merge sort

A

def mergeSort(alist):
    if len(alist) > 1:
        print("Splitting ", alist)
        mid = len(alist) // 2        # integer division
        lefthalf = alist[:mid]       # left half of alist put into lefthalf
        print("left half ", lefthalf)
        righthalf = alist[mid:]      # right half of alist put into righthalf
        print("right half ", righthalf)
        mergeSort(lefthalf)          # recursively sort each half
        mergeSort(righthalf)

        i = 0    # index into lefthalf
        j = 0    # index into righthalf
        k = 0    # index into alist

        # merge the two sorted halves back into alist
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] < righthalf[j]:
                alist[k] = lefthalf[i]
                i = i + 1
            else:
                alist[k] = righthalf[j]
                j = j + 1
            k = k + 1

        # if the left half still has elements not merged, add them to alist
        while i < len(lefthalf):
            alist[k] = lefthalf[i]
            i = i + 1
            k = k + 1

        # if the right half still has elements not merged, add them to alist
        while j < len(righthalf):
            alist[k] = righthalf[j]
            j = j + 1
            k = k + 1

        print("Merged sublist ", alist)

# main program
alist = [5, 3, 2, 7, 9, 1, 3, 8]
print("\nUnsorted list: ", alist)
mergeSort(alist)
print("\nSorted list: ", alist)

32
Q

how does a quick sort work

A
  • The algorithm works by finding a split point, which divides the list into two sub-lists: one containing items less than the value at the split point, the other containing items greater than it.
  • The first step is to choose a value called the pivot value, typically the first item in the list.
  • To find the split point where the pivot belongs, left and right pointers are used.
  • The left pointer moves right until it finds an item larger than the pivot, then stops and hands over to the right pointer.
  • The right pointer moves left until it finds an item smaller than the pivot, then stops.
  • The values at the two pointers are swapped.
  • The pointers repeat this process until they cross over.
  • The pivot value is swapped with the right pointer – this value is now in place.
  • Each sub-list is treated the same way until the whole list is sorted.
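The steps above can be sketched in Python; this follows the scheme just described (first item as pivot, left/right pointers, swap on cross-over), with illustrative function names:

```python
def quick_sort(items, first=0, last=None):
    """Sort items in place by recursively partitioning around a pivot."""
    if last is None:
        last = len(items) - 1
    if first < last:
        split = partition(items, first, last)  # pivot is now at its final index
        quick_sort(items, first, split - 1)    # sort the items below the pivot
        quick_sort(items, split + 1, last)     # sort the items above the pivot
    return items

def partition(items, first, last):
    pivot = items[first]                       # pivot value: first item in the sub-list
    left, right = first + 1, last
    while True:
        while left <= right and items[left] <= pivot:    # move right until item > pivot
            left += 1
        while left <= right and items[right] >= pivot:   # move left until item < pivot
            right -= 1
        if left > right:                       # pointers crossed: split point found
            break
        items[left], items[right] = items[right], items[left]  # swap the two values
    items[first], items[right] = items[right], items[first]    # pivot into place
    return right
```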
33
Q

advantages of a quick sort

A
  1. extremely fast time complexity: O(n log n)
  2. does not need additional memory, unlike merge sort
34
Q

disadvantages of a quick sort

A
  1. if the split points are not near the middle of the list the division will be uneven = O(n^2)
  2. if the list is very large and recursion continues too long, it may cause stack overflow and the program will crash
35
Q

what are the two ways to traverse a graph

A
  1. depth first
  2. breadth first
36
Q

how does depth-first traversal work

A

you go as far as you can down one path before backtracking and going down the next. it uses a stack to keep track of the last node visited, and a list to hold the names of the nodes that have been visited – start with both empty

37
Q

pseudocode for depth-first traversal

A
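The answer for this card is missing from the export. Based on the previous card's description (a stack of nodes to explore, a list of visited names), a Python sketch – the graph shape (dict of neighbour lists) and names are assumptions:

```python
def depth_first(graph, start):
    """Iterative depth-first traversal; graph maps each node to its neighbour list."""
    visited = []                 # names of nodes already visited
    stack = [start]              # nodes still to explore
    while stack:
        node = stack.pop()       # take the most recently pushed node (LIFO)
        if node not in visited:
            visited.append(node)
            # push neighbours in reverse so the first neighbour is explored first
            for neighbour in reversed(graph[node]):
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited
```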
38
Q

applications of depth first traversal

A
  1. job-scheduling
  2. finding a path between two vertices
  3. solving puzzles such as navigating a maze
39
Q

how does breadth first traversal work

A

you explore all neighbours of the current vertex, then the neighbours of each of those vertices and so on. a queue is used to keep track of nodes that we still have to visit.
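A minimal Python sketch of the queue-based traversal just described – the graph shape (dict of neighbour lists) and names are assumptions:

```python
from collections import deque

def breadth_first(graph, start):
    """Breadth-first traversal: visit all neighbours before going any deeper."""
    visited = [start]
    queue = deque([start])          # nodes whose neighbours we still have to visit
    while queue:
        node = queue.popleft()      # FIFO: oldest discovered node first
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited
```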

40
Q

applications of breadth first traversal

A
  1. find the shortest path between two points
  2. web crawler
  3. Facebook uses it to find all the friends of a given individual
41
Q

what is depth-first traversal also known as on trees

A

pre-order traversal

42
Q

time complexity of traversing a graph with the maximum number of edges

A

O(n^2)

43
Q

what is Dijkstra’s algorithm used for

A

designed to find the shortest path between a start node and every other node in a weighted graph

44
Q

how does Dijkstra’s algorithm work

A
  • uses a priority queue as the supporting data structure to keep a record of which vertex to visit next
  • it starts by assigning a temporary distance value to each node: 0 at the start node, infinity everywhere else
  • it then repeatedly visits the unvisited node with the smallest temporary distance, updating the distances of that node's neighbours whenever a shorter route through it is found
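A Python sketch of the algorithm, using heapq as the priority queue – the graph shape (dict of weighted neighbour lists) and names are assumptions:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distance from start to every node in a weighted graph.

    graph maps each node to a list of (neighbour, edge_weight) pairs.
    """
    dist = {node: float('inf') for node in graph}   # temporary distance values
    dist[start] = 0                                 # 0 at the start node
    pq = [(0, start)]                               # priority queue of (distance, node)
    while pq:
        d, node = heapq.heappop(pq)                 # nearest node not yet settled
        if d > dist[node]:
            continue                                # stale queue entry, skip it
        for neighbour, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbour]:          # found a shorter route
                dist[neighbour] = new_dist
                heapq.heappush(pq, (new_dist, neighbour))
    return dist
```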
45
Q

what are computable problems

A

a problem is computable if there is an algorithm that can solve every instance of it in a finite number of steps

46
Q

what are computational problems

A

problems that are theoretically solvable by computer, but if they take millions of years to solve they are, in a sense, unsolvable

47
Q

2 limitations of computation

A
  1. algorithmic complexity
  2. hardware
48
Q

what is an intractable problem

A

a problem that does not have a polynomial time solution. although it may have a theoretical solution, it is impossible to solve within a reasonable time for values of n greater than something very small.
big O = O(2^n), O(n!)

49
Q

what is a tractable problem

A

a problem that has a polynomial time solution or better = O(n), O(n log n), O(n^2), O(n^k)

50
Q

what are heuristic methods

A

a method that tries to find a solution which may not be perfect but which is adequate for its purpose

51
Q

how does the A* algorithm work

A

Two cost functions:
- G(x) - the real cost from the source to the given node
- H(x) - the approximate cost from node(x) to the goal node (heuristic function)

Calculate: f(x) = g(x) + h(x). For A* to be guaranteed to find the best path, h(x) should NEVER overestimate the real cost from node x to the goal (an admissible heuristic).

The algorithm focuses only on reaching the goal node, unlike Dijkstra’s algorithm, which finds the lowest cost or shortest path to every node.
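A Python sketch of the idea – the graph shape, h as a lookup table, and all names are assumptions; this version returns the cost of the cheapest path found:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the node with the lowest f(x) = g(x) + h(x).

    graph maps each node to (neighbour, cost) pairs; h maps each node to its
    heuristic estimate of the remaining cost to the goal.
    """
    g = {start: 0}                      # g(x): real cost from the source
    pq = [(h[start], start)]            # priority queue ordered by f(x)
    while pq:
        f, node = heapq.heappop(pq)
        if node == goal:
            return g[node]              # goal reached: cheapest cost found
        for neighbour, cost in graph[node]:
            new_g = g[node] + cost
            if neighbour not in g or new_g < g[neighbour]:
                g[neighbour] = new_g
                heapq.heappush(pq, (new_g + h[neighbour], neighbour))
    return None                         # goal unreachable
```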

52
Q

example use for Dijkstra’s algorithm

A

e.g. satnav/GPS route finding – computing the shortest route from a start point to every destination, or routing packets across a network
53
Q

example use of A* algorithm

A

e.g. pathfinding for characters in video games, where a heuristic such as straight-line distance guides the search towards the goal
54
Q

benefits of recursion

A
  • quicker to write – fewer lines of code
  • suited to certain problems - trees
  • reduce the size of a problem with each call
  • more natural to read
55
Q

drawbacks of recursion

A
  • can run out of stack space due to too many calls causing it to crash
  • more difficult to trace/follow as each frame on the stack has its own set of variables
  • requires more memory than the equivalent iterative algorithm
  • usually slower