2.3.1 Algorithms Flashcards
Big O Notation
Big O Notation : Evaluates the complexity (measure of efficiency) of the algorithm
- Evaluates the worst case scenario for the algorithm
- States how the memory, time and resources required increase as the data set grows
- A useful approximation of the time taken to execute an algorithm for a given number of items in a data set.
- Constants and lower-order terms are dropped, leaving only the dominant term
Time complexity
Time complexity : Estimates the increase in time required to solve a problem as the problem scales.
- Measured with Big O notation
- Heuristic as many variables affect speed
All types of Time Complexities and examples
O(1) - Constant Time Complexity : Time taken to completion is independent of the data input - constant, so most efficient
e.g. Hash function
O(n) - Linear Time Complexity : Time taken to completion directly proportional to number of data items.
e.g. Linear Search
O(n^k) - Polynomial Time Complexity : Time taken to completion proportional to number of data items to the power of k.
e.g. Quadratic time complexity - Bubble sort is O(n^2)
O(2^n) - Exponential time Complexity : Time taken to completion doubles with each additional item added.
e.g. Recursive algorithms that solve two subproblems of size roughly n-1, like the naive recursive Fibonacci sequence (subproblems of size n-1 and n-2)
O(log(n)) - Logarithmic Time Complexity : Time taken to completion grows slowly; the rate of increase falls as more data items are added.
e.g. Divide and conquer algorithms like Binary Search
O(nlog(n)) - Linearithmic Time Complexity
e.g. Merge sort or quick sort
Space Complexity
Space complexity : How much more memory an algorithm requires as the size of the input grows.
- Measured with big O notation
- Algorithms that make many copies store more data which is expensive
Bubble Sort
Bubble Sort : Passes through the list comparing adjacent elements and swapping them if in the incorrect order.
- One item sorted per pass
- Maximum n-1 passes to sort data
- Use a flag that terminates the algorithm if no swaps are made in a pass (data sorted) to increase efficiency
- Decrementing the inner loop so already sorted items are not re-checked also increases efficiency
Time Complexity :
Worst case : O(n^2)
Average case : O(n^2)
Best case : O(n)
Space Complexity :
O(1) - in place algorithm
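The bubble sort described above, including both efficiency tweaks (the swapped flag and the shrinking inner loop), can be sketched in Python (the function name is illustrative):

```python
def bubble_sort(items):
    n = len(items)
    for pass_num in range(n - 1):        # at most n-1 passes
        swapped = False
        # Decrement the inner loop: the last pass_num items are already sorted.
        for i in range(n - 1 - pass_num):
            if items[i] > items[i + 1]:  # adjacent pair in incorrect order
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                  # no swaps made: data sorted, stop early
            break
    return items
```

Sorting in place means no copies of the list are made, giving the O(1) space complexity above.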
Insertion Sort
Insertion Sort : Treats the first value as a sorted list, then inserts each item from the unsorted list into the sorted list by comparing linearly.
- Shifts sorted data forward one place, then inserts the item into the gap
- Sorts one data item at a time
Time Complexity :
Worst case : O(n^2)
Average case : O(n^2)
Best case : O(n)
Space Complexity :
O(1) - as in place sorting algorithm
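A minimal in-place insertion sort sketch in Python matching the description above (function name illustrative):

```python
def insertion_sort(items):
    # items[0] is treated as a sorted list of length 1.
    for j in range(1, len(items)):
        value = items[j]
        i = j - 1
        # Shift larger sorted items forward one place each...
        while i >= 0 and items[i] > value:
            items[i + 1] = items[i]
            i -= 1
        # ...then insert the value into the gap.
        items[i + 1] = value
    return items
```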
Merge Sort
Merge Sort : Splits the data input in half recursively until all sublists are of size 1
- Recombines sublists in pairs, placing items into the correct order, repeatedly until a single ordered list is left
- Divide and Conquer Algorithm
- Consistent time complexity
Time Complexity :
Worst case : O(nlog(n))
Average case : O(nlog(n))
Best case : O(nlog(n))
Space Complexity :
O(n)
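The split-then-recombine process can be sketched recursively in Python (function name illustrative); the sublist copies are what give the O(n) space complexity:

```python
def merge_sort(items):
    if len(items) <= 1:                  # sublists of size 1 are sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # split in half recursively
    right = merge_sort(items[mid:])
    # Recombine the two sorted sublists into one ordered list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])              # append any leftover items
    merged.extend(right[j:])
    return merged
```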
Quick Sort
Quick Sort : Uses divide and conquer to sort around a pivot value.
- Set the first list element as the start pointer and the last list element as the end pointer
- Repeatedly compare the numbers being pointed to:
If in the incorrect order, swap them and move the end pointer
Else move the start pointer
- Split the list into two sublists around the end pointer (items greater than it to the right, less than to the left)
- Quick sort each sublist
- Repeat until all sublists contain only one number
- Combine the sublists
Time Complexity :
Worst case : O(n^2)
Average case : O(nlog(n))
Best case : O(nlog(n))
Space Complexity :
O(log(n))
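A minimal Python sketch of the same divide-and-conquer idea. Note this uses a simpler pivot-partition scheme rather than the exact two-pointer process described above; the function name is illustrative:

```python
def quick_sort(items):
    # Base case: sublists of one (or zero) items are sorted.
    if len(items) <= 1:
        return items
    pivot = items[0]
    # Split into two sublists around the pivot:
    # smaller values to the left, greater-or-equal to the right.
    left = [x for x in items[1:] if x < pivot]
    right = [x for x in items[1:] if x >= pivot]
    # Quick sort each sublist, then combine.
    return quick_sort(left) + [pivot] + quick_sort(right)
```

The O(n^2) worst case occurs when the pivot is repeatedly the smallest or largest value (e.g. already sorted data with a first-element pivot), so one sublist is always empty.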
Linear Search
Linear Search : Searches through every value sequentially until the item is found or the end of the data structure is reached.
- Very basic and easy to implement
- Does not require data to be sorted
- Less efficient than binary search on average
Time Complexity :
Worst case : O(n)
Average case : O(n)
Best case : O(1)
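A minimal linear search sketch in Python (function name illustrative), returning the index of the item or -1 if it is not present:

```python
def linear_search(items, target):
    # Check every value in order until found or the end is reached.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # reached the end: target not in the data
```

The O(1) best case is the target being the first item; the O(n) worst case is the target being last or absent.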
Binary Search
Binary Search : Uses divide and conquer to repeatedly halve the search space.
- Only works on sorted data.
Process :
- Defines LB and UB pointers
- Calculate the midpoint with (LB + UB) DIV 2
- Go to index and compare to input
If equal return index and true
else if input larger set LB = index+1
else if input smaller set UB = index-1
- Repeat, calculating the midpoint of the new sublist and comparing, while LB <= UB
Repeated until the desired element is found or it is determined not to be in the list (LB > UB)
- Discards half of input data each iteration – very efficient
Time Complexity :
Worst case : O(log(n))
Average case : O(log(n))
Best case : O(1)
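The process above can be sketched iteratively in Python (function name illustrative), returning the index of the input or -1 if it is not in the list:

```python
def binary_search(items, target):
    lb, ub = 0, len(items) - 1           # lower and upper bound pointers
    while lb <= ub:
        mid = (lb + ub) // 2             # midpoint of the current sublist
        if items[mid] == target:
            return mid                   # found: return index
        elif target > items[mid]:
            lb = mid + 1                 # discard the lower half
        else:
            ub = mid - 1                 # discard the upper half
    return -1                            # LB has passed UB: not in list
```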
Dijkstra’s Algorithm
Dijkstra’s Algorithm : Pathfinding Algorithm to calculate the shortest path between two nodes on a weighted graph.
- Considers every possible route to the destination, not just one
- Visits next smallest working value
- Stops when destination node has smallest working value
- Always finds shortest path
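A minimal sketch in Python using a priority queue (heapq) to always visit the node with the next smallest working value; the adjacency-list graph representation and function name are my own:

```python
import heapq

def dijkstra(graph, start, end):
    """graph: dict mapping node -> list of (neighbour, weight) pairs."""
    dist = {start: 0}                    # smallest known working values
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)   # visit next smallest working value
        if node == end:                  # destination has smallest value: stop
            return d
        if d > dist.get(node, float('inf')):
            continue                     # stale queue entry, skip
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbour, float('inf')):
                dist[neighbour] = new_d  # found a shorter route
                heapq.heappush(queue, (new_d, neighbour))
    return None                          # no route from start to end
```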
A* Algorithm
A* Algorithm : Pathfinding Algorithm to calculate the shortest path between two nodes on a weighted graph using Heuristics.
- Adds actual cost to heuristic cost to calculate working value
- Doesn’t have to find every possible solution - much more efficient.
- Doesn’t always find shortest path if heuristic inaccurate
- Heuristic calculation takes time
- Effectiveness largely dependent on accuracy of heuristic algorithm.
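A minimal A* sketch in Python on the same adjacency-list representation (function name illustrative). The working value is f = g + h, the actual cost so far plus the heuristic estimate to the destination:

```python
import heapq

def a_star(graph, start, end, heuristic):
    """graph: node -> list of (neighbour, weight) pairs.
    heuristic: function node -> estimated cost from node to end."""
    g = {start: 0}                             # actual cost from start
    queue = [(heuristic(start), start)]        # priority = f = g + h
    while queue:
        _, node = heapq.heappop(queue)         # visit smallest working value
        if node == end:
            return g[node]
        for neighbour, weight in graph[node]:
            new_g = g[node] + weight
            if new_g < g.get(neighbour, float('inf')):
                g[neighbour] = new_g           # found a cheaper route
                heapq.heappush(queue, (new_g + heuristic(neighbour), neighbour))
    return None                                # no route from start to end
```

With a heuristic that always returns 0, this degenerates into Dijkstra's algorithm; a good (admissible) heuristic steers the search toward the destination so fewer nodes are expanded.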