231-algorithms Flashcards
Goals of algorithmic design
- time complexity - run as quickly as possible
- space complexity - take up the least amount of memory
- situational: with plenty of processing power, time matters less and you would focus on reducing memory use (space); if a large amount of data must be processed quickly, time is the priority
Algorithm
A sequence of steps followed in order to solve a problem/perform a particular task
Big O Notation
Expresses the complexity (efficiency) of an algorithm in terms of how well it scales as the dataset grows
takes into consideration best, average and worst case scenarios
an approximation that allows you to predict the memory requirements/execution time an algorithm takes to finish for a given data input
Big O Notation Rules
Remove all terms except the one with the largest factor or exponent
Remove any constants
e.g. O(n^2 + n + 1000) simplifies to O(n^2)
Time Complexity
Number of cycles of processing time
Running time required depending on the amount of data input
Time taken to complete the algorithm/solve the problem
How well an algorithm scales in processing time taken/number of cycles as the problem dataset grows
Space Complexity
Running space required in memory depending on the scale of the data input
Amount of storage the algorithm takes up
Reducing Space Complexity
Perform all changes on the original piece of data, i.e. work in place (copies take up extra space)
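A minimal sketch (in Python, not from the flashcards) of working in place versus copying: reversing a list in place needs only O(1) extra space, while building a reversed copy needs O(n).

```python
def reverse_in_place(items):
    """Swap elements from both ends towards the middle: O(1) extra space."""
    low, high = 0, len(items) - 1
    while low < high:
        items[low], items[high] = items[high], items[low]
        low += 1
        high -= 1
    return items

def reverse_copy(items):
    """Builds a brand-new list of the same size: O(n) extra space."""
    return items[::-1]
```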
Reducing Time Complexity
Reduce the number of nested loops and the number of items the operations have to be performed on
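A hypothetical illustration of removing a nested loop: checking for duplicates with a loop inside a loop is O(n^2), but replacing the inner loop with a constant-time set lookup brings it down to O(n).

```python
def has_duplicate_slow(items):
    """O(n^2): for every item, scan the rest of the list."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_fast(items):
    """O(n): single pass, remembering seen items in a set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```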
Best case time complexities
Linear Search - O(1)
Binary Search array and tree - O(1)
Hashing - O(1)
Breadth/Depth first - O(1)
Bubble sort - O(n)
Insertion sort - O(n)
Merge sort - O(n log n)
Quick sort - O(n log n)
Average case time complexity
Linear search - O(n)
Binary search - O(log n)
Hashing - O(1)
Breadth/depth first - O(V+E)
Bubble sort - O(n^2)
Insertion sort - O(n^2)
Merge sort - O(n log n)
Quick sort - O(n log n)
Worst case time complexity
Linear search - O(n)
Binary search array - O(log n)
Binary search tree - O(n)
Hashing - O(n)
Breadth/depth first - O(V^2)
Bubble sort - O(n^2)
Insertion sort - O(n^2)
Merge sort - O(n log n)
Quick sort - O(n^2)
Worst case space complexity
Linear search - O(1)
Binary search array- O(1)
Binary search tree- O(n)
Hashing- O(n)
Bubble sort - O(1)
Insertion sort - O(1)
Merge sort- O(n)
Quick sort - O(n) worst case, O(log n) average (recursion stack)
O (1) Constant Big O Notation
Algorithm will always take the same time/space
Independent of the input size
no loops or iterations
O (n) Linear Big O Notation
Time/Space increases in direct proportion to input size
reduced efficiency as dataset grows
for loop/while loop
O (n^2) Polynomial Big O Notation
time taken/memory used scales in proportion to the input size raised to a power (n^2 for one level of nesting)
efficiency is significantly reduced as the dataset grows
deeper nested iterations result in a higher power
O ( log n) Logarithmic Big O Notation
amount of time/memory increases at a smaller rate as the size of the input increases
discards irrelevant data input, e.g. divide and conquer/halving the dataset each step
O (n log n) Linearithmic
divides a dataset, but the sub-lists can be processed concurrently or separately
has both a linear and a logarithmic growth rate factor to it
e.g. merge sort splits arrays logarithmically, then merges the sorted halves back together linearly
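A sketch (in Python, not from the flashcards) of the merge sort example above: the list is split in half log n times, and each level of splitting does O(n) work merging sorted halves, giving O(n log n) overall.

```python
def merge_sort(items):
    """O(n log n): log n levels of halving, O(n) merge work per level."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # logarithmic splitting
    right = merge_sort(items[mid:])
    # Linear-time merge of the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```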
O (2^n) exponential big o
how the algorithm scales is exponential
with each additional item of data input, the work roughly doubles
typically recursive (e.g. two recursive calls per step)
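A classic illustration (in Python, not from the flashcards) of exponential growth from recursion: the naive Fibonacci function makes two recursive calls per step, so the number of calls grows roughly as O(2^n).

```python
def fib(n):
    """Naive recursion: each call spawns two more, roughly O(2^n) calls."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```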
O (n!) factorial
multiplies by an ever-increasing amount for every extra item in the dataset, as it considers every permutation
Algorithm grows rapidly with each addition to input size. Factorial of input.
searching algorithms
used to find a specific element within a data structure; performed on a set of data
each algorithm is suited to a particular data structure
linear search
- starts at the beginning of the dataset
- traverses every item one at a time, in sequence
- until it finds the item it is searching for
- else, if all items have been inspected, returns false - works on unsorted or sorted data
- easier to program/implement (linked list/array)
- not efficient for large lists; ideal for smaller datasets
- adding a new item to the list doesn't matter (no sorting needed)
linear search pseudocode
function linearSearch(list, item)
    index = -1
    i = 0
    found = False
    while i < length(list) and found = False
        if list[i] = item then
            index = i
            found = True
        endif
        i = i + 1
    endwhile
    return index
endfunction
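The pseudocode above can be sketched as runnable Python (an illustration, not part of the flashcards): return the index of the item, or -1 if it is not present.

```python
def linear_search(items, target):
    """Check each item in sequence; return its index, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1
```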
binary search
- starts at the midpoint
- uses divide and conquer to search for an item
- halves the search space with each iteration's logical comparison; repeats until midpoint = item. uses 3 pointers: low, mid, high.
- array/list must be sorted
- implemented with an array or a binary tree
- harder to program: a more complex algorithm, so linear search outperforms it on small datasets
- very efficient for large datasets, but needs sorted data - additional time to sort the data initially and when adding a new item
binary search iterative pseudocode
function binarySearch(array, item)
    low = 0
    high = length(array) - 1
    while low <= high
        mid = (low + high) DIV 2    // integer division
        if array[mid] > item then
            // discard the upper half of the array
            high = mid - 1
        elseif array[mid] < item then
            // discard the lower half of the array
            low = mid + 1
        else
            // item found: return index of mid
            return mid
        endif
    endwhile
    // returns -1 if value isn't found
    return -1
endfunction
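The same algorithm as runnable Python (an illustration, not part of the flashcards): the array must already be sorted, and the index is returned, or -1 if the item is absent.

```python
def binary_search(array, item):
    """Iterative binary search on a sorted array; returns index or -1."""
    low, high = 0, len(array) - 1
    while low <= high:
        mid = (low + high) // 2      # integer division
        if array[mid] > item:
            high = mid - 1           # discard the upper half
        elif array[mid] < item:
            low = mid + 1            # discard the lower half
        else:
            return mid               # item found at index mid
    return -1                        # item not found
```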