# Unit 12 Algorithms Flashcards

1
Q

What two pieces of information allow you to analyse an algorithm?

A

● Time Complexity
● Space Complexity

2
Q

What is meant by the time complexity of an algorithm?

A

The time complexity of an algorithm is how much time it requires to solve a particular problem. Time complexity is measured using Big-O notation, which shows the effectiveness of the algorithm: an upper limit for the amount of time taken relative to the number of data elements given as an input. This is useful because it allows you to predict how long an algorithm will take to finish given the number of data elements.

3
Q

How do you measure the time complexity?

A

Big-O notation

4
Q

What does the Big-O notation show?

A

The effectiveness of an algorithm

5
Q

What is the Big-O notation good for?

A

It allows you to predict the amount of time taken for an algorithm to complete, given the number of items inputted

6
Q

What does a linear time complexity mean?

A

The amount of time taken to complete an algorithm is directly proportional to the number of elements inputted. O(n)

7
Q

What does a constant time complexity mean?

A

The amount of time taken to complete an algorithm is independent of the number of inputted elements. O(1)

8
Q

What does a polynomial time complexity mean?

A

The amount of time taken to complete an algorithm is proportional to the number of items inputted raised to a fixed power, e.g. squared. O(n²)

9
Q

What does an exponential time complexity mean?

A

The amount of time taken to complete an algorithm is proportional to 2 raised to the power of the number of items inputted. O(2ⁿ)

10
Q

What does a logarithmic time complexity mean?

A

The time taken to complete an algorithm increases at a decreasing rate as the number of elements inputted grows. O(log n)

11
Q

What is a logarithm?

A

How many times a certain number (the base) must be multiplied by itself to reach another number.
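As a quick sanity check in Python, using the standard math module:

```python
import math

# log base 2 of 8 is 3, because 2 * 2 * 2 = 8
assert math.log2(8) == 3

# Halving-based algorithms scale logarithmically: doubling the
# input adds only one more step, since log2(2048) = log2(1024) + 1.
assert math.log2(1024) == 10
assert math.log2(2048) == 11
```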

12
Q

What is space complexity?

A

The space complexity of an algorithm is the amount of storage it requires. Space complexity is commonly expressed using Big-O notation, e.g. O(n). Algorithms store extra data whenever they make a copy, which isn't ideal: when working with lots of data, making copies takes a large amount of storage, which is expensive.
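A small Python sketch of the difference (function names are illustrative):

```python
# Reversing a list by building a copy uses O(n) extra space...
def reversed_copy(data):
    return data[::-1]  # allocates a whole new list

# ...while swapping elements in place uses O(1) extra space.
def reverse_in_place(data):
    left, right = 0, len(data) - 1
    while left < right:
        data[left], data[right] = data[right], data[left]
        left += 1
        right -= 1
    return data

assert reversed_copy([1, 2, 3]) == [3, 2, 1]
assert reverse_in_place([1, 2, 3]) == [3, 2, 1]
```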

13
Q

What is an algorithm?

A

An algorithm is a series of steps that complete a task

14
Q

How do you reduce the space complexity?

A

Try to complete all of the operations on the same data set

15
Q

How do you reduce the time complexity of an algorithm?

A

You reduce the number of nested for loops, and then reduce the number of items you perform the operations on, e.g. divide and conquer

16
Q

What is the Big-O notation of a linear search algorithm?

A

O(n)

function linearSearch(list, item)
    index = -1
    i = 0
    found = False
    while i < length(list) and found = False
        if list[i] = item then
            index = i
            found = True
        endif
        i = i + 1
    endwhile
    return index
endfunction
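The pseudocode above translates directly into Python (a sketch; the function name is illustrative):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

assert linear_search([5, 3, 8], 8) == 2
assert linear_search([5, 3, 8], 9) == -1
```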

17
Q

What is the Big-O notation of a binary search algorithm?

A

O(log(n))

function binarySearch(list, item)
    found = False
    index = -1
    first = 0
    last = length(list) - 1
    while first <= last and found = False
        midpoint = int((first + last) / 2)
        if list[midpoint] = item then
            found = True
            index = midpoint
        else
            if list[midpoint] < item then
                first = midpoint + 1
            else
                last = midpoint - 1
            endif
        endif
    endwhile
    return index
endfunction
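The same logic in Python (a sketch; `//` is integer division, avoiding the fractional midpoint the pseudocode guards against with int()):

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    first, last = 0, len(items) - 1
    while first <= last:
        midpoint = (first + last) // 2  # integer division
        if items[midpoint] == target:
            return midpoint
        elif items[midpoint] < target:
            first = midpoint + 1  # discard the left half
        else:
            last = midpoint - 1   # discard the right half
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```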

18
Q

What is the Big-O notation of a bubble sort algorithm?

A

O(n²)

function bubbleSort(list)
    swapped = True
    while swapped = True
        swapped = False
        i = 0
        while i < length(list) - 1
            if list[i] > list[i+1] then
                temp = list[i]
                list[i] = list[i+1]
                list[i+1] = temp
                swapped = True
            endif
            i = i + 1
        endwhile
    endwhile
    return list
endfunction
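A runnable Python version of bubble sort, including the standard optimisation of stopping early once a pass makes no swaps:

```python
def bubble_sort(items):
    """Sort items in place using repeated passes of adjacent swaps."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the last i elements are already sorted
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a pass with no swaps means the list is sorted
            break
    return items

assert bubble_sort([4, 1, 3, 2]) == [1, 2, 3, 4]
```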

19
Q

Stacks

A

Stacks are an example of a first in, last out (FILO) data structure. They are often implemented as an array and use a single pointer which keeps track of the top of the stack (called the top pointer). This points to the element which is currently at the top of the stack.

The top pointer is initialised at -1; this is because the first element in the stack is in position 0, and having the top initialised at 0 would suggest there is an element in the stack, when in fact the stack is empty.

Algorithms for stacks include adding to the stack, removing from the stack and checking whether the stack is empty/full. These have their own special names, as shown on the cards below.
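These stack operations can be sketched as a Python class backed by a fixed-size list, with the top pointer initialised to -1 as described above (the class name and default capacity are illustrative):

```python
class Stack:
    def __init__(self, capacity=10):
        self.A = [None] * capacity  # fixed-size array backing the stack
        self.top = -1               # -1 means the stack is empty

    def size(self):
        return self.top + 1  # first element lives in position 0

    def is_empty(self):
        return self.top < 0

    def peek(self):
        if self.is_empty():
            raise IndexError("peek from empty stack")
        return self.A[self.top]

    def push(self, element):
        self.top += 1            # update the top pointer first
        self.A[self.top] = element

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        to_remove = self.A[self.top]
        self.A[self.top] = None  # clear the slot
        self.top -= 1
        return to_remove

s = Stack()
s.push(1)
s.push(2)
assert s.peek() == 2
assert s.pop() == 2
assert s.size() == 1
```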

20
Q

Stack Operations

A

size()
isEmpty()
peek()
push(element)
pop()

21
Q

size()

A

Size returns the ​number of elements ​on the stack. The pseudocode is as simple as returning the value of the top pointer plus one (remember that the first element is in position 0).

```
size()
    return top + 1
```
22
Q

isEmpty()

A

To check whether a stack is empty, we need to check whether the top pointer is​ less than 0​. If it is, then the stack is empty, otherwise there is data in the stack

```
isEmpty()
    if top < 0:
        return True
    else:
        return False
    endif
```
23
Q

peek()

A

To return the item at the top of the stack, ​without removing it​, simply return the item at the position indicated by the top pointer. For these examples, we’ll assume our stack is an array called A.

Don’t forget to check that the stack has data in it before attempting to return data though, as an empty stack could cause errors. It’s useful to use the isEmpty function here.

```
peek()
    if isEmpty():
        return error
    else:
        return A[top]
    endif
```
24
Q

push(element)

A

To add an item to a stack, the new item must be​ passed as a parameter​. Firstly, the top pointer is updated accordingly. Then the new element can be inserted at the position of the top pointer.

```
push(element)
    top += 1
    A[top] = element
```
25
Q

pop()

A

To remove an item from a stack, the element at the position of the top pointer is recorded before being removed, and then the top pointer is decremented by one before the removed item is returned. As with peek(), it’s important to check that the stack isn’t empty before attempting a pop.

```
pop()
    if isEmpty():
        return error
    else:
        toRemove = A[top]
        A[top] = ""
        top -= 1
        return toRemove
    endif
```
26
Q

Queue

A

Queues are a type of first in, first out (FIFO) data structure. Just like stacks, queues are often represented as arrays. However, unlike stacks, queues make use of two pointers: front and back. While front holds the position of the first element, back stores the next available space.
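The queue operations covered on the following cards can be sketched as a Python class using the two-pointer scheme described above (the class name and default capacity are illustrative):

```python
class Queue:
    def __init__(self, capacity=10):
        self.A = [None] * capacity
        self.front = 0  # position of the first element
        self.back = 0   # next available space

    def size(self):
        return self.back - self.front

    def is_empty(self):
        return self.front == self.back  # pointers coincide when empty

    def peek(self):
        if self.is_empty():
            raise IndexError("peek from empty queue")
        return self.A[self.front]

    def enqueue(self, element):
        self.A[self.back] = element
        self.back += 1

    def dequeue(self):
        if self.is_empty():
            raise IndexError("dequeue from empty queue")
        to_dequeue = self.A[self.front]
        self.A[self.front] = None  # clear the slot
        self.front += 1
        return to_dequeue

q = Queue()
q.enqueue("a")
q.enqueue("b")
assert q.dequeue() == "a"
assert q.size() == 1
```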

27
Q

Queue Operations

A

size()
isEmpty()
peek()
enqueue(element)
dequeue()

28
Q

size()

A

To work out the size of a queue, simply​ subtract the value of front from back​. If front is at 0 and back is at 5, there are 5 elements in the queue.

```
size()
    return back - front
```
29
Q

isEmpty()

A

When a queue is empty, front and back point to the same position. To check whether a queue is empty, just check whether the two pointers hold the same value.

```
isEmpty()
    if front == back:
        return True
    else:
        return False
    endif
```
30
Q

peek()

A

Just as with a stack, peek returns the element at the front of the queue ​without removing it​.

```
peek()
    return A[front]
```

31
Q

enqueue(element)

A

To add an element to a queue, the element is placed in the position of the back pointer and then back is ​incremented by one​.

```
enqueue(element)
    A[back] = element
    back += 1
```
32
Q

dequeue()

A

Items are removed from a queue from the ​position of the front pointer​. Just as with stacks, it’s important to check that the queue​ isn’t empty ​before trying to dequeue an element.

```
dequeue()
    if isEmpty():
        return error
    else:
        toDequeue = A[front]
        A[front] = ""
        front += 1
        return toDequeue
    endif
```
33
Q

Linked Lists

A

A linked list is composed of ​nodes​, each of which has a pointer to the ​next item ​in the list.
If a node is referred to as N, the next node can be accessed using ​N.next​. The first item in a list is referred to as the ​head ​and the last as the ​tail​.

Searching a list is performed using a linear search, carried out by sequential next operations until the desired element is found.
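A minimal Python sketch of a linked list node and the linear search over it (names are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # pointer to the next node; None at the tail

def linked_list_search(head, target):
    """Linear search: follow .next pointers until target is found."""
    node = head
    while node is not None:
        if node.value == target:
            return node
        node = node.next
    return None

# Build head -> 2 -> 3 (tail)
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)
assert linked_list_search(head, 3).value == 3
assert linked_list_search(head, 9) is None
```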

34
Q

Trees

A

Trees are formed from nodes and edges; they cannot contain cycles and aren’t directed. Trees are useful as a data structure because they can be traversed.

There are two types of traversal to cover: ​depth first ​(post-order) and ​breadth first​. Both of these traversals can be implemented ​recursively ​and differ only in the order in which nodes are visited.

35
Q

Depth first (post-order) traversal

A

Depth first search goes ​as far down into the tree as possible ​before backtracking. The algorithm uses a ​stack ​and goes to the​ left child node ​of the current node when it can. If there is no left child then the algorithm goes to the​ right child​.

If there are no child nodes, the algorithm ​visits ​the current node, outputting the value of the node before backtracking to the next node on the stack and ​moving right​.
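A recursive sketch of post-order traversal in Python; the call stack plays the role of the explicit stack, and representing each node as a (left, right, value) tuple is an assumption made for illustration:

```python
def post_order(node, visit):
    """Visit the left subtree, then the right subtree, then the node."""
    if node is None:
        return
    left, right, value = node
    post_order(left, visit)   # go as far left as possible first
    post_order(right, visit)  # then right
    visit(value)              # visit the node itself last

# Tree:    A
#         / \
#        B   C
tree = ((None, None, "B"), (None, None, "C"), "A")
out = []
post_order(tree, out.append)
assert out == ["B", "C", "A"]
```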

36
Q

Breadth-first traversal

A

Starting from the left, breadth-first visits ​all children ​of the start node. The algorithm then visits all nodes ​directly connected ​to each of those nodes in turn, continuing until every node has been visited. Unlike depth first traversal (which uses a stack), breadth first uses a ​queue​.
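A breadth-first traversal sketch in Python, using a queue as the card describes (the adjacency-list graph representation is an assumption for illustration):

```python
from collections import deque

def breadth_first(graph, start):
    """Visit nodes level by level using a queue."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()  # FIFO: oldest discovered node first
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

# A is connected to B and C; B is connected to D
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
assert breadth_first(graph, "A") == ["A", "B", "C", "D"]
```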

37
Q

Bubble Sort

A

Bubble sort makes ​comparisons and swaps ​between pairs of elements. The largest element in the unsorted part of the input is said to “bubble” to the top of the data with each iteration of the algorithm.

The algorithm starts at the first element in an array and compares it to the second. If they are in the wrong order, the algorithm ​swaps ​the pair. Otherwise, the algorithm moves on. The process is then repeated for every adjacent pair of elements in the array until the end of the array is reached (at which point the largest element is in the last position of the array).

This is referred to as one ​pass ​of the algorithm. For an array with n elements, the algorithm will perform n passes through the data, at which point the input is sorted and can be returned.

38
Q

Bubble Sort Pseudocode

A

A = Array of data

for i = 0 to A.length - 1:
    noSwap = True
    for j = 0 to A.length - (i + 1):
        if A[j] > A[j+1]:
            temp = A[j]
            A[j] = A[j+1]
            A[j+1] = temp
            noSwap = False
    if noSwap:
        break
return A

39
Q

Insertion Sort

A

Another sorting algorithm, insertion sort places elements into a sorted sequence. In the i-th iteration of the algorithm, the first i elements of the array are sorted. Be careful though: although i elements are sorted, they are not necessarily the i smallest elements in the input.

40
Q

Insertion sort Pseudocode

A

A = Array of data

for i = 1 to A.length - 1:
    elem = A[i]
    j = i - 1
    while j >= 0 and A[j] > elem:
        A[j+1] = A[j]
        j = j - 1
    A[j+1] = elem
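The same algorithm as runnable Python:

```python
def insertion_sort(A):
    """After the i-th iteration, the first i+1 elements are sorted."""
    for i in range(1, len(A)):
        elem = A[i]
        j = i - 1
        # Shift elements larger than elem one place to the right
        while j >= 0 and A[j] > elem:
            A[j + 1] = A[j]
            j = j - 1
        A[j + 1] = elem  # insert elem into the gap
    return A

assert insertion_sort([4, 1, 3, 2]) == [1, 2, 3, 4]
```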

41
Q

Merge Sort

A

Merge sort is an example of a class of algorithms known as “​divide and conquer​” for reasons which will soon become clear.

Merge sort is formed from two functions: one called MergeSort and another called Merge. MergeSort divides its input into two parts and recursively calls MergeSort on each of those two parts until they are of length 1, at which point Merge is called. Merge puts groups of elements back together in a special way, ensuring that the final group produced is sorted.

42
Q

Merge sort Pseudocode

A

A = Array of data

MergeSort(A)
    if A.length <= 1:
        return A
    else:
        mid = int(A.length / 2)
        left = A[0…mid-1]
        right = A[mid…A.length-1]
        leftSort = MergeSort(left)
        rightSort = MergeSort(right)
        return Merge(leftSort, rightSort)
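The pseudocode above shows MergeSort but not Merge; a runnable Python sketch of both functions might look like this:

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the lists is exhausted; append whatever remains of the other
    return result + left[i:] + right[j:]

def merge_sort(A):
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    return merge(merge_sort(A[:mid]), merge_sort(A[mid:]))

assert merge_sort([5, 2, 8, 1]) == [1, 2, 5, 8]
```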

43
Q

Quick Sort

A

Quick sort works by ​selecting an element​, often the central element (called a ​pivot​), and dividing the input ​around it. Elements smaller than the pivot are placed in a list to the left of the pivot and others are placed in a list to the right.

This process is then repeated recursively on each new list until all elements in the input are old pivots themselves or form a list of length 1. Despite the name, quick sort isn’t always particularly fast: its worst-case time complexity is O(n²), although on average it performs well, at O(n log n).
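A compact Python sketch of the scheme described above, using the central element as the pivot:

```python
def quick_sort(A):
    """Partition around a central pivot, then recurse on each side."""
    if len(A) <= 1:
        return A
    pivot = A[len(A) // 2]                    # central element as pivot
    left = [x for x in A if x < pivot]        # smaller than the pivot
    middle = [x for x in A if x == pivot]     # the pivot(s)
    right = [x for x in A if x > pivot]       # larger than the pivot
    return quick_sort(left) + middle + quick_sort(right)

assert quick_sort([3, 7, 1, 7, 2]) == [1, 2, 3, 7, 7]
```

This version builds new lists for clarity at the cost of extra space; an in-place partition would have better space complexity.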

44
Q

Searching algorithms

A

Searching algorithms are used to​ find a specified element ​within a data structure. For example, a searching algorithm could be used to find the name “Alan Turing” in an array of names.

Numerous different searching algorithms exist, each of which is suited to a particular data structure or format of data. Different searching algorithms are used depending on each individual scenario.

45
Q

Binary Search

A

The binary search algorithm can only be applied on ​sorted data ​and works by finding the middle element ​in a list of data before deciding ​which side ​of the data the desired element is to be found in. The unwanted half of the data is then ​discarded ​and the process repeated ​until the desired element is ​found ​or until it is known that the desired element doesn’t exist ​in the data.

Pseudocode for binary search is shown below.
A = Array of data
x = Desired element

low = 0
high = A.length - 1
while low <= high:
    mid = int((low + high) / 2)
    if A[mid] == x:
        return mid
    else if A[mid] > x:
        high = mid - 1
    else:
        low = mid + 1
    endif
endwhile
return -1

46
Q

Linear Search

A

Linear search is the most basic searching algorithm. You can think of it as going along a bookshelf one by one until you come across the book you’re looking for. Sometimes the algorithm gets lucky and finds the desired element almost immediately, while in other situations the algorithm is incredibly inefficient. Its time complexity is O(n).
There’s a lot of pot luck involved with linear search, but it’s incredibly easy to implement. Unlike binary search, linear search doesn’t require the data to be sorted.

A = Array of data
x = Desired element

i = 0
while i < A.length:
    if A[i] == x:
        return i
    else:
        i = i + 1
    endif
endwhile
return -1

47
Q

Dijkstra’s algorithm

A

Dijkstra’s algorithm finds the ​shortest path between two nodes in a weighted graph​. Remember that graphs are used as an abstraction for a variety of real life scenarios, which means nodes and edges can represent different entities. For example, graphs can be used to model networks, with nodes representing devices and arc costs representing network traffic. In this scenario, Dijkstra’s algorithm can be used to determine the optimal route to use when forwarding packets through a network.

Regardless of the scenario, complexity or size of the graph, Dijkstra’s algorithm can be used to work out the shortest path using a consistent method each time. The algorithm is commonly implemented using a ​priority queue​, with the smallest distances being stored at the front of the list.
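A sketch of Dijkstra’s algorithm in Python, using the standard-library heapq module as the priority queue described above (the graph layout is an illustrative example):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start to every node in a weighted graph."""
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]  # priority queue of (distance so far, node)
    while queue:
        dist, node = heapq.heappop(queue)  # smallest distance first
        if dist > distances[node]:
            continue  # stale entry; a shorter path was already found
        for neighbour, weight in graph[node].items():
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},
}
assert dijkstra(graph, "A") == {"A": 0, "B": 1, "C": 3, "D": 6}
```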

48
Q

A* Algorithm

A

The A* Algorithm is a general path-finding algorithm which is an improvement of Dijkstra’s algorithm and has ​two cost functions​:
1) The first cost function is the actual cost between two nodes. This is the same cost as is measured in Dijkstra’s algorithm.
2) The second cost function is an ​approximate cost from node x to the final node​. This is called a heuristic, and aims to make the shortest path finding process more efficient. The approximate cost might be an estimate of the length between x and the final node, calculated using trigonometry.

When calculating the distance between two nodes using the A* algorithm, the approximate cost is added onto the actual cost. This total is used to determine which node is visited next. This differs from Dijkstra’s algorithm: a node with a lower actual cost may be rejected in favour of a node with a lower total cost. This is meant to reduce the total time taken to find the shortest path.