2.3 - Algorithms Flashcards

1
Q

What two pieces of information allow you to analyse an algorithm?

A

● Time Complexity
● Space Complexity

2
Q

What is meant by the time complexity of an algorithm?

A

The amount of time an algorithm takes to solve a particular problem, and how this grows with the size of the input

3
Q

How do you measure the time complexity?

A

Big-O notation

4
Q

What does the big-O notation show?

A

The efficiency of an algorithm: how the time (or space) it needs grows as the size of the input grows

5
Q

What is the Big-O notation good for?

A

It allows you to predict how long an algorithm will take to run given the number of items it has to process

6
Q

What does a linear time complexity mean?

A

O(n)

This is where the time taken for an algorithm increases in proportion to (at the same rate as) the size of the data set.

For example, finding the largest number in a list. If the list size doubles, the time taken doubles.
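
As an illustration (not part of the original card), a minimal Python sketch of an O(n) operation, finding the largest value by examining every item once:

def find_largest(values):
    # Every item is examined exactly once, so the time taken
    # grows in proportion to the length of the list: O(n).
    largest = values[0]
    for value in values[1:]:
        if value > largest:
            largest = value
    return largest

print(find_largest([4, 2, 3, 5, 1]))  # 5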

7
Q

What does a constant time complexity mean?

A

O(1)

This is where the time taken for an algorithm stays the same regardless of the size of the data set.

For example, printing the first letter of a string. No matter how big the string gets it won’t take longer to display the first letter.

8
Q

What does a polynomial time complexity mean?

A

O(n^k), where k is a constant (k >= 0)

This is where the time taken for an algorithm increases in proportion to n raised to the power of a constant.

Bubble sort is an example of such an algorithm.
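
For illustration only, a short Python sketch showing why two nested loops over the same data give O(n^2) behaviour, the same growth rate as bubble sort; the function name is invented:

def count_pairs_out_of_order(values):
    # Two nested loops over n items perform on the order of n * n
    # comparisons, so the time taken grows as O(n^2).
    count = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] > values[j]:
                count += 1
    return count

print(count_pairs_out_of_order([4, 2, 3, 5, 1]))  # prints 6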

9
Q

What is space complexity?

A

The space complexity is the amount of storage space (memory) an algorithm needs while it runs

10
Q

What is an algorithm?

A

An algorithm is a series of steps that complete a task

11
Q

How do you reduce the space complexity?

A

Try to complete all of the operations on the same data set, i.e. work in place rather than making copies of the data

12
Q

How do you reduce the time complexity of an algorithm?

A

Reduce the number of nested (embedded) loops, and reduce the number of items the operations are performed on, e.g. by using a divide and conquer approach

13
Q

What does a logarithmic time complexity mean?

A

O(log n)
This is where the time taken for an algorithm increases logarithmically as the data set increases.

As n increases, the time taken increases at a slower rate, e.g. Binary search.

The inverse of exponential growth.

Scales up well, as the time taken does not increase significantly with the number of data items.

14
Q

What is the Big-O notation of a linear search algorithm?

A

O(n)

15
Q

What is the Big-O notation of a binary search algorithm?

A

O(log(n))

16
Q

What is the Big-O notation of a bubble sort algorithm?

A

O(n^2)

17
Q

Are stacks FIFO or FILO?

A

FILO (first in, last out), also described as LIFO (last in, first out)

18
Q

What is the significance of the back pointer in the array representation of a queue?

A

Holds the location of the next available space in the queue

19
Q

What is the purpose of the front pointer in the array representation of a queue?

A

Points to the space containing the first item in the queue

20
Q

What value is the top pointer initialised to in the array representation of a stack?

A

-1

21
Q

Give pseudocode for the two functions which add new elements to stacks and queues

A

STACK
push(element)
    top += 1
    A[top] = element

QUEUE
enqueue(element)
    A[back] = element
    back += 1

22
Q

Give pseudocode to push onto a stack

A

PROCEDURE AddToStack(item):
    IF top == max THEN
        stackFull = True
    ELSE
        top = top + 1
        stack[top] = item
    ENDIF
ENDPROCEDURE

23
Q

Give pseudocode to pop from a stack

A

PROCEDURE DeleteFromStack():
    IF top == min THEN
        stackEmpty = True
    ELSE
        item = stack[top]
        top = top - 1
    ENDIF
ENDPROCEDURE
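
As a hedged illustration, a runnable Python sketch of an array-based stack matching the push and pop procedures above; the class and identifier names (e.g. max_size) are assumptions, not from the cards:

class ArrayStack:
    def __init__(self, max_size):
        self.items = [None] * max_size   # fixed-size array
        self.max_size = max_size
        self.top = -1                    # top pointer initialised to -1

    def push(self, item):
        if self.top == self.max_size - 1:
            raise OverflowError("stack full")
        self.top += 1
        self.items[self.top] = item

    def pop(self):
        if self.top == -1:
            raise IndexError("stack empty")
        item = self.items[self.top]
        self.top -= 1
        return item

s = ArrayStack(5)
s.push(10)
s.push(20)
print(s.pop())  # 20 (last in, first out)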

24
Q

Give pseudocode to enqueue onto a queue

A

PROCEDURE AddToQueue(item):
    IF ((rear - front) + 1) == max THEN
        queueFull = True
    ELSE
        rear = rear + 1
        queue[rear] = item
    ENDIF
ENDPROCEDURE

25
Q

Give pseudocode to dequeue from a queue

A

PROCEDURE DeleteFromQueue():
    IF front > rear THEN
        queueEmpty = True
    ELSE
        item = queue[front]
        front = front + 1
    ENDIF
ENDPROCEDURE

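For illustration only, a runnable Python sketch of a linear array-based queue in which, as in the earlier pointer cards, back holds the next available space; the names are invented:

class ArrayQueue:
    def __init__(self, max_size):
        self.items = [None] * max_size
        self.max_size = max_size
        self.front = 0   # points to the first item in the queue
        self.back = 0    # points to the next available space

    def enqueue(self, item):
        if self.back == self.max_size:
            raise OverflowError("queue full")
        self.items[self.back] = item
        self.back += 1

    def dequeue(self):
        if self.front == self.back:
            raise IndexError("queue empty")
        item = self.items[self.front]
        self.front += 1
        return item

q = ArrayQueue(5)
q.enqueue("A")
q.enqueue("B")
print(q.dequeue())  # A (first in, first out)
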
26
Q

Perform the first pass of bubble sort on the values: 4, 2, 3, 5, 1

A

4 2 3 5 1
2 4 3 5 1
2 3 4 5 1
2 3 4 5 1
2 3 4 1 5

27
Q

Perform insertion sort on the values: 4, 2, 3, 5, 1

A

4 2 3 5 1
2 4 3 5 1
2 3 4 5 1
2 3 4 5 1
1 2 3 4 5

28
Q

Perform merge sort on the values: 4, 2, 3, 5

A

4 2 3 5
4 2 | 3 5
4 | 2 | 3 | 5
2 4 | 3 5
2 3 4 5

29
Q

Which of our sorting algorithms is the most efficient?

A

Merge sort

30
Q

Which sorting algorithm uses pivots?

A

Quick sort

31
Q

Perform quick sort on the values: 3, 5, 4, 1, 2

A

3 5 4 1 2
3 1 2 4 5
1 3 2 4 5
1 2 3 4 5

32
Q

Which searching algorithm requires data to be sorted before starting?

A

Binary search

33
Q

Perform a linear search for 9 on the values: 1, 3, 6, 7, 9, 12, 15

A

1 3 6 7 9 12 15
Check each item in turn from the left: 1, 3, 6, 7, then 9, which is found at the fifth position.

34
Q

What is the time complexity of binary search?

A

O(log n)

35
Q

Perform a binary search for 6 on the values: 1, 3, 6, 7, 9, 12, 15

A

1 3 6 7 9 12 15
Middle item is 7; 6 < 7, so discard the right half: 1 3 6
Middle item is 3; 6 > 3, so discard the left part: 6
Middle item is 6, so the item is found.

36
Q

Which searching algorithm has a time complexity of O(n)?

A

Linear search

37
Q

What is Dijkstra's algorithm?

A

A shortest-path algorithm used to find the shortest distance between two nodes in a network.

38
Q

How is Dijkstra's algorithm implemented?

A

Using a priority queue which keeps the nodes with the shortest known distances at the front of the queue.

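As an illustrative sketch rather than a definitive implementation, Dijkstra's algorithm in Python using the heapq module as the priority queue; the example graph is made up:

import heapq

def dijkstra(graph, start):
    # graph: dict mapping node -> list of (neighbour, edge_weight)
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    # priority queue of (distance so far, node); shortest distance at the front
    queue = [(0, start)]
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue  # stale entry, a shorter route was already found
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
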
39
Q

What is the A* algorithm?

A

The A* algorithm is a general path-finding algorithm which uses two cost functions: the actual cost from the start node and an approximate (heuristic) cost to the goal.

40
Q

How does the A* algorithm differ from Dijkstra's algorithm?

A

The A* algorithm has two cost functions, the actual cost and the approximate cost, whereas Dijkstra's algorithm uses only the actual cost.

41
Q

Why are heuristics used in the A* algorithm?

A

To estimate an approximate cost from node x to the final node. This aims to make the shortest-path-finding process more efficient.

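A hedged Python sketch of A* showing where the heuristic (approximate cost) fits alongside the actual cost; the graph and heuristic values are invented for illustration:

import heapq

def a_star(graph, heuristic, start, goal):
    # graph: dict node -> list of (neighbour, actual edge cost)
    # heuristic(node): approximate cost from node to the goal
    g_cost = {start: 0}                  # actual cost from the start
    queue = [(heuristic(start), start)]  # ordered by f = g + h
    came_from = {}
    while queue:
        _, node = heapq.heappop(queue)
        if node == goal:
            # rebuild the path by walking back through came_from
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return list(reversed(path))
        for neighbour, cost in graph[node]:
            new_g = g_cost[node] + cost
            if new_g < g_cost.get(neighbour, float("inf")):
                g_cost[neighbour] = new_g
                came_from[neighbour] = node
                heapq.heappush(queue, (new_g + heuristic(neighbour), neighbour))
    return None  # no path found

graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
h = {"A": 4, "B": 3, "C": 1, "D": 0}            # made-up heuristic estimates
print(a_star(graph, lambda n: h[n], "A", "D"))  # ['A', 'B', 'D']
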
42
Q

What data structure can be used to implement Dijkstra's algorithm?

A

Priority queue

43
Q

State a disadvantage of using the A* algorithm

A

The speed of the algorithm is constrained by the accuracy of the heuristic

44
Q

What is depth first traversal?

A

● Visit all nodes to the left of the root node
● Visit the right
● Visit the root node
● Repeat these three points for each node visited

Depth first isn't guaranteed to find the quickest solution, and may never find a solution at all if we don't take precautions to avoid revisiting previously visited states.

45
Q

What is breadth first traversal?

A

● Visit the root node
● Visit all direct subnodes (children)
● Visit all subnodes of the first subnode
● Repeat these points for each subnode visited

Breadth first requires more memory than depth first search, and it is slower if you are looking at deep parts of the tree.

46
Q

Write pseudocode for depth first traversal

A

FUNCTION dfs(graph, start):
    markAllVertices(notVisited)
    createStack()
    currentNode = start
    markAsVisited(start)
    pushIntoStack(start)
    WHILE StackIsEmpty() == false
        currentNode = popFromStack()
        markAsVisited(currentNode)
        // the following sub-routine pushes all nodes connected to
        // currentNode AND that are unvisited
        pushUnvisitedAdjacents(currentNode)
    ENDWHILE
ENDFUNCTION

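For illustration, the same stack-based depth first traversal as a runnable Python sketch (the example graph is invented):

def dfs(graph, start):
    visited = set()
    stack = [start]
    order = []
    while stack:                       # WHILE StackIsEmpty() == false
        node = stack.pop()             # popFromStack
        if node in visited:
            continue
        visited.add(node)              # markAsVisited
        order.append(node)
        for neighbour in graph[node]:  # pushUnvisitedAdjacents
            if neighbour not in visited:
                stack.append(neighbour)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(graph, "A"))  # ['A', 'C', 'B', 'D']
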
47
Q

Write pseudocode for breadth first traversal

A

FUNCTION bfs(graph, start):
    markAllVertices(notVisited)
    createQueue()
    currentNode = start
    markAsVisited(start)
    pushIntoQueue(start)
    WHILE QueueIsEmpty() == false
        currentNode = popFromQueue()
        markAsVisited(currentNode)
        // the following sub-routine queues all nodes connected to
        // currentNode AND that are unvisited
        pushUnvisitedAdjacents(currentNode)
    ENDWHILE
ENDFUNCTION

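And a matching Python sketch of breadth first traversal, using collections.deque as the queue (same invented example graph):

from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:                       # WHILE QueueIsEmpty() == false
        node = queue.popleft()         # popFromQueue
        order.append(node)
        for neighbour in graph[node]:  # queue unvisited adjacent nodes
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
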
48
Q

Write pseudocode that outputs a linked list in order

A

FUNCTION OutputLinkedListInOrder():
    Ptr = start value
    REPEAT
        Go to node(Ptr value)
        OUTPUT data at node
        Ptr = value of the next-item pointer at the node
    UNTIL Ptr == 0
ENDFUNCTION

49
Q

Write pseudocode that searches for an item in a linked list

A

FUNCTION SearchForItemInLinkedList(searchItem):
    Ptr = start value
    REPEAT
        Go to node(Ptr value)
        IF data at node == searchItem THEN
            OUTPUT data AND STOP
        ELSE
            Ptr = value of the next-item pointer at the node
        ENDIF
    UNTIL Ptr == 0
    OUTPUT "data item not found"
ENDFUNCTION

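A minimal Python sketch of the two linked-list routines above, using a simple Node class that is an assumption rather than part of the cards:

class Node:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

def output_in_order(start):
    ptr = start
    while ptr is not None:        # None plays the role of Ptr = 0
        print(ptr.data)
        ptr = ptr.next            # follow the next-item pointer

def search(start, search_item):
    ptr = start
    while ptr is not None:
        if ptr.data == search_item:
            return ptr            # found: return the node
        ptr = ptr.next
    return None                   # data item not found

head = Node(3, Node(7, Node(12)))
output_in_order(head)               # prints 3, 7, 12
print(search(head, 7) is not None)  # True
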
50
Q

How do you do bubble sort?

A

Bubble sort is intuitive (easy to understand and program) but woefully inefficient. It uses a temp variable for swapping and moves through the data in the list repeatedly in a linear way.

● Start at the beginning and compare the first item with the second. If they are out of order, swap them and set a variable swapMade to True.
● Do the same with the second and third items, the third and fourth, and so on until the end of the list.
● At the end of the list, if swapMade is True, change it to False and start again; if it is False, the list is sorted and the algorithm stops.

51
Q

Write pseudocode that performs bubble sort

A

PROCEDURE BubbleSort(items):
    swapMade = True
    WHILE swapMade == True
        swapMade = False
        FOR position = 0 TO length(items) - 2
            IF items[position] > items[position + 1] THEN
                temp = items[position]
                items[position] = items[position + 1]
                items[position + 1] = temp
                swapMade = True
            ENDIF
        NEXT position
    ENDWHILE
    PRINT(items)
ENDPROCEDURE

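The same bubble sort expressed as a runnable Python sketch:

def bubble_sort(items):
    swap_made = True
    while swap_made:
        swap_made = False
        for position in range(len(items) - 1):
            if items[position] > items[position + 1]:
                # swap the out-of-order pair (temp-style swap)
                items[position], items[position + 1] = items[position + 1], items[position]
                swap_made = True
    return items

print(bubble_sort([4, 2, 3, 5, 1]))  # [1, 2, 3, 4, 5]
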
52
Q

Write pseudocode that performs insertion sort

A

PROCEDURE InsertionSort(list):
    item = length(list)
    FOR index = 1 TO item - 1
        currentvalue = list[index]
        position = index
        WHILE position > 0 AND list[position - 1] > currentvalue
            list[position] = list[position - 1]
            position = position - 1
        ENDWHILE
        list[position] = currentvalue
    NEXT index
ENDPROCEDURE

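A runnable Python version of the same insertion sort, for comparison:

def insertion_sort(items):
    for index in range(1, len(items)):
        current_value = items[index]
        position = index
        # shift larger items one place to the right
        while position > 0 and items[position - 1] > current_value:
            items[position] = items[position - 1]
            position -= 1
        items[position] = current_value   # insert into the gap
    return items

print(insertion_sort([4, 2, 3, 5, 1]))  # [1, 2, 3, 4, 5]
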
53
Q

How does merge sort work?

A

It works by splitting n data items into n sub-lists, each one item big. These sub-lists are then merged into sorted lists two items big, which are merged into sorted lists four items big, and so on until there is one sorted list.

It is a recursive algorithm, so it may require more memory space. It is fast and more efficient with larger volumes of data to sort.

54
Q

Write pseudocode that does a merge sort

A

// merges two sorted sub-lists, listA and listB, into newlist
PROCEDURE MergeSort(listA, listB):
    a = 0
    b = 0
    n = 0
    WHILE a < length(listA) AND b < length(listB)
        IF listA(a) < listB(b) THEN
            newlist(n) = listA(a)
            a = a + 1
        ELSE
            newlist(n) = listB(b)
            b = b + 1
        ENDIF
        n = n + 1
    ENDWHILE
    WHILE a < length(listA)
        newlist(n) = listA(a)
        a = a + 1
        n = n + 1
    ENDWHILE
    WHILE b < length(listB)
        newlist(n) = listB(b)
        b = b + 1
        n = n + 1
    ENDWHILE
ENDPROCEDURE

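For illustration, a recursive Python merge sort that pairs the splitting step with a merge routine like the one above; the helper names are invented:

def merge(list_a, list_b):
    # merge two sorted lists into one sorted list
    a, b, new_list = 0, 0, []
    while a < len(list_a) and b < len(list_b):
        if list_a[a] < list_b[b]:
            new_list.append(list_a[a])
            a += 1
        else:
            new_list.append(list_b[b])
            b += 1
    new_list.extend(list_a[a:])   # copy any remaining items
    new_list.extend(list_b[b:])
    return new_list

def merge_sort(items):
    if len(items) <= 1:           # a list of one item is already sorted
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

print(merge_sort([4, 2, 3, 5]))   # [2, 3, 4, 5]
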
55
Q

How do you do quick sort?

A

Quick sort uses divide and conquer. It picks an item as a 'pivot', then creates two sub-lists: those bigger than the pivot and those smaller than it. The same process is then applied recursively to the sub-lists until all items are pivots, which will be in the correct order. (As this recursive algorithm can be memory intensive, iterative variants exist.)

An alternative method uses two pointers. It compares the numbers at the pointers and swaps them if they are in the wrong order, moving one pointer at a time.

Quick sort is very quick for large sets of data, but the initial arrangement of the data affects the time taken, and it is harder to code.

56
Q

Write pseudocode for a quick sort

A

PROCEDURE QuickSort(list, start, end):
    leftPtr = start
    rightPtr = end
    WHILE leftPtr != rightPtr
        WHILE list[leftPtr] < list[rightPtr] AND leftPtr != rightPtr
            leftPtr = leftPtr + 1
        ENDWHILE
        temp = list[leftPtr]
        list[leftPtr] = list[rightPtr]
        list[rightPtr] = temp
        WHILE list[leftPtr] < list[rightPtr] AND leftPtr != rightPtr
            rightPtr = rightPtr - 1
        ENDWHILE
        temp = list[leftPtr]
        list[leftPtr] = list[rightPtr]
        list[rightPtr] = temp
    ENDWHILE
ENDPROCEDURE

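A short Python sketch of the pivot-and-sub-lists description of quick sort (simpler than the two-pointer variant above):

def quick_sort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]                               # pick an item as the pivot
    smaller = [x for x in items[1:] if x < pivot]  # sub-list smaller than the pivot
    bigger = [x for x in items[1:] if x >= pivot]  # sub-list bigger than (or equal to) the pivot
    return quick_sort(smaller) + [pivot] + quick_sort(bigger)

print(quick_sort([3, 5, 4, 1, 2]))  # [1, 2, 3, 4, 5]
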
57
Q

How does the binary search work?

A

Binary search requires the list to be sorted in order to allow the appropriate items to be discarded. It involves checking the item in the middle of the bounds of the space being searched. If the middle item is bigger than the item we are looking for, it becomes the upper bound. If it is smaller than the item we are looking for, it becomes the lower bound. The algorithm repeatedly discards and halves the list at each step until the item is found.

Binary search is usually faster on a large set of data than linear search because fewer items are checked, so it is more efficient for large files. It doesn't benefit from an increase in speed with additional processors, but it can perform better on large data sets with one processor than linear search with many processors.

58
Q

Write pseudocode for an iterative binary search

A

FUNCTION BinaryS(list, value, leftPtr, rightPtr):
    Found = False
    WHILE Found == False AND leftPtr <= rightPtr
        mid = (leftPtr + rightPtr) DIV 2
        IF list[mid] > value THEN
            rightPtr = mid - 1
        ELSEIF list[mid] < value THEN
            leftPtr = mid + 1
        ELSE
            Found = True
        ENDIF
    ENDWHILE
    IF Found == True THEN
        RETURN mid
    ELSE
        RETURN error message
    ENDIF
ENDFUNCTION

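The iterative binary search as a runnable Python sketch; it returns -1 rather than an error message when the value is absent:

def binary_search(values, target):
    left, right = 0, len(values) - 1
    while left <= right:
        mid = (left + right) // 2       # integer division
        if values[mid] > target:
            right = mid - 1             # discard the right half
        elif values[mid] < target:
            left = mid + 1              # discard the left half
        else:
            return mid                  # found
    return -1                           # not found

print(binary_search([1, 3, 6, 7, 9, 12, 15], 6))  # 2
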
59
Q

Write pseudocode for a recursive binary search

A

FUNCTION BinaryS(list, value, leftPtr, rightPtr):
    IF rightPtr < leftPtr THEN
        RETURN error message
    ENDIF
    mid = (leftPtr + rightPtr) DIV 2
    IF list[mid] > value THEN
        RETURN BinaryS(list, value, leftPtr, mid - 1)
    ELSEIF list[mid] < value THEN
        RETURN BinaryS(list, value, mid + 1, rightPtr)
    ELSE
        RETURN mid
    ENDIF
ENDFUNCTION

60
Q

How does linear search work?

A

Start at the first location and check each subsequent location until the desired item is found or the end of the list is reached. It does not need an ordered list and searches through all items from the beginning, one by one.

Linear search generally performs much better than binary search if the list is small or if the item being searched for is very close to the start of the list. It can have multiple processors searching different areas at the same time, so it scales well with additional processors.

61
Q

Write pseudocode for a linear search

A

FUNCTION LinearS(list, value):
    Ptr = 0
    WHILE Ptr < length(list) AND list[Ptr] != value
        Ptr = Ptr + 1
    ENDWHILE
    IF Ptr >= length(list) THEN
        PRINT("Item is not in the list")
    ELSE
        PRINT("Item is at location " + Ptr)
    ENDIF
ENDFUNCTION

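And the linear search as a runnable Python sketch:

def linear_search(values, target):
    ptr = 0
    while ptr < len(values) and values[ptr] != target:
        ptr += 1                      # check each location in turn
    if ptr >= len(values):
        return -1                     # item is not in the list
    return ptr                        # location of the item

print(linear_search([1, 3, 6, 7, 9, 12, 15], 9))  # 4
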
62
Q

What is the worst, average and best case for a bubble sort?

A

Worst case: n^2
Average case: n^2
Best case: n

63
Q

What is the worst, average and best case for an insertion sort?

A

Worst case: n^2
Average case: n^2
Best case: n

64
Q

What is the worst, average and best case for a merge sort?

A

Worst case: n log n
Average case: n log n
Best case: n log n

65
Q

What is the worst, average and best case for a quick sort?

A

Worst case: n^2
Average case: n log n
Best case: n log n

66
Q

What is the worst, average and best case for a binary search?

A

Worst case: log2 n
Average case: log2 n - 1
Best case: 1

67
Q

What is the worst, average and best case for a linear search?

A

Worst case: n
Average case: n/2
Best case: 1