# Exam 1 Flashcards

## Studying

What is the approach for designing a dynamic programming problem?

- Define the subproblem in words and try the same problem on a prefix of the input
- Define a recursive relation which expresses k(i) - the solution to the ith subproblem - in terms of smaller subproblems, e.g. k(1) … k(i-1)

What are some example algorithms for the divide and conquer technique?

merge sort and solving recurrences

Euclid's GCD algorithm

Multiplying n-bit integers

FFT

Median

What is Gauss's idea?

It’s based on the idea that:

- multiplication is expensive

- addition and subtraction are cheap

- the product of 2 complex numbers expands to 4 real-number products: (a + bi)(c + di) = (ac - bd) + (bc + ad)i

- thus we need 4 real-number multiplications: ac, bd, bc, ad

- we can reduce these multiplications from 4 to 3

original) ac, bd, bc, ad

new) ac, bd, (a + b)(c + d), since bc + ad = (a + b)(c + d) - ac - bd

both give us the result (a + bi)(c + di)
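The trick can be sketched in Python (a minimal illustration; the function name is mine, not from the card):

```python
# Gauss's trick: multiply (a + bi)(c + di) using 3 real multiplications
# instead of 4.
def gauss_multiply(a, b, c, d):
    """Return (real, imag) parts of (a + bi) * (c + di)."""
    ac = a * c                    # multiplication 1
    bd = b * d                    # multiplication 2
    cross = (a + b) * (c + d)     # multiplication 3
    real = ac - bd                # real part: ac - bd
    imag = cross - ac - bd        # imaginary part bc + ad, recovered by subtraction
    return real, imag
```

For example, (1 + 2i)(3 + 4i) = -5 + 10i, which matches `gauss_multiply(1, 2, 3, 4)`.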

How does the D&C approach work for multiplying 2 numbers (naive approach)?

input = x, y where both are n-bit integers

output = z where z = xy

Runtime is based on the number of bits in x and y

Idea is to break each of the inputs into 2 halves

- Break the input into 2 halves and solve the problems on the 2 halves recursively

ex) x = 182, which is 10110110 in binary

X_L = 1011 = 11

X_R = 0110 = 6

182 = 11 * 2^4 + 6

General:

x = X_L * 2^(n/2) + X_R
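The split can be sketched with bit operations (a sketch; the helper name is mine):

```python
# Split an n-bit integer x into halves so that x = X_L * 2^(n//2) + X_R.
def split_bits(x, n):
    half = n // 2
    x_left = x >> half                # top half of the bits
    x_right = x & ((1 << half) - 1)   # bottom half of the bits
    return x_left, x_right

# e.g. split_bits(182, 8) gives (11, 6), and 11 * 2**4 + 6 == 182
```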

How does the D&C approach work for multiplying 2 numbers with the recursive approach of easy multiply? What is the runtime?

input = x, y where both are n-bit integers

output = z where z = xy

partition x: x = X_L * 2^(n/2) + X_R

partition y: y = Y_L * 2^(n/2) + Y_R

So xy = (X_L * 2^(n/2) + X_R)(Y_L * 2^(n/2) + Y_R)

Simplify it to be:

(2^n)(X_L Y_L) + (2^(n/2))(X_L Y_R + X_R Y_L) + X_R Y_R

- This expands to 3 terms that are added together to give us xy, and computing them takes 4 subproblem multiplications: X_L Y_L, X_L Y_R, X_R Y_L, X_R Y_R

EasyMultiply pseudocode:

A = easymultiply(X_L, Y_L)

B = easymultiply(X_R, Y_R)

C = easymultiply(X_L, Y_R)

D = easymultiply(X_R, Y_L)

z = (2^n) * A + (2^(n/2))(C + D) + B

Runtime: T(n) = 4T(n/2) + O(n) = O(n^2)
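The pseudocode above translates directly to Python (a sketch, assuming n is a power of 2; the base case multiplies single bits directly):

```python
# EasyMultiply: 4 recursive multiplications per level -> O(n^2) overall.
def easy_multiply(x, y, n):
    if n <= 1:
        return x * y                            # base case: tiny product
    half = n // 2
    x_l, x_r = x >> half, x & ((1 << half) - 1)  # x = x_l * 2^half + x_r
    y_l, y_r = y >> half, y & ((1 << half) - 1)
    a = easy_multiply(x_l, y_l, half)
    b = easy_multiply(x_r, y_r, half)
    c = easy_multiply(x_l, y_r, half)
    d = easy_multiply(x_r, y_l, half)
    return (a << n) + ((c + d) << half) + b      # z = 2^n*A + 2^(n/2)(C+D) + B
```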

What is the equation for the improved approach or fast multiply? What change was made?

We simplify from 4 to 3 multiplications.

(2^n)(X_L Y_L) + (2^(n/2))(X_L Y_R + X_R Y_L) + X_R Y_R

We convert the above to the following, where A = X_L Y_L, B = X_R Y_R, and now C = (X_L + X_R)(Y_L + Y_R):

xy = (2^n)A + (2^(n/2))(C - A - B) + B

which now needs only 3 multiplications, since X_L Y_R + X_R Y_L = C - A - B

The run time is now:

O(n^(log_2 3)) ≈ O(n^1.585)

This is from:

T(n) = 3T(n/2) + O(n)

which evaluates to O(n^(log_2 3))
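The 3-multiplication version (the Karatsuba algorithm) differs from easy multiply only in how the middle term is obtained (a sketch under the same power-of-2 assumption):

```python
# FastMultiply / Karatsuba: 3 recursive multiplications -> O(n^log2(3)).
def fast_multiply(x, y, n):
    if n <= 1:
        return x * y
    half = n // 2
    x_l, x_r = x >> half, x & ((1 << half) - 1)
    y_l, y_r = y >> half, y & ((1 << half) - 1)
    a = fast_multiply(x_l, y_l, half)
    b = fast_multiply(x_r, y_r, half)
    c = fast_multiply(x_l + x_r, y_l + y_r, half)  # one call replaces two
    return (a << n) + ((c - a - b) << half) + b    # C - A - B = X_L*Y_R + X_R*Y_L
```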

What are the general forms of recurrence?

For:

T(n) = aT(n/b) + O(n)

O(n^(log_b a)) if a > b

O(n log n) if a = b

O(n) if a < b

What is the name of the shortest path algorithm that finds the single-source shortest path from s to every vertex? What is the input, output, and runtime? How does this algorithm work (general steps)? What is its limitation?

Dijkstra’s

Input: graph G = (V, E) with edge weights, start vertex s ∈ V

Output: dist(z), the length of the shortest path from s to z, for all z ∈ V

The way it works:

- Explores the graph in a BFS-like manner

- Takes O(n + m) for the exploration (linear time)

- Also takes O(log n) per operation for the min-heap / priority queue holding tentative distances

Total time: O((n + m) log n)

Requires non-negative edge weights in the graph
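The steps above can be sketched with Python's heapq (a minimal sketch; the adjacency-list representation is my assumption):

```python
import heapq

# Dijkstra's algorithm: single-source shortest paths, non-negative weights.
# `graph` maps each vertex to a list of (neighbor, weight) pairs.
def dijkstra(graph, s):
    dist = {v: float('inf') for v in graph}
    dist[s] = 0
    heap = [(0, s)]                  # min-heap keyed on tentative distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                 # stale entry, distance already improved
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```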

What are negative weight cycles in the shortest path problem and how do they affect the result?

They make the shortest path undefined: each extra walk around a negative cycle decreases the total length, so the "shortest" distance to affected vertices tends to negative infinity

What is the dynamic programming idea when it comes to finding the shortest path for a single source to the vertices? What is the base and recurrence cases?

- We use a prefix of the result path and condition on the number of edges in the path
- The DP idea => introduce variable i ranging from 0 to n - 1 which is the number of edges that we allow on the paths

For 0 <= i <= n - 1 and z ∈ V:

Let D(i, z) = length of the shortest path from s to z using at most i edges

Base cases:

D(0, s) = 0, the distance from s to itself

For all z ≠ s, D(0, z) = infinity (no path using 0 edges reaches z)

Recurrence case:

For i >= 1 (when i is at least 1):

D(i, z) = min { D(i-1, z), min_y { D(i-1, y) + w(y, z) } }

D(i-1, z) covers the case where the solution uses at most i - 1 edges

Otherwise, try all choices for the penultimate vertex y and take the shortest path from s to y using at most i - 1 edges plus the length of the last edge: min_y { D(i-1, y) + w(y, z) }

What is the name of the algorithm that is similar to Dijkstra's but handles negative weight edges and can detect negative cycles? What is the overall time complexity? What are the steps and what does it ultimately return?

Bellman-Ford

Time complexity: O(nm)

Steps:

Init base cases where i = 0:

D(0, s) = 0

D(0, z) = infinity for all z ≠ s

Then for i = 1 to n - 1:

For all z ∈ V:

Set D(i, z) = D(i - 1, z)

For each edge y → z:

Update D(i, z) = D(i - 1, y) + w(y, z) if that is smaller than the current D(i, z)

Return D(n - 1, ·)

How are negative cycles detected in Bellman Ford?

When there's a negative length cycle reachable from s, the distance to some vertex keeps shrinking as i grows, with no lower bound

So with a negative cycle, row i = n will be smaller than row i = n - 1 for some vertex. This shouldn't normally happen: without a negative cycle, distances stabilize by i = n - 1

So how do we determine if there's a negative weight cycle:

Run one extra iteration (i = n) and compare rows n and n - 1; if D(n, z) < D(n - 1, z) for some z, there is a negative cycle
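The steps plus the extra-round check can be sketched as follows (a sketch; the edge-list representation is my assumption, and a single distance table stands in for the rows D(i, ·)):

```python
# Bellman-Ford: single-source shortest paths with negative edges allowed.
# `edges` is a list of (y, z, w) triples. After the n-1 main rounds, one
# extra relaxation round detects a reachable negative cycle.
def bellman_ford(vertices, edges, s):
    n = len(vertices)
    dist = {v: float('inf') for v in vertices}
    dist[s] = 0
    for _ in range(n - 1):                       # rounds i = 1 .. n-1
        for y, z, w in edges:
            if dist[y] + w < dist[z]:
                dist[z] = dist[y] + w
    # round n: any further improvement means a negative cycle
    has_neg_cycle = any(dist[y] + w < dist[z] for y, z, w in edges)
    return dist, has_neg_cycle
```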

What are 2 algorithm approaches to finding all pairs shortest path for a graph with edges?

A. All-pairs shortest path Bellman-Ford variant

how it works:

- input a graph with vertices/edges/weights for edges

Run Bellman-Ford once from every source s ∈ V; each run gives the shortest paths from s to all other vertices

Run time is n runs × O(nm) each = O(n^2 m)

B. Floyd Warshall

Run time: O(n^3)

How it works:

Base case, for every s from 1 to n and every t from 1 to n:

D(0, s, t) = 0 if s = t; w(s, t) if there's an edge s → t; infinity otherwise

For 1 <= i <= n

For 1 <= s <= n

For 1 <= t <= n

D(i,s,t) = min { D(i-1, s, t), D(i - 1, s, i) + D(i - 1, i, t) }

Return D(n, . , . )

We return the case where i = n (the matrix D(n, ·, ·)), which holds the shortest path lengths for all possible pairs
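The triple loop can be sketched as follows (a sketch; vertices are 0-indexed here rather than the card's 1..n):

```python
# Floyd-Warshall: all-pairs shortest paths in O(n^3).
# `w` is an n x n matrix with w[s][s] = 0, w[s][t] = edge weight,
# and float('inf') where there is no edge.
def floyd_warshall(w):
    n = len(w)
    dist = [row[:] for row in w]      # base case: D(0) = w
    for i in range(n):                # allow vertex i as an intermediate stop
        for s in range(n):
            for t in range(n):
                dist[s][t] = min(dist[s][t], dist[s][i] + dist[i][t])
    return dist                       # a negative dist[y][y] signals a negative cycle
```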

What is the difference between the 2 different all shortest paths algorithms in how they handle negative cycles?

All-pairs Bellman-Ford:

Runs a single-source shortest path computation from each source

A single run from source s only detects negative cycles reachable from s, so cycles may be missed from a given start vertex

Floyd-Warshall:

All-pairs shortest path

Will find the negative cycles: one exists exactly when D(n, y, y) < 0 for some vertex y

What is the base case and recurse case for the Fibonacci recursive problem? What is the run time? Is this the best solution? Why or why not?

The recursive formula for the fibonacci problem is

F_n = F_(n-1) + F_(n-2) (this is true for any nth Fibonacci number, n >= 2)

The base case for this problem is the first two fibonacci numbers which are 0 and 1

F(0) = 0

F(1) = 1

Runtime:

T(n) <= O(1) + T(n-1) + T(n-2)

O(1.618^n), i.e. exponential run time (the base is the golden ratio)

This is not the most optimal solution and therefore not the best solution

It's not optimal because we calculate the same subproblems over and over again when recursing down the tree of subproblems. We need to store the previously solved subproblems instead
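The fix described above, storing solved subproblems, can be sketched bottom-up (a sketch):

```python
# Bottom-up Fibonacci: each F(i) is computed once -> O(n) time
# instead of the exponential naive recursion.
def fib(n):
    if n <= 1:
        return n                       # base cases F(0) = 0, F(1) = 1
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```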