# Lecture 15-17 - Dynamic Programming Flashcards

Name three key differences between Greedy and Dynamic Programming paradigms

Greedy:

o Build up a solution incrementally.

o Iteratively decompose and reduce the size of the problem.

o Top-down approach.

Dynamic programming:

o Solve all possible sub-problems.

o Assemble them to build up solutions to larger problems.

o Bottom-up approach.

Define the optimal sub-structure mathematically

Let Sij = the subset of activities in S that start after ai finishes and finish before aj starts:

Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj }

• Aij = an optimal solution to Sij

• If ak ∈ Aij, then Aij = Aik U { ak } U Akj

How many sub-problems, and choices to consider, are there in the activity selection problem before and after Greedy choice?

Before the theorem:

Pick the best m such that Aij = Aim U { am } U Amj

Sub-problems: 2

Choices: j-i-1

After the theorem:

Choose am ∈ Sij with the earliest finish time (greedy choice)

Sub-problems: 1

Choices to consider: 1

Define the Greedy choice theorem:

Theorem: Let Sij ≠ ∅, and let am be the activity in Sij with the earliest finish time: fm = min{ fk : ak ∈ Sij }. Then:

1. am is used in some maximum-size subset of mutually compatible activities of Sij.

2. Sim = ∅, so that choosing am leaves Smj as the only nonempty subproblem.
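As a concrete illustration, here is a minimal Python sketch of the greedy algorithm the theorem justifies (the `(start, finish)` tuple representation and the function name are illustrative assumptions, not from the lecture):

```python
# Greedy activity selection: sort by finish time, repeatedly take the
# first activity compatible with the last one chosen.
def select_activities(activities):
    # activities: list of (start, finish) pairs
    result = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            result.append((start, finish))
            last_finish = finish
    return result

# Example: the greedy choice (earliest finish time) picks a maximum-size subset.
print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```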

What is the input and output in weighted interval scheduling?

Is the greedy choice always effective in this problem?

Input: Set S of n activities, a1, a2, …, an.

– si = start time of activity i.

– fi = finish time of activity i.

– wi = weight of activity i

Output: find a maximum-weight subset of mutually compatible activities.

The greedy choice isn’t always effective here: e.g., two short unit-weight jobs that finish early would be chosen by earliest-finish-time greedy over a single overlapping job of much larger weight.

Define Binary choice mathematically in terms of Opt(j) and p(j).

OPT(j) = value of the optimal solution to the problem consisting of jobs 1, 2, …, j.

p(j) = largest index i < j such that activity/job i is compatible with activity/job j.

OPT(j) =

0 if j = 0

max {wj + OPT(p(j)), OPT(j-1)} otherwise

Define memoization.

Memoization: Cache results of each subproblem; lookup as needed.

    for j = 1 to n
        M[j] ← empty
    M[0] ← 0

    M-Compute-Opt(j):
        if M[j] is empty
            M[j] ← max(v[j] + M-Compute-Opt(p[j]), M-Compute-Opt(j-1))
        return M[j]
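A runnable Python sketch of the same memoized recursion, assuming jobs are given as `(start, finish, weight)` tuples (the function and helper names are my own, not the lecture's):

```python
import bisect
from functools import lru_cache

def max_weight_schedule(jobs):
    """jobs: list of (start, finish, weight). Returns the optimal total weight."""
    jobs = sorted(jobs, key=lambda j: j[1])          # sort by finish time
    starts = [j[0] for j in jobs]
    finishes = [j[1] for j in jobs]

    # p(j) = largest index i < j (1-based) whose job is compatible with job j
    def p(j):
        return bisect.bisect_right(finishes, starts[j - 1])

    @lru_cache(maxsize=None)                          # memoization: cache each OPT(j)
    def opt(j):
        if j == 0:
            return 0
        w_j = jobs[j - 1][2]
        return max(w_j + opt(p(j)), opt(j - 1))       # binary choice

    return opt(len(jobs))

# Example: the long high-weight job beats the two short jobs greedy would pick.
print(max_weight_schedule([(0, 3, 1), (3, 6, 1), (0, 6, 10)]))   # 10
```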

Prove that the memoized version of Binary choice takes O(n log n) time.

Sort by finish time: O(n log n).

Computing p(·): O(n log n), via sorting by start time.

M-Compute-Opt(j): O(n), since

each invocation takes O(1) time and either:

1. returns existing M[j]

2. fills in one new entry M[j] and makes two recursive calls (at most 2n recursive calls)

Remark: O(n) if jobs are presorted by start and finish times.

What’s the main idea of dynamic programming (in words)? How is this used in the Bottom-up algorithm?

Solve the sub-problems in an order that guarantees that, whenever an answer is needed, it has already been computed.

In the bottom-up algorithm: when M[j] is computed, only the values M[k] for k < j are needed, and these are already in the table.

    BOTTOM-UP(n; s1,…,sn; f1,…,fn; v1,…,vn)
        Sort jobs by finish time so that f1 ≤ f2 ≤ … ≤ fn.
        Compute p(1), p(2), …, p(n).
        M[0] ← 0
        for j = 1 to n
            M[j] ← max { vj + M[p(j)], M[j-1] }
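The same table filled iteratively, as a hedged Python sketch (the job representation is an assumption; it also returns p and the sorted jobs so the solution can be reconstructed afterwards):

```python
import bisect

def bottom_up_schedule(jobs):
    """jobs: list of (start, finish, weight). Returns (M, sorted jobs, p)."""
    jobs = sorted(jobs, key=lambda j: j[1])            # sort by finish time
    starts = [j[0] for j in jobs]
    finishes = [j[1] for j in jobs]
    n = len(jobs)

    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect.bisect_right(finishes, starts[j - 1])

    M = [0] * (n + 1)
    for j in range(1, n + 1):                          # only needs M[k] for k < j
        w_j = jobs[j - 1][2]
        M[j] = max(w_j + M[p[j]], M[j - 1])
    return M, jobs, p

M, jobs, p = bottom_up_schedule([(0, 3, 1), (3, 6, 1), (0, 6, 10)])
print(M[-1])   # 10
```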

How many recursive calls are there in the Find-Solution algorithm?

Number of recursive calls ≤ n ⇒ O(n).
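Continuing the bottom-up sketch above (reusing M, jobs and p from it), Find-Solution could look like the following sketch; each call makes at most one recursive call, hence at most n calls:

```python
def find_solution(j, M, jobs, p):
    # Walk back through the table: job j is in the solution iff taking it
    # was at least as good as skipping it.
    if j == 0:
        return []
    w_j = jobs[j - 1][2]
    if w_j + M[p[j]] >= M[j - 1]:
        return find_solution(p[j], M, jobs, p) + [jobs[j - 1]]
    return find_solution(j - 1, M, jobs, p)

print(find_solution(len(jobs), M, jobs, p))   # [(0, 6, 10)]
```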

Do you remember how the reconstruction works (table example Lec 15)

Yes

Define the shortest-path weight from u to v in terms of the path weight w().

δ(u, v) = min { w(p) : p is a path from u to v }, if a path from u to v exists;

∞ otherwise.

(Here w(p) is the sum of the weights of the edges on path p.)

What type of queues does Dijkstra’s algorithm use?

Are negative-weight edges allowed?

What type of keys does each node hold?

Is it DP or greedy choice?

Why is re-insertion into the queue not a good way to deal with negative-weight edges? Why is adding a constant also not a good idea? Give an example.

A min-priority queue.

No, negative-weight edges are not allowed.

Each node’s key is its shortest-path estimate d[v].

Greedy.

Re-insertion -> can blow up to exponential running time.

Adding a constant -> doesn’t always work: adding c to every edge adds c times the number of edges to each path’s weight, so paths with more edges are penalised and the shortest path can change (see Lect 16 for an example).
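For reference, a minimal Python sketch of Dijkstra with a binary-heap min-priority queue (the adjacency-list representation is an assumption; the lecture's version may instead use decrease-key, and the lazy re-insertion used here is safe only because weights are non-negative):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]} with non-negative weights.
    Returns d[v] = shortest-path weight from source to v."""
    d = {source: 0}
    pq = [(0, source)]                        # keys are tentative shortest-path weights
    visited = set()
    while pq:
        dist_u, u = heapq.heappop(pq)         # greedy choice: closest unvisited vertex
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            if v not in visited and dist_u + w < d.get(v, float("inf")):
                d[v] = dist_u + w
                heapq.heappush(pq, (d[v], v))  # lazy "decrease-key" by re-insertion
    return d

print(dijkstra({"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}, "s"))
# {'s': 0, 'a': 1, 'b': 3}
```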

How is the Bellman-Ford algorithm different from Dijkstra’s?

It allows negative-weight edges (but not negative-weight cycles reachable from the source).

How does Bellman-Ford detect negative weight cycles?

If Bellman-Ford has not converged after |V(G)| - 1 iterations, then there cannot be a shortest-path tree, so there must be a negative-weight cycle.

It returns TRUE if no negative-weight cycles are reachable from s, FALSE otherwise.
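A hedged Python sketch of Bellman-Ford with this negative-cycle check (the edge-list representation and names are assumptions):

```python
def bellman_ford(vertices, edges, source):
    """edges: list of (u, v, weight). Returns (True, d) if no negative-weight
    cycle is reachable from source, otherwise (False, d)."""
    d = {v: float("inf") for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):        # |V| - 1 passes of relaxing every edge
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    # One more pass: if anything still relaxes, a negative-weight cycle is reachable.
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return False, d
    return True, d

ok, d = bellman_ford({"s", "a", "b"}, [("s", "a", 4), ("s", "b", 5), ("a", "b", -3)], "s")
print(ok, d)   # True {'s': 0, 'a': 4, 'b': 1}
```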

What is the time complexity of Bellman-Ford? Is it larger than Dijkstra’s?

O(VE)

Yes, because edges are relaxed far more often than in Dijkstra’s.