Dynamic programming Flashcards

1
Q

What is the basic idea behind dynamic programming?

A
  • Dynamic programming solves problems by combining the solutions to subproblems.
  • Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.
  • A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table.
  • Dynamic programming is typically applied to optimization problems: there are many solutions, and we wish to find an optimal one.
2
Q

What are the 4 steps of dynamic programming?

A
    1. Characterize the structure of OPT
    2. Recursively define the value of OPT
    3. Compute the value of OPT in a bottom-up fashion
    4. Construct OPT from computed information
  • Step 4 can be omitted if only the value of an optimal solution is required.
  • When we do perform step 4, we sometimes maintain additional information during step 3 to ease the construction of OPT.
3
Q

What should we do if we have found a working recursion that is not efficient (because it computes the same subproblems over and over again)?

A
  • We arrange for each subproblem to be solved only once, saving its solution.
  • Dynamic programming uses additional memory to save computation time.
  • A dynamic-programming approach runs in polynomial time when the number of distinct subproblems involved is polynomial in the input size.
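A minimal sketch of this idea, using Fibonacci numbers as the stand-in recursion (the example problem is my choice, not from the cards): the naive recursion is exponential, while saving each of the n + 1 distinct subproblems once makes it linear.

```python
from functools import lru_cache

# Naive recursion: recomputes fib(k) exponentially many times.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Same recursion, but each subproblem is solved once and its
# answer cached; there are only n + 1 distinct subproblems.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025
```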
4
Q

There are usually two equivalent ways to implement a dynamic-programming approach. What are they?

A

top-down with memoization

  • we write the procedure recursively in a natural manner, but modify it to save the result of each subproblem (usually in an array).
  • this method first checks whether it has previously solved the subproblem. If so, it returns the saved value; otherwise it computes it.

bottom-up method

  • this method is derived from the notion that solving any particular subproblem depends only on solving “smaller” subproblems.
  • we sort the subproblems by size and solve them in size order, smallest first. When solving a particular subproblem, we have already solved all of the smaller subproblems its solution depends upon and saved their solutions.
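The two implementations above can be sketched side by side; here I use binomial coefficients as the example problem (my choice of illustration, not from the cards):

```python
# Top-down with memoization: natural recursion plus a cache.
def binom_td(n, k, memo=None):
    if memo is None:
        memo = {}
    if k == 0 or k == n:
        return 1
    if (n, k) not in memo:
        memo[(n, k)] = binom_td(n - 1, k - 1, memo) + binom_td(n - 1, k, memo)
    return memo[(n, k)]

# Bottom-up: solve subproblems in increasing order of n, so every
# "smaller" subproblem is already in the table when it is needed.
def binom_bu(n, k):
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            C[i][j] = 1 if j == 0 or j == i else C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binom_td(10, 4), binom_bu(10, 4))  # 210 210
```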
5
Q

Rod cutting problem

can you describe the recursive algorithm, which runs in exponential time?

A
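A minimal Python sketch of the naive recursion (the price table `p` is a sample in the CLRS style, not data from the cards):

```python
def cut_rod(p, n):
    """Return the maximum revenue for a rod of length n.

    p[i] is the price of a piece of length i (p[0] = 0 is a sentinel).
    Tries every first-cut length i and recurses on the remainder;
    without memoization this re-solves the same lengths over and over,
    giving exponential running time.
    """
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

# Sample price table, lengths 0..10:
p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(cut_rod(p, 4))  # 10 (cut into 2 + 2)
```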
6
Q

Rod cutting problem

Describe the top-down algorithm with memoization

A
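A sketch of the memoized version, following the MEMOIZED-CUT-ROD pattern from CLRS (the price table is the same sample as before):

```python
def memoized_cut_rod(p, n):
    # r[i] holds the best revenue found so far for length i
    # (-inf marks a subproblem that has not been solved yet).
    r = [float("-inf")] * (n + 1)
    return memoized_cut_rod_aux(p, n, r)

def memoized_cut_rod_aux(p, n, r):
    if r[n] >= 0:               # already solved: return the saved value
        return r[n]
    if n == 0:
        q = 0
    else:
        q = float("-inf")
        for i in range(1, n + 1):
            q = max(q, p[i] + memoized_cut_rod_aux(p, n - i, r))
    r[n] = q                    # save the answer before returning
    return q

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(memoized_cut_rod(p, 7))  # 18
```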
7
Q

Rod cutting problem

describe the bottom-up algorithm

A
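A sketch of the bottom-up version (same sample price table): subproblems are solved in increasing length, so every smaller answer is already in the table.

```python
def bottom_up_cut_rod(p, n):
    # r[j] = maximum revenue for a rod of length j.
    r = [0] * (n + 1)
    for j in range(1, n + 1):          # smallest subproblem first
        q = float("-inf")
        for i in range(1, j + 1):      # first-cut length
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]                        # two nested loops: Theta(n^2)

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(bottom_up_cut_rod(p, 10))  # 30
```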
8
Q
  • Rod cutting problem
  • describe the extended bottom-up algorithm, where we wish to print the solution.
A
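A sketch of the extended version: during step 3 we additionally record, for each length, the size of the first piece in an optimal cut, and step 4 just walks that record (same sample price table).

```python
def extended_bottom_up_cut_rod(p, n):
    # r[j] = optimal revenue; s[j] = size of the first piece in an
    # optimal cut of a rod of length j (the extra info kept in step 3).
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if p[i] + r[j - i] > q:
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

def print_cut_rod_solution(p, n):
    r, s = extended_bottom_up_cut_rod(p, n)
    while n > 0:               # peel off the recorded first piece each time
        print(s[n])
        n -= s[n]

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print_cut_rod_solution(p, 7)  # prints 1 then 6
```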
9
Q

What are the steps in solving a problem using dynamic programming?

A
10
Q

Matrix-chain multiplication

give a formal definition of the problem

A
11
Q

Matrix-chain multiplication

describe the subproblems and OPT

A
12
Q

Matrix-chain multiplication

describe the formula

A
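One standard way to write the formula (assuming, as in CLRS, that matrix A_i has dimensions p_{i-1} × p_i and OPT[i,j] is the minimum number of scalar multiplications needed to compute the product A_i ⋯ A_j):

```latex
\[
OPT[i,j] =
\begin{cases}
0 & \text{if } i = j,\\[2pt]
\displaystyle\min_{i \le k < j}
  \bigl\{\, OPT[i,k] + OPT[k+1,j] + p_{i-1}\, p_k\, p_j \,\bigr\}
  & \text{if } i < j.
\end{cases}
\]
```

The term p_{i-1} p_k p_j is the cost of the final multiplication of the two subproducts A_i ⋯ A_k and A_{k+1} ⋯ A_j.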
13
Q

Matrix-chain problem

What are the base cases?

A
  • The base cases are subproblems which admit an immediate result.
  • In the matrix-chain problem, every instance with only one matrix is a base case. There is only one solution, whose value is 0: no multiplication is needed.
14
Q
  • Matrix-chain problem
  • what is the proof for the formula?
  • notice how we start with OPT[i,j] and, using definitions (and a theorem), end up with a recursive definition, as in the formula.
A
15
Q

Matrix-Chain

describe the top-down solution

A
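A minimal top-down sketch (the dimension list is the standard CLRS sample instance; the helper name `lookup` is mine):

```python
import math

def matrix_chain_memo(p):
    # Chain of n matrices; A_i has dimensions p[i-1] x p[i].
    n = len(p) - 1
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]  # inf = not solved yet

    def lookup(i, j):
        if m[i][j] < math.inf:      # already solved: return saved value
            return m[i][j]
        if i == j:
            m[i][j] = 0             # base case: one matrix, no multiplication
        else:
            for k in range(i, j):   # try every split point
                m[i][j] = min(m[i][j],
                              lookup(i, k) + lookup(k + 1, j)
                              + p[i - 1] * p[k] * p[j])
        return m[i][j]

    return lookup(1, n)

# Sample dimensions: 30x35, 35x15, 15x5, 5x10, 10x20, 20x25
print(matrix_chain_memo([30, 35, 15, 5, 10, 20, 25]))  # 15125
```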
16
Q

Matrix-chain

describe the bottom-up algorithm

A
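A bottom-up sketch in the MATRIX-CHAIN-ORDER style (same CLRS sample instance): subproblems are ordered by chain length, so every shorter chain's cost is already in the table.

```python
import math

def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j] = optimal split k
    for l in range(2, n + 1):                  # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s                                # three nested loops: O(n^3)

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])  # 15125
```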
17
Q

Matrix-chain

can you recall the table-sketch for the matrix-chain problem?

A
18
Q

What do we need to prove in the bottom-up algorithm?

A
  1. prove that by the time the algorithm encounters a new subproblem, it has already computed all the subsubproblems that subproblem depends on
  2. prove that the computation of a subproblem returns the value of OPT(subproblem) as defined in the formula.
19
Q
  • Matrix-Chain
  • describe the reconstruction algorithm
A
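A sketch of reconstruction from the split table (the table construction is repeated here so the example is self-contained; `optimal_parens` is my name for the PRINT-OPTIMAL-PARENS idea):

```python
import math

def matrix_chain_order(p):
    # Fills m (costs) and s (split points); s is what reconstruction needs.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def optimal_parens(s, i, j):
    # s[i][j] is the k at which an optimal solution splits A_i..A_j;
    # recurse on both halves to rebuild the full parenthesization.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(optimal_parens(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))
```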
20
Q

The knapsack problem

describe the problem

A
21
Q

The knapsack problem

define the set of solutions for a typical subproblem and their OPT

A
22
Q

The knapsack problem

divide the set of solutions into subsets

A
23
Q

The knapsack problem

describe the structure formula

A
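One common way to write the structure formula (the notation is an assumption about the course's conventions: item i has value v_i and weight w_i, and OPT(i, c) is the best value achievable using items 1..i with remaining capacity c):

```latex
\[
OPT(i, c) =
\begin{cases}
0 & \text{if } i = 0,\\[2pt]
OPT(i-1,\, c) & \text{if } w_i > c,\\[2pt]
\max\bigl\{\, OPT(i-1,\, c),\; v_i + OPT(i-1,\, c - w_i) \,\bigr\}
  & \text{otherwise.}
\end{cases}
\]
\]
```

The two arguments of the max correspond to the two subsets of solutions: those that exclude item i and those that include it.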
24
Q

The knapsack problem

  • which two theorems do we use in order to prove the formula?
  • Tip: recall the formula; OPT is divided into cases. What is not trivial and needs to be explained?
A
25
Q

The knapsack problem

  • describe the proof for the 2nd theorem.
  • recall: normally we prove by defining two sets and showing equality between them.
A
26
Q

The knapsack problem

describe the proof for the formula structure

A
27
Q

The knapsack problem

describe the iterative algorithm

A
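A minimal iterative sketch filling the OPT table row by row (the instance `v`, `w` is a made-up sample; index 0 is padding so items are 1-indexed):

```python
def knapsack(v, w, W):
    # OPT[i][c] = best value achievable with items 1..i and capacity c.
    n = len(v) - 1
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            if w[i] > c:                      # item i does not fit
                OPT[i][c] = OPT[i - 1][c]
            else:                             # best of skipping or taking item i
                OPT[i][c] = max(OPT[i - 1][c],
                                v[i] + OPT[i - 1][c - w[i]])
    return OPT

v = [0, 1, 6, 18, 22, 28]   # sample values, index 0 is padding
w = [0, 1, 2, 5, 6, 7]      # sample weights
OPT = knapsack(v, w, 11)
print(OPT[5][11])  # 40 (take items 3 and 4)
```

The table has (n + 1)(W + 1) entries and each takes O(1) time, giving O(nW): polynomial in the *value* of W but not in its number of bits, i.e. pseudo-polynomial.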
28
Q

The knapsack problem

correctness proof

A
29
Q

The knapsack problem

The run-time of the iterative algorithm is quite bizarre. Describe it

A
30
Q

The knapsack problem

describe the reconstruction algorithm

A
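A sketch of one standard reconstruction (the table construction is repeated so the example is self-contained; the rule "item i was taken iff the table value changed from row i-1 to row i" traces one optimal solution):

```python
def knapsack_table(v, w, W):
    # OPT[i][c] = best value achievable with items 1..i and capacity c.
    n = len(v) - 1
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            OPT[i][c] = OPT[i - 1][c]
            if w[i] <= c:
                OPT[i][c] = max(OPT[i][c], v[i] + OPT[i - 1][c - w[i]])
    return OPT

def reconstruct(OPT, v, w, W):
    # Walk the table from the last item down: if OPT[i][c] differs from
    # OPT[i-1][c], item i must be in the solution we are tracing; take it
    # and reduce the remaining capacity accordingly.
    items, c = [], W
    for i in range(len(v) - 1, 0, -1):
        if OPT[i][c] != OPT[i - 1][c]:
            items.append(i)
            c -= w[i]
    return items

v = [0, 1, 6, 18, 22, 28]   # same made-up instance, index 0 is padding
w = [0, 1, 2, 5, 6, 7]
OPT = knapsack_table(v, w, 11)
print(reconstruct(OPT, v, w, 11))  # [4, 3]
```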
31
Q

The knapsack problem

what is the proof for the reconstruction algorithm?

A
32
Q

Optimal substructure varies across problem domains in two ways, what are they?

A
  1. How many subproblems an optimal solution to the original problem uses
  2. How many choices we have in determining which subproblems to use in an optimal solution.
33
Q

The running time of a dynamic-programming algorithm depends on the product of two factors:

A
  • The number of subproblems overall
  • How many choices we look at for each subproblem
34
Q

If subproblems are not independent, we cannot obtain optimal substructure. Give an example of dependence of subproblems in the unweighted longest simple path problem

A

Assume we decompose the longest simple path from s to v, P = {s,…,k,…,v}, into two paths:

  • longest simple path from s to k
  • longest simple path from k to v
  • Now, it might happen that both the first path and the second pass through some vertex t, creating a cycle. The 2nd subproblem requires that the 1st not use any of the vertices on its path. Thus the two subproblems are dependent.
  • With shortest paths this cannot happen.
  • Consider the path P = {s,…,k,…,v} and its two subpaths:
  • shortest path from s to k
  • shortest path from k to v
  • Assume both the shortest path from s to k and the shortest path from k to v pass through some vertex t.
  • Then we could go from s to t and then from t to v directly, skipping the detour through k; that path is shorter, contradicting that P is a shortest path.
35
Q

Important point about “transforming” a dynamic-programming problem into one that cannot be solved with dynamic programming.

A

The main point is that we turned the subproblems from being independent into being dependent.