Harvard University Dynamic Programming Worksheet

Practice Set 3: Solutions
Problem 1:
Bubble-down(i)
if i ≤ ⌊A.heapsize/2⌋                      #make sure this is not a leaf
    if 2i + 1 ≤ A.heapsize                 #make sure that there are two children
        if A[2i] > A[2i+1] and A[2i] > A[i]    #the left child is larger than the parent and the right child
            Swap(A[i], A[2i])
            Bubble-down(2i)
        else if A[2i+1] > A[i]             #the right child is larger than the parent
            Swap(A[i], A[2i+1])
            Bubble-down(2i+1)
    else                                   #there is only one child
        if A[2i] > A[i]
            Swap(A[i], A[2i])

Bubble-up(i)
if i > 1
    if A[i] > A[⌊i/2⌋]
        Swap(A[i], A[⌊i/2⌋])
        Bubble-up(⌊i/2⌋)
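For concreteness, here is a minimal runnable Python sketch of the two procedures (written iteratively rather than recursively; it assumes the heap occupies A[1..heapsize] with index 0 unused, matching the 1-indexed pseudocode above):

    # Python sketch of Bubble-down and Bubble-up for a max-heap.
    # Assumption: the heap lives in A[1..heapsize]; A[0] is unused padding.
    def bubble_down(A, i, heapsize):
        while 2 * i <= heapsize:             # stop once i is a leaf
            largest = 2 * i                  # left child
            if 2 * i + 1 <= heapsize and A[2 * i + 1] > A[largest]:
                largest = 2 * i + 1          # right child is bigger
            if A[largest] <= A[i]:
                break                        # heap property already holds
            A[i], A[largest] = A[largest], A[i]   # swap with the larger child
            i = largest

    def bubble_up(A, i):
        while i > 1 and A[i] > A[i // 2]:    # parent is at floor(i/2)
            A[i], A[i // 2] = A[i // 2], A[i]
            i //= 2

    A = [None, 3, 16, 10, 8, 7, 9, 1]        # A[1] violates the heap property
    bubble_down(A, 1, 7)
    print(A)                                 # [None, 16, 8, 10, 3, 7, 9, 1]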
Problem 2
• Inserting into a heap is similar to the step carried out in the iterative heap building method. We simply add the new
element to the array at the next available index, increase the heapsize, and perform a bubble-up. Since the bubble-up
operation takes time O(log n), the insertion runs in time O(log n). Suppose the new element to be inserted is k:
A.heapsize ++
A[A.heapsize] = k
Bubble-up(A,A.heapsize)
• Below is the pseudocode for BiggerThan(A,x,i). The parameter i is necessary in order to efficiently implement a
recursive algorithm. The solution works by examining the value at A [i]. If it is larger than x, we print out the value.
Then we make a recursive call to the two children of the node at position i. When a node is discovered whose value is
smaller than x, no recursive calls are made, because we know that all values in the subtree are out of range.
BiggerThan(A,x,i)
if i ≤ A.heapsize              #only continue if index i is in the heap A
    if A[i] > x                #if the key is smaller than x, no recursive calls are necessary
        print A[i]
        BiggerThan(A,x,2i)     #repeat for left child
        BiggerThan(A,x,2i+1)   #repeat for right child
In the worst case, the algorithm above searches through both subtrees at every node. The subtrees are not exactly the same size, but we will approximate them here as about n/2 nodes in each subtree. In this case, the recurrence for the runtime is T(n) = 2T(n/2) + c. The solution to this recurrence can be determined using the master method, where k = log_2 2 = 1, n^k = n, and f(n) = c. Therefore T(n) is Θ(n).
In the best-case, the algorithm may return in constant time. If the root node is smaller than x, then no recursive calls
are made. Therefore T (n) = O(1) in the best case.
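A minimal Python sketch of BiggerThan under the same 1-indexed convention (the sample heap is hypothetical):

    def bigger_than(A, x, i, heapsize):
        if i <= heapsize and A[i] > x:       # prune: everything below A[i] is <= A[i]
            print(A[i])
            bigger_than(A, x, 2 * i, heapsize)
            bigger_than(A, x, 2 * i + 1, heapsize)

    A = [None, 16, 8, 10, 3, 7, 9, 1]
    bigger_than(A, 6, 1, 7)                  # prints 16, 8, 7, 10, 9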
Problem 3:
This question demonstrates that although max-heaps are excellent for finding the max and updating values in the tree, they are not very efficient when it comes to finding the minimum. A description of the algorithm is given below. The algorithm takes as input the heap A rooted at index i. It recursively finds the minimum of the subheaps rooted at each child, and compares those two results with the value at index i.
Find-min(A,i)
if i > A.heapsize
    return infinity
m1 = Find-min(A,2i)
m2 = Find-min(A,2i+1)
return min(m1,m2, A[i])
The runtime of this algorithm is T(n) = 2T(n/2) + c in every case, which has solution Θ(n). This can be verified by the master method, where n^k = n and f(n) = c, in which case the dominant function is n, and therefore the runtime is Θ(n).
Notice that this is exactly the same runtime if we had just run a brute-force algorithm and looped through all the elements
in the array looking for the minimum.
If we wanted to efficiently retrieve (and possibly delete) the maximum and the minimum, one option might be to maintain
concurrently both a max heap and a min heap. This works well for retrieval and insertion, since the operations can be done
in time O(1) and time O(log n) respectively. However, if we would like to delete the maximum element, then it must be
deleted from both heaps. The deletion from the max heap runs in time O(log n), but the deletion from the min-heap only
runs in time O(log n) if we can locate the element efficiently. Normally it takes O(n) to search for an element in an array.
This seems like bad news. There are two possible solutions to this problem:
1. The elements of the heap are objects that have attributes that point to their corresponding element in the other heap.
2. A type of heap called a Max-Min heap, in which the nodes at even levels are smaller than all their descendants, and the nodes at odd levels are larger than all their descendants. It combines both types of heaps into one.
Problem 4
• The second-largest element must be in one of the children of the root. Therefore, the second-largest element is at A[2]
or A[3]. The third-largest element has two values that are larger than it. Therefore, it could be a child of the root (in
A[2] or A[3]) or it could be a grand-child of the root (in either A[4] or A[5] or A[6] or A[7]).
• In order to place the second-largest element in A[2], we may have to swap it with A[3]. However, this could result in an
invalid heap. In order to repair the heap after the swap, we simply bubble-down from A[3] after the swap, since it may
have a child that is larger than A[3]. Next, in order to place the third-largest element in A[3], we may have to swap it with one of A[4], A[5], A[6] or A[7]. After the swap, we need to bubble-down from the index i = 4, 5, 6 or 7.
The pseudo-code below performs a constant number of operations in addition to Bubble-down, which runs in time
O(log n) in the worst case. Therefore the overall runtime is O(log n).
if A[2] < A[3]                 #swap second-largest element into place
    Swap A[2] and A[3]
    Bubble-down(A, 3)          #repair subheap at index 3
k = 3
if A[4] > A[k]
    k = 4
if A[5] > A[k]
    k = 5
if A[6] > A[k]
    k = 6
if A[7] > A[k]
    k = 7
Swap A[3] and A[k]             #swap third-largest element into place
Bubble-down(A, k)              #repair subheap at index k
Problem 5
If we build a heap where each internal node has 3 children (instead of just 2), then the array would work as follows:
A[1] contains the root.
The parent of node i is at index ⌊(i + 1)/3⌋.
The three children of the node at position i are A[3i − 1], A[3i] and A[3i + 1].
The height of this tree is Θ(log_3 n); in fact it is exactly ⌊log_3(2n)⌋.
The bubble-up procedure works in the same way. We swap a node with its parent if and only if the parent is smaller.
The bubble-down procedure would also be the same. We bubble-down as long as the current node has a child that is larger.
We swap with the largest of the three children. The height of this new heap is still O(log n). Therefore both bubble up
and bubble down still run in time O(log n). Using 3 children instead of 2 doesn’t change the asymptotic runtime of the
bubble-down or bubble-up procedures.
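A small Python sketch of the index arithmetic and bubble-down for this 3-child layout (the helper names are our own, not from class):

    def parent(i):
        return (i + 1) // 3                  # floor((i+1)/3)

    def children(i, heapsize):
        return [c for c in (3*i - 1, 3*i, 3*i + 1) if c <= heapsize]

    def bubble_down_3ary(A, i, heapsize):
        while True:
            kids = children(i, heapsize)
            if not kids:
                return                       # i is a leaf
            largest = max(kids, key=lambda c: A[c])
            if A[largest] <= A[i]:
                return                       # heap property holds
            A[i], A[largest] = A[largest], A[i]   # swap with the largest child
            i = largest

    A = [None, 2, 9, 7, 8]                   # node 1 has children at indices 2, 3, 4
    bubble_down_3ary(A, 1, 4)
    print(A)                                 # [None, 9, 2, 7, 8]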
Problem 6:
• An array sorted in decreasing order represents a max-heap.
However, the converse is not true. A max-heap stored in an array is not necessarily in sorted order.
• Iterative method: Suppose the original input array is sorted in decreasing order. As each new leaf is inserted, there
is no work to be done. The leaf is already in its final position, and no swaps are made with the parent node. The heap
is built in O(n) time.
Suppose the original input array is sorted in increasing order. Then the iterative method will have to continue swapping
each new leaf all the way up to the root position, taking time O(log n) for each insert. This represents the worst case of
the iterative method. The overall run time is O(n log n), since there are a total of n nodes inserted to build the heap.
Bottom-up method:
Suppose the original input array is sorted in decreasing order. Then the array is already a heap. The bottom-up method
would go through each node and verify that it is the max of its children. No swaps would be done. The algorithm
would take O(n) time.
Suppose the original input array is sorted in increasing order. Then the bottom-up method will need to carry out a
Bubble-down operation all the way down to the leaf during each iteration of the bottom-up procedure. This represents
the worst-case run time of the bottom-up method, and has runtime O(n) as shown in class.
Problem 7:
The partitioning algorithm operates as shown in our class video, by using an external array. When the partitioning is
complete, we call ArrayCopy(B,A,s,f), which copies the elements from B[1 . . . n] into array A[s . . . f].
Partition(A,s,f)
p = Random(s,f)                  #a random index between s and f inclusive
n = f − s + 1
initialize B[1 . . . n]
i = 1, j = n                     #index i inserts at the left end of B, index j at the right end
for count = s to f:              #loop through the elements of A and insert them into B
    if count ≠ p
        if A[count] < A[p]
            B[i] = A[count]
            i = i + 1
        else
            B[j] = A[count]
            j = j − 1
B[i] = A[p]                      #insert the pivot element into its correct position
ArrayCopy(B,A,s,f)               #copy B into A
return j                         #return the rank of the pivot

Problem 8:
Recall that the Partition(A,s,f) algorithm from Problem 7 returns the rank of the randomly chosen pivot (not the exact index).
Randomized-Select(A,s,f,k)
if s = f and k = 1               #if the array has size 1 and the rank is 1, return the first element
    return A[s]
else
    p = Partition(A,s,f)         #p is the RANK of the pivot element (not the array index)
    if p = k
        return A[p + s − 1]      #return the pivot element by adjusting the rank p to the start index
    else if k < p
        return Randomized-Select(A, s, s + p − 2, k)     #recursive call to the left subarray
    else
        return Randomized-Select(A, s + p, f, k − p)     #recursive call to the right subarray

Problem 9:
• Given an array of size n (where n is odd), we can use the Select algorithm to determine which element is the median: its rank is (n + 1)/2. Once we have this element, we can loop through the array and output all elements that are larger than the median. The Select algorithm runs in Θ(n) time, so the entire algorithm runs in O(n).
k = Select((n + 1)/2)
for i = 1 to n
    if A[i] > k
        Print A[i]
• In order to print out the largest 10 elements, we can run the Select algorithm to determine the element of rank n − 9.
Once we have this element, we can loop through the array and output all elements that are larger than that element.
k = Select(n − 9)
for i = 1 to n
if A[i] ≥ k
Print A[i]
The two main steps are the same as in the previous algorithm, and therefore the runtime is the same.
• In order to print out the largest n/4 elements, we can run the Select algorithm to determine the element of rank
n − n/4 + 1. Once we have this element, we can loop through the array and output all elements that are larger than
that element.
k = Select(3n/4 + 1)
for i = 1 to n
if A[i] ≥ k
Print A[i]
The two main steps are the same as in the previous algorithm, and therefore the runtime is the same.
Note that all three options have the same runtime: the Select algorithm runs in Θ(n) and the for-loop runs in time O(n), so each algorithm runs in time Θ(n). However, option 2 (the largest 10 elements) is the only one that could also be solved in time O(n) using brute force.
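Problems 7 and 8 fit together as partition-based selection, and Problem 9 builds on them. Below is a runnable Python sketch of Partition and Randomized-Select following the pseudocode above; it uses 0-indexed array positions but keeps the 1-indexed ranks, and assumes distinct elements as in class:

    import random

    def partition(A, s, f):
        p = random.randint(s, f)             # random pivot index, inclusive
        pivot = A[p]
        smaller, larger = [], []
        for count in range(s, f + 1):        # distribute into the auxiliary lists
            if count != p:
                (smaller if A[count] < pivot else larger).append(A[count])
        rank = len(smaller) + 1              # pivot's rank within A[s..f]
        A[s:f + 1] = smaller + [pivot] + larger   # copy back; pivot is in place
        return rank

    def randomized_select(A, s, f, k):
        # Returns the element of rank k within A[s..f].
        if s == f:
            return A[s]
        p = partition(A, s, f)
        if p == k:
            return A[p + s - 1]              # adjust rank p to the start index
        elif k < p:
            return randomized_select(A, s, s + p - 2, k)     # left subarray
        else:
            return randomized_select(A, s + p, f, k - p)     # right subarray

    A = [3, 6, 7, 15, 8, 1, 9, 2, 18, 13]
    print(randomized_select(A, 0, len(A) - 1, 4))   # 6, the 4th smallest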
Problem 10:
We demonstrate the three different approaches using k = 3 and the array:
{3, 6, 7, 15, 8, 1, 9, 2, 18, 13}
The three different approaches to solving this problem are:
Option 1: If we sort the entire array, using Mergesort for example, this takes time Θ(n log n).
1, 2, 3, 6, 7, 8, 9, 13, 15, 18
Then we simply output the first k items of the sorted list; for k = 3 these are 1, 2, 3. This step takes time Θ(k). The total runtime is therefore Θ(n log n).
Option 2: We could build a min-heap in time O(n). Then we could call Delete-Min exactly k times. Each call to Delete-Min takes time O(log n), so deleting the min k times takes O(k log n). The entire procedure (building the heap and deleting the mins) takes time O(k log n + n). Depending on the value of k, this results in a variety of runtimes. For example, if k = 3, then the runtime is O(log n + n) = O(n), but if k = √n, then the runtime is O(√n log n + n) = O(n).
Option 3: We could use the Select algorithm to find the element of rank k. This would take time O(n). For k = 3 for
example, the result is element 3. Next, we would loop through the array to find all numbers that are at most 3. This takes
time O(n) and results in the set:
{3, 1, 2}
Finally we sort this set using Merge-sort, resulting in the final output {1, 2, 3}. Since there are only k items in this set, this step takes time O(k log k). Thus the total runtime of this third option is O(n + k log k). Again, depending on the value of k, this results in different runtimes. If k = 3, the runtime is O(n). If k = √n, the runtime is O(n + √n log √n) = O(n).
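The three options can be sketched with Python's standard library; note that heapq.nsmallest is only a convenient stand-in for the Select algorithm here (it does not run in Θ(n)):

    import heapq

    A = [3, 6, 7, 15, 8, 1, 9, 2, 18, 13]
    k = 3

    # Option 1: sort everything, then take the first k items -- Theta(n log n).
    opt1 = sorted(A)[:k]

    # Option 2: build a min-heap in O(n), then delete the min k times -- O(n + k log n).
    heap = A.copy()
    heapq.heapify(heap)                      # bottom-up heap build, O(n)
    opt2 = [heapq.heappop(heap) for _ in range(k)]

    # Option 3: find the element of rank k, keep everything at most that
    # value, then sort just those k items -- O(n + k log k) with true Select.
    kth = heapq.nsmallest(k, A)[-1]          # stand-in for Select
    opt3 = sorted(x for x in A if x <= kth)

    print(opt1, opt2, opt3)                  # each prints [1, 2, 3]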
Problem 11:


• If we divide our numbers into groups of size √n, then there would be exactly √n groups. Recall that the first step of the Select algorithm is to find the median of each group. If we find the median of each group using insertion sort (as in the original algorithm), then this takes time O((√n)^2) = O(n) per group, and since there are a total of √n groups, sorting all of them takes √n · O(n) = O(n^(3/2)). If we instead find the median of each group using mergesort, then sorting each group takes time Θ(√n log √n), but again since there are √n groups, the total time is √n · Θ(√n log √n) = Θ(n log n). Unfortunately, regardless of which sorting algorithm we use, we cannot find all the group medians (and hence the median of the medians) in O(n) time. Therefore this algorithm is not as efficient as the Select algorithm from class, which runs in time O(n).
• If we used groups of 3 instead of groups of 5:
1. We can find the median of each mini-group using Insertion-Sort, in constant time per group. We repeat this for all n/3 groups. Therefore all medians are found in time O(n).
2. Next we make a recursive call to find the median of medians (called x), which takes time T(n/3).
3. Next, we partition the elements into those that are larger than x and those that are smaller than x. This takes time Θ(n).
4. Finally we make a recursive call on one of the above sets. At least half of the n/3 group medians are at least x, and each such group contributes at least 2 elements that are at least x, so at least (1/2) · (n/3) · 2 = n/3 elements are guaranteed to be excluded from the recursive call. Thus the worst-case scenario is a recursive call on a set of size at most 2n/3, which takes time T(2n/3).
The worst-case runtime of this recursive algorithm is:
T(n) = T(n/3) + T(2n/3) + cn
We can show by the substitution method that this is not O(n); it is in fact Ω(n log n).
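A sketch of that substitution argument: guess T(n) ≥ dn log n for all smaller inputs and substitute:
T(n) = T(n/3) + T(2n/3) + cn
≥ d(n/3) log(n/3) + d(2n/3) log(2n/3) + cn
= dn log n − dn(log 3 − (2/3) log 2) + cn
≥ dn log n, whenever d ≤ c/(log 3 − (2/3) log 2).
Therefore T(n) = Ω(n log n), and the groups-of-3 variant cannot run in linear time.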
Problem 12
The worst case for Randomized-Select occurs when the algorithm picks the maximum element as the random pivot. In this case the recursive call is made on an array of size n − 1, and the runtime recurrence is T(n) = T(n − 1) + cn. We can show this has solution O(n^2):
Goal: show T(n) ≤ dn^2.
Assume: T(n − 1) ≤ d(n − 1)^2.
Substitute:
T(n) = T(n − 1) + cn
≤ d(n − 1)^2 + cn
= d(n^2 − 2n + 1) + cn
= dn^2 − 2dn + d + cn
≤ dn^2 − 2dn + dn + cn
= dn^2 − n(d − c)
≤ dn^2
where the last line is true as long as d ≥ c.
In the best-case scenario, the random pivot is actually the element k. In this case, we return the element of rank k after
the partitioning step. Therefore the algorithm runs in time O(n).
If the Randomized-Select algorithm were always able to select a pivot from the middle 75% of the array, then the recursive
call would be made to an array of size at most 7n/8.
The recurrence for the runtime is T(n) = T(7n/8) + cn, which can be solved using the master method: k = log_{8/7} 1 = 0, and so n^k = 1. Since f(n) = cn dominates, T(n) is Θ(n).
Practice Problem Set 3
Problem 1 (**not possible for student presentation)
We discussed both bubble-down and bubble-up in class. Write the pseudocode for
bubble-down(i) for index i where the heap is stored in array A. Assume that A.heapsize
refers to the number of elements in the heap. Repeat for bubble-up(i).
Problem 2
• How do you insert into a heap? Assume that the array that contains the elements is
not completely full. Write the pseudo-code and justify the runtime.
• Provide a recursive algorithm called BiggerThan(A,x,i) that takes as input a max-heap stored in A and a number x. The parameter i designates the index of the root of a sub-heap. The algorithm prints out all numbers from the subheap rooted
at index i that are larger than x. Determine the recurrence for the runtime of your
algorithm (in the worst-case) and determine the runtime in big-oh notation.
Problem 3
• Given a max-heap, suppose we would like to output the minimum element. Describe
a recursive algorithm that solves this problem and provide the pseudo-code. Write
a recurrence for the worst-case runtime T (n) and determine its big-oh notation. Is
your recursive algorithm asymptotically faster than just a brute-force algorithm?
• Suppose you wish to store your data so that you can efficiently remove either the
maximum element or the minimum element. What kind of data structure could you
use? Is it possible to perform both types of deletions in time O(log n)?
Problem 4
• Suppose we are given a max-heap stored in array A. Recall that the maximum
element is located at A[1]. Where can we find the second-largest element? What
about the third-largest element?
• Suppose we wish to update our heap so that the largest, second-largest and third-largest elements are stored in positions A[1], A[2], A[3] respectively. Describe an algorithm that takes as input a max-heap in array A and performs the necessary operations to meet the requirement, and include the pseudo-code. Justify why it runs
in time O(log n).
Problem 5
A heap can be built using 3 children for each node in the tree instead of 2. How would
you represent this heap in an array? Describe where to find the children and the parent of
node i. How would you update the bubble-down and bubble-up procedures?
Problem 6 (**not possible for student presentation)
• Array A contains elements that are sorted in decreasing order. Is this a heap?
• In class we saw two algorithms for building a heap. Describe the runtime of each of
these algorithms when the input array is already sorted in decreasing order.
Problem 7 (**not possible for student presentation)
The first step of the Randomized-Select algorithm from class is to partition
the elements about a randomly selected element in the array. Write the pseudo-code
for Partition(A,s,f) which partitions the elements of array A about a randomly selected
pivot, and returns the rank of the pivot. You may assume that you have access to a
function that selects a random number in some range; for example Random(a,b) returns
a random integer between the integers a and b inclusively. You may use additional space
during the execution of your algorithm if needed.
Problem 8: (**not possible for student presentation)
Write the pseudo-code for the Randomized-Select algorithm from class. Assume the
algorithm takes as input the array A, start and finish indices s and f and the rank k.
You may use the partition algorithm from question 7: Partition(A,s,f).
Problem 9:
Design an algorithm that takes as input an array of n numbers (where n is odd), and
outputs all values that are larger than the median. You may make use of the Select
algorithm from class. Justify why the runtime of your algorithm is O(n).
Suppose we only want to print out the largest 10 elements from the array. Does this
problem have the same asymptotic runtime?
Finally, what if we want to print out the top n/4 elements of the array. Does this
problem have the same asymptotic runtime?
Problem 10: (**not possible for student presentation)
Given as input n distinct numbers in an array, describe an algorithm that outputs the
k smallest elements in sorted order. For example, {3, 2, 4, 5, 6, 10, 7, 1, 8, 9} and k = 4
would result in {1, 2, 3, 4}. There are many options for how to approach this. Consider
the following 3 approaches and compare the runtime of each:
1. Sorting the entire array first
2. Building a heap
3. Using the Select algorithm to find the kth smallest element and then sorting only a
subset of the elements.
Problem 11:
Suppose we carry out a version of the Select algorithm where we divide into groups of size √n instead of groups of size 5. Does this alter the complexity of the algorithm?
Justify your answer. What if we use groups of size 3?
Problem 12:
Consider the Randomized-Select algorithm that picks a pivot randomly. Use the substitution method to derive the worst-case runtime for Randomized-Select. What is the
best-case runtime in big-oh notation?
Suppose a student writes a new magical Partition algorithm that is guaranteed to pick
a pivot that is in the middle 75% of the sorted array. If the Randomized-Select algorithm
from class now uses this new Partition algorithm, what is the worst-case runtime? Write
a recurrence for the worst-case runtime, T (n), and determine the big-oh value of T (n).
Why is this better than the original Randomized-Select algorithm?
Dynamic Programming

1 The Dynamic Programming Technique
The dynamic programming technique is a type of “divide-and-conquer” approach to solving problems
by combining subproblems. Recursive algorithms are also a type of divide-and-conquer algorithm. The fundamental difference between the two is:
• Recursive algorithms: typically divide the problem into disjoint subproblems.
• Dynamic programming: typically applies to situations where the subproblems overlap.
Furthermore, dynamic programming is usually applied to Optimization problems.
Optimization Problems:
• Many possible solutions exist
• Each solution has a value
• The goal is to find the solution with the optimal (maximum or minimum) value
The key to the dynamic programming technique for an optimization problem is to solve the smaller
subproblems first, and to store these values in a table. By storing the values in a table, we do not need
to re-compute them when we need them later. The larger optimization problem is built by combining the
smaller solutions.
Dynamic Programming technique:
• Identify the characteristic of the optimal solution
• Identify how to recursively define the optimum big problem in terms of the optimum sub-problems
• Compute the sub-problems, typically bottom-up, and store the values in a table
• Compute the optimal solution from the stored information
This lecture presents two main examples of dynamic programming: Rod-cutting and Longest common subsequence.
2 Rod-Cutting
Imagine we have a rod of length n and the opportunity to cut the rod into smaller pieces and sell them for
a certain price. A rod of length n = 4 could be cut up into pieces of length 1, 2, 3 or 4, and each segment
could be sold for the corresponding price: p1 , p2 , p3 or p4 . The total profit we make from this rod depends
on how we make the cuts. A rod of length 4, for example, could be cut in many different ways.
The total amount of money we make depends on the choices of our cuts and the total price we would
make in that case. Suppose our prices were as below:
Price: p1 = 2, p2 = 5, p3 = 3, p4 = 5
If we leave the rod as one piece of length 4 and don’t cut it, then we make p4 = $5. It may be that a
different cut-option from above results in a higher profit. Note that the order of the cuts does not matter,
we are simply interested in the segment sizes and the total price associated with those sizes. The different
possibilities for a rod of length 4 are shown below:
Length 4: 5
Lengths 1 and 3: 2 + 3 = 5
Lengths 2 and 2: 5 + 5 = 10
Lengths 1, 1, and 2: 2 + 2 + 5 = 9
Lengths 1, 1, 1 and 1: 2 + 2 + 2 + 2 = 8
The maximum amount of money we can make is $10, by cutting the rod into two pieces of length 2.
The above solution was found by simply verifying all possible cuts. For a rod of length n there are an
exponential number of different ways we can cut the rod. If we were to verify the profit for each of these
possibilities, the runtime of the algorithm would be exponential!
2.1 Relating subproblems to the optimal solution:
Given a rod of length n there are many ways that this rod could be cut. The goal is to determine how to make
those cuts in order to maximize profit. Whatever the optimal solution is, the rod must have a segment cut
off the front that can be referred to as the “first piece” of the optimal solution. In the example below, a
cut was made that resulted in two pieces of length two. The “first piece” has size 2.
The first cut defines the length of the first piece, and this piece could have length either 1, 2, 3, . . . , n.
Whatever the optimal solution is for a rod of length n, the first piece must have some particular length,
say k. Although we don’t know what k is without knowing the optimal solution, we do know that the
remaining section of length n − k must also be cut optimally:
It is this link between the optimal problem and the optimum subproblems that is fundamental to our
dynamic programming solution. We now show how to use this important relationship to solve the optimal
problem.
Let r(n) represent the maximum profit possible on a rod of length n. If the optimal solution resulted in
the first piece having length 3, then the rest of the rod of length n − 3 must also have been cut optimally.
This means that the optimal profit satisfies:
r(n) = p3 + r(n − 3)
Now of course we don’t actually know where the first cut is in the optimal solution. However, we do know
that the optimal solution represents the maximum profit, and so we could simply check all possibilities for
the length of the first piece. Therefore:
r(n) = max{pk + r(n − k) : 1 ≤ k ≤ n}
For rods of length 0, the profit is 0 and so we define r(0) = 0.
Let’s verify this formula for the example above with n = 4.
r(4) = max{p1 + r(3), p2 + r(2), p3 + r(1), p4 + r(0)}
In order to determine this maximum, we need to solve for r(3), r(2) and r(1). The same formula can be applied recursively:
r(3) = max{p1 + r(2), p2 + r(1), p3 + r(0)}
r(2) = max{p1 + r(1), p2 + r(0)}
r(1) = max{p1 + r(0)} = 2
Only the last equation above has an explicit value: r(1) = 2. This value represents the profit on a rod
of length 1. Once this value is found, it can be replaced in the above equations:
r(2) = max{2 + 2, 5 + 0} = 5
and so the maximum profit on a rod of length 2 is 5.
r(3) = max{2 + 5, 5 + 2, 3 + 0} = 7
r(4) = max{2 + 7, 5 + 5, 3 + 2, 5 + 0} = 10
Therefore the maximum profit is 10.
The next two subsections look at different ways of taking this recursive definition and turning it into
an algorithm. The first is a recursive approach, and the second is our dynamic programming approach.
2.2 A recursive attempt:
A recursive algorithm for the rod-cutting problem is based on the equation above. A call to CUT-ROD(n, p)
with price list p and rod length n will simply compute the maximum of all possible first cut options k, and
for each of those make a recursive call to CUT-ROD(n − k, p):
CUT-ROD(n, p)
if n = 0 return 0
max = 0
for i = 1 to n
newcut = p[i]+CUT-ROD(n − i, p)
if newcut > max
max = newcut
return max
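A direct Python transcription of this pseudocode (a sketch; the price list is 1-indexed by padding index 0):

    def cut_rod(n, p):
        if n == 0:
            return 0
        best = 0
        for i in range(1, n + 1):            # try every possible first-piece length
            best = max(best, p[i] + cut_rod(n - i, p))
        return best

    p = [0, 2, 5, 3, 5]                      # p[1..4] from the example above
    print(cut_rod(4, p))                     # 10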
This recursive algorithm is unfortunately very inefficient. CUT-ROD calls itself over and over again
with the same parameters, resulting in the same subproblems being computed several times. The figure
below is a recursion tree for the case n = 5. CUT-ROD(5) makes five recursive calls with input size 4, 3, 2, 1
and 0 (shown in blue). The call to CUT-ROD(4) in turn makes recursive calls with input size 3, 2, 1 and 0,
and the call to CUT-ROD(3) also makes recursive calls with input sizes 2, 1 and 0. We can see below that
several sections of this tree are repeated. In particular, there are three separate calls to CUT-ROD(2).
Each of these calls results in the same final value, however the algorithm does not keep track of the fact
that it has already computed CUT-ROD(2), and therefore it simply calls the same subproblem again.
The Runtime:
Let T (n) represent the number of calls to CUT-ROD when the initial parameter is n. When n = 0
there is only one call to the algorithm, since this represents the base case and no recursive calls are made.
If n ≥ 1 the algorithm executes a for-loop from i = 1 to i = n and each iteration results in a recursive call
to CUT-ROD(n-i). Therefore:
T (n) = 1 + T (n − 1) + T (n − 2) + . . . + T (0)
Using induction we can show that the solution to this equation is 2^n.
Assume T(k) = 2^k for all k < n. Show T(n) = 2^n:
T(n) = 1 + T(n − 1) + T(n − 2) + . . . + T(0) = 1 + 2^(n−1) + 2^(n−2) + . . . + 1 = 1 + (2^n − 1) = 2^n
The runtime of CUT-ROD(n) is 2^n, and it is therefore an exponential-time algorithm. This is an extremely inefficient runtime, even for today's computers. You may be able to run it on a rod of size 40, but for every increase of 1 in your rod length, the computer will take twice as long to complete. The next section demonstrates how dynamic programming drastically improves on the runtime of this recursive algorithm.

2.3 The Dynamic programming approach:
The idea behind dynamic programming is to create a table in which we store the values of CUT-ROD(i). This makes it possible for the algorithm to simply look up the values of CUT-ROD(i) for the subproblems once they have been computed. The result is that each call to CUT-ROD(i) for a specific i is only carried out once.
The table: Let r[0 . . . n] be an array that stores the optimal values for the CUT-ROD problem. Entry r[i] is the maximum profit possible for a rod of length i. For example, if r[4] = 10, then the optimal cuts on a rod of length 4 result in a profit of 10. Notice that in this algorithm, we assume we have an array that starts at 0, because we need to store the value r[0] = 0 for rods of length 0. This array does not store the actual cuts that result in that profit; it simply stores the best profit possible for a particular length. Using the same example as above, the array r[] will contain the values r[0] = 0, r[1] = 2, r[2] = 5, r[3] = 7, r[4] = 10 when the algorithm has completed.
Because solutions to larger problems depend on the optimal values of smaller problems, this array should be filled in a bottom-up manner (from small i towards larger i values). Once the entire array is filled, the final optimal value for the Cut-Rod problem on input of size n is stored in r[n]. Notice that each entry of the table can be filled using values that are already filled in, because we complete the table from left to right.
The dynamic programming algorithm shown below is called DYNAMIC-CUT-ROD(n, p), where n is the rod length and p is the price list. The procedure fills in the array r[] from left to right and returns the final optimal value in r[n].
The dynamic programming solution to rod-cutting:
• Create a new array r[0 . . . n] and set r[0] = 0.
• Fill in the entries of r[j] starting with j = 1 and finishing with j = n.
– The entry r[j] is set to the maximum of pk + r[j − k] over all k from k = 1 to j. Note that the values r[j − k] already exist in the table when computing r[j].
• The optimal revenue for cutting the rod of length n is found in r[n].
DYNAMIC-CUT-ROD(n, p)
Let r[0 . . . n] be an array
r[0] = 0
for j = 1 to n:
r[j] = 0
for k = 1 to j
if p[k] + r[j − k] > r[j]
r [j] = p[k] + r[j − k]
return r [n]
Runtime: The algorithm consists of an inner and an outer for-loop (just like Insertion-sort), resulting in a runtime of Θ(n^2). The analysis is identical.
2.4 Reconstructing a Solution
The algorithm above returns the maximum profit possible over all possible ways of cutting a rod of length
n. However, it does not actually return the segment sizes that result in that profit. This might seem
strange because of course practically, we would like to know both the maximum profit and also how to cut
the rod to achieve that profit.
A quick and simple update to the above algorithm allows us to keep track of the piece sizes. Let
s [0 . . . n] be a new array where s [i] stores the size of the first piece in the optimal solution for a rod of
length i. In our continued example from above, the values would be:
Rod length 0: r[0] = 0, s[0] = 0 (no pieces to cut)
Rod length 1: r[1] = 2, s[1] = 1 (first piece has size 1)
Rod length 2: r[2] = 5, s[2] = 2 (first piece has size 2)
Rod length 3: r[3] = 7, s[3] = 1 (first piece has size 1)
Rod length 4: r[4] = 10, s[4] = 2 (first piece has size 2)
The dynamic programming algorithm can be updated easily to also keep track of this information.
Whenever r [j] is updated in the inner for-loop, the algorithm has found a preferred way to cut the rod.
When this happens, we update s [j] to the size of the first piece that is the current best choice. When
the inner for-loop is complete, all possible first-cut sizes on a rod of length j have been verified, and s [j]
contains the size of the first cut that leads to the optimal value.
DYNAMIC-CUT-ROD(n, p)
Let r [0 . . . n] be an array. Let s [1 . . . n] be an array
r [0] = 0
for j = 1 to n:
r [j] = 0
for k = 1 to j
if p[k] + r[j − k] > r [j]
r [j] = p[k] + r[j − k]
s [j] = k
return r [n]
When the algorithm completes, the optimal value is stored in r [n]. Furthermore, the particular pieces
can be found by examining the size of the first cuts. In our example above, the final value s [4] = 2 indicates
the first piece has size 2. Thus our first segment has size 2. The length of the remaining rod is 4 − 2 = 2.
Next we look up s [2] which tells us the size of the first cut for a rod of length 2. From the above table,
this is 2. Therefore the second piece has size 2. Continuing in this way, we can determine the size of all
the cut segments that correspond to the optimal solution.
PRINT-PIECES(s, n)
j=n
while j > 0
print s [j]
j = j − s [j]
Therefore in O(n) time we can output the piece sizes that correspond to the optimal solution of the
rod-cutting problem.
Summary
• The recursive solution to Cut-Rod has a runtime of Θ(2^n). The algorithm calls itself over and
over again with the same parameters, resulting in the same subproblems being computed several
times.
• The dynamic programming solution for Cut-Rod runs in time O(n^2) for a rod of length n. It
uses O(n) space to store the values to the subproblems.
Dynamic Programming 2
The problem of comparing two strings in order to determine if they have similar characteristics is a
problem that has applications across biology, computer security, and artificial intelligence. In biology for
example, we study DNA sequences and search for sections that may have similarities.
In this lecture we study the problem of comparing strings based on their subsequences. A subsequence of
a string A is a sequence of characters that exists in A in that order. Some characters of A can be “skipped”
by the subsequence. The string A = XYGTWPYTGX has many possible subsequences, and one such possible subsequence is shown in blue in the figure below. Notice that the characters of the subsequence appear in A and in the same order. The sequence YPWX does not correspond to a subsequence of A.
The goal of this lecture is to provide an efficient algorithm for finding the longest common subsequence between two strings. In the example below, both strings contain the same subsequence YTWPY,
which has length 5. If there is no other longer subsequence, then this is in fact the longest common
subsequence between the two strings A and B.
We shall assume that the two strings in question can be represented as A = (a1 , a2 , . . . , am ) and
B = (b1 , b2 , . . . , bn ), where the values ai and bj represent the individual characters of the strings. The
lengths of the two initial strings do not have to be the same, therefore we let m be the length of A and n
be the length of B.
Let’s first think about why a brute-force algorithm would not be efficient. How many possible subsequences does a string have? Each character in the string can be either included or not included in the subsequence. So the number of possible subsequences for string A is 2^m, and the number of possible subsequences for string B is 2^n. Any brute-force procedure that tries to enumerate all the possible subsequences of each string would take exponential time, and that does not even include the time to compare the subsequences!
The longest common subsequence problem
In this section we present an efficient dynamic programming algorithm for the longest common subsequence
problem. Recall that the dynamic programming technique relies on the ability to define the optimum
problem in terms of the optimum sub-problems. Therefore we begin by trying to identify a relationship
between the longest common subsequence of two strings A and B, and an optimal subproblem.
Given the strings A and B, we can focus on the last character of each string and decide whether or not that character is included in the longest common subsequence.
Case 1: Same last character
The example below shows two strings A and B that have the same last character, X.
Suppose X were not part of the longest common subsequence. The longest subsequence would then
involve characters from the first parts of the strings, up to the second last character, as shown on the left
above. In this case, we could just extend this subsequence by the character X and get a sequence that is
one unit longer. Therefore it must be that the longest common subsequence includes the character X.
Case 2: Different last characters
If the last characters of A and B are different, then the longest common subsequence cannot contain
the last character from both strings. In the example below, the last character P from A and the last X
from B cannot both be included in the longest common subsequence. Instead, the longest subsequence is
the longest common subsequence that we can make either without the last P or without the last X. The
maximum of these two options represents the longest common subsequence in this case. From the figure
below, the optimal value is a subsequence of length 4.
These two cases above represent the key to the dynamic programming solution.
The dynamic programming solution
The above two cases indicate that the optimal solution to the larger problem (the entire strings) can be
found if we know the optimal values of smaller subproblems (shorter strings). The dynamic programming
technique relies on the fact that we can store the values of the subproblems so they can be referenced
by the algorithm when needed.
Our smaller subproblems in this case are based on finding the longest common subsequence within
shorter sections of A and B. For example, notice in case one above, we referenced the problem of the
longest common subsequence between the substring of A which was (a1 , a2 , . . . , am−1 ) and the substring
of B which was (b1 , b2 , . . . , bn−1 ). Therefore a dynamic programming table which stores the optimal value
for substrings of A and B would enable us to perform the look-ups that we need.
Since there are two strings, and therefore two possible substring lengths, we use a double array, C [i, j]
to store the optimal subproblem values. This double array starts at 0 because if either i = 0 or j = 0 then
the string has length 0, which means the subsequence length is 0.
The Table: C [0 . . . m, 0 . . . n]
• Given strings A = (a1 , a2 , . . . , am ) and B = (b1 , b2 , . . . , bn )
• C [i, j] stores the length of the maximum common subsequence between the strings (a1 , . . . , ai )
and (b1 , . . . , bj ).
• C [0, j] = 0 for all j and C [i, 0] = 0 for all i
• The last entry in the table, C [m, n], is the length of the longest common subsequence between
A and B
An example of the table C for the simple strings A = XYX and B = YXX is shown below:
The final value for the longest common subsequence between A and B is stored in C [3, 3]. Therefore
the longest common subsequence for these strings is 2.
It remains to show how a dynamic programming algorithm can fill the entries of the table efficiently,
and in such a way that each look-up to the table is to values that have previously been stored. The key is
to use Case 1 and Case 2 from above, which define the relationship between the optimal solution and
the smaller subproblems.
Recursively defining C [i, j]
Given strings A = (a1 , a2 , . . . , am ) and B = (b1 , b2 , . . . , bn ) we define C [i, j] in the following recursive
way:
• Case 1: If ai = bj then
C [i, j] = 1 + C [i − 1, j − 1]
• Case 2: If ai ≠ bj then
C [i, j] = max{C [i − 1, j] , C [i, j − 1]}
The final value: C [m, n] represents the longest common subsequence between A and B.
From above it is clear that the entry for C [i, j] depends on C [i − 1, j − 1] or C [i − 1, j] or C [i, j − 1].
Therefore the table must be filled in in such a way that all three of these entries are stored before C [i, j]
is computed. We can visualize this dependence as:
If we fill up the table row by row, from left to right, then each look-up in the table will be for values that
are already filled in.
The dynamic programming algorithm fills in this table row by row and left to right:
LCS(A,B)
Step 1: Initialize the first column of the table: for i = 0 to m, C [i, 0] = 0.
Initialize the first row of the table: for all j = 0 to n, C [0, j] = 0
Step 2: Loop through the table row by row from left to right.
for i = 1 to m
for j = 1 to n
if A[i] = B[j]
C [i, j] = C [i − 1, j − 1] + 1
else if C [i − 1, j] > C [i, j − 1]
C [i, j] = C [i − 1, j]
else C [i, j] = C [i, j − 1]
return C [m, n]
Runtime: The algorithm consists of a nested for-loop that iterates through a table of size mn. It
performs a constant number of operations per table entry. Therefore the runtime is O(mn).
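A Python sketch of this table-filling procedure (strings are 0-indexed in Python, so A[i − 1] plays the role of the i-th character; row 0 and column 0 of C stay at zero):

    def lcs_table(A, B):
        m, n = len(A), len(B)
        C = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if A[i - 1] == B[j - 1]:          # Case 1: same character
                    C[i][j] = C[i - 1][j - 1] + 1
                else:                             # Case 2: take the larger neighbour
                    C[i][j] = max(C[i - 1][j], C[i][j - 1])
        return C

    C = lcs_table("XYX", "YXX")
    print(C[3][3])               # 2, matching the example table above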
Reconstructing the subsequence:
The final entry in C [m, n] is the length of the longest common subsequence. But this value does not tell
us what that common subsequence actually is. In order to trace back the original subsequence, we need
to reconstruct the choices that were made during LCS(A,B).
In order to describe how this reconstruction is possible, we examine a larger example based on the
strings A = GT T CCT XXT X and B = CGXT XXT T GXGX. The completed dynamic programming
table is shown below:
A = GTTCCTXXTX (rows, i = 1 . . . 10); B = CGXTXXTTGXGX (columns, j = 1 . . . 12)

       j=0 j=1 j=2 j=3 j=4 j=5 j=6 j=7 j=8 j=9 j=10 j=11 j=12
i=0     0   0   0   0   0   0   0   0   0   0   0    0    0
i=1     0   0   1   1   1   1   1   1   1   1   1    1    1
i=2     0   0   1   1   2   2   2   2   2   2   2    2    2
i=3     0   0   1   1   2   2   2   3   3   3   3    3    3
i=4     0   1   1   1   2   2   2   3   3   3   3    3    3
i=5     0   1   1   1   2   2   2   3   3   3   3    3    3
i=6     0   1   1   1   2   2   2   3   4   4   4    4    4
i=7     0   1   1   2   2   3   3   3   4   4   5    5    5
i=8     0   1   1   2   2   3   4   4   4   4   5    5    6
i=9     0   1   1   2   3   3   4   5   5   5   5    5    6
i=10    0   1   1   2   3   4   4   5   5   5   6    6    6
From the above table we can see that the longest common subsequence has length C [10, 12] = 6. How
was this very last entry computed? Notice that the last characters of the strings A and B are the same,
and so C [10, 12] = 1 + C [9, 11]. Therefore the last entry of the table was computed by extending the
subsequence from C [9, 11] to include the last character X. Therefore the last character in our subsequence
is X. This “choice” by the algorithm corresponds to the value 6 being made by incrementing the value 5 at C[9, 11] by 1.
Continuing in this way, consider the entry C [9, 11] in the table. The characters A [9] and B [11] are not
the same, and thus the algorithm selects the maximum of C [8, 11] and C [9, 10]. In this case C [9, 10] is
chosen by default since there is a tie. This situation indicates that no matching character was found, and
therefore the longest common subsequence depends on previous values in the table; the trace-back simply moves to C[9, 10].
This pattern can be used to trace the characters of subsequence back through the table C.
PRINT-LCS(A,B,C)
i = m, j = n
while i > 0 and j > 0
if A [i] = B [j]
output A [i]
i = i − 1, j = j − 1
else if C [i − 1, j] > C [i, j − 1]
i=i−1
else j = j − 1
This outputs the characters in reverse order, from back to front. The runtime depends on the maximum
of m, n, and therefore it runs in time O(max{m, n}).
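A Python sketch of the trace-back; it repeats the table fill from the previous sketch so that it is self-contained:

    def lcs_table(A, B):
        m, n = len(A), len(B)
        C = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if A[i - 1] == B[j - 1]:
                    C[i][j] = C[i - 1][j - 1] + 1
                else:
                    C[i][j] = max(C[i - 1][j], C[i][j - 1])
        return C

    def print_lcs(A, B):
        C = lcs_table(A, B)
        i, j = len(A), len(B)
        chars = []
        while i > 0 and j > 0:
            if A[i - 1] == B[j - 1]:         # this character is in the LCS
                chars.append(A[i - 1])
                i, j = i - 1, j - 1
            elif C[i - 1][j] > C[i][j - 1]:
                i -= 1
            else:
                j -= 1
        print("".join(reversed(chars)))      # characters were found back to front

    print_lcs("GTTCCTXXTX", "CGXTXXTTGXGX")  # prints CTXXTX, length 6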
Practice Set 10: Solutions
Problem 1
Search through the upper diagonals of the table, beginning with the top right corner, until you find the first occurrence of a 1. The position of this 1 indicates a substring that is a palindrome of maximum length. Therefore, it remains only to output all the palindrome substrings with that length. The outer loop goes from k = n to k = 1. Note that n − k + 1 represents the number of entries in the diagonal. When k = n − 1 for example, there are 2 entries in the diagonal, and n − k + 1 = 2. The index i + k − 1 represents the indices of the rows in a specific diagonal. Again, when k = n − 1, the values we consider are L[1, 1 + (n − 1) − 1] = L[1, n − 1] and L[2, 2 + (n − 1) − 1] = L[2, n].
Initialize FoundMax = false.
for k = n to 1
    for i = 1 to n − k + 1
        if L[i, i + k − 1] = 1
            FoundMax = true
            Print substring s[i, i + k − 1]
    if FoundMax = true break
In the worst case, this searches through the entire upper triangle of the n × n table, which has approximately n^2/2 entries. Since each iteration of the loop above runs in constant time, the runtime is O(n^2).
Problem 2
In this problem, we need to keep track of which characters are “selected” to be part of the palindrome
subsequence. In the example table below, note that the length of the longest palindrome subsequence is
stored in P [1, n]. We can back-track through the table in such a way that we select the next subsequence
that contains the longest palindrome subsequence. We stop the search when we arrive at the diagonal.
Set CharsToPrint[i] = true for the character at index i if it is part of the longest palindrome subsequence:
Initialize array CharsToPrint[1 . . . n] with all false entries.
i = 1
j = n
while i < j
    if s[i] = s[j]
        CharsToPrint[i] = true
        CharsToPrint[j] = true
        i = i + 1
        j = j − 1
    else if P[i + 1, j] > P[i, j − 1]
        i = i + 1
    else
        j = j − 1
if i = j
    CharsToPrint[i] = true     #the middle character of an odd-length palindrome
Next, print out the characters of the Palindrome subsequence:
for i = 1 to n
if CharsToPrint[i] = T rue
Print s[i]
The above algorithm either increments i or decrements j at each iteration. Therefore the maximum number of iterations is O(n). Since each iteration performs a constant number of operations, the runtime is O(n).
In the example, the characters we should print are at indices 1, 2, 3, 5, 7, corresponding to the palindrome subsequence BACAB.
Problem 3
Brute force: We consider all possible subsets of weights, and for each subset we total the weights to determine if the sum is exactly T. There are 2^n possible subsets, and for each subset it takes time O(n) to sum the weights. Therefore the runtime of the brute-force approach is O(2^n · n).
Dynamic Programming solution: We refer to the set of selected weights as S. We begin by
formalizing the problem recursively. There are n items and let’s assume their weights are stored in the
array w [1 . . . n]. Consider the following two cases for the last weight wn :
Case 1: Weight wn is included in set S (possible only if wn is not too heavy). In this case, the remaining target weight is T − w [n].
Case 2: The last weight is not placed in S. In this case, the remaining target weight is still T.
Notice that this recursive relationship references looking up subproblems based on subsets of the weights,
and the target sum. Therefore, we build a table M [i, j] that stores the true/false values corresponding to
whether or not target weight j can be made using weights 1 . . . i.
Defining the table: M [0 . . . n, 0 . . . T ]
• Entry M [i, j] = true if there is a set of weights from 1 . . . i that can be selected to make a sum of j.
• If j = 0 then by selecting no weights we have found a valid solution: M [i, 0] = true
• If there are no weights to select from, then it is impossible to create any sum j > 0: M [0, j] = false
• If weight wi is too heavy (wi > j) then we must consider the set of weights 1 . . . (i − 1): set M [i, j] =
M [i − 1, j]
• Otherwise, we could either include or not include weight wi :
If M [i − 1, j] = true OR if M [i − 1, j − w [i]] = true then set M [i, j] = true
• If entry M [n, T ] = true then there is a selection of weights that have a total sum of T .
The table can be filled up in a row by row, left to right order since each recursive reference above is to
an entry that is either in a previous row or a previous column. The algorithm is described below:
WeightSet(w,T)
Step 1: Initialize all cells to false.
Initialize the first column of the table: for i = 0 to n set M [i, 0] = true
Initialize the first row of the table: for j = 1 to T set M [0, j] = f alse.
Step 2: Loop through the table row by row from left to right, filling in the entries using the
recurrence relationship above.
for i = 1 to n
for j = 1 to T
if w[i] > j
M [i, j] = M [i − 1, j]
else if M [i − 1, j] = true
M [i, j] = true
else if M [i − 1, j − w [i]] = true
M [i, j] = true
Selected[i, j] = true
Return M [n, T ]
Runtime: The algorithm runs through a table of size n·T and performs a constant number of operations
per cell. Therefore the total runtime is Θ(n · T ).
Output the subset: How do you find the subset that creates this sum? Trace back the true values
through the table. Below is an example for a table run on the input weights w = {3, 6, 2, 9, 4, 5} and
T = 10. The red arrows represent the path followed by the search below. Notice that whenever an item
is “chosen” the target sum is updated. The red circles represent the cases for which Selected [i, j] = true.
(You can also write the pseudocode without the use of Selected[].)
Initialize j = T and i = n
While j > 0 :
if Selected [i, j] is true:
Output w [i]
j = j − w [i]
Update i = i − 1
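A Python sketch of WeightSet together with the trace-back; this version reads the table M directly rather than keeping a separate Selected[] array, as suggested above:

    def weight_set(w, T):
        n = len(w)
        # M[i][j]: can some subset of the first i weights sum to exactly j?
        M = [[False] * (T + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            M[i][0] = True                   # the empty set sums to 0
        for i in range(1, n + 1):
            for j in range(1, T + 1):
                M[i][j] = M[i - 1][j]        # option: skip weight i
                if w[i - 1] <= j and M[i - 1][j - w[i - 1]]:
                    M[i][j] = True           # option: include weight i
        return M

    def trace_subset(w, T, M):
        subset, i, j = [], len(w), T
        while j > 0 and i > 0:
            if not M[i - 1][j]:              # weight i must have been used
                subset.append(w[i - 1])
                j -= w[i - 1]
            i -= 1
        return subset

    w, T = [3, 6, 2, 9, 4, 5], 10
    M = weight_set(w, T)
    print(M[len(w)][T])                      # True
    print(trace_subset(w, T, M))             # [4, 6], one subset summing to 10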
Recursive Solution: The recursive algorithm below takes as input the size of the weights w[] and
indices i and j, and returns TRUE if there is a subset of weights from w[1, . . . , i] that has total sum j. The
initial call to the recursive algorithm is RecursiveWeightSet(w, n, T ). The procedure follows the cases of
the dynamic programming solution above, but makes recursive calls instead of making look-ups in a table.
RecursiveWeightSet(w,i,j)
if j = 0
return true
else if i = 0
return false
else if w[i] > j
    return RecursiveWeightSet(w, i − 1, j)
else if RecursiveWeightSet(w, i − 1, j) = true
    return true
else if RecursiveWeightSet(w, i − 1, j − w[i]) = true
    return true
else
    return false
Problem 4
Assume that the set of weights is stored in array w [1 . . . n], and their corresponding values are stored in
array v [1 . . . n]. The goal is to select a maximum-value subset S of the weights whose total weight is at
most T .
We begin the dynamic programming solution in the usual way, by formulating the problem recursively.
The optimal subproblems are defined by selecting from smaller subsets of the weights. For example,
consider the maximum-value using only weight 1, and next the maximum-value selected from weights 1
and 2, and next the maximum-value selected from weights 1, 2 and 3. If we continue in this way, eventually
we will determine the maximum-value set selected from all weights.
We now formalize this recursive relationship. If we focus on the last weight in the sequence, this weight
could be either included or excluded from set S. We summarize these two options below:
Option 1: If we include w [i] in set S, then the maximum total weight is now T − w [i], and we
have added a value of v [i]. Therefore, we would like to solve the optimal subproblem of selecting the
maximum-value subset from the weights w1 . . . wi−1 having total weight at most T − w [i].
Option 2: If we do not include w [i] in set S, then the solution is equivalent to solving the subproblem
of selecting the maximum-value subset from the weights 1 . . . (i − 1) having total weight at most T .
Since the goal is to select the set S that maximizes the total value, then the solution should select the
maximum value of the above two options. The optimal solutions to the subproblems need to be stored for
easy look-up by the dynamic programming solution. The table V [i, j] can be used to store the optimal
solution to the problem of using weights w1 , w2 , . . . , wi having a maximum weight of j.
Defining the table: V [0 . . . n, 0 . . . T ]
• Let V [i, j] store the maximum-value subset selected from weights w1 , . . . , wi having total weight at
most j
• If we select no items, then the maximum value is 0. Therefore V [0, j] = 0 for all j ≥ 0
• If the maximum allowable weight is 0, then we must select no items (assume that the weights are
positive). Therefore V [i, 0] = 0 for all i ≥ 0.
• If w [i] ≤ j then we consider the maximum of either including weight wi or not. The first term below corresponds to option 1 above, and the second term to option 2.
V [i, j] = max{v [i] + V [i − 1, j − w [i]], V [i − 1, j]}
• Otherwise if w [i] > j, then we must exclude this weight:
V [i, j] = V [i − 1, j]
• The final optimal value to the original problem is stored in V [n, T ].
The above recursion references entries in either a previous column or a previous row. Therefore the
table can be filled row by row, from left to right.
MaxValueSet(w,T )
Step 1: Initialize the first row: for j = 0 to T set V [0, j] = 0
Initialize the first column: for i = 0 to n set V [i, 0] = 0
Step 2: Loop through the table row by row from left to right, filling in the entries using the
above recursive relationship.
for i = 1 to n
for j = 1 to T
if w [i] ≤ j
V [i, j] = max{v [i] + V [i − 1, j − w [i]] , V [i − 1, j]}
else V [i, j] = V [i − 1, j]
return V [n, T ]
Runtime: The algorithm loops through a table of size n · T performing a constant number of steps for
each entry. Therefore the overall runtime is Θ(n · T ).
Below is an example using the weights w = {3, 2, 4, 5, 1} with corresponding values v = {5, 9, 13, 4, 6}
and maximum weight T = 10.
The optimal solution is the set of weights {3, 2, 4, 1} with a total value of 33.
Finding the set of weights: We need to output the set of weights that correspond to the optimal
value in entry V [n, T ]. We can trace back through the table as follows: whenever V [i, j] ≠ V [i − 1, j], entry V [i, j] was made by selecting weight w [i]; otherwise weight w [i] was not selected.
Initialize j = T and i = n
While i > 0 :
if V [i, j] ≠ V [i − 1, j]
j = j − w [i]
Output “weight w [i] is selected with value v [i]”
Update i = i − 1
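A Python sketch of MaxValueSet and the trace-back, run on the example above:

    def max_value_set(w, v, T):
        n = len(w)
        V = [[0] * (T + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, T + 1):
                if w[i - 1] <= j:            # option 1 vs option 2 from above
                    V[i][j] = max(v[i - 1] + V[i - 1][j - w[i - 1]],
                                  V[i - 1][j])
                else:                        # weight i is too heavy
                    V[i][j] = V[i - 1][j]
        return V

    def trace_items(w, v, T, V):
        items, i, j = [], len(w), T
        while i > 0:
            if V[i][j] != V[i - 1][j]:       # weight i was selected
                items.append((w[i - 1], v[i - 1]))
                j -= w[i - 1]
            i -= 1
        return items

    w, v, T = [3, 2, 4, 5, 1], [5, 9, 13, 4, 6], 10
    V = max_value_set(w, v, T)
    print(V[len(w)][T])                      # 33
    print(trace_items(w, v, T, V))           # weights 1, 4, 2, 3; values sum to 33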
Problem 5
We begin by identifying subproblems. In order to arrive at station n, there must have been some previous
stop, which was the last stop made before arriving at station n. Suppose that stop was station i. Then the
battery at station i must be long enough to travel n − i miles. Furthermore, the number of stops we took
to get to station i must have been optimal. Certainly, we don’t know what that last stop was. However,
like with the rod-cutting problem, we can take the minimum over all possible previous stops in order to
determine the minimum number of stops to station n.
The table:
Define a table M [0 . . . n] where M [i] represents the minimum number of stops needed to get to station
i. We do not include station 0 as a “stop”, but the last station is included as a final “stop”. We initialize
M [0] = 0. The final answer is stored in M [n].
The recurrence:
To get to station i, we must have had some previous stop, at either 0, 1, . . . , i − 1. The value of M [i]
is the minimum over all possible previous stops. Note that a previous stop at station j is only possible if the battery picked up at station j lasts long enough to get you to station i. Therefore, M [i] is the minimum of all M [j] + 1 where j = 0 . . . i − 1, as long as b [j] ≥ i − j.
We fill in the table from left to right as in the rod-cutting problem:
MinStops(b)
Set M [0] = 0
for i = 1 to n
mymin = n
for j = 0 to i − 1
if b [j] ≥ i − j
mymin = min(M [j] + 1, mymin)
M [i] = mymin
Return M [n]
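A Python sketch of MinStops (it initializes with infinity rather than n, which serves the same purpose; the example input is hypothetical):

    import math

    def min_stops(b):
        n = len(b) - 1
        M = [0] + [math.inf] * n             # M[i] = fewest stops to reach station i
        for i in range(1, n + 1):
            for j in range(i):
                if b[j] >= i - j:            # the battery at j can reach station i
                    M[i] = min(M[i], M[j] + 1)
        return M[n]

    print(min_stops([3, 1, 7, 2, 4, 1]))     # 2: stop at station 2 (or 3), then station 5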
The runtime:
The runtime to fill entry M [i] of the table is ci, for some constant c, since the for-loop must run through all table entries for which j < i. Therefore the overall runtime is at most c + 2c + 3c + . . . + nc = Θ(n^2) (the same as rod-cutting).

Practice Problem Set 10

Problem 1
In class we saw the dynamic programming solution for Longest Palindrome Substring, which produces the table L [i, j], where L [i, j] is defined as 1 if the substring s [i, j] is a palindrome, and 0 otherwise. The table dimensions are n × n. Using the results of a DP table, describe an algorithm to output all palindrome substrings of maximum length, and justify its runtime of Θ(n^2). Note: You must provide a general algorithm; the table below is just an example DP table for L [i, j] where n = 9.

Problem 2
In class we studied the Longest Palindrome Subsequence. Write a procedure (with pseudo-code) that takes as input the completed DP table and the initial strings, and outputs the longest palindrome subsequence.

Problem 3
A set of n items is such that each item has a specific weight, wi, for 1 ≤ i ≤ n. We would like to find a subset of those items that has total weight T (if there is one). Describe a brute-force algorithm for this problem and determine its runtime. Describe a recursive solution (with pseudo-code) for this problem. Next, provide a dynamic programming solution to this problem and explain the runtime. Describe how to use the DP table to output the elements whose total weight is T.

Problem 4
Suppose each of the items in Problem 3 (above) has an associated value, vi, and also a weight wi. Provide a dynamic programming solution that finds a subset of the items with maximum value, such that the total weight is at most T, and explain the runtime. Can you also output the specific elements which correspond to the maximum value?

Problem 5
Suppose you are going on a long bike ride with your e-bike. You must ride exactly n miles. You will need to stop to replace your battery along the way. The bike trail is lined with battery pick-up stations, where you can trade your battery for a new charged battery. There is one battery station at every mile marker. However, the batteries at each station vary. For example, suppose that at station 1 they have batteries that last 3 miles, whereas station 2 may have batteries that last 7 miles. The goal is to complete the bike ride using the minimum number of stops to replace your battery. You cannot run out of power! If you pick up a battery pack worth 4 miles, then you cannot bike more than 4 miles! The input to the problem is an array b[0 . . . n], where b[i] stores the battery life of the batteries at station i. The output is the minimum number of stops required. Assume you start the trail at mile 0, and that you receive the battery at station 0. Provide a dynamic programming solution to this problem, and determine the runtime of your algorithm.
