Dynamic programming involves breaking a problem down into its subproblems. By solving each subproblem only once and saving the result in memory so it can be reused whenever the same subproblem appears again, the algorithmic complexity decreases significantly. Implementing dynamic programming algorithms requires higher-level thinking about the problem's patterns. To explain dynamic programming, let's re-examine the Fibonacci sequence that was discussed in Chapter 8. Then the chapter will cover the rules of dynamic programming and walk you through some examples to make the concepts more concrete.
Motivations for Dynamic Programming

Recursion tree for Fibonacci numbers
This is known as overlapping subproblems. Calculating the sixth Fibonacci number requires calculating both the fifth and the fourth Fibonacci numbers, and calculating the fifth Fibonacci number itself requires calculating the fourth. Hence, the calculation for the fifth Fibonacci number overlaps with the calculation for the fourth. This problem also has an optimal substructure, which refers to the fact that the optimal solution to the problem contains optimal solutions to its subproblems.
With this, let’s now formalize what dynamic programming is.
Rules of Dynamic Programming
Dynamic programming (DP) is the method of storing values that were already calculated and using those values to avoid any recalculations (typically in a recursive algorithm). This method can be applied only to those problems with overlapping subproblems and optimal substructure.
Overlapping Subproblems
Similar to divide and conquer in recursion, DP combines the solutions of subproblems. DP is used when the solutions for subproblems are needed multiple times. It stores subproblem solutions, typically in a hash table, an array, or a matrix, and this is referred to as memoization. DP is useful for solving problems in which there are many repeated subproblems.
An example of this can be seen with the recursive method for the Fibonacci sequence. It can be observed that some numbers, such as the third Fibonacci number, are recalculated many times.
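As a sketch, assuming the same recursive definition used in Chapter 8, the naive implementation looks like this:

```javascript
// Naive recursive Fibonacci: fib(6) calls fib(5) and fib(4),
// and fib(5) calls fib(4) again, so the same subproblems are
// recomputed over and over.
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(6)); // 8
```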
A hash table can be used to store results to avoid any recalculations. Doing this reduces the time complexity from O(2^n) to just O(n), which is an immense change. With a realistically large n, an O(2^n) computation can literally take years.
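A minimal sketch of the memoized version, using a plain object as the hash table (the parameter name cache is illustrative):

```javascript
// Memoized Fibonacci: a hash table caches each computed result,
// so every subproblem is solved only once -- O(n) instead of O(2^n).
function fibMemo(n, cache = {}) {
  if (n <= 1) return n;
  if (cache[n] !== undefined) return cache[n]; // reuse stored result
  cache[n] = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  return cache[n];
}

console.log(fibMemo(50)); // 12586269025
```

A value like fibMemo(50) returns instantly, whereas the naive recursive version would make on the order of 2^50 calls.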
Optimal Substructure
An optimal substructure is when the optimal solution of a problem can be found by using the optimal solutions of its subproblems.
For example, the shortest path finding algorithms have optimal substructures. Consider finding the shortest path for traveling between cities by car. If the shortest route from Los Angeles to Vancouver passes through San Francisco and then Seattle, then the shortest route from San Francisco to Vancouver must pass through Seattle as well.
Example: Ways to Cover Steps
For example, given a distance of four steps, where one, two, or three steps can be taken at a time, some of the ways to cover the distance are as follows:
1. 1 step, 1 step, 1 step, 1 step
2. 1 step, 1 step, 2 steps
3. 1 step, 3 steps
4. 2 steps, 2 steps
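The naive recursion branches three ways at every step (take one, two, or three steps). A sketch, counting ordered sequences of steps (the function name is illustrative):

```javascript
// Counts the ways to cover n steps taking 1, 2, or 3 steps at a time.
// Each call makes three recursive calls, giving O(3^n) time.
function countSteps(n) {
  if (n < 0) return 0;   // overshot the distance: not a valid way
  if (n === 0) return 1; // covered exactly: one valid way
  return countSteps(n - 1) + countSteps(n - 2) + countSteps(n - 3);
}

console.log(countSteps(4)); // 7
```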
Time Complexity: O(3^n)
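With DP, each subproblem is computed once and stored in a table. A bottom-up sketch (names illustrative):

```javascript
// Bottom-up version: table[i] stores the number of ways to cover i steps,
// so each subproblem is solved exactly once -- O(n) time.
function countStepsDP(n) {
  const table = new Array(n + 1).fill(0);
  table[0] = 1; // one way to cover zero steps: take no steps
  for (let i = 1; i <= n; i++) {
    table[i] += table[i - 1];
    if (i >= 2) table[i] += table[i - 2];
    if (i >= 3) table[i] += table[i - 3];
  }
  return table[n];
}

console.log(countStepsDP(4)); // 7
```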
Time Complexity: O(n)
This shows the power of dynamic programming: it improves time complexity immensely.
Classical Dynamic Programming Examples
This section will explore and solve some of the classical dynamic programming problems. The first one that will be explored is the knapsack problem.
The Knapsack Problem
Given n items, each with a weight and a value, put a subset of these items in a knapsack of a given capacity, w, to obtain the maximum total value in the knapsack.
Optimal Substructure
For every item, there are two cases:

The item is included in the optimal subset.

The item is not included in the optimal subset.

Therefore, the maximum value obtainable from n items is the maximum of the following two values:
1. (excluding the nth item): the maximum value obtained with n-1 items and capacity w
2. (including the nth item): the value of the nth item plus the maximum value obtained with n-1 items and capacity w minus the weight of the nth item (this case applies only if the weight of the nth item does not exceed w)
Naive Approach
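A sketch of the naive recursion, trying both cases for each item (function and parameter names are illustrative):

```javascript
// Naive recursive knapsack: for item n-1, try both excluding and
// including it. Two branches per item give O(2^n) time.
function knapsackNaive(weights, values, w, n) {
  if (n === 0 || w === 0) return 0;
  // The item cannot fit: it must be excluded.
  if (weights[n - 1] > w) return knapsackNaive(weights, values, w, n - 1);
  // Otherwise, take the better of excluding or including the item.
  return Math.max(
    knapsackNaive(weights, values, w, n - 1),
    values[n - 1] + knapsackNaive(weights, values, w - weights[n - 1], n - 1)
  );
}

console.log(knapsackNaive([1, 2, 3], [60, 100, 120], 5, 3)); // 220
```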
Time Complexity: O(2^n)

Recursion tree for knapsack
DP Approach
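A bottom-up sketch that caches results in a matrix, here named matrixDP to match the description below (other names are illustrative):

```javascript
// Bottom-up knapsack: matrixDP[i][j] holds the best value obtainable
// using the first i items with capacity j. Filling the (n+1) x (w+1)
// table takes O(n * w) time.
function knapsackDP(weights, values, w) {
  const n = weights.length;
  const matrixDP = [];
  for (let i = 0; i <= n; i++) matrixDP.push(new Array(w + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let j = 0; j <= w; j++) {
      matrixDP[i][j] = matrixDP[i - 1][j]; // case 1: exclude item i-1
      if (weights[i - 1] <= j) {           // case 2: include item i-1 if it fits
        matrixDP[i][j] = Math.max(
          matrixDP[i][j],
          values[i - 1] + matrixDP[i - 1][j - weights[i - 1]]
        );
      }
    }
  }
  return matrixDP[n][w];
}

console.log(knapsackDP([1, 2, 3], [60, 100, 120], 5)); // 220
```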
Time Complexity: O(n * w)
Here, n is the number of items, and w is the capacity of the knapsack.
Space Complexity: O(n * w)
This algorithm requires an n times w matrix, matrixDP, to store the cached results.
The next DP question that will be studied is another classic.
Longest Common Subsequence
Given two sequences, find the length of the longest common subsequence, where a subsequence is defined as a sequence that appears in the same relative order without necessarily being contiguous. For example, sam, sie, aie, and so forth, are subsequences of sammie. A string of length n has 2^n possible subsequences.
As a real-world example, let's consider a generalized computer science problem that appears in many domains, such as bioinformatics (DNA sequencing). This algorithm is also how the diff functionality (comparing files to output the differences between them) is implemented in version control systems and operating systems.
Naive Approach
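A sketch of the naive recursion, comparing the last characters of the two strings (names illustrative):

```javascript
// Naive LCS length: if the last characters match, count them and recurse
// on the rest; otherwise try dropping a character from either string.
// In the worst case (no matches), every call makes two calls: O(2^n).
function lcsNaive(str1, str2, m, n) {
  if (m === 0 || n === 0) return 0;
  if (str1[m - 1] === str2[n - 1]) {
    return 1 + lcsNaive(str1, str2, m - 1, n - 1);
  }
  return Math.max(
    lcsNaive(str1, str2, m, n - 1),
    lcsNaive(str1, str2, m - 1, n)
  );
}

console.log(lcsNaive("sammie", "sam", 6, 3)); // 3
```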
Time Complexity: O(2^n)

Recursion tree for longest common string length
DP Approach
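A bottom-up sketch that tabulates the subproblem results (names illustrative):

```javascript
// Bottom-up LCS: table[i][j] holds the LCS length of the first i
// characters of str1 and the first j characters of str2.
// Filling the table takes O(m * n) time and space.
function lcsDP(str1, str2) {
  const m = str1.length, n = str2.length;
  const table = [];
  for (let i = 0; i <= m; i++) table.push(new Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        table[i][j] = table[i - 1][j - 1] + 1; // matching characters extend the LCS
      } else {
        table[i][j] = Math.max(table[i - 1][j], table[i][j - 1]);
      }
    }
  }
  return table[m][n];
}

console.log(lcsDP("sammie", "sie")); // 3
```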
Time Complexity: O(m * n)
Space Complexity: O(m * n)
Here, m is the length of str1, and n is the length of str2.
Coin Change
Given a value (amount of money) n and an unlimited supply of coins of m different denominations, S = {S1, S2, ..., Sm}, in how many ways can the change be made without considering the order of the coins?
Optimal Substructure
Naive Approach
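A sketch of the naive recursion: for each coin type, either skip it entirely or use one more of it (names illustrative):

```javascript
// Naive count of the ways to make change for n using the first m
// coin types in S. Order does not matter because coin types are
// considered one at a time.
function coinChangeNaive(S, m, n) {
  if (n === 0) return 1;          // exact change: one valid way
  if (n < 0 || m === 0) return 0; // overshot, or no coin types left
  // Ways without the last coin type, plus ways using one more of it.
  return coinChangeNaive(S, m - 1, n) + coinChangeNaive(S, m, n - S[m - 1]);
}

console.log(coinChangeNaive([1, 2, 3], 3, 4)); // 4
```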
Time Complexity: O(n^m)
Space Complexity: O(n)
Here, m is the number of available types of coins, and n is the desired currency to convert into change.
Overlapping Subproblems

Recursion tree for coin change
To solve for this, a table (matrix) can be used to store already computed results.
DP Approach
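A bottom-up sketch using a table of subproblem counts (names illustrative):

```javascript
// Bottom-up coin change: table[i][j] is the number of ways to make
// change for value j using the first i coin types in S.
// Filling the table takes O(m * n) time and space.
function coinChangeDP(S, n) {
  const m = S.length;
  const table = [];
  for (let i = 0; i <= m; i++) table.push(new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) table[i][0] = 1; // one way to make change for 0
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      table[i][j] = table[i - 1][j]; // ways without coin type i-1
      if (j >= S[i - 1]) table[i][j] += table[i][j - S[i - 1]]; // plus ways using it
    }
  }
  return table[m][n];
}

console.log(coinChangeDP([1, 2, 3], 4)); // 4
```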
Time Complexity: O(m * n)
Space Complexity: O(m * n)
Here, m is the number of available types of coins, and n is the desired currency to convert into change.
Edit (Levenshtein) Distance
Given a string (str1) of length m and another string (str2) of length n, what is the minimum number of edits to convert str1 into str2?
The following three operations are allowed on str1:
1. Insert
2. Remove
3. Replace
Optimal Substructure
Naive Approach
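A sketch of the naive recursion, working backward from the ends of both strings (names illustrative):

```javascript
// Naive edit distance: if the last characters match, recurse on the rest;
// otherwise try all three operations and take the cheapest.
function editDistanceNaive(str1, str2, m, n) {
  if (m === 0) return n; // insert all remaining characters of str2
  if (n === 0) return m; // remove all remaining characters of str1
  if (str1[m - 1] === str2[n - 1]) {
    return editDistanceNaive(str1, str2, m - 1, n - 1); // no edit needed
  }
  return 1 + Math.min(
    editDistanceNaive(str1, str2, m, n - 1),    // insert
    editDistanceNaive(str1, str2, m - 1, n),    // remove
    editDistanceNaive(str1, str2, m - 1, n - 1) // replace
  );
}

console.log(editDistanceNaive("sunday", "saturday", 6, 8)); // 3
```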
Time Complexity: O(3^m)
The time complexity of the naive solution is exponential, with the worst case occurring when no characters in the two strings match. This makes sense because each call makes three further calls (insert, remove, replace).

Recursion tree for edit distance
DP Approach
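A bottom-up sketch that tabulates the minimum edits for every prefix pair (names illustrative):

```javascript
// Bottom-up edit distance: table[i][j] is the minimum number of edits to
// convert the first i characters of str1 into the first j characters of
// str2. Filling the table takes O(m * n) time and space.
function editDistanceDP(str1, str2) {
  const m = str1.length, n = str2.length;
  const table = [];
  for (let i = 0; i <= m; i++) table.push(new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) table[i][0] = i; // remove i characters
  for (let j = 0; j <= n; j++) table[0][j] = j; // insert j characters
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        table[i][j] = table[i - 1][j - 1]; // matching characters: no edit
      } else {
        table[i][j] = 1 + Math.min(
          table[i][j - 1],    // insert
          table[i - 1][j],    // remove
          table[i - 1][j - 1] // replace
        );
      }
    }
  }
  return table[m][n];
}

console.log(editDistanceDP("sunday", "saturday")); // 3
```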
Time Complexity: O(m * n)
Space Complexity: O(m * n)
Here, m is the length of str1, and n is the length of str2.
Summary
Optimal substructure: The optimal solution to the problem contains optimal solutions to its subproblems.
Overlapping subproblems: The solutions for subproblems are needed multiple times.
To store the already computed solutions to subproblems, a matrix or a hash table is typically used; this is because both provide O(1) lookup time. Doing this, the time complexity can be improved from exponential (e.g., O(2^n)) to polynomial time (e.g., O(n^2)).