Introduction to Dynamic Programming: Understanding Overlapping Subproblems
Dynamic Programming (DP) is a powerful technique in computer science used to solve complex problems efficiently by breaking them down into smaller, manageable subproblems. But not all problems can use DP. There are two key requirements, and in this article, we’ll focus on the first one: overlapping subproblems.
What Are Overlapping Subproblems?
At first, the term “overlapping subproblems” might sound intimidating. But it’s actually simple.
When we break a problem into smaller subproblems, sometimes some of these subproblems repeat. These repeated problems are what we call overlapping subproblems.
In short: Overlapping subproblems = repeating subproblems.
Example to Understand Overlapping Subproblems
Imagine you have a problem that can be broken into three subproblems:
- Subproblem 1 → returns 10
- Subproblem 2 → returns 20
- Subproblem 3 → returns 30
Now, suppose the same subproblem appears again while solving the overall problem:
- Subproblem 1 repeats → should return 10
- Subproblem 2 repeats → should return 20
If we recompute the answer for every repetition, it would be inefficient.
Instead, we can store the result of each subproblem when we first solve it. Later, if the same subproblem occurs, we retrieve the answer instantly.
This technique is called memoization.
Memoization: Storing Results
Memoization is the process of storing answers to subproblems so that repeated computations can be avoided.
Example: a minimal Python sketch of memoization using a dictionary as the cache (the solve_subproblem helper and its return values are illustrative, not tied to a specific problem):
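```python
# Cache of subproblem results: solve each one once, then reuse the answer.
cache = {}

def solve_subproblem(n):
    # Stand-in for an expensive computation (illustrative only).
    return n * 10

def solve(n):
    if n in cache:                 # repeated subproblem → instant retrieval
        return cache[n]
    result = solve_subproblem(n)   # first occurrence → do the expensive work
    cache[n] = result              # memoize: store the answer for later reuse
    return result

print(solve(1))  # computed → 10
print(solve(2))  # computed → 20
print(solve(1))  # repeated subproblem, retrieved from cache → 10
```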
Benefit:
- First time solving a subproblem → expensive computation
- Repeated occurrence → instant retrieval, making the overall solution much more efficient
Merge Sort: A Counterexample
Not all algorithms with subproblems are suitable for DP. Take merge sort, for instance.
- Merge sort divides an array into halves recursively.
- Each subproblem (sorting a particular subarray) is unique and does not repeat.
- Therefore, merge sort does not have overlapping subproblems.
Even though merge sort uses subproblems, it cannot take advantage of memoization, so DP isn’t applicable here.
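To see why, here is a minimal merge sort sketch (a standard textbook implementation, included only for illustration): every recursive call sorts a different slice of the input, so no result could ever be looked up and reused.

```python
def merge_sort(arr):
    # Every recursive call sorts a distinct subarray, so no two calls
    # ever solve the same subproblem -- there is nothing to memoize.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # unique subproblem: the left half
    right = merge_sort(arr[mid:])   # unique subproblem: the right half
    # Merge the two sorted halves back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 3]))  # [1, 2, 3, 5, 8]
```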
Key Takeaways
- Dynamic Programming Requirement 1: The problem must have overlapping subproblems.
- Memoization: Store results of subproblems to avoid recomputation.
- Non-DP Example: Merge sort has subproblems, but they are all unique, so it’s not a candidate for DP.
Simple Real-World Example
Problem: Fibonacci numbers
- Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13…
- A naive recursive approach recalculates the same Fibonacci numbers multiple times → inefficient (see the sketch below).
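A minimal sketch of that naive recursion (the call counter is only there to make the repetition visible):

```python
from collections import Counter

calls = Counter()  # how many times each fibonacci(n) is computed

def fib_naive(n):
    calls[n] += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(10))  # 55
print(calls[5])       # 8 -- fibonacci(5) was recomputed eight times
```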
Dynamic Programming Approach (using memoization):
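A minimal memoized sketch (the cache dictionary passed between calls is one illustrative choice; Python's functools.lru_cache would work just as well):

```python
def fibonacci(n, cache=None):
    # Memoized Fibonacci: each subproblem is solved once, then reused.
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]
    if n < 2:
        return n
    cache[n] = fibonacci(n - 1, cache) + fibonacci(n - 2, cache)
    return cache[n]

print(fibonacci(10))  # 55, with every fibonacci(k) computed only once
```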
Here, the subproblems overlap: without memoization, a value such as fibonacci(5) would be computed multiple times, but memoization ensures each subproblem is solved only once.
Summary
- Dynamic Programming is efficient because it solves overlapping subproblems once and reuses the results.
- Memoization is the key to storing these results.
- Problems like computing Fibonacci numbers are a perfect fit for DP, while merge sort is not, as its subproblems are unique.
Understanding overlapping subproblems is the first step toward mastering dynamic programming.