Introduction to Dynamic Programming: Understanding Overlapping Subproblems

Dynamic Programming (DP) is a powerful technique in computer science for solving complex problems efficiently by breaking them down into smaller, manageable subproblems. But not every problem is a candidate for DP: a problem must satisfy two key requirements (overlapping subproblems and optimal substructure), and this article focuses on the first one: overlapping subproblems.


What Are Overlapping Subproblems?

At first, the term “overlapping subproblems” might sound intimidating. But it’s actually simple.

When we break a problem into smaller subproblems, sometimes some of these subproblems repeat. These repeated problems are what we call overlapping subproblems.

In short: Overlapping subproblems = repeating subproblems.


Example to Understand Overlapping Subproblems

Imagine you have a problem that can be broken into three subproblems:

  1. Subproblem 1 → Returns 10

  2. Subproblem 2 → Returns 20

  3. Subproblem 3 → Returns 30

Now, suppose the same subproblem appears again while solving the overall problem:

  • Subproblem 1 repeats → should return 10

  • Subproblem 2 repeats → should return 20

If we recompute the answer for every repetition, it would be inefficient.

Instead, we can store the result of each subproblem when we first solve it. Later, if the same subproblem occurs, we retrieve the answer instantly.

This technique is called memoization.


Memoization: Storing Results

Memoization is the process of storing answers to subproblems so that repeated computations can be avoided.

Example (Python):

# Stand-in for a costly function so the example runs;
# in practice this would be your real subproblem solver
def expensive_computation(n):
    return n * 10

# Memoization dictionary
memo = {}

def subproblem(n):
    if n in memo:
        return memo[n]  # Retrieve the answer if already computed
    # Imagine this takes a trillion operations
    result = expensive_computation(n)
    memo[n] = result  # Store the result
    return result

# Usage
print(subproblem(1))  # First time -> expensive computation
print(subproblem(1))  # Second time -> O(1) lookup from memo


Benefit:

  • First time solving → expensive

  • Repeated time → instant retrieval, making the overall solution much more efficient.


Merge Sort: A Counterexample

Not all algorithms with subproblems are suitable for DP. Take merge sort, for instance.

  • Merge sort divides an array into halves recursively.

  • Each subproblem is unique and does not repeat.

  • Therefore, merge sort does not have overlapping subproblems.

Even though merge sort uses subproblems, it cannot take advantage of memoization, so DP isn’t applicable here.
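To make that concrete, here is a minimal merge sort sketch (an illustration, not code from a particular library): every recursive call receives a different slice of the array, so no result would ever be reused even if we cached it.

```python
def merge_sort(arr):
    # Each call works on a distinct slice: no subproblem repeats,
    # so there is nothing for memoization to reuse
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1]))  # Output: [1, 2, 5, 8]
```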


Key Takeaways

  1. Dynamic Programming Requirement 1: The problem must have overlapping subproblems.

  2. Memoization: Store results of subproblems to avoid recomputation.

  3. Non-DP Example: Merge sort has subproblems but they are all unique, so it’s not a candidate for DP.


Simple Real-World Example

Problem: Fibonacci numbers

  • Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13…

  • Naive recursive approach recalculates the same Fibonacci numbers multiple times → inefficient.
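To see the overlap concretely, here is a sketch of the naive recursion (the call_count dictionary is added here purely for illustration, to count how often each subproblem is solved):

```python
call_count = {}

def naive_fib(n):
    # Record how many times each subproblem is solved
    call_count[n] = call_count.get(n, 0) + 1
    if n <= 1:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

print(naive_fib(10))  # Output: 55
print(call_count[5])  # Output: 8 -> fib(5) is solved 8 times
```

Every count above 1 is an overlapping subproblem; memoization collapses each of them to a single computation.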

Dynamic Programming Approach (using memoization):

# Memoization dictionary for Fibonacci results
fib_memo = {}

def fibonacci(n):
    if n in fib_memo:
        return fib_memo[n]  # Reuse a previously computed result
    if n <= 1:
        return n  # Base cases: fib(0) = 0, fib(1) = 1
    fib_memo[n] = fibonacci(n-1) + fibonacci(n-2)
    return fib_memo[n]

print(fibonacci(10))  # Output: 55

Here, the subproblems overlap: a naive implementation would compute fibonacci(5) many times, but memoization ensures each subproblem is solved only once.


Summary

  • Dynamic Programming is efficient because it solves overlapping subproblems once and reuses results.

  • Memoization is key to storing these results.

  • Problems like computing Fibonacci numbers are a perfect fit for DP, while merge sort is not, since its subproblems are all unique.

Understanding overlapping subproblems is the first step toward mastering dynamic programming.
