
Dynamic Programming

Introduction and Purpose

Dynamic Programming (DP) is a problem-solving technique that breaks a complex problem into smaller subproblems, solves each subproblem exactly once, and stores the result so it never has to be recomputed. The word "programming" here means planning (not code) — it was coined in the 1950s before the modern usage.

The payoff: problems that would take exponential time with a naive approach often collapse to polynomial time with DP. The tradeoff is extra memory to store intermediate results.

Problems Addressed

Think of a GPS app. When computing the fastest route from city A to city Z, if the optimal path passes through city C, the app doesn't recompute the best route from C to Z every time it considers a detour through C. It computes it once and reuses it. That reuse is the heart of dynamic programming.

DP applies whenever a problem has both of these properties:

1. Optimal substructure: The best solution to the whole problem can be built from the best solutions to its parts.

2. Overlapping subproblems: The same smaller problems appear repeatedly when you break the big problem down.

Without both, DP is unnecessary. If subproblems don't overlap, divide-and-conquer (like merge sort) is the right tool. If substructure isn't optimal, greedy may work.

Common problems solved with DP:

  • Counting paths: How many ways can you reach cell (m, n) in a grid?
  • Optimization: Minimum coins to make change, maximum profit in a knapsack.
  • Sequence problems: Longest common subsequence, edit distance between strings.
  • Decision problems: Can you partition this array into two equal-sum halves?
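As a quick taste of the last category, here is a minimal sketch of the equal-sum partition check (`canPartition` is an illustrative name; it uses the 0/1 knapsack pattern covered later in this article):

```javascript
// Can an array be split into two subsets with equal sums?
function canPartition(nums) {
  const total = nums.reduce((sum, x) => sum + x, 0);
  if (total % 2 !== 0) return false; // an odd total can never split evenly

  const target = total / 2;
  // reachable[s] = true if some subset of the numbers sums to exactly s
  const reachable = new Array(target + 1).fill(false);
  reachable[0] = true;

  for (const num of nums) {
    // Iterate downward so each number is used at most once (0/1 pattern)
    for (let s = target; s >= num; s--) {
      if (reachable[s - num]) reachable[s] = true;
    }
  }
  return reachable[target];
}

console.log(canPartition([1, 5, 11, 5])); // true: {1, 5, 5} and {11}
console.log(canPartition([1, 2, 3, 5]));  // false
```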

Common Approaches

The problem with naive recursion: Fibonacci

Fibonacci numbers are defined as: fib(n) = fib(n-1) + fib(n-2), with fib(0) = 0 and fib(1) = 1.

A direct recursive implementation is correct but catastrophically slow:

```javascript
function fibNaive(n) {
  if (n <= 1) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}
```

Why is it slow? Look at what gets recomputed:

In the call tree for fib(5) alone, fib(3) is computed twice and fib(2) three times. For fib(50), this explodes into billions of redundant calls: the running time grows exponentially, roughly O(2ⁿ).

DP fixes this by ensuring each subproblem is solved only once.
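The redundancy is easy to measure directly. Here is a small instrumented variant (`countCalls` is an illustrative helper, not part of the original code) that tallies how often each fib(k) gets evaluated:

```javascript
// Count how many times the naive recursion evaluates each fib(k).
function countCalls(n) {
  const calls = new Map(); // k -> number of times fib(k) was evaluated
  function fib(k) {
    calls.set(k, (calls.get(k) || 0) + 1);
    if (k <= 1) return k;
    return fib(k - 1) + fib(k - 2);
  }
  fib(n);
  return calls;
}

const calls = countCalls(5);
console.log(calls.get(3)); // 2 (computed twice)
console.log(calls.get(2)); // 3 (computed three times)
console.log(calls.get(1)); // 5 (the call counts themselves grow like Fibonacci)
```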


Approach 1: Memoization (top-down DP)

Keep the recursive structure but cache every result the first time it's computed. On subsequent calls with the same input, return the cached value immediately.

```javascript
function fibMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n); // Return cached result

  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, result); // Cache before returning
  return result;
}

console.log(fibMemo(10));  // 55
console.log(fibMemo(50));  // 12586269025
```

Each value is computed once and cached. Time drops from O(2ⁿ) to O(n). Space is O(n) for the cache plus O(n) for the call stack.


Approach 2: Tabulation (bottom-up DP)

Instead of starting at the top and caching on the way down, start at the base cases and build the solution table upward. This is purely iterative — no recursion, no call stack.

```javascript
function fibTab(n) {
  if (n <= 1) return n;

  const dp = new Array(n + 1);
  dp[0] = 0;
  dp[1] = 1;

  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];
  }

  return dp[n];
}

console.log(fibTab(10)); // 55
```

When to use memoization vs tabulation?

Memoization is easier to write (mirrors the recursive definition) and only computes the subproblems you actually need. Tabulation is more predictable in performance and avoids stack overflow on deep recursion. In interviews, either is accepted — pick whichever you think through faster.


Approach 3: Space optimization

Many DP solutions only need the previous few values from the table, not the whole thing. For Fibonacci, dp[i] only depends on dp[i-1] and dp[i-2], so you can drop the array entirely.

```javascript
function fibOptimal(n) {
  if (n <= 1) return n;

  let prev = 0, curr = 1;
  for (let i = 2; i <= n; i++) {
    const next = prev + curr;
    prev = curr;
    curr = next;
  }
  return curr;
}

console.log(fibOptimal(50)); // 12586269025
```

Time: O(n). Space: O(1) — just two variables. This pattern applies broadly: whenever you spot that dp[i] only looks back a constant number of steps, collapse the array into variables.


Classic example: Coin Change

Given a list of coin denominations and a target amount, find the minimum number of coins needed to make exact change. Unlimited coins of each denomination are available.

This is the prototypical DP optimization problem. Greedy (always pick the largest coin) fails here — try making 11 cents with coins [1, 5, 6, 9]: greedy picks 9+1+1 (3 coins), but optimal is 6+5 (2 coins).

The subproblem: dp[amount] = minimum coins needed to make exactly amount.

For each coin, if it fits (coin <= amount), try using it and see if 1 + dp[amount - coin] beats the current best for dp[amount].

```javascript
function coinChange(coins, amount) {
  // Initialize all amounts as "impossible" (Infinity)
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // Base case: 0 coins needed for amount 0

  for (let a = 1; a <= amount; a++) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] !== Infinity) {
        dp[a] = Math.min(dp[a], dp[a - coin] + 1);
      }
    }
  }

  return dp[amount] === Infinity ? -1 : dp[amount];
}

console.log(coinChange([1, 5, 6, 9], 11)); // 2 → [5, 6]
console.log(coinChange([1, 5, 10, 25], 36)); // 3 → [25, 10, 1]
console.log(coinChange([2], 3));  // -1 (impossible)
```

Tracing coinChange([1, 5, 6, 9], 11):

| amount | dp[amount] | Best coins     |
|--------|------------|----------------|
| 0      | 0          | (base)         |
| 1      | 1          | [1]            |
| 5      | 1          | [5]            |
| 6      | 1          | [6]            |
| 9      | 1          | [9]            |
| 10     | 2          | [1+9] or [5+5] |
| 11     | 2          | [5+6]          |
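A common extension: record which coin produced each improvement, so the actual coin list can be reconstructed instead of just the count. A sketch (`coinChangeWithCoins` is an illustrative name, not from the original):

```javascript
// Minimum-coin change that also reconstructs one optimal coin list,
// using a parent array recording the last coin used for each amount.
function coinChangeWithCoins(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  const lastCoin = new Array(amount + 1).fill(-1);
  dp[0] = 0;

  for (let a = 1; a <= amount; a++) {
    for (const coin of coins) {
      // Infinity + 1 is still Infinity, so unreachable states never win
      if (coin <= a && dp[a - coin] + 1 < dp[a]) {
        dp[a] = dp[a - coin] + 1;
        lastCoin[a] = coin;
      }
    }
  }

  if (dp[amount] === Infinity) return null; // impossible

  // Walk back from `amount` to 0, collecting the recorded coins
  const used = [];
  for (let a = amount; a > 0; a -= lastCoin[a]) used.push(lastCoin[a]);
  return used;
}

console.log(coinChangeWithCoins([1, 5, 6, 9], 11)); // [5, 6]
```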

Classic example: 0/1 Knapsack

You have a bag with a weight limit and a list of items, each with a weight and a value. You can take each item at most once. Maximize the total value without exceeding the weight limit.

This is a 2D DP problem. dp[i][w] = maximum value using the first i items with weight capacity w.

For each item, you have two choices: skip it, or take it (if it fits). Take the better option.

```javascript
function knapsack(weights, values, capacity) {
  const n = weights.length;
  // dp[i][w] = max value with first i items and capacity w
  const dp = Array.from({ length: n + 1 }, () => new Array(capacity + 1).fill(0));

  for (let i = 1; i <= n; i++) {
    const w = weights[i - 1];
    const v = values[i - 1];

    for (let cap = 0; cap <= capacity; cap++) {
      // Option 1: skip item i
      dp[i][cap] = dp[i - 1][cap];

      // Option 2: take item i (only if it fits)
      if (w <= cap) {
        dp[i][cap] = Math.max(dp[i][cap], dp[i - 1][cap - w] + v);
      }
    }
  }

  return dp[n][capacity];
}

const weights = [2, 3, 4, 5];
const values  = [3, 4, 5, 6];
const capacity = 8;
console.log(knapsack(weights, values, capacity)); // 10 → items with weights [3,5] → values [4+6]
```

Time: O(n × capacity). Space: O(n × capacity), reducible to O(capacity) because each row of the table depends only on the previous row.
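As a sketch of that space reduction (`knapsack1D` is an illustrative name), keep a single row and iterate capacities from high to low, so each `dp[cap - w]` read still refers to the previous item's row:

```javascript
// Space-optimized 0/1 knapsack: one row of size capacity + 1.
function knapsack1D(weights, values, capacity) {
  const dp = new Array(capacity + 1).fill(0);

  for (let i = 0; i < weights.length; i++) {
    // Downward iteration ensures item i is counted at most once
    for (let cap = capacity; cap >= weights[i]; cap--) {
      dp[cap] = Math.max(dp[cap], dp[cap - weights[i]] + values[i]);
    }
  }
  return dp[capacity];
}

console.log(knapsack1D([2, 3, 4, 5], [3, 4, 5, 6], 8)); // 10
```

If the inner loop ran upward instead, an item could be "taken" multiple times, which solves the unbounded knapsack rather than 0/1.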

DP Problem Patterns

Most DP problems fit one of a handful of patterns. Recognizing the pattern is often the hardest part.

Start with 1D problems — they build the intuition. 2D grid problems are the natural next step. Two-sequence problems (LCS, edit distance) are the most common in interviews after 1D. Interval and state-machine DP are advanced and appear in senior-level interviews.

  • Divide & Conquer: Shares DP's structure (split → solve → combine) but subproblems are independent. Merge sort is D&C, not DP — it never reuses the result of sorting a subarray.
  • Greedy: Makes locally optimal choices hoping they lead to a global optimum. Faster but less powerful — greedy fails on coin change with arbitrary denominations, where DP succeeds.
  • Longest Increasing Subsequence: A direct DP application — see LIS. The O(n²) solution in that article is a textbook 1D DP.
  • Longest Common Subsequence (LCS): The canonical two-sequence DP problem. Given two strings, find the longest subsequence common to both. The dp[i][j] table represents results for prefixes of each string. Search for "LCS dynamic programming visualization" for animated walkthroughs.
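The LCS table described above fits in a few lines. A sketch (`lcsLength` is an illustrative name) computing only the length, not the subsequence itself:

```javascript
// dp[i][j] = LCS length of a.slice(0, i) and b.slice(0, j)
function lcsLength(a, b) {
  const m = a.length, n = b.length;
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] = a[i - 1] === b[j - 1]
        ? dp[i - 1][j - 1] + 1                  // characters match: extend
        : Math.max(dp[i - 1][j], dp[i][j - 1]); // skip a character from a or b
    }
  }
  return dp[m][n];
}

console.log(lcsLength("ABCBDAB", "BDCABA")); // 4 (e.g. "BCBA")
```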

Tests and Challenges

These problems are ordered by pattern, not just difficulty — work through a pattern before moving to the next:

| Pattern      | Problem                        | Platform      | What it teaches                  |
|--------------|--------------------------------|---------------|----------------------------------|
| 1D           | Climbing Stairs                | LeetCode #70  | Fibonacci in disguise            |
| 1D           | House Robber                   | LeetCode #198 | Skip or take, 1D                 |
| 1D           | Coin Change                    | LeetCode #322 | Classic 1D optimization          |
| 1D           | Longest Increasing Subsequence | LeetCode #300 | 1D, builds on all prior patterns |
| 2D Grid      | Unique Paths                   | LeetCode #62  | First 2D DP                      |
| 2D Grid      | Minimum Path Sum               | LeetCode #64  | 2D optimization                  |
| Two-sequence | Edit Distance                  | LeetCode #72  | Hardest classic two-sequence DP  |
| Knapsack     | Partition Equal Subset Sum     | LeetCode #416 | Subset / knapsack pattern        |
| Mixed        | DP problems (medium)           | HackerRank    | Variety of patterns              |

Recommended path: #70 → #198 → #322 → #300 → #62 → #72. This sequence introduces each new idea on top of the previous one. If you get stuck, NeetCode's DP playlist has video walkthroughs for most of these.

Related Topics

  • Big O Notation — DP trades space for time; understanding O notation is essential for evaluating that tradeoff.
  • Longest Increasing Subsequence — A worked DP example with two full implementations (O(n²) tabulation and O(n log n) with binary search).
  • Binary Search — Used in the optimized LIS solution; also applies to DP problems where you search for the optimal answer over a range.
  • Flood Fill — Grid-based DP problems (Unique Paths, Min Path Sum) operate on the same 2D grid structure as flood fill; understanding grid traversal makes the jump to 2D DP more intuitive.