
Dynamic Programming Complete Guide

Algorithm • Hard

Dynamic programming solves problems by reusing answers to overlapping subproblems.

The workflow is: define state, define recurrence, define base case, compute in dependency-safe order.

It often converts exponential brute force into polynomial runtime.
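To make that reduction concrete, compare a naive recursion with a memoized one. The Fibonacci sketch below is an illustration added for clarity, not part of the reference implementation later in this guide; the same pattern applies to every problem covered here.

// Naive recursion: fib(n) re-solves fib(n-2), fib(n-3), ... many times,
// so the call tree has exponentially many nodes.
function fibNaive(n: number): number {
  if (n <= 1) return n
  return fibNaive(n - 1) + fibNaive(n - 2)
}

// Memoized: each distinct n is computed once, so total work is O(n).
function fibMemo(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n
  const cached = memo.get(n)
  if (cached !== undefined) return cached
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
  memo.set(n, result)
  return result
}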

Typical Complexity Baseline

Metric                Value
Time complexity       Varies with the number of states and the cost per transition
Typical improvement   Often reduces exponential brute force to polynomial

See It

Concept Diagram

A small animated SVG that illustrates the core idea: the moving parts you need to keep in your head as you read the rest of the guide.

Dynamic programming: fill a table once.

Each cell is computed from cells already filled. No subproblem is solved twice.


Mechanics

Core Concepts

The building blocks and terminology you need before everything else clicks. Skim or deep-read.

State

What it is: The minimal set of variables that uniquely identifies a subproblem.

Why it matters: If the state omits a variable the answer depends on, the recurrence is wrong; if it carries extra variables, the table and the runtime blow up.

Transition

What it is: The formula that builds the current state's answer from smaller states.

Why it matters: The transition is where optimal substructure is encoded; a transition that misses a valid predecessor state silently drops solutions.

Base case

What it is: The known trivial answers that anchor the recurrence.

Why it matters: Every computed state ultimately reduces to the base cases, so an off-by-one here corrupts the entire table.

Memo table

What it is: The cache used by top-down recursion, or the table filled by a bottom-up loop.

Why it matters: Storing each answer once is what collapses an exponential recursion tree into one computation per distinct state.

Iteration order

What it is: An order of computation that guarantees every dependency is filled before it is read.

Why it matters: Bottom-up code reads earlier cells directly; visiting states in the wrong order means reading uninitialized values.

Overlapping subproblems

What it is: The same subproblem appears many times in the recursion tree.

Why it matters: That repetition is exactly what caching exploits; without overlap, memoization adds bookkeeping with no speedup.

Optimal substructure

What it is: An optimal solution can be built from optimal solutions to its subproblems.

Why it matters: Without this property, combining optimal sub-answers can yield a globally suboptimal result, so DP would give wrong answers.

Memoization

What it is: Top-down recursion plus a cache of already-computed states.

Why it matters: It keeps the natural recursive formulation of the problem while paying for each distinct state only once.
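A minimal top-down sketch of the climbing-stairs problem from the diagram (added for illustration; the bottom-up, space-compressed version appears in the Code section):

// Recurse on the natural definition, caching each state after its
// first computation so no subproblem is solved twice.
function climbStairsMemo(stepCount: number, memo: number[] = []): number {
  if (stepCount <= 2) return stepCount
  if (memo[stepCount] !== undefined) return memo[stepCount]
  memo[stepCount] =
    climbStairsMemo(stepCount - 1, memo) + climbStairsMemo(stepCount - 2, memo)
  return memo[stepCount]
}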

Tabulation

What it is: Bottom-up iterative filling of the DP table.

Why it matters: It avoids recursion-depth limits and makes the iteration order, and therefore space-compression opportunities, explicit.
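The same problem written bottom-up, as an illustrative sketch; it uses the dp[0] = 1, dp[1] = 1 convention from the walkthrough:

// Fill dp[0..n] in index order so dp[i-1] and dp[i-2] are always
// ready before dp[i] is computed.
function climbStairsTable(stepCount: number): number {
  const dp = new Array<number>(stepCount + 1).fill(0)
  dp[0] = 1 // one way to stand at the start
  if (stepCount >= 1) dp[1] = 1
  for (let step = 2; step <= stepCount; step += 1) {
    dp[step] = dp[step - 1] + dp[step - 2]
  }
  return dp[stepCount]
}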

State compression

What it is: Reducing memory by storing only the recent rows or columns the transition actually reads.

Why it matters: It can cut an O(n·m) table to O(m), which is often the difference between fitting in memory and not.
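A sketch of row compression on a knapsack-style table, using subset sum (related to Partition Equal Subset Sum in the practice ladder below; the function name is illustrative):

// Conceptually dp[i][s] = "can the first i values reach sum s", but row i
// only reads row i-1, so a single 1D row is enough.
function canReachSum(values: number[], target: number): boolean {
  const reachable = new Array<boolean>(target + 1).fill(false)
  reachable[0] = true // the empty subset reaches sum 0
  for (const value of values) {
    // Walk sums downward so each value is used at most once per subset.
    for (let sum = target; sum >= value; sum -= 1) {
      if (reachable[sum - value]) reachable[sum] = true
    }
  }
  return reachable[target]
}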

Walkthrough

Putting It All Together

Every frame of the concept diagram, laid out top-to-bottom so you can scroll through the algorithm at your own pace.

Step 1/9
dp[0..7] = [ ·, ·, ·, ·, ·, ·, ·, · ]   (all unfilled)

Climbing stairs: dp[i] = dp[i-1] + dp[i-2]. Goal: count ways to reach step n.

Step 2/9
dp[0..7] = [ 1, ·, ·, ·, ·, ·, ·, · ]

Base case: dp[0] = 1 (one way to be at the start).

Step 3/9
dp[0..7] = [ 1, 1, ·, ·, ·, ·, ·, · ]

Base case: dp[1] = 1.

Step 4/9
dp[0..7] = [ 1, 1, 2, ·, ·, ·, ·, · ]

dp[2] = dp[1] + dp[0] = 1 + 1 = 2.

Step 5/9
dp[0..7] = [ 1, 1, 2, 3, ·, ·, ·, · ]

dp[3] = dp[2] + dp[1] = 2 + 1 = 3.

Step 6/9
dp[0..7] = [ 1, 1, 2, 3, 5, ·, ·, · ]

dp[4] = dp[3] + dp[2] = 3 + 2 = 5.

Step 7/9
dp[0..7] = [ 1, 1, 2, 3, 5, 8, ·, · ]

dp[5] = dp[4] + dp[3] = 5 + 3 = 8.

Step 8/9
dp[0..7] = [ 1, 1, 2, 3, 5, 8, 13, · ]

dp[6] = dp[5] + dp[4] = 8 + 5 = 13.

Step 9/9
dp[0..7] = [ 1, 1, 2, 3, 5, 8, 13, 21 ]

dp[7] = 13 + 8 = 21. Total work is O(n): each subproblem is computed exactly once.
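To replay these frames yourself, a few lines suffice (a sketch, assuming a console.log environment such as Node):

// Reproduce frames 2 through 9: base cases, then one fill per step.
const dp: number[] = [1, 1]
for (let step = 2; step <= 7; step += 1) {
  dp[step] = dp[step - 1] + dp[step - 2]
  console.log(`dp[${step}] = ${dp[step - 1]} + ${dp[step - 2]} = ${dp[step]}`)
}
// Final frame: dp = [1, 1, 2, 3, 5, 8, 13, 21]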

Tradeoffs

How It Compares

When this technique wins, when it loses, and what to reach for instead.

vs. Greedy

Choose this when: the locally best choice does not guarantee a global optimum.

Tradeoff: Greedy is simpler and faster when a greedy-choice argument actually holds.

vs. Backtracking

Choose this when: subproblems overlap significantly, so caching pays for itself.

Tradeoff: Backtracking may be simpler for constraint search with little or no overlap.

vs. Divide and conquer

Choose this when: recursive branches reuse the same states.

Tradeoff: Divide and conquer fits independent subproblems better.

In Production

Real-World Stories

Where this shows up at real companies and what the on-call engineer learned the hard way.

Google

Speech and sequence models have historically relied on DP-based decoding and alignment methods such as the Viterbi algorithm.

Takeaway DP remains practical in sequence optimization pipelines.

Bioinformatics platforms

DNA and protein alignment runs classic DP matrices (Needleman-Wunsch, Smith-Waterman) at scale.

Takeaway DP powers core scientific compute workflows.

Fintech analytics

Portfolio and allocation optimization often uses DP-style recursions in which each decision depends on accumulated prior state.

Takeaway DP is valuable when decisions depend on prior cumulative state.

Code

Implementation

A clean reference implementation in TypeScript.

Classic 1D DP where state[i] depends on previous two states.

Complexity: Time O(n), Space O(1) with compression

Climbing stairs DP

function climbStairs(stepCount: number): number {
  // Base cases: 1 way to climb 1 step, 2 ways to climb 2 steps.
  if (stepCount <= 2) return stepCount

  // Rolling variables are the state-compressed dp table:
  // oneStepBefore = dp[i-1], twoStepsBefore = dp[i-2].
  let oneStepBefore = 2
  let twoStepsBefore = 1

  for (let stepIndex = 3; stepIndex <= stepCount; stepIndex += 1) {
    // dp[i] = dp[i-1] + dp[i-2]
    const currentWays = oneStepBefore + twoStepsBefore
    twoStepsBefore = oneStepBefore
    oneStepBefore = currentWays
  }

  return oneStepBefore
}
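A quick spot-check (not part of the original listing) ties the implementation back to the walkthrough:

console.log(climbStairs(2)) // 2
console.log(climbStairs(7)) // 21, matching dp[7] in the walkthrough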

Watch Out

Common Problems and Failure Modes

The bugs and edge cases that bite engineers most often. Skim before you ship.

  • A state definition that is too large, causing memory or time explosion.
  • A wrong transition that misses valid predecessor states.
  • A forgotten base case, leading to undefined values.
  • Computing states in the wrong order, or nesting loops the wrong way (see the coin-change sketch after this list).
  • Not testing small, hand-computable examples.
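The iteration-order pitfall deserves a concrete sketch. In a coin-change "count the ways" DP (illustrative code, not from this guide), swapping the two loops changes what is counted, even though neither version ever reads an unfilled cell:

// Coins in the OUTER loop: counts combinations (coin order ignored).
function countCombinations(coins: number[], amount: number): number {
  const ways = new Array<number>(amount + 1).fill(0)
  ways[0] = 1
  for (const coin of coins) {
    for (let total = coin; total <= amount; total += 1) {
      ways[total] += ways[total - coin]
    }
  }
  return ways[amount]
}

// Amounts in the OUTER loop: counts permutations (1+2 and 2+1 differ).
function countPermutations(coins: number[], amount: number): number {
  const ways = new Array<number>(amount + 1).fill(0)
  ways[0] = 1
  for (let total = 1; total <= amount; total += 1) {
    for (const coin of coins) {
      if (coin <= total) ways[total] += ways[total - coin]
    }
  }
  return ways[amount]
}

// countCombinations([1, 2], 3) === 2, countPermutations([1, 2], 3) === 3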

Pro Tips

Tips and Tricks

Small habits that pay off every time you reach for this pattern.

  • Define state in one sentence before writing any transition logic.
  • Start with recursion + memoization, then convert to tabulation if needed.
  • Base cases should be explicit and tested in isolation.
  • If DP table is too large, look for rolling-array/state-compression opportunities.

Pattern Recognition

When to Use

Concrete signals that tell you this is the right pattern both at work and on LeetCode.

Real-system usage signals

  • Eliminates repeated computation across overlapping subproblems.
  • Captures global optimum via local transitions and memoized state.
  • Critical for sequence alignment, knapsack-style optimization, and path counting.

LeetCode-specific tips (pattern-identification signals)

  • Identification signal: brute-force recursion repeats the same subproblems across branches.
  • Identification signal: objective asks min/max/count/ways under constraints where previous states determine next state.
  • If you can define state + transition + base case cleanly, DP is likely the right approach.
  • For Dynamic Programming questions, start by naming the core invariant before writing code.
  • Use the constraint section to set time/space target first, then pick the data structure/algorithm.
  • Solve one tiny example by hand and map each transition to your variables before implementing.
  • Run adversarial cases: empty input, duplicates, max-size input, and sorted/reverse patterns when relevant.
  • During interviews, explain why your approach is the right pattern for this prompt, not just why the code works.

Practice

LeetCode Progression

A curated easy-to-hard problem ladder. Each one reinforces a specific aspect of the pattern.

#    Problem                          Difficulty   Typical Complexity
1    Climbing Stairs                  Easy         O(n)
2    Min Cost Climbing Stairs         Easy         O(n)
3    House Robber                     Medium       O(n)
4    Coin Change                      Medium       O(amount * coins)
5    Longest Increasing Subsequence   Medium       O(n log n)
6    Unique Paths                     Medium       O(m*n)
7    Partition Equal Subset Sum       Medium       O(n*sum)
8    Edit Distance                    Hard         O(m*n)
9    Burst Balloons                   Hard         O(n^3)
10   Distinct Subsequences            Hard         O(m*n)