What Is Dynamic Programming and How To Use It
Channel: CS Dojo
Algorithm • Hard
Dynamic programming solves problems by reusing answers to overlapping subproblems.
The workflow is: define state, define recurrence, define base case, compute in dependency-safe order.
It often converts exponential brute force into polynomial runtime.
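The exponential-to-polynomial conversion is easiest to see with a memoized recurrence. This sketch (function name `fibMemo` is illustrative, not from the original) caches each Fibonacci subproblem so it is solved once instead of exponentially many times:

```typescript
// Naive fib(n) recursion revisits the same subproblems exponentially often;
// a memo cache makes each subproblem O(1) after its first solve, so the
// whole computation is O(n).
function fibMemo(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n // base cases anchor the recurrence
  const cached = memo.get(n)
  if (cached !== undefined) return cached // reuse, don't recompute
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
  memo.set(n, result)
  return result
}
```

The same four-step workflow applies: state is `n`, the recurrence is `fib(n-1) + fib(n-2)`, the base cases are `n <= 1`, and the cache makes the evaluation order safe.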
Typical Complexity Baseline
| Metric | Value |
|---|---|
| Time | Varies with the number of states and transition cost |
| Typical effect | Often reduces exponential brute force to polynomial |
See It
A small animated SVG that illustrates the core idea: the moving parts you need to keep in your head when you read the rest of the guide.
Each cell is computed from cells already filled. No subproblem is solved twice.
Watch First
A quick visual walkthrough before the detailed sections; pick the format that helps you learn fastest.
Mechanics
The building blocks and terminology you need before everything else clicks. Skim or deep-read.
State
What it is: The minimal variables that uniquely represent a subproblem.
Why it matters: If the state omits information the recurrence depends on, distinct subproblems collide in the table and the answers are wrong.

Transition
What it is: The formula that builds the current state from smaller states.
Why it matters: The transition is the recurrence itself; its cost per state times the number of states gives the total runtime.

Base case
What it is: Known trivial answers that anchor the recurrence.
Why it matters: Every recursive chain must bottom out in a base case; a missing or wrong one corrupts every value built on top of it.

Memo table
What it is: A cache for the top-down approach, or a table for the bottom-up approach.
Why it matters: The table is what guarantees each subproblem is solved once, collapsing an exponential recursion tree into polynomial work.

Iteration order
What it is: An order that ensures dependencies are already computed.
Why it matters: Bottom-up code reads neighboring cells directly; if a dependency has not been filled yet, you silently read a stale or default value.

Overlapping subproblems
What it is: The same subproblem appears many times in the recursion tree.
Why it matters: This is the property that makes caching pay off; without overlap, memoization adds bookkeeping but saves nothing.

Optimal substructure
What it is: An optimal solution can be built from optimal sub-solutions.
Why it matters: This is the property that makes the recurrence correct; without it, combining locally optimal answers can miss the global optimum.

Memoization
What it is: Top-down recursion plus a cache.
Why it matters: It follows the natural recursive definition and only computes the states that are actually reachable.

Tabulation
What it is: Bottom-up iterative table filling.
Why it matters: It avoids recursion depth limits and enables space optimization, at the cost of computing every state in the table.

Space optimization
What it is: Reduce memory by storing only the recent rows or columns the recurrence reads.
Why it matters: Many 2D recurrences only read the previous row, so O(m*n) memory can drop to O(n) without changing the answer.
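Space optimization is easiest to see in a grid problem. This sketch (function name `uniquePaths` is illustrative; the problem matches "Unique Paths" from the practice ladder) keeps one rolling row instead of the full m x n table:

```typescript
// Unique paths in an m x n grid: each cell needs only the cell above
// and the cell to the left, so a single rolling row of length n
// replaces the full m x n table, reducing O(m*n) space to O(n).
function uniquePaths(m: number, n: number): number {
  const row = new Array<number>(n).fill(1) // first row: one path into each cell
  for (let r = 1; r < m; r += 1) {
    for (let c = 1; c < n; c += 1) {
      // row[c] still holds the value from the row above;
      // row[c - 1] is the already-updated cell to the left.
      row[c] += row[c - 1]
    }
  }
  return row[n - 1]
}
```

The update order matters: sweeping left to right guarantees `row[c - 1]` is the current row while `row[c]` is still the previous row, which is exactly the iteration-order discipline described above.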
Walkthrough
Every frame of the concept diagram, laid out top-to-bottom so you can scroll through the algorithm at your own pace.
Climbing stairs: dp[i] = dp[i-1] + dp[i-2]. Goal: count ways to reach step n.
Base case: dp[0] = 1 (one way to be at the start).
Base case: dp[1] = 1.
dp[2] = dp[1] + dp[0] = 1 + 1 = 2.
dp[3] = dp[2] + dp[1] = 2 + 1 = 3.
dp[4] = dp[3] + dp[2] = 3 + 2 = 5.
dp[5] = dp[4] + dp[3] = 5 + 3 = 8.
dp[6] = dp[5] + dp[4] = 8 + 5 = 13.
dp[7] = 13 + 8 = 21. O(n) with memoized subproblems.
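The frames above can be reproduced with a short table-filling loop. This sketch (function name `stairsTable` is illustrative) uses the same dp[0] = 1, dp[1] = 1 convention:

```typescript
// Fill dp[0..n] in index order: each dp[i] reads only already-computed
// cells dp[i-1] and dp[i-2], matching the walkthrough frames exactly.
function stairsTable(n: number): number[] {
  const dp = new Array<number>(n + 1).fill(0)
  dp[0] = 1 // one way to stand at the start
  if (n >= 1) dp[1] = 1
  for (let i = 2; i <= n; i += 1) {
    dp[i] = dp[i - 1] + dp[i - 2]
  }
  return dp
}
// stairsTable(7) → [1, 1, 2, 3, 5, 8, 13, 21]
```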
Tradeoffs
When this technique wins, when it loses, and what to reach for instead.
Choose this when: the locally best choice does not guarantee a global optimum.
Tradeoff: Greedy is simpler and faster when it is valid.
Choose this when: subproblems overlap significantly.
Tradeoff: Backtracking may be simpler for constraint search without overlap.
Choose this when: branches reuse the same states.
Tradeoff: Divide-and-conquer fits independent subproblems better.
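The greedy-vs-DP tradeoff is concrete in coin change (also in the practice ladder below). This sketch (function name `minCoins` is illustrative) shows the DP that succeeds where greedy fails:

```typescript
// Minimum coins to make `amount`. Greedy "always take the largest coin"
// fails on coins [1, 3, 4] with amount 6: greedy picks 4+1+1 (3 coins),
// while the optimum is 3+3 (2 coins). The DP checks every coin at every
// sub-amount, so the local best choice never locks out the global best.
function minCoins(coins: number[], amount: number): number {
  const dp = new Array<number>(amount + 1).fill(Infinity)
  dp[0] = 0 // zero coins make amount 0
  for (let a = 1; a <= amount; a += 1) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] + 1 < dp[a]) {
        dp[a] = dp[a - coin] + 1
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount]
}
```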
In Production
Where this shows up at real companies and what the on-call engineer learned the hard way.
Speech and sequence models historically relied on DP-style decoding and alignment methods.
Takeaway DP remains practical in sequence optimization pipelines.
DNA/protein alignment uses classic DP matrices at scale.
Takeaway DP powers core scientific compute workflows.
Portfolio and allocation optimization often combines DP-style dynamic constraints.
Takeaway DP is valuable when decisions depend on prior cumulative state.
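The alignment-style matrices mentioned above share their shape with edit distance (also in the practice ladder). A minimal sketch, with the illustrative function name `editDistance`:

```typescript
// Edit distance: the same row-by-row matrix pattern used in
// alignment-style DP. dp[i][j] = minimum single-character edits
// (insert, delete, replace) to turn a[0..i) into b[0..j).
function editDistance(a: string, b: string): number {
  const m = a.length
  const n = b.length
  const dp: number[][] = Array.from({ length: m + 1 }, () =>
    new Array<number>(n + 1).fill(0)
  )
  for (let i = 0; i <= m; i += 1) dp[i][0] = i // delete all of a's prefix
  for (let j = 0; j <= n; j += 1) dp[0][j] = j // insert all of b's prefix
  for (let i = 1; i <= m; i += 1) {
    for (let j = 1; j <= n; j += 1) {
      if (a[i - 1] === b[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] // characters match: no edit
      } else {
        dp[i][j] = 1 + Math.min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1])
      }
    }
  }
  return dp[m][n]
}
```

Production aligners add scoring schemes and space compression on top, but the dependency structure is this same O(m*n) grid.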
Code
A clean reference implementation in TypeScript.
Classic 1D DP where state[i] depends on previous two states.
Complexity: Time O(n), Space O(1) with compression
Climbing stairs DP

```typescript
function climbStairs(stepCount: number): number {
  if (stepCount <= 2) return stepCount // base cases: 1 way for n=1, 2 ways for n=2
  let oneStepBefore = 2  // ways to reach step i-1 (starts as dp[2])
  let twoStepsBefore = 1 // ways to reach step i-2 (starts as dp[1])
  for (let stepIndex = 3; stepIndex <= stepCount; stepIndex += 1) {
    const currentWays = oneStepBefore + twoStepsBefore // dp[i] = dp[i-1] + dp[i-2]
    twoStepsBefore = oneStepBefore
    oneStepBefore = currentWays
  }
  return oneStepBefore
}
```
Watch Out
The bugs and edge cases that bite engineers most often. Skim before you ship.
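One pitfall worth a concrete sketch (the example and the function name `canPartitionSum` are illustrative; the problem matches "Partition Equal Subset Sum" from the practice ladder) is iteration direction in space-compressed 0/1 DP:

```typescript
// Classic 1D-knapsack bug: with a single dp row for 0/1 subset sum,
// iterating the sum FORWARD lets one item be counted multiple times.
// Iterate BACKWARD so dp[s - num] still reflects the state before the
// current item was considered.
function canPartitionSum(nums: number[], target: number): boolean {
  const dp = new Array<boolean>(target + 1).fill(false)
  dp[0] = true // the empty subset makes sum 0
  for (const num of nums) {
    for (let s = target; s >= num; s -= 1) { // backward: 0/1 semantics
      if (dp[s - num]) dp[s] = true
    }
  }
  return dp[target]
}
```

Flipping that inner loop to run forward silently turns 0/1 semantics into unbounded (reuse-allowed) semantics, which is exactly the kind of bug that passes small tests and fails later.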
Pro Tips
Small habits that pay off every time you reach for this pattern.
Pattern Recognition
Concrete signals that tell you this is the right pattern both at work and on LeetCode.
Practice
A curated easy-to-hard problem ladder. Each one reinforces a specific aspect of the pattern.
| # | Problem | Difficulty | Typical Complexity |
|---|---|---|---|
| 1 | Climbing Stairs | Easy | O(n) |
| 2 | Min Cost Climbing Stairs | Easy | O(n) |
| 3 | House Robber | Medium | O(n) |
| 4 | Coin Change | Medium | O(amount * coins) |
| 5 | Longest Increasing Subsequence | Medium | O(n log n) |
| 6 | Unique Paths | Medium | O(m*n) |
| 7 | Partition Equal Subset Sum | Medium | O(n*sum) |
| 8 | Edit Distance | Hard | O(m*n) |
| 9 | Burst Balloons | Hard | O(n^3) |
| 10 | Distinct Subsequences | Hard | O(m*n) |