
Big O Notation

Big O is a way to describe how the amount of work grows as the input gets bigger. It is less about exact milliseconds and more about the growth pattern.

Two axes

Time vs Space Complexity

Big O describes both runtime and memory growth. Both matter when comparing solutions.

Time Complexity

Estimates how the number of operations grows as input size grows. Linear search in an unsorted list is usually O(n); binary search on sorted data is O(log n).
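A minimal sketch of that contrast (the function names and sample data here are illustrative, not from the original):

```python
def linear_search(items, target):
    # O(n): may have to touch every element before finding the target.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):
    # O(log n): halves the remaining range each step (items must be sorted).
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 2000, 2))   # 1,000 sorted even numbers
print(linear_search(data, 1998))  # worst case: ~1,000 checks
print(binary_search(data, 1998))  # ~10 checks on the same data
```

Both return the same index; the difference is how many comparisons each needs as the list grows.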

Space Complexity

Estimates how the extra memory an algorithm needs grows with input size. In-place swaps often use O(1) extra space; merge sort usually needs O(n).
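A quick sketch of the two ends of that spectrum, assuming an in-place reverse (O(1) extra space) and the merge step of merge sort (O(n) extra space):

```python
def reverse_in_place(items):
    # O(1) extra space: swaps pairs inside the same list, no copy made.
    lo, hi = 0, len(items) - 1
    while lo < hi:
        items[lo], items[hi] = items[hi], items[lo]
        lo += 1
        hi -= 1
    return items

def merge_sorted(a, b):
    # The merge step of merge sort: builds a new output list, so the
    # extra memory grows with the combined input size, O(n).
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

print(reverse_in_place([1, 2, 3, 4]))      # [4, 3, 2, 1]
print(merge_sorted([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```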

Common Tradeoff

You can often trade memory for speed. A hash map can reduce repeated lookups from O(n) to about O(1) average time, but it uses extra memory to store keys and buckets.
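A sketch of that tradeoff using a Python set as the hash structure (the email data here is made up for illustration):

```python
# 100,000 hypothetical records to search through repeatedly.
emails = [f"user{i}@example.com" for i in range(100_000)]

def has_email_list(target):
    # O(n) per lookup: scans the list from the front every time.
    return target in emails

# Build the hash structure once: extra memory, but fast lookups after.
email_set = set(emails)

def has_email_set(target):
    # ~O(1) average per lookup: one hash probe instead of a scan.
    return target in email_set

print(has_email_list("user99999@example.com"))  # True, after ~100,000 comparisons
print(has_email_set("user99999@example.com"))   # True, after roughly one hash probe
```

The set duplicates every key in memory; that is the space you spend to make each repeated lookup cheap.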

Scale changes everything

Why It Matters

Two solutions can both work for 100 records, but one becomes painfully slow at 100,000.

Pick for tomorrow

Big O helps you choose solutions that stay fast as usage grows, not just ones that pass today.

Time and space

Teams compare both axes to choose the best compromise for their actual runtime and hardware constraints.

Daily decisions

Where It Is Used

Big O shows up in search, sort, API design, and scaling planning.

  • Search and sort

    Choosing the right algorithm for the data shape and size.

  • API and database

    Designing access patterns that scale with traffic and rows.

  • Scale planning

    Sizing feeds, analytics pipelines, and queue systems.

  • Code review

    Spotting nested loops and accidental quadratic blowups early.
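The code-review case is the easiest to show concretely. A hypothetical sketch of an accidental quadratic duplicate check next to the linear rewrite a reviewer might suggest:

```python
def find_duplicates_quadratic(items):
    # Nested comparison: roughly n * n checks. Fine at 100 items,
    # painful at 100,000.
    dupes = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b and a not in dupes:
                dupes.append(a)
    return dupes

def find_duplicates_linear(items):
    # One pass with a seen-set: each membership check is ~O(1) average,
    # so total work grows linearly with n.
    seen, dupes = set(), set()
    for item in items:
        if item in seen:
            dupes.add(item)
        seen.add(item)
    return sorted(dupes)

print(find_duplicates_quadratic([1, 2, 3, 2, 1]))  # [1, 2]
print(find_duplicates_linear([1, 2, 3, 2, 1]))     # [1, 2]
```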

Mental shortcuts

Real-World Metaphors

Concrete examples help Big O click faster when comparing solutions.

  • O(1) Reading the battery percentage shown on your phone screen right now.

    One direct read from a fixed location. It does not require scanning a list, so the steps stay constant.

  • O(log n) Playing a higher/lower guessing game from 1 to 1,000.

    Each guess removes about half of the remaining possibilities, so steps grow slowly even when n gets large.

  • O(n) Scanning every name in an attendance sheet until you find one person.

    In the worst case you check names one by one. If the list doubles, the checks roughly double too.

  • O(n log n) Organizing a big stack of cards by repeatedly splitting into smaller piles, ordering each pile, then combining.

    You touch all n cards across each round, and the number of rounds grows like log n because piles are repeatedly halved.

  • O(n²) Comparing every student in a class with every other student to find duplicate notes.

    Each item is compared with many others, creating a nested loop effect. Doubling n makes work jump by about 4x.

  • O(2ⁿ) Trying every possible yes/no feature combination for a product configuration.

    Each new yes/no choice doubles total combinations. The growth is exponential and becomes huge very quickly.

  • O(n!) Trying every possible seating order for guests around a dinner table.

    Adding one guest creates many new orderings at once, so possibilities explode even faster than exponential growth.

  • O(n + m) Reading two separate checklists: one with n items and one with m items, one pass each.

    Total work is the sum of both list lengths. You are not comparing every item in one list to every item in the other.
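The higher/lower guessing game above can be made concrete by counting guesses. A small sketch (the function is illustrative, not from the original):

```python
def guesses_needed(secret, n):
    # Play the higher/lower game optimally on the range 1..n and
    # count how many guesses it takes: each guess halves the range.
    lo, hi, steps = 1, n, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == secret:
            return steps
        if mid < secret:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

print(guesses_needed(777, 1_000))      # at most 10 guesses for n = 1,000
print(guesses_needed(777, 1_000_000))  # at most 20 guesses for n = 1,000,000
```

A thousandfold-larger range costs only about ten more guesses; that is the O(log n) growth pattern.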

When complexity wins

Small vs Large Scale Impact

At small scale, even inefficient code feels fast. At scale, complexity dominates cost and latency.

This is why O(n log n) sort strategies usually beat O(n²) once data gets big: the gap widens fast as input grows. The interactive explorer below makes the divergence visible at a glance.
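A back-of-envelope in code, assuming one modeled operation per unit of n log n or n² (the numbers are modeled operation counts, not measured timings):

```python
import math

# Compare modeled operation counts for the two sort families at
# increasing input sizes, and how many times more work n² implies.
for n in (100, 10_000, 1_000_000):
    n_log_n = int(n * math.log2(n))
    n_squared = n * n
    ratio = n_squared // n_log_n
    print(f"n={n:>9,}  n log n ~ {n_log_n:>13,}  n^2 = {n_squared:>16,}  ratio ~ {ratio:,}x")
```

The ratio itself grows with n, which is why the two curves look similar at small sizes and hopelessly far apart at large ones.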

Play with the curves

Interactive Explorer

Compare growth shapes side by side and inspect runtime estimates at any input size.

Chart axis mode

Shape mode plots the canonical Big O view from n=1 to n=32 so every curve's character is visible at once.


At the explorer's selected baseline of 1,000 items:

Complexity   Estimated operations           Rough runtime
O(1)         1                              0.000 ms
O(log n)     10                             0.000 ms
O(n)         1,000                          0.020 ms
O(n log n)   9,966                          0.199 ms
O(n²)        1,000,000                      20.00 ms (0.020 s)
O(2ⁿ)        478,590,233,456                ~2.66 hr
O(n!)        2,422,786,846,761,135,000      ~1,536.52 years