
Exponential Explosions

"The greatest shortcoming of the human race is our inability to understand the exponential function." Albert Bartlett, physicist


The Notation Came First. The Scale Came Later.

When I first saw $O(2^n)$ written on a board, I understood what it meant in the same way I understand what a billion dollars means: technically, clearly, and without any real feeling for it.

$O(2^n)$. Okay. An algorithm whose steps double with every additional input element. I wrote it in my notes. I knew it was "bad." I kept going.

It took a concrete example to stop me in my tracks. Not because the math changed, but because I finally felt the size of it. That's what this post is about: not the notation, which you already have from the last post, but the reality underneath it. The moment when "this grows fast" becomes "this is completely, permanently out of reach."


A Quick Recap Before We Go Further

In the last post, we established that $O(n)$ means an algorithm's steps grow proportionally with the input, and $O(n^2)$ means they grow with the square of the input. Both of those are polynomial, belonging to a family of functions that, even when large, grow in a way computers can eventually handle.

$$O(n), \quad O(n^2), \quad O(n^3), \quad O(n^{100})$$

These are all polynomial. They can get slow. For big enough inputs, even $O(n^{100})$ is painful. But they grow in a fundamentally manageable way.

Then there's the other family:

$$O(2^n), \quad O(3^n), \quad O(n!)$$

These are exponential (and worse). And the gap between these two families is not just large. It is one of the most dramatic differences in all of mathematics. Once you see it properly, you cannot unsee it.


The Chessboard Story

Let me tell you a story that's been passed around for centuries, because it earns its place every single time.

A king wanted to reward the inventor of chess. The inventor made what seemed like a humble request: place one grain of wheat on the first square of the chessboard, two grains on the second, four on the third, eight on the fourth, doubling every square, until all 64 squares are filled.

The king laughed. A little grain of wheat? Gladly.

He stopped laughing somewhere around square 40.

Let's do the math. On square $n$, the number of grains is $2^{n-1}$.

| Square | Grains on that square | Running total |
|--------|-----------------------|---------------|
| 1      | 1                     | 1             |
| 10     | 512                   | 1,023         |
| 20     | 524,288               | ~1 million    |
| 32     | 2,147,483,648         | ~4.3 billion  |
| 40     | $\approx 549$ billion | ~1.1 trillion |
| 64     | $\approx 9.2 \times 10^{18}$ | $\approx 1.8 \times 10^{19}$ |

The total across all 64 squares is:

$$\sum_{k=0}^{63} 2^k = 2^{64} - 1 \approx 1.8 \times 10^{19}$$

That is roughly 18 quintillion grains of wheat: on the order of a thousand times the entire world's annual wheat harvest. From one grain. Doubled 63 times.
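If you want to check these figures yourself, the whole table collapses into a few lines of Python. This is a quick sketch, nothing more:

```python
# Grains on square n of the chessboard: 2^(n-1).
grains = [2 ** (n - 1) for n in range(1, 65)]

print(grains[9])          # square 10: 512 grains
print(sum(grains[:20]))   # running total after square 20: 1,048,575 (~1 million)
print(sum(grains))        # all 64 squares: 18,446,744,073,709,551,615 == 2^64 - 1

assert sum(grains) == 2 ** 64 - 1
```

Python's integers are arbitrary precision, so the final total comes out exact rather than as a floating-point approximation.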

Notice what happened

For the first 20 squares, the numbers are almost comfortable. One million feels manageable. The explosion doesn't feel like an explosion. It feels like ordinary growth. Then somewhere in the middle, something shifts, and by the end the numbers are so large they stop feeling like numbers at all.

That's exactly how exponential time complexity behaves in an algorithm. The small inputs are fine. Then, suddenly, they're not.


Now Bring It Into the Classroom

Here's where it clicked for me. In CS class at FC College, we were looking at a simple problem: given a set of $n$ numbers, does any subset of them add up to exactly zero?

It sounds innocent. The brute-force approach: check every possible subset. For $n$ numbers, the number of subsets is $2^n$.

With 10 numbers, there are $2^{10} = 1{,}024$ subsets to check. A modern computer can do this in a fraction of a millisecond. You'd never even notice the computation happening.
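As a sketch, the brute force might look like this in Python. The function name `has_zero_subset` and the sample lists are mine, not from the class:

```python
from itertools import combinations

def has_zero_subset(nums):
    """Check every non-empty subset: there are 2^n of them for n numbers,
    so this is O(2^n) in the worst case."""
    for size in range(1, len(nums) + 1):
        for subset in combinations(nums, size):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([3, -1, 8, -2, 5]))  # True: 3 + (-1) + (-2) = 0
print(has_zero_subset([1, 2, 4, 8]))       # False: all positive, no subset cancels
```

Nothing about this code is clever or wasteful; the exponential cost is built into the problem statement "check every subset," not into any particular implementation detail.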

Now scale it up. With 50 numbers, there are $2^{50} \approx 1.1 \times 10^{15}$ subsets; at a billion subsets per second, that's about 13 days of computation. With 100 numbers, $2^{100}$ subsets would take on the order of $10^{13}$ years, thousands of times the age of the universe. The numbers don't change. The algorithm doesn't change. We added 50 more inputs, just 50, and went from "done in 13 days" to "longer than the universe has existed." That is the nature of $O(2^n)$.


Polynomial vs. Exponential: The Visual Truth

Let's see this side by side for a cleaner picture.

| Input size $n$ | $O(n^2)$ steps | $O(n^3)$ steps | $O(2^n)$ steps |
|----------------|----------------|----------------|----------------|
| 10             | 100            | 1,000          | 1,024          |
| 20             | 400            | 8,000          | 1,048,576      |
| 30             | 900            | 27,000         | ~1 billion     |
| 50             | 2,500          | 125,000        | ~$10^{15}$     |
| 100            | 10,000         | 1,000,000      | ~$10^{30}$     |
| 300            | 90,000         | 27,000,000     | ~$10^{90}$     |

At $n = 10$, $O(2^n)$ and $O(n^2)$ are actually close: $1{,}024$ versus $100$. Barely a factor of 10. This is the trap. Small inputs make everything look similar.

By $n = 300$, $O(n^2)$ gives us $90{,}000$ steps. Trivial. $O(2^n)$ gives us a 91-digit number. The gap between them isn't a gap anymore. It's a different mathematical universe.
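None of the table's numbers need to be taken on faith. Python's arbitrary-precision integers will produce them on demand (a throwaway sketch):

```python
for n in (10, 20, 30, 50, 100, 300):
    # n**2 and n**3 stay small; 2**n is exact as a Python int,
    # shown in scientific notation only for readability.
    print(f"n={n:>3}   n^2={n**2:>9,}   n^3={n**3:>12,}   2^n~{2**n:.1e}")
```

Try printing `2 ** 300` in full once; seeing the raw digits scroll by makes the point better than any table.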

An important clarification

When I say $O(2^n)$ is "practically impossible," I mean that for large enough inputs, no advancement in hardware will save you. Moore's Law, doubling processor speed every two years, adds only about one to the effective input size you can handle every two years. If you can solve $n = 60$ today, in two years you might solve $n = 61$. In twenty years, maybe $n = 70$. The problem has you outpaced forever.
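To make that concrete, here's a small sketch. The steps-per-second rate and the one-day budget are made-up round numbers, not benchmarks:

```python
import math

def largest_solvable_n(steps_per_second, seconds):
    """Largest n such that 2^n <= steps_per_second * seconds."""
    return math.floor(math.log2(steps_per_second * seconds))

one_day = 86_400  # seconds

print(largest_solvable_n(1e9, one_day))   # 46 at a billion steps/second
print(largest_solvable_n(2e9, one_day))   # doubling the machine's speed buys exactly one: 47
print(largest_solvable_n(1e12, one_day))  # a machine 1,000x faster buys only ~10 more: 56
```

Every doubling of hardware speed adds exactly one to $n$, because $2^{n+1}$ is exactly twice $2^n$. That is why the exponential side of the table never stops winning.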


Why This Is the Heart of P vs NP

We're now holding the piece that makes the whole P vs NP question feel real.

Remember the two boxes from Post #1? $\mathcal{P}$ is the box of problems a computer can solve fast, and $\mathcal{NP}$ is the box of problems a computer can verify fast. When we say "fast," we mean polynomial time: $O(n)$, $O(n^2)$, $O(n^3)$, anything in that family.

The question is whether the solving and verifying boxes are the same.

What we've just seen is why that question is so dramatic. For many problems, the best known algorithm to find a solution runs in exponential time. The best known algorithm to check a solution runs in polynomial time. That's a gap the size of the universe. Literally.

If $\mathcal{P} = \mathcal{NP}$, it would mean that gap is an illusion: a polynomial-time algorithm exists for those problems, we just haven't found it yet. If $\mathcal{P} \neq \mathcal{NP}$, the gap is real, and some problems are permanently, mathematically beyond reach for any computer that could ever exist.

Think of it as two categories, separated by an uncrossable line:

  • Polynomial time: reachable. Solvable. Something computers can actually do.
  • Exponential time: permanently beyond reach. No hardware upgrade fixes it. No clever programming trick closes the gap.

That line between those two categories is where the entire P vs NP problem lives.


One More Thing Worth Sitting With

There's something that stuck with me when all of this finally landed. The chessboard king wasn't stupid. He looked at one grain of wheat, then two, then four, then eight. And it seemed fine. Nothing in his experience prepared him for what happens 60 doublings later.

We're in the same position with complexity. Small inputs are always fine. The explosion is always somewhere ahead. And by the time you realize you've walked into exponential territory, the numbers are already beyond anything you can reason about intuitively.

The only protection is knowing the shape of the function before you build on it. Which is exactly why complexity theory exists, and why we're building this understanding from the ground up.


What's Next

Now that we understand what fast and impossibly slow actually mean in terms of runtime, we're ready to draw the first real boundary. In the next post, we formally define the class $\mathcal{P}$: the collection of all problems that can be solved in polynomial time. We'll look at real examples, understand what it means for a problem to "belong" to a complexity class, and start to see why $\mathcal{P}$ is the boundary between what computation can do and what it can't.

See you there. 🚀


Azeon is an open documentation project. If something in this post is unclear or incorrect, open an issue on the GitHub repository linked in the sidebar.