Exponential Explosions
"The greatest shortcoming of the human race is our inability to understand the exponential function." Albert Bartlett, physicist
The Notation Came First. The Scale Came Later.
When I first saw $O(2^n)$ written on a board, I understood what it meant in the same way I understand what a billion dollars means: technically, clearly, and without any real feeling for it.
$O(2^n)$. Okay. An algorithm whose steps double with every new input. I wrote it in my notes. I knew it was "bad." I kept going.
It took a concrete example to stop me in my tracks. Not because the math changed, but because I finally felt the size of it. That's what this post is about: not the notation, which you already have from the last post, but the reality underneath it. The moment when "this grows fast" becomes "this is completely, permanently out of reach."
A Quick Recap Before We Go Further
In the last post, we established that $O(n)$ means an algorithm's steps grow proportionally with the input, and $O(n^2)$ means they grow with the square of the input. Both of those are polynomial, belonging to a family of functions that, even when large, grow in a way computers can eventually handle.
Think of $n$, $n \log n$, $n^2$, $n^3$. These are all polynomial. They can get slow. For big enough inputs, even $O(n^3)$ is painful. But they grow in a fundamentally manageable way.
Then there's the other family: $2^n$, $n!$, $n^n$.
These are exponential (and worse). And the gap between these two families is not just large. It is one of the most dramatic differences in all of mathematics. Once you see it properly, you cannot unsee it.
The Chessboard Story
Let me tell you a story that's been passed around for centuries, because it earns its place every single time.
A king wanted to reward the inventor of chess. The inventor made what seemed like a humble request: place one grain of wheat on the first square of the chessboard, two grains on the second, four on the third, eight on the fourth, doubling every square, until all 64 squares are filled.
The king laughed. A little grain of wheat? Gladly.
He stopped laughing somewhere around square 40.
Let's do the math. On square $n$, the number of grains is $2^{n-1}$.
| Square | Grains on that square | Running total |
|---|---|---|
| 1 | 1 | 1 |
| 10 | 512 | 1,023 |
| 20 | 524,288 | ~1 million |
| 32 | ~2.1 billion | ~4.3 billion |
| 40 | ~550 billion | ~1.1 trillion |
| 64 | ~9.2 quintillion | ~18.4 quintillion |
The total across all 64 squares is:

$$2^{64} - 1 = 18{,}446{,}744{,}073{,}709{,}551{,}615$$
That is roughly 18 quintillion grains of wheat: more than the entire world produces in several centuries at today's harvest rates. From one grain. Doubled 63 times.
For the first 20 squares, the numbers are almost comfortable. One million feels manageable. The explosion doesn't feel like an explosion. It feels like ordinary growth. Then somewhere in the middle, something shifts, and by the end the numbers are so large they stop feeling like numbers at all.
That's exactly how exponential time complexity behaves in an algorithm. The small inputs are fine. Then, suddenly, they're not.
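The doubling is easy to reproduce in a few lines of Python (a sketch; the function name is my own):

```python
# Chessboard doubling: square k holds 2**(k - 1) grains,
# so the running total after square k is 2**k - 1.
def total_grains(squares):
    total = 0
    grains = 1  # one grain on the first square
    for _ in range(squares):
        total += grains
        grains *= 2  # double on every square
    return total

print(total_grains(20))  # 1048575 -- still "comfortable"
print(total_grains(64))  # 18446744073709551615 -- not anymore
```

Python's arbitrary-precision integers are doing quiet heavy lifting here; in a fixed-width language the 64-square total would already overflow a signed 64-bit integer.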
Now Bring It Into the Classroom
Here's where it clicked for me. In CS class at FC College, we were looking at a simple problem: given a set of numbers, does any subset of them add up to exactly zero?
It sounds innocent. The brute-force approach: check every possible subset. For $n$ numbers, the number of subsets is $2^n$.
- $n = 10$ (small)
- $n = 50$ (medium)
- $n = 100$ (large)
- $n = 300$ (realistic for cryptography)
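A minimal brute-force sketch of that check in Python (the function name `has_zero_subset` is mine, not from any library):

```python
from itertools import combinations

def has_zero_subset(numbers):
    """Try every non-empty subset: 2**n - 1 of them for n numbers."""
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([3, -7, 1, 6]))  # True: 6 + 1 + (-7) == 0
print(has_zero_subset([2, 5, 9]))      # False: every subset sums to more than 0
```

Nothing in the loop is wasteful. The cost is inherent: in the worst case it must visit all $2^n - 1$ non-empty subsets.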
With 10 numbers, there are $2^{10} = 1{,}024$ subsets to check. A modern computer can do this in a fraction of a millisecond. You'd never even notice the computation happening.
With 50 numbers, there are $2^{50}$ subsets. That's over a quadrillion. A computer checking a billion subsets per second would need roughly 13 days to finish.
With 100 numbers (still a completely ordinary input size), there are $2^{100} \approx 1.3 \times 10^{30}$ subsets. At a billion checks per second, finishing would take roughly $4 \times 10^{13}$ years. The current age of the universe is about $1.4 \times 10^{10}$ years. We would need thousands of universe-lifetimes to check every subset.
Real-world cryptography uses numbers with hundreds of digits. At $n = 300$, we have $2^{300} \approx 2 \times 10^{90}$ subsets. There are estimated to be around $10^{80}$ atoms in the observable universe. This number is larger than that. No computer, no supercomputer cluster, no hypothetical computer the size of a planet could ever work through this before the stars go cold.
The numbers don't change. The algorithm doesn't change. We added 50 more inputs, just 50, and went from "done in 13 days" to "longer than the universe has existed." That is the nature of $O(2^n)$.
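You can recreate these back-of-the-envelope runtimes yourself. The billion-checks-per-second rate is the same assumption used above:

```python
CHECKS_PER_SECOND = 1e9  # assumed rate, as in the text
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for n in (10, 50, 100, 300):
    seconds = 2 ** n / CHECKS_PER_SECOND
    days = seconds / SECONDS_PER_DAY
    years = seconds / SECONDS_PER_YEAR
    print(f"n = {n:>3}: {seconds:.3g} s = {days:.3g} days = {years:.3g} years")
```

Running it shows the same cliff: microseconds at $n = 10$, about 13 days at $n = 50$, and on the order of $10^{13}$ years at $n = 100$.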
Polynomial vs. Exponential: The Visual Truth
Let's see this side by side for a cleaner picture.
| Input size $n$ | $n^2$ steps | $n^3$ steps | $2^n$ steps |
|---|---|---|---|
| 10 | 100 | 1,000 | 1,024 |
| 20 | 400 | 8,000 | 1,048,576 |
| 30 | 900 | 27,000 | ~1 billion |
| 50 | 2,500 | 125,000 | ~$10^{15}$ |
| 100 | 10,000 | 1,000,000 | ~$10^{30}$ |
| 300 | 90,000 | 27,000,000 | ~$10^{90}$ |
At $n = 10$, $n^2$ and $2^n$ are actually close: 100 versus 1,024. Barely a factor of 10. This is the trap. Small inputs make everything look similar.
By $n = 300$, $n^3$ gives us 27 million steps. Trivial. $2^n$ gives us a number 91 digits long. The gap between them isn't a gap anymore. It's a different mathematical universe.
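Because Python integers have no fixed width, you can confirm the gap directly rather than take it on faith:

```python
n = 300
polynomial_steps = n ** 3   # 27,000,000 -- fits in any machine word
exponential_steps = 2 ** n  # too large to write out comfortably

print(polynomial_steps)            # 27000000
print(len(str(exponential_steps))) # 91 -- the step count itself is 91 digits long
```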
When I say $O(2^n)$ is "practically impossible," I mean that for large enough inputs, no advancement in hardware will save you. Moore's Law doubling processor speed every two years adds only about one to the effective input size you can handle per doubling. If you can solve $n = 100$ today, in two years you might solve $n = 101$. In 10 years, maybe $n = 105$. The problem has you outpaced forever.
Why This Is the Heart of P vs NP
We're now holding the piece that makes the whole P vs NP question feel real.
Remember the two boxes from Post #1? $P$ is the box of problems a computer can solve fast, and $NP$ is the box of problems a computer can verify fast. When we say "fast," we mean polynomial time: $O(n)$, $O(n^2)$, $O(n^3)$, anything in that family.
The question is whether the solving and verifying boxes are the same.
What we've just seen is why that question is so dramatic. For many problems, the best known algorithm to find a solution runs in exponential time. The best known algorithm to check a solution runs in polynomial time. That's a gap the size of the universe. Literally.
If $P = NP$, it would mean that gap is an illusion: a polynomial-time algorithm exists for those problems, we just haven't found it yet. If $P \neq NP$, the gap is real, and some problems are permanently, mathematically beyond reach for any computer that could ever exist.
Think of it as two categories, separated by an uncrossable line:
- Polynomial time: reachable. Solvable. Something computers can actually do.
- Exponential time: permanently beyond reach. No hardware upgrade fixes it. No clever programming trick closes the gap.
That line between those two categories is where the entire P vs NP problem lives.
One More Thing Worth Sitting With
There's something that stuck with me when all of this finally landed. The chessboard king wasn't stupid. He looked at one grain of wheat, then two, then four, then eight. And it seemed fine. Nothing in his experience prepared him for what happens 60 doublings later.
We're in the same position with complexity. Small inputs are always fine. The explosion is always somewhere ahead. And by the time you realize you've walked into exponential territory, the numbers are already beyond anything you can reason about intuitively.
The only protection is knowing the shape of the function before you build on it. Which is exactly why complexity theory exists, and why we're building this understanding from the ground up.
What's Next
Now that we understand what fast and impossibly slow actually mean in terms of runtime, we're ready to draw the first real boundary. In the next post, we formally define the class $P$: the collection of all problems that can be solved in polynomial time. We'll look at real examples, understand what it means for a problem to "belong" to a complexity class, and start to see why polynomial time is the boundary between what computation can do and what it can't.
See you there. 🚀
Azeon is an open documentation project. If something in this post is unclear or incorrect, open an issue on the GitHub repository linked in the sidebar.