Paralysis by Analysis
Don't let perfection be the enemy of good
Here is a scene that plays out thousands of times a day across investing forums, YouTube comment sections, and Discord servers full of people who’ve just discovered DCF models.
Someone has built a spreadsheet. It has tabs for revenue growth, margin assumptions, reinvestment rates, beta calculations, terminal value, sensitivity tables, and a Monte Carlo simulation they copied from a Reddit post. It has taken them three weekends. They are very proud of it.
The output is garbage.
Not because they’re stupid or because valuation is impossibly hard. But because they’ve mistaken complexity for rigour, and volume of inputs for quality of thought. They’ve built an elaborate machine for laundering bad assumptions into official-looking numbers.
The Complexity Trap
There is a peculiar anxiety that grips new investors when they sit down to value a company. The future is uncertain and the variables are many. The stakes feel high and the natural response to that anxiety is to build something bigger - more inputs, more tabs, more decimal places - as if complexity is the same thing as confidence.
It isn’t. It’s usually the opposite.
Aswath Damodaran, who has spent decades teaching valuation at NYU and is probably the most rigorous living public thinker on the subject, is unambiguous on this:
“The most dangerous words in valuation are ‘I built a very sophisticated model.’”
His principle of parsimony - essentially Occam’s razor applied to finance - holds that simpler models, built on solid assumptions, consistently outperform complex ones built on shaky ones. An estimate of an aggregate operating margin, he argues, is likely to be more accurate than the sum of seventeen individually modelled line items, because each of those line items carries its own error, and those errors, like everything in investing, compound.
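To see how quickly small errors compound, here is a toy sketch in Python. The numbers are invented for illustration: a single percentage point of optimism in an annual growth assumption, left to run for a decade.

```python
# Toy illustration (invented numbers): a one-point bias in assumed growth
# compounds into a much larger error over a forecast horizon.
base_revenue = 100.0
true_growth = 0.05      # what actually happens
assumed_growth = 0.06   # a modest-looking single point of optimism
years = 10

true_rev = base_revenue * (1 + true_growth) ** years
model_rev = base_revenue * (1 + assumed_growth) ** years

print(f"Year-{years} revenue, actual:   {true_rev:.1f}")
print(f"Year-{years} revenue, modelled: {model_rev:.1f}")
print(f"Overstatement: {model_rev / true_rev - 1:.1%}")
```

Now imagine seventeen line items, each carrying its own version of that bias, all summed together. That is the case for estimating the aggregate directly.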
This isn’t laziness dressed up as philosophy. It’s a hard-won recognition that the limiting factor in valuation is almost never the sophistication of your model. It’s the quality of your assumptions about the future. And no amount of Excel wizardry changes that.
Damodaran has said,
“If you strip it down, there is no theory in valuation. It’s the simplest of all exercises.”
The mechanics - discounting cash flows, estimating growth rates, calculating a cost of capital - are genuinely not that hard. What’s hard is the forecasting. What’s hard is confronting real uncertainty about how a business will actually evolve. And the response to that difficulty shouldn’t be to add more tabs - it should be to think harder about fewer, better assumptions.
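The mechanics really are that simple. A bare-bones sketch, with made-up numbers - five years of free cash flow, a cost of capital, and a Gordon-growth terminal value - fits in a dozen lines:

```python
# Minimal DCF sketch - illustrative figures only, not a recommendation.
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of explicit cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

# Five years of free cash flow, a 9% cost of capital, 2.5% perpetual growth.
value = dcf_value([100, 108, 116, 124, 132], 0.09, 0.025)
print(f"Estimated value: {value:,.0f}")
```

Everything difficult lives in the four numbers you feed it, not in the arithmetic.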
There Is No Perfect Research
There’s a related trap, and it’s the one that stops people investing at all. The belief that if you just read one more annual report, model one more scenario, or wait for one more data point, you’ll finally have enough information to act with confidence.
You won’t. Unfortunately, the confidence never comes. And while you’re waiting for it, the opportunity often passes.
Damodaran is clear on this too - all valuations are biased, always:
“At the risk of stating the obvious, all valuations are biased, with the only questions being to what degree and in which direction.”
The analyst who believes they’ve achieved objectivity is the most dangerous kind - they’ve stopped questioning their own assumptions. The goal isn’t a perfect, bias-free valuation. It’s an honest one, where you’ve named your assumptions, tested them against reality, and remained genuinely open to being wrong.
“Success in investing comes not from being right but from being wrong less often than everyone else.”
Paralysis by analysis is, in this sense, its own form of bias. It feels like diligence. It looks like thoroughness. But it’s often just fear dressed up in spreadsheet format - a way of avoiding the discomfort of committing to a position that might turn out to be wrong. Damodaran describes the unhealthy responses to uncertainty in valuation: paralysis and denial, mental shortcuts, herding, outsourcing. All of them, at their root, are ways of not doing the uncomfortable thing, which is making a judgement call and standing behind it.
Put Rubbish In, Get Rubbish Out
Which brings us to the finance YouTube industrial complex.
There is a genre of investing content - enormously popular, regularly featured in algorithmic recommendations, reliably generating hundreds of thousands of views - that works as follows. Take a free DCF template and plug in revenue growth assumptions from a research note. Use a discount rate that feels about right, then input a terminal growth rate that sometimes implies the company will eventually be larger than the global economy. Press calculate and marvel at the number that comes out.
The number always justifies buying the stock. The creator is always very bullish. The comments are always full of people asking what brokerage to use.
This would be entertaining if it were harmless. It isn’t always. The inputs going into these models are often wrong in ways that aren’t random - they’re systematically optimistic, because optimism gets clicks. And as Damodaran and the Morningstar analysts who cite him are fond of noting, financial models are garbage in, garbage out.
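The terminal-growth absurdity is easy to demonstrate. A hypothetical sketch - a company starting at 0.1% of GDP, assumed to grow forever faster than the economy - shows where that assumption has to end up:

```python
# If a company grows forever at g_company > g_economy, its share of GDP
# rises without limit: it must eventually BE the economy.
# Hypothetical starting point and rates, for illustration only.
company_share = 0.001   # company is 0.1% of GDP today
g_company = 0.05        # terminal growth plugged into the model
g_economy = 0.035       # long-run nominal growth of the economy

years = 0
while company_share < 1.0:
    company_share *= (1 + g_company) / (1 + g_economy)
    years += 1

print(f"The company overtakes the entire economy in about {years} years")
```

The date is far off, but the logic is broken today: a perpetual growth rate above the economy’s is not a forecast, it’s a contradiction.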
“As data and tools have improved, valuations have become less cohesive and meaningful, because we’ve lost connections between stories and numbers that used to exist.”
More tools and less thinking = worse results.
The logical endpoint of this approach was illustrated recently when one prominent investing YouTuber - someone with a substantial following built on precisely this kind of model-heavy content - disclosed that they hadn’t actually made any money in the stock market over the preceding five years, a period in which simply holding a broad index fund would have roughly doubled your money. The admission was remarkable not because it was shameful (investing is hard; everyone underperforms sometimes) but because it came from someone who had spent years presenting themselves as an authority on valuation. The models were impressive and the returns were not. At some point you have to ask: what were the models actually for?
The Bottom Line
The answer Damodaran keeps returning to is not more sophistication. It’s more honesty.
Be explicit about your assumptions. Name the story you’re telling about the company, and then ask whether the numbers in your model actually reflect that story. If your DCF shows high growth, rapid margin expansion, and low risk simultaneously, interrogate that - because businesses that achieve all three at once are extremely rare, and your model probably just reflects what you want to be true.
Use aggregates and averages where they’re available rather than modelling every line item. Stress-test the inputs that matter most. Accept that your valuation will need to change as new information arrives - that’s not a flaw in the process, that’s the process working correctly.
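Stress-testing the inputs that matter most can be as simple as nudging one assumption at a time and seeing which one moves the valuation furthest. A sketch, with invented figures:

```python
# One-at-a-time stress test: bump each input while holding the others
# fixed, and see which assumption the valuation is most sensitive to.
# All figures are hypothetical.
def dcf_value(fcf, growth, discount, terminal_growth, years=5):
    pv = 0.0
    cf = fcf
    for t in range(1, years + 1):
        cf *= 1 + growth
        pv += cf / (1 + discount) ** t
    terminal = cf * (1 + terminal_growth) / (discount - terminal_growth)
    return pv + terminal / (1 + discount) ** years

base = dict(fcf=100, growth=0.06, discount=0.09, terminal_growth=0.025)
for name, bump in [("growth", 0.01), ("discount", 0.01), ("terminal_growth", 0.01)]:
    stressed = dict(base, **{name: base[name] + bump})
    delta = dcf_value(**stressed) / dcf_value(**base) - 1
    print(f"+1pp {name:16s} -> value changes {delta:+.1%}")
```

If one percentage point on a single input swings the answer by double digits, that input is where your thinking time belongs - not in adding more tabs.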
And above all, resist the temptation to add complexity as a response to uncertainty. The uncertainty doesn’t go away when you add more tabs. You just hide it better, and hiding it is the most dangerous thing you can do. A simple model with honest assumptions is not just easier to build. It’s easier to defend, easier to update, and far less likely to produce the kind of confidently wrong output that leads people to build large YouTube audiences while losing money in one of the strongest bull markets in history.
Damodaran’s shortest summary of all of this:
“Don’t look for precision.”
The irony is that the investors who most obsessively pursue it are often the ones who end up furthest from it.
This post is sponsored by Trading 212.
If you’re looking for a new platform to start or continue your investment journey, you should check out Trading 212. You can sign up using the code “FTSE” to get a free share worth up to £100, or just click on this link:
https://www.trading212.com/Jdsfj/FTSE
Terms Apply. All content is for informational purposes only and is not investment advice. Trading 212 is a platform for investing, and as with any investment, your capital is at risk.


Reading the first paragraph, you could have been talking directly about me 🤣
But as I used to build and run models in my engineering past, I know exactly what you mean. The model is ultimately just a bunch of equations, intended to aggregate, approximate and simplify reality by stripping out "unnecessary" detail. What counts as "unnecessary" is often down to the subjective assessment of the modeller.
The model can't, by itself, know whether the output is sensible. Most of my time as a modeller was spent trying to work out why the model wouldn't run or converge cleanly, or where runtime was being lost, or checking how the model's controllers had matched target without violating my imposed constraints. Only then would I look at the actual outputs and try to draw inferences, and that was often the quickest and easiest part of the job.
With financial models, I try to take the same approach: Are there any strange and difficult-to-justify implications of your model, like inconsistencies between revenue, market share and size of total addressable market? If you constrain total market size and market share, what kind of revenue growth does that imply?
Do you have any anomalies in the returns on capital implied by your modelled results by trying to match revenue, earnings and operating profits? Are your implied tax rates reasonable? Can you tally your modelled earnings and cash flow with your balance sheet? Are you even looking?
If you are running your model without even looking at these factors, you aren't checking that your model is producing viable results. But that doesn't mean your model is wrong.
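That kind of consistency check is easy to automate. A sketch with hypothetical figures - cap the total market size and the achievable share, then see when the model's assumed growth breaches the cap:

```python
# Sanity check sketch (hypothetical figures): does the modelled revenue
# path stay consistent with a capped market size and market share?
tam_today = 50e9          # total addressable market, today
tam_growth = 0.04         # market grows 4% a year
max_share = 0.30          # assume the company can never exceed 30% share

modelled_revenue = 5e9    # company revenue today
modelled_growth = 0.25    # the model assumes 25% growth every year

for year in range(1, 11):
    tam = tam_today * (1 + tam_growth) ** year
    revenue = modelled_revenue * (1 + modelled_growth) ** year
    if revenue > max_share * tam:
        print(f"Year {year}: implied share {revenue / tam:.0%} "
              f"breaches the {max_share:.0%} cap - revisit the growth assumption")
        break
```

The model itself never complains; you have to go looking for the contradiction.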
As George Box said, all models are wrong, but some are useful.