Predictably Bad Investments
Despite being littered with incredibly intelligent practitioners, venture capital is not an industry I would characterize as particularly interested in scholarship. In many corners there's a distinct allergy toward academic-leaning approaches, a sense that such commentary is lesser because it originates outside of operations. It's the same type of criticism we see from athletes who don't like the truths spoken by those who never played.
One key exception to this dearth is Jerry Neumann, who is relatively unique in being both a long-time independent VC and a regular university lecturer. I've referenced him multiple times throughout the newsletter not only because he writes “venture scholarship” on his own newsletter (and historically on his site of the same name) but because he (a) spends significant time unpacking (often down to the mathematical studs) the realities of venture, devoid of the notorious narratives that dominate, and (b) is a really good writer!

Anyway, he wrote a couple of pieces back in 2017 that have stuck with me since. The first (chronologically, at least), entitled “On being special in the venture business”, starts from an uncomfortable question that he, and so many of us with any modicum of success, inevitably encounters: what makes you so special?
Stumped, he turns to the indefatigable Michael Mauboussin:
“Michael Mauboussin said you can tell skill from luck by asking yourself “can you lose on purpose?” This is an amazing question. In venture the answer is, trivially, yes.”
This is a genuinely interesting answer to a genuinely interesting question! He goes on,
“There are two kinds of pitches. Those that are clearly bad ideas, and those where it’s not clear at all if it’s a good idea or a bad idea. Investing in the former will lose you money. Investing in the latter might lose you money or might make you money. Skill is distinguishing between the two. Then luck comes into play.”
Ultimately the goal, as he sees it, is solving the puzzle of “which of this corpus of potential investments is a no?”
“Figuring out the nos from the maybes is, more than anything else, like solving a puzzle. The puzzle is different each time. Your job is solving the puzzles.”
Neumann chased this appetizer six months later with a more detailed, process-oriented approach in “Ruling out rather than ruling in”. He places an initial stake in the ground that, for some at least, might sound a bit controversial:
“I don’t believe in gut-level decisions. Having a bad feeling about something might reflect some sort of internalized rules, but there’s no real advantage to keeping them internalized, laziness aside. Getting them out in the open allows you to reason about them.”
And as we already established from his previous piece, every non-no decision always has a distinct element of luck:
“Creating a process that picks winners is the same thing as creating a machine that predicts the future. I don’t believe that’s possible.”
And thus, he establishes his approach that also titles the piece:
“The beauty of constraints is that they rule things out, they don’t rule things in. They create a murky but bounded space of maybe that allows for ideas no process I know of, other than human creativity, could come up with.”
To avoid gut-level decisions and better systematize (or at least document) his in-place systems, Neumann built the following schema for such a “rule out, not in” approach:
We can extend this operational framework into a more mathematically rigorous approach as well. Data scientist Diag Davenport effectively codified Neumann’s “focus on the true negatives” approach in his 2022 paper “Predictably Bad Investments”. The paper is absolutely worth reading in full for the clarity of thinking alone, but in short, Davenport demonstrates via portfolio bootstrapping not only that carving off the bottom of the returns distribution significantly improves returns (obviously) but that such “predictably bad investments” are algorithmically identifiable.
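To make the flavor of that exercise concrete, here is a minimal sketch in Python. It is emphatically not Davenport's model or data: the features, outcomes, and thresholds below are synthetic and invented purely to show the shape of the argument, i.e., flag the likely-bad deals with a simple classifier, then bootstrap portfolios with and without them.

```python
# Synthetic illustration only: none of this reflects Davenport's actual data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features observable at the time of investment.
X = rng.normal(size=(n, 5))

# Synthetic outcomes: wipeout probability loosely tied to the features,
# with a long-tailed distribution of multiples for the survivors.
p_bad = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
is_bad = rng.random(n) < p_bad
multiples = np.where(is_bad,
                     rng.uniform(0.0, 0.5, n),     # near-total losses
                     rng.lognormal(0.0, 1.5, n))   # long-tailed winners

# Fit a crude "is this predictably bad?" classifier on half the deals...
train = np.arange(n) < n // 2
test = ~train
clf = LogisticRegression().fit(X[train], is_bad[train])
flagged = clf.predict_proba(X[test])[:, 1] > 0.8   # only the high-confidence nos

# ...then bootstrap 30-company portfolios from the held-out half,
# with and without the flagged deals.
def mean_bootstrap_return(pool, k=30, trials=10_000):
    idx = rng.integers(0, len(pool), size=(trials, k))
    return float(pool[idx].mean(axis=1).mean())

all_deals = multiples[test]
screened = all_deals[~flagged]

print("avg portfolio multiple, all deals:     ", round(mean_bootstrap_return(all_deals), 3))
print("avg portfolio multiple, screened deals:", round(mean_bootstrap_return(screened), 3))
```

The screen never identifies the winners; it only removes deals that look like the bottom of the distribution, and the bootstrapped portfolio average improves anyway.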
Thus far, our discussion has focused on just a single application (venture capital), but I’ve frankly seen this same pattern showing up across various domains of late.
Removing Unforced Errors
I was, in my younger years, a competitive tennis player. I was a voracious consumer of “improvement content”: I watched most tournaments, especially every Pete Sampras match I could find; I assembled a binder of tactics-focused magazine clippings; and I thought deeply about how to build the perfect game. In my mind palace I was dominant.
On the court, my game never quite matched my mental model. Though I was certainly more scholarly than pretty much any opponent, and though that knowledge sporadically manifested as beautifully composed rallies, I quite often lost to “inferior” players. Why? Because they committed relatively few errors, and I committed far too many. Though aesthetically abominable at times, their “dink and dunk” strategy is, as much as purists like me bemoan it, a great way to secure positive returns (and simply improve over time).
There's a clear lesson here: we often lose (at whatever we're doing) by committing too many errors. One doesn't become elite simply by avoiding errors; at some point, such conservatism inevitably yields poorer returns. But to become elite one must first survive, and limiting errors at the outset is key to survival.
This is, I think, generalizable across domains:
As Neumann describes above, and as Davenport reinforces algorithmically, a more reliable path to great investment returns is identifying the bad opportunities and simply not investing in them.
In sports, committing fewer errors typically leads to more playing time, which enables greater growth and, once skill has leveled up, allows one to go for riskier, more error-prone opportunities.
In startups, I recall a great investor (probably Bill Gurley) commenting that most die of starvation, not competition. Since venture-backed companies require multiple subsequent funding rounds, part of the game is staying alive, which means committing fewer errors even while building speculative businesses. Commit fewer unforced errors, spend judiciously, stay alive (a quick runway sketch below makes this concrete).
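To make “spend judiciously, stay alive” concrete, here is a back-of-the-envelope runway calculation. All of the numbers are hypothetical and exist only to illustrate the mechanic:

```python
# Hypothetical figures for illustration only.
cash = 4_000_000          # dollars in the bank after the last round
monthly_burn = 250_000    # net dollars spent per month
months_to_close = 6       # rough time needed to raise and close the next round

runway_months = cash / monthly_burn
start_raising_within = runway_months - months_to_close

print(f"Runway: {runway_months:.0f} months")
print(f"Must start raising within: {start_raising_within:.0f} months")
```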
What we're really talking about here is what statisticians unfortunately call Type I and Type II errors¹. The more colloquial versions of these are far clearer: False Positives and False Negatives. These are ultimately errors in logic: incorrect decisions based on presently available information. We can contextualize these within a handy 2x2 matrix (as always):
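In the investing context, and using the standard definitions, the four cells of that matrix shake out as follows:
Invest and the company wins: True Positive.
Invest and the company fails: False Positive (Type I).
Pass and the company wins: False Negative (Type II).
Pass and the company fails: True Negative.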
Although conceptually the “limit errors” tactic is generalizable across domains, the acceptable rate of each error type differs significantly from domain to domain. This is directly related to the observed delta between the outcomes yielded by False Negatives and those yielded by False Positives in the given domain.
In tennis, for example, the downside of a False Negative (a shot not taken that was in fact the best shot) is simply a lost point. But that is an observation of the average, which belies the underlying distribution with which a player must contend. A False Negative decision up 40-Love in the opening game has significantly lower downside risk than the same decision down match point. In the latter case, the results are highly consequential and essentially binary: lose the point and the match is over; win the point and live to fight again.
There are effectively two strategies the player could employ here:
The more conservative approach of keeping the ball in play and goading an error by the opponent (low false positive rate).
The more liberal approach of forcing the issue in order to win (rather than not lose) the point (high false positive rate).
The decision chosen depends on many factors, though the manner in which the opponent has reached this match point is likely the most salient.
If he has played a more defensive game, committing relatively few errors in the process, a similarly conservative approach is not likely to secure the match point defense.
In contrast, if the opponent has reached this point with a high-volatility strategy that includes many errors, playing conservatively may in fact yield the highest expected return (the toy sketch below puts rough numbers on this).
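Here is that toy sketch. Every probability below is invented; the model is deliberately crude (one decisive exchange, everything else ignored) and exists only to show how the opponent's error rate flips the preferred strategy:

```python
# Toy model with made-up probabilities; only the comparison matters.
def p_win_point(strategy: str, opponent_error_rate: float) -> float:
    """Rough chance of winning the match point under each strategy."""
    if strategy == "conservative":
        own_error = 0.05   # you rarely miss when just keeping the ball in play...
        # ...so the point is mostly decided by whether the opponent errs.
        return (1 - own_error) * opponent_error_rate
    else:  # "aggressive"
        winner, own_error = 0.30, 0.45   # real chance of a clean winner, big chance of missing
        return winner + (1 - winner - own_error) * opponent_error_rate

for err in (0.15, 0.45):   # steady opponent vs. error-prone opponent
    print(f"opponent error rate {err:.0%}: "
          f"conservative={p_win_point('conservative', err):.2f}, "
          f"aggressive={p_win_point('aggressive', err):.2f}")
```

Against the steady opponent the aggressive play dominates; against the error-prone one the conservative play edges it out, which is exactly the intuition above.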
In investing, the asymmetry between False Negatives and False Positives is even more pronounced precisely because the skew between upside and downside is so great. It's a truism that, in the absence of leverage, the most an investor can lose is his total investment. This is a known, capped quantity, and though painful, a properly constructed venture portfolio is built to weather such zeroes. In contrast, any given investment can effectively go to infinity; there is no actual cap on returns.
In his 2015 letter to shareholders, Jeff Bezos uses a different sports analogy to illustrate the return distributions at play in investing:
“We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold. Big winners pay for so many experiments.”
And due to this massive delta between upside and downside risk, venture investors must be incredibly sensitive to False Negatives, because those “losses” (counterfactual in nature) can be orders of magnitude greater than the actual losses of a False Positive.
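To put purely hypothetical numbers on that asymmetry: write a $1M check into a company that goes to zero and the actual loss is $1M; pass on a $1M check that would have returned 100x and the counterfactual loss is $99M of forgone gains, roughly a hundred times larger.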
But if False Negatives are far more painful, why follow a “limit False Positives” strategy? I think one of the learnings from Neumann here is that False Positives are more knowable than False Negatives. They are more controllable.
In venture (as in all investing), there's a fixed pool of capital. Limiting the number of False Positives frees those would-be ill-spent investment dollars to flow into other investments that can't knowably be grand slams, but just might be. That is, limiting the predictable False Positives lowers the opportunity cost and gives an investor more shots on goal in more fertile areas, increasing the odds that True Positives bloom (and thus lowering the False Negative rate in the process). A toy sketch of this arithmetic follows.
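A minimal sketch of that arithmetic, again with invented numbers: screening out the predictably bad deals does not predict the winners, it simply means the same number of checks gets drawn from a pool with a higher base rate of potential winners.

```python
# All values are hypothetical; only the comparison matters.
fund_size = 50_000_000
check_size = 1_000_000
n_checks = fund_size // check_size        # 50 shots on goal either way

p_big_winner_raw = 0.02        # base rate of huge outcomes in the unscreened pool
p_big_winner_screened = 0.03   # same winners, smaller pool once predictably bad deals are cut

def p_at_least_one_winner(n: int, p: float) -> float:
    """Chance that at least one check lands a big winner."""
    return 1 - (1 - p) ** n

print("unscreened pool:", round(p_at_least_one_winner(n_checks, p_big_winner_raw), 3))
print("screened pool:  ", round(p_at_least_one_winner(n_checks, p_big_winner_screened), 3))
```

Roughly 64% versus 78% in this toy setup; the edge comes entirely from not wasting draws, not from any ability to pick the winner.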
Upside vs. Downside Risk
There’s another way to conceptualize this discussion that doesn’t require the (still confusing) tug-of-war between false/true positives/negatives. Utter the word “risk” to most and clear visuals spring to mind:
A gambling table
A stock market crash
Perhaps a shady-looking back alley
That is, when most people consider risk they default to “downside risk”; effectively, what could go wrong if I do XYZ. This is the land of lawyers and accountants, whose jobs are predicated on the ability to avoid and/or weather shocks that catalyze such downside events.
These are all manifestations of risk, for sure, but they represent just one side of the ledger. Our preoccupation with such downside scenarios is understandable: losing something is painful! That visceral reaction, I think, more readily occurs because it involves something we already have (ownership) and a present reality (tangibility). Given these two vectors, it's little surprise that the other type of risk, upside risk, so rarely springs to mind.
This form of risk is less intuitive; we might re-characterize it in more colloquial form as “missed opportunity”. This is the domain of entrepreneurs and alpha-hunting investors. And as I discussed above, it's a shame this form of risk is relatively underexamined, because this is where the larger losses (both personal and commercial) are likely to accrue. Again, “actual losses” (downside risk) feel worse for most, but how visceral a loss feels is not equivalent to how large it actually is.
Thus we return to the start of the piece. By focusing considerable energy on eliminating “predictable errors”, we grant ourselves the space to lessen both downside and upside risk; to incur less pain from false positives and from false negatives; to give ourselves greater opportunities to asymptotically approach “optimal” in whatever domain we’re concerning ourselves with at the time.
Simple advice. Incredibly difficult to achieve.
¹ It's a truly awful bit of branding, as it is far, far too easy to confuse them. Science really needs a marketing arm.
Hey, thanks for mentioning me. I was thinking about that same post myself after reading this from Howard Marks recently: https://www.oaktreecapital.com/insights/memo/fewer-losers-or-more-winners. He says in it "If we avoid the losers, the winners will take care of themselves."
But it's interesting that his motivation is to stay out of the bottom 5%, to be consistent and control risk, while mine would be to be in the top 5%, and take the biggest risks I can find, because only the big risks give the big returns. I'm still thinking through why that is.