What is the next “durable customer relationship” theme that the market is missing?
Successful investment frameworks often arise when a durable customer relationship becomes newly legible. These new perspectives create their own metrics to look through GAAP accounting and showcase durable FCF. John Malone & the cable cowboys promoted homes passed & EBITDA to roll up cable distribution monopolies and exercise economies of scale. Subscription revenue models (both consumer & SaaS) began to command high multiples once investors internalized metrics like CAC, churn, and net dollar retention. What's next? Some products look like they could fit the bill if they could be modeled as more durable than they currently appear (e.g. perhaps asset management incentive fees, bundled biotech R&D pipelines).
What is the right taxonomy of GAAP relationships?
Accounting represents relationships between economic actors, but GAAP makes those relationships hard to conceptualize. We should be able to visualize the network graph of businesses (nodes) and their relationships with one another (edges). Right now each company reports its data in a silo. Network-based accounting would let us aggregate the micro into the macro and then run agent-based models of the economy. This essay contains my first shot at thinking this through -- thoughts on a better taxonomy here?
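As a toy illustration, here is a minimal sketch of the idea in Python. The companies and dollar amounts are invented, and real edges would have to be assembled from customer-concentration disclosures, supply-chain data, and the like; the point is only that once relationships become edges, micro data rolls up into macro aggregates mechanically.

```python
from collections import defaultdict

# Hypothetical edge list: (supplier, customer, annual revenue in $mm).
# Real edges would come from 10-K customer-concentration disclosures,
# supply-chain datasets, interbank exposures, etc.
edges = [
    ("ChipCo", "PhoneCo", 400),
    ("ChipCo", "CarCo", 150),
    ("SoftCo", "PhoneCo", 80),
    ("SoftCo", "CarCo", 60),
]

revenue = defaultdict(float)  # revenue by supplier (outbound edges)
spend = defaultdict(float)    # input costs by customer (inbound edges)
for supplier, customer, amt in edges:
    revenue[supplier] += amt
    spend[customer] += amt

# Micro aggregates roll up to a macro figure: total intercompany flow.
total_flow = sum(amt for _, _, amt in edges)
print(dict(revenue), dict(spend), total_flow)
```

From here an agent-based model is just a rule for how each node updates its edges each period (e.g. a customer cutting spend propagates as a revenue shock to its suppliers).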
What is the ultimate leading indicator for customer wallets?
For public companies with product market fit, customer willingness to pay is really determined by ability to pay. If SaaS customers are adding headcount as quickly as possible, they're almost always willing to buy some software to coordinate the team. But that software is often the first thing to be cut in a downturn too. These budget shifts are one of the largest sources of surprises to the market.
How can you tell that your investment ideas / frameworks have gone stale?
The half-life of facts dominates investment performance.
My biggest mistakes (both commission & omission) have come when good ideas have gone stale & bad ideas weren’t discarded quickly enough. The question is how to make a playbook to update more quickly.
Some solutions we've implemented include re-underwriting ideas regularly, red-teaming the thesis (finding a short thesis for your long, vice-versa), writing down a timeline ahead of time & tracking against how things play out, and checking against a massive checklist we've created of every investing mistake we've catalogued. Always looking for a process improvement here.
How can we build investment funds with less key-man risk and more continuity?
Most companies rely on the top 3-5 people but can survive a leadership change. Most funds are so dependent on the top 3-5 people that the fund itself collapses if any of those top people leave. Renaissance & Citadel are probably the best two counterexamples that have operationalized their investment businesses (though we shouldn't expect Ken Griffin to retire anytime soon).
What is the key -- is it about culture, risk management, some ineffable investment prowess, or simply institutionalizing more than the typical fund does?
Why don’t more investment funds build their own research tools? Will the AI/ML wave change this?
The median number of software engineers at a fundamental investment fund is 0. Most funds run the exact same research stack -- software written in the late 80s/early 90s & databases that are cumbersome to work with. The funds that do build their own software often do so in an ad-hoc fashion. Is the issue the true payoff from building tools, the risk that future AI/ML tools will swamp firm-built tools, the fund lifecycle, or just institutional inertia?
How can we measure a portfolio’s risk based on its holdings’ ownership base?
Some stocks have great setups due to their ownership base -- a growth-inflecting company underowned by growth investors, an underfollowed company about to turn FCF profitable, a durable FCF company working through a busted IPO process, a potential private equity M&A platform stranded in the public markets. Others are maligned because of their owners -- the private equity overhang, a growth investor base stuck in a slow growing stock, a few large owners who effectively corner the float. It's extremely valuable to build relationships with these investors to understand their motivations.
What is the best way to systematize this -- the best taxonomy to categorize these investors & flag these situations? Is it fund LP base, fund factor exposure, fund VaR estimates + crowding (i.e. pod shop risk models), some proxy for vol selling, or another approach entirely? Should we be categorizing funds by some proxy for asset duration and funding liquidity risk? Is it just easier to follow the macro (e.g. ECRI's business cycle)? And lastly, what are the messy realities of aggregating these statistics into portfolio risk weightings?
Where is the ECRI / Ed Leamer business cycle framework wrong?
A popular macro approach is to track the consumer credit cycle. Here's a simplified version. During boom times, housing & durable goods manufacturers over-extrapolate cyclical demand and buy too much inventory. When consumer credit tightens, those businesses are forced to fire-sale their inventory and fire their employees. This sudden demand collapse ricochets across the economy. That's why housing & durable goods are considered the leading sectors. Eventually the credit environment normalizes, and the cycle can begin anew.
Is growth software now cyclical? (i.e. in the distribution phase)
In Technological Revolutions and Financial Capital, Carlota Perez argues that major technologies pass through two phases: an internally-focused boom & bust (her installation period) and an externally-focused distribution of the tech across the broader economy (her deployment period). If you think of the dotcom era as the boom & bust, we are now >20 years into the distribution phase for software. This implies that software businesses should become more and more exposed to the macro credit cycle.
Why do we use static discount rates in DCF models instead of letting the rate at each period be stochastic or a geometric random walk with drift?
Interest rate expectations shift constantly. Just read a few issues of the Philadelphia Fed's Survey of Professional Forecasters to see how difficult this forecast can be. If we’re going to use DCF models to value equities, shouldn’t we adjust the discount rate to compensate for macro interest rate uncertainty?
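One way to see what this would change: a Monte Carlo DCF where the discount rate follows a random walk with drift rather than sitting fixed. The cash flows, starting rate, and volatility below are arbitrary placeholders, not calibrated to anything.

```python
import numpy as np

rng = np.random.default_rng(0)

def dcf_stochastic_rates(cash_flows, r0=0.08, drift=0.0, vol=0.01, n_paths=10_000):
    """PV of a cash-flow stream when the per-period discount rate follows
    a random walk with drift: r_t = r_{t-1} + drift + vol * eps_t.
    All parameters are illustrative, not calibrated."""
    T = len(cash_flows)
    shocks = rng.normal(drift, vol, size=(n_paths, T))
    rates = np.clip(r0 + np.cumsum(shocks, axis=1), 1e-4, None)  # floor rates above zero
    discount = np.cumprod(1.0 / (1.0 + rates), axis=1)  # cumulative discount factor per path
    pv_paths = discount @ np.asarray(cash_flows, dtype=float)
    return pv_paths.mean(), pv_paths.std()

flows = [100] * 10  # $100/yr for 10 years
mean_pv, sd_pv = dcf_stochastic_rates(flows)
static_pv = sum(100 / 1.08**t for t in range(1, 11))
print(f"static 8% PV: {static_pv:.0f}, stochastic mean PV: {mean_pv:.0f} +/- {sd_pv:.0f}")
```

Two things fall out: because discounting is convex in the rate, the mean PV under rate uncertainty sits slightly above the static-rate PV, and the dispersion of path PVs is exactly the valuation uncertainty a single static rate hides.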
How can we estimate single-stock price elasticity to $1 of inflows? And where does inflow modeling break?
Gabaix and Koijen's Inelastic Markets Hypothesis estimates that every $1 of cash inflows increases stock prices by ~$5. How can we bring these elasticity estimates to the single-stock level?
And is "cash inflow" even the right metric? For instance, if a company reports bad news and buyers & sellers adjust their bids & asks to drop the stock price without executing any trades, haven't we seen price impact without any $ flows? Lastly, how can we include ETF creation/redemption influence here?
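A crude starting point is to apply the aggregate multiplier naively at the single-stock level: price move ≈ multiplier × (flow / market cap). That assumption is doing a lot of work -- single-stock elasticity presumably varies with float, passive share, and ownership concentration, and as the bad-news example shows, prices can move with no flow at all. The numbers below are hypothetical.

```python
def flow_price_impact(flow_usd, market_cap_usd, multiplier=5.0):
    """Back-of-envelope price move from a dollar flow, assuming the
    aggregate Gabaix-Koijen multiplier (~5x) carries over to a single
    stock. That transfer is a strong, unverified assumption."""
    return multiplier * flow_usd / market_cap_usd

# Hypothetical: $500mm of index-driven buying into a $50bn market cap
impact = flow_price_impact(500e6, 50e9)
print(f"implied price move: {impact:.1%}")  # 5 x 1% of market cap = 5.0%
```

A single-stock version of the research program would amount to estimating that multiplier per name (or per ownership-structure bucket) instead of assuming the aggregate 5x.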
Why don’t companies directly issue shares during index/ETF inclusion events?
IPOs are frequently derided when they pop on the first day (perhaps unfairly). Shouldn’t we treat index/ETF inclusion events the same way? Should TSLA have raised billions directly upon the 12/2020 S&P 500 inclusion?
When will Larry Harris publish an updated version of Trading & Exchanges?
Who are the smartest investment thinkers and writers that you think are underappreciated?
We're always looking for creative thinkers who change how we look at things. Who do you think is underrated?