The AI Bubble That Is Not: When Everyone Is All In

At the beginning of this year, I wrote an op-ed for The Globe about what many were already calling the AI bubble. Nearly a year later, almost all of what I said remains true. The piece was always meant to be a largely evergreen, long-term view rather than a knee-jerk reaction.

The only difference today is that the forces I described back then have only intensified.

We are in a market where Big Tech, venture capital, private equity, and the public markets are all pouring unprecedented capital into AI. But to understand what is actually happening, and how to invest intelligently, we need to separate noise from fundamentals. Here are the five key points:

  1. Why Big Tech Is Going All In While Taking Minimal Risk.
  2. The Demand Side Is Real and Growing.
  3. Not All AI Investments Are Created Equal.
  4. Picking Winners Matters.
  5. Remember, Dot Com Was a Bubble. The Internet Was Not.

1. Why Big Tech Is Going All In While Taking Minimal Risk

The motivations of the large technology companies driving this wave are very different from those of startups and other investors.

For Big Tech, AI is existential. If they underinvest, they risk becoming the next Blockbuster. If they overinvest, they can afford the losses. In practice, they are buying trillions of dollars' worth of call options, and very few players in the world can afford to do that.

The asymmetry is obvious. If I were their CEOs, I would do the same.

But being able to absorb risk does not mean they want to absorb all of it. This is why they are using creative financing structures to shift risk off their balance sheets while remaining all in. At the same time, they strengthen their ecosystems by keeping developers, enterprises, and consumers firmly inside their platforms.

This is not classical corporate investing. Their objective is not just profitability. It is long term dominance.

For everyone outside Big Tech, meaning most of us, understanding these incentives is essential. It helps you place your bets intelligently without becoming roadkill when Big Tech transfers risk into the ecosystem.

2. The Demand Side Is Real and Growing

AI usage is not slowing. It is accelerating.

The numbers do not lie. Almost every metric, including model inference, GPU utilization, developer adoption, enterprise pilot activity, and startup formation, is rising. You can validate this across numerous public datasets. Directionally, people are using AI more, not less. And unlike previous hype cycles, this wave has real usage, real dollars, and real infrastructure behind it.

Yes, there is froth. But there are also fundamentals.

3. Not All AI Investments Are Created Equal

A common mistake is treating AI investing as a single category.

It is not.

Investing in a public-market, commoditized AI business is very different from investing in a frontier technology startup with a decade-long horizon. The former may come with thin margins, weak moats, and hidden exposure to Big Tech’s risk shifting. The latter is where transformational returns come from, if you know how to evaluate whether a company is truly world-class, differentiated, and defensible.

Lumping all AI investments together is as nonsensical as treating all public stocks as the same.

4. Picking Winners Matters

In public markets, you can buy the S&P 500 and call it a day. But that index is not random. Someone selected those 500 winners for you.

In venture, picking winners matters even more. It is a power law business. Spray and pray does not work. Most startups will not survive, and only the strongest will break out, especially in an environment as competitive as today's.

Thanks to AI, we are in the middle of a massive platform shift. Venture-scale outcomes depend on understanding technology deeply enough to see a decade ahead and identify breakout successes before others do. Long-term vision beats short-term noise. Daily or quarterly fluctuations should simply be ignored.

5. Dot Com Was a Bubble. The Internet Was Not.

The dot-com era saw dramatic overvaluation and a painful crash, but the underlying technology still reshaped the world. The problem was not the internet. It was timing, lack of infrastructure, and indiscriminate investing in ideas that were either too early or simply bad.

Looking back, the early internet lacked essential components such as high-speed access, mobile connectivity, smartphones, and internet payments. Although some elements of the AI stack may still be evolving, many of the major building blocks, including commercialization, are already in place. AI does not suffer from the same foundational gaps the early internet did.

Calling this a bubble as a blanket statement misses the nuance. AI itself is not a bubble. With a decade-long view, it is already reshaping almost every industry at an unprecedented pace. Corrections, consolidations, and failures are normal. The underlying technological shift is as real as the internet was in the 1990s.

There is speculation. There are frothy areas. And yet, there are many areas that are underfunded. That is where the opportunities are.

History shows that great venture funds invest through cycles. They invest in areas that will be transformative in the next decade, not the next quarter.

For us, the five areas we focus on (Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy) are the critical elements of AI. Beyond being our expertise, there is another important reason why these categories matter: bubble or not, they will thrive.

We are not investing in hype, nor in capital intensive businesses where capital is the only moat, nor in companies where technology defensibility is low. As long as we stay disciplined and visionary, and continue to back founders building a decade ahead, we will do well, bubble or not.

After all, there may be multiple macro cycles across a decade. Embrace the bubble.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Jevons Paradox: Why Efficiency Fuels Transformation

In 1865, William Stanley Jevons, an English economist, observed a curious phenomenon: as steam engines in Britain became more efficient, coal use didn’t fall — it rose. Efficiency lowered the cost of using coal, which made it more attractive, and demand surged.

That insight became known as Jevons Paradox. To put it simply:

  • Technological change increases efficiency or productivity.
  • Efficiency gains lead to lower consumer prices for goods or services.
  • The reduced price creates a substantial increase in quantity demanded (because demand is highly elastic).

Instead of shrinking resource use, efficiency often accelerates it — and with it, broader societal change.
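To make the mechanism concrete, here is a minimal Python sketch assuming a constant-elasticity demand curve; the curve, the halving of price, and all numbers are illustrative assumptions rather than figures from Jevons. It shows that total resource use rises after an efficiency gain only when demand is elastic enough.

```python
# Minimal sketch of the Jevons mechanism, assuming a constant-elasticity
# demand curve Q = k * P**(-e). All numbers are hypothetical, for illustration.

def quantity_demanded(price: float, elasticity: float, k: float = 100.0) -> float:
    """Units of the service (e.g. lumen-hours) demanded at a given price."""
    return k * price ** (-elasticity)

# Suppose an efficiency gain halves both the resource needed per unit of
# service and, in turn, the price of the service.
price_before, price_after = 1.0, 0.5
resource_per_unit_before, resource_per_unit_after = 1.0, 0.5

for e in (0.5, 1.0, 2.0):  # inelastic, unit-elastic, highly elastic demand
    q0 = quantity_demanded(price_before, e)
    q1 = quantity_demanded(price_after, e)
    use0 = q0 * resource_per_unit_before
    use1 = q1 * resource_per_unit_after
    # Total resource use rises (the Jevons rebound) only when elasticity > 1.
    print(f"elasticity={e}: quantity x{q1 / q0:.1f}, total resource use x{use1 / use0:.2f}")
```

With inelastic demand (e = 0.5) resource use falls; at unit elasticity it stays flat; only with highly elastic demand (e = 2.0) does total consumption rise, which is the rebound the bullet points above describe.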

Coal, Then Light

The paradox first appeared in coal: better engines, more coal consumed. Electricity followed a similar path. Consider lighting in Britain:

| Period | True price of lighting (per million lumen-hours, £2000) | Change vs. start | Per-capita consumption (thousand lumen-hours) | Change vs. start | Total consumption (billion lumen-hours) | Change vs. start |
|---|---|---|---|---|---|---|
| 1800 | £8,000 | | 1.1 | | 18 | |
| 1900 | £250 | ↓ ~30× | 255 | ↑ ~230× | 10,500 | ↑ ~500× |
| 2000 | £2.5 | ↓ ~3,000× (vs. 1800) / ↓ ~100× (vs. 1900) | 13,000 | ↑ ~13,000× (vs. 1800) / ↑ ~50× (vs. 1900) | 775,000 | ↑ ~40,000× (vs. 1800) / ↑ ~74× (vs. 1900) |

Over two centuries, the price of light fell 3,000×, while per-capita use rose 13,000× and total consumption rose 40,000×. A textbook case of Jevons Paradox — efficiency driving demand to entirely new levels.
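As a quick sanity check, the rounded multiples quoted above can be recomputed directly from the table. The table entries are themselves rounded, so the results only match approximately.

```python
# Back-of-the-envelope check of the rounded multiples quoted above,
# using the 1800 and 2000 rows of the lighting table.
price_1800, price_2000 = 8_000, 2.5              # £ per million lumen-hours (£2000)
per_capita_1800, per_capita_2000 = 1.1, 13_000   # thousand lumen-hours per person
total_1800, total_2000 = 18, 775_000             # billion lumen-hours

print(f"price fell ~{price_1800 / price_2000:,.0f}x")                      # ~3,200x (quoted as ~3,000x)
print(f"per-capita use rose ~{per_capita_2000 / per_capita_1800:,.0f}x")   # ~11,800x (quoted as ~13,000x)
print(f"total use rose ~{total_2000 / total_1800:,.0f}x")                  # ~43,000x (quoted as ~40,000x)
```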

Computing: From Millions to Pennies

This pattern carried into computing:

| Year | Cost per Gigaflop | Notes |
|---|---|---|
| 1984 | $18.7 million (~$46M today) | Early supercomputing era |
| 2000 | $640 (~$956 today) | Mainstream affordability |
| 2017 | $0.03 | Virtually free compute |

That’s a 99.99%+ decline. What once required national budgets is now in your pocket.

Storage mirrored the same story: by 2018, 8 TB of hard drive storage cost under $200 — about $0.019 per GB, compared to thousands per GB in the mid-20th century.

Connectivity: Falling Costs, Rising Traffic

Connectivity followed suit:

| Year | Typical Speed & Cost per Mbps (U.S.) | Global Internet Traffic |
|---|---|---|
| 2000 | Dial-up / early DSL (<1 Mbps); ~$1,200 | ~84 PB/month |
| 2010 | ~5 Mbps broadband; ~$25 | ~20,000 PB/month |
| 2023 | 100–940 Mbps common; ↓ ~60% since 2015 (real terms) | >150,000 PB/month |

(PB = petabytes)

As costs collapsed, demand exploded. Streaming, cloud services, social apps, mobile collaboration, IoT — all became possible because bandwidth was no longer scarce.

Intelligence: The New Frontier

Now the same dynamic is unfolding with intelligence:

| Year | Cost per Million Tokens | Notes |
|---|---|---|
| 2021 | ~$60 | Early GPT-3 / GPT-4 era |
| 2023 | ~$0.40–$0.60 | GPT-3.5 scale models |
| 2024 | < $0.10 | GPT-4o and peers |

That’s a drop of more than two orders of magnitude in just a few years. Unsurprisingly, demand is surging — AI copilots in workflows, large-scale analytics in enterprises, and everyday generative tools for individuals.

As we highlighted in our TSF Thesis 3.0, cheap intelligence doesn’t just optimize existing tasks. It reshapes behaviour at scale.

Why It Matters

The recurring pattern is clear:

  • Coal efficiency fueled the Industrial Revolution.
  • Affordable lighting built electrified cities.
  • Cheap compute and storage enabled the digital economy.
  • Low-cost bandwidth drove streaming and cloud collaboration.
  • Now cheap intelligence is reshaping how we live, work, and innovate.

As we highlighted in Thesis 3.0:

“Reflecting on the internet era… as ‘the cost of connectivity’ steadily declined, productivity and demand surged—creating a virtuous cycle of opportunities. The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity… Like connectivity in the internet era, ‘the cost of intelligence’ is now rapidly declining, while the value derived continues to surge, driving even greater demand.”

The lesson is simple: efficiency doesn’t just save costs — it reorders economies and societies. And that’s exactly what is happening now.

If you are building a deep tech early-stage startup in the next frontier of computing, this is a generational opportunity: both traditional businesses and entirely new sectors are being reshaped, and white-collar jobs and businesses, in particular, will not be the same. We would love to hear from you.


Five Areas Shaping the Next Frontier

The cost of intelligence is dropping at an unprecedented rate. Just as the drop in the cost of computing unlocked the PC era and the drop in the cost of connectivity enabled the internet era, falling costs today are driving explosive demand for AI adoption. That demand creates opportunity on the supply side too, in the infrastructure, energy, and technologies needed to support and scale this shift.

In our Thesis 3.0, we highlighted how this AI-driven platform shift will reshape behaviour at massive scale. But identifying the how also means knowing where to look.

Every era of technology has a set of areas where breakthroughs cluster, where infrastructure, capital, and talent converge to create the conditions for outsized returns. For the age of intelligent systems, we see five such areas, each distinct but deeply interconnected.

1. Vertical AI Platforms

After large language models, the next wave of value creation will come from Vertical AI Platforms that combine proprietary data, hard-to-replicate models, and orchestration layers designed for complex and large-scale needs.

Built on unique datasets, workflows, and algorithms that are difficult to imitate, these platforms create proprietary intelligence layers that are increasingly agentic. They can actively make decisions, initiate actions, and shape workflows. This makes them both defensible and transformative, even when part of the foundation rests on commodity models.

This shift from passive tools to active participants marks a profound change in how entire sectors operate.

2. Physical AI

The past two decades of digital transformation mostly played out behind screens. The next era brings AI into the physical world.

Physical AI spans autonomous devices, robotics, and AI-powered equipment that can perceive, act, and adapt in real environments. From warehouse automation to industrial robotics to autonomous mobility, this is where algorithms leave the lab and step into society.

We are still early in this curve. Just as industrial machinery transformed factories in the nineteenth century, Physical AI will reshape industries that rely on labour-intensive, precision-demanding, or hazardous work.

The companies that succeed will combine world-class AI models with robust hardware integration and build the trust that humans place in systems operating alongside them every day.

3. AI Infrastructure

Every transformative technology wave has required new infrastructure that is robust, reliable, and efficient. For AI, this means going beyond raw compute to ensure systems that are secure, safe, and trustworthy at scale.

We need security, safety, efficiency, and trustworthiness as first-class priorities. That means building the tools, frameworks, and protocols that make AI more energy efficient, explainable, and interoperable.

The infrastructure layer determines not only who can build AI, but who can trust it. And trust is ultimately what drives adoption.

4. Advanced Computing Hardware

Every computing revolution has been powered by a revolution in hardware. Just as the transistor enabled mainframes and the microprocessor ushered in personal computing, the next era will be defined by breakthroughs in semiconductors and specialized architectures.

From custom chips to new communication fabrics, hardware is what makes new classes of AI and computation possible, both in the cloud and on the edge. But it is not only about raw compute power. The winners will also tackle energy efficiency, latency, and connectivity, areas that become bottlenecks as models scale.

As Moore’s Law hits its limit, we are entering an age of architectural innovation with neuromorphic computing, photonics, quantum computing, and other advances. Much like the steam engine once unlocked new industries, these architectures will redefine what is computationally possible. This is deep tech meeting industrial adoption, and those who can scale it will capture immense value.

5. Smart Energy

Every technological leap has demanded a new energy paradigm. The electrification era was powered by the grid. Today, AI and computing are demanding unprecedented amounts of energy, and the grid as it exists cannot sustain this future.

This is why smart energy is not peripheral, but central. From new energy sources to intelligent distribution networks, the way we generate, store, and allocate energy is being reimagined. The idea of programmable energy, where supply and demand adapt dynamically using AI, will become as fundamental to the AI era as packet switching was to the internet.

Here, deep engineering meets societal need. Without resilient and efficient energy, AI progress stalls. With it, the future scales.

Shaping What Comes Next

The drop in the cost of intelligence is driving demand at a scale we have never seen before. That demand creates opportunity on the supply side too, in the platforms, hardware, energy, physical systems, and infrastructure that make this future possible.

The five areas — Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy — represent the biggest opportunities of this era. They are not isolated. They form an interconnected landscape where advances in one accelerate breakthroughs in the others.

We are domain experts in these five areas. The TSF team brings technical, product and commercialization expertise that helps founders build and scale in precisely these spaces. We are uniquely qualified to do so.

At Two Small Fish, this is the canvas for the next generation of 100x companies. We are excited to partner with the founders building in these areas globally, those who not only see the future, but are already shaping it.


TSF Thesis 3.0: The Next Frontier of Computing and Its Applications Reshaping Large-Scale Behaviour

Summary

Driven by rapid advances in AI, the collapse in the cost of intelligence has arrived—bringing massive disruption and generational opportunities.

Building on this platform shift, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.

Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.



Our Evolution: From Network Effects to Deep Tech

When we launched TSF in 2015, our initial thesis centred around network effects. Drawing from our experience scaling Wattpad from inception to 100 million users, we became experts in understanding and leveraging exponential value and defensibility created by network effects at scale. This expertise led us to invest—most as the very first cheque—in massively successful companies such as BenchSci, Ada, Printify, and SkipTheDishes.

We achieved world-class success with this thesis, but like all good things, that opportunity diminished over time.

Our thesis evolved as the ground shifted toward the end of the 2010s. A couple of years ago, we articulated this evolution by focusing on early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new value. Within this broad focus, we zoomed in specifically on three sectors: AI, decentralized protocols, and semiconductors. That thesis guided investments in great companies such as Story, Ideogram, Zinite, and Blumind.

But the world doesn’t stand still. In fact, it has never changed so rapidly. This brings us to the next and even more significant shift shaping our thesis.


A New Platform Shift: The Cost of Intelligence is Collapsing

Reflecting on the internet era, the core lesson we learned was that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. At its foundation was connectivity, and as “the cost of connectivity” steadily declined, productivity and demand surged, creating a virtuous cycle of opportunities.

The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity, cross-domain functionality, and decision-making. Like connectivity in the internet era, “the cost of intelligence” is now rapidly declining, while the value derived from intelligence continues to surge, driving even greater demand.

This shift will create massive economic value, shifting wealth away from many incumbents and opening substantial investment opportunities. However, just like previous platform shifts, the greatest opportunities won’t come from digitizing or automating legacy workflows, but rather from completely reshaping workflows and user behaviour, democratizing access, and unlocking previously impossible value. These disruptive opportunities will expand into adjacent areas, leaving incumbents defenceless as the rules of the game fundamentally change.


Intelligence Beyond Traditional Computing Devices

AI’s influence now extends far beyond pre-programmed software on computing devices. Machines and hardware are becoming intelligent, leveraging collective learning to adapt in real-time, with minimal predefined instruction. As we’ve stated before, software alone once ate the world; now, software and hardware together consume the universe. The intersection of software and hardware is where many of the greatest opportunities lie.

As AI models shrink and hardware improves, complex tasks run locally and effectively at the edge. Your phone and other edge devices are rapidly becoming the new data centres, opening exciting new possibilities.


Democratization and a New Lens on Defensibility

The collapse in the cost of intelligence has democratized everything—including software development—further accelerated by open-source tools. While this democratization unlocks vast opportunities, competition also intensifies. It may be a land grab, but not all opportunities are created equal. The key is knowing which “land” to seize.

Historically, infrastructure initially attracts significant capital, as seen in the early internet boom. Over time, however, much of the economic value tends to shift from infrastructure to applications. Today, the AI infrastructure layer is becoming increasingly commoditized, while the application layer is heavily democratized. That said, there are still plenty of opportunities to be found in both layers—many of them truly transformative. So, where do we find defensible, high-value opportunities?

Our previous thesis identified transformative technologies that achieved mass adoption, changed behaviour, democratized access, and unlocked unprecedented value. This framework remains true and continues to guide our evaluation of “100x” opportunities.

This shift in defensibility brings us to where the next moat lies.


New Defensibility: Deep Tech Meets Data Network Effects

Defensibility has changed significantly. In recent years, the pool of highly defensible early-stage shallow tech opportunities has thinned considerably. As a result, we have clearly entered a golden age of deep tech. AI democratization provides capital-efficient access to tools that previously required massive budgets. Our sweet spot is identifying opportunities that remain difficult to build, ensuring they are not easily replicated.

As “full-spectrum specialists,” TSF is uniquely positioned for this new reality. All four TSF partners were engineers and startup leaders before becoming investors, with hands-on experience spanning artificial intelligence, semiconductors, robotics, photonics, smart energy, blockchain, and more. We are not just technical; we are also product people, having built and commercialized cutting-edge innovations ourselves. As a guiding principle, we only invest when our deep domain expertise can help startups scale effectively and rapidly cement their place as future industry-disrupting giants.

Moreover, while traditional network effects have diminished, AI has reinvigorated network effects, making them more potent in new ways. Combining deep tech defensibility with strong data-driven network effects is the new holy grail, and this is precisely our expertise.


What We Don’t Invest In

Although we primarily invest in “bits,” we will also invest in “bits and atoms,” but we won’t invest in “atoms only.” We also have a strong bias towards permissionless innovations, so we usually stay away from highly regulated or bureaucratic verticals with high inertia. Additionally, since one of our guiding principles is to invest only when we have domain expertise in the next frontier of computing, we won’t invest in companies whose core IP falls outside of our computing expertise. We also avoid regional companies, as we focus on backing founders who design for global scale from day one. We invest globally, and almost all our breakout successes such as Printify have users and customers around the world.


Where We’re Heading

Having recalibrated our thesis for this new era, here’s where we’re going next.

We have backed amazing deep tech founders pioneering AI, semiconductors, robotics, photonics, smart energy, and blockchain—companies like Fibra, Blumind, ABR, Axiomatic, Hepzibah, Story, Poppy, and Viggle—across consumer, enterprise, and industrial sectors. With the AI platform shift underway, many new and exciting investment opportunities have emerged.

The ground has shifted: the old playbook is out, the new playbook is in. It’s challenging, exciting, and we wouldn’t have it any other way.

To recap our core belief, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.

Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.

So, if you’ve built interesting deep tech in the next frontier of computing, we invest globally and can help you turn it into a product. If you have a product, we can help you turn it into a massively successful business. If this sounds like you, reach out.

Together, we will shape the future.

P.S. Please also read our blog post Five Areas Shaping the Next Frontier.

Eva + Allen + Brandon + Albert + Mikayla


Contrarian Series: Your TAM is Zero? We love it!

Note: One of the most common pieces of feedback we receive from entrepreneurs is that TSF partners don’t think, act, or speak like typical VCs. The Contrarian Series is meant to demystify this, so founders know more about us before pitching.

Just before the New Year, I was speaking at the TBDC Venture Day Conference together with BetaKit CEO Siri Agrell and serial entrepreneur and former MP Frank Baylis.

When I said “Two Small Fish love Zero TAM businesses,” I said it so matter-of-factly that the crowd was taken aback. I even saw quite a few posts on social media that said, “I can’t believe Allen Lau said it!”

Of course, any business will need to go after a non-zero TAM eventually. But hear me out.

Here’s what I did at Wattpad: I never had a “total addressable market” slide in the early days. I just said, “There are five billion people who can read and write, and I want to capture them all!”

Even when we became a scaleup, I kept the same line. I just said, “There are billions of people who can read, write, or watch our movies, and I want to capture them all!”

Naturally, some VCs tried to box me into the “publishing tool” category or other buckets they deemed appropriate. But Wattpad didn’t really fit into anything that existed at the time. Trust me, I tried to find a box I would fit in too, but none felt natural.

Why? Because Wattpad was a category creator. And, of course, that meant our TAM was effectively zero.

In other words, we made our own TAM.

Many of our portfolio companies are also category creators, so their decks often don’t have a TAM slide either.

Yes, any venture-backed company eventually needs a large TAM. And, of course, I don’t mean to suggest that every startup needs to be a category creator.

That said, we’re perfectly fine—in fact, sometimes we even prefer—seeing a pitch deck without a TAM slide. By definition, category creators have first-mover advantages. More importantly, category creators in a large, winner-take-all market—especially those with strong moats—tend to be extremely valuable at scale and, hence, highly investable.

So, founders, if your company is poised to create a large category, skip the TAM slide when pitching to Two Small Fish. We love it!

P.S. Don’t forget, if you have an “exit strategy” slide in your pitch deck, please remove it before pitching to us. TYSM!


After All, What’s Deep Tech?

“Deep Tech” is one of those terms that gets thrown around a lot in venture capital and startup circles, but defining it precisely is harder than it seems. If you check Wikipedia, you’ll find this:

Deep technology (deep tech) or hard tech is a classification of organization, or more typically a startup company, with the expressed objective of providing technology solutions based on substantial scientific or engineering challenges. They present challenges requiring lengthy research and development and large capital investment before successful commercialization. Their primary risk is technical risk, while market risk is often significantly lower due to the clear potential value of the solution to society. The underlying scientific or engineering problems being solved by deep tech and hard tech companies generate valuable intellectual property and are hard to reproduce.

At a high level, this definition makes sense. Deep tech companies tackle hard scientific and engineering problems, create intellectual property, and take time to commercialize. But what do “substantial scientific or engineering challenges” actually mean? Specifically, what counts as substantial? “Substantial” is a vague word. A difficult or time-consuming engineering problem isn’t necessarily a deep tech problem, and there are plenty of startups that build complex technology but aren’t what I’d call deep tech. Deep tech is about tackling problems where existing knowledge and tools aren’t enough.

In 1964, Supreme Court Justice Potter Stewart famously said, “I know it when I see it” when asked to describe his test for obscenity in Jacobellis v. Ohio. By no means am I comparing deep tech to obscenity—I don’t even want to put these two things in the same sentence. However, there is a parallel between the two: they are both hard to put into a strict formula, but experienced technologists like us recognize deep tech when we see it.

So, at Two Small Fish, we have developed our own simple rule of thumb:

If we see a product and say, “How did they do that?” and upon hearing from the founders how it is supposed to work, we still say, “Team TSF can’t build this ourselves in 6–12 months,” then it’s deep tech.

At TSF, we invest in the next frontier of computing and its applications. We’re not just looking for smart founders. We’re looking for founders who see things others don’t—who work at the edge of what’s possible. And when we find them, we know it when we see it.

This test has been surprisingly effective. Every single investment we’ve made in the past few years has passed it. And I expect it will continue to serve us well.


AI Has Democratized Everything

This is the picture I used to open our 2024 AGM a few months ago. It highlights how drastically the landscape has changed in just the past couple of years. I told a similar story to our LPs during the 2023 AGM, but now, the pace of change has accelerated even further, and the disruption is crystal clear.

The following outlines the reasons behind one of the biggest shifts we identified as part of our Thesis 2.0 two years ago.

Like many VCs, we evaluate pitches from countless companies daily. What we’ve noticed is a significant rise in startups that are nearly identical to one another in the same category. Once, I quipped, “This is the fourth one this week—and it’s only Tuesday!”

The reason for this explosion is simple: the cost of starting a software company has plummeted. What once required $1–2M of funding to hire a small team can now be achieved by two founders (or even a solo founder) with little more than a laptop or two and a $20/month subscription to ChatGPT Plus (or your favourite AI coding assistant).

With these tools, founders can build, test, and iterate at unprecedented speeds. The product build-iterate-test-repeat cycle is insanely short. If each iteration is a “shot on goal,” the $1–2M of the past bought you a few shots within a 12–18 month runway. Today, that $20/month can buy you a shot every few hours.

This dramatic drop in costs, coupled with exponentially faster iteration speeds, has led to a flood of startups entering the market in each category. Competition has never been fiercer. This relentless pace also means faster failures, and the startup graveyard is now overflowing.

For early-stage investors, picking winners from this influx of startups has become significantly harder. In the past, you might have been able to identify the category winner out of 10 similar companies. Now, it feels like mission impossible when there are hundreds—or even thousands—of startups in each category. Many of them are even invisible, flying under the radar for much longer because they don’t need to fundraise.

Of course, there will still be many new billion-dollar companies. In fact, I am convinced that this AI-driven platform shift will produce more billion-dollar winners than ever—across virtually every established category and entirely new ones that don’t yet exist. But with thousands of startups in each category, spotting them is harder than ever.

If you’re using the same lens that worked in the past to spot and fund these future tech giants, good luck.

That’s why, for a long time now, we’ve been using a very different lens to identify great opportunities with highly defensible moats to stay ahead of the curve. For example, we’ve been exclusively focused on deep tech—a space where we know we have a clear edge. From technology to product to operations, we have the experience to cover the full spectrum and support founders through the unique challenges of building deep tech startups. So far, this approach has been working really well for us.

I guess we are taking our own advice. As a VC firm, we also need to be constantly improving and striving to be unrecognizable every two years!

There’s no doubt the rules of early-stage VC have shifted. How we access, assess, and assist startups has evolved dramatically. The great AI democratization is affecting all sectors, and venture capital is no exception.

For investors who can adapt, this is a time of unparalleled opportunity—perhaps the greatest era yet in tech investing. The playing field has been levelled, and massive disruption (and therefore opportunities) lies ahead. Incumbents are vulnerable, and new champions will emerge in each category – including VC!

Investing during this platform shift is both exciting and challenging. And I wouldn’t want it any other way, because those who figure it out will be handsomely rewarded.


Our Secret to Finding 100x Opportunities

In previous blog posts (here and here), I’ve delved into the mathematical model for constructing an early-stage VC portfolio designed to achieve outsized returns. In short, investing early to build a concentrated portfolio of fewer than 20 moonshot companies, each with the potential for 100x returns or more, is the way to go.

The math is straightforward—it doesn’t lie. Not adhering to this model can significantly reduce the likelihood of achieving exceptional returns.

However, simply following this model is not enough to guarantee outsized results. Don’t mistake correlation for causation! The real challenge lies in identifying, evaluating, and supporting these “100x” opportunities to help turn their vision into reality.

At TSF, we use a simple framework to evaluate whether a potential investment can meet the 100x criteria:

10x (early stage) x 10x (transformative behaviour) = 100x conviction

The first “10x” is straightforward: We invest when companies are in their earliest stages. For instance, over the past two years, all but one of TSF’s investments have been pre-revenue. This made financial analysis simple—those spreadsheets were filled with zeros!

Many of these companies are also pre-traction. While having traction isn’t a bad thing, savvy investors shouldn’t rely on it for validation. The reason is simple: traction is visible to everyone. By the time it becomes apparent, the company is often already too expensive and out of reach.

At TSF, we have a unique advantage. Before transitioning to investing, all TSF partners were engineers, product experts, successful entrepreneurs, and operators—including a “recovering CEO”—that’s me! Each partner brings distinct domain expertise, collectively creating a broad and deep perspective. This allows us to invest only when we possess the domain knowledge needed to fully evaluate an opportunity. We “open the hood” to determine whether the technology is genuinely unique, defensible, and disruptive, or whether it is easily replicable. If it’s the latter, we pass quickly. A strong, defensible tech moat is a key criterion for us. This approach means we might pass on some promising “shallow-tech” opportunities, but we’re very comfortable with that. After all, we believe the best days of shallow tech are behind us.

Maintaining a concentrated portfolio allows us to commit only to investments where we have unwavering conviction. In contrast, a large portfolio would require us to find a large number of 100x opportunities and pursue those we might not fully believe in. Frankly, I wouldn’t sleep well if we took that route. This route would also make it difficult to provide the meaningful, tailored support we’ve promised our entrepreneurs (more on that in a future post). 

When evaluating product potential, we look beyond the present. At TSF, we assess how a technology might reshape the landscape over the next decade or more. We start by understanding the intrinsic needs of the user and envision how a product could fundamentally change customer or end-user behaviour. This is crucial: if a product that addresses a massive opportunity has a strong tech moat, first-mover advantages, and the ability to change behaviour while facing few viable alternatives, it can unlock significant new value and create a defensible, category-defining business.

This often translates into substantial commercialization potential. If we can foresee how the product might evolve into adjacent markets (its second, third, or even fourth act) with almost uncapped possibilities, we achieve the “holy trinity” of tech-product-commercialization potential—forming the second 10x of our conviction.

Here’s how we describe it:

Two Small Fish Ventures invests in early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new, impactful value.

This thesis underpins our investment decisions and ensures that each choice we make aligns with our long-term vision for transformative innovation.

While this framework may sound simple, executing it well is extremely difficult. It requires what I call a “crystal ball” skill set that spans the full spectrum of entrepreneurial, technical, product, and operational backgrounds.

Over the past decade, we’ve built a portfolio of more than 50 companies across three funds. By employing this approach, the entrepreneurs we’ve supported have achieved numerous breakout successes. This post outlines our “secret sauce,” and we will continue to leverage it.

As you can see, early-stage VC is more art than science. To do it well requires thoughtfulness, insight, and the ability to envision the future as a superpower. It’s challenging but incredibly rewarding. I wouldn’t trade it for anything.


Winning the Home Run Derby with Proper Portfolio Construction

TLDR – 20 companies in a VC portfolio is the optimal balance between risk and reward, offering a very high chance of hitting outsized returns without significant risk of losing money. This is exactly the approach we follow at Two Small Fish Ventures, as we keep our per-fund portfolio size limited to roughly 20 companies.

In my previous post, VC is a Home Run Derby with Uncapped Runs, I illustrated mathematically why early-stage venture funds’ success doesn’t hinge on minimizing failures, nor does it come from hitting singles (e.g., the number of “3x” companies). These smaller so-called “wins” are just noise.

As I said:

“Venture funds live or die by one thing: the percentage of the portfolio that becomes breakout successes — those capable of generating returns of 10x, 100x, or even 1000x.”

To drive high expected returns for VCs, finding these breakout successes is key. However, expected value alone doesn’t tell the full story. We also need to consider variance. In simple terms, even if a fund’s expected return is 5x or 10x, it doesn’t necessarily mean it’s a good investment. If the variance is too high—meaning the fund has a low probability of achieving that return and a high probability of losing money—it would still be a poor bet.

For example, imagine an investment opportunity that has a 10% chance of returning 100x and a 90% chance of losing everything. Its expected return is 10x (i.e., 10% x 100x + 90% x 0x = 10x). But despite the attractive expected return, it’s still a terrible investment due to the extremely high risk of total loss.

That said, there’s a time-tested solution to turn this kind of high-risk investment into a great one: diversification. While everyone understands the importance of diversification, the real key lies in how it’s done. By building a properly diversified portfolio, we can reduce variance while maintaining a high expected return. This post will illustrate mathematically how the right portfolio construction allows venture funds to generate outsized returns while ensuring a high probability of success.
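To see how diversification changes the picture, here is a minimal Python sketch using the hypothetical bet above (a 10% chance of 100x, a 90% chance of zero), assuming the bets are independent and capital is split evenly. The specific portfolio sizes are illustrative.

```python
# Minimal sketch: each bet has a 10% chance of returning 100x and a 90% chance
# of going to zero. Splitting the same capital across n independent bets keeps
# the 10x expected return but shrinks the odds of losing money overall.
p_win, payoff = 0.10, 100

for n in (1, 5, 10, 20, 30):
    expected_multiple = p_win * payoff  # 10x, regardless of n
    # With payoffs of only 0x or 100x, a portfolio of n <= 100 such bets
    # loses money only if every single bet fails.
    p_lose_money = (1 - p_win) ** n
    print(f"n={n:>2}: expected return {expected_multiple:.0f}x, "
          f"chance of losing money {p_lose_money:.1%}")
```

A single bet loses money 90% of the time; spread across 20 independent bets, that probability falls to roughly 12%, while the expected return stays at 10x. That is the core idea the rest of this post formalizes.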

Moonshot Capital vs. PlayItSafe Capital: A Quick Recap

Let’s start by revisiting our two hypothetical venture capital firms: Moonshot Capital and PlayItSafe Capital. Moonshot Capital swings for the fences, aiming to find the next 100x company while expecting most of the portfolio to fail. PlayItSafe Capital, on the other hand, protects downside risk (at least that’s what they think), but by avoiding bigger risks, it sacrifices the chance of finding outsized returns.

Moonshot Capital: Out of 20 companies, 16 resulted in strikeouts (0x returns), 3 companies achieved 10x returns, and 1 company achieved a 100x return.

PlayItSafe Capital: Out of 20 companies, 7 resulted in strikeouts (0x returns), 7 companies broke even (1x), 5 companies achieved 3x returns, and 1 company achieved a 10x return.

Here’s how their expected returns compare:

Moonshot Capital has an expected return of 6.5x, thanks to one company yielding 100x and three companies yielding 10x (i.e. (1 x 100 + 3 x 10 + 16 x 0) x $1 = $130 on $20 invested).

PlayItSafe Capital has a much lower expected return of 1.6x, with its highest return from one 10x company, five 3x returns, and several breakeven companies (i.e. (1 x 10 + 5 x 3 + 7 x 1 + 7 x 0) x $1 = $32 on $20 invested).

Despite these differences in expected returns, what’s surprising is that counterintuitively, the probability of losing money (i.e., achieving an average return of less than 1x at the fund level) is quite similar for both firms.

Let’s dive into the math to see how we calculate these probabilities:

Moonshot Capital: 12.9% Probability of Losing Money

1. Expected Return (per $1 invested in a single company, using the 1/20, 3/20, 16/20 outcome mix above): E[X] = 0.05 x 100 + 0.15 x 10 + 0.80 x 0 = 6.5

2. Variance: Var(X) = 0.05 x 100² + 0.15 x 10² + 0.80 x 0² - 6.5² ≈ 472.75

3. Standard Deviation: σ = √472.75 ≈ 21.74

4. Standard Error (n = 20 companies): SE = σ / √20 ≈ 4.86

Using a normal approximation, the z-score to calculate P(X < 1) is:

z = (1 - 6.5) / 4.86 ≈ -1.13

Looking this up in the standard normal distribution table gives us:

P(X < 1) = 0.129 or 12.9%

PlayItSafe Capital: 11.6% Probability of Losing Money

Similarly, looking this up in the standard normal distribution table gives us (sparing you all the equations):

P(X < 1) = 0.116 or 11.6%

Shockingly, these two firms’ probabilities of losing money are essentially the same. The math does not lie!
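For readers who want to verify the figures, here is a short Python sketch of the same normal-approximation calculation for both hypothetical funds. It mirrors the steps above; it is not a simulation of actual fund data.

```python
# Normal-approximation check of the loss probabilities for the two
# hypothetical funds. Each fund holds n = 20 companies; per-company outcome
# distributions follow the portfolios described earlier.
from math import sqrt
from statistics import NormalDist

def prob_fund_below(outcomes, n=20, threshold=1.0):
    """P(average fund return < threshold) under a normal approximation.

    `outcomes` is a list of (probability, multiple) pairs for one company.
    """
    mean = sum(p * x for p, x in outcomes)
    variance = sum(p * x**2 for p, x in outcomes) - mean**2
    std_error = sqrt(variance / n)
    return NormalDist().cdf((threshold - mean) / std_error)

moonshot = [(1/20, 100), (3/20, 10), (16/20, 0)]
playitsafe = [(1/20, 10), (5/20, 3), (7/20, 1), (7/20, 0)]

print(f"Moonshot:   P(lose money) ~ {prob_fund_below(moonshot):.1%}")    # ~12.9%
print(f"PlayItSafe: P(lose money) ~ {prob_fund_below(playitsafe):.1%}")  # ~11.6%
```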

Here’s a graphical representation of the outcomes (probability density) for Moonshot Capital and PlayItSafe Capital.

Probability Density Graphs: Comparing Moonshot and PlayItSafe

As you can see, Moonshot has higher upside potential, with its density peaking near its 6.5x expected return, while PlayItSafe is more concentrated around lower returns. Since their downside risks are more or less the same while PlayItSafe’s approach significantly limits its upside, PlayItSafe is, counterintuitively, the worse bet from a risk-reward perspective.

Proper Portfolio Construction: How Portfolio Size Affects Returns

To further optimize Moonshot’s strategy, we will explore how different portfolio sizes affect the balance between risk and reward. Below, I’ve analyzed the outcomes (i.e. portfolio size sensitivity) for Moonshot Capital across portfolio sizes of n = 5, n = 10, n = 20, and n = 30.

The graph below shows the probability density curves for Moonshot Capital with varying portfolio sizes:

As you can see, smaller portfolios (n = 5, n = 10) exhibit higher variance, with a greater spread of potential outcomes. Larger portfolios (n = 20, n = 30) reduce the variance but also diminish the likelihood of hitting outsized returns.

Why 20 is the Optimal Portfolio Size

1. Why 20 is Optimal:

At n = 20, Moonshot Capital strikes an ideal balance. The risk of losing money, i.e. P (X < 1), remains manageable at 12.9%, while the probability of outsized returns remains high: 62.1% chance of hitting a return higher than 5x. This suggests that Moonshot’s high-risk, high-reward approach pays off without exposing the fund to unnecessary risk.

2. Why Bigger Isn’t Always Better (n = 30):

When the portfolio size increases to n = 30, we see a significant drop-off in the likelihood of outsized returns. The probability of achieving a return higher than 5x drops significantly from 62.1% at n = 20 to 41.9% at n = 30, and counterintuitively, the risk of losing money starts to increase. This suggests that larger portfolios can dilute the impact of the big wins that drive fund returns. It also mathematically explains why “spray-and-pray” does not work for early-stage investments.
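The 41.9% figure is consistent with a scenario in which the portfolio still contains the same four winners (one 100x and three 10x) while capital is spread across 30 companies; that composition is our inference from the quoted numbers rather than something stated explicitly. Under that assumption, the same normal approximation gives:

```python
# Sketch of the n = 30 dilution scenario: the same four winners (one 100x,
# three 10x), with capital now spread across 30 companies. Same normal
# approximation as above; the composition is an assumption inferred from
# the quoted figures, not stated in the post.
from math import sqrt
from statistics import NormalDist

n = 30
outcomes = [(1/n, 100), (3/n, 10), (26/n, 0)]

mean = sum(p * x for p, x in outcomes)                    # ~4.33x expected return
variance = sum(p * x**2 for p, x in outcomes) - mean**2
std_error = sqrt(variance / n)

nd = NormalDist()
print(f"P(return > 5x)  ~ {1 - nd.cdf((5 - mean) / std_error):.1%}")  # ~42%, vs. the ~41.9% quoted above
print(f"P(losing money) ~ {nd.cdf((1 - mean) / std_error):.1%}")      # ~15.5%, up from ~12.9% at n = 20
```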

3. The Pitfalls of Small Portfolios (n = 5 and n = 10):

At smaller portfolio sizes, such as n = 5 or n = 10, the variance increases significantly, making the portfolio’s returns more unpredictable. For example, at n = 5, the probability of losing money is significantly higher, and the risk of extreme outcomes becomes more pronounced. At n = 10, the flat curve suggests that the variance is still very high. This high variance means the returns are volatile and difficult to predict, increasing risk.

Conclusion: How to Win the Home Run Derby With Uncapped Runs

The key takeaway here is that Moonshot Capital’s strategy of swinging for the fences doesn’t mean taking on excessive risk. With 20 companies in the portfolio, Moonshot strikes the optimal balance between risk and reward, offering a very high chance of hitting outsized returns without significant risk of losing money.

While n=20 is optimal, n=10 is also pretty good, but n=30 is significantly worse. So, a ‘concentrated’ approach – but not ‘n=5 concentrated’ – is far better than ‘spray and pray,’ if you have to pick between the two.

This is exactly the approach we follow at Two Small Fish Ventures. We don’t write a cheque unless we have that magical “100x conviction.” We also keep our per-fund portfolio size limited to roughly 20 companies. This blog post mathematically breaks down one of our many secret sauces for our success.

Don’t tell anyone.


VC is a Home Run Derby with Uncapped Runs

There’s an old saying that goes, “Know the rules of the game, and you’ll play better than anyone else.” Let’s take baseball as our example. Aiming for a home run often means accepting a higher number of strikeouts. Consider the legendary Babe Ruth: he was a leader in both home runs and strikeouts, a testament to the high-risk, high-reward strategy of swinging for the fences.

Yet, aiming solely for home runs isn’t always the best approach. After all, the game’s objective is to score the most runs, not just to hit the most home runs. Scoring involves hitting the ball, running the bases, and safely returning to home base. Sometimes, it’s more strategic to aim for a base hit, like a single, which offers a much higher chance of advancing runners on base and scoring.

The dynamics change entirely in a home run derby contest, where players have five minutes to hit as many home runs as possible. Here, only home runs count, so players focus on hitting just hard enough to clear the fence, rendering singles pointless.

Imagine if the derby rules also rewarded the home run’s distance, adding extra runs for every foot the ball travels beyond the fence. For context, the centre-field fence is typically about 400 feet from home plate. So, a 420-foot home run, clearing the centre field by 20 feet, would count as a 20-run homer. This rule would drastically alter players’ strategies. Not only would they swing for the fences with every at-bat, but they would also hit as hard as possible, aiming for the longest possible home runs to maximize their scores, even if it reduced their overall chances of hitting a home run.

This scenario mirrors early-stage venture capital, where I liken it to a home run derby with uncapped runs. The potential upside of investments is enormous, offering returns of 100x, 1000x, or more, while the downside is limited to the initial investment. Unlike in a derby, where physical limits cap the maximum score, the VC world is truly without bounds, with numerous instances of investments yielding thousandfold returns.

This distinct dynamic makes assessing VCs fundamentally different from evaluating other asset classes, where protecting the downside is crucial. In the VC realm, the potential for nearly limitless returns makes losses inconsequential, provided VCs invest in early-stage companies with the potential for exponential growth. The risk-reward equation in venture capital is thus highly asymmetrical, favouring bold bets on moonshot startups.

For illustration, let’s consider two hypothetical venture capital firms: Moonshot Capital and PlayItSafe Capital.

Moonshot Capital approaches the game like a home run derby with uncapped runs. They aim for approximately 20 companies in their portfolio, expecting that around 20% will be their home runs—or “value drivers”—capable of generating returns from 10x to 100x or more. 

Imagine they invest $1 in each of 20 companies. One yields a 100x return, three bring in 10x, and the remaining are strikeouts. The outcome would be:

(1 x 100 + 3 x 10 + 16 x 0) x $1 = $130

Their $20 investment becomes $130 (or 6.5x), a gain of $110, despite 16 out of 20 companies being strikeouts. Yes, you are correct. 80% of the portfolio companies failed!

PlayItSafe Capital, on the other hand, prioritizes downside protection, ensuring none of the portfolio fails but also avoiding riskier bets. In the end, one company generates one “10x” return, five companies return 3x, and the remainder is equally split between breakeven and failing.

(1 x 10 + 5 x 3 + 7 x 1 + 7 x 0) x $1 = $32

Despite several “successes” and very few “losses,” the fund’s return of $12 pales in comparison to Moonshot Capital’s. Even increasing the number of companies generating a 3x return to 10 with no loss (which is almost impossible to achieve for early-stage VCs) only yields a $29 gain from a total investment of $20:

(1 x 10 + 10 x 3 + 9 x 1) x $1 = $49

No one should invest in the early-stage VC asset class with the expectation of such a paltry return.
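As a quick check, the three scenarios above can be tallied in a few lines of Python; this simply reproduces the arithmetic already shown.

```python
# Quick tally of the three hypothetical portfolios above ($1 into each company).
portfolios = {
    "Moonshot":             [100] * 1 + [10] * 3 + [0] * 16,
    "PlayItSafe":           [10] * 1 + [3] * 5 + [1] * 7 + [0] * 7,
    "PlayItSafe (no loss)": [10] * 1 + [3] * 10 + [1] * 9,
}

for name, returns in portfolios.items():
    invested, returned = len(returns), sum(returns)
    print(f"{name}: ${invested} in, ${returned} out "
          f"({returned / invested:.2f}x, gain ${returned - invested})")
```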

As illustrated, success isn’t about minimizing failures, nor is it about the number of “3x” companies or even the number of “unicorn logos” in the portfolio, since how early the investment was made in those unicorns is crucial as well. One needs to invest in a unicorn while it is still a baby unicorn, not after it has become one.

In summary:

Venture funds live or die by one thing: the percentage of the portfolio that becomes “value drivers”, i.e. those capable of generating returns of 10x, 100x, or even 1000x.

At Two Small Fish Ventures, we are the IRL version of Moonshot Capital. Every investment is made with the belief that $1 could turn into $100. We know that, in the end, only about 20% of our portfolio will become significant value drivers. Yet, at the time we invest, we truly believe each of these early-stage companies has the potential to become a world-class giant and category creator.

This is what venture capital is all about: not only is it exhilarating to be at the forefront of technology, but it’s also a great way to generate wealth and, more importantly, play a role in supporting moonshots that have a chance to change how the world operates.

P.S. This is Part 1 of this series. You can read Part 2, “Winning the Home Run Derby with Proper Portfolio Construction” here.
