Portfolio Highlight: ABR’s Funding Round

Edge AI has long been a key pillar of our Advanced Computing Hardware investments and a core part of our thesis. It is the same arc I traced in The Next Data Centre: Your Phone.

We need new architectures to meet the speed, security, and energy demands of the next frontier of computing and its applications, which is the lens I used in The Factory Analogy.

Our portfolio company Applied Brain Research (ABR) just achieved a new milestone: ABR announced the successful closure of its oversubscribed seed funding round, including investment from TSF as a lead investor, with Eva Lau joining the board.

ABR created and patented a new class of AI model, the state space model, to make AI smaller, faster, and more energy-efficient than transformer models. State space models deliver real-time voice and time-series intelligence without the cloud, built for privacy and efficiency. ABR’s first chip, TSP1, runs full-vocabulary speech-to-text and text-to-speech entirely on-device, in real time, at under 30 mW.

At the edge, every millisecond and every milliwatt count.

For context:

  • 30 mW is 1/100th the power of a 3 W LED lightbulb.
  • A data-center GPU lives in a different universe: an NVIDIA H200 NVL draws up to 600 W.
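The gap is easy to make concrete; a quick sanity check on the figures above:

```python
# Power draw comparison using the figures cited in the text.
tsp1_w = 0.030   # 30 mW: ABR's stated on-device voice budget
led_w = 3.0      # a 3 W LED lightbulb
h200_w = 600.0   # NVIDIA H200 NVL, up to 600 W

print(f"LED vs. TSP1:  {led_w / tsp1_w:,.0f}x")   # 100x
print(f"H200 vs. TSP1: {h200_w / tsp1_w:,.0f}x")  # 20,000x
```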

Now connect that to the three constraints that define the edge:

  • Speed: for voice and interaction, half a second is half a second too late. Cloud voice is “a terrible experience,” plagued by delays.
  • Security: shipping voice data to the cloud bakes in privacy risk by default — which is why we keep coming back to intelligence that stays close to the user, as Brandon argued in his post In Favour of Intelligence That Stays Put. ABR calls out “privacy concerns” as a core issue with cloud voice.
  • Energy: edge devices are constrained by battery life and on-device resources. ABR’s on-device voice numbers move this from “interesting” to “deployable.”

This is why ABR unlocks numerous use cases that weren’t viable before, across categories like AR, robotics, wearables, medical devices, and automotive.

Imagine AR glasses (or other wearables) that respond to your command in real time without draining the battery. Imagine a robot that reacts with no hesitation. Imagine a medical device that can provide insight securely, without exporting sensitive data. Imagine a car that can respond to voice commands even when the network is unreliable. These are just a few examples. The list can go on and on.

Or as Eva put it in ABR’s announcement: sophisticated voice AI doesn’t require the cloud.

A Day at Ontario Tech University

I spent a full day at Ontario Tech University in Oshawa a few weeks ago. It was my first time on campus, despite it being just over a 40-minute drive from Toronto, where I live. I arrived curious and left with a clearer picture of what they’re building.

Ontario Tech is still a relatively young university, just over two decades old. What’s less well known—and something I didn’t fully appreciate before the visit—is how quickly it has grown in that time, now serving around 14,000 students, and how deliberately it has established itself as a research university rather than simply a teaching-focused institution.

That research orientation shows up not just in output, but in where the university has chosen to build depth—areas that sit close to real systems and real constraints.

This came through clearly in conversations with Prof. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence, whose work focuses on trustworthy and ethical AI. The university has launched Canada’s first School of Ethical AI, alongside the Mindful AI Research Institute, and the work here is grounded in how AI systems behave once deployed—how humans interact with them, and how unintended consequences are identified and managed.

Energy is another area where Ontario Tech has built serious capability. The university is home to Canada’s only accredited undergraduate Nuclear Engineering program, which is ranked third in North America and designated as an IAEA Collaborating Centre. In discussions with Prof. Hossam Gaber, the emphasis was on smart energy systems, where software, sensing, and control systems are developed alongside the physical energy infrastructure they operate within.

I also spent time with Prof. Haoxiang Lang, whose work in robotics, automotive systems, and advanced mobility sits at the intersection of computation and the physical world.

That work is closely tied to the Automotive Centre of Excellence, which includes a climatic wind tunnel described as one of the largest and most sophisticated of its kind in the world. The facility enables full-scale testing under extreme environmental conditions—from arctic cold to desert heat—and supports research that needs to be validated under real operating constraints.

I can’t possibly mention all the conversations I had over the course of the day—it was a full schedule—but I also spent time with Dean Hossam Kishawy and Dr. Osman Hamid, discussing how research, entrepreneurship, and industry engagement fit together at Ontario Tech.

The day also included time at Brilliant Catalyst, the university’s innovation hub, speaking with students and founders about entrepreneurship. I had the opportunity to give a keynote on entrepreneurship, and the visit ended with the pitch competition, where I handed the cheque to the winning team—a small moment that underscored how early many technical journeys begin.

Ontario Tech may be young, but it is already operating with the structure and discipline of a mature research institution, while retaining the adaptability of a newer one.

Thank you to Sunny Chen and the Ontario Tech team for the time, access, and thoughtful conversations throughout the day.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

The AI Bubble That Is Not When Everyone Is All In

At the beginning of this year, I wrote an op-ed for The Globe about what many were already calling the AI bubble. Nearly a year later, almost all of what I said remains true. The piece was always meant to be a largely evergreen, long-term view rather than a knee-jerk reaction.

The only difference today is that the forces I described back then have only intensified.

We are in a market where Big Tech, venture capital, private equity, and the public markets are all pouring unprecedented capital into AI. But to understand what is actually happening, and how to invest intelligently, we need to separate noise from fundamentals. Here are the five key points:

  1. Why Big Tech Is Going All In While Taking Minimal Risk.
  2. The Demand Side Is Real and Growing.
  3. Not All AI Investments Are Created Equal.
  4. Picking Winners Matters.
  5. Remember, Dot Com Was a Bubble. The Internet Was Not.

1. Why Big Tech Is Going All In While Taking Minimal Risk

The motivations of the large technology companies driving this wave are very different from those of startups and other investors.

For Big Tech, AI is existential. If they underinvest, they risk becoming the next Blockbuster. If they overinvest, they can afford the losses. In practice, they are buying trillions of dollars’ worth of call options, and very few players in the world can afford to do that.

The asymmetry is obvious. If I were one of their CEOs, I would do the same.

But being able to absorb risk does not mean they want to absorb all of it. This is why they are using creative financing structures to shift risk off their balance sheets while remaining all in. At the same time, they strengthen their ecosystems by keeping developers, enterprises, and consumers firmly inside their platforms.

This is not classical corporate investing. Their objective is not just profitability. It is long term dominance.

For everyone outside Big Tech, meaning most of us, understanding these incentives is essential. It helps you place your bets intelligently without becoming roadkill when Big Tech transfers risk into the ecosystem.

2. The Demand Side Is Real and Growing

AI usage is not slowing. It is accelerating.

The numbers do not lie. Almost every metric, including model inference, GPU utilization, developer adoption, enterprise pilot activity, and startup formation, is rising. You can validate this across numerous public datasets. Directionally, people are using AI more, not less. And unlike previous hype cycles, this wave has real usage, real dollars, and real infrastructure behind it.

Yes, there is froth. But there are also fundamentals.

3. Not All AI Investments Are Created Equal

A common mistake is treating AI investing as a single category.

It is not.

Investing in a public-market, commoditized AI business is very different from investing in a frontier technology startup with a decade-long horizon. The former may come with thin margins, weak moats, and hidden exposure to Big Tech’s risk shifting. The latter is where transformational returns come from, if you know how to evaluate whether a company is truly world class, differentiated, and defensible.

Lumping all AI investments together is as nonsensical as treating all public stocks as the same.

4. Picking Winners Matters

In public markets, you can buy the S&P 500 and call it a day. But that index is not random. Someone selected those 500 winners for you.

In venture, picking winners matters even more. It is a power-law business. Spray and pray does not work. Most startups will not survive, and only the strongest will break out, especially in an environment as competitive as today’s.

Thanks to AI, we are in the middle of a massive platform shift. Venture-scale outcomes depend on understanding technology deeply enough to see a decade ahead and identify breakout successes before others do. Long-term vision beats short-term noise: daily and quarterly fluctuations can simply be ignored.

5. Dot Com Was a Bubble. The Internet Was Not.

The dot-com era had dramatic overvaluation and a painful crash, but the underlying technology still reshaped the world. The problem was not the internet. It was timing, lack of infrastructure, and indiscriminate investing in ideas that were either too early or simply bad.

Looking back, the early internet lacked essential components such as high speed access, mobile connectivity, smartphones, and internet payments. Although some elements of the AI stack may still be evolving, many of the major building blocks, including commercialization, are already in place. AI does not suffer from the same foundational gaps the early internet did.

Calling this a bubble as a blanket statement misses the nuance. AI itself is not a bubble. With a decade long view, it is already reshaping almost every industry at an unprecedented pace. Corrections, consolidations, and failures are normal. The underlying technological shift is as real as the internet was in the 1990s.

There is speculation. There are frothy areas. And yet, there are many areas that are underfunded. That is where the opportunities are.

History shows that great venture funds invest through cycles. They invest in areas that will be transformative in the next decade, not the next quarter.

For us, the five areas we focus on (Vertical AI platforms, physical AI, AI infrastructure, advanced computing hardware, and smart energy) are the critical elements of AI. Beyond being our expertise, there is another important reason why these categories matter: bubble or not, they will thrive.

We are not investing in hype, nor in capital intensive businesses where capital is the only moat, nor in companies where technology defensibility is low. As long as we stay disciplined and visionary, and continue to back founders building a decade ahead, we will do well, bubble or not.

After all, there may be multiple macro cycles across a decade. Embrace the bubble.


Reflections from the Impact 2025 Summit

I had the opportunity to join a panel at the Impact 2025 Summit in Calgary, moderated by Raissa Espiritu, with Janet Bannister and Paul Godman. Ironically, none of us are labelled as impact investors, and I explained on stage why Two Small Fish Ventures does what we do.

At Two Small Fish Ventures, we’ve never called ourselves an impact fund. That’s not because we’re indifferent to impact; in fact, it’s core to what we do. Our focus is on deep tech, the next frontier of computing, where innovation can create meaningful, long-term change. Specifically, we invest in five key areas: Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy.

We care deeply about scientific advancement, and more importantly, about turning those breakthroughs into real-world impact. That’s how meaningful progress happens.

Eva is our General Partner, and both of us are immigrants. Diversity isn’t a marketing point for us; it’s part of who we are. It naturally shows up in our portfolio: about half of our companies have at least one female founder, and many come from underrepresented backgrounds. That said, uncompromisingly, we back amazing deep tech founders who are turning their creations into world-class companies.

It’s actually rare that we talk about topics like women investing or investing in underrepresented groups in isolation. Not because we don’t care, quite the opposite. The fact that Eva is one of the few female GPs leading a venture fund, and that we’re both immigrants, already says a lot. Our actions speak volumes. We walk the walk and talk the talk.

We need to deliver results. Period. Our competition isn’t other venture funds; it’s every other investment opportunity available in the market. If we can’t perform at the highest level — top decile in everything we do — we can’t sustain our mission. Delivering some of the best results in the industry enables us to do what we love and make an impact.

That’s why I believe impact and performance are not opposites. The most powerful kind of impact happens when companies succeed, when they become world-class companies. Strong returns and meaningful impact can, and should, reinforce each other.

I also talked about the importance of choosing the right vehicle for the right purpose. When we made a $2-million donation to the University of Toronto to establish the Commercialization Catalyst Prize, it wasn’t about investing. It was about supporting a different kind of impact — helping scientists and engineers turn their research into innovations that can reach the world. Not every kind of impact should come from the same tool.

At the end of the day, labels matter less than intent and execution. We don’t need to call ourselves an impact fund to make a difference. Our goal is simple: to back bold deep tech founders using science and technology to build a better future and to do it with excellence.

A big thank you to Raissa, George Damian, Sylvia Wang, and the entire Platform Calgary team for putting together such a thoughtful and well-run event.


Quantum: From Sci-Fi to Investable Frontier

When I was studying electrical engineering, I chose, out of curiosity, to take an elective course on quantum physics as part of advanced optics. It sparked a lasting interest in quantum. The strange, abstract, counterintuitive rules, such as particles existing in multiple states at once or being entangled across distance, captivated me.

Error correction, closely related to fault tolerance in quantum systems today, is the backbone of telecommunications, one of the areas I majored in.

Little did I know these domains would converge in such a way that my earlier academic training would become relevant again years later.

For me, computing is not just my profession; it is also my hobby. As a science nerd, I enjoy following advances and keep going deeper down the rabbit hole of the next frontier of computing. That mix of personal curiosity and professional focus shapes how I approach both the opportunities and the risks in the space. Over the past few years, I have gone deep into the world of quantum. My academic and professional background gave me the footing to evaluate both what is technically possible and what is commercially viable.

From If to How and When

In June, I wrote Quantum Isn’t Next. It’s Now. We have passed the tipping point where the question is no longer if quantum technology will work, it is how and when it will scale.

This momentum is not just visible to those of us deep in the field. As the Globe and Mail recently reported, we at Two Small Fish have been following quantum for years, but did not think it was mature enough for an early-stage fund with a 10-year lifespan to back. This year, we changed our minds. As I shared in that article: “It’s much more investible now.”

The distinction is clear: when quantum was still a science problem, the central question was whether it could work at all. Now that it has become an engineering problem, the questions are how it will work at scale and when it will be ready for commercialization.

This shift matters for investors. Venture capital focuses on engineering breakthroughs: hard, uncertain, but achievable on a commercialization timeline. Fundamental science, which can take many more years to mature, is better supported by governments, universities, and non-dilutive funding sources. I will leave that discussion for another post.

One of Five Frontiers

At Two Small Fish Ventures, we have identified five areas shaping the next frontier of computing. Quantum falls under the area of advanced computing hardware, where the convergence of different areas of science, engineering, and commercialization is accelerating.

Each of these areas is no longer a speculative science experiment but a rapidly advancing field where engineering and commercialization are converging. Within the next ten years, the winners will emerge from lab prototypes and become scaled companies. Quantum is firmly on that trajectory.

How We Invest in Quantum

Our first principle at Two Small Fish is straightforward: we only invest in things we truly understand, through all three lenses of technology, product, and commercialization. That discipline forces us to dig deep before committing capital. And after years of study, it is clear to us that quantum has moved into investable territory, but only selectively.

Not every quantum startup fits a venture time horizon. Some promising projects will take too many years to scale. But we are now seeing opportunities that, within a 10-year window, can realistically grow from an early-stage idea to a successful scale-up. That is the standard we apply to every investment, and quantum finally has companies that meet it.

From Sci-Fi to Reality

Canada has played an outsized role in building the foundation of quantum science. Now, it has the chance to lead in quantum commercialization. The next few years will determine which teams turn breakthrough science into enduring companies.

For investors, this is both an opportunity and a responsibility. The quantum era is not a distant possibility, it is here now. What once sounded like science fiction is now an investable reality. And for those willing to put in the work to understand it, the frontier is already here.


Jevons Paradox: Why Efficiency Fuels Transformation

In 1865, William Stanley Jevons, an English economist, observed a curious phenomenon: as steam engines in Britain became more efficient, coal use didn’t fall — it rose. Efficiency lowered the cost of using coal, which made it more attractive, and demand surged.

That insight became known as Jevons Paradox. To put it simply:

  • Technological change increases efficiency or productivity.
  • Efficiency gains lead to lower consumer prices for goods or services.
  • The reduced price creates a substantial increase in quantity demanded (because demand is highly elastic).

Instead of shrinking resource use, efficiency often accelerates it — and with it, broader societal change.
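The mechanism in the three bullets above can be sketched with a constant-elasticity demand curve. This is my own illustration, not the author’s model; the scale constant `k` and the elasticity value of 1.5 are assumed for the example:

```python
# Jevons Paradox sketch: constant-elasticity demand Q = k * P^(-e).
# When demand is elastic (e > 1), total resource spending Q*P RISES as price falls.
def quantity_demanded(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    """Quantity demanded at a given price (illustrative parameters)."""
    return k * price ** (-elasticity)

p_before, p_after = 10.0, 5.0  # efficiency halves the effective price
q_before = quantity_demanded(p_before)
q_after = quantity_demanded(p_after)

print(f"Quantity: {q_before:.2f} -> {q_after:.2f}")  # demand more than doubles
print(f"Total spend: {q_before * p_before:.1f} -> {q_after * p_after:.1f}")  # use rises
```

With elasticity above 1, halving the price more than doubles the quantity demanded, so total consumption goes up, which is exactly the paradox.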

Coal, Then Light

The paradox first appeared in coal: better engines, more coal consumed. Electricity followed a similar path. Consider lighting in Britain:

| Period | True price of lighting (per million lumen-hours, £2000) | Change vs. start | Per-capita consumption (thousand lumen-hours) | Change vs. start | Total consumption (billion lumen-hours) | Change vs. start |
|---|---|---|---|---|---|---|
| 1800 | £8,000 | (baseline) | 1.1 | (baseline) | 18 | (baseline) |
| 1900 | £250 | ↓ ~30× | 255 | ↑ ~230× | 10,500 | ↑ ~500× |
| 2000 | £2.5 | ↓ ~3,000× (vs. 1800) / ↓ ~100× (vs. 1900) | 13,000 | ↑ ~13,000× (vs. 1800) / ↑ ~50× (vs. 1900) | 775,000 | ↑ ~40,000× (vs. 1800) / ↑ ~74× (vs. 1900) |

Over two centuries, the price of light fell 3,000×, while per-capita use rose 13,000× and total consumption rose 40,000×. A textbook case of Jevons Paradox — efficiency driving demand to entirely new levels.
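The headline multiples follow directly from the 1800 and 2000 figures; the exact ratios land near the rounded numbers quoted in the text:

```python
# Lighting in Britain, 1800 vs. 2000, using the figures from the table above.
price_1800, price_2000 = 8000.0, 2.5      # £ per million lumen-hours (£2000)
percap_1800, percap_2000 = 1.1, 13000.0   # thousand lumen-hours per capita
total_1800, total_2000 = 18.0, 775000.0   # billion lumen-hours

print(f"Price fell ~{price_1800 / price_2000:,.0f}x")          # ~3,200x
print(f"Per-capita use rose ~{percap_2000 / percap_1800:,.0f}x")  # ~11,800x
print(f"Total use rose ~{total_2000 / total_1800:,.0f}x")      # ~43,000x
```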

Computing: From Millions to Pennies

This pattern carried into computing:

| Year | Cost per Gigaflop | Notes |
|---|---|---|
| 1984 | $18.7 million (~$46M today) | Early supercomputing era |
| 2000 | $640 (~$956 today) | Mainstream affordability |
| 2017 | $0.03 | Virtually free compute |

That’s a 99.99%+ decline. What once required national budgets is now in your pocket.
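The "99.99%+" figure is, if anything, conservative; computed from the nominal values in the table:

```python
# Cost per gigaflop, 1984 vs. 2017 (nominal figures from the table above).
cost_1984 = 18_700_000.0  # $18.7 million
cost_2017 = 0.03

decline = 1 - cost_2017 / cost_1984
print(f"Decline: {decline:.7%}")                   # well beyond 99.99%
print(f"Factor: ~{cost_1984 / cost_2017:,.0f}x cheaper")  # hundreds of millions of times
```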

Storage mirrored the same story: by 2018, 8 TB of hard drive storage cost under $200 — about $0.019 per GB, compared to thousands per GB in the mid-20th century.

Connectivity: Falling Costs, Rising Traffic

Connectivity followed suit:

| Year | Typical Speed & Cost per Mbps (U.S.) | Global Internet Traffic |
|---|---|---|
| 2000 | Dial-up / early DSL (<1 Mbps); ~$1,200 | ~84 PB/month |
| 2010 | ~5 Mbps broadband; ~$25 | ~20,000 PB/month |
| 2023 | 100–940 Mbps common; ↓ ~60% since 2015 (real terms) | >150,000 PB/month |

(PB = petabytes)

As costs collapsed, demand exploded. Streaming, cloud services, social apps, mobile collaboration, IoT — all became possible because bandwidth was no longer scarce.

Intelligence: The New Frontier

Now the same dynamic is unfolding with intelligence:

| Year | Cost per Million Tokens | Notes |
|---|---|---|
| 2021 | ~$60 | Early GPT-3 / GPT-4 era |
| 2023 | ~$0.40–$0.60 | GPT-3.5 scale models |
| 2024 | < $0.10 | GPT-4o and peers |

That’s a drop of more than two orders of magnitude in just a few years. Unsurprisingly, demand is surging — AI copilots in workflows, large-scale analytics in enterprises, and everyday generative tools for individuals.

As we highlighted in our TSF Thesis 3.0, cheap intelligence doesn’t just optimize existing tasks. It reshapes behaviour at scale.

Why It Matters

The recurring pattern is clear:

  • Coal efficiency fueled the Industrial Revolution.
  • Affordable lighting built electrified cities.
  • Cheap compute and storage enabled the digital economy.
  • Low-cost bandwidth drove streaming and cloud collaboration.
  • Now cheap intelligence is reshaping how we live, work, and innovate.

As we highlighted in Thesis 3.0:

“Reflecting on the internet era… as ‘the cost of connectivity’ steadily declined, productivity and demand surged—creating a virtuous cycle of opportunities. The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity… Like connectivity in the internet era, ‘the cost of intelligence’ is now rapidly declining, while the value derived continues to surge, driving even greater demand.”

The lesson is simple: efficiency doesn’t just save costs — it reorders economies and societies. And that’s exactly what is happening now.

If you are building a deep tech early-stage startup in the next frontier of computing, we would love to hear from you. This is a generational opportunity: both traditional businesses and entirely new sectors are being reshaped, and white-collar jobs and businesses, in particular, will not be the same.


Masterclass Series: The Rule of 3 and 10 — Lessons I Wish I Learned Earlier

One of the most powerful frameworks I’ve come across is the Rule of 3 and 10, coined by Hiroshi Mikitani-san, founder and CEO of Rakuten. The idea is simple: every time a company triples in size, everything breaks.

As Rakuten grew from a handful of people into a global business, Mikitani-san noticed a clear pattern. At each stage — 1 to 3 people, 3 to 10, 10 to 30, 30 to 100, 100 to 300, and beyond — what worked before suddenly stopped working. And by everything, it really does mean everything: payroll, meetings, communication, budgeting, sales, even the org chart. The challenge is that many leaders blow right through these milestones without realizing what’s happening until it’s already broken.

What I Wish I Knew

I’ve been part of many really fast-growing companies — first as an employee, and later as a co-founder in two of them. And I can tell you, this rule is 100% true.

At Wattpad, I didn’t fully internalize it until we were approaching 100 people. By then, we had already missed natural breaking points where we could have rebuilt earlier. That lag made scaling harder than it needed to be.

Looking back, the stages feel something like this:

  • At 3 people, you’re a tight-knit unit where everyone knows everything.
  • At 10, you need to change how you communicate just to stay aligned.
  • At 30, the days of everyone reporting to the CEO are long gone — a first layer of leaders emerges.
  • At 100, there are layers upon layers of leaders, and even well-designed systems need rethinking.
  • At 300, you’re running a completely different company than the one you started.
  • At 1,000, it feels like a mini-society with its own subcultures, bureaucracy, and politics — alignment becomes the hardest problem of all.
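The milestone sequence above alternates roughly ×3 and ×10/3 steps, so every two stages multiply headcount by ten. A tiny sketch of the sequence (my own illustration, not from Mikitani-san):

```python
# Generate the Rule of 3 and 10 headcount milestones: 1, 3, 10, 30, 100, ...
def milestones(limit: int = 1000):
    """Yield the headcounts at which, per the rule, 'everything breaks'."""
    n = 1
    while n <= limit:
        yield n
        # Alternate x3 and x10/3 steps; each pair of steps is a clean x10.
        n = n * 10 // 3 if n % 3 == 0 else n * 3

print(list(milestones()))  # [1, 3, 10, 30, 100, 300, 1000]
```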

The Employee’s View

Before becoming an entrepreneur, I lived through this as an employee too. The breaking points are just as visible from the inside.

As companies scale, it gets harder to push things through. Meetings multiply, but decisions slow. Bystander problems appear — more people in the room, but fewer actually taking ownership. From the employee’s perspective, it feels frustrating and inefficient. But it’s not about capability; it’s about systems that no longer fit the size of the company.

Why This Matters

In the moment, it can feel like failure. But it isn’t. It’s simply that scale changes everything.

The good news: these challenges are solvable. Every growing company has faced them. The bad news: if you only react after things break, you’ll always be catching up instead of leading.

My Takeaway

If you’re building a fast-growing company, expect everything to break at 3, 10, 30, 100, 300, 1,000… and plan for it.

Don’t see it as failure. See it as evolution. Each breakdown is proof you’ve unlocked a new stage of growth. The chaos is part of the privilege — it means you’re building something worth scaling.

If I could go back and tell my younger CEO self one thing, it would be this: anticipate the breaks before they happen. Build a culture that embraces reinvention at every stage. You’ll save yourself and your team a lot of unnecessary pain — and you’ll enjoy the ride more.

P.S. The banner was generated with Ideogram Character. It rocks!


Quantum Isn’t Next. It’s Now.

In the early 2000s, it was a common joke in the tech world that “next year is the year of the smartphone.” People kept repeating it for almost a decade. It became a punchline, and the industry nearly lost its credibility on the subject.

Until the iPhone launched. “Next year is the year of the smartphone” finally came true.

The same joke has followed quantum for the past ten years: next year is the year of quantum.

Except it hasn’t been. Not yet.

And yet, quietly, the foundations have been built. We’re not there, but we’re far from where we started.

We’re getting closer. Much closer. I can smell it. I can hear it. I can sense it.

Right now, without getting into too much technical detail, we’re still at a small scale: fewer than 100 usable qubits. Commercial viability likely requires thousands, if not millions. The systems are still too error-prone, and hosting your own quantum machine is wildly impractical. They’re expensive, fragile, and noisy.

At this stage, quantum is mostly limited to niche or small-scale applications. But step by step, quantum is inching closer to broader utility.

And while these things don’t progress in straight lines, the momentum is real and accelerating.

Large-scale, commercially deployable, fault-tolerant quantum computers accessed through the cloud are no longer science fiction. They’re within reach.

I spent a few of my academic years in signal processing and error correction. I’ve also spent a bit of time studying quantum mechanics. I understand the challenges of cloud-based access to quantum systems, and I’ve been following the field for quite a while, mostly as a curious science nerd.

All of that gives me reason to trust my sixth sense. Quantum is increasingly becoming a reality.

Nobody knows exactly when the iPhone moment or the ChatGPT moment of quantum will happen.
But I’m absolutely sure we won’t still be saying “next year is the year of quantum” a decade from now.

It will happen, and it will happen much sooner than you might think.

At Two Small Fish, our thesis is centred around the next frontier of computing and its applications.

This is an exciting time and the ideal time to take a closer look at quantum, because the best opportunities tend to emerge right before the technology takes off.

How can we not get excited about new quantum investment opportunities?

P.S. I’m excited to attend the QUANTUM NOW conference this week in Montreal. Also thrilled to see Mark Carney name quantum as one of Canada’s official G7 priorities. That short statement may end up being a big milestone.


A Decade of Fish – Celebrating 10 Years of Two Small Fish Ventures

This year marks a big milestone: Two Small Fish Ventures turns ten!

That’s 10 years, 120 months, and 3,653 days (yes, we counted the leap years). What started as a bold experiment in early-stage investing has become a decade-long journey of backing audacious founders building at the edge of what’s possible.

Over the weekend, we wired funds for our 60th first investment. That’s not including the many follow-on cheques we’ve written along the way—if we counted those, the number would be much higher. We’re not naming the company just yet, but like the 59 before it, this one reflects deep conviction. We think it’ll make a splash!

For years, we’ve said we write 5 to 7 new cheques per year. Not because we aim for a quota, but because this is what a power-law portfolio construction strategy naturally produces. In venture, just a few outlier companies drive the vast majority of returns. The trick is to consistently back companies with 100x potential. That’s the focus—not pacing. And yet, the numbers tell their own story: we’ve averaged exactly six new investments a year. Apparently, clarity of focus brings consistency as a byproduct.
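The power-law claim above can be sketched with a quick Monte Carlo simulation. The portfolio size, failure rate, and tail exponent here are illustrative assumptions for the sketch, not TSF data:

```python
import random

random.seed(7)

def simulate_portfolio(n_companies=60, alpha=1.2):
    """Draw each company's return multiple from a heavy-tailed
    distribution: most go to ~zero, a few return large multiples."""
    returns = []
    for _ in range(n_companies):
        # Assume ~60% of early-stage bets fail outright;
        # survivors follow a Pareto (power-law) tail.
        if random.random() < 0.6:
            returns.append(0.0)
        else:
            returns.append(random.paretovariate(alpha))
    return returns

returns = simulate_portfolio()
total = sum(returns)
top3 = sum(sorted(returns, reverse=True)[:3])
print(f"Top 3 of 60 companies: {100 * top3 / total:.0f}% of fund returns")
```

Run it without the fixed seed and the exact share jumps around, but a handful of outliers almost always dominates the fund, which is exactly why consistently backing companies with 100x potential matters more than pacing.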

We’re now six months into our tenth year, and we’re right on pace.

To the founders we’ve backed: thank you for trusting us at the earliest, riskiest stage.

To those we haven’t met yet: if you’re building deep tech in the next frontier of computing, we’d love to hear from you. We invest globally. If you’ve got a breakthrough, we can help turn it into a product. If you’ve got a product, we can help turn it into a company.

Sound like you? Reach out.

Here’s to the next 10!



Announcing Our Investment in Hepzibah AI

The Two Small Fish team is thrilled to announce our investment in Hepzibah AI, a new venture founded by Untether AI’s co-founders, serial entrepreneurs Martin Snelgrove and Raymond Chik, along with David Lynch and Taneem Ahmed. Their mission is to bring next-generation, energy-efficient AI inference technologies to market, transforming how AI compute is integrated into everything from consumer electronics to industrial systems. We are proud to be the lead investor in this round, and I will be joining as a board observer to support Hepzibah AI as they build the future of AI inference.

The Vision Behind Hepzibah AI

Hepzibah AI is built on the breakthrough energy-efficient AI inference compute architecture pioneered at Untether AI—but takes it even further. In addition to pushing performance/power harder, it can handle training loads like distillation, and it provides supercomputer-style networking on-chip. Their business model focuses on providing IP and core designs that chipmakers can incorporate into their system-on-chip designs. Rather than manufacturing AI chips themselves, Hepzibah AI will license its advanced AI inference IP for integration into a wide variety of devices and products.

Hepzibah AI’s tagline, “Extreme Full-stack AI: from models to metals,” perfectly encapsulates their vision. They are tackling AI from the highest levels of software optimization down to the most fundamental aspects of hardware architecture, ensuring that AI inference is not only more powerful but also dramatically more efficient.

Why does this matter? AI is rapidly becoming as indispensable as the CPU has been for the past few decades. Today, many modern chips, especially system-on-chip (SoC) devices, include a CPU or MCU core, and increasingly, those same chips will require AI capabilities to keep up with the growing demand for smarter, more efficient processing.

This approach allows Hepzibah AI to focus on programmability and adaptable hardware configurations, ensuring they stay ahead of the rapidly evolving AI landscape. By providing best-in-class AI inference IP, Hepzibah AI is in a prime position to capture this massive opportunity.

An Exceptional Founding Team

Martin Snelgrove and Raymond Chik are luminaries in this space—I’ve known them for decades. David Lynch and Taneem Ahmed also bring deep industry expertise, having spent years building and commercializing cutting-edge silicon and software products.

Their collective experience in this rapidly expanding, soon-to-be ubiquitous industry makes investing in Hepzibah AI a clear choice. We can’t wait to see what they accomplish next.

P.S. You may notice that the logo is a curled skunk. I’d like to highlight that the skunk’s eyes are zeros from the MNIST dataset. 🙂 


Contrarian Series: Your TAM is Zero? We love it!

Note: One of the most common pieces of feedback we receive from entrepreneurs is that TSF partners don’t think, act, or speak like typical VCs. The Contrarian Series is meant to demystify this, so founders know more about us before pitching.

Just before New Year, I was speaking at the TBDC Venture Day Conference together with BetaKit CEO Siri Agrell and Serial Entrepreneur and former MP Frank Baylis.

When I said “Two Small Fish love Zero TAM businesses,” I said it so matter-of-factly that the crowd was taken aback. I even saw quite a few posts on social media that said, “I can’t believe Allen Lau said it!”

Of course, any business will need to go after a non-zero TAM eventually. But hear me out.

Here’s what I did at Wattpad: I never had a “total addressable market” slide in the early days. I just said, “There are five billion people who can read and write, and I want to capture them all!”

Even when we became a scaleup, I kept the same line. I just said, “There are billions of people who can read, write, or watch our movies, and I want to capture them all!”

Naturally, some VCs tried to box me into the “publishing tool” category or other buckets they deemed appropriate. But Wattpad didn’t really fit into anything that existed at the time. Trust me, I tried to find a box I would fit in too, but none felt natural.

Why? That’s because Wattpad was a category creator. And, of course, that meant our TAM was effectively zero.

In other words, we made our own TAM.

Many of our portfolio companies are also category creators, so their decks often don’t have a TAM slide either.

Yes, any venture-backed company eventually needs a large TAM. And, of course, I don’t mean to suggest that every startup needs to be a category creator.

That said, we’re perfectly fine—in fact, sometimes we even prefer—seeing a pitch deck without a TAM slide. By definition, category creators have first-mover advantages. More importantly, category creators in a large, winner-take-all market—especially those with strong moats—tend to be extremely valuable at scale and, hence, highly investable.

So, founders, if your company is poised to create a large category, skip the TAM slide when pitching to Two Small Fish. We love it!

P.S. Don’t forget, if you have an “exit strategy” slide in your pitch deck, please remove it before pitching to us. TYSM!


Masterclass Series: Complete Redesign That Actually Works

Sonos replaced its CEO last week. The company faced significant backlash after launching a redesigned app early last year that was plagued by bugs, missing features, and connectivity issues, frustrating customers and tarnishing its reputation. This also led to layoffs, poor sales, and a significant drop in stock price.

While I usually don’t comment on companies I’m not involved with, as a long-time Sonos user, I was very frustrated that the alarm feature I had been relying on to wake me up in the morning for well over a decade disappeared overnight. There were other issues, too.

Throughout my career, I have worked on numerous redesign projects. A fiasco like this is totally avoidable. Today, I am sharing a couple of internal blog posts I wrote for my team (when I was Wattpad’s CEO) on this topic. Of course, these are just examples of the general framework I used. In practice, each redesign has many specific details I helped guide the team through, because frameworks like this are just hammers. Even the best hammer in the world is still just a hammer; the devil is in the details of how you use it.

These internal blog posts are just some of the hammers and drills in my toolbox that I use to help our portfolio CEOs navigate trade-offs and move fast without breaking things.

Happy reading through a sample of my collection of half a million words!

Note: These two posts have been mildly edited to improve readability.

Blog Post #1 – Subject: Feature Backward Compatibility

I have gone through major technology platform redesigns many times in my career. One problem that arises every single time is backward compatibility.

The reason is easy to understand: users can interact with complex products (such as Wattpad) in a million different ways. There is no way the engineering team could anticipate all the permutations.

There are two common ways to solve this problem. First, run an extensive beta program. This is what big companies like Apple and Microsoft do when they update their operating systems. This approach is also a great way to push some of the responsibility to their app developers. Even with virtually unlimited resources, crowdsourcing from app developers is still a far better approach. However, running an extensive beta program takes a lot of time and resources. Most companies can’t afford to do that.

The other approach is to roll out the changes progressively and incrementally. It is very tempting to make all the big changes at once, roll them out in one shot, and roll the dice. However, I am almost certain that it will backfire. Not only is it a frustrating experience for both users and engineers, but it also makes the project schedule much less predictable and, in most cases, causes the project to take much longer than anticipated.

Next year, when we focus on our redesign to reduce tech debt, don’t forget to set aside some time budget for these edge conditions that are so easily overlooked. Also, think about how we can roll out the changes more incrementally to minimize the negative impact on our users.

Blog Post #2 – Subject: The Reversibility and Consequentiality Framework

The other day, I spoke to the CEO of another consumer internet company. In terms of the scale of its user base, this company is much smaller than Wattpad, but we are still talking about millions of users here.

Like us, this company has been around for over a decade. Not surprisingly, technical debt has been an ongoing concern. A few years ago, the team decided to completely redesign its platform from the ground up. The redesign was a multi-year effort, and the team finally pulled back the curtain a year ago. While it is working fine now, this CEO told me that it took a few months before they fixed all the issues and reimplemented all the “missing” features because many of their users were using the product in “interesting” ways that the new version did not support.

These problems are fairly common when redesigning a new system from the ground up. In practice, it is simply impossible to take all the permutations into account, no matter how carefully you plan. However, if we mess things up, our user base is so large that it might negatively impact (or ruin!) 100 million people’s lives in the worst-case scenario.

On the flip side, over-planning could burn through a lot of unnecessary cycles.

One way or another, we should not let these challenges deter us from moving forward or even slow us down because there are many ways to mitigate potential problems. In principle, ensuring that the rollout is reversible and inconsequential is key.

The former is easy to understand: Can we roll back when things go wrong? Do we have a kill switch when updating our mobile apps? These are best practices that we have already been using.

However, at times, these best practices might not be possible. Can we reduce the consequentiality when rolling out? If the iOS app were completely redesigned, could we do it in smaller chunks, parallel-run the new and old versions at the same time, or try the new version on 0.1% of our users first? If not, could we roll out the new app in a small country first?
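The small-blast-radius ideas above (a kill switch, a 0.1% cohort) are commonly implemented with deterministic hash bucketing. This is a generic sketch, not Wattpad’s actual system; the function and feature names are made up for illustration:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into a feature rollout.
    The same user always gets the same answer, so the exposed
    cohort is stable, and dialing `percent` down to 0 acts as
    an instant kill switch."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return bucket < percent / 100.0

# Start with 0.1% of users; widen only when metrics look healthy.
exposed = sum(in_rollout(f"user{i}", "new_reader", 0.1)
              for i in range(100_000))
print(exposed)  # on the order of 100 of the 100,000 users
```

Because the bucketing is a pure function of user and feature, ramping from 0.1% to 1% to 10% only ever adds users to the cohort, and rolling back is just a config change rather than an app release.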

Again, our objective is not to avoid any problem at all costs. Our objective is to minimize (but not eliminate) the negative impact when things go wrong—not if things go wrong. Although Wattpad going dark for 100 million people for an extended period of time is not acceptable, in the spirit of speed, it is perfectly okay if we have ways to hit reverse or reduce the impact to only a small percentage of our users. These are not rocket science, but they do require a bit more thoughtfulness because our user base is so large that we can’t simply roll the dice.


AI Has Democratized Everything

This is the picture I used to open our 2024 AGM a few months ago. It highlights how drastically the landscape has changed in just the past couple of years. I told a similar story to our LPs during the 2023 AGM, but now, the pace of change has accelerated even further, and the disruption is crystal clear.

The following outlines the reasons behind one of the biggest shifts we identified as part of our Thesis 2.0 two years ago.

Like many VCs, we evaluate pitches from countless companies daily. What we’ve noticed is a significant rise in startups that are nearly identical to one another in the same category. Once, I quipped, “This is the fourth one this week—and it’s only Tuesday!”

The reason for this explosion is simple: the cost of starting a software company has plummeted. What once required $1–2M of funding to hire a small team can now be achieved by two founders (or even a solo founder) with little more than a laptop or two and a $20/month subscription to ChatGPT Pro (or your favourite AI coding assistant).

With these tools, founders can build, test, and iterate at unprecedented speeds. The product build-iterate-test-repeat cycle is insanely short. If each iteration is a “shot on goal,” the $1–2M of the past bought you a few shots within a 12–18 month runway. Today, that $20/month can buy you a shot every few hours.
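As a back-of-envelope check on the shots-on-goal comparison, using illustrative midpoint numbers (my assumptions, not precise figures):

```python
# "Then": a $1–2M raise bought a few major iterations in 12–18 months.
old_runway_months = 15                      # midpoint of 12–18 months
old_shots = 4                               # a few shots per runway
old_cost_per_shot = 1_500_000 / old_shots   # midpoint of $1–2M

# "Now": one iteration every few hours on a $20/month subscription.
new_hours_per_shot = 6                      # "a shot every few hours"
new_shots = old_runway_months * 30 * 24 // new_hours_per_shot
new_cost_per_shot = 20 * old_runway_months / new_shots

print(f"Then: ~{old_shots} shots at ~${old_cost_per_shot:,.0f} each")
print(f"Now:  ~{new_shots:,} shots at ~${new_cost_per_shot:.2f} each")
```

Even with these rough inputs, the cost per iteration drops by roughly six orders of magnitude, which is the whole point: the bottleneck is no longer capital, it is judgment.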

This dramatic drop in costs, coupled with exponentially faster iteration speeds, has led to a flood of startups entering the market in each category. Competition has never been fiercer. This relentless pace also means faster failures, and the startup graveyard is now overflowing.

For early-stage investors, picking winners from this influx of startups has become significantly harder. In the past, you might have been able to identify the category winner out of 10 similar companies. Now, it feels like mission impossible when there are hundreds—or even thousands—of startups in each category. Many of them are even invisible, flying under the radar for much longer because they don’t need to fundraise.

Of course, there will still be many new billion-dollar companies. In fact, I am convinced that this AI-driven platform shift will produce more billion-dollar winners than ever—across virtually every established category and entirely new ones that don’t yet exist. But by the law of large numbers, spotting them among thousands of startups in each category is harder than ever.

If you’re using the same lens that worked in the past to spot and fund these future tech giants, good luck.

That’s why, for a long time now, we’ve been using a very different lens to identify great opportunities with highly defensible moats to stay ahead of the curve. For example, we’ve been exclusively focused on deep tech—a space where we know we have a clear edge. From technology to product to operations, we have the experience to cover the full spectrum and support founders through the unique challenges of building deep tech startups. So far, this approach has been working really well for us.

I guess we are taking our own advice. As a VC firm, we also need to be constantly improving and striving to be unrecognizable every two years!

There’s no doubt the rules of early-stage VC have shifted. How we access, assess, and assist startups has evolved dramatically. The great AI democratization is affecting all sectors, and venture capital is no exception.

For investors who can adapt, this is a time of unparalleled opportunity—perhaps the greatest era yet in tech investing. The playing field has been levelled, and massive disruption (and therefore opportunities) lies ahead. Incumbents are vulnerable, and new champions will emerge in each category – including VC!

Investing during this platform shift is both exciting and challenging. And I wouldn’t want it any other way, because those who figure it out will be handsomely rewarded.


Fabless + ventureLAB is Cloud Computing for Semiconductors

This is a follow-up blog post to my last piece about Blumind.

More than two decades ago, before I started my first company, I was involved with an internet startup. Back then, the internet was still in its infancy, and most companies had to host their own servers. The upfront costs were daunting—our startup’s first major purchase was hundreds of thousands of dollars in Sun Microsystems boxes that sat in our office. This significant investment was essential for operations but created a massive barrier to entry for startups.

Fast forward to 2006 when we started Wattpad. We initially used a shared hosting service that cost just $5 per month. This shift was game-changing, enabling us to bootstrap for several years before raising any capital. We also didn’t have to worry about maintaining the machines. It dramatically lowered the barrier to entry, democratizing access to the resources needed to build a tech startup because the upfront cost of starting a software company was virtually zero.

Eventually, as we scaled, we moved to AWS, which was more scalable and reliable. Apparently, we were AWS’s first customer in Canada at the time! It became more expensive as our traffic grew, but we still didn’t have to worry about maintaining our own server farm. This significantly simplified our operations.

A similar evolution has been happening in the semiconductor industry for more than two decades, thanks to the fabless model. Fabless chip manufacturing allows companies—large or small—to design their semiconductors while outsourcing fabrication to specialized foundries. Startups like Blumind leverage this model, focusing solely on designing groundbreaking technology and scaling production when necessary.

But fabrication is not the only capital-intensive aspect. There is also the need for other equipment once the chips are manufactured.

During my recent visit to ventureLAB, where Blumind is based, I saw firsthand how these startups utilize shared resources for this additional equipment. Not only is Blumind fabless, but they can also access various hardware equipment at ventureLAB without the heavy capital expenditure of owning it.

(Photos from the visit: testing how the chip performs at -40°C, alongside Blumind’s three tapeouts: Jackpine (first), Wolf (second), and BM110 (third).)

The common perception that semiconductor startups are inherently capital-intensive couldn’t be more wrong. The fabless model—in conjunction with organizations like ventureLAB—functions much like cloud computing does for software startups, enabling semiconductor companies to build and grow with minimal upfront investment. For the most part, all they need initially are engineers’ computers to create their designs until they reach a scale that requires owning their own equipment.

Fabless chip design combined with shared resources at facilities like ventureLAB is democratizing the semiconductor space, lowering the barriers to innovation, and empowering startups to make significant advancements without the financial burden of owning fabrication facilities. Labour costs aside, the upfront cost of starting a semiconductor company like Blumind could be virtually zero too.

That’s why the saying, “software once ate the world alone; now, software and hardware consume the universe together,” is becoming true at an accelerated pace. We have already made several investments based on this theme, and we are super excited about the opportunities ahead.


Two Small Fish Ventures Celebrates the Merger of Printful and Printify

We’re thrilled to share that Printify, a company we have proudly backed since its first funding round, has entered into a merger with Printful (see report by TechCrunch). As long-time supporters of the Printify team, we at Two Small Fish Ventures are incredibly happy with this outcome, which marks a significant milestone in the production-on-demand industry and an exciting moment for everyone involved.

Printify and Printful are both leading platforms that empower entrepreneurs and businesses to create and sell custom products worldwide without the need to hold inventory, thanks to their advanced production-on-demand fulfillment networks. Printify has been growing rapidly, now boasting a team of over 700 employees. Combined with Printful’s team, the newly merged company will have well over 2,000 employees, making it by far the number one player in the production-on-demand market.

Printful, with over $130 million raised and a valuation exceeding $1 billion, and Printify, backed by $54.1 million in funding, have established themselves as the top two global leaders in this field. This merger solidifies their position as the dominant force in the industry, setting new standards and driving innovation in production-on-demand services worldwide. We’re proud to have supported Printify from the very beginning and look forward to witnessing the next chapter in their remarkable journey.

P.S. In the true spirit of unity, founders Lauris Liberts and James Berdigans sealed the deal by swapping T-shirts with each other’s logos—because nothing says “teamwork” like wearing the competition’s brand!


Axiomatic AI – Make the World’s Information Intelligible

Today’s blog post is brought to you by Eva Lau. She will talk about one of our recent investments: Axiomatic AI.

Congratulations to Axiomatic on their recent US$6M seed round led by Kleiner Perkins! Two Small Fish Ventures is thrilled to have invested since the company’s inception—and to be the only Canadian investor—in what promises to be a game-changer in solving fundamental problems in physics, electronics, and engineering.

Why is this important? Large Language Models (LLMs) excel at languages (as their name suggests) but struggle with logic. That’s why AI can write poetry but struggles with math, as LLMs mainly rely on ‘pattern-matching’ rather than ‘reasoning.’

This is where Axiomatic steps in. The company’s secret sauce is its new AI model called Automated Interpretable Reasoning (AIR), which combines advances in reinforcement learning, LLMs, and world models. Axiomatic’s mission is to create software and algorithms that not only automate processes but also provide clear, understandable insights to fuel innovation and research, ultimately solving real-world problems in engineering and other industrial applications.

The startup is the brainchild of world-renowned professors from MIT, the University of Toronto, and The Institute of Photonic Sciences (ICFO) in Barcelona. The team includes leading engineers, physicists, and computer science experts.

With its innovative models, the startup fits squarely within our fund’s focus: the next frontier of computing and its applications. As all TSF partners are engineers, product experts, and recent operators, we are uniquely positioned to understand the potential of Axiomatic and support the team. 

Axiomatic’s new AIR model is well-positioned to accelerate engineering and scientific discovery, boost productivity by orders of magnitude in the coming years, and ultimately make the world’s information intelligible.


The depressing numbers of the venture-capital slump don’t tell the full story

Thank you to The Globe for publishing my second op-ed in as many weeks: The depressing numbers of the venture-capital slump don’t tell the full story.

The piece is now available in full here:

Bright spots in the current venture capital landscape exist. You just need to know where to look.

Recent reports are right. Amid high interest rates, venture capitalists have a shrinking pool of cash to dole out to hopeful startups, making it more challenging for those companies to raise funding. In the United States, for example, startup investors handed out US$170.6-billion in 2023, a decrease of nearly 30 percent from the year before.

But the headline numbers don’t tell the whole story.

There’s a night-and-day difference between the experience of raising funds for game-changing, deep-technology startups that specialize in artificial intelligence and related fields, such as semiconductors, and for those trying to innovate with what’s referred to as shallow tech.

Remember the late 2000s? Apple’s App Store wasn’t groundbreaking in terms of technical innovation, but it nonetheless deserves praise because it revolutionized the smartphone. At the time, the App Store’s charts were dominated by simplistic applications, from infamous fart apps to iBeer, the app that let you pretend you were drinking from your iPhone.

That’s the difference: those building game-changing tools versus those whose products are simply trying to ride the wave.

Tons of startups are pitching themselves as AI or deep-tech companies, but few actually are. This is why many are having trouble raising funds in the current climate.

It’s also why the era of shallow tech is over, and why deep-tech innovations will reshape our world from here on out.

Toronto-based Ideogram, a deep-tech startup, was the first in the industry to integrate text and typography into AI-generated images. (Disclosure: This is a company that is part of my Two Small Fish Ventures portfolio. But I’m not mentioning it just because I have a stake in it. The company’s track record speaks for itself.)

Barely one year old, the startup has fostered a community of more than seven million creators who have generated more than 600 million images. It went on to close a substantial US$80-million Series A funding round.

As a comparison, Wattpad, the company I founded, which later sold for US$660-million, had raised roughly US$120-million in total. Wattpad’s Series A in 2011, five years after inception, was US$3.5-million.

The speed at which Ideogram achieved so much in such a short period of time is eye-popping.

The “platform shifts” over recent decades have largely played out in the same way. From the personal-computer revolution in the late 20th century to the widespread adoption of the internet and cloud computing in the 2000s, and then the mobile era in the 2010s, there’s a clear pattern.

Each shift unleashed a wave of innovation to create new opportunities and fundamentally reshape user behaviour, democratize access and unlock tremendous value. These shifts benefited the billions of internet users and related businesses, but they also paved the way for “shallow tech.”

The late 2000s marked the beginning of a trend where ease of creation and user experience overshadowed the depth of innovation.

When Instagram launched, it was a straightforward photo-sharing app with just a few attractive filters. Over time, driven by the massive amounts of data it collected, it evolved into one of the leading social media platforms.

This time is different. The AI platform shift makes it harder for simplistic, shallow-tech startups to succeed. Gone are the days of building a minimum viable product, accumulating vast data and then establishing a defensible market position.

We’re entering the golden age of deep-tech innovation, and in order to be successful, startups have to embrace the latest platform shift – AI. And this doesn’t happen by tacking on “AI” to a startup’s name the way many companies did with the “mobile-first” rebrand of the 2010s.

In this new era, technological depth is not just a competitive advantage but also a fundamental pillar for building successful companies that have the potential to redefine our world.

For example, OpenAI and Canada’s very own Cohere are truly game-changing AI companies that have far more technical depth than startups from the previous generation. They’ve received massive funding partly because the development of these kinds of products is very capital-intensive but also because their game-changing approach will revolutionize how we live, work and play.

Companies like these are the bright spots in an otherwise gloomy venture-capital landscape.


Software Once Ate the World Alone; Now, Software and Hardware Consume the Universe Together

Over a decade ago, in his blog post titled “Why Software is Eating the World,” Marc Andreessen explained why software was transforming industries across the globe. Software would no longer be confined to the tech sector but would permeate every aspect of our lives, disrupting traditional businesses, creating new opportunities, driving innovation, and reshaping the competitive landscape. Overall, the post underscored the profound impact of software on the economy and society at large.

While the prediction in his blog post was mostly accurate, today the world has still only been partially eaten by software. Although there are many opportunities for software alone to completely transform user behaviour, upend workflows, or cause other disruptions, the low-hanging fruit has mostly been picked. That’s why I said the days of shallow tech are behind us now.

Moving forward, more and more opportunities will require hardware and software to be designed and developed together from the get-go, so that they work harmoniously and make an impact that otherwise would not be possible. The best example people can relate to today is Tesla. For those who have driven one, I trust many would testify that its software and hardware work really well together. Yes, the self-driving software might be buggy. Yes, the build quality of the hardware might not be the best. But with many features on their cars, from charging to navigation to even warming up the car remotely, you can just tell they are not shoehorning their software and their app into their hardware, or vice versa.

On the other hand, on many cars from other manufacturers, you can tell their software and hardware teams are separated by the Grand Canyon and perhaps only seriously talk to each other weeks before the car is launched 🙂

We also witness the same thing down to the silicon level. From building the next AI chip to the industrial AI revolution to space tech, software and hardware convergence is happening everywhere. For instance, the high energy required by LLMs is partially because the software “works around” the hardware, which was not designed with AI in mind in the first place. Changes are already underway, ensuring that software and hardware dance together. There is a reason why large tech players like OpenAI and Google are planning to make their own chips.

We are in the midst of a once-in-a-decade “platform shift” because of generative AI. In the last platform shift more than a decade ago, when the confluence of mobile and cloud computing created a massive disruption, there was one “iPhone moment,” and then things progressed continuously. This time, new foundation models are launching at a break-neck pace, further accelerated by open source – so fast that we are now experiencing one iPhone moment every few weeks.

All of this is happening while AI-native startups are an order of magnitude more capital-intensive than those of the past cycle. At the same time, investors are willing to write big cheques to these companies, which is perhaps appropriate given the massive opportunities ahead of us.

Investing in this environment is both exciting and challenging, as assessing these new opportunities is drastically different from assessing the previous generation of software-only, shallow-tech startups.

The next few years are going to be wild.


The Right Type of Investors

Most of Two Small Fish Ventures’ portfolio companies are based in North America. However, we also invest globally, as we firmly believe that global companies can be built anywhere. To us, where founders and their teams sleep at night is irrelevant to their potential for greatness.

Consequently, we actively engage with many tech ecosystems, regardless of their size. A pervasive issue we’ve encountered across these ecosystems is the challenge entrepreneurs face in finding investors who provide not just capital but the right kind of support. This problem is more acute in less developed ecosystems, but even those that are more established are not exempt.

An investor from another ecosystem eloquently discussed this issue in an article. I couldn’t have said it better myself, so with her permission, I’m sharing her insights here, albeit anonymized to avoid casting any ecosystem in a negative light. After all, this challenge is universal:

There are plenty of rich people and “wantrepreneur” investors in our community, but most of them have made their fortune in real estate, finance, or other traditional sectors. They have great intentions, but unfortunately they do not have experience in investing in technology and innovation. Some of them would take too much equity ownership. Some of them have conflicts of interest, pursuing their own agendas and pushing their founders to work on the products or customers that they want. Some are so risk-averse that they structure their startup investment as if it were a personal loan. We have seen our startup founders take money from these investors, and it almost always ends in disaster.

What our community really needs are startup investors who have “been there and done that.” Otherwise, we will continue to be stuck in this vortex of the wrong investors investing in the wrong companies. We need investors who truly understand the startup founders’ blood, sweat, and tears. Someone who knows how to be a guide and a coach. Someone who knows how to provide advice, connections, and funding only when the founder really needs it.

To achieve this goal, we need to invite investors from established ecosystems to teach local investors the best practices in venture investing. And we do believe these skills can be learned. The local investor community needs the knowledge and skills to make investment decisions that maximize the founders’ chances of success.

Investing in innovation significantly differs from other forms of investment. For instance, real estate investments have established methods to evaluate rental yields, and traditional businesses use EBITDA to estimate enterprise values. However, early-stage startups, particularly those disrupting the status quo, cannot be evaluated using these metrics because of their lack of yields or EBITDA, or even clear business models! 

Often, experienced investors from other sectors mistakenly apply the same approach when they invest in tech startups, leading to almost certain failure. This can result in many problems, such as a messy cap table, rendering the startup unfundable in future funding rounds and causing it to “die young” despite its potential. We’ve regrettably had to pass on numerous investment opportunities due to such issues.

As the quoted investor highlighted, learning the skills and best practices in tech investing is possible. Needless to say, the best way to do this is to learn from people who have “been there and done that.” It’s crucial to acknowledge that investing in tech startups – and innovations in general – is a different sport than other sectors. 

After all, bringing a tennis racket to a hockey game is a recipe for disaster.


VC is a Home Run Derby with Uncapped Runs

There’s an old saying that goes, “Know the rules of the game, and you’ll play better than anyone else.” Let’s take baseball as our example. Aiming for a home run often means accepting a higher number of strikeouts. Consider the legendary Babe Ruth: he was a leader in both home runs and strikeouts, a testament to the high-risk, high-reward strategy of swinging for the fences.

Yet, aiming solely for home runs isn’t always the best approach. After all, the game’s objective is to score the most runs, not just to hit the most home runs. Scoring involves hitting the ball, running the bases, and safely returning to home base. Sometimes, it’s more strategic to aim for a base hit, like a single, which offers a much higher chance of advancing runners on base and scoring.

The dynamics change entirely in a home run derby contest, where players have five minutes to hit as many home runs as possible. Here, only home runs count, so players focus on hitting just hard enough to clear the fence, rendering singles pointless.

Imagine if the derby rules also rewarded the home run’s distance, adding extra runs for every foot the ball travels beyond the fence. For context, the centre-field fence is typically about 400 feet from home plate. So, a 420-foot home run, clearing the centre-field fence by 20 feet, would count as a 20-run homer. This rule would drastically alter players’ strategies. Not only would they swing for the fences with every at-bat, but they would also hit as hard as possible, aiming for the longest possible home runs to maximize their scores, even if it reduced their overall chances of hitting a home run.

This scenario mirrors early-stage venture capital, where I liken it to a home run derby with uncapped runs. The potential upside of investments is enormous, offering returns of 100x, 1000x, or more, while the downside is limited to the initial investment. Unlike in a derby, where physical limits cap the maximum score, the VC world is truly without bounds, with numerous instances of investments yielding thousandfold returns.

This distinct dynamic makes assessing VCs fundamentally different from evaluating other asset classes, where protecting the downside is crucial. In the VC realm, the potential for nearly limitless returns makes losses inconsequential, provided VCs invest in early-stage companies with the potential for exponential growth. The risk-reward equation in venture capital is thus highly asymmetrical, favouring bold bets on moonshot startups.

For illustration, let’s consider two hypothetical venture capital firms: Moonshot Capital and PlayItSafe Capital.

Moonshot Capital approaches the game like a home run derby with uncapped runs. They aim for approximately 20 companies in their portfolio, expecting that around 20% will be their home runs—or “value drivers”—capable of generating returns from 10x to 100x or more. 

Imagine they invest $1 in each of 20 companies. One yields a 100x return, three bring in 10x, and the remaining are strikeouts. The outcome would be:

(1 x 100 + 3 x 10 + 16 x 0) x $1 = $130

Their $20 investment becomes $130 (a 6.5x return), a gain of $110, despite 17 out of 20 companies being strikeouts. Yes, you read that correctly: 85% of the portfolio companies failed!

PlayItSafe Capital, on the other hand, prioritizes downside protection, ensuring none of the portfolio fails but also avoiding riskier bets. In the end, one company generates one “10x” return, five companies return 3x, and the remainder is equally split between breakeven and failing.

(1 x 10 + 5 x 3 + 7 x 1 + 7 x 0) x $1 = $32

Despite several “successes” and very few “losses,” the fund’s gain of $12 pales in comparison to Moonshot Capital’s $110. Even increasing the number of companies generating a 3x return to 10, with no losses at all (which is almost impossible for early-stage VCs), only yields a $29 gain on a total investment of $20:

(1 x 10 + 10 x 3 + 9 x 1) x $1 = $49

No one should invest in the early-stage VC asset class with the expectation of such a paltry return.
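The portfolio arithmetic above can be checked with a few lines of code; the cheque sizes and multiples are the ones from the examples, nothing more:

```python
def portfolio_return(outcomes, cheque=1):
    """Total invested and returned by a fund, given (count, multiple) outcomes."""
    invested = cheque * sum(count for count, _ in outcomes)
    returned = cheque * sum(count * multiple for count, multiple in outcomes)
    return invested, returned

# Moonshot Capital: one 100x, three 10x, sixteen strikeouts
print(portfolio_return([(1, 100), (3, 10), (16, 0)]))  # (20, 130): a 6.5x fund

# PlayItSafe Capital: one 10x, five 3x, seven breakeven, seven failures
print(portfolio_return([(1, 10), (5, 3), (7, 1), (7, 0)]))  # (20, 32): only 1.6x
```

The striking part is that Moonshot’s 85% failure rate is irrelevant: one 100x outcome dominates everything else in the portfolio.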

As illustrated, success isn’t about minimizing failures, nor is it about the number of “3x” companies or even the number of “unicorn logos” in the portfolio; how early the investment was made in those unicorns is crucial as well. One needs to invest in a unicorn when it is still a baby unicorn, not after it has become one.

In summary:

Venture funds live or die by one thing: the percentage of the portfolio that becomes “value drivers”, i.e. those capable of generating returns of 10x, 100x, or even 1000x.

At Two Small Fish Ventures, we are the IRL version of Moonshot Capital. Every investment is made with the belief that $1 could turn into $100. We know that, in the end, only about 20% of our portfolio will become significant value drivers. Yet, with each investment, we truly believe these early-stage companies have the potential to become world-class giants and category creators when we invest. 

This is what venture capital is all about: not only is it exhilarating to be at the forefront of technology, but it’s also a great way to generate wealth and, more importantly, play a role in supporting moonshots that have a chance to change how the world operates.

P.S. This is Part 1 of this series. You can read Part 2, “Winning the Home Run Derby with Proper Portfolio Construction” here.


Goodbye 2010s, Hello 2020s

As we enter the final hours of the 2010s, to reflect and look forward I would like to share a couple of very contrasting collages. One was taken this year. The other was taken exactly 10 years ago.

Wattpad grew ~100x in virtually every single dimension – number of employees, the size of the office, number of users, number of stories shared but more importantly the positive impact on the Wattpad communities, our employees, our city and millions of lives we touched.

Two Small Fish Ventures grew from a side project to a VC firm with tens of rocket ships in the portfolio.

Most importantly, although the size of my family has not grown 100x (thank God!), my two little girls + an amazing lady have become two amazing young ladies + an even more amazing lady. They are the most influential on the most influential. They are the best and unquantifiable.

Look forward to 100x our impact on 100x more people in the 2020s!

Everything Starts Small

It’s a situation founders know well: the agonizing wait to see if the product or service they’ve launched will take off. The reality is, it takes months or even years to find product-market fit. And once that happens, the struggle doesn’t really end, because there’s always another, more complex problem to solve. It can begin with product-market fit, then morph into customer/user acquisition and engagement, and then shift to monetization. For entrepreneurs, building a business can feel like a never-ending cycle of wait-and-see.

When we launched Wattpad 13 years ago, my co-founder Ivan and I immediately started monetizing with ads. And when I say we “immediately monetized” the site, I really mean we earned $2 in monthly ad revenue a full year later. A minuscule amount. 

When we first launched our Android app, we saw about 10 downloads in the first month. Even in 2011 when Android really started to take off our download numbers were still puny. 

Today, we see more than 60,000 Android users sign up every day and half of our daily usage comes from Android users. Our monthly advertising revenue is in the hundreds of thousands of dollars. We’re no longer talking about trivial amounts. It’s been a long road that had to start somewhere. 

‘Everything starts small’ is a valuable mantra for any entrepreneur. Look at Spotify: when it first launched in the US in 2010, it had 100,000 paid subscribers. Today, Spotify’s number of paid subscribers is about to cross the 100 million mark.

Not too long ago, we launched Paid Stories and we also introduced a subscription model called Premium at Wattpad. The numbers are still small. But they won’t stay that way forever (especially since we’ve rolled out these programs globally). As long as we keep improving, keep optimizing and keep promoting — basically, if we continue to hustle and grind as all great entrepreneurs do — the numbers will go up.

But we can’t expect a silver bullet. No single feature, promo, or country launch will 10x these numbers overnight. While it’s not impossible to find a 10x growth hack, the reality is that it’s probably better to find 100 little things that each grow the numbers 10%.

My fellow entrepreneurs, please remember: Tomorrow will be better than today. The day after tomorrow will be better than tomorrow. Everything starts small.

Strategic Partners Turn Your Vision Into Reality Faster Than You Can

A few months ago, Wattpad announced a partnership with Anvil Publishing in the Philippines. Together, we’re launching Bliss Books, a new Young Adult imprint that’ll bring some of the biggest Wattpad stories and authors to bookshelves across the country. 

The news means Wattpad can realize the vision I laid out in the Master Plan much, much faster. But really, speed is just one of the values a strategic partner brings to the table.

Anvil also has deeper insights into local purchasing habits and consumer behaviour than we do. The first part of the Master Plan is to “Discover more great stories,” and we do this by leveraging our Story DNA machine-learning technology and a passionate community to find unique voices and amazing stories that are validated in Tagalog. Anvil can then corroborate our insights with their local knowledge to help ensure a successful adaptation.

The best strategic partners also have a reputation you can piggyback off of. Another element of the Master Plan is to “Turn these stories into great movies, TV shows, print books, etc.” Anvil has a reputation for publishing high-quality books, and that’s exactly what we want to do.

Anvil is the publishing arm of National Book Store, which has hundreds of bookstores. Its established presence means we – through NBS – have the ability to distribute Wattpad books to practically every part of the country, tying into another key part of the Master Plan: to “Distribute and monetize content on and off Wattpad and earn money for storytellers.”

The Philippines is one of Wattpad’s largest markets and a very important one, since it’s home to some of our most passionate users. Plus, when you factor in the expertise and reach of Anvil, it was an easy decision to partner with this local company, which can help us continue to celebrate and reward Filipino authors and their fans.

Entrepreneurs: if you have the ability to form a partnership with another complementary company, seize it. The strategic upside is great and may help you realize your vision faster than you ever could alone.  

Your iteration rate is the key to finding product-market fit for your app

For any entrepreneur launching an app, finding product-market fit is a lot like finding the Golden Ticket: it’s rare, but when it happens it’s life-changing.

Unlike enterprise software, which is focused on solving a known problem or pain point for clients, a consumer app’s end users can’t easily tell you what they want. Think about it this way: before the iPhone launched, no consumer research would have pointed out the need for a touchscreen, keyboardless device. Before Snapchat, no consumer would have said they wanted the ability to send ephemeral messages.

Consumers aren’t able to tell you what they want; this makes consumer products a shot in the dark. There is no guarantee if or when product-market fit can be found. It’s usually a long journey of continuous iteration.

And ongoing iteration is what gets you to product-market fit. Each iteration gives you one extra at-bat. Hitting a home run is easy if you can strike out 100 times instead of 3. Y Combinator’s Sam Altman said it best in a tweet.


Finding product-market fit is hard. Look at how many consumer products Facebook and Google have shut down even with their massive resources (remember FB Paper, the FB Groups app, the Google+ app?). Massive resources can help, but they are not the most critical factor.

In the early days of Wattpad, despite having only a handful of employees, the product looked a bit different every day. We implemented new concepts in the morning, checked in the afternoon, measured overnight, and killed them the next morning if they didn’t work out. That’s how we found product-market fit in many things. And that’s how we left our competitors in the dust.

Although finding product-market fit is freaking hard, it is also very fun and rewarding once you have figured it out.

Keep on iterating!

Masterclass Series: CEO, It’s Your Decision. Don’t Dodge

When you work at a startup, seeking advice and gaining buy-in from the broader team can help you move faster … until it becomes a crutch.

Recently, I bumped into an entrepreneur I invested in. He’s making some changes to the direction of his company, and after explaining them to me, I pointed out some of the potential issues. He immediately asked me: “So, do you want me to revert to the old plan?”

It was the wrong question to ask.

I explained to him that it doesn’t matter what I want. As CEO, with all the context, he’s the only one who can make that decision. As an investor, I’m not thinking about his business 24/7, but he is. It’s his company, and it’s his decision what he does with it (and only his decision). Investors should share their experiences and opinions, but they shouldn’t make decisions that affect the business.

Not long after, I had an investor friend contact me about one of his portfolio companies that’s going through a pretty rough patch. My friend said: “The CEO now blames the board of directors for making the wrong decision.” My ears perked up. This was a red flag, and I told my friend as much.

A company’s board of directors only has one decision to make: Hire and fire the CEO. Inexperienced CEOs have a tendency to defer difficult decisions to the board or even other people in the company. It’s not uncommon to hear a newbie (or unconfident) CEO say something like “My recommendation to the board is …” This isn’t helpful. All this does is enable inexperienced board members to jump in and make decisions out of context. It’s tragic, really.

Obviously, I’m not suggesting there is no value to be gained from consulting your board: every CEO has blind spots and can benefit from another perspective. But in the end, what happens in the business is always the CEO’s call.

And it doesn’t always have to be the CEO who holds the ultimate decision-making ability (nor should it). I remember speaking with a senior leader at Wattpad, and the person said: “I would advise we do this …” I quickly reminded this person that they are the head of the business unit and the only person accountable for it. It was an important decision with huge implications across the company, so of course, I expected this person would engage with the broader team to think through the different scenarios and make sure all the bases were covered, but at the end of the day, the person was the leader, not an advisor.

These three conversations illustrate one critical point. Whether you’re a co-founder, CEO, technical lead, department manager or even individual contributor, you are the presumed expert in your role, so don’t dodge making tough decisions. Remember: You are not an advisor to your own job.

Don’t Be a Parasite If You Want To Be A Disruptor

I spoke with an entrepreneur whose company is building a new, disruptive product for the education sector. One of the challenges he’s facing is that none of the company’s co-founders have worked in the education sector before. He wondered if he should hire someone with some relevant experience.

Another entrepreneur friend of mine is building a tool that is catered to the public sector. The company is struggling to scale as a business. The sales process is too slow. The product is becoming too specific for one sector.

In both cases when these entrepreneurs asked for my advice, I told them: Don’t be a parasite if you want to be a disruptor.

There are so many verticals out there that still have not been fully transformed by the Internet — education, the public sector, book publishing; the list goes on. But it’s extremely hard to transform any industry if you have a lot of dependencies on the old systems. You can’t think outside the box. Your sales cycle is too long. And often you end up with a product or a service that is incremental at best rather than revolutionary.

Now, there’s nothing wrong with that. In fact, a lot of people have built great businesses by providing incremental solutions like consulting services to the government. But, if you want to build something truly transformative and net-native, then you have to stay as far away from the traditional systems as possible and draw closer to your end users or customers.

If you want to create something truly game-changing and be a disruptor, you can’t begin the journey as a parasite.

Embrace tension to move even faster

As a startup scales, it’s natural for tension to creep up among different teams working on disparate objectives. Do either of these conversations sound familiar?

Showing users more ads can help generate more revenue, but it could also hurt engagement. Do we optimize for revenue or engagement?

We have a limited budget. If we spend it on A, B, and C we won’t be able to pay for X, Y, Z. What should we choose?

The best way entrepreneurs can embrace and then ease tension among their teams is to establish a set of principles. Principles can help teams avoid indecision and move fast.

Take the example above about serving ads at the expense of user engagement: if the team has previously established that ad experiments can’t impact engagement by more than X%, it becomes easier for them to test different combinations of ads to drive the most revenue without negatively impacting engagement.
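A principle like this can even be made mechanical. As a minimal sketch, assuming a hypothetical 2% engagement guardrail and made-up metric names (neither is from the post):

```python
MAX_ENGAGEMENT_DROP = 0.02  # hypothetical principle: ads may not cost more than 2% engagement

def experiment_ships(revenue_lift, engagement_change):
    """An ad experiment ships only if it lifts revenue and any
    engagement drop stays within the agreed-upon guardrail."""
    return revenue_lift > 0 and engagement_change >= -MAX_ENGAGEMENT_DROP

print(experiment_ships(0.10, -0.01))  # True: more revenue, engagement within the guardrail
print(experiment_ships(0.25, -0.05))  # False: revenue is up, but the engagement cost is too high
```

The point is not the threshold itself but that, once it is agreed on, no meeting is needed to decide whether an individual experiment ships.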

Establishing principles streamlines decision making, eliminates unnecessary meetings and propels the company forward. Everyone knows what to do and understands how much (or how little) leeway the team has.

Of course, there will be times when you may not have a principle to fall back on. That’s when the teams representing the conflicting priorities need to escalate the matter further and involve an arbitrator. Most times decisions are reversible and having an arbitrator can resolve issues quickly. In the world of startups, a quick decision always trumps a slow decision (or worse, no decision at all).  

Tension is natural and a sign your company is growing. But as your business grows and becomes more complex, decisions aren’t as straightforward as they used to be. Creating a set of ground rules that inform your team’s priorities and outcomes can help avoid unnecessary confusion and conflict.

Out with the old (product features)

The new year means a fresh start. With that in mind, I urge product managers, designers, engineers and developers – anyone who helps develop a product, really – to think critically about the features they are designing. Have you thought about what features you’ll say goodbye to in January? Because killing features now means better business velocity for the rest of 2019.

As a product and its codebase grow, it is not uncommon to see an increase in technical debt. This debt may arise because usage of a feature has scaled beyond its original design (you can’t expect a Toyota Corolla to reach 300 km/h no matter how many turbochargers you add) or because a feature, and subsequently its code, is used in more ways than originally intended (like a lawn mower turned into a snow blower – it works, but it shouldn’t). Often, technical debt accumulates because old or infrequently used features aren’t retired.

There is a cost to removing these old features, of course, but removing them is significantly cheaper in the long run than maintaining relic code. When you support outdated or unused features, you’re also allowing security, performance, and backwards-compatibility issues to arise.

I remember reading an article about Evernote that claimed 90% of their features (and they have thousands of them) are used by less than 1% of their users. Eventually, the company’s velocity ground to a halt because every simple feature update required numerous discussions across the company before the change could be implemented.

So make no mistake, it is desirable and even essential to purge old product features. Here’s how in three steps:  

  1. First identify a feature that you think should be retired. Then measure the usage of that feature. The data won’t lie. If usage is low, proceed to step two.
  2. The numbers may not tell you the whole story. Talk to some of the old-timers who have more context than you and understand why the feature existed in the first place. In many cases, you’ll be surprised by the reasons.
  3. Decide to purge, modernize or maintain the status quo. Make a decision and then execute your action plan.
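The three steps above boil down to a small decision rule. As a sketch, with an illustrative 1% usage threshold and made-up names (the post prescribes neither):

```python
def triage_feature(usage_ratio, still_needed, threshold=0.01):
    """Steps 1-3: measure usage, weigh the context from old-timers, then decide.

    usage_ratio  -- fraction of users touching the feature (step 1's data)
    still_needed -- whether old-timers surfaced a reason it must exist (step 2)
    """
    if usage_ratio >= threshold:
        return "maintain"  # still widely used: keep the status quo
    return "modernize" if still_needed else "purge"

print(triage_feature(0.30, still_needed=True))    # maintain
print(triage_feature(0.002, still_needed=True))   # modernize: low usage, but a real reason to exist
print(triage_feature(0.002, still_needed=False))  # purge
```

The value of writing the rule down is the same as with any principle: the team stops relitigating each feature and just executes.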

Years ago, I was part of a team that dedicated six months to finding bugs and purging unused features. On the surface, it seemed we were spending an inordinate amount of time and effort ‘looking in the rear-view mirror’ and not working on things that took the product forward. In reality, though, those six months pushed the product much, much further ahead. By the end of it, the product ran faster, the UI was cleaner because many unused features were gone, and annoying glitches were finally addressed. The app went from 1 star to 5 stars in a few months without us adding anything new.

It’s a good reminder: Less is more. Simple is good.

When tech giants move next door

A slew of international tech companies – Google, Uber, Samsung, Microsoft, Amazon – have committed to or expressed interest in setting up shop in Toronto. If you’re a homegrown startup or scaleup you can’t help but think about the implications of having these giants in your backyard.

Companies often expand their footprint to lower costs, access specialized talent or for a host of other reasons. It’s not new. They aren’t the first international companies who want to set up shop in Toronto, and won’t be the last.

And why not? Toronto is a world-class city with some of the best universities in the world producing some of the finest technical and business talents. We’re home to an incredibly diverse community who have the perspective and understanding to solve global issues and build products and services that work for the world.  

Colleagues and friends have recently been asking me for my take on these moves. Are they helpful or harmful to the city and the local tech ecosystem?

In my opinion, we should welcome these moves – but be wary of them.

When a few foreign companies decide to move to a burgeoning city, they can help build a critical mass that directly supports homegrown companies by spurring interest in the region. They attract high caliber talent and then provide opportunities for these employees to hone their skills and learn new ones so they can further develop into well-rounded and in-demand workers.

But too many foreign companies in a single locale can make it seem like they’ve colonized the area, leaving little room for local businesses. It becomes too difficult to compete, too expensive to stay in your own backyard. Think about it: if data is the new oil, do you really want all the ‘oil companies’ to be foreign-owned?

So it’s not a choice of either-or. Having zero international companies operating locally won’t stimulate the ecosystem. With too many foreign companies, locals lose the ability to control their own destiny, and eventually, ideas and innovation become stifled.

For now, I welcome these new companies into our backyard but make no mistake, it can never replace building our own homegrown giants. I’m certain that the incredible Toronto tech ecosystem will continue to make waves regardless of who moves next door.