Uniquely World Class

Every VC says they back high-quality companies.

That is like saying humans need to sleep, eat, and drink. True, yes. Useful, no.

The more important question is: what does “high quality” actually mean in venture?

For us, it means the potential to become uniquely world class.

A company that can become the clear winner in an important category. A company with real moats. A company that can build something enormous.

This is why we spend so much time trying to understand what is truly unique about a company. Not what is interesting. Not what demos well. Not what sounds differentiated in a pitch deck. Not what the ARR is today. What is actually hard to replicate? What gets stronger over time? What creates a widening gap versus everyone else? And the only way to know is to spend time with these deep tech founders and really understand how the technology, the product, and the company work.

I have written a long blog post on this topic on Two Small Fish’s website. Here is the link.

Announcing Our Investment in YScope: Make Logging Faster, Smarter, and More Efficient

We are super excited to share that Two Small Fish led YScope’s US$3.9 million financing, with Snow Angels (the Snowflake alumni investment syndicate), Next Wave NYC, UTEST, and other successful founders participating.

YScope was cofounded by University of Toronto Professor Ding Yuan, who is also CEO, Professor Michael Stumm, Dr. Kirk Rodrigues, Dr. David Lion, Yu (Jack) Luo, and Beverly Xu (Guangji Xu). It is a deeply impressive team building open-source logging infrastructure for the AI era, combining deep systems research with real-world production traction.

Its core technology, CLP (Compressed Log Processor), makes log storage, search, and analytics dramatically more efficient for both humans and AI, across cloud and edge environments.
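The core insight is easy to see in miniature: most log messages are instances of a small set of templates, so separating the static template from the variable values makes logs dramatically more compressible and searchable. Here is a toy Python sketch of that idea (an illustration only, not CLP's actual implementation):

```python
import re

# Toy template extraction: real systems such as CLP use far more
# sophisticated parsing and pair it with a general-purpose compressor.
LOGS = [
    "2024-01-01 12:00:01 INFO request 4812 served in 13 ms",
    "2024-01-01 12:00:02 INFO request 4813 served in 9 ms",
    "2024-01-01 12:00:02 WARN retrying task 77 after 250 ms",
    "2024-01-01 12:00:03 INFO request 4814 served in 11 ms",
]

def split_line(line: str):
    """Replace numeric tokens with a placeholder, keeping the values aside."""
    variables = re.findall(r"\d+", line)
    template = re.sub(r"\d+", "\x11", line)  # \x11 marks a variable slot
    return template, variables

templates = {}   # template text -> small integer id
encoded = []     # each line becomes (template_id, variable values)
for line in LOGS:
    template, variables = split_line(line)
    tid = templates.setdefault(template, len(templates))
    encoded.append((tid, variables))

# Four lines collapse to two distinct templates; only the template ids
# and the variable values need to be stored per line.
print(len(templates))  # 2
print(encoded[0])
```

Repeated templates compress down to almost nothing, and searches can match against the small template dictionary instead of scanning every raw line.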

We believe this is a massive opportunity. As the cost of intelligence collapses, AI agents, robots, autonomous vehicles, and other intelligent systems will generate orders of magnitude more machine-generated events. A robotic finger moves. A self-driving car makes a slight turn. An AI agent retries a task. Each action creates an event, and the infrastructure layer that can handle that explosion efficiently will matter enormously.

YScope is also a strong mutual fit for TSF. We invest in the next frontier of computing and its applications, and we know firsthand how painful logging becomes at scale. I have spent enough time with logs that I will never get back. At Wattpad, logging every tap, swipe, and click could easily add up to billions of events a day. That is why YScope’s traction is so compelling, from powering Uber’s production logging platform to operating across more than 1.5 million connected electric vehicles and being used by Fortune 500 organizations.

Congrats to Ding, Michael, Kirk, David, Jack, Beverly, and the entire YScope team. Full blog post here.

Announcing Our Investment in ByteShape: Make AI Massively More Efficient

AI has a massive efficiency problem. It uses too much compute. It costs too much. It uses too much energy. And it is too slow.

Today, it can take a serious cluster of GPUs and a very non-trivial amount of electricity just to answer a simple question like “Can you summarize this document?” or “What should I reply to this email?” The machinery underneath is anything but.

This is why we invested in ByteShape. The company was co-founded by a world-class team out of the University of Toronto: Professor Andreas Moshovos [link]—whose group’s papers have amassed more than 10,000 citations—together with scientists Miloš Nikolić [link], Enrique Torres Sánchez [link], and Ali Hadi Zadeh [link], whose life’s work is making computation more efficient. Both Ali and Miloš were also postgraduate affiliates of the Vector Institute, and Miloš’s PhD research formed the foundation of ByteShape’s core technology—work that earned him recognition as an “ML and Systems Rising Star” by MLCommons last year.

They are building the kind of deep technology that changes the economics of AI deployment, then changes what products become possible. 

Quantization, In Plain English

Many techniques underpin what ByteShape does. One of them jumped out: quantization.

Quantization is about using fewer bits to represent the numbers inside a model. Models are typically trained in higher-precision formats because the extra precision keeps learning stable and accurate. But AI inference often does not need that much precision everywhere. If you can safely represent weights and activations with fewer bits, you shrink memory use and speed up compute, often dramatically, while keeping outputs essentially the same.
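To make that concrete, here is a minimal sketch of symmetric int8 quantization, a generic textbook scheme (not ByteShape's method): every float32 weight is mapped to an 8-bit integer plus one shared scale factor, cutting memory four-fold while keeping each value within half a quantization step of the original.

```python
import numpy as np

# Symmetric int8 quantization: map float weights into [-127, 127] with a
# single scale factor, then map back. A generic textbook scheme, not
# ByteShape's ShapeLearn.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy "layer"

scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

# 4 bytes per weight becomes 1 byte per weight...
print(weights.nbytes, q.nbytes)  # 16384 4096
# ...while every value stays within half a quantization step of the original.
print(float(np.abs(weights - dequantized).max()) <= scale / 2)  # True
```

The hard part, which schemes like this sidestep, is deciding how much precision each part of the model actually needs; that is where learning the datatypes during training comes in.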

ByteShape’s approach, ShapeLearn, does this in a way that feels obvious in hindsight and very hard in practice. ShapeLearn adaptively taps into the AI training process to learn optimal datatypes for parameters and inputs. The result can be 7x faster training and 10x faster inference. 

In layman’s terms, the idea is simple and powerful: fewer bits, less work, and smaller models, without sacrificing results. And all of it is done adaptively.

Then ByteShape takes it one step further. ShapeSqueeze is their lossless compression layer that applies per-value encoding to minimize off-chip data transfers, with up to 40% extra compression.
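Why can per-value encoding squeeze out more after quantization? Because quantized values are rarely uniform: most cluster near zero, so a code that spends fewer bits on common values beats a fixed 8 bits per value. A generic entropy calculation (an illustration of the headroom, not ShapeSqueeze's actual encoding) makes the point:

```python
import math
from collections import Counter

# Quantized weights cluster near zero, so their entropy sits well below
# the 8 bits a fixed int8 layout spends on every value.
values = [0] * 500 + [1] * 200 + [-1] * 200 + [2] * 50 + [-2] * 50
counts = Counter(values)
n = len(values)

# Shannon entropy: a lower bound on the average bits per value that any
# lossless per-value code can achieve on this distribution.
entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
print(round(entropy, 2))          # well under 8.0
print(round(1 - entropy / 8, 2))  # fraction of bits a good code could save
```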

Put the two together, and you get something that really matters in the real world. ShapeLearn reduces what the model needs to store and compute. ShapeSqueeze reduces what the hardware needs to move around. Less compute and less data movement means faster AI, lower cost, and lower energy.

This is not limited to savings in cloud data centres. It is a step-function improvement in what can run locally, which means a step-function improvement in what products can exist. It opens the door to privacy-sensitive and offline workflows, on-device agents, and embedded intelligence in robots and machines where speed, power and thermals matter.

Why TSF invested

Two Small Fish Ventures is an early-stage deep tech venture capital firm investing globally in the next frontier of computing and its applications. We invest where foundational breakthroughs create the conditions for new category-defining companies, and we back founders at the earliest stages when the technology is ready for commercialization.

ByteShape fits that thesis perfectly. They are building a foundational efficiency layer for AI that can reshape performance and cost across cloud, enterprise, and edge deployments. And because all TSF partners are engineers with deep operating experience, we do not just evaluate the science. We lean into technology through commercialization, with hands-on support informed by having built and scaled companies ourselves.

With ByteShape, the future is models that run faster, use less energy, and fit on far smaller hardware, without sacrificing the quality that makes them worth using.

Try it yourself on Hugging Face! [link]

Portfolio Highlight: Zinite. Speed and Energy, Two Birds, One Stone

For most of semiconductor history, progress was a simple loop. Shrink transistors. Fit more into the same area. Get faster compute as a byproduct.

That loop had a name. Moore’s Law. It traces back to Intel co-founder Gordon Moore, who observed in 1965 that the number of transistors on a chip, and hence its capabilities, tended to double at a steady cadence, later pegged at roughly every two years. The industry turned that observation into a roadmap. It was never guaranteed to run forever. Now shrinking is harder because we are hitting hard limits in both physics and economics, and the cost of pushing the frontier keeps rising.

So if the curve is going to keep bending upward, the industry needs new scaling vectors beyond making everything smaller in two dimensions.

This is why Two Small Fish invested in Zinite in 2021 at the company’s inception. The thesis was simple then, and it is still simple now. Scale in the third dimension, using proprietary technology protected by patents to enable true 3D chips.

Zinite stayed deliberately in stealth early on, focused on building the core and protecting it properly before saying too much. Five years after we invested, we can finally talk about it more openly.

The company is led by its CEO, Dr. Gem Shoute. Fun fact: her breakthrough was compelling enough that her professors and industry veterans Dr. Doug Barlage and Dr. Ken Cadien, who helped create fundamental IP used in every chip made since 2008, joined her as co-founders.

The Distance Tax

In a recent blog post, I used a factory analogy to explain why speed, latency, and energy are often bottlenecked by movement, not necessarily arithmetic. 

In short, systems don’t lose because they can’t do math. GPUs are already very good at that. Systems lose speed because they can’t feed the math with data fast enough. 

In many systems, moving data costs far more than doing the arithmetic. When movement is expensive, speed and energy efficiency get worse together.
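Back-of-envelope numbers make the point. The constants below are rough, widely cited per-operation energy figures for an older process node (in the spirit of Horowitz's ISSCC 2014 keynote); they are for scale only, not a measurement of any particular chip:

```python
# Rough per-operation energy, in picojoules, at ~45 nm. Order-of-magnitude
# figures only; exact values vary by node and design.
FP32_MULT_PJ = 3.7    # one 32-bit floating-point multiply
SRAM_32B_PJ = 5.0     # read 32 bits from a small on-chip SRAM
DRAM_32B_PJ = 640.0   # read 32 bits from off-chip DRAM

# Fetching one operand from DRAM costs on the order of a hundred multiplies.
print(round(DRAM_32B_PJ / FP32_MULT_PJ))  # 173
# Keeping the data on-chip is over a hundred times cheaper per access.
print(round(DRAM_32B_PJ / SRAM_32B_PJ))   # 128
```

With ratios like these, shortening the distance data travels does more for speed and energy than making the arithmetic itself faster.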

AI inference exacerbates the problem because its workloads put a premium on memory behaviour. In many cases, the limiting factor is not arithmetic. It is how efficiently the system can move data. Bringing memory closer to logic matters because it directly reduces that movement.

Sensing fits in the same frame as logic and memory. Sensors generate raw data at high volume. If the system’s first step is to ship raw data far away before anything useful happens, it pays in bandwidth, latency, and power. The more intelligence that can happen closer to where data is produced, the less the system wastes just transporting information.

So the distance tax is one big problem showing up in three places at once. Logic. Memory. Sensing.

Why 3D Matters for Speed and Energy

When people hear 3D chips, they think density. More transistors per area. That matters. The bigger lever is proximity. Current 3D approaches to deliver more performance per area rely on advanced packaging, which is hindered by cost and the distance tax. 

If memory can live closer to logic, the system avoids transfers that dominate both performance and power. If compute and memory can sit closer to sensing, the system avoids hauling raw streams around before doing anything intelligent.

Every avoided transfer is a double win. Speed improves because stalls go down and effective bandwidth goes up. Energy improves because fewer joules are burned moving bits instead of doing work.

That is the two birds, one stone result.

Five years after we invested, Zinite is far from just a concept. The company is doing exceptionally well, and it represents the kind of platform that can extend performance gains into the post-Moore era by reducing the distance tax, not by asking physics for more shrink, but by making data travel less.

A Day at Ontario Tech University

I spent a full day at Ontario Tech University in Oshawa a few weeks ago. It was my first time on campus, despite it being just over a 40-minute drive from Toronto, where I live. I arrived curious and left with a clearer picture of what they’re building.

Ontario Tech is still a relatively young university, just over two decades old. What’s less well known—and something I didn’t fully appreciate before the visit—is how quickly it has grown in that time, now serving around 14,000 students, and how deliberately it has established itself as a research university rather than simply a teaching-focused institution.

That research orientation shows up not just in output, but in where the university has chosen to build depth—areas that sit close to real systems and real constraints.

This came through clearly in conversations with Prof. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence, whose work focuses on trustworthy and ethical AI. The university has launched Canada’s first School of Ethical AI, alongside the Mindful AI Research Institute, and the work here is grounded in how AI systems behave once deployed—how humans interact with them, and how unintended consequences are identified and managed.

Energy is another area where Ontario Tech has built serious capability. The university is home to Canada’s only accredited undergraduate Nuclear Engineering program, which is ranked third in North America and designated as an IAEA Collaborating Centre. In discussions with Prof. Hossam Gaber, the emphasis was on smart energy systems, where software, sensing, and control systems are developed alongside the physical energy infrastructure they operate within.

I also spent time with Prof. Haoxiang Lang, whose work in robotics, automotive systems, and advanced mobility sits at the intersection of computation and the physical world.

That work is closely tied to the Automotive Centre of Excellence, which includes a climatic wind tunnel described as one of the largest and most sophisticated of its kind in the world. The facility enables full-scale testing under extreme environmental conditions—from arctic cold to desert heat—and supports research that needs to be validated under real operating constraints.

I can’t possibly mention all the conversations I had over the course of the day—it was a full schedule—but I also spent time with Dean Hossam Kishawy and Dr. Osman Hamid, discussing how research, entrepreneurship, and industry engagement fit together at Ontario Tech.

The day also included time at Brilliant Catalyst, the university’s innovation hub, speaking with students and founders about entrepreneurship. I had the opportunity to give a keynote on entrepreneurship, and the visit ended with the pitch competition, where I handed the cheque to the winning team—a small moment that underscored how early many technical journeys begin.

Ontario Tech may be young, but it is already operating with the structure and discipline of a mature research institution, while retaining the adaptability of a newer one.

Thank you to Sunny Chen and the Ontario Tech team for the time, access, and thoughtful conversations throughout the day.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Geopolitics Now Matters to Every CEO

In October, at our Two Small Fish Ventures AGM, I had the chance to sit down with Benjamin Bergen for a fireside chat. At the time, he was still leading the Council of Canadian Innovators. None of us knew he would soon become the new CEO of the CVCA. Looking back, the timing could not have been better.

I have known Benjamin for many years. When I was CEO of Wattpad, I worked closely with him through CCI, which played an important role in advocating for Canadian scaleups. That experience gave me a front row view of how policy, talent mobility, capital, and global markets intersect. I did not expect that perspective to become even more useful on the investor side, but today it is proving to be exactly that.

At Two Small Fish, our portfolio founders often hear us talk about our full cycle view of company building. We have built companies, operated them at global scale, navigated regulatory and geopolitical realities, and now invest across deep tech. We have seen the journey from the very first product decision all the way to commercialization. That experience matters today because geopolitics is no longer something happening far away. It is showing up directly in the work of founders.

The World Has Changed Irreversibly

Founders do not necessarily always think about politics, especially geopolitics. I certainly did not in my early days as a founder. But over the past year, the global environment has shifted in ways that affect talent, capital, customers, supply chains, and data. These forces are becoming part of the operating conditions for every innovative company.

At the AGM, Benjamin and I spent time unpacking what this new reality looks like.

  • Talent: We spoke about the growing brain drain and how global mobility is changing. The tightening of the H1B program in the United States has created a ripple effect across the entire talent ecosystem. Early stage companies are rethinking where they build teams, and immigration policy is becoming a strategic consideration rather than an afterthought.
  • Capital: The rise of protectionism and shifting global alliances are affecting how and where capital can move. The changing dynamics among the United States, China, and Canada raise new questions for both founders and investors. Some are beginning to view geographic diversification as a practical response to political uncertainty.
  • Customers: National preference policies such as Buy Canadian and Buy American are becoming more common. These policies may begin as political statements, but they influence real procurement and partnership decisions. For founders, gaining early customers is no longer just about product and timing. There is a political dimension that needs to be understood.
  • Infrastructure and Defense: We also talked about how export controls and security requirements are expanding. Technologies that once seemed purely commercial are now viewed through a strategic lens. Even young companies are discovering that they may be operating in areas that governments consider sensitive.
  • Supply Chains: Global supply chains have shown their fragility in areas such as semiconductors, rare earth materials, and energy. These vulnerabilities create friction but also open new opportunities for companies building more resilient and regional alternatives.
  • Data Sovereignty: Data localization and national data governance rules continue to spread. More countries want their data stored and processed within their borders. For companies operating internationally, this introduces new architectural and operational decisions much earlier in the journey.

Benjamin also shared how CCI’s new advisory group, Signa Strategies, is helping founders navigate exactly these types of challenges. It felt like a natural evolution of the work he has been doing for years.

As our conversation wrapped up, I was reminded how valuable it is to have seen this ecosystem from both sides. As a founder, I saw how talent, markets, and policy could quietly redirect a company’s path. Through CCI, I saw how national priorities and regulation shape the environment innovators work in. These experiences feel especially relevant now. The geopolitical questions that once appeared at the edges are moving closer to the center.

This is the environment founders are building in today. And with our full cycle experience, we hope to help them navigate it with clarity, context, and confidence.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

The AI Bubble That Is Not When Everyone Is All In

At the beginning of this year, I wrote an op-ed for The Globe about what many were already calling the AI bubble. Nearly a year later, almost all of what I said remains true. The piece was always meant to be a largely evergreen, long term view rather than a knee jerk reaction.

The only difference today is that the forces I described back then have only intensified.

We are in a market where Big Tech, venture capital, private equity, and the public markets are all pouring unprecedented capital into AI. But to understand what is actually happening, and how to invest intelligently, we need to separate noise from fundamentals. Here are the five key points:

  1. Why Big Tech Is Going All In while Taking Minimal Risk.
  2. The Demand Side Is Real and Growing.
  3. Not All AI Investments Are Created Equal.
  4. Picking Winners Matters.
  5. Remember, Dot Com Was a Bubble. The Internet Was Not.

1. Why Big Tech Is Going All In while Taking Minimal Risk

The motivations of the large technology companies driving this wave are very different from those of startups and other investors.

For Big Tech, AI is existential. If they underinvest, they risk becoming the next Blockbuster. If they overinvest, they can afford the losses. In practice, they are buying trillions of dollars’ worth of call options, and very few players in the world can afford to do that.

The asymmetry is obvious. If I were one of their CEOs, I would do the same.

But being able to absorb risk does not mean they want to absorb all of it. This is why they are using creative financing structures to shift risk off their balance sheets while remaining all in. At the same time, they strengthen their ecosystems by keeping developers, enterprises, and consumers firmly inside their platforms.

This is not classical corporate investing. Their objective is not just profitability. It is long term dominance.

For everyone outside Big Tech, meaning most of us, understanding these incentives is essential. It helps you place your bets intelligently without becoming roadkill when Big Tech transfers risk into the ecosystem.

2. The Demand Side Is Real and Growing

AI usage is not slowing. It is accelerating.

The numbers do not lie. Almost every metric, including model inference, GPU utilization, developer adoption, enterprise pilot activity, and startup formation, is rising. You can validate this across numerous public datasets. Directionally, people are using AI more, not less. And unlike previous hype cycles, this wave has real usage, real dollars, and real infrastructure behind it.

Yes, there is froth. But there are also fundamentals.

3. Not All AI Investments Are Created Equal

A common mistake is treating AI investing as a single category.

It is not.

Investing in a public market, commoditized AI business is very different from investing in a frontier technology startup with a decade long horizon. The former may come with thin margins, weak moats, and hidden exposure to Big Tech’s risk shifting. The latter is where transformational returns come from if you know how to evaluate whether a company is truly world class, differentiated, and defensible.

Lumping all AI investments together is as nonsensical as treating all public stocks as the same.

4. Picking Winners Matters

In public markets, you can buy the S&P 500 and call it a day. But that index is not random. Someone selected those 500 winners for you.

In venture, picking winners matters even more. It is a power law business. Spray and pray does not work. Most startups will not survive, and only the strongest will break out, especially in an environment as competitive as today.

Thanks to AI, we are in the middle of a massive platform shift. Venture scale outcomes depend on understanding technology deeply enough to see a decade ahead and identify breakout successes before others do. Long term vision beats short term noise. Daily or quarterly fluctuations are simply noise to be ignored.

5. Dot Com Was a Bubble. The Internet Was Not.

The dot com era had dramatic overvaluation and a painful crash, but the underlying technology still reshaped the world. The problem was not the internet. It was timing, lack of infrastructure, and indiscriminate investing in ideas that were either too early or simply bad.

Looking back, the early internet lacked essential components such as high speed access, mobile connectivity, smartphones, and internet payments. Although some elements of the AI stack may still be evolving, many of the major building blocks, including commercialization, are already in place. AI does not suffer from the same foundational gaps the early internet did.

Calling this a bubble as a blanket statement misses the nuance. AI itself is not a bubble. With a decade long view, it is already reshaping almost every industry at an unprecedented pace. Corrections, consolidations, and failures are normal. The underlying technological shift is as real as the internet was in the 1990s.

There is speculation. There are frothy areas. And yet, there are many areas that are underfunded. That is where the opportunities are.

History shows that great venture funds invest through cycles. They invest in areas that will be transformative in the next decade, not the next quarter.

For us, the five areas we focus on, namely Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy, are the critical elements of AI. Beyond being our expertise, there is another important reason why these categories matter: bubble or not, they will thrive.

We are not investing in hype, nor in capital intensive businesses where capital is the only moat, nor in companies where technology defensibility is low. As long as we stay disciplined and visionary, and continue to back founders building a decade ahead, we will do well, bubble or not.

After all, there may be multiple macro cycles across a decade. Embrace the bubble.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Reflections from the Impact 2025 Summit

I had the opportunity to join a panel at the Impact 2025 Summit in Calgary, moderated by Raissa Espiritu, with Janet Bannister and Paul Godman. Ironically, none of us are labelled as impact investors, and I explained on stage why Two Small Fish Ventures does what we do.

At Two Small Fish Ventures, we’ve never called ourselves an impact fund. That’s not because we’re indifferent to impact; in fact, it’s core to what we do. Our focus is on deep tech, the next frontier of computing, where innovation can create meaningful, long-term change. Specifically, we invest in five key areas: Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy.

We care deeply about scientific advancement, and more importantly, about turning those breakthroughs into real-world impact. That’s how meaningful progress happens.

Eva is our General Partner, and both of us are immigrants. Diversity isn’t a marketing point for us; it’s part of who we are. It naturally shows up in our portfolio: about half of our companies have at least one female founder, and many come from underrepresented backgrounds. That said, uncompromisingly, we back amazing deep tech founders who are turning their creations into world-class companies.

It’s actually rare that we talk about topics like women investing or investing in underrepresented groups in isolation. Not because we don’t care, quite the opposite. The fact that Eva is one of the few female GPs leading a venture fund, and that we’re both immigrants, already says a lot. Our actions speak volumes. We don’t just talk the talk; we walk the walk.

We need to deliver results. Period. Our competition isn’t other venture funds; it’s every other investment opportunity available in the market. If we can’t perform at the highest level — top decile in everything we do — we can’t sustain our mission. Delivering some of the best results in the industry enables us to do what we love and make an impact.

That’s why I believe impact and performance are not opposites. The most powerful kind of impact happens when companies succeed, when they become world-class companies. Strong returns and meaningful impact can, and should, reinforce each other.

I also talked about the importance of choosing the right vehicle for the right purpose. When we made a 2 million dollar donation to the University of Toronto to establish the Commercialization Catalyst Prize, it wasn’t about investing. It was about supporting a different kind of impact — helping scientists and engineers turn their research into innovations that can reach the world. Not every kind of impact should come from the same tool.

At the end of the day, labels matter less than intent and execution. We don’t need to call ourselves an impact fund to make a difference. Our goal is simple: to back bold deep tech founders using science and technology to build a better future and to do it with excellence.

A big thank you to Raissa, George Damian, Sylvia Wang, and the entire Platform Calgary team for putting together such a thoughtful and well-run event.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Quantum: From Sci-Fi to Investable Frontier

When I was studying electrical engineering, I chose, out of curiosity, to take an elective on quantum physics as part of advanced optics. It sparked a lasting fascination with quantum. The strange, abstract, counterintuitive rules, for example particles existing in multiple states or being entangled across distance, captivated me.

Error correction, the backbone of telecommunications and one of the areas I majored in, is closely related to fault tolerance in quantum systems today.

Little did I know these domains would converge in such a way that my earlier academic training would become relevant again years later.

For me, computing is not just my profession, it is also my hobby. As a science nerd, I actively enjoy following advances, and I keep going deeper down the rabbit hole of the next frontier of computing. That mix of personal curiosity and professional focus shapes how I approach both the opportunities and risks in the space. Over the past few years, I have gone deeper into the world of quantum. My academic and professional background gave me the footing to evaluate both what is technically possible and what is commercially viable.

From If to How and When

In June, I wrote “Quantum Isn’t Next. It’s Now.” We have passed the tipping point where the question is no longer if quantum technology will work, it is how and when it will scale.

This momentum is not just visible to those of us deep in the field. As the Globe and Mail recently reported, we at Two Small Fish have been following quantum for years, but did not think it was mature enough for an early-stage fund with a 10-year lifespan to back. This year, we changed our minds. As I shared in that article: “It’s much more investible now.”

The distinction is clear: when quantum was still a science problem, the central question was whether it could work at all. Now that it has become an engineering problem, the questions are how it will work at scale and when it will be ready for commercialization.

This shift matters for investors. Venture capital focuses on engineering breakthroughs, hard, uncertain, but achievable on a commercialization timeline. Fundamental science, which can take many more years to mature, is better supported by governments, universities, and non-dilutive funding sources. I will leave that discussion for another post.

One of Five Frontiers

At Two Small Fish Ventures, we have identified five areas shaping the next frontier of computing. Quantum falls under the area of advanced computing hardware, where the convergence of different areas of science, engineering, and commercialization is accelerating.

Each of these areas is no longer a speculative science experiment but a rapidly advancing field where engineering and commercialization are converging. Within the next ten years, the winners will emerge from lab prototypes and become scaled companies. Quantum is firmly on that trajectory.

How We Invest in Quantum

Our first principle at Two Small Fish is straightforward: we only invest in things we truly understand, through all three lenses of technology, product, and commercialization. That discipline forces us to dig deep before committing capital. And after years of study, it is clear to us that quantum has moved into investable territory, but only selectively.

Not every quantum startup fits a venture time horizon. Some promising projects will take too many years to scale. But we are now seeing opportunities that, within a 10-year window, can realistically grow from an early-stage idea to a successful scale-up. That is the standard we apply to every investment, and quantum finally has companies that meet it.

From Sci-Fi to Reality

Canada has played an outsized role in building the foundation of quantum science. Now, it has the chance to lead in quantum commercialization. The next few years will determine which teams turn breakthrough science into enduring companies.

For investors, this is both an opportunity and a responsibility. The quantum era is not a distant possibility; it is here now. What once sounded like science fiction is now an investable reality. And for those willing to put in the work to understand it, the frontier is already here.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Portfolio Highlight: Axiomatic

Last year we invested in Axiomatic AI. Their mission is to bring verifiable and trustworthy AI into science and engineering, enabling innovation in areas where rigour and reliability are essential. At the core of this is Mission 10×30: achieving a tenfold improvement in scientific and engineering productivity by 2030.

The company was founded by top researchers and professors from MIT, the University of Toronto, and ICFO in Barcelona, bringing deep expertise in physics, computer science, and engineering.

Since our investment, the team has been heads down executing. Now they’ve shared their first public release: Axiomatic Operators.

What They’ve Released

Axiomatic Operators are MCP servers that run directly in your IDE, connecting with systems like Claude Code and Cursor. The suite includes:

  • AxEquationExplorer
  • AxModelFitter
  • AxPhotonicsPreview
  • AxDocumentParser
  • AxPlotToData
  • AxDocumentAnnotator

Why is this important?

Large Language Models (LLMs) excel at languages (as their name suggests) but struggle with logic. That’s why AI can write poetry but often has trouble with math — LLMs mainly rely on pattern matching rather than reasoning.

This is where Axiomatic steps in. Their approach combines advances in reinforcement learning, LLMs, and world models to create AI that is not just fluent but also capable of reasoning with the rigour required in science and engineering.

What’s Next

This first release marks an important step in turning their mission into practical, usable tools. In the coming weeks, the team will share more technical material — including white papers, demo videos, GitHub repositories, and case studies — while continuing to work closely with early access partners.

Find out more on GitHub, including demos, case studies, and everything else you need to make your work days less annoying and more productive: Axiomatic AI GitHub

We’re excited to see their progress. If you’re in science or engineering, we encourage you to give the Axiomatic Operators suite a try: Axiomatic AI.


Jevons Paradox: Why Efficiency Fuels Transformation

In 1865, William Stanley Jevons, an English economist, observed a curious phenomenon: as steam engines in Britain became more efficient, coal use didn’t fall — it rose. Efficiency lowered the cost of using coal, which made it more attractive, and demand surged.

That insight became known as Jevons Paradox. To put it simply:

  • Technological change increases efficiency or productivity.
  • Efficiency gains lead to lower consumer prices for goods or services.
  • The reduced price creates a substantial increase in quantity demanded (because demand is highly elastic).

Instead of shrinking resource use, efficiency often accelerates it — and with it, broader societal change.
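The mechanism behind those three bullets can be sketched numerically. Below is a minimal model, assuming a constant-elasticity demand curve Q = k · P^(−e) with e > 1; the price, k, and elasticity values are purely illustrative, not figures from this post.

```python
# A minimal sketch of the Jevons mechanism, assuming a constant-elasticity
# demand curve Q = k * P^(-e) with e > 1. All numbers are illustrative.

def quantity_demanded(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    """Quantity demanded at a given price under constant elasticity e > 1."""
    return k * price ** (-elasticity)

old_price, new_price = 10.0, 1.0  # an efficiency-driven 10x price drop
old_q = quantity_demanded(old_price)
new_q = quantity_demanded(new_price)

# Demand rises faster than the price falls, so total resource use grows.
print(new_q / old_q)                               # quantity up ~31.6x
print((new_price * new_q) / (old_price * old_q))   # total spend up ~3.16x
```

Because the quantity response (here ~31.6×) outruns the price drop (10×), total spending on the resource rises rather than falls, which is exactly the paradox.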

Coal, Then Light

The paradox first appeared in coal: better engines, more coal consumed. Electricity followed a similar path. Consider lighting in Britain:

| Period | True price of lighting (per million lumen-hours, £2000) | Change vs. start | Per-capita consumption (thousand lumen-hours) | Change vs. start | Total consumption (billion lumen-hours) | Change vs. start |
|---|---|---|---|---|---|---|
| 1800 | £8,000 | | 1.1 | | 18 | |
| 1900 | £250 | ↓ ~30× | 255 | ↑ ~230× | 10,500 | ↑ ~500× |
| 2000 | £2.5 | ↓ ~3,000× (vs. 1800) / ↓ ~100× (vs. 1900) | 13,000 | ↑ ~13,000× (vs. 1800) / ↑ ~50× (vs. 1900) | 775,000 | ↑ ~40,000× (vs. 1800) / ↑ ~74× (vs. 1900) |

Over two centuries, the price of light fell 3,000×, while per-capita use rose 13,000× and total consumption rose 40,000×. A textbook case of Jevons Paradox — efficiency driving demand to entirely new levels.

Computing: From Millions to Pennies

This pattern carried into computing:

| Year | Cost per Gigaflop | Notes |
|---|---|---|
| 1984 | $18.7 million (~$46M today) | Early supercomputing era |
| 2000 | $640 (~$956 today) | Mainstream affordability |
| 2017 | $0.03 | Virtually free compute |

That’s a 99.99%+ decline. What once required national budgets is now in your pocket.
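The size of the decline is easy to verify from the table's own endpoints (nominal dollars, as quoted there):

```python
# Quick arithmetic check of the cost-per-gigaflop decline in the table above,
# using the 1984 and 2017 figures in nominal dollars as quoted.

cost_1984 = 18_700_000.0   # $ per gigaflop, 1984
cost_2017 = 0.03           # $ per gigaflop, 2017

decline_factor = cost_1984 / cost_2017
percent_decline = (1 - cost_2017 / cost_1984) * 100

print(f"{decline_factor:,.0f}x cheaper")      # 623,333,333x cheaper
print(f"{percent_decline:.8f}% decline")      # 99.99999984% decline
```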

Storage mirrored the same story: by 2018, 8 TB of hard drive storage cost under $200 — about $0.019 per GB, compared to thousands per GB in the mid-20th century.

Connectivity: Falling Costs, Rising Traffic

Connectivity followed suit:

| Year | Typical Speed & Cost per Mbps (U.S.) | Global Internet Traffic |
|---|---|---|
| 2000 | Dial-up / early DSL (<1 Mbps); ~$1,200 | ~84 PB/month |
| 2010 | ~5 Mbps broadband; ~$25 | ~20,000 PB/month |
| 2023 | 100–940 Mbps common; ↓ ~60% since 2015 (real terms) | >150,000 PB/month |

(PB = petabytes)

As costs collapsed, demand exploded. Streaming, cloud services, social apps, mobile collaboration, IoT — all became possible because bandwidth was no longer scarce.

Intelligence: The New Frontier

Now the same dynamic is unfolding with intelligence:

| Year | Cost per Million Tokens | Notes |
|---|---|---|
| 2021 | ~$60 | Early GPT-3 / GPT-4 era |
| 2023 | ~$0.40–$0.60 | GPT-3.5 scale models |
| 2024 | < $0.10 | GPT-4o and peers |

That’s a drop of more than two orders of magnitude in just a few years. Unsurprisingly, demand is surging — AI copilots in workflows, large-scale analytics in enterprises, and everyday generative tools for individuals.

As we highlighted in our TSF Thesis 3.0, cheap intelligence doesn’t just optimize existing tasks. It reshapes behaviour at scale.

Why It Matters

The recurring pattern is clear:

  • Coal efficiency fueled the Industrial Revolution.
  • Affordable lighting built electrified cities.
  • Cheap compute and storage enabled the digital economy.
  • Low-cost bandwidth drove streaming and cloud collaboration.
  • Now cheap intelligence is reshaping how we live, work, and innovate.

As we highlighted in Thesis 3.0:

“Reflecting on the internet era… as ‘the cost of connectivity’ steadily declined, productivity and demand surged—creating a virtuous cycle of opportunities. The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity… Like connectivity in the internet era, ‘the cost of intelligence’ is now rapidly declining, while the value derived continues to surge, driving even greater demand.”

The lesson is simple: efficiency doesn’t just save costs — it reorders economies and societies. And that’s exactly what is happening now.

If you are building a deep tech early-stage startup in the next frontier of computing, we would love to hear from you. This is a generational opportunity as both traditional businesses and entirely new sectors are being reshaped. White-collar jobs and businesses, in particular, will not be the same.


Announcing Our Investment in FUTURi Power: The Last Dumb Box in Our Home Gets a Brain

For nearly 70 years, the home electrical panel has looked the same. Meanwhile, the home itself is transforming: solar on the roof, batteries in the garage, heat pumps, EVs in the driveway, and smart appliances and devices everywhere.

And yet, the panel? Still the same. It is the last dumb box left, and FUTURi is fixing that with deep tech.

FUTURi’s Energy Processor

FUTURi Power, founded by Dr. Martin Ordonez (UBC Professor, Kaiser Chair at UBC, and recipient of the King Charles III Coronation Medal for leadership in clean energy innovation), reimagines the panel as the Energy Processor, a programmable energy computer that finally gives the home’s electrical system a brain. It is designed as a future-proof, like-for-like replacement for the traditional panel: it intelligently measures and coordinates loads, avoids demand peaks, and manages energy use at the edge.

Why This Matters

Homes are no longer passive energy consumers. They are dynamic nodes in the grid. By making the panel intelligent, FUTURi enables:

  • For homeowners: a 100% electric home without costly service upgrades, and a smarter, more resilient, and more efficient energy ecosystem.
  • For utilities: flattened demand peaks and integration of demand response (DR) programs and distributed energy resources (DERs), deferring costly capital expenditures.
  • For builders and communities: intelligent electrification that accelerates the deployment of built infrastructure without overloading the grid.

This is why FUTURi and utilities are already collaborating on projects to evaluate how Energy Processors can strengthen the grid and benefit customers.

Our Perspective

As Dr. Martin Ordonez, Founder and CEO of FUTURi Power, puts it: “Panels used to be passive. The Energy Processor is active, safe, and software-defined. It gives homes and grids a common language.”

At TSF, Smart Energy is one of our five focus areas. Our thesis is simple: the cost of intelligence is collapsing, and the biggest opportunities lie where software and hardware come together to reshape behaviour.

FUTURi is exactly that blueprint for intelligent electrification: deep-tech power electronics plus intelligent control. That combination turns a 70-year-old box into the brain of the modern home. Dr. Ordonez and his team are globally recognized experts in electrification who are translating decades of pioneering research into transformative commercial solutions.

And this is just the beginning. There is so much more the company can do to make electricity truly intelligent. FUTURi has a bright future ahead (pun fully intended).

Five Areas Shaping the Next Frontier

The cost of intelligence is dropping at an unprecedented rate. Just as the drop in the cost of computing unlocked the PC era and the drop in the cost of connectivity enabled the internet era, falling costs today are driving explosive demand for AI adoption. That demand creates opportunity on the supply side too, in the infrastructure, energy, and technologies needed to support and scale this shift.

In our Thesis 3.0, we highlighted how this AI-driven platform shift will reshape behaviour at massive scale. But identifying the how also means knowing where to look.

Every era of technology has a set of areas where breakthroughs cluster, where infrastructure, capital, and talent converge to create the conditions for outsized returns. For the age of intelligent systems, we see five such areas, each distinct but deeply interconnected.

1. Vertical AI Platforms

After large language models, the next wave of value creation will come from Vertical AI Platforms that combine proprietary data, hard-to-replicate models, and orchestration layers designed for complex and large-scale needs.

Built on unique datasets, workflows, and algorithms that are difficult to imitate, these platforms create proprietary intelligence layers that are increasingly agentic. They can actively make decisions, initiate actions, and shape workflows. This makes them both defensible and transformative, even when part of the foundation rests on commodity models.

This shift from passive tools to active participants marks a profound change in how entire sectors operate.

2. Physical AI

The past two decades of digital transformation mostly played out behind screens. The next era brings AI into the physical world.

Physical AI spans autonomous devices, robotics, and AI-powered equipment that can perceive, act, and adapt in real environments. From warehouse automation to industrial robotics to autonomous mobility, this is where algorithms leave the lab and step into society.

We are still early in this curve. Just as industrial machinery transformed factories in the nineteenth century, Physical AI will reshape industries that rely on labour-intensive, precision-demanding, or hazardous work.

The companies that succeed will combine world-class AI models with robust hardware integration and build the trust that humans place in systems operating alongside them every day.

3. AI Infrastructure

Every transformative technology wave has required new infrastructure that is robust, reliable, and efficient. For AI, this means going beyond raw compute to ensure systems that are secure, safe, and trustworthy at scale.

We need security, safety, efficiency, and trustworthiness as first-class priorities. That means building the tools, frameworks, and protocols that make AI more energy efficient, explainable, and interoperable.

The infrastructure layer determines not only who can build AI, but who can trust it. And trust is ultimately what drives adoption.

4. Advanced Computing Hardware

Every computing revolution has been powered by a revolution in hardware. Just as the transistor enabled mainframes and the microprocessor ushered in personal computing, the next era will be defined by breakthroughs in semiconductors and specialized architectures.

From custom chips to new communication fabrics, hardware is what makes new classes of AI and computation possible, both in the cloud and on the edge. But it is not only about raw compute power. The winners will also tackle energy efficiency, latency, and connectivity, areas that become bottlenecks as models scale.

As Moore’s Law hits its limit, we are entering an age of architectural innovation with neuromorphic computing, photonics, quantum computing, and other advances. Much like the steam engine once unlocked new industries, these architectures will redefine what is computationally possible. This is deep tech meeting industrial adoption, and those who can scale it will capture immense value.

5. Smart Energy

Every technological leap has demanded a new energy paradigm. The electrification era was powered by the grid. Today, AI and computing are demanding unprecedented amounts of energy, and the grid as it exists cannot sustain this future.

This is why smart energy is not peripheral, but central. From new energy sources to intelligent distribution networks, the way we generate, store, and allocate energy is being reimagined. The idea of programmable energy, where supply and demand adapt dynamically using AI, will become as fundamental to the AI era as packet switching was to the internet.

Here, deep engineering meets societal need. Without resilient and efficient energy, AI progress stalls. With it, the future scales.

Shaping What Comes Next

The drop in the cost of intelligence is driving demand at a scale we have never seen before. That demand creates opportunity on the supply side too, in the platforms, hardware, energy, physical systems, and infrastructure that make this future possible.

The five areas — Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy — represent the biggest opportunities of this era. They are not isolated. They form an interconnected landscape where advances in one accelerate breakthroughs in the others.

We are domain experts in these five areas. The TSF team brings the technical, product, and commercialization expertise that helps founders build and scale in precisely these spaces. We are uniquely qualified to do so.

At Two Small Fish, this is the canvas for the next generation of 100x companies. We are excited to partner with the founders building in these areas globally, those who not only see the future, but are already shaping it.


Backing the Scientists Who Helped Invent Blockchain with SureMark Digital

A few years back, Eva met Dr. Scott Stornetta. Later, I did too. Scott and Dr. Stuart Haber are widely credited as the co-inventors of blockchain, a technology built on an idea that was simple but, at the time, radical: decentralization. No single authority, no central point of control, just a trusted system everyone can rely on.

Now, these two scientists are teaming up again to start a new company, SureMark Digital. Their mission is to bring that same decentralized philosophy to identity and authenticity on the internet, enabling anyone to prove who they are, certify their work, and push back against deepfakes and impersonation. No middlemen. No central gatekeepers.

It took us about 3.141592654 seconds to get excited. We are now proud to be the co-lead investor in SureMark’s first institutional round.

At Two Small Fish, we love backing frontier tech that can reshape large-scale behaviour. SureMark checks every box.

Eva has written a deeper dive on what they are building and why it matters. You can read it here.


Computing. Then Connectivity. Then Intelligence. For Half a Century, Cost Collapses Drove Massive Adoption.

In the history of human civilization, there have been several distinct ages: the Agricultural Age, the Industrial Age, and the Information Age, which we are living in now.

Within each age, there are different eras, each marked by a drastic drop in the cost of a fundamental “atomic unit.” These cost collapses triggered enormous increases in demand and reshaped society by changing human behaviour at scale.

From the late 1970s to the 1990s, the invention of the personal computer drastically reduced the cost of computing [1]. A typical CPU in the early 1980s cost hundreds of dollars and ran at just a few MHz. By the 1990s, processors were orders of magnitude faster for roughly the same price, unlocking entirely new possibilities like spreadsheets and graphical user interfaces (GUIs).

Then, from the mid-1990s to the 2010s, came the next wave: the Internet. It brought a dramatic drop in the cost of connectivity [2]. Bandwidth, once prohibitively expensive, fell by several orders of magnitude — from over $1,200 per Mbps per month in the ’90s to less than a penny today. This enabled browsers, smartphones, social networks, e-commerce, and much of the modern digital economy.

From the mid-2010s to today, we’ve entered the era of AI. This wave has rapidly reduced the cost of intelligence [3]. Just two years ago, generating a million tokens using large language models cost over $100. Today, it’s under $1. This massive drop has enabled applications like facial recognition in photo apps, (mostly) self-driving cars, and — most notably — ChatGPT.

These three eras share more than just timing. They follow a strikingly similar pattern:

First, each era is defined by a core capability: computing, connectivity, and intelligence, respectively.

Second, each unfolds in two waves:

  • The initial wave brings a seemingly obvious application (though often only apparent in hindsight), such as spreadsheets, browsers, or facial recognition.
  • Then, typically a decade or so later, a magical invention emerges — one that radically expands access and shifts behaviour at scale. Think GUI (so we no longer needed to use a command line), the iPhone (leapfrogging flip phones), and now, ChatGPT.

Why does this pattern matter?

Because the second-wave inventions are the ones that lower the barrier to entry, democratize access, and reshape large-scale behaviour. The first wave opens the door; the second wave throws it wide open. It’s the amplifier that delivers exponential adoption.

We’ve seen this movie before. Twice already, over the past 50 years.

The cost of computing dropped, and it transformed business, productivity, and software.

Then the cost of connectivity dropped, and it revolutionized how people communicate, consume, and buy.

Now the cost of intelligence is collapsing, and the effects are unfolding even faster.

Each wave builds on the last. The Internet era evolved faster than the PC era because it leveraged the PC era’s computing infrastructure. AI is moving even faster because it sits atop both computing and the Internet. Acceleration is not happening in isolation. It’s compounding.

If it feels like the pace of change is increasing, it’s because it is.

Just look at the numbers:

  • Windows took over 2 years to reach 1 million users.
  • Facebook got there in 10 months.
  • ChatGPT did it in 5 days.

These aren’t just vanity metrics — they reflect the power of each era’s cost collapse to accelerate mainstream adoption.

That’s why it’s no surprise — in fact, it’s crystal clear — that the current AI platform shift is more massive than any previous technological shift. It will create massive new economic value, shift wealth away from many incumbents, and open up extraordinary investment opportunities.

That’s why the succinct version of our thesis is:

We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.

(Full version here).

The race is already on. We can’t wait to invest in the next great thing in this new era of intelligence.

Super exciting times ahead indeed.



Footnotes

[1] Cost of Computing

In 1981, the Intel 8088 CPU (used in the first IBM PC) had a clock speed of 4.77 MHz and cost ~$125. By 1995, the Intel Pentium processor ran at 100+ MHz and cost around $250 — a ~20x speed gain at similar cost. Today’s chips are thousands of times faster, and on a per-operation basis, exponentially cheaper.

[2] Cost of Connectivity

In 1998, bandwidth cost over $1,200 per Mbps/month. By 2015, that figure dropped below $1. As of 2024, cloud bandwidth pricing can be less than $0.01 per GB — a near 100,000x drop over 25 years.

[3] Cost of Intelligence

In 2022, generating 1 million tokens via OpenAI’s GPT-3.5 could cost $100+. In 2024, it costs under $1 using GPT-4o or Claude 3.5, with faster performance and higher accuracy — a 100x+ reduction in under two years.


Quantum Isn’t Next. It’s Now.

In the early 2000s, it was a common joke in the tech world that “next year is the year of the smartphone.” People kept saying it over and over for almost a decade. It became a punchline. The industry nearly lost its credibility.

Until the iPhone launched. “Next year is the year of the smartphone” finally became true.

The same joke has followed quantum for the past ten years: next year is the year of quantum.

Except it hasn’t been. Not yet.

And yet, quietly, the foundations have been built. We’re not there, but we’re far from where we started.

We’re getting closer. Much closer. I can smell it. I can hear it. I can sense it.

Right now, without getting into too much technical detail, we’re still at a small scale: fewer than 100 usable qubits. Commercial viability likely requires thousands, if not millions. The systems are still too error-prone, and hosting your own quantum machine is wildly impractical. They’re expensive, fragile, and noisy.

At this stage, quantum is mostly limited to niche or small-scale applications. But step by step, quantum is inching closer to broader utility.

And while these things don’t progress in straight lines, the momentum is real and accelerating.

Large-scale, commercially deployable, fault-tolerant quantum computers accessed through the cloud are no longer science fiction. They’re within reach.

I spent a few of my academic years in signal processing and error correction. I’ve also spent a bit of time studying quantum mechanics. I understand the challenges of cloud-based access to quantum systems, and I’ve been following the field for quite a while, mostly as a curious science nerd.

All of that gives me reason to trust my sixth sense. Quantum is increasingly becoming a reality.

Nobody knows exactly when the iPhone moment or the ChatGPT moment of quantum will happen.
But I’m absolutely sure we won’t still be saying “next year is the year of quantum” a decade from now.

It will happen, and it will happen much sooner than you might think.

At Two Small Fish, our thesis is centred around the next frontier of computing and its applications.

This is an exciting time and the ideal time to take a closer look at quantum, because the best opportunities tend to emerge right before the technology takes off.

How can we not get excited about new quantum investment opportunities?

P.S. I’m excited to attend the QUANTUM NOW conference this week in Montreal. Also thrilled to see Mark Carney name quantum as one of Canada’s official G7 priorities. That short statement may end up being a big milestone.


Announcing TSF’s Investment in ENVGO

Humans have conquered land, sea, and space.

Yet the ocean remains surprisingly underdeveloped — in fact, it’s the least developed.

Land transportation has been electrified. In space, payload costs have dropped drastically. Now, it’s time for marine to catch up.

Unlike cars, you can’t simply add an electric motor and battery to a boat and make it work. Why? Water is far denser than air, so the drag a hull must overcome is orders of magnitude greater than what a car faces. As a result, replacing a gas motor with an electric one would require a gigantic battery, making it impractical and, frankly, unusable. That’s why marine electrification has lagged.
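The scale of the problem can be sketched with the standard drag equation, F = ½ρv²C_dA. The fluid densities below are textbook values; the speed, drag coefficient, and area are hypothetical round numbers for illustration, not ENVGO figures.

```python
# Illustrative drag comparison using the standard drag equation
# F = 0.5 * rho * v^2 * Cd * A. Densities are textbook values; the speed,
# drag coefficient, and area are hypothetical round numbers.

def drag_force(rho: float, v: float, cd: float, area: float) -> float:
    """Drag force in newtons for fluid density rho (kg/m^3), speed v (m/s)."""
    return 0.5 * rho * v**2 * cd * area

RHO_AIR = 1.2       # kg/m^3, air at sea level
RHO_WATER = 1000.0  # kg/m^3, fresh water

v, cd, area = 10.0, 0.5, 1.0  # same speed, shape, and frontal area
ratio = drag_force(RHO_WATER, v, cd, area) / drag_force(RHO_AIR, v, cd, area)
print(ratio)  # ~833x more drag in water than in air, all else equal
```

At equal speed and geometry, the drag ratio reduces to the density ratio, which is why hydrofoiling (lifting the hull out of the water) is such a powerful lever for electric boats.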

Until now. 

The “iPhone moment” of marine transportation has arrived. ENVGO’s hydrofoiling NV1 tackles these multidisciplinary complications head-on. Led by successful serial entrepreneur Mike Peasgood, the team brings together expertise in AI, robotics, control systems, computer vision, autonomous systems, and more. Leveraging their prior success as drone pioneers at Aeryon, they are now building a flying robot — on water.

It’s day one of a large-scale transformation of marine transportation. Two Small Fish is privileged and super excited to lead this round of funding, alongside our good friends at Garage, who are also participating. We can’t wait to see how ENVGO reimagines the uncharted waters — pun fully intended.

Read our official blog post by our partner Albert here


Wattpad Was My Regular Season. TSF Is My Playoff Hockey

When entrepreneurs exit their companies, it is supposed to be a victory lap. But in reality, many find themselves in an unexpected emotional vacuum. More often than you might think, I hear variations of the same quiet confession:

“It should have been the best time of my life. But I felt lost after the exit. I lost my purpose.”

After running Wattpad for 15 years, I understand this all too well. It is like training for and running a marathon for over a decade, only to stop cold the day after the finish line. No more rhythm. No more momentum. No next mile.

Do I Miss Operating

Unsurprisingly, people often ask me:

“Do you like being a VC?”

“Do you miss operating?”

My honest answer is yes and yes (but I get my fix without being a CEO — see below).

Being a founder and CEO was deeply challenging and also immensely rewarding. It is a role that demands a decade-long commitment to building one and only one thing. And while I loved my time as CEO, I did not feel the need to do it again. Once in a lifetime was enough. I have started three companies. A fourth would have felt repetitive.

What I missed most was not the title or the responsibility. It was the people. The team. The day-to-day collaboration with nearly 300 passionate employees when I stepped down. That sense of shared mission — of solving hard problems together — was what truly filled my cup.

Back in the Trenches in a Different Role

Now at Two Small Fish Ventures as an operating partner, I work with founders across our portfolio. I am no longer the operator inside the company, but I get to be their sounding board — helping them tackle some of the biggest challenges they face.

Let’s be honest: they call me especially when they believe I am the only one who can help them. Their words, not mine. And there have been plenty of those occasions.

That gives me the same hit of adrenaline I used to get from operating. At my core, I love solving hard problems. That part of me did not go away after my exit. I just found a new arena for it — and it is a perfect replacement.

A Playground for a Science Nerd

What people may not realize is that the deep tech VC job is drastically different from a “normal” VC job. As a deep tech VC, I am constantly stretched and go deep — technically, intellectually, and creatively. It forces me to stay sharp, push my boundaries, and reconnect with my roots as a curious, wide-eyed science nerd.

There is something magical about working with founders at the bleeding edge of innovation. I get to dive into breakthrough technologies, understand how they work, and figure out how to turn them into usable and scalable products. It feels like being a kid in a candy store — except the candy is semiconductors, control systems, power electronics, quantum, and other domains in the next frontier of computing.

How could I not love that?

Ironically, I had less time to indulge this curiosity when I was a CEO. Now I can geek out and help shape the future at the same time. It is a net positive for me.

You Do Not Have to Love It All

Of course, every job — including CEO and VC — has its less glamorous parts. Whether you are a founder or a VC, there will always be administrative tasks and responsibilities you would rather skip.

But I have learned not to resent them. As I often say:

“You do not need to love every task. You just need to be curious enough to find the interesting angles in anything.”

Those tasks are the cost of admission to being a deep tech VC. A small price to pay to do the work I love — supporting incredible entrepreneurs as they bring transformative ideas to life, and finding joy in doing so. And knowing what I know now, I do not think I would enjoy being a “normal” VC. I cannot speak for others, but for me, this is the only kind of venture work that truly energizes and fulfills me.

A New Season. A New Purpose.

So yes, being a VC brings me as much joy as being a CEO — and arguably even more fulfillment (and I am surprised that I am saying this). I feel incredibly lucky. And I am all in.

It feels like all my past experience has prepared me for what I do today. I often describe this phase of my life this way:

Wattpad was my regular season. TSF is my playoff hockey.

It is faster. It is grittier. The stakes feel higher. Not because I am building one company, but because I am helping many shape the future.

P.S. Go Oilers!!

A Decade of Fish – Celebrating 10 Years of Two Small Fish Ventures

This year marks a big milestone: Two Small Fish Ventures turns ten!

That’s 10 years, 120 months, and 3,653 days (yes, we counted the leap years). What started as a bold experiment in early-stage investing has become a decade-long journey of backing audacious founders building at the edge of what’s possible.

Over the weekend, we wired funds for our 60th first investment. That’s not including the many follow-on cheques we’ve written along the way—if we counted those, the number would be much higher. We’re not naming the company just yet, but like the 59 before it, this one reflects deep conviction. We think it’ll make a splash!

For years, we’ve said we write 5 to 7 new cheques per year. Not because we aim for a quota, but because this is what a power-law portfolio construction strategy naturally produces. In venture, just a few outlier companies drive the vast majority of returns. The trick is to consistently back companies with 100x potential. That’s the focus—not pacing. And yet, the numbers tell their own story: we’ve averaged exactly six new investments a year. Apparently, clarity of focus brings consistency as a byproduct.

We’re now six months into our tenth year, and we’re right on pace.

To the founders we’ve backed: thank you for trusting us at the earliest, riskiest stage.

To those we haven’t met yet: if you’re building deep tech in the next frontier of computing, we’d love to hear from you. We invest globally. If you’ve got a breakthrough, we can help turn it into a product. If you’ve got a product, we can help turn it into a company.

Sound like you? Reach out.

Here’s to the next 10!

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

TSF Thesis 3.0: The Next Frontier of Computing and Its Applications Reshaping Large-Scale Behaviour

Summary

Driven by rapid advances in AI, the collapse in the cost of intelligence has arrived—bringing massive disruption and generational opportunities.

Building on this platform shift, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.

Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.

Watch this 2-minute video to learn more about our approach:


Our Evolution: From Network Effects to Deep Tech

When we launched TSF in 2015, our initial thesis centred around network effects. Drawing from our experience scaling Wattpad from inception to 100 million users, we became experts in understanding and leveraging the exponential value and defensibility created by network effects at scale. This expertise led us to invest—in most cases as the very first cheque—in massively successful companies such as BenchSci, Ada, Printify, and SkipTheDishes.

We achieved world-class success with this thesis, but like all good things, that opportunity diminished over time.

Our thesis evolved as the ground shifted toward the end of the 2010s. A couple of years ago, we articulated this evolution by focusing on early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new value. Within this broad focus, we zoomed in specifically on three sectors: AI, decentralized protocols, and semiconductors. That thesis guided investments in great companies such as Story, Ideogram, Zinite, and Blumind.

But the world doesn’t stand still. In fact, it has never changed so rapidly. This brings us to the next and even more significant shift shaping our thesis.


A New Platform Shift: The Cost of Intelligence is Collapsing

Reflecting on the internet era, the core lesson we learned was that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. At its foundation was connectivity, and as “the cost of connectivity” steadily declined, productivity and demand surged, creating a virtuous cycle of opportunities.

The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity, cross-domain functionality, and decision-making. Like connectivity in the internet era, “the cost of intelligence” is now rapidly declining, while the value derived from intelligence continues to surge, driving even greater demand.

This shift will create massive economic value, shifting wealth away from many incumbents and opening substantial investment opportunities. However, just like previous platform shifts, the greatest opportunities won’t come from digitizing or automating legacy workflows, but rather from completely reshaping workflows and user behaviour, democratizing access, and unlocking previously impossible value. These disruptive opportunities will expand into adjacent areas, leaving incumbents defenceless as the rules of the game fundamentally change.


Intelligence Beyond Traditional Computing Devices

AI’s influence now extends far beyond pre-programmed software on computing devices. Machines and hardware are becoming intelligent, leveraging collective learning to adapt in real-time, with minimal predefined instruction. As we’ve stated before, software alone once ate the world; now, software and hardware together consume the universe. The intersection of software and hardware is where many of the greatest opportunities lie.

As AI models shrink and hardware improves, complex tasks run locally and effectively at the edge. Your phone and other edge devices are rapidly becoming the new data centres, opening exciting new possibilities.


Democratization and a New Lens on Defensibility

The collapse in the cost of intelligence has democratized everything—including software development—further accelerated by open-source tools. While this democratization unlocks vast opportunities, competition also intensifies. It may be a land grab, but not all opportunities are created equal. The key is knowing which “land” to seize.

Historically, infrastructure initially attracts significant capital, as seen in the early internet boom. Over time, however, much of the economic value tends to shift from infrastructure to applications. Today, the AI infrastructure layer is becoming increasingly commoditized, while the application layer is heavily democratized. That said, there are still plenty of opportunities to be found in both layers—many of them truly transformative. So, where do we find defensible, high-value opportunities?

Our previous thesis identified transformative technologies that achieved mass adoption, changed behaviour, democratized access, and unlocked unprecedented value. This framework remains true and continues to guide our evaluation of “100x” opportunities.

This shift in defensibility brings us to where the next moat lies.


New Defensibility: Deep Tech Meets Data Network Effects

Defensibility has changed significantly. In recent years, the pool of highly defensible early-stage shallow tech opportunities has thinned considerably, with far fewer compelling opportunities available. As a result, we have clearly entered a golden age of deep tech. AI democratization provides capital-efficient access to tools that previously required massive budgets. Our sweet spot is identifying opportunities that remain difficult to build, ensuring they are not easily replicated.

As “full-spectrum specialists,” TSF is uniquely positioned for this new reality. All four TSF partners were engineers and startup leaders before becoming investors, with hands-on experience spanning artificial intelligence, semiconductors, robotics, photonics, smart energy, blockchain, and more. We are not just technical; we are also product people, having built and commercialized cutting-edge innovations ourselves. As a guiding principle, we only invest when our deep domain expertise can help startups scale effectively and rapidly cement their place as future industry-disrupting giants.

Moreover, while traditional network effects have diminished, AI has reinvigorated network effects, making them more potent in new ways. Combining deep tech defensibility with strong data-driven network effects is the new holy grail, and this is precisely our expertise.


What We Don’t Invest In

Although we primarily invest in “bits,” we will also invest in “bits and atoms,” but we won’t invest in “atoms only.” We also have a strong bias towards permissionless innovations, so we usually stay away from highly regulated or bureaucratic verticals with high inertia. Additionally, since one of our guiding principles is to invest only when we have domain expertise in the next frontier of computing, we won’t invest in companies whose core IP falls outside of our computing expertise. We also avoid regional companies, as we focus on backing founders who design for global scale from day one. We invest globally, and almost all our breakout successes such as Printify have users and customers around the world.


Where We’re Heading

Having recalibrated our thesis for this new era, here’s where we’re going next.

We have backed amazing deep tech founders pioneering AI, semiconductors, robotics, photonics, smart energy, and blockchain—companies like Fibra, Blumind, ABR, Axiomatic, Hepzibah, Story, Poppy, and Viggle—across consumer, enterprise, and industrial sectors. With the AI platform shift underway, many new and exciting investment opportunities have emerged.

The ground has shifted: the old playbook is out, the new playbook is in. It’s challenging, exciting, and we wouldn’t have it any other way.

To recap our core belief, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.

Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.

So, if you’ve built interesting deep tech in the next frontier of computing, we invest globally and can help you turn it into a product. If you have a product, we can help you turn it into a massively successful business. If this sounds like you, reach out.

Together, we will shape the future.

P.S. Please also read our blog post Five Areas Shaping the Next Frontier.

Eva + Allen + Brandon + Albert + Mikayla


Gensee AI

A solo musician doesn’t need a conductor. Neither does a jazz trio.

But an orchestra? That’s a different story. You need a conductor to coordinate, to make sure all the parts come together.

Same with AI agents. One or two can operate fine on their own. But in a multi-agent setup, the real bottleneck is orchestration.

Yesterday, we announced our investment in GenseeAI. That’s the layer the company is building—the conductor for AI agents, i.e. the missing intelligent optimization layer for AI agents and workflows. Their first product, Cognify, takes AI workflows built with frameworks like LangChain or DSPy and intelligently rewrites them to be 10× faster, cheaper, and more reliable. It’s a bit like “compilation” for AI. Given a high-level workflow, Cognify produces a tuned, executable version optimized for production. Their second product, currently under development, goes one step further: a serving layer that continuously optimizes AI agents and workflows at runtime. Think of it as an intelligent “virtual machine” for AI, where the execution of agents and workflows is transparently and “automagically” improved while running.

If you’re building AI systems and want to go from prototype to production with confidence, get in touch with the GenseeAI team.

Read Brandon’s blog post here, or read on below for all the details:

At Two Small Fish, we invest in founders building foundational infrastructure for the AI-native world. We believe one of the most important – yet underdeveloped – layers of this stack is orchestration: how generative AI workflows are built, optimized, and deployed at scale.

Today, building a production-grade genAI app involves far more than calling an LLM. Developers must coordinate multiple steps – prompt chains, tool integrations, memory, RAG, agents – across a fragmented and fast-moving ecosystem and a variety of models. Optimizing this complexity for quality, speed, and cost is often a manual, lengthy process that businesses must navigate before a demo can become a product.

GenseeAI is building the missing optimization layer for AI agents and workflows in an intelligent way. Their first product, Cognify, takes AI workflows built with frameworks like LangChain or DSPy and intelligently rewrites them to be faster, cheaper, and better. It’s a bit like “compilation” for AI: given a high-level workflow, Cognify produces a tuned, executable version optimized for production. 

Their second product–currently under development–goes one step further: a serving layer that continuously optimizes AI agents and workflows at runtime. Think of it as an intelligent “virtual machine” for AI: where the execution of agents and workflows is transparently and automatically improved while running.

We believe GenseeAI is a critical unlock for AI’s next phase. Much of today’s genAI development is stuck in prototype purgatory – great demos that fall apart in the real world due to cost overruns, latency, and poor reliability. Gensee helps teams move from “it works” to “it works well, and at scale.”

What drew us to Gensee was not just the elegance of the idea, but the clarity and depth of its execution. The company is led by Yiying Zhang, a UC San Diego professor with a strong track record in systems infrastructure research, and Shengqi Zhu, an engineering leader who has built and scaled AI systems at Google. Together, they bring a rare blend of academic rigor and hands-on experience in deploying large-scale infrastructure. In early benchmarks, Cognify delivered up to 10× cost reductions and 2× quality improvements – all automatically. Their roadmap – including fully automated optimization, enterprise integrations, and a registry of reusable “optimization tricks” – shows ambition to become the default runtime for generative AI.

As the AI stack matures, we believe Gensee will become a foundational layer for organizations deploying intelligent systems. It’s the kind of infrastructure that quietly powers the AI apps we’ll all use – and we’re proud to support them on that journey.

If you’re building AI systems and want to go from prototype to production with confidence, get in touch with the team at GenseeAI.

Written by Brandon


Perhaps My Title Should Be…Yoda?

Yesterday was Star Wars Day — aka “May the Fourth be with you” — and it got me thinking, so I put together this blog post.

You might notice my title is “Operating Partner,” not “General Partner,” “Managing Partner,” or “Board Partner.” That’s intentional because I spend most of my time working directly with portfolio CEOs.

The Operating Partner role has its roots in private equity. Historically, Operating Partners are often former CEOs or COOs who use their experience to guide leadership teams, improve operational execution, and drive results, ultimately increasing the value of portfolio companies.

As far as I know, I’m the only former scale-up CEO in Canada who plays this role in an early-stage VC. At least, ChatGPT and Perplexity couldn’t find anyone else! Even in the U.S., this is very rare.

That said, I’ve always felt the “Operating Partner” title is a bit misleading. Unlike many private equity Operating Partners, I don’t step into full-time or part-time leadership roles within portfolio companies. I don’t give advice or directives either. Instead, I help CEOs solve their own problems rather than solving problems for them.

My single objective is to help portfolio CEOs improve the quality of their decisions by leveraging my experience.

Why? Most CEOs don’t need to be told what to do—they already know. Telling a CEO to grow their KPIs faster or hire great people is useless.

No CEO intentionally grows slower or hires bad people!

The real challenge for CEOs isn’t the what—it’s the how. This is where I come in, helping them navigate the how: strategic thinking, future-proofing, and decision-making that drive tangible progress, while staying alert to blind spots that could undermine success.

Hiring is an example. Many venture firms have talent partners who assist portfolio companies with recruitment. These partners, often from recruitment backgrounds, are excellent at sourcing candidates once roles are defined. However, they usually lack deep business context and may not fully understand the culture of the companies they’re supporting. This can result in untargeted candidates who don’t fit. I experienced this issue firsthand when I was a CEO.

That’s why I strongly favour internal recruiters who have an intimate understanding of the business and culture. Even so, recruiters typically get involved only after roles are clearly defined. Before that, designing the organization requires someone with visibility into the broader business. Only one person truly has it: the CEO. Besides, CEOs usually can’t ask their own leaders about organizational design, for obvious reasons.

That’s where I step in—well before recruiters are involved. I act as a sounding board for organizational design, considering not just immediate hiring needs but also how roles and teams will evolve over time. What level of talent should they hire now? When will this position need to level up? What downstream implications will these decisions have?

By addressing these questions early, I help ensure hiring decisions are aligned with the company’s long-term strategy and culture.

Of course, hiring is just one area where I provide support. Design future-proof stock option plans? Manage internal and external communication challenges? Interact with strategic conglomerates? Navigate inbound acquisition offers? Resolve leadership dysfunction? Handle unreasonable investors? Make board meetings more effective? Fend off super aggressive competitors or internet giants?

And yes, one of the most frequent requests I get is: “Can you help me with my pitch deck?”

Bring them on!

I’ve faced these challenges firsthand multiple times, and when CEOs bring them to me, I’m ready to share my battle scars.

At a minimum, I help narrow the options from “I don’t know how” to a set of multiple choices. I don’t make decisions for CEOs; I help them make better ones. They are ultimately responsible for their decisions, and I see my role as a guide, not a decision-maker.

Being the CEO of a fast-scaling company is an enormous challenge that people should not underestimate—the experience, capacity, intensity, and mental strength it demands are immense. That’s why it is the loneliest job. Empathy alone is not enough. The best help I ever got was from a CEO more experienced than me at the time — someone who had walked the road ahead — and now it’s my turn to pay it forward.

The more I think about it, the less “Operating Partner” seems to fit. I don’t step into the spotlight or take over operations. My role is more like Yoda—helping Skywalker fight the battles while staying behind the scenes.

So perhaps my title shouldn’t be Operating Partner after all. Maybe it should just be… Yoda.

May the Force be with you!


Network Effect is Dead. Long Live Network Effect.

When Two Small Fish first started in 2015, we formulated our “Thesis 1.0” to focus on network effects exclusively. We leveraged our hands-on product experience in scaling Wattpad from 0 to 100 million users—essentially a marketplace for readers and writers—and applied a similar lens to other verticals, both in B2C and B2B.

It worked incredibly well for TSF because, at the time, network effects were the holy grail for defensibility, yet they were often misunderstood (for example, going viral is not the same as having network effects, and simply operating a marketplace does not guarantee strong network effects!). Our skill is more transferable than you might think!

So, Eva created the ASSET framework, which helped us identify the best network-effect investment opportunities and, more importantly, helped entrepreneurs understand and increase their network effect coefficient—the measure of true network effects—and ultimately embed strong network effects into their products. In short:

A stands for “atomic unit”

S stands for “seed the supply side”

The other S stands for “scale the demand side”

E stands for “enlarge the network effect” or “enhance the network coefficient”

T stands for “track proprietary insights”

This framework provided a simple yet systematic way to judge whether a company truly had network effects or merely the illusion of them.

However, toward the end of the last decade, it became increasingly difficult to find investable network-effect opportunities. Well-established incumbents already had very strong network effects in place, effectively setting the world order. It became exceedingly difficult for emerging disruptors—both in consumer and enterprise spaces—to find a gap to break through.

We began looking for other forms of technology defensibility (for example, semiconductors) and gradually moved away from “shallow tech” network-effect investments, as we found very few investable opportunities. In fact, our last shallow tech investment was made about three years ago.

Then, in late 2022, ChatGPT arrived.

As the world now understands, generative AI is the first technology in human history capable of learning, reasoning, creativity, cross-domain functionality, and decision-making. It’s the most significant platform shift since mobile, social, and cloud computing in the late 2000s—and arguably the biggest one in human history. It also means the playing field has been leveled. Today, there are numerous ways to create new products with powerful network effects that can render incumbents’ offerings obsolete (for example, I haven’t used Google Search regularly for a long time), because newcomers can disrupt incumbents from all three angles: technology, product, and commercialization (e.g., business models). Incumbents are vulnerable!

On the other hand, the ASSET framework also needed a refresh, as we’re no longer dealing with simple, well-understood marketplaces. What if one side of the marketplace is now AI? Even though our original framework was designed to handle data-driven network effects, the speed and scale of data generation have multiplied by orders of magnitude. How does this affect enlarging the network effects and increasing the coefficient?

The good news is that there are now ways to massively increase the network effect coefficient in a remarkably short time. The bad news is that all your competitors—large or small—can do the same. Competition has never been fiercer.

After ChatGPT was released, we quickly revised our ASSET framework to version 2.0. Since then, we’ve been guest-lecturing this masterclass worldwide for well over a year. By fully leveraging AI’s creativity and reasoning capabilities, entrepreneurs can now harness human-machine collaboration to supercharge both the demand and supply sides, blitz-scale, and create new atomic units. Here’s the gist of 2.0:

A – Atomic Unit of Product

S – Super Seed the Supply Side (now amplified by Gen AI)

S – Supercharge the Demand Side (now leveraging Gen AI)

E – Exponential Engagement (using the human + AI combo)

T – Transform Business with New AI-powered Atomic Units

Like 1.0, this new framework is easy to understand but difficult to master—and it’s even more complex now because, with Gen AI, it’s non-linear. Our masterclass covers the lecture material, but the real work happens in our private tutoring, where execution matters—and this is how we help our portfolio companies win.

The old network effect is dead. Thanks to the AI platform shift, network effects are roaring back in a different and far more potent way in the new world order. The combination of deep tech defensibility plus network effect defensibility is the new holy grail—and we specialize in both.

With the AI platform shift, all of a sudden, there are many new investable opportunities that didn’t exist before. At the same time, the ground has shifted: the old playbook is out, and the new playbook is in. It’s exciting; we love the challenge, and we wouldn’t have it any other way.


Investing in Fibra: Revolutionizing Women’s Health with Smart Underwear

At Two Small Fish Ventures, we love backing founders who are not only transforming user behaviour but also unlocking new and impactful value. That’s why we’re excited to announce our investment in Fibra, a pioneering company redefining wearable technology to improve women’s health. We are proud to be the lead investor in this round, and I will be joining as a board observer. 

The Vision Behind Fibra

Fibra is developing smart underwear embedded with proprietary textile-based sensors for seamless, non-invasive monitoring of previously untapped vital biomarkers. Their innovative technology provides continuous, accurate health insights—all within the comfort of everyday clothing. Learning from user data, it then provides personalized insights, helping women track, plan, and optimize their reproductive health with ease. This AI-driven approach enhances the precision and effectiveness of health monitoring, empowering users with actionable information tailored to their unique needs.

Fibra has already collected millions of data points with its product, further strengthening its AI capabilities and improving the accuracy of its health insights. While Fibra’s initial focus is female fertility tracking, its platform has the potential to expand into broader areas of women’s health—including pregnancy detection and monitoring, menopause, and detection of STDs and cervical cancer—fundamentally transforming how we monitor and understand our bodies.

Perfect Founder-Market Fit

Fibra was founded by Parnian Majd, an exceptional leader in biomedical innovation. She holds a Master of Engineering in Biomedical Engineering from the University of Toronto and a Bachelor’s degree in Biomedical Engineering from TMU. Her achievements have been widely recognized, including being an EY Women in Tech Award recipient, a Rogers Women Empowerment Award finalist for Innovation, and more.

We are thrilled to support Parnian and the Fibra team as they push the boundaries of AI-driven smart textiles and health monitoring. We are entering a golden age of deep-tech innovation and software-hardware convergence—a space we are excited to champion at Two Small Fish Ventures.

Stay tuned as Fibra advances its mission to empower women through cutting-edge health technology.


Announcing Our Investment in Hepzibah AI

The Two Small Fish team is thrilled to announce our investment in Hepzibah AI, a new venture founded by Untether AI’s co-founders, serial entrepreneurs Martin Snelgrove and Raymond Chik, along with David Lynch and Taneem Ahmed. Their mission is to bring next-generation, energy-efficient AI inference technologies to market, transforming how AI compute is integrated into everything from consumer electronics to industrial systems. We are proud to be the lead investor in this round, and I will be joining as a board observer to support Hepzibah AI as they build the future of AI inference.

The Vision Behind Hepzibah AI

Hepzibah AI is built on the breakthrough energy-efficient AI inference compute architecture pioneered at Untether AI—but takes it even further. In addition to pushing performance/power harder, it can handle training loads like distillation, and it provides supercomputer-style networking on-chip. Their business model focuses on providing IP and core designs that chipmakers can incorporate into their system-on-chip designs. Rather than manufacturing AI chips themselves, Hepzibah AI will license its advanced AI inference IP for integration into a wide variety of devices and products.

Hepzibah AI’s tagline, “Extreme Full-stack AI: from models to metals,” perfectly encapsulates their vision. They are tackling AI from the highest levels of software optimization down to the most fundamental aspects of hardware architecture, ensuring that AI inference is not only more powerful but also dramatically more efficient.

Why does this matter? AI is rapidly becoming as indispensable as the CPU has been for the past few decades. Today, many modern chips, especially system-on-chip (SoC) devices, include a CPU or MCU core, and increasingly, those same chips will require AI capabilities to keep up with the growing demand for smarter, more efficient processing.

This approach allows Hepzibah AI to focus on programmability and adaptable hardware configurations, ensuring they stay ahead of the rapidly evolving AI landscape. By providing best-in-class AI inference IP, Hepzibah AI is in a prime position to capture this massive opportunity.

An Exceptional Founding Team

Martin Snelgrove and Raymond Chik are luminaries in this space—I’ve known them for decades. David Lynch and Taneem Ahmed also bring deep industry expertise, having spent years building and commercializing cutting-edge silicon and software products.

Their collective experience in this rapidly expanding, soon-to-be ubiquitous industry makes investing in Hepzibah AI a clear choice. We can’t wait to see what they accomplish next.

P.S. You may notice that the logo is a curled skunk. I’d like to highlight that the skunk’s eyes are zeros from the MNIST dataset. 🙂 

Contrarian Series: Your TAM is Zero? We love it!

Note: One of the most common pieces of feedback we receive from entrepreneurs is that TSF partners don’t think, act, or speak like typical VCs. The Contrarian Series is meant to demystify this, so founders know more about us before pitching.

Just before the New Year, I spoke at the TBDC Venture Day Conference alongside BetaKit CEO Siri Agrell and serial entrepreneur and former MP Frank Baylis.

When I said “Two Small Fish love Zero TAM businesses,” I said it so matter-of-factly that the crowd was taken aback. I even saw quite a few posts on social media that said, “I can’t believe Allen Lau said it!”

Of course, any business will need to go after a non-zero TAM eventually. But hear me out.

Here’s what I did at Wattpad: I never had a “total addressable market” slide in the early days. I just said, “There are five billion people who can read and write, and I want to capture them all!”

Even when we became a scaleup, I kept the same line. I just said, “There are billions of people who can read, write, or watch our movies, and I want to capture them all!”

Naturally, some VCs tried to box me into the “publishing tool” category or other buckets they deemed appropriate. But Wattpad didn’t really fit into anything that existed at the time. Trust me, I tried to find a box Wattpad would fit into too, but none felt natural.

Why? Because Wattpad was a category creator. And, of course, that meant our measurable TAM was effectively zero.

In other words, we made our own TAM.

Many of our portfolio companies are also category creators, so their decks often don’t have a TAM slide either.

Yes, any venture-backed company eventually needs a large TAM. And, of course, I don’t mean to suggest that every startup needs to be a category creator.

That said, we’re perfectly fine with—in fact, sometimes we even prefer—a pitch deck that has no TAM slide. By definition, category creators have first-mover advantages. More importantly, category creators in a large, winner-take-all market—especially those with strong moats—tend to be extremely valuable at scale and, hence, highly investable.

So, founders, if your company is poised to create a large category, skip the TAM slide when pitching to Two Small Fish. We love it!

P.S. Don’t forget, if you have an “exit strategy” slide in your pitch deck, please remove it before pitching to us. TYSM!

After All, What’s Deep Tech?

“Deep Tech” is one of those terms that gets thrown around a lot in venture capital and startup circles, but defining it precisely is harder than it seems. If you check Wikipedia, you’ll find this:

Deep technology (deep tech) or hard tech is a classification of organization, or more typically a startup company, with the expressed objective of providing technology solutions based on substantial scientific or engineering challenges. They present challenges requiring lengthy research and development and large capital investment before successful commercialization. Their primary risk is technical risk, while market risk is often significantly lower due to the clear potential value of the solution to society. The underlying scientific or engineering problems being solved by deep tech and hard tech companies generate valuable intellectual property and are hard to reproduce.

At a high level, this definition makes sense. Deep tech companies tackle hard scientific and engineering problems, create intellectual property, and take time to commercialize. But what does “substantial scientific or engineering challenges” actually mean? What counts as substantial? “Substantial” is a vague word. A difficult or time-consuming engineering problem isn’t necessarily a deep tech problem; plenty of startups build complex technology that I wouldn’t call deep tech. Deep tech is about tackling problems where existing knowledge and tools aren’t enough.

In 1964, Supreme Court Justice Potter Stewart famously said, “I know it when I see it” when asked to describe his test for obscenity in Jacobellis v. Ohio. By no means am I comparing deep tech to obscenity—I don’t even want to put these two things in the same sentence. However, there is a parallel between the two: they are both hard to put into a strict formula, but experienced technologists like us recognize deep tech when we see it.

So, at Two Small Fish, we have developed our own simple rule of thumb:

If we see a product and ask, “How did they do that?”, and even after the founders explain how it works we still conclude, “Team TSF couldn’t build this ourselves in 6–12 months,” then it’s deep tech.

At TSF, we invest in the next frontier of computing and its applications. We’re not just looking for smart founders. We’re looking for founders who see things others don’t—who work at the edge of what’s possible. And when we find them, we know it when we see it.

This test has been surprisingly effective. Every single investment we’ve made in the past few years has passed it. And I expect it will continue to serve us well.

P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!

AI Has Democratized Everything

This is the picture I used to open our 2024 AGM a few months ago. It highlights how drastically the landscape has changed in just the past couple of years. I told a similar story to our LPs during the 2023 AGM, but now, the pace of change has accelerated even further, and the disruption is crystal clear.

The following outlines the reasons behind one of the biggest shifts we identified as part of our Thesis 2.0 two years ago.

Like many VCs, we evaluate pitches from countless companies daily. What we’ve noticed is a significant rise in startups that are nearly identical to one another in the same category. Once, I quipped, “This is the fourth one this week—and it’s only Tuesday!”

The reason for this explosion is simple: the cost of starting a software company has plummeted. What once required $1–2M of funding to hire a small team can now be achieved by two founders (or even a solo founder) with little more than a laptop or two and a $20/month subscription to ChatGPT Pro (or your favourite AI coding assistant).

With these tools, founders can build, test, and iterate at unprecedented speeds. The product build-iterate-test-repeat cycle is insanely short. If each iteration is a “shot on goal,” the $1–2M of the past bought you a few shots within a 12–18 month runway. Today, that $20/month can buy you a shot every few hours.

This dramatic drop in costs, coupled with exponentially faster iteration speeds, has led to a flood of startups entering the market in each category. Competition has never been fiercer. This relentless pace also means faster failures, and the startup graveyard is now overflowing.

For early-stage investors, picking winners from this influx of startups has become significantly harder. In the past, you might have been able to identify the category winner out of 10 similar companies. Now, it feels like mission impossible when there are hundreds—or even thousands—of startups in each category. Many of them are even invisible, flying under the radar for much longer because they don’t need to fundraise.

Of course, there will still be many new billion-dollar companies. In fact, I am convinced that this AI-driven platform shift will produce more billion-dollar winners than ever—across virtually every established category and entirely new ones that don’t yet exist. But with thousands of startups crowding each category, spotting those winners is harder than ever.

If you’re using the same lens that worked in the past to spot and fund these future tech giants, good luck.

That’s why, for a long time now, we’ve been using a very different lens to identify great opportunities with highly defensible moats to stay ahead of the curve. For example, we’ve been exclusively focused on deep tech—a space where we know we have a clear edge. From technology to product to operations, we have the experience to cover the full spectrum and support founders through the unique challenges of building deep tech startups. So far, this approach has been working really well for us.

I guess we are taking our own advice. As a VC firm, we also need to be constantly improving and striving to be unrecognizable every two years!

There’s no doubt the rules of early-stage VC have shifted. How we access, assess, and assist startups has evolved dramatically. The great AI democratization is affecting all sectors, and venture capital is no exception.

For investors who can adapt, this is a time of unparalleled opportunity—perhaps the greatest era yet in tech investing. The playing field has been levelled, and massive disruption (and therefore opportunities) lies ahead. Incumbents are vulnerable, and new champions will emerge in each category – including VC!

Investing during this platform shift is both exciting and challenging. And I wouldn’t want it any other way, because those who figure it out will be handsomely rewarded.

Contrarian Series: Best Exit Strategy? Not Having One

Note: One of the most common pieces of feedback we receive from entrepreneurs is that TSF partners don’t think, act, or speak like typical VCs. The Contrarian Series is meant to demystify this, so founders know more about us before pitching.

For Wattpad, it was exactly ten years between raising our first round of venture capital in 2011 and the company’s acquisition in 2021. Over that decade, we discussed countless topics in our board meetings.

But one topic we never discussed? Exit strategies.

I distinctly remember, a couple of years before the acquisition, I raised the question to a board member. “We’ve been venture-backed for almost ten years now. Should we start talking about exit…”

I couldn’t even finish the sentence. That board member cut me off:

“Allen, I just want you to build a great company.”

That moment stuck with me. Only after the acquisition did I fully appreciate the significance of those ten years as a venture-backed company without focusing on an exit.

Wattpad’s four largest investors—USV, Khosla Ventures, OMERS, and Tencent—enabled us to focus on building the business, not selling it. OMERS, as a pension fund, and Tencent, as a strategic investor, don’t operate under the typical 10-year fund cycle that drives many venture firms to push for exits. USV, with its consistent track record of generating world-class returns, had the trust of its LPs to prioritize long-term value over short-term outcomes. And Khosla Ventures? Well, no one can tell Vinod Khosla what to do, and he loves making big, long-term bets.

Their perspectives freed us to focus on building a great company rather than prematurely worrying about how to sell it.

In early 2020, a year before Wattpad was acquired for US$660M, we set an ambitious company objective: to become “Investment Ready.” This meant ensuring we could scale profitably and confidently project $100M+ in revenue with a minimum of 40% year-over-year growth. By the end of 2020, we wanted to be in a position to choose between preparing for an IPO (we even reserved our ticker symbol WTPD), raising growth capital to accelerate expansion, or scaling organically without any additional funding.

When an inbound acquisition offer came in mid-2020, this optionality proved invaluable. It allowed us to run a proper process with multiple interested parties. We were clear with potential acquirers: our preference was to remain independent. If the offer wasn’t higher than the value we could command through an IPO, we weren’t interested, and we would walk away. Because we had the fundamentals to back it up, no one doubted us.

This underscores an important point: the best way to generate a great outcome is to build an amazing business. Focus on creating value, and optionality will follow.

Any CEO who claims to have an exit strategy—especially in the early stages—is either naïve, delusional, or lying.

Here’s the reality: M&A is far less common than people think. The pool of serious potential acquirers often narrows to just a handful in the best-case scenarios. And even then, the stars have to align—you need the right timing, the right strategic fit, and the right price. It’s easier said than done.

Of course, that doesn’t mean I ignored the idea of acquisition entirely (and founders should consider M&A, but only under the right circumstances, and I will save it for another blog post). For instance, we built relationships with potential strategic acquirers and stayed aware of the landscape. But the time I spent on this was minimal. Even my leadership team occasionally asked why I never talked about M&A. The answer was simple: it wasn’t a priority.

Too many founders overthink their “exit strategy,” and it often backfires. Changing their product to appeal to a potential acquirer? Building one-sided partnerships in the hope they’ll buy the company? Hope is not a strategy.

The same goes for VCs. Some overthink their portfolio companies’ “exit strategy” because they worry about selling before the 10-year fund window closes. While this concern is valid, it doesn’t mean they should push their best portfolio companies to sell. There are many ways for VCs to liquidate their positions without forcing a sale. Ironically, the best way for a founder to help their investors exit is to focus on increasing enterprise value. Shares in a great company are always in demand.

For an early-stage startup, having an exit strategy is as absurd as asking an infant to decide which jobs they’ll apply to after university. The founders’ job is to nurture that infant—raise them into a great human being. The results will follow.

Build a great business, and everything else will fall into place. There’s an old saying: Great companies get bought, not sold. It couldn’t be more true.

P.S. Founders, if you have an exit strategy slide in your pitch deck, please remove it before pitching to us. TYSM!

The Three Phases of Building a Great Tech Company: Technology, Product, and Commercialization

There are three distinct phases in the journey of building a great tech company: technology, product, and commercialization. These phases are sequential yet interconnected and sometimes overlap. Needless to say, mastering each is critical to the company’s eventual success. However, it’s important to recognize their differences.

• Building technology is about founders creating what they love. It’s driven by passion and expertise and often leads to groundbreaking innovations.

• Building a product is about creating something others love to use. This is where usability and solving real problems come into focus.

• Commercialization is about building something people will pay for and driving revenue. This phase transforms users into paying customers or finds someone else to pay for it, such as advertisers.

These phases are related but distinct. Great technology doesn’t guarantee anyone will use it, and a widely used product doesn’t always lead to revenue. I’ve seen many technologists create incredible technologies no one adopts, as well as popular products that fail to commercialize effectively (though it’s rare for a product with tens of millions of users to fail entirely).

For deep tech companies, these phases often have minimal overlap and unfold sequentially. The technology might take years to develop before a usable product emerges, and commercialization may come even later.

In contrast, shallow tech B2B SaaS products often see complete overlap between the phases. For example, a subscription model is typically apparent from the outset, and the tech, product, and commercialization phases blend seamlessly.

Wattpad is also a good example of how these phases can play out differently. Initially, we built our technology and product hand in hand, creating a platform loved by millions of users. However, its commercialization—whether through ads, subscriptions, or movies, the three revenue models we had—was deliberately delayed. Many people assumed we didn’t know how to make money without understanding this counterintuitive approach (but of course, we purposely kept some of our strategies under wraps). This approach allowed us to use “free” as a potent weapon to dominate—and eliminate—our competitors in a winner-takes-all strategy. Operating for years with minimal revenue was clearly the right decision for the market dynamics and our long-term goals. More on this in a separate blog post.

Given this variability, asking, “What is your revenue?” must be thoughtful and context-specific. For some companies, the absence of revenue may be an intentional and brilliant strategy. For others, insufficient revenue could signal serious trouble. It all depends on the company’s stage, strategy, and goals. Understanding the sequence, timing, and specific needs of a business model is crucial for both investors and entrepreneurs. Zero revenue could be a blessing in the right context. On the other hand, pushing for revenue growth—let alone the wrong type of revenue growth—can be fatal, a scenario we’ve seen many times.

At Two Small Fish Ventures, we are very thoughtful and experienced investors. We understand that starting to generate revenue—or choosing not to generate revenue—at the right time is one of the secrets to success that very few people have mastered. We practise what we preach. Over the past two years, all but one of TSF’s investments have been pre-revenue.

No revenue? No problem. In fact, that’s great. Bring them on!

Our Secret to Finding 100x Opportunities

In previous blog posts (here and here), I’ve delved into the mathematical model for constructing an early-stage VC portfolio designed to achieve outsized returns. In short, investing early to build a concentrated portfolio of fewer than 20 moonshot companies, each with the potential for 100x returns or more, is the way to go.

The math is straightforward—it doesn’t lie. Not adhering to this model can significantly reduce the likelihood of achieving exceptional returns.
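To make that “straightforward math” concrete, here is a toy Monte Carlo sketch. It is not TSF’s actual model: the 5% home-run rate, the 100x multiple, the equal-cheque assumption, and the portfolio sizes are all made-up illustrative numbers. It compares how often a concentrated fund (~15 companies) versus a large one (~100 companies) returns more than 10x:

```python
import random

def prob_outsized(n_companies, threshold=10.0, p_home_run=0.05,
                  home_run_multiple=100.0, trials=50_000, seed=1):
    """Chance the fund returns more than `threshold`x, assuming equal cheques
    and each company independently returning 100x with probability p (else ~0x).
    All parameters are illustrative assumptions, not real fund data."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Fund multiple = total return across companies / number of cheques.
        multiple = sum(home_run_multiple for _ in range(n_companies)
                       if rng.random() < p_home_run) / n_companies
        if multiple > threshold:
            hits += 1
    return hits / trials

concentrated = prob_outsized(15)    # fewer than 20 moonshots
large_portfolio = prob_outsized(100)
print(concentrated, large_portfolio)
```

Under these assumptions both portfolios have the same *expected* multiple (5x), but the concentrated one is several times more likely to clear the 10x bar: with 15 cheques a single 100x winner moves the whole fund, while with 100 cheques each winner is diluted. Concentration changes the shape of the return distribution, not its mean—which is exactly why conviction per investment matters so much.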

However, simply following this model is not enough to guarantee outsized results. Don’t mistake correlation for causation! The real challenge lies in identifying, evaluating, and supporting these “100x” opportunities to help turn their vision into reality.

At TSF, we use a simple framework to evaluate whether a potential investment can meet the 100x criteria:

10x (early stage) × 10x (transformative behaviour) = 100x conviction

The first “10x” is straightforward: We invest when companies are in their earliest stages. For instance, over the past two years, all but one of TSF’s investments have been pre-revenue. This made financial analysis simple—those spreadsheets were filled with zeros!

Many of these companies are also pre-traction. While having traction isn’t a bad thing, savvy investors shouldn’t rely on it for validation. The reason is simple: traction is visible to everyone. By the time it becomes apparent, the company is often already too expensive and out of reach.

At TSF, we have a unique advantage. Before transitioning to investing, all TSF partners were engineers, product experts, successful entrepreneurs, and operators—including a “recovering CEO”—that’s me! Each partner brings distinct domain expertise, collectively creating a broad and deep perspective. This allows us to invest only when we possess the domain knowledge needed to fully evaluate an opportunity. We “open the hood” to determine whether the technology is genuinely unique, defensible, and disruptive, or whether it is easily replicable. If it’s the latter, we pass quickly. A strong, defensible tech moat is a key criterion for us. This approach means we might pass on some promising “shallow-tech” opportunities, but we’re very comfortable with that. After all, we believe the best days of shallow tech are behind us.

Maintaining a concentrated portfolio allows us to commit only to investments where we have unwavering conviction. In contrast, a large portfolio would require us to find a large number of 100x opportunities and pursue those we might not fully believe in. Frankly, I wouldn’t sleep well if we took that route. This route would also make it difficult to provide the meaningful, tailored support we’ve promised our entrepreneurs (more on that in a future post). 

When evaluating product potential, we look beyond the present. At TSF, we assess how a technology might reshape the landscape over the next decade or more. We start by understanding the intrinsic needs of the user and envision how a product could fundamentally change customer or end-user behaviour. This is crucial: if a product that addresses a massive opportunity has a strong tech moat, first-mover advantages, and the ability to change behaviour while facing few viable alternatives, it can unlock significant new value and create a defensible, category-defining business.

This often translates into substantial commercialization potential. If we can foresee how the product might evolve into adjacent markets (its second, third, or even fourth act) with almost uncapped possibilities, we achieve the “holy trinity” of tech-product-commercialization potential—forming the second 10x of our conviction.

Here’s how we describe it:

Two Small Fish Ventures invests in early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new, impactful value.

This thesis underpins our investment decisions and ensures that each choice we make aligns with our long-term vision for transformative innovation.

While this framework may sound simple, executing it well is extremely difficult. It requires what I call a “crystal ball” skill set that spans the full spectrum of entrepreneurial, technical, product, and operational backgrounds.

Over the past decade, we’ve built a portfolio of more than 50 companies across three funds. By employing this approach, the entrepreneurs we’ve supported have achieved numerous breakout successes. This post outlines our “secret sauce,” and we will continue to leverage it.

As you can see, early-stage VC is more art than science. Doing it well requires thoughtfulness, insight, and the superpower of envisioning the future. It’s challenging but incredibly rewarding. I wouldn’t trade it for anything.

Fabless + ventureLAB is Cloud Computing for Semiconductors

This is a follow-up blog post to my last piece about Blumind.

More than two decades ago, before I started my first company, I was involved with an internet startup. Back then, the internet was still in its infancy, and most companies had to host their own servers. The upfront costs were daunting—our startup’s first major purchase was hundreds of thousands of dollars in Sun Microsystems boxes that sat in our office. This significant investment was essential for operations but created a massive barrier to entry for startups.

Fast forward to 2006 when we started Wattpad. We initially used a shared hosting service that cost just $5 per month. This shift was game-changing, enabling us to bootstrap for several years before raising any capital. We also didn’t have to worry about maintaining the machines. It dramatically lowered the barrier to entry, democratizing access to the resources needed to build a tech startup because the upfront cost of starting a software company was virtually zero.

Eventually, as we scaled, we moved to AWS, which was more scalable and reliable. Apparently, we were AWS’s first customer in Canada at the time! It became more expensive as our traffic grew, but we still didn’t have to worry about maintaining our own server farm. This significantly simplified our operations.

A similar evolution has been happening in the semiconductor industry for more than two decades, thanks to the fabless model. Fabless chip manufacturing allows companies—large or small—to design their semiconductors while outsourcing fabrication to specialized foundries. Startups like Blumind leverage this model, focusing solely on designing groundbreaking technology and scaling production when necessary.

But fabrication is not the only capital-intensive aspect. There is also the need for other equipment once the chips are manufactured.

During my recent visit to ventureLAB, where Blumind is based, I saw firsthand how these startups utilize shared resources for this additional equipment. Not only is Blumind fabless, but they can also access various hardware equipment at ventureLAB without the heavy capital expenditure of owning it.

Image: Let’s see how the chip performs at −40°C!
Image: Jackpine (first tapeout)
Image: Wolf (second tapeout)
Image: BM110 (third tapeout)

The common perception that semiconductor startups are inherently capital-intensive couldn’t be more wrong. The fabless model—in conjunction with organizations like ventureLAB—functions much like cloud computing does for software startups, enabling semiconductor companies to build and grow with minimal upfront investment. For the most part, all they need initially are engineers’ computers to create their designs until they reach a scale that requires owning their own equipment.

Fabless chip design combined with shared resources at facilities like ventureLAB is democratizing the semiconductor space, lowering the barriers to innovation, and empowering startups to make significant advancements without the financial burden of owning fabrication facilities. Labour costs aside, the upfront cost of starting a semiconductor company like Blumind could be virtually zero too.

That’s why the saying, “software once ate the world alone; now, software and hardware consume the universe together,” is becoming true at an accelerated pace. We have already made several investments based on this theme, and we are super excited about the opportunities ahead.

Portfolio Highlight: Blumind

When it comes to watches, my go-to is a Fitbit. It may not be the most common choice, but I value practicality, and not having to recharge daily is a must for me. My Fitbit lasts about 4 to 5 days—decent, but still not perfect.

Now, imagine if we could extend that battery life to a month or even a year. The freedom and convenience would be incredible. Considering the immense computing demands of modern smartwatches, this might sound far-fetched. But that’s where our portfolio company, Blumind, comes into play.

Blumind’s ultra-low power, always-on, real-time, offline AI chip holds the potential to redefine how we think about battery life and device efficiency. This advancement enables edge computing with extended battery life, potentially lasting years – not a typo – instead of days. Products powered by Blumind can transform user behaviours and empower businesses and individuals to unlock new and impactful value (see our thesis).
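As a back-of-envelope illustration (every number below is hypothetical—neither Blumind’s specs nor Fitbit’s), battery life is simply capacity divided by average draw, so cutting the draw of an always-on workload by orders of magnitude stretches days into years:

```python
def battery_life_days(battery_mwh: float, avg_power_mw: float) -> float:
    """Battery life in days = capacity (mWh) / average draw (mW) / 24 hours."""
    return battery_mwh / avg_power_mw / 24

# Hypothetical wearable: a 150 mWh battery at 1.25 mW average draw -> 5 days.
baseline = battery_life_days(150, 1.25)

# If the always-on workload dominated that draw and became ~1000x more
# efficient, the same battery would last ~5000 days (over a decade).
efficient = battery_life_days(150, 1.25 / 1000)
print(baseline, efficient)
```

In a real device the AI workload is only one of several power consumers (radio, display, sensors), so the full 1000x never applies to the whole budget—but for always-on sensing, where the inference chip dominates the idle draw, the arithmetic shows why days can plausibly become years.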

Blumind’s secret lies in its brain-inspired, all-analog chip design. The human brain is renowned for its energy-efficient computing abilities. Unlike most modern chips that rely on digital systems and require continuous digital-to-analog and analog-to-digital conversions (which drain power), Blumind’s approach emulates the brain’s seamless analog processing. This unique architecture makes it perfect for power-sensitive AI applications, resulting in chips that could be up to 1000 times more energy-efficient than conventional chips, making them ideal for edge computing.

Blumind’s breakthrough technology has practical and wide-ranging applications. Here are just a few use cases:

Always-on Keyword Detection: Integrates into various devices for continuous voice activation without excessive power usage.

Rapid Image Recognition: Supports always-on visual wake word detection for applications such as access control, enhancing human-device interaction with real-time responses.

Time-Series Data Processing: Processes data streams with exceptional speed for real-time analysis in areas like predictive maintenance, health monitoring, and weather forecasting.

These capabilities unlock new possibilities across multiple industries, including wearables, smart home technology, security, agriculture, medical, smart mobility, and even military and aerospace.

A few weeks ago, I visited Blumind’s team at their ventureLAB office and got an up-close look at their BM110 chip, now in its third tapeout. Blumind exemplifies the future of semiconductor startups through its fabless model, which significantly lowers the initial infrastructure costs associated with traditional semiconductor companies. With resources like ventureLAB supporting them, Blumind has managed to innovate with remarkable efficiency and sustainability. (I’ll share more about the fabless model in an upcoming post.)

I’m thrilled to see where Blumind’s journey leads and how its groundbreaking technology will transform daily life and reshape multiple industries. When devices can go years without needing a recharge instead of mere hours, that’s nothing short of game-changing.

Image: Close-up view of BM110. It is a piece of art!

Image: Qualification in action. Note that BM110 (lower-left corner) is tiny and space-efficient.

Image: The Blumind team is working hard at their ventureLAB office. More on this in a separate blog post here.

P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Two Small Fish Ventures Celebrates the Merger of Printful and Printify

We’re thrilled to share that Printify, a company we have proudly backed since its first funding round, has entered into a merger with Printful (see report by TechCrunch). As long-time supporters of the Printify team, we at Two Small Fish Ventures are incredibly happy with this outcome, which marks a significant milestone in the production-on-demand industry and an exciting moment for everyone involved.

Printify and Printful are both leading platforms that empower entrepreneurs and businesses to create and sell custom products worldwide without the need to hold inventory, thanks to their advanced production-on-demand fulfillment networks. Printify has been growing rapidly, now boasting a team of over 700 employees. Combined with Printful’s team, the newly merged company will have well over 2,000 employees, making it by far the number one player in the production-on-demand market.

Printful, with over $130 million raised and a valuation exceeding $1 billion, and Printify, backed by $54.1 million in funding, have established themselves as the top two global leaders in this field. This merger solidifies their position as the dominant force in the industry, setting new standards and driving innovation in production-on-demand services worldwide. We’re proud to have supported Printify from the very beginning and look forward to witnessing the next chapter in their remarkable journey.

P.S. In true spirit of unity, founders Lauris Liberts and James Berdigans have sealed the deal by swapping T-shirts with each other’s logos—because nothing says “teamwork” like wearing the competition’s brand!

Bridge Technologies are Rarely Great Investments

More than two decades ago, I co-founded my first company, Tira Wireless. The business went through several iterations, and eventually, we landed on building a mobile content delivery product. We raised roughly $30M in funding, which was a significant amount at the time. We even ranked as Canada’s Third Fastest Growing Technology Company in the Deloitte Technology Fast 50.

We had a good run, but eventually, Tira had to shut its doors.

We made numerous strategic mistakes, and I learned a lot—lessons that, quite frankly, helped me make far better decisions when I later started Wattpad.

One of the most important mistakes we made was falling into the “bridge technology” trap.

What is the “bridge technology” trap?

Reflecting on significant “platform shifts” over recent decades reveals a pattern: each shift unleashes waves of innovation. Consider the PC revolution in the late 20th century, the widespread adoption of the internet and cloud computing in the 2000s, and the mobile era in the 2010s. These shifts didn’t just create new opportunities; they also created significant pain points as the world tried to leap from one technology to another. Many companies emerged to solve problems arising from these changes.

Tira started when the world began its transition from web to mobile. Initially, there were countless mobile platforms and operating systems. These idiosyncrasies created a huge pain point, and Tira capitalized on that. But in a few short years, mobile consolidated into just two major players—iOS and Android. The pain point rapidly disappeared, and so did Tira’s business.

Similarly, most of these “bridge technology” companies perform very well during the transition because they solve a critical, short-term pain point. However, as the world completes the transition, their business disappears. For instance, numerous companies focused on converting websites into iPhone apps when the App Store launched. Where are they now?

Some companies try to leverage what they’ve built and pivot into something new. But building something new is challenging enough, and maintaining a soon-to-be-declining bridge business while transitioning into a new one is even harder. This is akin to the innovator’s dilemma: successful companies often struggle with disruptive innovation, torn between innovating (and risking their profitable products) and maintaining the status quo (and risking obsolescence).

As an investor, it makes no sense to invest in a “bridge” company that is fully expected to pivot within a few years. A pivot should be a Plan B, not Plan A. It’s extremely rare for bridge technology companies to become great, venture-scale investments. In fact, I can’t think of any off the top of my head.

We are currently in the midst of a tectonic AI platform shift. We’re seeing a huge volume of pitches, which is incredibly exciting. Many of these startups built great technologies and products. However, a significant number of these pitches also represent bridge technologies. As the current AI platform shift matures, these bridge technologies will lose relevance. Sometimes, it’s obvious they’re bridge technologies; other times, it requires significant thought to identify them. This challenge is intellectually stimulating, and I enjoy every moment of it. Each analysis informs us of what the future looks like, and just as importantly, what it will not look like. With each passing day, we gain stronger conviction about where the world is heading. It’s further strengthening our “seeing the future is our superpower” muscle, and that’s the most exciting part.

Portfolio Highlight: #paid

#paid was one of the first investments we made at Two Small Fish Ventures. It’s been over a decade since we backed Bryan and Adam, who were still working out of Toronto Metropolitan University’s DMZ at the time. They had a vision to build a platform that connected creators and brands before “creator” was even a term! Back then, influencer and creator marketing campaigns were just tiny experiments.

A decade later, the creator economy has taken off. It’s now a $24 billion market—an order of magnitude larger than just a few years ago, with no signs of slowing down. The next wave of growth is still ahead as ad spending continues to shift away from traditional media. With the global ad market approaching $800 billion, one thing remains true: ad dollars follow the eyeballs—always. And where are those eyeballs today? On creators and influencers.

Today, #paid has become the world’s dominant platform, with over 100,000 creators onboard. It addresses a significant challenge: most creators don’t know how to connect with brands, especially iconic brands like Disney, Sephora, or IKEA. On the other hand, brands struggle to find the right creators amidst a sea of talent. #paid bridges this gap, acting as the marketplace that makes collaboration easy. They use data-driven insights to determine what makes a successful match, ensuring that both creators and brands can find each other effortlessly.

At #paid, brands and creators work with a dedicated team of experts to build creative strategies backed by research, first-party data, and industry benchmarks. This means campaigns run smoothly, allowing creators to focus on doing what they love—creating—without getting bogged down by administrative tasks.

I’m not just speaking as an investor—I’ve actually run a campaign with #paid as an influencer myself, and I can personally vouch for how seamless the experience was.

If you think #paid is all about TikTok, Snap, or Instagram, think again. Brands leverage #paid content across every platform. Want proof? Just check out the Infiniti TV commercial, which came from a #paid campaign.

How about billboards in major cities like NYC, Toronto, and more? #paid has that covered too.

#paid also brings creators and marketers together in real life. I had the privilege of speaking at their Creator Marketing Summit in NYC a few weeks ago, and I was amazed at how far #paid has come. The summit brought together hundreds of creators and top brand marketers—an impressive showcase of the platform’s evolution.

Looking back on this journey, here are my key takeaways:

• Great companies take a decade to build.

• To create a category leader, especially in winner-take-all markets, the idea has to be bold and often misunderstood at first. Bryan and Adam saw something that few others did, and their first-mover advantage has solidified #paid’s leading position today.

• There’s no such thing as “done.” #paid constantly reinvents itself. Generative AI is another exciting opportunity for step-function growth, and I can’t wait to see what’s next.

Bryan and Adam should be incredibly proud of what they’ve accomplished.

Venture Capital is Call Options on Startups

Early-stage venture capital (VC) has always been the oddball in asset management. Unlike other asset classes, it offers the highest potential returns, but it also comes with the highest variance—especially when portfolio construction isn’t done right. On top of that, it has an inherent “default rate” of about 80%.

Tell a traditional fund manager about this 80% default rate, and you’ll likely get a strange look.

A few months ago, I was trying to explain how VC works to a fund manager. After covering the usual points—how VC is essentially a home run derby with many misses—he paused and said, “I get it. VC is like buying call options on startups.”

I hadn’t considered it that way before, but he was absolutely right.

For those unfamiliar, buying a call option gives you the right, but not the obligation, to purchase a stock at a predetermined price (the strike price) before a specified expiration date. Investors use this strategy to profit from an anticipated—but not guaranteed—increase in the stock’s price. If the stock price rises above the strike price (plus the premium paid), the option becomes profitable. The potential profit is theoretically unlimited, while the maximum loss is limited to the premium paid.

Similarly, investing in a startup gives you the chance to acquire equity at an attractive price, with a ~20% chance the startup will take off—though this usually takes about a decade to materialize. VCs use this strategy to profit from a potential—but not guaranteed—rise in the company’s value. If the startup succeeds and its valuation soars beyond the investment (plus associated costs), the return can be massive. The potential profit is virtually unlimited if the company becomes a breakout success, while the maximum loss is limited to the initial investment.
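The payoff structures map neatly onto code. Below is a minimal sketch with purely illustrative numbers (both functions and all figures are hypothetical, not drawn from any real deal): in both cases the downside is capped and the upside is open-ended.

```python
def call_payoff(stock_price, strike, premium):
    """Profit on a long call at expiry: unlimited upside, loss capped at the premium."""
    return max(stock_price - strike, 0) - premium

def vc_payoff(exit_value, entry_value, investment):
    """Hypothetical profit on a single startup investment: upside scales with
    the exit, loss capped at the amount invested."""
    return investment * max(exit_value / entry_value - 1.0, -1.0)

# Worthless outcome vs. breakout outcome, in both worlds:
print(call_payoff(0, 100, 5), call_payoff(500, 100, 5))  # -5 395
print(vc_payoff(0, 10, 1), vc_payoff(1000, 10, 1))       # -1.0 99.0
```

The shapes are identical: a small, known maximum loss set against a theoretically unlimited gain.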

VC and call options are strikingly similar, don’t you think? They’re like twins!

From now on, I’ll tell people: Venture capital is call options on startups.

Winning the Home Run Derby with Proper Portfolio Construction

TLDR – 20 companies in a VC portfolio is the optimal balance between risk and reward, offering a very high chance of hitting outsized returns without significant risk of losing money. This is exactly the approach we follow at Two Small Fish Ventures, as we keep our per-fund portfolio size limited to roughly 20 companies.

In my previous post, VC is a Home Run Derby with Uncapped Runs, I illustrated mathematically why early-stage venture funds’ success doesn’t hinge on minimizing failures, nor does it come from hitting singles (e.g., the number of “3x” companies). These smaller so-called “wins” are just noise.

As I said:

“Venture funds live or die by one thing: the percentage of the portfolio that becomes breakout successes — those capable of generating returns of 10x, 100x, or even 1000x.”

To drive high expected returns for VCs, finding these breakout successes is key. However, expected value alone doesn’t tell the full story. We also need to consider variance. In simple terms, even if a fund’s expected return is 5x or 10x, it doesn’t necessarily mean it’s a good investment. If the variance is too high—meaning the fund has a low probability of achieving that return and a high probability of losing money—it would still be a poor bet.

For example, imagine an investment opportunity that has a 10% chance of returning 100x and a 90% chance of losing everything. Its expected return is 10x (i.e., 10% x 100x + 90% x 0x = 10x). But despite the attractive expected return, it’s still a terrible investment due to the extremely high risk of total loss.

That said, there’s a time-tested solution to turn this kind of high-risk investment into a great one: diversification. While everyone understands the importance of diversification, the real key lies in how it’s done. By building a properly diversified portfolio, we can reduce variance while maintaining a high expected return. This post will illustrate mathematically how the right portfolio construction allows venture funds to generate outsized returns while ensuring a high probability of success.
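To see how diversification defuses that risk, take the same hypothetical bet (10% chance of 100x, 90% chance of total loss) and hold 20 independent copies of it. A single 100x winner on a 1/20 slice already returns the whole fund 5x, so the portfolio loses money only if every company strikes out:

```python
# One bet: 10% chance of 100x, 90% chance of 0x (expected return 10x).
# In a 20-company portfolio of such independent bets, even one winner
# returns the fund 5x, so a fund-level loss requires all 20 to strike out.
p_loss_single = 0.90
n = 20
p_loss_portfolio = p_loss_single ** n  # 0.9^20

print(f"single bet:            {p_loss_single:.0%} chance of loss")
print(f"{n}-company portfolio: {p_loss_portfolio:.1%} chance of loss")
```

The expected return is still 10x, but the chance of losing money falls from 90% to roughly 12%. Diversification cuts the variance without touching the expected value.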

Moonshot Capital vs. PlayItSafe Capital: A Quick Recap

Let’s start by revisiting our two hypothetical venture capital firms: Moonshot Capital and PlayItSafe Capital. Moonshot Capital swings for the fences, aiming to find the next 100x company while expecting most of the portfolio to fail. PlayItSafe Capital, on the other hand, protects downside risk (at least that’s what they think), but by avoiding bigger risks, it sacrifices the chance of finding outsized returns.

Moonshot Capital: Out of 20 companies, 16 resulted in strikeouts (0x returns), 3 companies achieved 10x returns, and 1 company achieved a 100x return.

PlayItSafe Capital: Out of 20 companies, 7 resulted in strikeouts (0x returns), 7 companies broke even (1x), 5 companies achieved 3x returns, and 1 company achieved a 10x return.

Here’s how their expected returns compare:

Moonshot Capital has an expected return of 6.5x, thanks to one company yielding 100x and three companies yielding 10x (i.e. (1 x 100 + 3 x 10 + 16 x 0) x $1 = $130 on $20 invested).

PlayItSafe Capital has a much lower expected return of 1.6x, with its highest return from one 10x company, five 3x returns, and several breakeven companies (i.e. (1 x 10 + 5 x 3 + 7 x 1 + 7 x 0) x $1 = $32 on $20 invested).

Despite these differences in expected returns, what’s surprising is that the probability of losing money (i.e., achieving an average return of less than 1x at the fund level) is quite similar for both firms.

Let’s dive into the math to see how we calculate these probabilities:

Moonshot Capital: 12.9% Probability of Losing Money

1. Expected Return: E[X] = 0.80 x 0 + 0.15 x 10 + 0.05 x 100 = 6.5

2. Variance: Var(X) = E[X²] − (E[X])² = (0.80 x 0² + 0.15 x 10² + 0.05 x 100²) − 6.5² = 515 − 42.25 = 472.75

3. Standard Deviation: σ = √472.75 ≈ 21.74

4. Standard Error: SE = σ / √20 ≈ 4.86

Using a normal approximation, the z-score to calculate P(X < 1) is:

z = (1 − 6.5) / 4.86 ≈ −1.13

Looking this up in the standard normal distribution table gives us:

P(X < 1) = 0.129 or 12.9%

PlayItSafe Capital: 11.6% Probability of Losing Money

Similarly, looking this up in the standard normal distribution table gives us (sparing you all the equations):

P(X < 1) = 0.116 or 11.6%

Shockingly, these two firms’ probabilities of losing money are essentially the same. The math does not lie!
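For readers who want to verify these figures, here is a short Python sketch of the same normal-approximation calculation, using the per-company outcome probabilities implied by the two portfolios above:

```python
from math import sqrt, erf

def loss_probability(outcomes, probs, n=20):
    """P(average portfolio return < 1x) for n companies, via the
    normal approximation described above."""
    mean = sum(p * x for p, x in zip(probs, outcomes))
    var = sum(p * x * x for p, x in zip(probs, outcomes)) - mean ** 2
    se = sqrt(var / n)                    # standard error of the fund average
    z = (1 - mean) / se                   # z-score for "fund returns < 1x"
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

# Moonshot: 16/20 strikeouts, 3/20 at 10x, 1/20 at 100x
print(f"Moonshot:   {loss_probability([0, 10, 100], [0.80, 0.15, 0.05]):.1%}")
# PlayItSafe: 7/20 strikeouts, 7/20 at 1x, 5/20 at 3x, 1/20 at 10x
print(f"PlayItSafe: {loss_probability([0, 1, 3, 10], [0.35, 0.35, 0.25, 0.05]):.1%}")
```

Running it prints 12.9% for Moonshot and 11.6% for PlayItSafe, matching the figures above.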

Here’s a graphical representation of the outcomes (probability density) for Moonshot Capital and PlayItSafe Capital.

Probability Density Graphs: Comparing Moonshot and PlayItSafe

As you can see, Moonshot has higher upside potential, with the density peaking near 6.5x, while PlayItSafe is concentrated around much lower returns. Since their downside risks are more or less the same while PlayItSafe’s approach severely limits the upside, PlayItSafe is, counterintuitively, far riskier from a risk-reward perspective.

Proper Portfolio Construction: How Portfolio Size Affects Returns

To further optimize Moonshot’s strategy, we will explore how different portfolio sizes affect the balance between risk and reward. Below, I’ve analyzed the outcomes (i.e. portfolio size sensitivity) for Moonshot Capital across portfolio sizes of n = 5, n = 10, n = 20, and n = 30.

The graph below shows the probability density curves for Moonshot Capital with varying portfolio sizes:

As you can see, smaller portfolios (n = 5, n = 10) exhibit higher variance, with a greater spread of potential outcomes. Larger portfolios (n = 20, n = 30) reduce the variance but also diminish the likelihood of hitting outsized returns.
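The narrowing of these curves is the standard error shrinking with the square root of portfolio size. Here is a quick sketch using Moonshot’s per-company distribution; it reproduces only the spread of the fund-level average, not the full curves in the graph:

```python
from math import sqrt

# Moonshot's per-company distribution: 80% at 0x, 15% at 10x, 5% at 100x.
outcomes, probs = [0, 10, 100], [0.80, 0.15, 0.05]
mean = sum(p * x for p, x in zip(probs, outcomes))
var = sum(p * x * x for p, x in zip(probs, outcomes)) - mean ** 2

for n in (5, 10, 20, 30):
    se = sqrt(var / n)  # standard deviation of the fund-level average return
    print(f"n={n:2d}: expected return {mean}x, spread ±{se:.2f}x")
```

The expected return stays at 6.5x for every n; only the spread around it narrows as companies are added.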

Why 20 is the Optimal Portfolio Size

1. Why 20 is Optimal:

At n = 20, Moonshot Capital strikes an ideal balance. The risk of losing money, i.e. P(X < 1), remains manageable at 12.9%, while the probability of outsized returns stays high: a 62.1% chance of returning more than 5x. This suggests that Moonshot’s high-risk, high-reward approach pays off without exposing the fund to unnecessary risk.

2. Why Bigger Isn’t Always Better (n = 30):

When the portfolio size increases to n = 30, we see a significant drop-off in the likelihood of outsized returns. The probability of achieving a return higher than 5x drops significantly from 62.1% at n = 20 to 41.9% at n = 30, and counterintuitively, the risk of losing money starts to increase. This suggests that larger portfolios can dilute the impact of the big wins that drive fund returns. It also mathematically explains why “spray-and-pray” does not work for early-stage investments.

3. The Pitfalls of Small Portfolios (n = 5 and n = 10):

At smaller portfolio sizes, such as n = 5 or n = 10, the variance increases significantly, making the portfolio’s returns more unpredictable. For example, at n = 5, the probability of losing money is much higher, and the risk of extreme outcomes becomes more pronounced. At n = 10, the flat curve suggests that the variance is still very high. This high variance means the returns are volatile and difficult to predict, increasing risk.

Conclusion: How to Win the Home Run Derby With Uncapped Runs

The key takeaway here is that Moonshot Capital’s strategy of swinging for the fences doesn’t mean taking on excessive risk. With 20 companies in the portfolio, Moonshot strikes the optimal balance between risk and reward, offering a very high chance of hitting outsized returns without significant risk of losing money.

While n=20 is optimal, n=10 is also pretty good, but n=30 is significantly worse. So, a ‘concentrated’ approach – but not ‘n=5 concentrated’ – is far better than ‘spray and pray,’ if you have to pick between the two.

This is exactly the approach we follow at Two Small Fish Ventures. We don’t write a cheque unless we have that magical “100x conviction.” We also keep our per-fund portfolio size limited to roughly 20 companies. This blog post mathematically breaks down one of the many secret sauces behind our success.

Don’t tell anyone.

Axiomatic AI – Make the World’s Information Intelligible

Today’s blog post is brought to you by Eva Lau. She will talk about one of our recent investments: Axiomatic AI.

Congratulations to Axiomatic on their recent US$6M seed round led by Kleiner Perkins! Two Small Fish Ventures is thrilled to be an early investor since the company’s inception—and the only Canadian investor—in what promises to be a game-changer in solving fundamental problems in physics, electronics, and engineering.

Why is this important? Large Language Models (LLMs) excel at language (as their name suggests) but struggle with logic. That’s why AI can write poetry yet stumbles on math: LLMs mainly rely on ‘pattern-matching’ rather than ‘reasoning.’

This is where Axiomatic steps in. The company’s secret sauce is its new AI model called Automated Interpretable Reasoning (AIR), which combines advances in reinforcement learning, LLMs, and world models. Axiomatic’s mission is to create software and algorithms that not only automate processes but also provide clear, understandable insights to fuel innovation and research, ultimately solving real-world problems in engineering and other industrial applications.

The startup is the brainchild of world-renowned professors from MIT, the University of Toronto, and The Institute of Photonic Sciences (ICFO) in Barcelona. The team includes leading engineers, physicists, and computer science experts.

With its innovative models, the startup fits squarely within our fund’s focus: the next frontier of computing and its applications. As all TSF partners are engineers, product experts, and recent operators, we are uniquely positioned to understand the potential of Axiomatic and support the team. 

Axiomatic’s new AIR model is well-positioned to accelerate engineering and scientific discovery, boost productivity by orders of magnitude in the coming years, and ultimately make the world’s information intelligible.

Viggle AI Leads the Next Wave of Disruption in Content

We’re thrilled to share that Toronto-based Viggle AI, a Canadian start-up revolutionizing character animation through generative AI, has raised US$19 million in funding. The round was led by a16z with Two Small Fish participating as a significant investor. As part of the investment, I also became an advisor to the company. 

Creators are unleashing their creativity with Viggle AI by generating some of the most entertaining memes and videos online. You’ve probably seen a clip of Joaquin Phoenix’s Joker persona recreating Lil Yachty’s walkout from the Summer Smash Festival – it was made with Viggle AI!

But Viggle AI is much more than a simple meme generator. It’s a powerful platform that can completely reinvent how games, animation, and other videos are produced. 

Powered by JST-1, the world’s first 3D-video foundation model with actual physics understanding, Viggle AI can make any character move as you want. Its unique AI model can generate high-quality, realistic, physics-based 3D animations and videos from either static images or text prompts.

For professional animation engineers, game designers, and VFX artists, this is game-changing. Viggle AI can streamline the ideation and pre-production process, allowing them to focus on their creative vision and ultimately reduce production timelines.

And, for content creators and everyday users, Viggle AI can generate high-quality animations using simple prompts to create engaging animated character videos within a matter of minutes. 

Easier. Faster. Cheaper. Viggle AI is a truly transformative product that will unlock new value for consumers and professionals alike.

Here are a couple of fun examples of Viggle AI in action – I was terrible at dancing, but now I can do it!

Since launching in March, Viggle AI has taken the internet by storm and now boasts over 4 million users. When the startup first landed on our radar, it had only a few thousand users. This rapid growth is a testament not only to Viggle AI’s ability to create an engaging product but also to Two Small Fish’s ability to spot tech giants in the making.

Two Small Fish has an unparalleled track record of helping create the future of content through technology. After all, the team built Wattpad from a simple app for fiction into a massive global entertainment powerhouse with 100 million users. Seeing the future is our superpower. We’re the best investors to help future tech giants like Viggle AI as they transform how content is created, remixed, customized, consumed, and interacted with. We’re excited to continue to play a role in reinventing content creation and entertainment. 

Congratulations Hang Chu and the entire Viggle AI team! 

WEBTOON IPO

I haven’t been involved with Wattpad for a while now, so it’s a strange feeling—though not in a bad way—to catch up on all the details about WEBTOON and Wattpad in the SEC filing. From what I’ve gathered, WEBTOON is performing exceptionally well, with revenue now surpassing $1 billion.

Three years ago, one of the main reasons I was drawn to Naver WEBTOON among all the suitors was Naver’s intention to spin out WEBTOON, together with Wattpad, as a separate, entertainment-focused, NASDAQ-listed company. This was a significant undertaking with numerous challenges, and the WEBTOON team is delivering on the promise. I’m pleased to see that Wattpad is playing a crucial role in this upcoming IPO.

The timing has turned out to be ideal for both WEBTOON and myself personally. With the rise of generative AI, the media industry is undergoing a new wave of massive disruption. It’s exciting to see WEBTOON raising more capital to seize this opportunity. From a distance, I wish the WEBTOON team all the best!

At Two Small Fish Ventures, we’re equally excited as we witness many incredible AI-native media startups and are actively investing in several amazing ones. I’ll share more about this in future posts.

This is a once-in-a-decade, platform-shift opportunity. It is arguably the biggest platform shift in the past century! TSF is actively investing in the next frontier of computing and its applications as a lead investor or as part of a syndicate. If you’re a founder of an early-stage AI-native company—media or not—don’t hesitate to reach out. TSF is a rare investor who understands this space extremely well, and possibly the best investor with real-world operating experience to help you achieve massive success, as Wattpad did.

The depressing numbers of the venture-capital slump don’t tell the full story

Thank you to The Globe for publishing my second op-ed in as many weeks: The depressing numbers of the venture-capital slump don’t tell the full story.

The piece is now available in full here:

Bright spots in the current venture capital landscape exist. You just need to know where to look.

Recent reports are right. Amid high interest rates, venture capitalists have a shrinking pool of cash to dole out to hopeful startups, making it more challenging for those companies to raise funding. In the United States, for example, startup investors handed out US$170.6-billion in 2023, a decrease of nearly 30 per cent from the year before.

But the headline numbers don’t tell the whole story.

There’s a night-and-day difference between raising funds for game-changing, deep-technology startups that specialize in artificial intelligence and related fields, such as semiconductors, and raising funds for those trying to innovate with what’s referred to as shallow tech.

Remember the late 2000s? Apple’s App Store wasn’t groundbreaking in terms of technical innovation, but it nonetheless deserves praise because it revolutionized the smartphone. Back then, the App Store’s charts were dominated by simplistic applications, from infamous fart apps to iBeer, the app that let you pretend you were drinking from your iPhone.

That’s the difference – between those building game-changing tools and those whose products are simply trying to ride the wave.

Tons of startups are pitching themselves as AI or deep-tech companies, but few actually are. This is why many are having trouble raising funds in the current climate.

It’s also why the era of shallow tech is over, and why deep-tech innovations will reshape our world from here on out.

Toronto-based Ideogram, a deep-tech startup, was the first in the industry to integrate text and typography into AI-generated images. (Disclosure: This is a company that is part of my Two Small Fish Ventures portfolio. But I’m not mentioning it just because I have a stake in it. The company’s track record speaks for itself.)

Barely one year old, the startup has fostered a community of more than seven million creators who have generated more than 600 million images. It went on to close a substantial US$80-million Series A funding round.

As a comparison, Wattpad, the company I founded, which later sold for US$660-million, had raised roughly US$120-million in total. Wattpad’s Series A in 2011, five years after inception, was US$3.5-million.

The speed at which Ideogram achieved so much in such a short period of time is eye-popping.

The “platform shifts” over recent decades have largely played out in the same way. From the personal-computer revolution in the late 20th century to the widespread adoption of the internet and cloud computing in the 2000s, and then the mobile era in the 2010s, there’s a clear pattern.

Each shift unleashed a wave of innovation to create new opportunities and fundamentally reshape user behaviour, democratize access and unlock tremendous value. These shifts benefited the billions of internet users and related businesses, but they also paved the way for “shallow tech.”

The late 2000s marked the beginning of a trend where ease of creation and user experience overshadowed the depth of innovation.

When Instagram launched, it was a straightforward photo-sharing app with just a few attractive filters. Over time, driven by the massive amounts of data it collected, it evolved into one of the leading social media platforms.

This time is different. The AI platform shift makes it harder for simplistic, shallow-tech startups to succeed. Gone are the days of building a minimum viable product, accumulating vast amounts of data and then establishing a defensible market position.

We’re entering the golden age of deep-tech innovation, and in order to be successful, startups have to embrace the latest platform shift – AI. And this doesn’t happen by tacking on “AI” to a startup’s name the way many companies did with the “mobile-first” rebrand of the 2010s.

In this new era, technological depth is not just a competitive advantage but also a fundamental pillar for building successful companies that have the potential to redefine our world.

For example, OpenAI and Canada’s very own Cohere are truly game-changing AI companies that have far more technical depth than startups from the previous generation. They’ve received massive funding partly because the development of these kinds of products is very capital-intensive but also because their game-changing approach will revolutionize how we live, work and play.

Companies like these are the bright spots in an otherwise gloomy venture-capital landscape.

P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.

Canada risks losing out on the GREATEST prize: ownership of industry-disrupting companies and technologies

Thank you to The Globe for publishing my op-ed about the recent capital gains tax increase last week. The piece is now available here.

Once again, to summarize, as the world shifts to intangible assets, the consequences go far beyond brain drain and job loss. We risk losing out on the GREATEST prize: ownership of industry-disrupting, IP-based companies and technologies. This aspect, often overlooked, is illustrated with real-world numbers.

Not having significant ownership of these assets in the information age is equivalent to not having electricity and oil in the industrial age. This would have a devastating and long-term impact on our economy and reputation on the world stage. Canada would be left behind with digital breadcrumbs, selling our next generation short.

The policy change clearly didn’t take this into consideration. Saying that it impacts only 0.13% of the population is wrong on many fronts. It is abundantly clear that it will impact EVERYONE.

Don’t forget to tell them.

Here is the full copy of my op-ed:

The Liberal government is increasing taxes on investment. Anyone experienced in entrepreneurship and investment knows this will stifle growth. We are at tremendous risk of losing our brightest entrepreneurs – along with the high-skilled jobs they create – to other countries.

This is evidenced by a new survey conducted after the capital-gains tax changes: Just 5.3 per cent of Canadian founders believe Canada is the best place to grow a company.

As the world shifts to intangible assets, the consequences go beyond brain drain and job loss. We will lose out on the greatest prize of the innovation economy: ownership of industry-disrupting companies and technologies. This would have a devastating and long-term impact on our economy and reputation on the world stage.

I will admit that this latest change to taxation has an immaterial impact on me personally. Wattpad, the company I co-founded, was acquired by Korean internet giant Naver for $840-million in 2021 so I’ve already paid my dues as stipulated under the budget at the time. But my experience illustrates how this tax change is detrimental to Canada and future generations.

Because I raised most of the capital from outside of Canada, only half of the company was owned by Canadians, including founders, employees and investors. In other words, when Wattpad was acquired, $420-million of the economic value left our country.

Before the tax hike, it was reported that when our tech startups become scaleups, about 75 cents out of every invested dollar comes from outside of Canada. This means many of these fast-growing companies are already majority-owned by foreigners.

As a venture capitalist, I see this trend play out all the time. The firm I co-founded, Two Small Fish Ventures, has a portfolio of 50 early-stage tech companies. We are the only Canadian investor in many of our recent investments. Foreign investors, especially U.S. investors, are aggressively writing cheques to own a significant portion of these early promising Canadian startups when they are relatively inexpensive.

The tax increase will only exacerbate this problem.

When a company’s assets are purely intangible, and its biggest investors and markets exist outside Canada, it’s natural and far easier for the company to move outside Canada or be acquired by foreigners, as Wattpad was. Needless to say, the economic value created post-acquisition is also captured outside of Canada.

One might argue that these companies create many jobs in Canada, so we still captured some value, right? Well, again, when a company’s assets are mostly intangible, the majority of the economic value created is captured by its IP, not the jobs created. As an example, Wattpad’s payroll was about $30-million per year – not small, but minuscule compared to the nearly billion dollars the company was valued at.

There’s also a tectonic shift under way across the innovation economy. The rise of AI and related fields such as semiconductors is an order of magnitude more capital-intensive than previous generations of tech companies. Canada has produced some of the best AI researchers in the world, but when 40 of the companies on Forbes’ 2024 AI 50 list are in the U.S. (more than 30 of them in Silicon Valley) while only two are in Canada, we could have and should have owned a much bigger piece of the pie.

The best example is OpenAI, which was co-founded by Ilya Sutskever, a Canadian. The company is based in San Francisco. The majority of its employees are not in Canada. All the major investors are U.S.-based. Canada only has the bragging rights.

And, do I have to remind everyone that Elon Musk is also Canadian?

In the post-pandemic world, capital and talent are more mobile than ever, and the pull to move to other countries is stronger than ever. Canada risks becoming merely the training ground, while other countries capture the value these companies create.

I want Canada to win. I really do. What motivates me now as an investor is to help create more homegrown Canadian tech giants – and to keep them in Canada. My job just got much harder.

Higher taxes mean less capital, reduced investment, diminished ownership and fewer economic benefits. Period.

At a time when we need more capital to own a meaningful piece of the IP-based economy, our country is going backward. As the economy increasingly shifts toward intangible assets, we will be left behind with digital bread crumbs, selling our next generation short.

Software Once Ate the World Alone; Now, Software and Hardware Consume the Universe Together

Over a decade ago, in his blog post titled “Why Software is Eating the World,” Marc Andreessen explained why software was transforming industries across the globe. Software would no longer be confined to the tech sector but would permeate every aspect of our lives, disrupting traditional businesses, creating new opportunities, driving innovation and reshaping the competitive landscape. Overall, the post underscores the profound impact of software on the economy and society at large.

While the prediction in his blog post was mostly accurate, the world today is still only partially eaten by software. Although there are many opportunities for software alone to completely transform user behaviour, upend workflows, or cause other disruptions, the low-hanging fruit has mostly been picked. That’s why I said the days of shallow tech are behind us now.

Moving forward, there will increasingly be opportunities that require hardware and software to be designed and developed together from the get-go, so they can work harmoniously and make an impact that otherwise would not be possible. The best example people can relate to today is Tesla. For those who have driven a Tesla, I trust many would testify that its software and hardware work really well together. Yes, its self-driving software might be buggy. Yes, the build quality of its hardware might not be the best. However, with many features on its cars – from charging to navigation to even warming up the car remotely – you can just tell that Tesla is not shoehorning its software and app into its hardware, or vice versa.

On the other hand, on many cars from other manufacturers, you can tell their software and hardware teams are separated by the Grand Canyon and perhaps only seriously talk to each other weeks before the car is launched 🙂

We also witness the same thing down at the silicon level. From building the next AI chip to the industrial AI revolution to space tech, software and hardware convergence is happening everywhere. For instance, the high energy consumption of LLMs is partly because the software “works around” hardware that was not designed with AI in mind in the first place. Changes are already underway to ensure that software and hardware dance together. There is a reason why large tech players like OpenAI and Google are planning to make their own chips.

We are in the midst of a once-in-a-decade “platform shift” because of generative AI. In the last platform shift more than a decade ago, when the confluence of mobile and cloud computing created a massive disruption, there was one “iPhone moment,” and then things progressed continuously. This time, new foundation models are launching at a breakneck pace, further accelerated by open source – so fast that we are now experiencing an iPhone moment every few weeks.

All of this happens when AI-native startups are an order of magnitude more capital-intensive than in the past cycle. At the same time, investors are also willing to write big cheques to these companies, but perhaps it is appropriate, given all the massive opportunities ahead of us.

Investing in this environment is both exciting and challenging, as assessing these new opportunities is drastically different from assessing the previous generation of software-only, shallow-tech startups.

The next few years are going to be wild.

The Right Type of Investors

Most of Two Small Fish Ventures’ portfolio companies are based in North America. However, we also invest globally, as we firmly believe that global companies can be built anywhere. To us, where founders and their teams sleep at night is irrelevant to their potential for greatness.

Consequently, we actively engage with many tech ecosystems, regardless of their size. A pervasive issue we’ve encountered across these ecosystems is the challenge entrepreneurs face in finding investors who provide not just capital but the right kind of support. This problem is more acute in less developed ecosystems, but even those that are more established are not exempt.

An investor from another ecosystem eloquently discussed this issue in an article. I couldn’t have said it better myself, so with her permission, I’m sharing her insights here, albeit anonymized to avoid casting any ecosystem in a negative light. After all, this challenge is universal:

There are plenty of rich people and “wantrepreneur” investors in our community, but most of them have made their fortune in real estate, finance, or other traditional sectors. They have great intentions, but unfortunately they do not have experience in investing in technology and innovation. Some of them take too much equity ownership. Some have conflicts of interest, pursuing their own agendas and pushing their founders to work on the products or customers that they want. Some are so risk-averse that they structure their startup investments as if they were personal loans. We have seen our startup founders take money from these investors, and it almost always ends in disaster.

What our community really needs are startup investors who have “been there and done that.” Otherwise, we will continue to be stuck in this vortex of the wrong investors investing in the wrong companies. We need investors who truly understand the startup founders’ blood, sweat and tears. Someone who knows how to be a guide and a coach. Someone who knows how to provide advice, connections, and funding only when the founder really needs it.

To achieve this goal, we need to invite investors from established ecosystems to teach local investors the best practices of venture investing. And we do believe these skills can be learned. The local investor community needs the knowledge and skills to make investment decisions that maximize the founders’ success, and therefore their own.

Investing in innovation significantly differs from other forms of investment. For instance, real estate investments have established methods to evaluate rental yields, and traditional businesses use EBITDA to estimate enterprise values. However, early-stage startups, particularly those disrupting the status quo, cannot be evaluated using these metrics because of their lack of yields or EBITDA, or even clear business models! 

Often, experienced investors from other sectors mistakenly apply the same approach when they invest in tech startups, leading to almost certain failure. This can result in many problems, such as a messy cap table, rendering the startup unfundable in future rounds and causing it to “die young” despite its potential. We’ve regrettably had to pass on numerous investment opportunities due to such issues.

As the quoted investor highlighted, learning the skills and best practices in tech investing is possible. Needless to say, the best way to do this is to learn from people who have “been there and done that.” It’s crucial to acknowledge that investing in tech startups – and innovations in general – is a different sport than other sectors. 

After all, bringing a tennis racket to a hockey game is a recipe for disaster.

VC is a Home Run Derby with Uncapped Runs

There’s an old saying that goes, “Know the rules of the game, and you’ll play better than anyone else.” Let’s take baseball as our example. Aiming for a home run often means accepting a higher number of strikeouts. Consider the legendary Babe Ruth: he was a leader in both home runs and strikeouts, a testament to the high-risk, high-reward strategy of swinging for the fences.

Yet, aiming solely for home runs isn’t always the best approach. After all, the game’s objective is to score the most runs, not just to hit the most home runs. Scoring involves hitting the ball, running the bases, and safely returning to home base. Sometimes, it’s more strategic to aim for a base hit, like a single, which offers a much higher chance of advancing runners on base and scoring.

The dynamics change entirely in a home run derby, where players have five minutes to hit as many home runs as possible. Here, only home runs count, so players focus on hitting just hard enough to clear the fence, rendering singles pointless.

Imagine if the derby rules also rewarded the home run’s distance, adding extra runs for every foot the ball travels beyond the fence. For context, the centre-field fence is typically about 400 feet from home plate. So a 420-foot home run, clearing the centre-field fence by 20 feet, would count as a 20-run homer. This rule would drastically alter players’ strategies. Not only would they swing for the fences with every at-bat, but they would also hit as hard as possible, aiming for the longest possible home runs to maximize their scores, even if it reduced their overall chances of hitting a home run.

This scenario mirrors early-stage venture capital, which I liken to a home run derby with uncapped runs. The potential upside of investments is enormous, offering returns of 100x, 1000x, or more, while the downside is limited to the initial investment. Unlike in a derby, where physical limits cap the maximum score, the VC world is truly without bounds, with numerous instances of investments yielding thousandfold returns.

This distinct dynamic makes assessing VCs fundamentally different from evaluating other asset classes, where protecting the downside is crucial. In the VC realm, the potential for nearly limitless returns makes losses inconsequential, provided VCs invest in early-stage companies with the potential for exponential growth. The risk-reward equation in venture capital is thus highly asymmetrical, favouring bold bets on moonshot startups.

For illustration, let’s consider two hypothetical venture capital firms: Moonshot Capital and PlayItSafe Capital.

Moonshot Capital approaches the game like a home run derby with uncapped runs. They aim for approximately 20 companies in their portfolio, expecting that around 20% will be their home runs—or “value drivers”—capable of generating returns from 10x to 100x or more. 

Imagine they invest $1 in each of 20 companies. One yields a 100x return, three bring in 10x, and the remaining are strikeouts. The outcome would be:

(1 x 100 + 3 x 10 + 16 x 0) x $1 = $130

Their $20 investment becomes $130 (or 6.5x), a gain of $110, despite 16 out of 20 companies being strikeouts. Yes, you read that correctly: 80% of the portfolio companies failed!

PlayItSafe Capital, on the other hand, prioritizes downside protection and avoids riskier bets. In the end, one company generates a 10x return, five companies return 3x, and the remainder is split equally between breakeven and failure.

(1 x 10 + 5 x 3 + 7 x 1 + 7 x 0) x $1 = $32

Despite several “successes” and very few “losses,” the fund’s $12 gain pales in comparison to Moonshot Capital’s $110. Even increasing the number of companies generating a 3x return to 10, with the rest breaking even and no losses at all (which is almost impossible for early-stage VCs), only yields a $29 gain on the same $20 invested:

(1 x 10 + 10 x 3 + 9 x 1) x $1 = $49

No one should invest in the early-stage VC asset class with the expectation of such a paltry return.
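The three hypothetical portfolios above can be checked with a short script. This is purely illustrative, using the post’s made-up numbers; `fund_return` is a helper name invented here, not anything from a real library.

```python
def fund_return(multiples):
    """Total fund outcome when $1 is invested in each company.

    `multiples` lists the exit multiple of every portfolio company:
    0 = strikeout (total loss), 1 = breakeven, 10 = a 10x home run, etc.
    Returns (dollars invested, dollars returned, overall multiple).
    """
    invested = len(multiples)
    returned = sum(multiples)
    return invested, returned, returned / invested

# Moonshot Capital: one 100x, three 10x, sixteen strikeouts
print(fund_return([100] + [10] * 3 + [0] * 16))         # (20, 130, 6.5)

# PlayItSafe Capital: one 10x, five 3x, seven breakeven, seven strikeouts
print(fund_return([10] + [3] * 5 + [1] * 7 + [0] * 7))  # (20, 32, 1.6)

# PlayItSafe's best case: one 10x, ten 3x, nine breakeven, no losses
print(fund_return([10] + [3] * 10 + [1] * 9))           # (20, 49, 2.45)
```

Even in its best case, the safe portfolio’s 2.45x is a fraction of Moonshot’s 6.5x: the single 100x outcome dominates everything else.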

As illustrated, success isn’t about minimizing failures, nor is it about the number of “3x” companies or even the number of “unicorn logos” in the portfolio; how early the investment was made in those unicorns matters just as much. One needs to invest in a unicorn while it is still a baby unicorn, not after it has become one.

In summary:

Venture funds live or die by one thing: the percentage of the portfolio that becomes “value drivers” – companies capable of generating returns of 10x, 100x, or even 1000x.

At Two Small Fish Ventures, we are the IRL version of Moonshot Capital. Every investment is made with the belief that $1 could turn into $100. We know that, in the end, only about 20% of our portfolio will become significant value drivers. Yet we make each investment truly believing that the company has the potential to become a world-class giant and category creator.

This is what venture capital is all about: not only is it exhilarating to be at the forefront of technology, but it’s also a great way to generate wealth and, more importantly, play a role in supporting moonshots that have a chance to change how the world operates.

P.S. This is Part 1 of this series. You can read Part 2, “Winning the Home Run Derby with Proper Portfolio Construction” here.

Assessing Different Asset Classes

Diversifying a portfolio across various asset classes is the first principle for enhancing returns without significantly increasing risk from an investment standpoint. Traditionally, the go-to formula has been a 60/40 split—60% in stocks and 40% in bonds, a practice primarily due to the limited accessibility of alternative asset classes. However, recent years have seen a democratization of access to a wider array of asset classes, including private equity, venture capital and numerous alternatives, opening doors for more investors to explore areas once reserved for the privileged few. This broadening of opportunities is undoubtedly beneficial to many.

Yet, it introduces a new challenge: How do we assess fund managers across different asset classes? This task can be daunting even for seasoned investment professionals, as investing encompasses a vast range of specialties. A common mistake is asking the wrong questions, because assessment criteria are not interchangeable across asset classes. Each asset class is a distinct sport: evaluating NBA players is not the same as evaluating MLB players. Inquiring about the batting average of an Olympic gold-medalist swimmer is as illogical as expecting an NBA MVP to be proficient with a baseball bat.

It’s also unwise to question a fish on its ability to skate!

This blog post is the first in a series designed to demystify this process. I do not claim expertise in all asset classes – no one can. However, I hope to share my experiences to help you sidestep common mistakes and empower you with the basics to evaluate investment opportunities in unfamiliar territories, especially early-stage venture capital, which is my swim lane and one that relatively few people have the experience to assess. Please note that this blog post does not constitute investment advice or a comprehensive guide to all asset classes, as we cover only a handful for illustration purposes.

Here is a chart that highlights the key differences:

How should you interpret this chart? Let me use early-stage venture capital, or simply VC, as an example.

Assessing VC is more art than science and more qualitative than quantitative. It offers far higher return potential than almost any other asset class. On the other hand, the risk of losing money is also higher than in other asset classes, with the predictability of the potential target return being low and its variance high.

Individual investments within a fund portfolio have a very high failure rate, even for the best funds. This is by design because VC is a home run derby. Strikeouts, singles, or doubles don’t impact the return at all, as only the home runs count. This is unique to VC and counterintuitive to managers from other asset classes.

The dispersion among fund managers is also much higher, as top-decile funds generate significantly better returns than the rest. Vintages also have a far more significant influence, as market downturns have an outsized impact on fund returns, even for the best funds. However, the best funds still generate very good returns during bad years; they simply generate enormous returns during the good ones!

VC takes a decade or more to generate returns. The first few years usually have little to show because it takes a few years to find the startups to invest in, and those companies take time to grow and realize gains. Because of this, VC funds are usually illiquid.

On the other extreme, fixed income is more science than art. It is number-driven, much more predictable, and has lower returns, but any default is a cardinal sin!

Each row on the chart deserves a separate blog post. Stay tuned for subsequent posts in this series, where we’ll dive deeper into these topics.

Story Protocol

Earlier today, TSFV announced our latest investment: Story Protocol. In short, Story Protocol is “Git for creative IP.” We backed the founders in late 2022, when the company was operating in stealth mode. Now, we’re committing additional funding in Story Protocol’s latest round, led by Andreessen Horowitz. So far, the company has raised over US$54 million in funding.

As part of the founding team of Wattpad – the world’s largest storytelling platform – the Two Small Fish Ventures team is especially excited about Story Protocol and what it means for creators and the industry as a whole. The internet is a co-creation and remixing machine, and this trend will be supercharged by generative AI. Story Protocol is building the core infrastructure for this era.

On a more personal note, I am also super excited to work with its co-founder Seung-yoon Lee. We know S.Y. Lee well from our Wattpad days, as he was the co-founder and CEO of Radish, a direct Wattpad competitor. Although we were once competitors, we’re now partners ready to usher in a new era for IP together. 

Please read Brandon’s blog post for more details.

Ideogram.ai

Last week, Two Small Fish Ventures announced our most recent investment in Ideogram AI, a Toronto-based generative AI company. The company was founded by former Google Brain researchers and launched with $16.5 million USD ($22.3 million CAD) in seed funding led by Andreessen Horowitz and Index Ventures. The round was actually closed at the beginning of 2023. We can finally talk about it now, as the company was in stealth.

The founders of Ideogram – Mohammad Norouzi, William Chan, Chitwan Saharia and Jonathan Ho – are renowned scientists who pioneered research in generative AI text-to-image systems. They’re also “the brains” behind Google Brain’s Imagen (pun fully intended).

Ideogram’s new product is transformative as it has successfully addressed an issue that has plagued many popular AI image generators to date: producing reliable text in varying colours, fonts, sizes, and styles within an image – be it lettering on signs or company logos – with just a few clicks, words, or taps.

For instance, when I typed:

“Cartoonish happy animals with a big sign that says ‘animal kingdom’, vibrant, graffiti, typography.”

The result was excellent. I can already imagine so many new use cases that weren’t possible before.

Check it out on Ideogram.ai.