That is like saying humans need to sleep, eat, and drink. True, yes. Useful, no.
The more important question is: what does “high quality” actually mean in venture?
For us, it means the potential to become uniquely world class.
A company that can become the clear winner in an important category. A company with real moats. A company that can build something enormous.
This is why we spend so much time trying to understand what is truly unique about a company. Not what is interesting. Not what demos well. Not what sounds differentiated in a pitch deck. Not what the ARR is today. What is actually hard to replicate? What gets stronger over time? What creates a widening gap versus everyone else? And the only way to know is to spend time with these deep tech founders and really understand how the technology, the product, and the company work.
I have written a long blog post on this topic on Two Small Fish’s website. Here is the link.
We are super excited to share that Two Small Fish led YScope’s US$3.9 million financing, with Snow Angels (the Snowflake alumni investment syndicate), Next Wave NYC, UTEST, and other successful founders participating.
YScope was cofounded by University of Toronto Professor Ding Yuan, who is also CEO, Professor Michael Stumm, Dr. Kirk Rodrigues, Dr. David Lion, Yu (Jack) Luo, and Beverly Xu (Guangji Xu). It is a deeply impressive team building open-source logging infrastructure for the AI era, combining deep systems research with real-world production traction.
Its core technology, CLP (Compressed Log Processor), makes log storage, search, and analytics dramatically more efficient for both humans and AI, across cloud and edge environments.
We believe this is a massive opportunity. As the cost of intelligence collapses, AI agents, robots, autonomous vehicles, and other intelligent systems will generate orders of magnitude more machine-generated events. A robotic finger moves. A self-driving car makes a slight turn. An AI agent retries a task. Each action creates an event, and the infrastructure layer that can handle that explosion efficiently will matter enormously.
YScope is also a strong mutual fit for TSF. We invest in the next frontier of computing and its applications, and we know firsthand how painful logging becomes at scale. I have spent more time staring at logs than I will ever get back. At Wattpad, logging every tap, swipe, and click could easily add up to billions of events a day. That is why YScope’s traction is so compelling, from powering Uber’s production logging platform to operating across more than 1.5 million connected electric vehicles and being used by Fortune 500 organizations.
Congrats to Ding, Michael, Kirk, David, Jack, Beverly, and the entire YScope team. Full blog post here.
Today is Pi Day, and it feels like a good excuse to reflect on an old friend.
Most people say goodbye to our friend π after school. I’ve been lucky enough to stay in touch. The relationship has evolved over the years, from a childhood friendship in math class to something that followed me into engineering school and later into my work. It is a good reminder that the academic foundations we build early do not stay behind. They continue to shape how we see the world and how we build what comes next.
At Two Small Fish, the next frontier of computing is our investment thesis. We see it taking shape across five areas: Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy. For Pi Day, I thought it would be fun to pick one equation I learned along the way for each of these five areas, and reflect on how it still connects to the technologies shaping this next frontier.
For Vertical AI Platforms, I think of the Gaussian distribution: f(x) = 1/(σ√(2π)) · e^(-(x-μ)²/(2σ²)), which is foundational in probability and statistics. Even as AI becomes more vertical and more embedded in real workflows, it still rests on probability, statistics, and uncertainty. π is there too.
In Physical AI, the equation I think of is ω = 2πf, which defines angular frequency. I studied control systems and, one summer during my junior year at university, wrote software to control a robotic arm. That was an early lesson that once software meets motion, π becomes part of how the physical world is described.
In AI Infrastructure, I think of the Fourier transform: X(f) = ∫ x(t)e^(-j2πft) dt. I studied signal processing, my bachelor’s thesis was in image processing, and my master’s thesis was on noisy CDMA wireless networks. That math shaped how I thought about signals, images, noise, and communication then, and Fourier shows up in modern LLMs now.
In Advanced Computing Hardware, my equation is ℏ = h/2π. I studied optics in communications, which included a lot of quantum mechanics, so Planck’s constant was part of the vocabulary of the field. What stayed with me is that π shows up at the quantum level as part of the structure, not just the math.
In Smart Energy, the equation I would use is Xₗ = 2πfL, which calculates inductive reactance. It is a simple reminder that in AC systems, frequency directly shapes behaviour. As energy systems become smarter and more dynamic, π remains embedded in the physics underneath.
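Just for fun, here is a small Python sketch that evaluates each of the five formulas above. The input values are made-up illustrative numbers of my own choosing, nothing more; the point is simply to show π quietly doing the same work in code that it does on paper.

```python
import numpy as np

# Illustrative values only -- toy numbers chosen for the example, not real data.
mu, sigma, x = 0.0, 1.0, 0.5
gaussian = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

f = 60.0                                   # frequency in Hz
omega = 2 * np.pi * f                      # angular frequency, rad/s

t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)         # a 5 Hz tone
spectrum = np.fft.rfft(signal)             # discrete cousin of the Fourier transform
peak_hz = np.abs(spectrum).argmax()        # lands on 5, as expected

h = 6.62607015e-34                         # Planck's constant, J*s
hbar = h / (2 * np.pi)                     # reduced Planck constant

L = 0.01                                   # inductance in henries
X_L = 2 * np.pi * f * L                    # inductive reactance, ohms

print(gaussian, omega, peak_hz, hbar, X_L)
```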
That may be why Pi Day still resonates with me. π is one of those rare constants that keeps reappearing across disciplines, from robotics to quantum mechanics, from signal processing to energy systems, and now across the next frontier of computing.
P.S. I also realized I missed mentioning my other friend Euler back on February 7. Next time!
OpenClaw, an AI agent that can operate a computer on your behalf, has taken the world by storm. Unless you have been living under a rock, you have probably either tried it already or at least wanted to find out what all the buzz is about.
Many, however, have failed to get past installation because it is so difficult. There is a reason why thousands of people lined up for help just to get OpenClaw installed on their machines. More importantly, using it without proper safeguards can create a real security risk.
From my perspective, three issues stand out in OpenClaw’s current form.
First, it is difficult to install, even for technical users. That matters more than many builders realize. A product does not become broadly useful simply because it is powerful. It becomes useful when people can actually get it running without friction or handholding.
Second, it can create a real security risk if not used properly. Tools that operate at the machine level can be compelling, but they also introduce a very different level of responsibility. Most users do not want to expose their full machine environment just to perform a simple task.
Third, it can become expensive quickly. Token bills can become material before users even realize it. A tool may look impressive in a demo, but if the economics do not work, adoption will eventually stall. In AI, performance matters, but efficiency matters just as much.
This is why, after looking at many options, I myself chose to use Crate from our portfolio company Gensee, and I believe it is by far the best way to try OpenClaw.
It addresses all three issues directly: one-click install in 60 seconds, a secure sandbox that only accesses what you explicitly allow, and deep expertise from Dr. Shengqi Zhu and award-winning operating systems expert Professor Yiying Zhang, whose work on agentic optimization and efficiency is exactly what makes this possible. That expertise is also why they have been able to make Crate completely free to use.
In other words, it makes OpenClaw easy, safe, and completely free.
There is also a bonus. Crate comes with Gensee’s proprietary AI search engine built in. That search engine ranked #1 on Source Bench for finding the highest-quality web sources.
Another bonus is that Crate comes pre-installed with a set of common, useful skills vetted by the Gensee team for safety, while still allowing users to install additional skills themselves. That makes it both easier to get started and more flexible over time.
A final bonus is flexible control. Users can create multiple instances, pause and resume them, take snapshots, and roll back at any time. That means full control without the usual complexity.
So Gensee Crate is not just an easier and safer way to use OpenClaw. It is also a better one, and that points to where this market is going. The first wave of a technology shows what is possible; the next wave makes it practical for mainstream users. AI agents are now entering that phase. To become part of everyday workflows, they need to be easy to use, safe by design, and efficient enough to be economically viable. That is where adoption happens.
And that is why Gensee Crate is the best way to try out OpenClaw and why it is worth paying attention to.
If you are curious about OpenClaw, try Gensee Crate here.
AI has a massive efficiency problem. It uses too much compute. It costs too much. It uses too much energy. And it is too slow.
Today, it can take a serious cluster of GPUs and a very non-trivial amount of electricity just to answer a simple question like “Can you summarize this document?” or “What should I reply to this email?” The machinery underneath is anything but.
This is why we invested in ByteShape. The company was co-founded by a world-class team out of the University of Toronto: Professor Andreas Moshovos [link]—whose group’s papers have amassed more than 10,000 citations—together with scientists Miloš Nikolić [link], Enrique Torres Sánchez [link], and Ali Hadi Zadeh [link], whose life’s work is making computation more efficient. Both Ali and Miloš were also postgraduate affiliates of the Vector Institute, and Miloš’s PhD research formed the foundation of ByteShape’s core technology—work that earned him recognition as an “ML and Systems Rising Star” by MLCommons last year.
They are building the kind of deep technology that changes the economics of AI deployment, then changes what products become possible.
Quantization, In Plain English
Many techniques underpin what ByteShape does. One of them jumped out: quantization.
Quantization is about using fewer bits to represent the numbers inside a model. Many models are trained with higher precision formats because it helps learning remain accurate. But AI inference often does not need that much precision everywhere. If you can safely represent weights and activations with fewer bits, you shrink memory use and speed up compute, often dramatically, while keeping outputs essentially the same.
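To make the idea concrete, here is a minimal, hypothetical sketch of the simplest version of this: post-hoc symmetric 8-bit quantization in Python. It is a textbook illustration only, not ByteShape’s method; as described below, ShapeLearn learns the right datatypes during training rather than rounding everything to a fixed 8-bit format after the fact.

```python
import numpy as np

# Deliberately naive sketch of symmetric 8-bit quantization (illustration only).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)  # fp32 "model weights"

scale = np.abs(weights).max() / 127.0                       # map the fp32 range onto int8
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale                  # what inference effectively sees

print("fp32 bytes:", weights.nbytes, "-> int8 bytes:", q.nbytes)       # 4x smaller
print("mean abs error:", float(np.abs(weights - dequantized).mean()))  # tiny vs. the weight scale
```

Even this naive version cuts memory by 4x with a rounding error far smaller than the weights themselves, which is the intuition behind why far more sophisticated, learned schemes can go much further.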
ByteShape’s approach, ShapeLearn, does this in a way that feels obvious in hindsight and very hard in practice. ShapeLearn adaptively taps into the AI training process to learn optimal datatypes for parameters and inputs. The result can be 7x faster training and 10x faster inference.
In layman’s terms, the idea is simple and powerful: fewer bits, less work, and smaller models, without sacrificing results. And all of it is done adaptively.
Then ByteShape takes it one step further. ShapeSqueeze is their lossless compression layer that applies per-value encoding to minimize off-chip data transfers, with up to 40% extra compression.
Put the two together, and you get something that really matters in the real world. ShapeLearn reduces what the model needs to store and compute. ShapeSqueeze reduces what the hardware needs to move around. Less compute and less data movement means faster AI, lower cost, and lower energy.
This is not limited to savings in cloud data centres. It is a step-function improvement in what can run locally, which means a step-function improvement in what products can exist. It opens the door to privacy-sensitive and offline workflows, on-device agents, and embedded intelligence in robots and machines where speed, power and thermals matter.
Why TSF invested
Two Small Fish Ventures is an early-stage deep tech venture capital firm investing globally in the next frontier of computing and its applications. We invest where foundational breakthroughs create the conditions for new category-defining companies, and we back founders at the earliest stages when the technology is ready for commercialization.
ByteShape fits that thesis perfectly. They are building a foundational efficiency layer for AI that can reshape performance and cost across cloud, enterprise, and edge deployments. And because all TSF partners are engineers with deep operating experience, we do not just evaluate the science. We lean into technology through commercialization, with hands-on support informed by having built and scaled companies ourselves.
With ByteShape, the future is models that run faster, use less energy, and fit on far smaller hardware, without sacrificing the quality that makes them worth using.
Today, writing software is no more difficult than pressing a button. You describe what you want. In a few minutes, not a mockup but a fully functional application is ready to use.
I can testify to this personally. In 15 minutes, using AI, I have “written” more software than I did in a full year when I was writing software professionally. Although my old skill is now obsolete, it is wonderful because I can build faster than I ever could. This is the best of times!
So yes, in a narrow sense, the old software opportunity is dead.
The writing has been on the wall for a while. Shallow tech software has been democratized and, in many cases, is not investable. Public markets have finally figured out that a new wave of software is coming. They just do not really know what it is yet, so they sell indiscriminately. Generic business and financial skills do not work during a paradigm shift because disruption does not show up neatly on a spreadsheet full of ARR, EBITDA, and CAGR. Those are the wrong questions to ask when the underlying rules are being rewritten.
At the same time, the early phase of a paradigm shift is often the best time to invest. The people who have new specific knowledge and the courage to build for an AI native world will have a clear edge and, if they are right, capture outsized returns.
Now here is the twist.
When the cost of X collapses, the world does not get less of the thing. It gets flooded with it. That is Jevons Paradox in action. Make something cheaper and easier, and overall demand goes up significantly, often faster than the drop in price. We have seen versions of this before as humanity adopted electricity, personal computers, the internet, and now intelligence.
So software is not dead. We are about to have 10x, 100x, maybe 1000x more software than we have today.
We have seen a similar movie in content. Thanks to the internet and mobile devices, as the cost of content creation and distribution dropped, the amount of content exploded. That created giants that seized the opportunity. Fun fact, I co-founded a business two decades ago on that thesis and rode that wave myself, so yes, I have been there and done that.
Back to software.
The question now is how to capture the opportunity when the world has 1000x more software and the cost of creating software is approaching zero. Inevitably, the business model shifts because we move to a different part of the price elasticity curve when software becomes abundant. When code becomes cheap, value migrates to what stays scarce.
Shallow tech, run-of-the-mill software companies, including a lot of AI wrappers, are generally not investable from a VC perspective because they are so easy to build, copy, and replace. I have been saying this for many years, even before ChatGPT came out. If you still need more evidence, you are already behind. The button is not coming. The button is here.
This does not mean these companies cannot make money. Some will. But “can generate cash when bootstrapping” and “can return a venture fund” are not the same statement.
In contrast, deep tech software is a fantastic opportunity. There is a reason TSF shifted to deep tech investments years ago. That was not an accident. When the cost curve of intelligence collapses, businesses whose primary moat is “we can write this software” or “we spent 100 engineer years building it” need a rethink.
This is why we are unapologetically investing in deep tech.
Deep tech software is a completely different sport. In many cases, the moat is not in the software. The moat is the unique technology embedded in the software, plus the data and the system it connects to. The software is the container. The defensibility sits underneath.
People often ask how to draw the line between deep tech software and everything else. We have a definition, and it is more true than ever in this “software is abundant” era. More importantly, making that call takes specialized skill. That is why deep tech investing is reserved for trained eyes, as it requires engineering judgment, product instinct, operating experience, and recognition of a market gap that comes from building and commercializing disruptive opportunities. We can do deep tech because we are equipped to do so. Been there. Done that.
To be clear, of course, I am not suggesting the only software opportunity is deep tech. There is also a massive opportunity in bespoke software and disposable software.
For decades, companies bought off-the-shelf software because that was the only option that made economic sense, even when the software was not a perfect fit for their workflow. You ended up customizing your workflow around the software. Bespoke-built software was too expensive, too slow, and too hard to maintain.
Now the economics are changing.
We can now build software for problems that were previously too small to matter economically. We can now create personal tools designed for an audience of one. We will ship internal workflows the way we send emails. We can now generate software that lives for a week, does its job, and disappears.
That is a massive opportunity. Much of it will look like a low-tech, large-scale service business. Some of it will become platforms and infrastructure for software generation itself. Some of it will become entirely new categories we do not have names for yet. Some of it will help make deep tech software even more defensible.
But the direction is clear. Software is becoming abundant, and the economics of software will be drastically different.
So, is software dead?
Yes, software as a scarce craft is dying.
Software-as-a-moat because “we spent 100-engineer-years building it” is dying.
But software as leverage is exploding. Software as the fabric of everything is exploding. The world is not losing software. The world is getting more of it than we can possibly imagine.
Back to the movie analogy. It is like the theatre business. The movie is not the only product. The experience is the product. The popcorn is the product. The atmosphere is the product. The movie is what gets you in the door.
We need new architectures to meet the speed, security, and energy demands of the next frontier of computing and its applications, which is the lens I used in The Factory Analogy.
Our portfolio company Applied Brain Research (ABR) just achieved a new milestone: ABR announced the successful closure of its oversubscribed seed funding round, including investment from TSF as a lead investor, with Eva Lau joining the board.
ABR created and patented a new type of AI model, called state space models, to make AI smaller, faster, and more energy efficient than transformer models. State space models deliver real-time voice and time series intelligence without the cloud, built for privacy and efficiency. ABR’s first chip, TSP1, delivers real-time, fully on-device voice AI without the cloud. Full vocabulary speech-to-text and text-to-speech are now possible at under 30mW.
At the edge, every millisecond and every milliwatt count.
For context:
30mW is one hundredth of the power of a 3W LED lightbulb.
A data-center GPU lives in a different universe: an NVIDIA H200 NVL draws up to 600W, roughly 20,000 times more.
Now connect that to the three constraints that define the edge:
Speed: for voice and interaction, half a second is half a second too late. Cloud voice is “a terrible experience,” plagued by delays.
Security: shipping voice data to the cloud bakes in privacy risk by default — which is why we keep coming back to intelligence that stays close to the user, as Brandon argued in his post In Favour of Intelligence That Stays Put. ABR calls out “privacy concerns” as a core issue with cloud voice.
Energy: edge devices are constrained by battery life and on-device resources. ABR’s on-device voice numbers move this from “interesting” to “deployable.”
This is why ABR enables numerous new use cases that weren’t viable before in categories like AR, robotics, wearables, medical devices, and automotive.
Imagine AR glasses (or other wearables) that respond to your command in real time without draining the battery. Imagine a robot that reacts with no hesitation. Imagine a medical device that can provide insight securely, without exporting sensitive data. Imagine a car that can respond to voice commands even when the network is unreliable. These are just a few examples. The list can go on and on.
Or as Eva put it in ABR’s announcement: sophisticated voice AI doesn’t require the cloud.
At the beginning of this year, I wrote an op-ed for The Globe about what many were already calling the AI bubble. Nearly a year later, almost all of what I said remains true. The piece was always meant to be a largely evergreen, long term view rather than a knee jerk reaction.
The only difference today is that the forces I described back then have only intensified.
We are in a market where Big Tech, venture capital, private equity, and the public markets are all pouring unprecedented capital into AI. But to understand what is actually happening, and how to invest intelligently, we need to separate noise from fundamentals. Here are the five key points:
Why Big Tech Is Going All In while Taking Minimal Risk.
The Demand Side Is Real and Growing.
Not All AI Investments Are Created Equal.
Picking Winners Matters.
Remember, Dot Com Was a Bubble. The Internet Was Not.
1. Why Big Tech Is Going All In while Taking Minimal Risk
The motivations of the large technology companies driving this wave are very different from those of startups and other investors.
For Big Tech, AI is existential. If they underinvest, they risk becoming the next Blockbuster. If they overinvest, they can afford the losses. In practice, they are buying trillions of dollars’ worth of call options, and very few players in the world can afford to do that.
The asymmetry is obvious. If I were their CEOs, I would do the same.
But being able to absorb risk does not mean they want to absorb all of it. This is why they are using creative financing structures to shift risk off their balance sheets while remaining all in. At the same time, they strengthen their ecosystems by keeping developers, enterprises, and consumers firmly inside their platforms.
This is not classical corporate investing. Their objective is not just profitability. It is long term dominance.
For everyone outside Big Tech, meaning most of us, understanding these incentives is essential. It helps you place your bets intelligently without becoming roadkill when Big Tech transfers risk into the ecosystem.
2. The Demand Side Is Real and Growing
AI usage is not slowing. It is accelerating.
The numbers do not lie. Almost every metric, including model inference, GPU utilization, developer adoption, enterprise pilot activity, and startup formation, is rising. You can validate this across numerous public datasets. Directionally, people are using AI more, not less. And unlike previous hype cycles, this wave has real usage, real dollars, and real infrastructure behind it.
Yes, there is froth. But there are also fundamentals.
3. Not All AI Investments Are Created Equal
A common mistake is treating AI investing as a single category.
It is not.
Investing in a public market, commoditized AI business is very different from investing in a frontier technology startup with a decade long horizon. The former may come with thin margins, weak moats, and hidden exposure to Big Tech’s risk shifting. The latter is where transformational returns come from if you know how to evaluate whether a company is truly world class, differentiated, and defensible.
Lumping all AI investments together is as nonsensical as treating all public stocks as the same.
4. Picking Winners Matters
In public markets, you can buy the S&P 500 and call it a day. But that index is not random. Someone selected those 500 winners for you.
In venture, picking winners matters even more. It is a power law business. Spray and pray does not work. Most startups will not survive, and only the strongest will break out, especially in an environment as competitive as today.
Thanks to AI, we are in the middle of a massive platform shift. Venture scale outcomes depend on understanding technology deeply enough to see a decade ahead and identify breakout successes before others do. Long term vision beats short term noise. Daily or quarterly fluctuations are simply noise to be ignored.
5. Dot Com Was a Bubble. The Internet Was Not.
The dot com era had dramatic overvaluation and a painful crash, but the underlying technology still reshaped the world. The problem was not the internet. It was timing, lack of infrastructure, and indiscriminate investing in ideas that were either too early or simply bad.
Looking back, the early internet lacked essential components such as high speed access, mobile connectivity, smartphones, and internet payments. Although some elements of the AI stack may still be evolving, many of the major building blocks, including commercialization, are already in place. AI does not suffer from the same foundational gaps the early internet did.
Calling this a bubble as a blanket statement misses the nuance. AI itself is not a bubble. With a decade long view, it is already reshaping almost every industry at an unprecedented pace. Corrections, consolidations, and failures are normal. The underlying technological shift is as real as the internet was in the 1990s.
There is speculation. There are frothy areas. And yet, there are many areas that are underfunded. That is where the opportunities are.
⸻
History shows that great venture funds invest through cycles. They invest in areas that will be transformative in the next decade, not the next quarter.
For us, the five areas we focus on, namely Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy, are the critical elements of AI. Beyond being our expertise, there is another important reason why these categories matter: bubble or not, they will thrive.
We are not investing in hype, nor in capital intensive businesses where capital is the only moat, nor in companies where technology defensibility is low. As long as we stay disciplined and visionary, and continue to back founders building a decade ahead, we will do well, bubble or not.
After all, there may be multiple macro cycles across a decade. Embrace the bubble.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Last year we invested in Axiomatic AI. Their mission is to bring verifiable and trustworthy AI into science and engineering, enabling innovation in areas where rigour and reliability are essential. At the core of this is Mission 10×30: achieving a tenfold improvement in scientific and engineering productivity by 2030.
The company was founded by top researchers and professors from MIT, the University of Toronto, and ICFO in Barcelona, bringing deep expertise in physics, computer science, and engineering.
Since our investment, the team has been heads down executing. Now they’ve shared their first public release: Axiomatic Operators.
What They’ve Released
Axiomatic Operators are MCP servers that run directly in your IDE, connecting with systems like Claude Code and Cursor. The suite includes:
AxEquationExplorer
AxModelFitter
AxPhotonicsPreview
AxDocumentParser
AxPlotToData
AxDocumentAnnotator
Why is this important?
Large Language Models (LLMs) excel at languages (as their name suggests) but struggle with logic. That’s why AI can write poetry but often has trouble with math — LLMs mainly rely on pattern matching rather than reasoning.
This is where Axiomatic steps in. Their approach combines advances in reinforcement learning, LLMs, and world models to create AI that is not just fluent but also capable of reasoning with the rigour required in science and engineering.
What’s Next
This first release marks an important step in turning their mission into practical, usable tools. In the coming weeks, the team will share more technical material — including white papers, demo videos, GitHub repositories, and case studies — while continuing to work closely with early access partners.
Find out more on GitHub, including demos, case studies, and everything else you need to make your work days less annoying and more productive: Axiomatic AI GitHub
We’re excited to see their progress. If you’re in science or engineering, we encourage you to give the Axiomatic Operators suite a try: Axiomatic AI.
In 1865, William Stanley Jevons, an English economist, observed a curious phenomenon: as steam engines in Britain became more efficient, coal use didn’t fall — it rose. Efficiency lowered the cost of using coal, which made it more attractive, and demand surged.
That insight became known as Jevons Paradox. To put it simply:
Technological change increases efficiency or productivity.
Efficiency gains lead to lower consumer prices for goods or services.
The reduced price creates a substantial increase in quantity demanded (because demand is highly elastic).
Instead of shrinking resource use, efficiency often accelerates it — and with it, broader societal change.
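To see why that last step follows, here is a toy constant-elasticity calculation in Python, using illustrative numbers rather than historical data: when the price elasticity of demand is greater than one, total resource use rises even though each unit of output needs less of the resource.

```python
# Toy constant-elasticity model of Jevons Paradox (illustrative numbers, not historical data).
efficiency_gain = 2.0                               # the engine gets twice as efficient
elasticity = 1.5                                    # demand is highly elastic (> 1)

price_factor = 1 / efficiency_gain                  # cost per unit of useful output halves
demand_factor = price_factor ** (-elasticity)       # output demanded rises ~2.8x
resource_factor = demand_factor / efficiency_gain   # fuel per unit of output falls, but...

print(f"price of output:    x{price_factor:.2f}")    # x0.50
print(f"output demanded:    x{demand_factor:.2f}")   # x2.83
print(f"total resource use: x{resource_factor:.2f}") # x1.41 -- up, not down
```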
Coal, Then Light
The paradox first appeared in coal: better engines, more coal consumed. Electricity followed a similar path. Consider lighting in Britain:
| Period | True price of lighting (per million lumen-hours, £2000) | Change vs. start | Per-capita consumption (thousand lumen-hours) | Change vs. start | Total consumption (billion lumen-hours) | Change vs. start |
| --- | --- | --- | --- | --- | --- | --- |
| 1800 | £8,000 | — | 1.1 | — | 18 | — |
| 1900 | £250 | ↓ ~30× | 255 | ↑ ~230× | 10,500 | ↑ ~500× |
| 2000 | £2.5 | ↓ ~3,000× (vs. 1800) / ↓ ~100× (vs. 1900) | 13,000 | ↑ ~13,000× (vs. 1800) / ↑ ~50× (vs. 1900) | 775,000 | ↑ ~40,000× (vs. 1800) / ↑ ~74× (vs. 1900) |
Over two centuries, the price of light fell 3,000×, while per-capita use rose 13,000× and total consumption rose 40,000×. A textbook case of Jevons Paradox — efficiency driving demand to entirely new levels.
Computing: From Millions to Pennies
This pattern carried into computing:
| Year | Cost per Gigaflop | Notes |
| --- | --- | --- |
| 1984 | $18.7 million (~$46M today) | Early supercomputing era |
| 2000 | $640 (~$956 today) | Mainstream affordability |
| 2017 | $0.03 | Virtually free compute |
That’s a decline of more than 99.9999%. What once required national budgets is now in your pocket.
Storage mirrored the same story: by 2018, 8 TB of hard drive storage cost under $200 — about $0.019 per GB, compared to thousands per GB in the mid-20th century.
Connectivity: Falling Costs, Rising Traffic
Connectivity followed suit:
| Year | Typical Speed & Cost per Mbps (U.S.) | Global Internet Traffic |
| --- | --- | --- |
| 2000 | Dial-up / early DSL (<1 Mbps); ~$1,200 | ~84 PB/month |
| 2010 | ~5 Mbps broadband; ~$25 | ~20,000 PB/month |
| 2023 | 100–940 Mbps common; ↓ ~60% since 2015 (real terms) | >150,000 PB/month |
(PB = petabytes)
As costs collapsed, demand exploded. Streaming, cloud services, social apps, mobile collaboration, IoT — all became possible because bandwidth was no longer scarce.
Intelligence: The New Frontier
Now the same dynamic is unfolding with intelligence:
| Year | Cost per Million Tokens | Notes |
| --- | --- | --- |
| 2021 | ~$60 | Early GPT-3 / GPT-4 era |
| 2023 | ~$0.40–$0.60 | GPT-3.5 scale models |
| 2024 | < $0.10 | GPT-4o and peers |
That’s a drop of more than two orders of magnitude in just a few years. Unsurprisingly, demand is surging — AI copilots in workflows, large-scale analytics in enterprises, and everyday generative tools for individuals.
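As a quick back-of-the-envelope check on the scale of these collapses, here is a small Python snippet that computes the fold-changes implied by the rounded figures in the tables above. Treat the exact outputs as approximations of approximations.

```python
import math

# Rough fold-changes implied by the rounded figures in the tables above.
drops = {
    "lighting, GBP per M lumen-hours (1800 -> 2000)": 8000 / 2.5,
    "compute, $ per gigaflop (1984 -> 2017)": 18_700_000 / 0.03,
    "bandwidth, $ per Mbps (2000 -> 2010)": 1200 / 25,
    "intelligence, $ per M tokens (2021 -> 2024)": 60 / 0.10,
}
for name, fold in drops.items():
    print(f"{name}: ~{fold:,.0f}x cheaper (~{math.log10(fold):.1f} orders of magnitude)")
```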
As we highlighted in our TSF Thesis 3.0, cheap intelligence doesn’t just optimize existing tasks. It reshapes behaviour at scale.
Why It Matters
The recurring pattern is clear:
Coal efficiency fueled the Industrial Revolution.
Affordable lighting built electrified cities.
Cheap compute and storage enabled the digital economy.
Low-cost bandwidth drove streaming and cloud collaboration.
Now cheap intelligence is reshaping how we live, work, and innovate.
As we highlighted in Thesis 3.0:
“Reflecting on the internet era… as ‘the cost of connectivity’ steadily declined, productivity and demand surged—creating a virtuous cycle of opportunities. The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity… Like connectivity in the internet era, ‘the cost of intelligence’ is now rapidly declining, while the value derived continues to surge, driving even greater demand.”
The lesson is simple: efficiency doesn’t just save costs — it reorders economies and societies. And that’s exactly what is happening now.
If you are building a deep tech early-stage startup in the next frontier of computing, this is a generational opportunity: both traditional businesses and entirely new sectors are being reshaped, and white-collar jobs and businesses, in particular, will not be the same. We would love to hear from you.
For nearly 70 years, the home electrical panel has looked the same. Meanwhile, the home itself is transforming: solar on the roof, batteries in the garage, heat pumps, EVs in the driveway, and smart appliances and devices everywhere.
And yet, the panel? Still the same. It is the last dumb box left, and FUTURi is fixing that with deep tech.
FUTURi’s Energy Processor
FUTURi Power, founded by Dr. Martin Ordonez (UBC Professor, Kaiser Chair at UBC, and recipient of the King Charles III Coronation Medal for leadership in clean energy innovation), reimagines the panel as the Energy Processor, a programmable energy computer that finally gives the home’s electrical system a brain. It is designed as a like-for-like replacement for the traditional panel that is future-proof and intelligently measures and coordinates loads, avoids peaks, and manages energy use at the edge.
Why This Matters
Homes are no longer passive energy consumers. They are dynamic nodes in the grid. By making the panel intelligent, FUTURi enables:
For homeowners: a 100% electric home without costly service upgrades, and a smarter, more resilient, more efficient energy ecosystem.
For utilities: flattened demand peaks and integrated demand response (DR) programs and distributed energy resources (DERs), deferring costly capital expenditures.
For builders and communities: Intelligent electrification helps accelerate the deployment of built infrastructure without overloading the grid.
This is why FUTURi and utilities are already collaborating on projects to evaluate how Energy Processors can strengthen the grid and benefit customers.
Our Perspective
As Dr. Martin Ordonez, Founder and CEO of FUTURi Power, puts it: “Panels used to be passive. The Energy Processor is active, safe, and software-defined. It gives homes and grids a common language.” At TSF, Smart Energy is one of our five focus areas. Our thesis is simple: the cost of intelligence is collapsing, and the biggest opportunities lie where software and hardware come together to reshape behaviour.
FUTURi is exactly that blueprint for intelligent electrification: deep-tech power electronics plus intelligent control. That combination turns a 70-year-old box into the brain of the modern home. Dr. Ordonez and his team are globally recognized experts in electrification who are translating decades of pioneering research into transformative commercial solutions.
And this is just the beginning. There is so much more the company can do to make electricity truly intelligent. FUTURi has a bright future ahead (pun fully intended).
The cost of intelligence is dropping at an unprecedented rate. Just as the drop in the cost of computing unlocked the PC era and the drop in the cost of connectivity enabled the internet era, falling costs today are driving explosive demand for AI adoption. That demand creates opportunity on the supply side too, in the infrastructure, energy, and technologies needed to support and scale this shift.
In our Thesis 3.0, we highlighted how this AI-driven platform shift will reshape behaviour at massive scale. But identifying the how also means knowing where to look.
Every era of technology has a set of areas where breakthroughs cluster, where infrastructure, capital, and talent converge to create the conditions for outsized returns. For the age of intelligent systems, we see five such areas, each distinct but deeply interconnected.
1. Vertical AI Platforms
After large language models, the next wave of value creation will come from Vertical AI Platforms that combine proprietary data, hard-to-replicate models, and orchestration layers designed for complex and large-scale needs.
Built on unique datasets, workflows, and algorithms that are difficult to imitate, these platforms create proprietary intelligence layers that are increasingly agentic. They can actively make decisions, initiate actions, and shape workflows. This makes them both defensible and transformative, even when part of the foundation rests on commodity models.
This shift from passive tools to active participants marks a profound change in how entire sectors operate.
2. Physical AI
The past two decades of digital transformation mostly played out behind screens. The next era brings AI into the physical world.
Physical AI spans autonomous devices, robotics, and AI-powered equipment that can perceive, act, and adapt in real environments. From warehouse automation to industrial robotics to autonomous mobility, this is where algorithms leave the lab and step into society.
We are still early in this curve. Just as industrial machinery transformed factories in the nineteenth century, Physical AI will reshape industries that rely on labour-intensive, precision-demanding, or hazardous work.
The companies that succeed will combine world-class AI models with robust hardware integration and build the trust that humans place in systems operating alongside them every day.
3. AI Infrastructure
Every transformative technology wave has required new infrastructure that is robust, reliable, and efficient. For AI, this means going beyond raw compute to ensure systems that are secure, safe, and trustworthy at scale.
We need security, safety, efficiency, and trustworthiness as first-class priorities. That means building the tools, frameworks, and protocols that make AI more energy efficient, explainable, and interoperable.
The infrastructure layer determines not only who can build AI, but who can trust it. And trust is ultimately what drives adoption.
4. Advanced Computing Hardware
Every computing revolution has been powered by a revolution in hardware. Just as the transistor enabled mainframes and the microprocessor ushered in personal computing, the next era will be defined by breakthroughs in semiconductors and specialized architectures.
From custom chips to new communication fabrics, hardware is what makes new classes of AI and computation possible, both in the cloud and on the edge. But it is not only about raw compute power. The winners will also tackle energy efficiency, latency, and connectivity, areas that become bottlenecks as models scale.
As Moore’s Law hits its limit, we are entering an age of architectural innovation with neuromorphic computing, photonics, quantum computing, and other advances. Much like the steam engine once unlocked new industries, these architectures will redefine what is computationally possible. This is deep tech meeting industrial adoption, and those who can scale it will capture immense value.
5. Smart Energy
Every technological leap has demanded a new energy paradigm. The electrification era was powered by the grid. Today, AI and computing are demanding unprecedented amounts of energy, and the grid as it exists cannot sustain this future.
This is why smart energy is not peripheral, but central. From new energy sources to intelligent distribution networks, the way we generate, store, and allocate energy is being reimagined. The idea of programmable energy, where supply and demand adapt dynamically using AI, will become as fundamental to the AI era as packet switching was to the internet.
Here, deep engineering meets societal need. Without resilient and efficient energy, AI progress stalls. With it, the future scales.
Shaping What Comes Next
The drop in the cost of intelligence is driving demand at a scale we have never seen before. That demand creates opportunity on the supply side too, in the platforms, hardware, energy, physical systems, and infrastructure that make this future possible.
The five areas — Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy — represent the biggest opportunities of this era. They are not isolated. They form an interconnected landscape where advances in one accelerate breakthroughs in the others.
We are domain experts in these five areas. The TSF team brings technical, product and commercialization expertise that helps founders build and scale in precisely these spaces. We are uniquely qualified to do so.
At Two Small Fish, this is the canvas for the next generation of 100x companies. We are excited to partner with the founders building in these areas globally, those who not only see the future, but are already shaping it.
A few years back, Eva met Dr. Scott Stornetta. Later, I did too. Alongside Dr. Stuart Haber, Scott is widely credited as the creator of blockchain. Blockchain is a technology built on a simple but radical idea at the time: decentralization. No single authority, no central point of control, just a trusted system everyone can rely on.
Now, these two scientists are teaming up again to start a new company, SureMark Digital. Their mission is to bring that same decentralized philosophy to identity and authenticity on the internet, enabling anyone to prove who they are, certify their work, and push back against deepfakes and impersonation. No middlemen. No central gatekeepers.
It took us about 3.141592654 seconds to get excited. We are now proud to be the co-lead investor in SureMark’s first institutional round.
At Two Small Fish, we love backing frontier tech that can reshape large-scale behaviour. SureMark checks every box.
Eva has written a deeper dive on what they are building and why it matters. You can read it here.
In the history of human civilization, there have been several distinct ages: the Agricultural Age, the Industrial Age, and the Information Age, which we are living in now.
Within each age, there are different eras, each marked by a drastic drop in the cost of a fundamental “atomic unit.” These cost collapses triggered enormous increases in demand and reshaped society by changing human behaviour at scale.
From the late 1970s to the 1990s, the invention of the personal computer drastically reduced the cost of computing [1]. A typical CPU in the early 1980s cost hundreds of dollars and ran at just a few MHz. By the 1990s, processors were orders of magnitude faster for roughly the same price, unlocking entirely new possibilities like spreadsheets and graphical user interfaces (GUIs).
Then, from the mid-1990s to the 2010s, came the next wave: the Internet. It brought a dramatic drop in the cost of connectivity [2]. Bandwidth, once prohibitively expensive, fell by several orders of magnitude — from over $1,200 per Mbps per month in the ’90s to less than a penny today. This enabled browsers, smartphones, social networks, e-commerce, and much of the modern digital economy.
From the mid-2010s to today, we’ve entered the era of AI. This wave has rapidly reduced the cost of intelligence [3]. Just two years ago, generating a million tokens using large language models cost over $100. Today, it’s under $1. This massive drop has enabled applications like facial recognition in photo apps, (mostly) self-driving cars, and — most notably — ChatGPT.
These three eras share more than just timing. They follow a strikingly similar pattern:
First, each era is defined by a core capability: computing, connectivity, and intelligence, respectively.
Second, each unfolds in two waves:
The initial wave brings a seemingly obvious application (though often only apparent in hindsight), such as spreadsheets, browsers, or facial recognition.
Then, typically a decade or so later, a magical invention emerges — one that radically expands access and shifts behaviour at scale. Think GUI (so we no longer needed to use a command line), the iPhone (leapfrogging flip phones), and now, ChatGPT.
Why does this pattern matter?
Because the second-wave inventions are the ones that lower the barrier to entry, democratize access, and reshape large-scale behaviour. The first wave opens the door; the second wave throws it wide open. It’s the amplifier that delivers exponential adoption.
We’ve seen this movie before. Twice already, over the past 50 years.
The cost of computing dropped, and it transformed business, productivity, and software.
Then the cost of connectivity dropped, and it revolutionized how people communicate, consume, and buy.
Now the cost of intelligence is collapsing, and the effects are unfolding even faster.
Each wave builds on the last. The Internet era was evolving faster than the PC era because the former leveraged the latter’s computing infrastructure. AI is moving even faster because it sits atop both computing and the Internet. Acceleration is not happening in isolation. It’s compounding.
If it feels like the pace of change is increasing, it’s because it is.
Just look at the numbers:
Windows took over 2 years to reach 1 million users.
Facebook got there in 10 months.
ChatGPT did it in 5 days.
These aren’t just vanity metrics — they reflect the power of each era’s cost collapse to accelerate mainstream adoption.
That’s why it’s no surprise — in fact, it’s crystal clear — that the current AI platform shift is more massive than any previous technological shift. It will create massive new economic value, shift wealth away from many incumbents, and open up extraordinary investment opportunities.
That’s why the succinct version of our thesis is:
We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
The race is already on. We can’t wait to invest in the next great thing in this new era of intelligence.
Super exciting times ahead indeed.
Footnotes
[1] Cost of Computing
In 1981, the Intel 8088 CPU (used in the first IBM PC) had a clock speed of 4.77 MHz and cost ~$125. By 1995, the Intel Pentium processor ran at 100+ MHz and cost around $250 — a ~20x speed gain at similar cost. Today’s chips are thousands of times faster, and on a per-operation basis, exponentially cheaper.
[2] Cost of Connectivity
In 1998, bandwidth cost over $1,200 per Mbps/month. By 2015, that figure dropped below $1. As of 2024, cloud bandwidth pricing can be less than $0.01 per GB — a near 100,000x drop over 25 years.
[3] Cost of Intelligence
In 2022, generating 1 million tokens via OpenAI’s GPT-3.5 could cost $100+. In 2024, it costs under $1 using GPT-4o or Claude 3.5, with faster performance and higher accuracy — a 100x+ reduction in under two years.
Driven by rapid advances in AI, the collapse in the cost of intelligence has arrived—bringing massive disruption and generational opportunities.
Building on this platform shift, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.
Or, more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
Watch this 2-minute video to learn more about our approach:
Our Evolution: From Network Effects to Deep Tech
When we launched TSF in 2015, our initial thesis centred around network effects. Drawing from our experience scaling Wattpad from inception to 100 million users, we became experts in understanding and leveraging exponential value and defensibility created by network effects at scale. This expertise led us to invest—in most cases as the very first cheque—in massively successful companies such as BenchSci, Ada, Printify, and SkipTheDishes.
We achieved world-class success with this thesis, but like all good things, that opportunity diminished over time.
Our thesis evolved as the ground shifted toward the end of the 2010s. A couple of years ago, we articulated this evolution by focusing on early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new value. Within this broad focus, we zoomed in specifically on three sectors: AI, decentralized protocols, and semiconductors. That thesis guided investments in great companies such as Story, Ideogram, Zinite, and Blumind.
But the world doesn’t stand still. In fact, it has never changed so rapidly. This brings us to the next and even more significant shift shaping our thesis.
A New Platform Shift: The Cost of Intelligence is Collapsing
Reflecting on the internet era, the core lesson we learned was that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. At its foundation was connectivity, and as “the cost of connectivity” steadily declined, productivity and demand surged, creating a virtuous cycle of opportunities.
The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity, cross-domain functionality, and decision-making. Like connectivity in the internet era, “the cost of intelligence” is now rapidly declining, while the value derived from intelligence continues to surge, driving even greater demand.
This shift will create massive economic value, shifting wealth away from many incumbents and opening substantial investment opportunities. However, just like previous platform shifts, the greatest opportunities won’t come from digitizing or automating legacy workflows, but rather from completely reshaping workflows and user behaviour, democratizing access, and unlocking previously impossible value. These disruptive opportunities will expand into adjacent areas, leaving incumbents defenceless as the rules of the game fundamentally change.
Intelligence Beyond Traditional Computing Devices
AI’s influence now extends far beyond pre-programmed software on computing devices. Machines and hardware are becoming intelligent, leveraging collective learning to adapt in real-time, with minimal predefined instruction. As we’ve stated before, software alone once ate the world; now, software and hardware together consume the universe. The intersection of software and hardware is where many of the greatest opportunities lie.
As AI models shrink and hardware improves, complex tasks run locally and effectively at the edge. Your phone and other edge devices are rapidly becoming the new data centres, opening exciting new possibilities.
Democratization and a New Lens on Defensibility
The collapse in the cost of intelligence has democratized everything—including software development—further accelerated by open-source tools. While this democratization unlocks vast opportunities, competition also intensifies. It may be a land grab, but not all opportunities are created equal. The key is knowing which “land” to seize.
Historically, infrastructure initially attracts significant capital, as seen in the early internet boom. Over time, however, much of the economic value tends to shift from infrastructure to applications. Today, the AI infrastructure layer is becoming increasingly commoditized, while the application layer is heavily democratized. That said, there are still plenty of opportunities to be found in both layers—many of them truly transformative. So, where do we find defensible, high-value opportunities?
Our previous thesis identified transformative technologies that achieved mass adoption, changed behaviour, democratized access, and unlocked unprecedented value. This framework remains true and continues to guide our evaluation of “100x” opportunities.
This shift in defensibility brings us to where the next moat lies.
New Defensibility: Deep Tech Meets Data Network Effects
Defensibility has changed significantly. In recent years, the pool of highly defensible early-stage shallow tech opportunities has thinned considerably. As a result, we have clearly entered a golden age of deep tech. AI democratization provides capital-efficient access to tools that previously required massive budgets. Our sweet spot is identifying opportunities that remain difficult to build, ensuring they are not easily replicated.
As “full-spectrum specialists,” TSF is uniquely positioned for this new reality. All four TSF partners are engineers and former startup leaders before becoming investors, with hands-on experience spanning artificial intelligence, semiconductors, robotics, photonics, smart energy, blockchain and others. We are not just technical; we are also product people, having built and commercialized cutting-edge innovations ourselves. As a guiding principle, we only invest when our deep domain expertise can help startups scale effectively and rapidly cement their place as future industry-disrupting giants.
Moreover, while traditional network effects have diminished, AI has reinvigorated network effects, making them more potent in new ways. Combining deep tech defensibility with strong data-driven network effects is the new holy grail, and this is precisely our expertise.
What We Don’t Invest In
Although we primarily invest in “bits,” we will also invest in “bits and atoms,” but we won’t invest in “atoms only.” We also have a strong bias towards permissionless innovations, so we usually stay away from highly regulated or bureaucratic verticals with high inertia. Additionally, since one of our guiding principles is to invest only when we have domain expertise in the next frontier of computing, we won’t invest in companies whose core IP falls outside of our computing expertise. We also avoid regional companies, as we focus on backing founders who design for global scale from day one. We invest globally, and almost all our breakout successes such as Printify have users and customers around the world.
Where We’re Heading
Having recalibrated our thesis for this new era, here’s where we’re going next.
We have backed amazing deep tech founders pioneering AI, semiconductors, robotics, photonics, smart energy, and blockchain—companies like Fibra, Blumind, ABR, Axiomatic, Hepzibah, Story, Poppy, and Viggle—across consumer, enterprise, and industrial sectors. With the AI platform shift underway, many new and exciting investment opportunities have emerged.
The ground has shifted: the old playbook is out, the new playbook is in. It’s challenging, exciting, and we wouldn’t have it any other way.
To recap our core belief, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.
Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
So, if you’ve built interesting deep tech in the next frontier of computing, we invest globally and can help you turn it into a product. If you have a product, we can help you turn it into a massively successful business. If this sounds like you, reach out.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
A solo musician doesn’t need a conductor. Neither does a jazz trio.
But an orchestra? That’s a different story. You need a conductor to coordinate, to make sure all the parts come together.
Same with AI agents. One or two can operate fine on their own. But in a multi-agent setup, the real bottleneck is orchestration.
Yesterday, we announced our investment in GenseeAI. That’s the layer the company is building—the conductor for AI agents, i.e. the missing intelligent optimization layer for AI agents and workflows. Their first product, Cognify, takes AI workflows built with frameworks like LangChain or DSPy and intelligently rewrites them to be 10× faster, cheaper, and more reliable. It’s a bit like “compilation” for AI. Given a high-level workflow, Cognify produces a tuned, executable version optimized for production. Their second product, currently under development, goes one step further: a serving layer that continuously optimizes AI agents and workflows at runtime. Think of it as an intelligent “virtual machine” for AI, where the execution of agents and workflows is transparently and “automagically” improved while running.
If you’re building AI systems and want to go from prototype to production with confidence, get in touch with the GenseeAI team.
Read Brandon’s blog post here, or read on below for all the details:
At Two Small Fish, we invest in founders building foundational infrastructure for the AI-native world. We believe one of the most important – yet underdeveloped – layers of this stack is orchestration: how generative AI workflows are built, optimized, and deployed at scale.
Today, building a production-grade genAI app involves far more than calling an LLM. Developers must coordinate multiple steps – prompt chains, tool integrations, memory, RAG, agents – across a fragmented and fast-moving ecosystem and a variety of models. Optimizing this complexity for quality, speed, and cost is often a manual, lengthy process that businesses must navigate before a demo can become a product.
GenseeAI is building the missing optimization layer for AI agents and workflows in an intelligent way. Their first product, Cognify, takes AI workflows built with frameworks like LangChain or DSPy and intelligently rewrites them to be faster, cheaper, and better. It’s a bit like “compilation” for AI: given a high-level workflow, Cognify produces a tuned, executable version optimized for production.
Their second product–currently under development–goes one step further: a serving layer that continuously optimizes AI agents and workflows at runtime. Think of it as an intelligent “virtual machine” for AI: where the execution of agents and workflows is transparently and automatically improved while running.
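To make the “compilation for AI” analogy concrete, here is a minimal, illustrative sketch in Python. It is not Gensee’s actual API; the step names, model names, and the routing rule are my own assumptions. The idea is simply that a high-level workflow is treated as data, and an optimizer pass rewrites it into a cheaper execution plan before it runs.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    model: str                      # which model this step calls (hypothetical names)
    run: Callable[[str, str], str]  # (model, text) -> text

@dataclass
class Workflow:
    steps: List[Step] = field(default_factory=list)

    def execute(self, text: str) -> str:
        # Naive sequential execution of the high-level workflow.
        for step in self.steps:
            text = step.run(step.model, text)
        return text

def optimize(workflow: Workflow) -> Workflow:
    """A toy 'compiler' pass: route simple steps to a smaller, cheaper model.

    A real optimizer searches over prompts, models, and workflow structure
    to trade off quality, latency, and cost, as described in the post.
    """
    tuned = Workflow()
    for step in workflow.steps:
        model = "small-model" if step.name in {"summarize", "classify"} else step.model
        tuned.steps.append(Step(step.name, model, step.run))
    return tuned

# Usage sketch: build a two-step workflow, then "compile" it before deployment.
wf = Workflow([
    Step("summarize", "large-model", lambda m, t: f"[{m}] summary of: {t}"),
    Step("draft_reply", "large-model", lambda m, t: f"[{m}] reply to: {t}"),
])
print(optimize(wf).execute("a long customer email"))
```

In practice, the interesting part is the search: a real optimizer explores many such rewrites and keeps only the ones that preserve quality while cutting cost and latency.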
We believe GenseeAI is a critical unlock for AI’s next phase. Much of today’s genAI development is stuck in prototype purgatory – great demos that fall apart in the real world due to cost overruns, latency, and poor reliability. Gensee helps teams move from “it works” to “it works well, and at scale.”
What drew us to Gensee was not just the elegance of the idea, but the clarity and depth of its execution. The company is led by Yiying Zhang, a UC San Diego professor with a strong track record in systems infrastructure research, and Shengqi Zhu, an engineering leader who has built and scaled AI systems at Google. Together, they bring a rare blend of academic rigor and hands-on experience in deploying large-scale infrastructure. In early benchmarks, Cognify delivered up to 10× cost reductions and 2× quality improvements – all automatically. Their roadmap – including fully automated optimization, enterprise integrations, and a registry of reusable “optimization tricks” – shows ambition to become the default runtime for generative AI.
As the AI stack matures, we believe Gensee will become a foundational layer for organizations deploying intelligent systems. It’s the kind of infrastructure that quietly powers the AI apps we’ll all use – and we’re proud to support them on that journey. If you’re building AI systems and want to go from prototype to production with confidence, get in touch with the team at GenseeAI.
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Thank you to The Globe for publishing my op-ed about AI last week. In it, I draw parallels between the dot-com crash and the current AI boom—keeping in mind the old saying, “History doesn’t repeat itself, but it often rhymes.” The piece also explores how the atomic unit of this transformation is the ever-declining “cost of intelligence.” AI is the first technology in human history capable of learning, reasoning, creativity, cross-domain thinking, and decision-making. This fundamental shift will impact every sector, without exception, spurring the rise of new tech giants and inevitable casualties in the process. The key is knowing which land to grab!
The piece is now available below.
Over the past month, everyone I have spoken to has been talking about DeepSeek and Nvidia. Is Nvidia facing extinction? Have certain tech giants overspent on AI? Are we seeing a bubble about to burst, or just another public market overreaction? And what about traditional sectors, like industrials, that haven’t yet felt AI’s impact?
Let’s step back. We’ll revisit companies that soared or collapsed during the dot-com crash – and the lessons we can learn. As Mark Twain reputedly said, “History doesn’t repeat itself, but it often rhymes.”
The short answer is that reports of Nvidia’s demise are greatly exaggerated, though other companies face greater danger. At the same time, new opportunities are vast because this AI-driven shift could dwarf past tech disruptions.
Before 2000, the dot-com mania hit full speed. High-flying infrastructure players such as Global Crossing – once worth US$47-billion – provided backbone networks. Cisco delivered networking equipment, and Sun Microsystems built servers. However, amid the crash, Global Crossing went bankrupt in January, 2002. Cisco plummeted from more than US$500-billion in market cap to about US$100-billion. Sun Microsystems sank from a US$200-billion market cap to under US$10-billion.
They failed or shrank for different reasons. Global Crossing needed huge investments before real revenue arrived. Cisco had decent unit economics but lost pricing power when open networking standards commoditized its gear. Sun Microsystems suffered when cheaper hardware and free, open-source software (such as Linux and Apache) undercut it, and commodity hardware plus cloud computing made its servers irrelevant.
However, these companies did not decline because they were infrastructure providers. They declined because they failed to identify the right business model before their capital ran out or were disrupted by alternatives, including open or free systems, despite having the first-mover advantage.
Meanwhile, other infrastructure players thrived. Amazon, seen mostly as an e-commerce site, earned 70 per cent of its operating profit from Amazon Web Services – hosting startups and big players such as Netflix. AWS eliminated the need to buy hardware and continually cut prices, especially in its earlier years, catalyzing a new wave of businesses and ultimately driving demand while increasing AWS’s revenue.
In hindsight, the dot-com boom was real – it simply took time for usage to catch up to the hype. By the late 2000s, mobile, social and cloud surged. Internet-native giants (Netflix, Google, etc.) grew quickly with products that truly fit the medium. Early front-runners such as Yahoo! and eBay faded. Keep in mind that Facebook was founded in 2004, well after the crash, and Apple shifted from iPods to the revolutionary iPhone in 2007, which further catalyzed the internet explosion. A first-mover advantage might not always pay off.
The first lesson we learned is that open systems disrupt and commoditize infrastructure. Then, as we are seeing again today, an army of contributors built open systems for free, allowing them to out-innovate proprietary solutions.
Companies that compete directly against open systems – note that Nvidia does not – are particularly vulnerable at the infrastructure layer, where many open and free alternatives exist; those solely building LLMs without any applications are especially exposed. DeepSeek, for example, was inevitable – this is how technology evolves.
Open standards, open source and other open systems dramatically lower costs, reduce barriers to AI adoption and undermine incumbents’ pricing power by offering free, high-quality alternatives. This “creative destruction” drives technological progress.
In other words, OpenAI is in a vulnerable position, as it resembles the software side of Sun Microsystems – competing with free alternatives such as Linux. It also requires significant capital to build out, yet its infrastructure is rapidly becoming commoditized, much like Global Crossing’s situation. On the other hand, Nvidia has a strong portfolio of proprietary technologies with few commoditized alternatives, making its position relatively secure. Nvidia is not the new Sun Microsystems or Cisco.
Most importantly, the disruption and commoditization of infrastructure also democratize AI innovation. Until recently, starting an AI company often required raising millions – if not tens of millions – just to get off the ground. That is already changing, as numerous fast-growing companies have started and scaled with minimal initial capital. This is leading to an explosion of innovative startups and further accelerating the flywheel.
The next lesson we learned is that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. Its atomic unit is connectivity. During its rise, “the cost of connectivity” steadily declined, while productivity gains from increased connectivity continued to expand demand. The flywheel turned faster and faster, forming a virtuous cycle.
Similarly, AI is the first technology in human history capable of learning, reasoning, creativity, cross-domain functions and decision-making. Crucially, AI’s influence is no longer confined to preprogrammed software running on computing devices; it now extends into all types of machines. Hardware and software, combined with collective learning, enable autonomous cars and other systems like robots to adapt intelligently in real time with little or no predefined instructions.
These breakthroughs are reaching sectors scarcely touched by the internet revolution, including manufacturing and energy. This goes beyond simple digitization; we are entering an era of autonomous operations and, ultimately, autonomous businesses, allowing humans to focus on higher-value tasks.
As with connectivity costs in the internet era, in this AI era, “the cost of intelligence” has been steadily declining. Meanwhile, the value derived from increased intelligence continues to grow, driving further demand – this mirrors how the internet played out and is already happening again for AI. The parallels between these two platform shifts suggest that massive economic value will be created or shifted from incumbents, opening substantial investment opportunities across early-stage ventures, growth-stage private markets and public investments.
Just as the early internet boom heavily focused on infrastructure, a significant amount of capital has been invested in enabling AI technologies. However, over time, economic value shifts from infrastructure to applications – just as it did with the internet.
This doesn’t mean there are no opportunities in AI infrastructure – far from it. Remember, more than half of Amazon’s profits come from AWS. Services that provide access to AI, such as AWS, will continue to benefit as demand soars. Similarly, Nvidia will continue to benefit from the rising demand. However, many of today’s most-valuable companies – both public and private – are in the application layer or operate full-stack models.
Despite these advancements, this transformation won’t happen overnight, but it will likely unfold more quickly than the internet disruption – which took more than a decade – because many core technologies for rapid innovation are already in place.
AI revenues might appear modest today and don’t yet show up in the public markets. However, if we look closer, some AI-native startups are already growing at an unprecedented pace. The disruption isn’t a prediction; it’s already happening.
As Bill Gates once said, “Most people overestimate what they can achieve in one year and underestimate what they can achieve in ten years.”
The AI revolution is just beginning. The next decade will bring enormous opportunities – and a new wave of tech giants, alongside inevitable casualties.
It’s a land grab – you just need to know which land to seize!
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Fibra is developing smart underwear embedded with proprietary textile-based sensors for seamless, non-invasive monitoring of previously untapped vital biomarkers. Their innovative technology provides continuous, accurate health insights—all within the comfort of everyday clothing. Learning from user data, it then delivers personalized insights, helping women track, plan, and optimize their reproductive health with ease. This AI-driven approach enhances the precision and effectiveness of health monitoring, empowering users with actionable information tailored to their unique needs.
Fibra has already collected millions of data points with its product, further strengthening its AI capabilities and improving the accuracy of its health insights. While Fibra’s initial focus is female fertility tracking, its platform has the potential to expand into broader areas of women’s health, including pregnancy detection and monitoring, menopause, detection of STDs and cervical cancer, and more, fundamentally transforming how we monitor and understand our bodies.
Perfect Founder-Market Fit
Fibra was founded by Parnian Majd, an exceptional leader in biomedical innovation. She holds a Master of Engineering in Biomedical Engineering from the University of Toronto and a Bachelor’s degree in Biomedical Engineering from TMU. Her achievements have been widely recognized, including being an EY Women in Tech Award recipient, a Rogers Women Empowerment Award finalist for Innovation, and more.
We are thrilled to support Parnian and the Fibra team as they push the boundaries of AI-driven smart textiles and health monitoring. We are entering a golden age of deep-tech innovation and software-hardware convergence—a space we are excited to champion at Two Small Fish Ventures.
Stay tuned as Fibra advances its mission to empower women through cutting-edge health technology.
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
The Two Small Fish team is thrilled to announce our investment in Hepzibah AI, a new venture founded by Untether AI’s co-founders, serial entrepreneurs Martin Snelgrove and Raymond Chik, along with David Lynch and Taneem Ahmed. Their mission is to bring next-generation, energy-efficient AI inference technologies to market, transforming how AI compute is integrated into everything from consumer electronics to industrial systems. We are proud to be the lead investor in this round, and I will be joining as a board observer to support Hepzibah AI as they build the future of AI inference.
The Vision Behind Hepzibah AI
Hepzibah AI is built on the breakthrough energy-efficient AI inference compute architecture pioneered at Untether AI—but takes it even further. In addition to pushing performance per watt even harder, it can handle training workloads such as distillation, and it provides supercomputer-style networking on-chip. Their business model focuses on providing IP and core designs that chipmakers can incorporate into their system-on-chip designs. Rather than manufacturing AI chips themselves, Hepzibah AI will license its advanced AI inference IP for integration into a wide variety of devices and products.
Hepzibah AI’s tagline, “Extreme Full-stack AI: from models to metals,” perfectly encapsulates their vision. They are tackling AI from the highest levels of software optimization down to the most fundamental aspects of hardware architecture, ensuring that AI inference is not only more powerful but also dramatically more efficient.
Why does this matter? AI is rapidly becoming as indispensable as the CPU has been for the past few decades. Today, many modern chips, especially system-on-chip (SoC) devices, include a CPU or MCU core, and increasingly, those same chips will require AI capabilities to keep up with the growing demand for smarter, more efficient processing.
This approach allows Hepzibah AI to focus on programmability and adaptable hardware configurations, ensuring they stay ahead of the rapidly evolving AI landscape. By providing best-in-class AI inference IP, Hepzibah AI is in a prime position to capture this massive opportunity.
An Exceptional Founding Team
Martin Snelgrove and Raymond Chik are luminaries in this space—I’ve known them for decades. David Lynch and Taneem Ahmed also bring deep industry expertise, having spent years building and commercializing cutting-edge silicon and software products.
Their collective experience in this rapidly expanding, soon-to-be ubiquitous industry makes investing in Hepzibah AI a clear choice. We can’t wait to see what they accomplish next.
P.S. You may notice that the logo is a curled skunk. I’d like to highlight that the skunk’s eyes are zeros from the MNIST dataset. 🙂
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
It’s been almost three years since I stepped aside from my role as CEO of Wattpad, yet I’m still amazed by the reactions I get when I bump into people who have been part of the Wattpad story. The impact continues to surface frequently, in unexpected and inspiring ways.
Wattpad has always been a platform built on storytelling for all ages and genders. That being said, our core demographic—roughly 50% of our users—has been teenage girls. Young women have always played a pivotal role in the Wattpad community.
Next year, Wattpad will turn 20 (!)—a milestone that feels both surreal and deeply rewarding. When we started in 2006, we couldn’t have imagined the journey ahead. But one thing is certain: our early users have grown up, and many of them are now in their 20s and 30s, making their mark on the world in remarkable ways.
A perfect example: at our recent masterclass at the University of Toronto, I ran into Nour. A decade ago, she was pulling all-nighters reading on Wattpad. Today, she’s an Engineering Science student at the University of Toronto, specializing in machine intelligence. Her story is not unique. Over the years, I’ve met countless female Wattpad users who are now scientists, engineers, and entrepreneurs, building startups and pushing boundaries in STEM fields.
This is incredibly fulfilling. Many of them have told me that they looked up to Wattpad and our journey as a source of inspiration. The idea that something we built has played even a small role in shaping their ambitions is humbling.
Now, as an investor at Two Small Fish, I’m excited about the prospect of supporting these entrepreneurs in the next stage of their journey. Some of these Wattpad users will go on to build the next great startups, and it would be incredible to be part of their success, just as they were part of Wattpad’s.
On this International Women’s Day, I want to celebrate this unintended but, in hindsight, obvious outcome: a generation of young women who grew up on Wattpad are now stepping into leadership roles in tech and beyond. They are the next wave of innovators, creators, and entrepreneurs, and I can’t wait to see what they build next.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
In many companies, the bottleneck isn’t necessarily in the execution of decisions. The real bottleneck is the excessive time people waste making decisions.
When I was Wattpad’s CEO, everyone in the company knew I had a simple 2×2 framework to empower the whole team to make fast, high-quality decisions – all by themselves!
The essence of this framework comes down to two questions:
• Is this decision reversible?
• Is this decision consequential?
These two factors create four types of decisions:
1. Reversible and inconsequential
2. Irreversible and inconsequential
3. Reversible and consequential
4. Irreversible and consequential
Examples of Each Type
1. Reversible and Inconsequential
This actually makes up the bulk of decisions in a company:
• Internal Slack messages? Delete them if you don’t like them.
• Marketing team’s benign social media copy? Remove the post if it doesn’t work.
• Small typo like the one in the above image? Yes, I purposely left the typo there. I look sloppy, but I could silently replace it with a better one when I have time.
• Small bugs in the product? If a bug fix causes other problems, revert the changes.
The list goes on. The trick is to empower each person in the company to make these decisions independently. I reinforced the same message to the Wattpad team over and over again:
From the most junior interns to the most senior leaders—you’re empowered to make the call all by yourself.
No boss to ask. No approval process. Just do it!
The company moves fast when most decisions don’t require a meeting!
2. Irreversible and Inconsequential
Here’s an example:
At one point, we ran out of space at Wattpad’s Toronto HQ and needed overflow space. We found a small office—just a few hundred square feet with a couple of meeting rooms—in the building right next door. The location was perfect, but the space itself? Just okay.
The problem was the lease—it was relatively long. Once we signed, we couldn’t back out. That limited our flexibility (irreversible), but we knew that if we needed more room, we could always find another expansion space. The cost was small in the grand scheme of things (inconsequential).
Given our growth, there was little downside to signing the lease. So we moved fast, signed the deal, and moved on to the next item on the to-do list.
For this type of decision, you can still move fast. Just be careful—double-check the lease for any hidden “gotchas.” It’s not a question of whether we sign. We will sign; we just want to make sure the bases are covered before we do.
You’d be surprised how much time people waste on indecision. Just make the call and do the due diligence!
3. Reversible and Consequential
When done properly, product releases can be very consequential but still reversible. At Wattpad, we released high-risk software all the time—but always with a way to roll back if things didn’t work.
We knew how to press the undo button!
For these kinds of decisions, move fast and make the call—but monitor the outcome and always be ready to press undo.
Important: How to Increase the Quality of These Decisions
For both Irreversible and Inconsequential decisions and Reversible and Consequential decisions, always ask:
Is there any way to make this decision more reversible or less consequential?
If you can tweak the decision to minimize fallout—no matter how small—do it. It will save time and stress down the road.
4. Irreversible and Consequential
Many of these are leadership-team-level or CEO-level decisions.
They’re rare but also the hardest to make. They require a lot of context, consideration, and, sometimes, choosing between two bad options. Occasionally, you get lucky and get to choose among a few great options instead.
The ultimate example for me?
Whether to take the company public, maintain the status quo and keep going, or accept an acquisition offer.
Sometimes, knowing which quadrant a decision falls into is an art. But imagine if we didn’t have this framework—slow decision-making would have ground the company to a halt.
The key to moving fast isn’t just execution—it’s deciding fast, too.
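For readers who like to see the playbook written down, here is a tiny sketch of the whole framework as a lookup table. It is my own illustration of the advice above, not code we actually ran at Wattpad.

```python
# The 2x2 framework as a lookup table. Keys are (reversible, consequential);
# values paraphrase the guidance from each quadrant in this post.
PLAYBOOK = {
    (True,  False): "Type 1: just do it; no boss to ask, no approval process.",
    (False, False): "Type 2: make the call fast, but do the due diligence first.",
    (True,  True):  "Type 3: move fast, monitor the outcome, keep the undo button ready.",
    (False, True):  "Type 4: leadership-level call; gather lots of context before deciding.",
}

def recommend(reversible: bool, consequential: bool) -> str:
    return PLAYBOOK[(reversible, consequential)]

# Examples from the post: a Slack message, the overflow-office lease,
# a risky-but-rollbackable product release, and a decision to go public.
print(recommend(True, False))
print(recommend(False, False))
print(recommend(True, True))
print(recommend(False, True))
```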
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
In the past few days, Eva and I had the privilege of joining the University of Toronto delegation in Stockholm to celebrate University Professor Emeritus Geoffrey Hinton, the 2024 Nobel Laureate in Physics. The events, organized by the University, were a fitting tribute to Professor Hinton’s groundbreaking contributions to AI, a technology that will transform our world in the decades to come.
The celebration was a blend of thoughtful discussions, historic venues, and memorable moments. It all began with a birthday party for Professor Hinton, followed by a fireside chat, an inspiring dinner at the iconic Vasa Museum, and a panel exploring Canada’s leadership in AI at the Embassy of Canada to Sweden. Each event underscored not only Professor Hinton’s remarkable achievements but also the global impact of Canadian innovation in AI and technology more broadly.
Rather than recount every detail, I’ll let the pictures and their captions tell the story of this extraordinary week. It was an incredible opportunity for us to honour a visionary scientist.
Image: Eating birthday cake with University of Toronto President Meric Gertler and Melanie Woodin, Dean of the Faculty of Arts & Science at the University of Toronto.
Image: This was the chip that was built for Professor Hinton in the late 80s for him to test his artificial neural network.
Image: The chip was developed before sub-micron technology was widely available. Professor Hinton believes it might be 3-5 microns, but even he wasn’t 100% sure. Upon closer inspection, it appears there were six neurons on the grid.
Image: Taking a picture with U of T President Meric Gertler, Chancellor Wes Hall, and Professor Leah Cowen.
Image: We are dining in front of the unique and well-preserved warship Vasa from 1628!
Image: Fireside chat with University Professor Emeritus Geoffrey Hinton and Patchen Barss, science journalist, speaker, and author.
Image: Taking a picture with Untether AI co-founder Raymond Chik and Vector Institute CEO Tony Gaffney at the reception after the fireside chat.
Image: An insightful panel moderated by Professor Leah Cowen. Panellists included Professor Eyal de Lara, Vector Institute CEO Tony Gaffney, Professor David Lie, and Professor Amy Loutfi.
Image: Ericsson’s Head of AI Jörgen Gustafsson.
Image: Jörgen Gustafsson and Jason LaTorre, the Ambassador of Canada to Sweden.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
More than two decades ago, before I started my first company, I was involved with an internet startup. Back then, the internet was still in its infancy, and most companies had to host their own servers. The upfront costs were daunting—our startup’s first major purchase was hundreds of thousands of dollars in Sun Microsystems boxes that sat in our office. This significant investment was essential for operations but created a massive barrier to entry for startups.
Fast forward to 2006 when we started Wattpad. We initially used a shared hosting service that cost just $5 per month. This shift was game-changing, enabling us to bootstrap for several years before raising any capital. We also didn’t have to worry about maintaining the machines. It dramatically lowered the barrier to entry, democratizing access to the resources needed to build a tech startup because the upfront cost of starting a software company was virtually zero.
Eventually, as we scaled, we moved to AWS, which was more scalable and reliable. Apparently, we were AWS’s first customer in Canada at the time! It became more expensive as our traffic grew, but we still didn’t have to worry about maintaining our own server farm. This significantly simplified our operations.
A similar evolution has been happening in the semiconductor industry for more than two decades, thanks to the fabless model. Fabless chip manufacturing allows companies—large or small—to design their semiconductors while outsourcing fabrication to specialized foundries. Startups like Blumind leverage this model, focusing solely on designing groundbreaking technology and scaling production when necessary.
But fabrication is not the only capital-intensive aspect. There is also the need for other equipment once the chips are manufactured.
During my recent visit to ventureLAB, where Blumind is based, I saw firsthand how these startups utilize shared resources for this additional equipment. Not only is Blumind fabless, but they can also access various hardware equipment at ventureLAB without the heavy capital expenditure of owning it.
Image: Let’s see how the chip performs at -40°C!
Image: Jackpine (first tapeout).
Image: Wolf (second tapeout).
Image: BM110 (third tapeout).
The common perception that semiconductor startups are inherently capital-intensive couldn’t be more wrong. The fabless model—in conjunction with organizations like ventureLAB—functions much like cloud computing does for software startups, enabling semiconductor companies to build and grow with minimal upfront investment. For the most part, all they need initially are engineers’ computers to create their designs until they reach a scale that requires owning their own equipment.
Fabless chip design combined with shared resources at facilities like ventureLAB is democratizing the semiconductor space, lowering the barriers to innovation, and empowering startups to make significant advancements without the financial burden of owning fabrication facilities. Labour costs aside, the upfront cost of starting a semiconductor company like Blumind could be virtually zero too.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
When it comes to watches, my go-to is a Fitbit. It may not be the most common choice, but I value practicality, especially since not having to recharge daily is a must for me. My Fitbit lasts about 4 to 5 days—decent, but still not perfect.
Now, imagine if we could extend that battery life to a month or even a year. The freedom and convenience would be incredible. Considering the immense computing demands of modern smartwatches, this might sound far-fetched. But that’s where our portfolio company, Blumind, comes into play.
Blumind’s ultra-low power, always-on, real-time, offline AI chip holds the potential to redefine how we think about battery life and device efficiency. This advancement enables edge computing with extended battery life, potentially lasting years – not a typo – instead of days. Products powered by Blumind can transform user behaviours and empower businesses and individuals to unlock new and impactful value (see our thesis).
Blumind’s secret lies in its brain-inspired, all-analog chip design. The human brain is renowned for its energy-efficient computing abilities. Unlike most modern chips that rely on digital systems and require continuous digital-to-analog and analog-to-digital conversions (which drain power), Blumind’s approach emulates the brain’s seamless analog processing. This unique architecture makes it perfect for power-sensitive AI applications, resulting in chips that could be up to 1000 times more energy-efficient than conventional chips, making them ideal for edge computing.
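To see how an efficiency gain of that magnitude translates into battery life, here is a rough back-of-envelope sketch. The battery capacity and baseline power draw below are my own assumed numbers, not Blumind or Fitbit specifications, and the calculation assumes the always-on AI workload dominates the device’s power budget.

```python
# Back-of-envelope sketch with assumed numbers (not Blumind or Fitbit specs):
# how a ~1000x drop in always-on AI power could stretch battery life from
# days to years, assuming the AI workload dominates the power budget.
BATTERY_WH = 0.6          # assumed smartwatch battery capacity in watt-hours
BASELINE_AI_MW = 5.0      # assumed always-on digital AI draw in milliwatts
EFFICIENCY_GAIN = 1000    # the "up to 1000x" figure cited in the post

def battery_days(draw_mw: float) -> float:
    """Battery life in days for a given continuous power draw."""
    return BATTERY_WH / (draw_mw / 1000.0) / 24.0

print(f"Digital baseline: {battery_days(BASELINE_AI_MW):,.0f} days")
print(f"Analog, 1000x:    {battery_days(BASELINE_AI_MW / EFFICIENCY_GAIN):,.0f} days")
```

Even if the AI workload is only part of the total draw, a gain of this size shifts the bottleneck to the display, radio, and other components.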
Blumind’s breakthrough technology has practical and wide-ranging applications. Here are just a few use cases:
• Always-on Keyword Detection: Integrates into various devices for continuous voice activation without excessive power usage.
• Rapid Image Recognition: Supports always-on visual wake word detection for applications such as access control, enhancing human-device interaction with real-time responses.
• Time-Series Data Processing: Processes data streams with exceptional speed for real-time analysis in areas like predictive maintenance, health monitoring, and weather forecasting.
These capabilities unlock new possibilities across multiple industries, including wearables, smart home technology, security, agriculture, medical, smart mobility, and even military and aerospace.
A few weeks ago, I visited Blumind’s team at their ventureLAB office and got an up-close look at their BM110 chip, now in its third tapeout. Blumind exemplifies the future of semiconductor startups through its fabless model, which significantly lowers the initial infrastructure costs associated with traditional semiconductor companies. With resources like ventureLAB supporting them, Blumind has managed to innovate with remarkable efficiency and sustainability. (I’ll share more about the fabless model in an upcoming post.)
I’m thrilled to see where Blumind’s journey leads and how its groundbreaking technology will transform daily life and reshape multiple industries. When devices can go years without needing a recharge instead of mere hours, that’s nothing short of game-changing.
Image: Close-up view of BM110. It is a piece of art!
Image: Qualification in action. Note that BM110 (lower-left corner) is tiny and space-efficient.
Image: The Blumind team is working hard at their ventureLAB office. More on this in a separate blog post here.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
More than two decades ago, I co-founded my first company, Tira Wireless. The business went through several iterations, and eventually, we landed on building a mobile content delivery product. We raised roughly $30M in funding, which was a significant amount at the time. We even ranked as Canada’s Third Fastest Growing Technology Company in the Deloitte Technology Fast 50.
We had a good run, but eventually, Tira had to shut its doors.
We made numerous strategic mistakes, and I learned a lot—lessons that, quite frankly, helped me make far better decisions when I later started Wattpad.
One of the most important mistakes we made was falling into the “bridge technology” trap.
What is the “bridge technology” trap?
Reflecting on significant “platform shifts” over recent decades reveals a pattern: each shift unleashes waves of innovation. Consider the PC revolution in the late 20th century, the widespread adoption of the internet and cloud computing in the 2000s, and the mobile era in the 2010s. These shifts didn’t just create new opportunities; they also created significant pain points as the world tried to leap from one technology to another. Many companies emerged to solve problems arising from these changes.
Tira started when the world began its transition from web to mobile. Initially, there were countless mobile platforms and operating systems. These idiosyncrasies created a huge pain point, and Tira capitalized on that. But in a few short years, mobile consolidated into just two major players—iOS and Android. The pain point rapidly disappeared, and so did Tira’s business.
Similarly, most of these “bridge technology” companies perform very well during the transition because they solve a critical, short-term pain point. However, as the world completes the transition, their business disappears. For instance, numerous companies focused on converting websites into iPhone apps when the App Store launched. Where are they now?
Some companies try to leverage what they’ve built and pivot into something new. But building something new is challenging enough, and maintaining a soon-to-be-declining bridge business while transitioning into a new one is even harder. This is akin to the innovator’s dilemma: successful companies often struggle with disruptive innovation, torn between innovating (and risking profitable products) or maintaining the status quo (and risking obsolescence).
As an investor, it makes no sense to invest in a “bridge” company that is fully expected to pivot within a few years. A pivot should be a Plan B, not Plan A. It’s extremely rare for bridge technology companies to become great, venture-scale investments. In fact, I can’t think of any off the top of my head.
We are currently in the midst of a tectonic AI platform shift. We’re seeing a huge volume of pitches, which is incredibly exciting. Many of these startups built great technologies and products. However, a significant number of these pitches also represent bridge technologies. As the current AI platform shift matures, these bridge technologies will lose relevance. Sometimes, it’s obvious they’re bridge technologies; other times, it requires significant thought to identify them. This challenge is intellectually stimulating, and I enjoy every moment of it. Each analysis informs us of what the future looks like, and just as importantly, what it will not look like. With each passing day, we gain stronger conviction about where the world is heading. It’s further strengthening our “seeing the future is our superpower” muscle, and that’s the most exciting part.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.