OpenClaw, an AI agent that can operate a computer on your behalf, has taken the world by storm. Unless you have been living under a rock, you have probably either tried it already or at least wanted to find out what all the buzz is about.
Many, however, have failed to get past installation because it is so difficult. There is a reason why thousands of people lined up for help just to get OpenClaw installed on their machines. More importantly, using it without proper safeguards can create a real security risk.
From my perspective, three issues stand out in OpenClaw’s current form.
First, it is difficult to install, even for technical users. That matters more than many builders realize. A product does not become broadly useful simply because it is powerful. It becomes useful when people can actually get it running without friction or handholding.
Second, it can create a real security risk if not used properly. Tools that operate at the machine level can be compelling, but they also introduce a very different level of responsibility. Most users do not want to expose their full machine environment just to perform a simple task.
Third, it can become expensive quickly. Token bills can become material before users even realize it. A tool may look impressive in a demo, but if the economics do not work, adoption will eventually stall. In AI, performance matters, but efficiency matters just as much.
This is why, after looking at many options, I chose Crate, built by our portfolio company Gensee, for my own use, and I believe it is by far the best way to try OpenClaw.
It addresses all three issues directly: a one-click install that takes 60 seconds, a secure sandbox that accesses only what you explicitly allow, and the deep expertise of Dr. Shengqi Zhu and award-winning operating systems expert Professor Yiying Zhang, whose work on agentic optimization and efficiency is exactly what makes this possible. That expertise is also why they have been able to make Crate completely free to use.
In other words, it makes OpenClaw easy, safe, and completely free.
There is also a bonus. Crate comes with Gensee’s proprietary AI search engine built in. That search engine ranked #1 on Source Bench for finding the highest-quality web sources.
Another bonus is that Crate comes pre-installed with a set of common, useful skills vetted by the Gensee team for safety, while still allowing users to install additional skills themselves. That makes it both easier to get started and more flexible over time.
A final bonus is flexible control. Users can create multiple instances, pause and resume them, take snapshots, and roll back at any time. That means full control without the usual complexity.
So Gensee Crate is not just an easier and safer way to use OpenClaw. It is also a better one, and that points to where this market is going. The first wave of a technology shows what is possible; the next wave makes it practical for mainstream users. AI agents are now entering that phase. To become part of everyday workflows, they need to be easy to use, safe by design, and efficient enough to be economically viable. That is where adoption happens.
And that is why Gensee Crate is the best way to try out OpenClaw and why it is worth paying attention to.
If you are curious about OpenClaw, try Gensee Crate here.
At Two Small Fish Ventures, we invest in the next frontier of computing and its applications. Supporting that thesis is our focus on research-grounded innovation, which means we spend a lot of time with people who are building from first principles and turning technical breakthroughs into real companies. Not surprisingly, many of those people are world-class women researchers, scientists, and engineers. We have been fortunate to back a good number of them, and we are better for it.
This shows up in our portfolio, and it also shows up in our own team, roughly half of which is female. Our investment team is also roughly half female, with Eva and Mikayla bringing perspectives that genuinely shape how we think, how we evaluate, and how we support founders.
This is not just something to celebrate. It makes us better. One of the most common pieces of feedback we hear from founders is that we ask very different questions. That is exactly the point. Different perspectives lead to better conversations, smaller blind spots, and stronger judgment. In deep tech, where the path from breakthrough to company is rarely straightforward, that matters.
So today, we celebrate the many women founders, researchers, scientists, and engineers we have backed, and the many more we hope to back in the years ahead.
Today, writing software is no more difficult than pressing a button. You describe what you want. In a few minutes, not a mockup but a fully functional application is ready to use.
I can testify to this personally. In 15 minutes, using AI, I have “written” more software than I did in a full year when I was writing software professionally. My old skill is now obsolete, and yet it is wonderful: I can build faster than I ever could. This is the best of times!
So yes, in a narrow sense, the old software opportunity is dead.
The writing has been on the wall for a while. Shallow tech software has been democratized and, in many cases, is not investable. Public markets have finally figured out that a new wave of software is coming. They just do not really know what it is yet, so they sell indiscriminately. Generic business and financial skills do not work during a paradigm shift because disruption does not show up neatly on a spreadsheet full of ARR, EBITDA, and CAGR. Those are the wrong questions to ask when the underlying rules are being rewritten.
At the same time, the early phase of a paradigm shift is often the best time to invest. The people who have new specific knowledge and the courage to build for an AI native world will have a clear edge and, if they are right, capture outsized returns.
Now here is the twist.
When the cost of X collapses, the world does not get less of the thing. It gets flooded with it. That is Jevons Paradox in action. Make something cheaper and easier, and overall demand goes up significantly, often faster than the drop in price. We have seen versions of this before as humanity adopted electricity, personal computers, the internet, and now intelligence.
So software is not dead. We are about to have 10x, 100x, maybe 1000x more software than we have today.
We have seen a similar movie in content. Thanks to the internet and mobile devices, as the cost of content creation and distribution dropped, the amount of content exploded. That created giants that seized the opportunity. Fun fact: I co-founded a business two decades ago on that thesis and rode that wave myself, so yes, I have been there and done that.
Back to software.
The question now is how to capture the opportunity when the world has 1000x more software and the cost of creating software is approaching zero. Inevitably, the business model shifts because we move to a different part of the price elasticity curve when software becomes abundant. When code becomes cheap, value migrates to what stays scarce.
Shallow tech, run-of-the-mill software companies, including a lot of AI wrappers, are generally not investable from a VC perspective because they are so easy to build, copy, and replace. I have been saying this for many years, even before ChatGPT came out. If you still need more evidence, you are already behind. The button is not coming. The button is here.
This does not mean these companies cannot make money. Some will. But “can generate cash when bootstrapping” and “can return a venture fund” are not the same statement.
In contrast, deep tech software is a fantastic opportunity. There is a reason TSF shifted to deep tech investments years ago. That was not an accident. When the cost curve of intelligence collapses, businesses whose primary moat is “we can write this software” or “we spent 100 engineer-years building it” need a rethink.
This is why we are unapologetically investing in deep tech.
Deep tech software is a completely different sport. In many cases, the moat is not in the software. The moat is the unique technology embedded in the software, plus the data and the system it connects to. The software is the container. The defensibility sits underneath.
People often ask how to draw the line between deep tech software and everything else. We have a definition, and it is more true than ever in this “software is abundant” era. More importantly, making that call takes specialized skill. That is why deep tech investing is reserved for trained eyes, as it requires engineering judgment, product instinct, operating experience, and recognition of a market gap that comes from building and commercializing disruptive opportunities. We can do deep tech because we are equipped to do so. Been there. Done that.
To be clear, I am not suggesting the only software opportunity is deep tech. There is also a massive opportunity in bespoke software and disposable software.
For decades, companies bought off-the-shelf software because that was the only option that made economic sense, even when the software was not a perfect fit for their workflow. You ended up customizing your workflow around the software. Bespoke-built software was too expensive, too slow, and too hard to maintain.
Now the economics are changing.
We can now build software for problems that were previously too small to matter economically. We can now create personal tools designed for an audience of one. We can ship internal workflows the way we send emails. We can now generate software that lives for a week, does its job, and disappears.
That is a massive opportunity. Much of it will look like a low-tech, large-scale service business. Some of it will become platforms and infrastructure for software generation itself. Some of it will become entirely new categories we do not have names for yet. Some of it will help make deep tech software even more defensible.
But the direction is clear. Software is becoming abundant, and the economics of software will be drastically different.
So, is software dead?
Yes, software as a scarce craft is dying.
Software-as-a-moat because “we spent 100-engineer-years building it” is dying.
But software as leverage is exploding. Software as the fabric of everything is exploding. The world is not losing software. The world is getting more of it than we can possibly imagine.
Back to the movie analogy. It is like the theatre business. The movie is not the only product. The experience is the product. The popcorn is the product. The atmosphere is the product. The movie is what gets you in the door.
I spent Saturday morning in Hong Kong as a speaker at the Canadian Engineering Asia Pacific Conference, a gathering that felt historic.
Not one, not two, but eight deans of Canadian engineering. In the same room, on the same program, in Asia. The conference materials called it a “historic gathering,” and that’s not an exaggeration.
Hong Kong is the perfect place for this to happen. It has a very large base of Canadian engineering alumni. You could feel it immediately. The electromagnetic pull of hundreds of iron rings in the room. A community that’s stayed connected not just to each other, but to an idea.
And despite the diversity of schools, disciplines, and career paths represented, the conference kept circling back to a single word.
Trust.
Yes, one panel was explicitly about modern engineering ethics and building trust. It was moderated by Dean Kevin Deluzio (Queen’s University) and featured Dean Heather Sheardown (McMaster University), Dean Mary Wells (University of Waterloo), and Dean Caroline Cao (University of Ottawa). What struck me was how the theme showed up everywhere else too. Education, innovation, even the informal hallway conversations. Trust wasn’t a topic. It was the subtext.
This is where Canadian engineering has something uniquely world class to contribute. Why? Because we have a cultural and professional tradition that keeps pulling us back to first principles. What we build touches people. And we take an oath to uphold high ethical standards, safety, and integrity in our professional work. That oath is not performative. It is a commitment the public can hold us to. That is trust.
This conference also marked 100 years since the Calling of an Engineer tradition began in 1925, a uniquely Canadian ritual built around that vow, to uphold high ethical standards, safety, and integrity in our professional work.
That vow is trust.
My panel focused on the future of engineering education, and it was moderated by Dean Chris Yip (University of Toronto). I had the privilege of sharing the stage with Dean Phillip Choi (University of Regina), Dean James Olsen (University of British Columbia), and Dean Viviane Yargeau (McGill University). I shared a view that we are going through a platform shift driven by AI disruption. It is a foundational change that will reshape every sector and touch every aspect of our lives, including university education, where AI can reshape how university students learn and how courses are designed.
That is why I also believe this may be the best time to become an engineer. As an early stage investor in the next frontier of computing and its applications, I get to see this shift firsthand every day. The collapsing cost of intelligence, and hence abundance, is changing what is possible, and it is creating the conditions for entirely new category defining companies.
The most moving part of the day was the re-obligation ceremony: hundreds of Canadian engineers forming a human chain to renew our vows.
Standing there, I was reminded of something simple. Canada’s brand, when we earn it, is built on trustworthiness.
Trust becomes a competitive advantage for Canada. But it’s not something you declare. It’s something you practice day in and day out.
That’s what the iron ring symbolizes at its best, not nostalgia, not ceremony, but a commitment to be worthy of trust through ethics, safety, and integrity, in the work we do and the systems we leave behind.
A century in, the ring still does what it was meant to do. And right now, that feels more important than ever.
And on that note, I trust we do not have to wait another 100 years for the next one. Let’s do an Iron Ring 101 next year!
P.S. The group picture shows only the University of Toronto contingent, so you can tell how big the crowd was. We had eight universities represented!
I spent a full day at Ontario Tech University in Oshawa a few weeks ago. It was my first time on campus, despite it being just over a 40-minute drive from Toronto, where I live. I arrived curious and left with a clearer picture of what they’re building.
Ontario Tech is still a relatively young university, just over two decades old. What’s less well known—and something I didn’t fully appreciate before the visit—is how quickly it has grown in that time, now serving around 14,000 students, and how deliberately it has established itself as a research university rather than simply a teaching-focused institution.
That research orientation shows up not just in output, but in where the university has chosen to build depth—areas that sit close to real systems and real constraints.
This came through clearly in conversations with Prof. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence, whose work focuses on trustworthy and ethical AI. The university has launched Canada’s first School of Ethical AI, alongside the Mindful AI Research Institute, and the work here is grounded in how AI systems behave once deployed—how humans interact with them, and how unintended consequences are identified and managed.
Energy is another area where Ontario Tech has built serious capability. The university is home to Canada’s only accredited undergraduate Nuclear Engineering program, which is ranked third in North America and designated as an IAEA Collaborating Centre. In discussions with Prof. Hossam Gaber, the emphasis was on smart energy systems, where software, sensing, and control systems are developed alongside the physical energy infrastructure they operate within.
I also spent time with Prof. Haoxiang Lang, whose work in robotics, automotive systems, and advanced mobility sits at the intersection of computation and the physical world.
That work is closely tied to the Automotive Centre of Excellence, which includes a climatic wind tunnel described as one of the largest and most sophisticated of its kind in the world. The facility enables full-scale testing under extreme environmental conditions—from arctic cold to desert heat—and supports research that needs to be validated under real operating constraints.
I can’t possibly mention all the conversations I had over the course of the day—it was a full schedule—but I also spent time with Dean Hossam Kishawy and Dr. Osman Hamid, discussing how research, entrepreneurship, and industry engagement fit together at Ontario Tech.
The day also included time at Brilliant Catalyst, the university’s innovation hub, speaking with students and founders about entrepreneurship. I had the opportunity to give a keynote on entrepreneurship, and the visit ended with the pitch competition, where I handed the cheque to the winning team—a small moment that underscored how early many technical journeys begin.
Ontario Tech may be young, but it is already operating with the structure and discipline of a mature research institution, while retaining the adaptability of a newer one.
Thank you to Sunny Chen and the Ontario Tech team for the time, access, and thoughtful conversations throughout the day.
I had the opportunity to join a panel at the Impact 2025 Summit in Calgary, moderated by Raissa Espiritu, with Janet Bannister and Paul Godman. Ironically, none of us are labelled as impact investors, and I explained on stage why Two Small Fish Ventures does what we do.
At Two Small Fish Ventures, we’ve never called ourselves an impact fund. That’s not because we’re indifferent to impact; in fact, it’s core to what we do. Our focus is on deep tech, the next frontier of computing, where innovation can create meaningful, long-term change. Specifically, we invest in five key areas: Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy.
We care deeply about scientific advancement, and more importantly, about turning those breakthroughs into real-world impact. That’s how meaningful progress happens.
Eva is our General Partner, and both of us are immigrants. Diversity isn’t a marketing point for us; it’s part of who we are. It naturally shows up in our portfolio: about half of our companies have at least one female founder, and many come from underrepresented backgrounds. That said, we are uncompromising: we back amazing deep tech founders who are turning their creations into world-class companies.
It’s actually rare that we talk about topics like women investing or investing in underrepresented groups in isolation. Not because we don’t care, quite the opposite. The fact that Eva is one of the few female GPs leading a venture fund, and that we’re both immigrants, already says a lot. Our actions speak volumes. We walk the walk and talk the talk.
We need to deliver results. Period. Our competition isn’t other venture funds; it’s every other investment opportunity available in the market. If we can’t perform at the highest level — top decile in everything we do — we can’t sustain our mission. Delivering some of the best results in the industry enables us to do what we love and make an impact.
That’s why I believe impact and performance are not opposites. The most powerful kind of impact happens when companies succeed, when they become world-class companies. Strong returns and meaningful impact can, and should, reinforce each other.
I also talked about the importance of choosing the right vehicle for the right purpose. When we made a $2 million donation to the University of Toronto to establish the Commercialization Catalyst Prize, it wasn’t about investing. It was about supporting a different kind of impact — helping scientists and engineers turn their research into innovations that can reach the world. Not every kind of impact should come from the same tool.
At the end of the day, labels matter less than intent and execution. We don’t need to call ourselves an impact fund to make a difference. Our goal is simple: to back bold deep tech founders using science and technology to build a better future and to do it with excellence.
A big thank you to Raissa, George Damian, Sylvia Wang, and the entire Platform Calgary team for putting together such a thoughtful and well-run event.
When I was studying electrical engineering, I chose, out of curiosity, to take an elective course on quantum physics as part of advanced optics. It hooked me on quantum. The strange, abstract, counterintuitive rules, for example particles existing in multiple states or being entangled across distance, captivated me.
Error correction, closely related to fault tolerance in quantum systems today, is the backbone of telecommunications, one of the areas I majored in.
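To make that link concrete, here is a toy sketch (my illustration, not any telecom standard) of the simplest classical error-correcting scheme, the three-bit repetition code. Its redundancy-plus-majority-vote idea is the intuition that quantum error correction generalizes:

```python
import random

def encode(bit):
    # Repetition code: transmit three copies of each bit.
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob=0.1):
    # Each transmitted bit flips independently with probability flip_prob.
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    # Majority vote: any single flipped copy is corrected automatically.
    return 1 if sum(bits) >= 2 else 0

random.seed(42)
trials = 100_000
uncoded_errors = sum(noisy_channel([0])[0] != 0 for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0))) != 0 for _ in range(trials))
print(f"uncoded error rate: {uncoded_errors / trials:.3f}")  # ~0.100
print(f"coded error rate:   {coded_errors / trials:.3f}")    # ~0.028, i.e. 3p^2 - 2p^3
```

Quantum error correction cannot simply copy qubits (the no-cloning theorem forbids it), but the same principle, spreading information redundantly so that errors can be detected and undone, carries over to fault-tolerant quantum systems.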
Little did I know these domains would converge in such a way that my earlier academic training would become relevant again years later.
For me, computing is not just my profession, it is also my hobby. As a science nerd, I actively enjoy following advances, and I keep going deeper down the rabbit hole of the next frontier of computing. That mix of personal curiosity and professional focus shapes how I approach both the opportunities and risks in the space. Over the past few years, I have gone deeper into the world of quantum. My academic and professional background gave me the footing to evaluate both what is technically possible and what is commercially viable.
From If to How and When
In June, I wrote Quantum Isn’t Next. It’s Now. We have passed the tipping point where the question is no longer if quantum technology will work, it is how and when it will scale.
This momentum is not just visible to those of us deep in the field. As the Globe and Mail recently reported, we at Two Small Fish have been following quantum for years, but did not think it was mature enough for an early-stage fund with a 10-year lifespan to back. This year, we changed our minds. As I shared in that article: “It’s much more investible now.”
The distinction is clear: when quantum was still a science problem, the central question was whether it could work at all. Now that it has become an engineering problem, the questions are how it will work at scale and when it will be ready for commercialization.
This shift matters for investors. Venture capital focuses on engineering breakthroughs: hard, uncertain, but achievable on a commercialization timeline. Fundamental science, which can take many more years to mature, is better supported by governments, universities, and non-dilutive funding sources. I will leave that discussion for another post.
One of Five Frontiers
At Two Small Fish Ventures, we have identified five areas shaping the next frontier of computing. Quantum falls under the area of advanced computing hardware, where the convergence of different areas of science, engineering, and commercialization is accelerating.
Each of these areas is no longer a speculative science experiment but a rapidly advancing field where engineering and commercialization are converging. Within the next ten years, the winners will emerge from lab prototypes and become scaled companies. Quantum is firmly on that trajectory.
How We Invest in Quantum
Our first principle at Two Small Fish is straightforward: we only invest in things we truly understand, through all three lenses of technology, product, and commercialization. That discipline forces us to dig deep before committing capital. And after years of study, it is clear to us that quantum has moved into investable territory, but only selectively.
Not every quantum startup fits a venture time horizon. Some promising projects will take too many years to scale. But we are now seeing opportunities that, within a 10-year window, can realistically grow from an early-stage idea to a successful scale-up. That is the standard we apply to every investment, and quantum finally has companies that meet it.
From Sci-Fi to Reality
Canada has played an outsized role in building the foundation of quantum science. Now, it has the chance to lead in quantum commercialization. The next few years will determine which teams turn breakthrough science into enduring companies.
For investors, this is both an opportunity and a responsibility. The quantum era is not a distant possibility, it is here now. What once sounded like science fiction is now an investable reality. And for those willing to put in the work to understand it, the frontier is already here.
In 1865, William Stanley Jevons, an English economist, observed a curious phenomenon: as steam engines in Britain became more efficient, coal use didn’t fall — it rose. Efficiency lowered the cost of using coal, which made it more attractive, and demand surged.
That insight became known as Jevons Paradox. To put it simply:
1. Technological change increases efficiency or productivity.
2. Efficiency gains lead to lower consumer prices for goods or services.
3. The reduced price creates a substantial increase in quantity demanded (because demand is highly elastic).
Instead of shrinking resource use, efficiency often accelerates it — and with it, broader societal change.
Coal, Then Light
The paradox first appeared in coal: better engines, more coal consumed. Electricity followed a similar path. Consider lighting in Britain:
| Period | True price of lighting (per million lumen-hours, £2000) | Change vs. start | Per-capita consumption (thousand lumen-hours) | Change vs. start | Total consumption (billion lumen-hours) | Change vs. start |
|---|---|---|---|---|---|---|
| 1800 | £8,000 | — | 1.1 | — | 18 | — |
| 1900 | £250 | ↓ ~30× | 255 | ↑ ~230× | 10,500 | ↑ ~500× |
| 2000 | £2.5 | ↓ ~3,000× (vs. 1800) / ↓ ~100× (vs. 1900) | 13,000 | ↑ ~13,000× (vs. 1800) / ↑ ~50× (vs. 1900) | 775,000 | ↑ ~40,000× (vs. 1800) / ↑ ~74× (vs. 1900) |
Over two centuries, the price of light fell 3,000×, while per-capita use rose 13,000× and total consumption rose 40,000×. A textbook case of Jevons Paradox — efficiency driving demand to entirely new levels.
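As a back-of-envelope check (my arithmetic on the table’s 1800-to-2000 figures), the implied price elasticity of demand for light was above 1, which is precisely the condition under which efficiency gains increase, rather than reduce, total consumption:

```python
import math

# From the table: 1800 -> 2000, price fell ~3,000x while per-capita use rose ~13,000x.
price_drop = 3_000
quantity_rise = 13_000

# Under constant-elasticity demand Q ~ P^(-e), e = ln(Q2/Q1) / ln(P1/P2).
elasticity = math.log(quantity_rise) / math.log(price_drop)
print(f"implied elasticity: {elasticity:.2f}")  # ~1.18, i.e. elastic demand

# Elastic demand (e > 1) means total spending on light rose even as its price collapsed.
print(f"total spend multiple: {quantity_rise / price_drop:.1f}x")  # ~4.3x
```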
Computing: From Millions to Pennies
This pattern carried into computing:
| Year | Cost per Gigaflop | Notes |
|---|---|---|
| 1984 | $18.7 million (~$46M today) | Early supercomputing era |
| 2000 | $640 (~$956 today) | Mainstream affordability |
| 2017 | $0.03 | Virtually free compute |
From $18.7 million to three cents is a drop of roughly 600 million times. What once required national budgets is now in your pocket.
Storage mirrored the same story: by 2018, 8 TB of hard drive storage cost under $200 — about $0.019 per GB, compared to thousands per GB in the mid-20th century.
Connectivity: Falling Costs, Rising Traffic
Connectivity followed suit:
| Year | Typical Speed & Cost per Mbps (U.S.) | Global Internet Traffic |
|---|---|---|
| 2000 | Dial-up / early DSL (<1 Mbps); ~$1,200 | ~84 PB/month |
| 2010 | ~5 Mbps broadband; ~$25 | ~20,000 PB/month |
| 2023 | 100–940 Mbps common; ↓ ~60% since 2015 (real terms) | >150,000 PB/month |
(PB = petabytes)
As costs collapsed, demand exploded. Streaming, cloud services, social apps, mobile collaboration, IoT — all became possible because bandwidth was no longer scarce.
Intelligence: The New Frontier
Now the same dynamic is unfolding with intelligence:
| Year | Cost per Million Tokens | Notes |
|---|---|---|
| 2021 | ~$60 | Early GPT-3 era |
| 2023 | ~$0.40–$0.60 | GPT-3.5 scale models |
| 2024 | < $0.10 | GPT-4o and peers |
That’s a drop of more than two orders of magnitude in just a few years. Unsurprisingly, demand is surging — AI copilots in workflows, large-scale analytics in enterprises, and everyday generative tools for individuals.
As we highlighted in our TSF Thesis 3.0, cheap intelligence doesn’t just optimize existing tasks. It reshapes behaviour at scale.
Why It Matters
The recurring pattern is clear:
Coal efficiency fueled the Industrial Revolution.
Affordable lighting built electrified cities.
Cheap compute and storage enabled the digital economy.
Low-cost bandwidth drove streaming and cloud collaboration.
Now cheap intelligence is reshaping how we live, work, and innovate.
As we highlighted in Thesis 3.0:
“Reflecting on the internet era… as ‘the cost of connectivity’ steadily declined, productivity and demand surged—creating a virtuous cycle of opportunities. The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity… Like connectivity in the internet era, ‘the cost of intelligence’ is now rapidly declining, while the value derived continues to surge, driving even greater demand.”
The lesson is simple: efficiency doesn’t just save costs — it reorders economies and societies. And that’s exactly what is happening now.
If you are building a deep tech early-stage startup in the next frontier of computing, we would love to hear from you. This is a generational opportunity as both traditional businesses and entirely new sectors are being reshaped. White-collar jobs and businesses, in particular, will not be the same.
For nearly 70 years, the home electrical panel has looked the same. Meanwhile, the home itself is transforming: solar on the roof, batteries in the garage, heat pumps, EVs in the driveway, and smart appliances and devices everywhere.
And yet, the panel? Still the same. It is the last dumb box left, and FUTURi is fixing that with deep tech.
FUTURi’s Energy Processor
FUTURi Power, founded by Dr. Martin Ordonez (UBC Professor, Kaiser Chair at UBC, and recipient of the King Charles III Coronation Medal for leadership in clean energy innovation), reimagines the panel as the Energy Processor, a programmable energy computer that finally gives the home’s electrical system a brain. It is designed as a future-proof, like-for-like replacement for the traditional panel: it intelligently measures and coordinates loads, avoids peaks, and manages energy use at the edge.
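To make “coordinates loads, avoids peaks” concrete, here is a deliberately simplified sketch (my illustration, not FUTURi’s actual algorithm; the capacity and load figures are invented) of how a panel-level controller can keep total draw under the service limit by deferring flexible loads:

```python
# Hypothetical household loads: (name, amps, flexible?). Flexible loads can be deferred.
LOADS = [
    ("heat pump", 30, False),
    ("oven", 40, False),
    ("EV charger", 48, True),
    ("dryer", 24, True),
]
SERVICE_LIMIT_AMPS = 100  # e.g. a typical 100 A residential service

def schedule(requested):
    """Greedy peak avoidance: admit all must-run loads first, then admit
    flexible loads only while total draw stays under the service limit."""
    running, deferred, total = [], [], 0
    for name, amps, flexible in sorted(requested, key=lambda load: load[2]):
        if not flexible or total + amps <= SERVICE_LIMIT_AMPS:
            running.append(name)
            total += amps
        else:
            deferred.append(name)
    return running, deferred, total

running, deferred, total = schedule(LOADS)
print(f"running:  {running} ({total} A)")  # heat pump, oven, dryer: 94 A
print(f"deferred: {deferred}")             # EV charger waits for headroom
```

A real controller would do this continuously, with measured rather than nameplate currents, but the principle is the same: intelligence at the panel instead of a costly service upgrade.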
Why This Matters
Homes are no longer passive energy consumers. They are dynamic nodes in the grid. By making the panel intelligent, FUTURi enables:
For homeowners: a 100% electric home without costly service upgrades, plus a smarter, more resilient, and more efficient energy ecosystem.
For utilities: flattened demand peaks, integrated demand response (DR) programs and distributed energy resources (DERs), and deferred capital expenditures.
For builders and communities: Intelligent electrification helps accelerate the deployment of built infrastructure without overloading the grid.
This is why FUTURi and utilities are already collaborating on projects to evaluate how Energy Processors can strengthen the grid and benefit customers.
Our Perspective
As Dr. Martin Ordonez, Founder and CEO of FUTURi Power, puts it: “Panels used to be passive. The Energy Processor is active, safe, and software-defined. It gives homes and grids a common language.” At TSF, Smart Energy is one of our five focus areas. Our thesis is simple: the cost of intelligence is collapsing, and the biggest opportunities lie where software and hardware come together to reshape behaviour.
FUTURi is exactly that blueprint for intelligent electrification: deep-tech power electronics plus intelligent control. That combination turns a 70-year-old box into the brain of the modern home. Dr. Ordonez and his team are globally recognized experts in electrification who are translating decades of pioneering research into transformative commercial solutions.
And this is just the beginning. There is so much more the company can do to make electricity truly intelligent. FUTURi has a bright future ahead (pun fully intended).
The cost of intelligence is dropping at an unprecedented rate. Just as the drop in the cost of computing unlocked the PC era and the drop in the cost of connectivity enabled the internet era, falling costs today are driving explosive demand for AI adoption. That demand creates opportunity on the supply side too, in the infrastructure, energy, and technologies needed to support and scale this shift.
In our Thesis 3.0, we highlighted how this AI-driven platform shift will reshape behaviour at massive scale. But identifying the how also means knowing where to look.
Every era of technology has a set of areas where breakthroughs cluster, where infrastructure, capital, and talent converge to create the conditions for outsized returns. For the age of intelligent systems, we see five such areas, each distinct but deeply interconnected.
1. Vertical AI Platforms
After large language models, the next wave of value creation will come from Vertical AI Platforms that combine proprietary data, hard-to-replicate models, and orchestration layers designed for complex and large-scale needs.
Built on unique datasets, workflows, and algorithms that are difficult to imitate, these platforms create proprietary intelligence layers that are increasingly agentic. They can actively make decisions, initiate actions, and shape workflows. This makes them both defensible and transformative, even when part of the foundation rests on commodity models.
This shift from passive tools to active participants marks a profound change in how entire sectors operate.
2. Physical AI
The past two decades of digital transformation mostly played out behind screens. The next era brings AI into the physical world.
Physical AI spans autonomous devices, robotics, and AI-powered equipment that can perceive, act, and adapt in real environments. From warehouse automation to industrial robotics to autonomous mobility, this is where algorithms leave the lab and step into society.
We are still early in this curve. Just as industrial machinery transformed factories in the nineteenth century, Physical AI will reshape industries that rely on labour-intensive, precision-demanding, or hazardous work.
The companies that succeed will combine world-class AI models with robust hardware integration and build the trust that humans place in systems operating alongside them every day.
3. AI Infrastructure
Every transformative technology wave has required new infrastructure that is robust, reliable, and efficient. For AI, this means going beyond raw compute to ensure systems that are secure, safe, and trustworthy at scale.
We need security, safety, efficiency, and trustworthiness as first-class priorities. That means building the tools, frameworks, and protocols that make AI more energy efficient, explainable, and interoperable.
The infrastructure layer determines not only who can build AI, but who can trust it. And trust is ultimately what drives adoption.
4. Advanced Computing Hardware
Every computing revolution has been powered by a revolution in hardware. Just as the transistor enabled mainframes and the microprocessor ushered in personal computing, the next era will be defined by breakthroughs in semiconductors and specialized architectures.
From custom chips to new communication fabrics, hardware is what makes new classes of AI and computation possible, both in the cloud and on the edge. But it is not only about raw compute power. The winners will also tackle energy efficiency, latency, and connectivity, areas that become bottlenecks as models scale.
As Moore’s Law hits its limit, we are entering an age of architectural innovation with neuromorphic computing, photonics, quantum computing, and other advances. Much like the steam engine once unlocked new industries, these architectures will redefine what is computationally possible. This is deep tech meeting industrial adoption, and those who can scale it will capture immense value.
5. Smart Energy
Every technological leap has demanded a new energy paradigm. The electrification era was powered by the grid. Today, AI and computing are demanding unprecedented amounts of energy, and the grid as it exists cannot sustain this future.
This is why smart energy is not peripheral, but central. From new energy sources to intelligent distribution networks, the way we generate, store, and allocate energy is being reimagined. The idea of programmable energy, where supply and demand adapt dynamically using AI, will become as fundamental to the AI era as packet switching was to the internet.
Here, deep engineering meets societal need. Without resilient and efficient energy, AI progress stalls. With it, the future scales.
Shaping What Comes Next
The drop in the cost of intelligence is driving demand at a scale we have never seen before. That demand creates opportunity on the supply side too, in the platforms, hardware, energy, physical systems, and infrastructure that make this future possible.
The five areas — Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy — represent the biggest opportunities of this era. They are not isolated. They form an interconnected landscape where advances in one accelerate breakthroughs in the others.
We are domain experts in these five areas. The TSF team brings technical, product and commercialization expertise that helps founders build and scale in precisely these spaces. We are uniquely qualified to do so.
At Two Small Fish, this is the canvas for the next generation of 100x companies. We are excited to partner with the founders building in these areas globally, those who not only see the future, but are already shaping it.
A few years back, Eva met Dr. Scott Stornetta. Later, I did too. Alongside Dr. Stuart Haber, Scott is widely credited as the creator of blockchain, a technology built on what was, at the time, a simple but radical idea: decentralization. No single authority, no central point of control, just a trusted system everyone can rely on.
Now, these two scientists are teaming up again to start a new company, SureMark Digital. Their mission is to bring that same decentralized philosophy to identity and authenticity on the internet, enabling anyone to prove who they are, certify their work, and push back against deepfakes and impersonation. No middlemen. No central gatekeepers.
It took us about 3.141592654 seconds to get excited. We are now proud to be the co-lead investor in SureMark’s first institutional round.
At Two Small Fish, we love backing frontier tech that can reshape large-scale behaviour. SureMark checks every box.
Eva has written a deeper dive on what they are building and why it matters. You can read it here.
At most dinners, introductions start with your name and maybe what you do.
At this one, we began with: “Second edition.” “Fourth edition.”
Why? Because this was our “School of Fish – Legends of Semiconductors” dinner, hosted at our home, where your relationship with the Sedra & Smith textbook was the common thread. (I’m second edition, if you’re wondering.)
We were incredibly honoured to have Dr. Adel Sedra, former Dean of Engineering at the University of Waterloo, join us. Recently appointed to the Order of Canada, Dr. Sedra is a towering figure in the world of electrical engineering. Since 1982, his textbook has taught more than three-quarters of the world’s electrical engineers. It is hard to find someone in the field who has not studied from it. I consider myself extraordinarily fortunate, not just to have learned from his book, but to have been his student more than 30 years ago at the University of Toronto. Few have had the privilege of learning directly from a legend.
We were equally honoured to host Benny Lau, co-founder of ATI Technologies, whose legacy lives on in AMD’s GPUs to this day. AMD acquired ATI for $5.4 billion nearly 20 years ago, still one of the largest tech acquisitions in Canadian history. When Eva worked at ATI, she had the chance to work closely with Benny. His presence brought our conversation full circle, from classroom to commercialization. Adding even more depth to the evening, Benny was also once a student of Dr. Sedra. Two generations of engineers at the same table, both shaped by the same teacher.
From left to right: Benny Lau, Eva Lau, Ljubisa Bajic
This evening was also a chance to reconnect with those who shaped my own journey. Martin Snelgrove and Raymond Chik, my professor and TA respectively, were both there and are now serial entrepreneurs. They are also co-founders of Hepzibah, a Two Small Fish portfolio company. (I still can’t help but sometimes call him Professor Snelgrove.) Xerxes Wania, another one of my TAs from back in the day, went on to build and exit two semiconductor companies and added his voice to the conversation.
From left to right: Xerxes Wania, Dr. Adel Sedra, Allen Lau, Martin Snelgrove, Raymond Chik
We were also joined by Ljubisa Bajic, former CEO of Tenstorrent and now CEO of Taalas, who also spent part of his career at ATI, further adding to the thread that connected many of us. Chris Yip, Dean of Engineering at the University of Toronto, and Deepa Kundur, current Chair of U of T’s Department of Electrical & Computer Engineering—continuing the legacy of leadership that Dr. Sedra once held in that position—also attended. Professor Tony Chan Carusone, now also CTO of Alphawave Semi and co-author of the Sedra & Smith textbook starting with the 8th edition, brought both academic and commercial perspectives to the table.
From the TSF portfolio side, we were thrilled to have Professor Doug Barlage of the University of Alberta and Professor Chris Eliasmith of the University of Waterloo, co-founders of Zinite and ABR, respectively.
And of course, our partner Dr. Albert Chen joined us. He is a graduate of Waterloo Engineering and knows a thing or two about semiconductors himself.
Semiconductors brought us together that night. From textbook to tapeout, that was what we talked about, and we loved every minute of it.
In the history of human civilization, there have been several distinct ages: the Agricultural Age, the Industrial Age, and the Information Age, which we are living in now.
Within each age, there are different eras, each marked by a drastic drop in the cost of a fundamental “atomic unit.” These cost collapses triggered enormous increases in demand and reshaped society by changing human behaviour at scale.
From the late 1970s to the 1990s, the invention of the personal computer drastically reduced the cost of computing [1]. A typical CPU in the early 1980s cost hundreds of dollars and ran at just a few MHz. By the 1990s, processors were orders of magnitude faster for roughly the same price, unlocking entirely new possibilities like spreadsheets and graphical user interfaces (GUIs).
Then, from the mid-1990s to the 2010s, came the next wave: the Internet. It brought a dramatic drop in the cost of connectivity [2]. Bandwidth, once prohibitively expensive, fell by several orders of magnitude — from over $1,200 per Mbps per month in the ’90s to less than a penny today. This enabled browsers, smartphones, social networks, e-commerce, and much of the modern digital economy.
From the mid-2010s to today, we’ve entered the era of AI. This wave has rapidly reduced the cost of intelligence [3]. Just two years ago, generating a million tokens using large language models cost over $100. Today, it’s under $1. This massive drop has enabled applications like facial recognition in photo apps, (mostly) self-driving cars, and — most notably — ChatGPT.
These three eras share more than just timing. They follow a strikingly similar pattern:
First, each era is defined by a core capability: computing, connectivity, and intelligence, respectively.
Second, each unfolds in two waves:
The initial wave brings a seemingly obvious application (though often only apparent in hindsight), such as spreadsheets, browsers, or facial recognition.
Then, typically a decade or so later, a magical invention emerges — one that radically expands access and shifts behaviour at scale. Think GUI (so we no longer needed to use a command line), the iPhone (leapfrogging flip phones), and now, ChatGPT.
Why does this pattern matter?
Because the second-wave inventions are the ones that lower the barrier to entry, democratize access, and reshape large-scale behaviour. The first wave opens the door; the second wave throws it wide open. It’s the amplifier that delivers exponential adoption.
We’ve seen this movie before. Twice already, over the past 50 years.
The cost of computing dropped, and it transformed business, productivity, and software.
Then the cost of connectivity dropped, and it revolutionized how people communicate, consume, and buy.
Now the cost of intelligence is collapsing, and the effects are unfolding even faster.
Each wave builds on the last. The Internet era evolved faster than the PC era because the former leveraged the latter’s computing infrastructure. AI is moving even faster because it sits atop both computing and the Internet. Acceleration is not happening in isolation. It’s compounding.
If it feels like the pace of change is increasing, it’s because it is.
Just look at the numbers:
Windows took over 2 years to reach 1 million users.
Facebook got there in 10 months.
ChatGPT did it in 5 days.
These aren’t just vanity metrics — they reflect the power of each era’s cost collapse to accelerate mainstream adoption.
That’s why it’s no surprise — in fact, it’s crystal clear — that the current AI platform shift is more massive than any previous technological shift. It will create massive new economic value, shift wealth away from many incumbents, and open up extraordinary investment opportunities.
That’s why the succinct version of our thesis is:
We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
The race is already on. We can’t wait to invest in the next great thing in this new era of intelligence.
Super exciting times ahead indeed.
Footnotes
[1] Cost of Computing
In 1981, the Intel 8088 CPU (used in the first IBM PC) had a clock speed of 4.77 MHz and cost ~$125. By 1995, the Intel Pentium processor ran at 100+ MHz and cost around $250 — a ~20x speed gain at similar cost. Today’s chips are thousands of times faster, and on a per-operation basis, exponentially cheaper.
[2] Cost of Connectivity
In 1998, bandwidth cost over $1,200 per Mbps/month. By 2015, that figure dropped below $1. As of 2024, cloud bandwidth pricing can be less than $0.01 per GB — a near 100,000x drop over 25 years.
[3] Cost of Intelligence
In 2022, generating 1 million tokens via OpenAI’s GPT-3.5 could cost $100+. In 2024, it costs under $1 using GPT-4o or Claude 3.5, with faster performance and higher accuracy — a 100x+ reduction in under two years.
In the early 2000s, it was a common joke in the tech world that “next year is the year of the smartphones.” People kept saying it over and over for almost a decade. It became a punchline. The industry nearly lost its credibility.
Until the iPhone launched. “Next year is the year of the smartphones” finally became true.
The same joke has followed quantum for the past ten years: next year is the year of quantum.
Except it hasn’t been. Not yet.
And yet, quietly, the foundations have been built. We’re not there, but we’re far from where we started.
We’re getting closer. Much closer. I can smell it. I can hear it. I can sense it.
Right now, without getting into too much technical detail, we’re still at a small scale: fewer than 100 usable qubits. Commercial viability likely requires thousands, if not millions. The systems are still too error-prone, and hosting your own quantum machine is wildly impractical. They’re expensive, fragile, and noisy.
At this stage, quantum is mostly limited to niche or small-scale applications. But step by step, quantum is inching closer to broader utility.
And while these things don’t progress in straight lines, the momentum is real and accelerating.
Large-scale, commercially deployable, fault-tolerant quantum computers accessed through the cloud are no longer science fiction. They’re within reach.
I spent a few of my academic years in signal processing and error correction. I’ve also spent a bit of time studying quantum mechanics. I understand the challenges of cloud-based access to quantum systems, and I’ve been following the field for quite a while, mostly as a curious science nerd.
All of that gives me reason to trust my sixth sense. Quantum is increasingly becoming a reality.
Nobody knows exactly when the iPhone moment or the ChatGPT moment of quantum will happen. But I’m absolutely sure we won’t still be saying “next year is the year of quantum” a decade from now.
It will happen, and it will happen much sooner than you might think.
This is an exciting time and the ideal time to take a closer look at quantum, because the best opportunities tend to emerge right before the technology takes off.
How can we not get excited about new quantum investment opportunities?
P.S. I’m excited to attend the QUANTUM NOW conference this week in Montreal. Also thrilled to see Mark Carney name quantum as one of Canada’s official G7 priorities. That short statement may end up being a big milestone.
When entrepreneurs exit their companies, it is supposed to be a victory lap. But in reality, many find themselves in an unexpected emotional vacuum. More often than you might think, I hear variations of the same quiet confession:
“It should have been the best time of my life. But I felt lost after the exit. I lost my purpose.”
After running Wattpad for 15 years, I understand this all too well. It is like training for and running a marathon for over a decade, only to stop cold the day after the finish line. No more rhythm. No more momentum. No next mile.
Do I Miss Operating?
Unsurprisingly, people often ask me:
“Do you like being a VC?”
“Do you miss operating?”
My honest answer is yes and yes (but I get my fix without being a CEO — see below).
Being a founder and CEO was deeply challenging and also immensely rewarding. It is a role that demands a decade-long commitment to building one and only one thing. And while I loved my time as CEO, I did not feel the need to do it again. Once in a lifetime was enough. I have started three companies. A fourth would have felt repetitive.
What I missed most was not the title or the responsibility. It was the people. The team. The day-to-day collaboration with nearly 300 passionate employees when I stepped down. That sense of shared mission — of solving hard problems together — was what truly filled my cup.
Let’s be honest: founders call me especially when they believe I am the only one who can help them. Their words, not mine. And there have been plenty of those occasions.
That gives me the same hit of adrenaline I used to get from operating. At my core, I love solving hard problems. That part of me did not go away after my exit. I just found a new arena for it — and it is a perfect replacement.
A Playground for a Science Nerd
What people may not realize is that the deep tech VC job is drastically different from a “normal” VC job. As a deep tech VC, I am constantly stretched and go deep — technically, intellectually, and creatively. It forces me to stay sharp, push my boundaries, and reconnect with my roots as a curious, wide-eyed science nerd.
There is something magical about working with founders at the bleeding edge of innovation. I get to dive into breakthrough technologies, understand how they work, and figure out how to turn them into usable and scalable products. It feels like being a kid in a candy store — except the candy is semiconductors, control systems, power electronics, quantum, and other domains in the next frontier of computing.
How could I not love that?
Ironically, I had less time to indulge this curiosity when I was a CEO. Now I can geek out and help shape the future at the same time. It is a net positive to me.
You Do Not Have to Love It All
Of course, every job — including CEO and VC — has its less glamorous parts. Whether you are a founder or a VC, there will always be administrative tasks and responsibilities you would rather skip.
But I have learned not to resent them. As I often say:
“You do not need to love every task. You just need to be curious enough to find the interesting angles in anything.”
Those tasks are the cost of admission to being a deep tech VC. A small price to pay to do the work I love — supporting incredible entrepreneurs as they bring transformative ideas to life, and finding joy in doing so. And knowing what I know now, I do not think I would enjoy being a “normal” VC. I cannot speak for others, but for me, this is the only kind of venture work that truly energizes and fulfills me.
A New Season. A New Purpose.
So yes, being a VC brings me as much joy as being a CEO did — and arguably even more fulfillment (and I am surprised that I am saying this). I feel incredibly lucky. And I am all in.
It feels like all my past experience has prepared me for what I do today. I often describe this phase of my life this way:
Wattpad was my regular season. TSF is my playoff hockey.
It is faster. It is grittier. The stakes feel higher. Not because I am building one company, but because I am helping many shape the future.
That’s 10 years, 120 months, and 3,653 days (yes, we counted the leap years). What started as a bold experiment in early-stage investing has become a decade-long journey of backing audacious founders building at the edge of what’s possible.
Over the weekend, we wired funds for our 60th first investment. That’s not including the many follow-on cheques we’ve written along the way—if we counted those, the number would be much higher. We’re not naming the company just yet, but like the 59 before it, this one reflects deep conviction. We think it’ll make a splash!
For years, we’ve said we write 5 to 7 new cheques per year. Not because we aim for a quota, but because this is what a power-law portfolio construction strategy naturally produces. In venture, just a few outlier companies drive the vast majority of returns. The trick is to consistently back companies with 100x potential. That’s the focus—not pacing. And yet, the numbers tell their own story: we’ve averaged exactly six new investments a year. Apparently, clarity of focus brings consistency as a byproduct.
We’re now six months into our tenth year, and we’re right on pace.
To the founders we’ve backed: thank you for trusting us at the earliest, riskiest stage.
To those we haven’t met yet: if you’re building deep tech in the next frontier of computing, we’d love to hear from you. We invest globally. If you’ve got a breakthrough, we can help turn it into a product. If you’ve got a product, we can help turn it into a company.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
A solo musician doesn’t need a conductor. Neither does a jazz trio.
But an orchestra? That’s a different story. You need a conductor to coordinate, to make sure all the parts come together.
Same with AI agents. One or two can operate fine on their own. But in a multi-agent setup, the real bottleneck is orchestration.
Yesterday, we announced our investment in GenseeAI. That’s the layer the company is building—the conductor for AI agents, i.e., the missing intelligent optimization layer for AI agents and workflows. Their first product, Cognify, takes AI workflows built with frameworks like LangChain or DSPy and intelligently rewrites them to be 10× faster, cheaper, and more reliable. It’s a bit like “compilation” for AI. Given a high-level workflow, Cognify produces a tuned, executable version optimized for production. Their second product, currently under development, goes one step further: a serving layer that continuously optimizes AI agents and workflows at runtime. Think of it as an intelligent “virtual machine” for AI, where the execution of agents and workflows is transparently and “automagically” improved while running.
If you’re building AI systems and want to go from prototype to production with confidence, get in touch with the GenseeAI team.
Read Brandon’s blog post here, or read on below for all the details:
At Two Small Fish, we invest in founders building foundational infrastructure for the AI-native world. We believe one of the most important – yet underdeveloped – layers of this stack is orchestration: how generative AI workflows are built, optimized, and deployed at scale.
Today, building a production-grade genAI app involves far more than calling an LLM. Developers must coordinate multiple steps – prompt chains, tool integrations, memory, RAG, agents – across a fragmented and fast-moving ecosystem and a variety of models. Optimizing this complexity for quality, speed, and cost is often a manual, lengthy process that businesses must navigate before a demo can become a product.
GenseeAI is building the missing optimization layer for AI agents and workflows in an intelligent way. Their first product, Cognify, takes AI workflows built with frameworks like LangChain or DSPy and intelligently rewrites them to be faster, cheaper, and better. It’s a bit like “compilation” for AI: given a high-level workflow, Cognify produces a tuned, executable version optimized for production.
Their second product – currently under development – goes one step further: a serving layer that continuously optimizes AI agents and workflows at runtime. Think of it as an intelligent “virtual machine” for AI, where the execution of agents and workflows is transparently and automatically improved while running.
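To make the “compilation” analogy concrete, here is a toy sketch of one decision such an optimizer automates: picking the cheapest configuration for a workflow step that still clears a quality bar. The names and numbers are hypothetical, and this is not Cognify’s actual API:

# Toy "compiler pass" for an AI workflow step: among candidate configurations,
# pick the cheapest one whose measured quality clears a floor. Illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cost_per_call: float   # dollars per call (hypothetical)
    quality: float         # score on a held-out evaluation set, 0 to 1

def optimize_step(candidates, quality_floor):
    viable = [c for c in candidates if c.quality >= quality_floor]
    if not viable:
        # Nothing clears the bar; fall back to the highest-quality option.
        return max(candidates, key=lambda c: c.quality)
    return min(viable, key=lambda c: c.cost_per_call)

candidates = [
    Candidate("large model, long prompt", 0.0200, 0.93),
    Candidate("small model, tuned prompt", 0.0015, 0.91),
    Candidate("small model, naive prompt", 0.0010, 0.72),
]
print(optimize_step(candidates, 0.90).name)  # small model, tuned prompt: ~13x cheaper

A real optimizer works over a far larger space (models, prompts, step decompositions), but the idea is the same: most steps don’t need the most expensive configuration.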
We believe GenseeAI is a critical unlock for AI’s next phase. Much of today’s genAI development is stuck in prototype purgatory – great demos that fall apart in the real world due to cost overruns, latency, and poor reliability. Gensee helps teams move from “it works” to “it works well, and at scale.”
What drew us to Gensee was not just the elegance of the idea, but the clarity and depth of its execution. The company is led by Yiying Zhang, a UC San Diego professor with a strong track record in systems infrastructure research, and Shengqi Zhu, an engineering leader who has built and scaled AI systems at Google. Together, they bring a rare blend of academic rigor and hands-on experience in deploying large-scale infrastructure. In early benchmarks, Cognify delivered up to 10× cost reductions and 2× quality improvements – all automatically. Their roadmap – including fully automated optimization, enterprise integrations, and a registry of reusable “optimization tricks” – shows ambition to become the default runtime for generative AI.
As the AI stack matures, we believe Gensee will become a foundational layer for organizations deploying intelligent systems. It’s the kind of infrastructure that quietly powers the AI apps we’ll all use – and we’re proud to support them on that journey. If you’re building AI systems and want to go from prototype to production with confidence, get in touch with the team at GenseeAI.
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Thank you to The Globe for publishing my op-ed about AI last week. In it, I draw parallels between the dot-com crash and the current AI boom—keeping in mind the old saying, “History doesn’t repeat itself, but it often rhymes.” The piece also explores how the atomic unit of this transformation is the ever-declining “cost of intelligence.” AI is the first technology in human history capable of learning, reasoning, creativity, cross-domain thinking, and decision-making. This fundamental shift will impact every sector, without exception, spurring the rise of new tech giants and inevitable casualties in the process. The key is knowing which land to grab!
The piece is now available below.
In the past month, everyone I’ve spoken to has been talking about DeepSeek and Nvidia. Is Nvidia facing extinction? Have certain tech giants overspent on AI? Are we seeing a bubble about to burst, or just another public market overreaction? And what about traditional sectors, like industrials, that haven’t yet felt AI’s impact?
Let’s step back. We’ll revisit companies that soared or collapsed during the dot-com crash – and the lessons we can learn. As Mark Twain reputedly said, “History doesn’t repeat itself, but it often rhymes.”
The answer is that the reports of Nvidia’s demise are greatly exaggerated, though other companies face greater danger. At the same time, new opportunities are vast because this AI-driven shift could dwarf past tech disruptions.
Before 2000, the dot-com mania hit full speed. High-flying infrastructure players such as Global Crossing – once worth US$47-billion – provided backbone networks. Cisco delivered networking equipment, and Sun Microsystems built servers. However, amid the crash, Global Crossing went bankrupt in January 2002. Cisco plummeted from more than US$500-billion in market cap to about US$100-billion. Sun Microsystems sank from a US$200-billion market cap to under US$10-billion.
They failed or shrank for different reasons. Global Crossing needed huge investments before real revenue arrived. Cisco had decent unit economics but lost pricing power when open networking standards commoditized its gear. Sun Microsystems suffered when cheaper hardware and free, open-source software (such as Linux and Apache) undercut it, and commodity hardware plus cloud computing made its servers irrelevant.
However, these companies did not decline because they were infrastructure providers. They declined because they failed to identify the right business model before their capital ran out or were disrupted by alternatives, including open or free systems, despite having the first-mover advantage.
Meanwhile, other infrastructure players thrived. Amazon, seen mostly as an e-commerce site, earned 70 per cent of its operating profit from Amazon Web Services – hosting startups and big players such as Netflix. AWS eliminated the need to buy hardware and continually cut prices, especially in its earlier years, catalyzing a new wave of businesses and ultimately driving demand while increasing AWS’s revenue.
In hindsight, the dot-com boom was real – it simply took time for usage to catch up to the hype. By the late 2000s, mobile, social and cloud surged. Internet-native giants (Netflix, Google, etc.) grew quickly with products that truly fit the medium. Early front-runners such as Yahoo! and eBay faded. Keep in mind that Facebook was founded in 2004, well after the crash, and Apple shifted from iPods to the revolutionary iPhone in 2007, which further catalyzed the internet explosion. A first-mover advantage might not always pay off.
The first lesson we learned is that open systems disrupt and commoditize infrastructure. At that time, and we are seeing it again now, armies of contributors built open systems for free, allowing them to out-innovate proprietary solutions.
Companies that compete directly against open systems – note that Nvidia does not – are particularly vulnerable at the infrastructure layer, especially those solely building LLMs without any applications, because many open and free alternatives exist. DeepSeek, for example, was inevitable – this is how technology evolves.
Open standards, open source and other open systems dramatically lower costs, reduce barriers to AI adoption and undermine incumbents’ pricing power by offering free, high-quality alternatives. This “creative destruction” drives technological progress.
In other words, OpenAI is in a vulnerable position, as it resembles the software side of Sun Microsystems – competing with free alternatives such as Linux. It also requires significant capital to build out, yet its infrastructure is rapidly becoming commoditized, much like Global Crossing’s situation. On the other hand, Nvidia has a strong portfolio of proprietary technologies with few commoditized alternatives, making its position relatively secure. Nvidia is not the new Sun Microsystems or Cisco.
Most importantly, the disruption and commoditization of infrastructure also democratize AI innovation. Until recently, starting an AI company often required raising millions – if not tens of millions – just to get off the ground. That is already changing, as numerous fast-growing companies have started and scaled with minimal initial capital. This is leading to an explosion of innovative startups and further accelerating the flywheel.
The next lesson we learned is that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. Its atomic unit is connectivity. During its rise, “the cost of connectivity” steadily declined, while productivity gains from increased connectivity continued to expand demand. The flywheel turned faster and faster, forming a virtuous cycle.
Similarly, AI is the first technology in human history capable of learning, reasoning, creativity, cross-domain functions and decision-making. Crucially, AI’s influence is no longer confined to preprogrammed software running on computing devices; it now extends into all types of machines. Hardware and software, combined with collective learning, enable autonomous cars and other systems like robots to adapt intelligently in real time with little or no predefined instructions.
These breakthroughs are reaching sectors scarcely touched by the internet revolution, including manufacturing and energy. This goes beyond simple digitization; we are entering an era of autonomous operations and, ultimately, autonomous businesses, allowing humans to focus on higher-value tasks.
As with connectivity costs in the internet era, in this AI era, “the cost of intelligence” has been steadily declining. Meanwhile, the value derived from increased intelligence continues to grow, driving further demand – this mirrors how the internet played out and is already happening again for AI. The parallels between these two platform shifts suggest that massive economic value will be created or shifted from incumbents, opening substantial investment opportunities across early-stage ventures, growth-stage private markets and public investments.
Just as the early internet boom heavily focused on infrastructure, a significant amount of capital has been invested in enabling AI technologies. However, over time, economic value shifts from infrastructure to applications – just as it did with the internet.
This doesn’t mean there are no opportunities in AI infrastructure – far from it. Remember, more than half of Amazon’s profits come from AWS. Services such as AWS that provide access to AI will continue to benefit as demand soars. Similarly, Nvidia will continue to benefit from the rising demand. However, many of today’s most valuable companies – both public and private – are in the application layer or operate full-stack models.
Despite these advancements, this transformation won’t happen overnight, but it will likely unfold more quickly than the internet disruption – which took more than a decade – because many core technologies for rapid innovation are already in place.
AI revenues might appear modest today and don’t yet show up in the public markets. However, if we look closer, some AI-native startups are already growing at an unprecedented pace. The disruption isn’t a prediction; it’s already happening.
As Bill Gates once said, “Most people overestimate what they can achieve in one year and underestimate what they can achieve in ten years.”
The AI revolution is just beginning. The next decade will bring enormous opportunities – and a new wave of tech giants, alongside inevitable casualties.
It’s a land grab – you just need to know which land to seize!
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Fibra is developing smart underwear embedded with proprietary textile-based sensors for seamless, non-invasive monitoring of previously untapped vital biomarkers. Their innovative technology provides continuous, accurate health insights—all within the comfort of everyday clothing. Learning from user data, it then provides personalized insights, helping women track, plan, and optimize their reproductive health with ease. This AI-driven approach enhances the precision and effectiveness of health monitoring, empowering users with actionable information tailored to their unique needs.
Fibra has already collected millions of data points with its product, further strengthening its AI capabilities and improving the accuracy of its health insights. While Fibra’s initial focus is female fertility tracking, its platform has the potential to expand into broader areas of women’s health, including pregnancy detection and monitoring, menopause, and detection of STDs and cervical cancer, fundamentally transforming how we monitor and understand our bodies.
Perfect Founder-Market Fit
Fibra was founded by Parnian Majd, an exceptional leader in biomedical innovation. She holds a Master of Engineering in Biomedical Engineering from the University of Toronto and a Bachelor’s degree in Biomedical Engineering from TMU. Her achievements have been widely recognized, including being an EY Women in Tech Award recipient, a Rogers Women Empowerment Award finalist for Innovation, and more.
We are thrilled to support Parnian and the Fibra team as they push the boundaries of AI-driven smart textiles and health monitoring. We are entering a golden age of deep-tech innovation and software-hardware convergence—a space we are excited to champion at Two Small Fish Ventures.
Stay tuned as Fibra advances its mission to empower women through cutting-edge health technology.
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
The Two Small Fish team is thrilled to announce our investment in Hepzibah AI, a new venture founded by Untether AI’s co-founders, serial entrepreneurs Martin Snelgrove and Raymond Chik, along with David Lynch and Taneem Ahmed. Their mission is to bring next-generation, energy-efficient AI inference technologies to market, transforming how AI compute is integrated into everything from consumer electronics to industrial systems. We are proud to be the lead investor in this round, and I will be joining as a board observer to support Hepzibah AI as they build the future of AI inference.
The Vision Behind Hepzibah AI
Hepzibah AI is built on the breakthrough energy-efficient AI inference compute architecture pioneered at Untether AI—but takes it even further. In addition to pushing performance/power harder, it can handle training loads like distillation, and it provides supercomputer-style networking on-chip. Their business model focuses on providing IP and core designs that chipmakers can incorporate into their system-on-chip designs. Rather than manufacturing AI chips themselves, Hepzibah AI will license its advanced AI inference IP for integration into a wide variety of devices and products.
Hepzibah AI’s tagline, “Extreme Full-stack AI: from models to metals,” perfectly encapsulates their vision. They are tackling AI from the highest levels of software optimization down to the most fundamental aspects of hardware architecture, ensuring that AI inference is not only more powerful but also dramatically more efficient.
Why does this matter? AI is rapidly becoming as indispensable as the CPU has been for the past few decades. Today, many modern chips, especially system-on-chip (SoC) devices, include a CPU or MCU core, and increasingly, those same chips will require AI capabilities to keep up with the growing demand for smarter, more efficient processing.
This IP-licensing approach allows Hepzibah AI to focus on programmability and adaptable hardware configurations, ensuring they stay ahead of the rapidly evolving AI landscape. By providing best-in-class AI inference IP, Hepzibah AI is in a prime position to capture this massive opportunity.
An Exceptional Founding Team
Martin Snelgrove and Raymond Chik are luminaries in this space—I’ve known them for decades. David Lynch and Taneem Ahmed also bring deep industry expertise, having spent years building and commercializing cutting-edge silicon and software products.
Their collective experience in this rapidly expanding, soon-to-be ubiquitous industry makes investing in Hepzibah AI a clear choice. We can’t wait to see what they accomplish next.
P.S. You may notice that the logo is a curled skunk. I’d like to highlight that the skunk’s eyes are zeros from the MNIST dataset. 🙂
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
It’s been almost three years since I stepped aside from my role as CEO of Wattpad, yet I’m still amazed by the reactions I get when I bump into people who have been part of the Wattpad story. The impact continues to surface frequently, in unexpected and inspiring ways.
Wattpad has always been a platform built on storytelling for all ages and genders. That being said, our core demographic—roughly 50% of our users—has been teenage girls. Young women have always played a pivotal role in the Wattpad community.
Next year, Wattpad will turn 20 (!)—a milestone that feels both surreal and deeply rewarding. When we started in 2006, we couldn’t have imagined the journey ahead. But one thing is certain: our early users have grown up, and many of them are now in their 20s and 30s, making their mark on the world in remarkable ways.
A perfect example: at our recent masterclass at the University of Toronto, I ran into Nour. A decade ago, she was pulling all-nighters reading on Wattpad. Today, she’s an Engineering Science student at the University of Toronto, specializing in machine intelligence. Her story is not unique. Over the years, I’ve met countless female Wattpad users who are now scientists, engineers, and entrepreneurs, building startups and pushing boundaries in STEM fields.
This is incredibly fulfilling. Many of them have told me that they looked up to Wattpad and our journey as a source of inspiration. The idea that something we built has played even a small role in shaping their ambitions is humbling.
Now, as an investor at Two Small Fish, I’m excited about the prospect of supporting these entrepreneurs in the next stage of their journey. Some of these Wattpad users will go on to build the next great startups, and it would be incredible to be part of their success, just as they were part of Wattpad’s.
On this International Women’s Day, I want to celebrate this unintended but, in hindsight, obvious outcome: a generation of young women who grew up on Wattpad are now stepping into leadership roles in tech and beyond. They are the next wave of innovators, creators, and entrepreneurs, and I can’t wait to see what they build next.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
In many companies, the bottleneck isn’t necessarily in the execution of decisions. The real bottleneck is the excessive time people waste making decisions.
When I was Wattpad’s CEO, everyone in the company knew I had a simple 2×2 framework to empower the whole team to make fast, high-quality decisions – all by themselves!
The essence of this framework comes down to two questions:
• Is this decision reversible?
• Is this decision consequential?
These two factors create four types of decisions:
1. Reversible and inconsequential
2. Irreversible and inconsequential
3. Reversible and consequential
4. Irreversible and consequential
Examples of Each Type
1. Reversible and Inconsequential
This actually makes up the bulk of decisions in a company:
• Internal Slack messages? Delete them if you don’t like them.
• Marketing team’s benign social media copy? Remove the post if it doesn’t work.
• Small typo like the one in the above image? Yes, I purposely left the typo there. I look sloppy, but I could silently replace it with a better one when I have time.
• Small bugs in the product? If a bug fix causes other problems, revert the changes.
The list goes on. The trick is to empower each person in the company to make these decisions independently. I reinforced the same message to the Wattpad team over and over again:
From the most junior interns to the most senior leaders—you’re empowered to make the call all by yourself.
No boss to ask. No approval process. Just do it!
The company moves fast when most decisions don’t require a meeting!
2. Irreversible and Inconsequential
Here’s an example:
At one point, we ran out of space at Wattpad’s Toronto HQ and needed overflow space. We found a small office—just a few hundred square feet with a couple of meeting rooms—in the building right next door. The location was perfect, but the space itself? Just okay.
The problem was the lease—it was relatively long. Once we signed, we couldn’t back out. That limited our flexibility (irreversible), but we knew that if we needed more room, we could always find another expansion space. The cost was small in the grand scheme of things (inconsequential).
Given our growth, there was little downside to signing the lease. So we moved fast, signed the deal, and moved on to the next item on the to-do list.
For this type of decision, you can still move fast. Just be careful—double-check the lease for any hidden “gotchas.” It’s not about whether we sign; we will sign. We just want to make sure the bases are covered before we do.
You’d be surprised how much time people waste on indecision. Just make the call and do the due diligence!
3. Reversible and Consequential
When done properly, product releases can be very consequential but still reversible. At Wattpad, we released high-risk software all the time—but always with a way to roll back if things didn’t work.
We knew how to press the undo button!
For these kinds of decisions, move fast and make the call—but monitor the outcome and always be ready to press undo.
Important: How to Increase the Quality of These Decisions
For both Irreversible and Inconsequential decisions and Reversible and Consequential decisions, always ask:
Is there any way to make this decision more reversible or less consequential?
If you can tweak the decision to minimize fallout—no matter how small—do it. It will save time and stress down the road.
4. Irreversible and Consequential
Many of these are leadership-team-level or CEO-level decisions.
They’re rare but also the hardest to make. They require a lot of context, consideration, and, sometimes, choosing between two bad options. Occasionally, you get lucky and choose between a few great options instead.
The ultimate example for me?
Whether to take the company public, maintain the status quo and keep going, or accept an acquisition offer.
Sometimes, knowing which quadrant a decision falls into is an art. But imagine if we didn’t have this framework—slow decision-making would have ground the company to a halt.
The key to moving fast isn’t just execution—it’s deciding fast, too.
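For the engineers reading this, the routing logic of the framework fits in a few lines. This is a toy sketch for illustration, not something we actually ran at Wattpad:

# The 2x2 in code: classify a decision by reversibility and consequence,
# then route it. The hard part in practice is judging the two booleans.
def route(reversible: bool, consequential: bool) -> str:
    if reversible and not consequential:
        return "Anyone decides alone. No approval needed. Just do it."
    if not reversible and not consequential:
        return "Make the call fast, but do the due diligence first."
    if reversible and consequential:
        return "Move fast, monitor closely, keep the undo button ready."
    return "Escalate: a leadership- or CEO-level call with full context."

print(route(reversible=True, consequential=False))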
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
“Deep Tech” is one of those terms that gets thrown around a lot in venture capital and startup circles, but defining it precisely is harder than it seems. If you check Wikipedia, you’ll find this:
Deep technology (deep tech) or hard tech is a classification of organization, or more typically a startup company, with the expressed objective of providing technology solutions based on substantial scientific or engineering challenges. They present challenges requiring lengthy research and development and large capital investment before successful commercialization. Their primary risk is technical risk, while market risk is often significantly lower due to the clear potential value of the solution to society. The underlying scientific or engineering problems being solved by deep tech and hard tech companies generate valuable intellectual property and are hard to reproduce.
At a high level, this definition makes sense. Deep tech companies tackle hard scientific and engineering problems, create intellectual property, and take time to commercialize. But what do “substantial scientific or engineering challenges” actually mean? Specifically, what counts as substantial? “Substantial” is a vague word. A difficult or time-consuming engineering problem isn’t necessarily a deep tech problem. There are plenty of startups that build complex technology but aren’t what I’d call deep tech. Deep tech is about tackling problems where existing knowledge and tools aren’t enough.
In 1964, Supreme Court Justice Potter Stewart famously said, “I know it when I see it” when asked to describe his test for obscenity in Jacobellis v. Ohio. By no means am I comparing deep tech to obscenity—I don’t even want to put these two things in the same sentence. However, there is a parallel between the two: they are both hard to put into a strict formula, but experienced technologists like us recognize deep tech when we see it.
So, at Two Small Fish, we have developed our own simple rule of thumb:
If we see a product and say, “How did they do that?” and upon hearing from the founders how it is supposed to work, we still say, “Team TSF can’t build this ourselves in 6–12 months,” then it’s deep tech.
At TSF, we invest in the next frontier of computing and its applications. We’re not just looking for smart founders. We’re looking for founders who see things others don’t—who work at the edge of what’s possible. And when we find them, we know it when we see it.
This test has been surprisingly effective. Every single investment we’ve made in the past few years has passed it. And I expect it will continue to serve us well.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Sonos replaced its CEO last week. The company faced significant backlash after launching a redesigned app earlier last year that was plagued by bugs, missing features, and connectivity issues, frustrating customers and tarnishing its reputation. This also led to layoffs, poor sales, and a significant drop in stock price.
While I usually don’t comment on companies I’m not involved with, as a long-time Sonos user, I was very frustrated that the alarm feature I had been relying on to wake me up in the morning for well over a decade disappeared overnight. There were other issues, too.
Throughout my career, I have worked on numerous redesign projects. A fiasco like this is totally avoidable. Today, I am sharing a couple of internal blog posts I wrote for my team (when I was Wattpad’s CEO) about this topic. Of course, these are just examples of the general framework I used. In practice, there are many specific details in each redesign that I helped guide the team through, as frameworks like this are like a hammer. Even the best hammer in the world is still just a hammer. The devil is in the details of how you use it.
These internal blog posts are just some of the hammers and drills in my toolbox that I use to help our portfolio CEOs navigate trade-offs and move fast without breaking things.
Happy reading through a sample of my collection of half a million words!
Note: These two posts have been mildly edited to improve readability.
Blog Post #1 – Subject: Feature Backward Compatibility
I have gone through major technology platform redesigns many times in my career. One problem that arises every single time is backward compatibility.
The reason is easy to understand: users can interact with complex products (such as Wattpad) in a million different ways. There is no way the engineering team could anticipate all the permutations.
There are two common ways to solve this problem. First, run an extensive beta program. This is what big companies like Apple and Microsoft do when they update their operating systems. This approach is also a great way to push some of the responsibility to their app developers. Even with virtually unlimited resources, crowdsourcing from app developers is still a far better approach. However, running an extensive beta program takes a lot of time and resources. Most companies can’t afford to do that.
The other approach is to roll out the changes progressively and incrementally. It is very tempting to make all the big changes at once, roll them out in one shot, and roll the dice. However, I am almost certain that it will backfire. Not only is it a frustrating experience for both users and engineers, but it also makes the project schedule much less predictable and, in most cases, causes the project to take much longer than anticipated.
Next year, when we focus on our redesign to reduce tech debt, don’t forget to set aside some time budget for these edge conditions that are so easily overlooked. Also, think about how we can roll out the changes more incrementally to minimize the negative impact on our users.
Blog Post #2 – Subject: The Reversibility and Consequentiality Framework
The other day, I spoke to the CEO of another consumer internet company. In terms of the scale of its user base, this company is much smaller than Wattpad, but we are still talking about millions of users here.
Like us, this company has been around for over a decade. Not surprisingly, technical debt has been an ongoing concern. A few years ago, the team decided to completely redesign its platform from the ground up. The redesign was a multi-year effort, and the team finally pulled back the curtain a year ago. While it is working fine now, this CEO told me that it took a few months before they fixed all the issues and reimplemented all the “missing” features because many of their users were using the product in “interesting” ways that the new version did not support.
These problems are fairly common when redesigning a new system from the ground up. In practice, it is simply impossible to take all the permutations into account, no matter how carefully you plan. However, if we mess things up, our user base is so large that it might negatively impact (or ruin!) 100 million people’s lives in the worst-case scenario.
On the flip side, over-planning could burn through a lot of unnecessary cycles.
One way or another, we should not let these challenges deter us from moving forward or even slow us down because there are many ways to mitigate potential problems. In principle, ensuring that the rollout is reversible and inconsequential is key.
The former is easy to understand: Can we roll back when things go wrong? Do we have a kill switch when updating our mobile apps? These are best practices that we have already been using.
However, at times, these best practices might not be possible. Can we reduce the consequentiality when rolling out? If the iOS app were completely redesigned, could we do it in smaller chunks, parallel-run the new and old versions at the same time, or try the new version on 0.1% of our users first? If not, could we roll out the new app in a small country first?
Again, our objective is not to avoid any problem at all costs. Our objective is to minimize (but not eliminate) the negative impact when things go wrong—not if things go wrong. Although Wattpad going dark for 100 million people for an extended period of time is not acceptable, in the spirit of speed, it is perfectly okay if we have ways to hit reverse or reduce the impact to only a small percentage of our users. These are not rocket science, but they do require a bit more thoughtfulness because our user base is so large that we can’t simply roll the dice.
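A minimal sketch of what one of these guardrails can look like in code, assuming hypothetical names and thresholds (production systems typically use a feature-flag service for this):

# Staged rollout gate with a kill switch. Each user hashes to a stable bucket,
# so the same users stay in the new experience as the percentage grows.
import hashlib

KILL_SWITCH_ON = False    # flip to True to revert everyone to the old path
ROLLOUT_PERCENT = 0.1     # start with 0.1% of users, then ramp up

def in_new_experience(user_id: str) -> bool:
    if KILL_SWITCH_ON:
        return False
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0   # stable value in [0, 100)
    return bucket < ROLLOUT_PERCENT

print(in_new_experience("user-12345"))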
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Who are the top 50 VCs in Canada? Two Small Fish Ventures is one of them! At Two Small Fish Ventures, we are deeply honoured to be named among Canada’s top 50 venture capital firms in this year’s edition of The 50 — the annual guide produced by the Canadian Venture Capital & Private Equity Association (CVCA) and the Trade Commissioner Service (TCS).
This recognition is not just a badge for us; it’s a reflection of the thriving and globally respected Canadian venture ecosystem we are proud to be part of. We share this honour with an incredible group of firms that are shaping the future of technology, science, and innovation across the country and beyond.
If you are an entrepreneur, this list represents the Canadian VCs you should talk to — firms committed to partnering with visionary founders, pushing boundaries, and building category-defining companies.
We look forward to continuing to back the next generation of transformational founders and are grateful to the CVCA and TCS for this spotlight.
The Full List: Canada’s Top 50 VCs
Here’s the full list of the firms recognized this year (in alphabetical order):
In the past few days, Eva and I had the privilege of joining the University of Toronto delegation in Stockholm to celebrate University Professor Emeritus Geoffrey Hinton, the 2024 Nobel Laureate in Physics. The events, organized by the University, were a fitting tribute to Professor Hinton’s groundbreaking contributions to AI, a technology that will transform our world in the decades to come.
The celebration was a blend of thoughtful discussions, historic venues, and memorable moments. It all began with a birthday party for Professor Hinton, followed by a fireside chat, an inspiring dinner at the iconic Vasa Museum, and a panel exploring Canada’s leadership in AI at the Embassy of Canada to Sweden. Each event underscored not only Professor Hinton’s remarkable achievements but also the global impact of Canadian innovation in AI and technology more broadly.
Rather than recount every detail, I’ll let the pictures and their captions tell the story of this extraordinary week. It was an incredible opportunity for us to honour a visionary scientist.
Image: Eating birthday cake with University of Toronto President Meric Gertler and Melanie Woodin, Dean of the Faculty of Arts & Science at the University of Toronto.
Image: The chip built for Professor Hinton in the late 80s so he could test his artificial neural network. The chip was developed before sub-micron technology was widely available; Professor Hinton believes it might be 3-5 microns, but even he wasn’t 100% sure. Upon closer inspection, it appears there were six neurons on the grid.
Image: Taking a picture with U of T President Meric Gertler, Chancellor Wes Hall, and Professor Leah Cowen.
Image: Dining in front of the unique and well-preserved warship Vasa from 1628!
Image: Fireside chat with University Professor Emeritus Geoffrey Hinton and Patchen Barss, science journalist, speaker, and author.
Image: Taking a picture with Untether AI co-founder Raymond Chik and Vector Institute CEO Tony Gaffney at the reception after the fireside chat.
Image: An insightful panel moderated by Professor Leah Cowen. Panellists included Professor Eyal de Lara, Vector Institute CEO Tony Gaffney, Professor David Lie, and Professor Amy Loutfi.
Image: Ericsson’s Head of AI Jörgen Gustafsson.
Image: Jörgen Gustafsson with Jason LaTorre, the Ambassador of Canada to Sweden.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Note: One of the most common pieces of feedback we receive from entrepreneurs is that TSF partners don’t think, act, or speak like typical VCs. The Contrarian Series is meant to demystify this, so founders know more about us before pitching.
For Wattpad, it was exactly ten years between raising our first round of venture capital in 2011 and the company’s acquisition in 2021. Over that decade, we discussed countless topics in our board meetings.
But one topic we never discussed? Exit strategies.
I distinctly remember raising the question with a board member a couple of years before the acquisition. “We’ve been venture-backed for almost ten years now. Should we start talking about exit…”
I couldn’t even finish the sentence. That board member cut me off:
“Allen, I just want you to build a great company.”
That moment stuck with me. Only after the acquisition did I fully appreciate the significance of those ten years as a venture-backed company without focusing on an exit.
Wattpad’s four largest investors—USV, Khosla Ventures, OMERS, and Tencent—enabled us to focus on building the business, not selling it. OMERS, as a pension fund, and Tencent, as a strategic investor, don’t operate under the typical 10-year fund cycle that drives many venture firms to push for exits. USV, with its consistent track record of generating world-class returns, had the trust of its LPs to prioritize long-term value over short-term outcomes. And Khosla Ventures? Well, no one can tell Vinod Khosla what to do, and he loves making big, long-term bets.
Their perspectives freed us to focus on building a great company rather than prematurely worrying about how to sell it.
In early 2020, a year before Wattpad was acquired for US$660M, we set an ambitious company objective: to become “Investment Ready.” This meant ensuring we could scale profitably and confidently project $100M+ in revenue with a minimum of 40% year-over-year growth. By the end of 2020, we wanted to be in a position to choose between preparing for an IPO (we even reserved our ticker symbol WTPD), raising growth capital to accelerate expansion, or scaling organically without any additional funding.
When an inbound acquisition offer came in mid-2020, this optionality proved invaluable. It allowed us to run a proper process with multiple interested parties. We were clear with potential acquirers: our preference was to remain independent. If the offer wasn’t higher than the value we could command through an IPO, we weren’t interested, and we would walk away. Because we had the fundamentals to back it up, no one doubted us.
This underscores an important point: the best way to generate a great outcome is to build an amazing business. Focus on creating value, and optionality will follow.
Any CEO who claims to have an exit strategy—especially in the early stages—is either naïve, delusional, or lying.
Here’s the reality: M&A is far less common than people think. The pool of serious potential acquirers often narrows to just a handful in the best-case scenarios. And even then, the stars have to align—you need the right timing, the right strategic fit, and the right price. It’s easier said than done.
Of course, that doesn’t mean I ignored the idea of acquisition entirely (and founders should consider M&A, but only under the right circumstances, and I will save it for another blog post). For instance, we built relationships with potential strategic acquirers and stayed aware of the landscape. But the time I spent on this was minimal. Even my leadership team occasionally asked why I never talked about M&A. The answer was simple: it wasn’t a priority.
Too many founders overthink their “exit strategy,” and it often backfires. Changing their product to appeal to a potential acquirer? Building one-sided partnerships in the hope they’ll buy the company? Hope is not a strategy.
The same goes for VCs. Some overthink their portfolio companies’ “exit strategy” because they worry about selling before the 10-year fund window closes. While this concern is valid, it doesn’t mean they should push their best portfolio companies to sell. There are many ways for VCs to liquidate their positions without forcing a sale. Ironically, the best way for a founder to help their investors exit is to focus on increasing enterprise value. Shares in a great company are always in demand.
For an early-stage startup, having an exit strategy is as absurd as asking an infant to decide which jobs they’ll apply to after university. The founders’ job is to nurture that infant—raise them into a great human being. The results will follow.
Build a great business, and everything else will fall into place. There’s an old saying: Great companies get bought, not sold. It couldn’t be more true.
P.S. Founders, if you have an exit strategy slide in your pitch deck, please remove it before pitching to us. TYSM!
P.P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
There are three distinct phases in the journey of building a great tech company: technology, product, and commercialization. These phases are sequential yet interconnected and sometimes overlap. Needless to say, mastering each is critical to the company’s eventual success. However, it’s important to recognize their differences.
• Building technology is about founders creating what they love. It’s driven by passion and expertise and often leads to groundbreaking innovations.
• Building a product is about creating something others love to use. This is where usability and solving real problems come into focus.
• Commercialization is about building something people will pay for and driving revenue. This phase transforms users into paying customers or finds someone else, such as advertisers, to pay for it.
These phases are related but distinct. Great technology doesn’t guarantee anyone will use it, and a widely-used product doesn’t always lead to revenue. I’ve seen many technologists create incredible technologies no one adopts, as well as popular products that fail to commercialize effectively (though it’s rare for a product with tens of millions of users to fail entirely).
For deep tech companies, these phases often have minimal overlap and unfold sequentially. The technology might take years to develop before a usable product emerges, and commercialization may come even later.
In contrast, shallow tech B2B SaaS products often see complete overlap between the phases. For example, a subscription model is typically apparent from the outset, and the tech, product, and commercialization phases blend seamlessly.
Wattpad is also a good example of how these phases can play out differently. Initially, we built our technology and product hand in hand, creating a platform loved by millions of users. However, its commercialization—whether through ads, subscriptions, or movies, the three revenue models we had—was deliberately delayed. Many people assumed we didn’t know how to make money without understanding this counterintuitive approach (but of course, we purposely kept some of our strategies under wraps). This approach allowed us to use “free” as a potent weapon to dominate—and eliminate—our competitors in a winner-takes-all strategy. Operating for years with minimal revenue was clearly the right decision for the market dynamics and our long-term goals. More on this in a separate blog post.
Given this variability, the question “What is your revenue?” must be asked thoughtfully and in context. For some companies, the absence of revenue may be an intentional and brilliant strategy. For others, insufficient revenue could signal serious trouble. It all depends on the company’s stage, strategy, and goals. Understanding the sequence, timing, and specific needs of a business model is crucial for both investors and entrepreneurs. Zero revenue could be a blessing in the right context. On the other hand, pushing for revenue growth—let alone the wrong type of revenue growth—can be fatal, a scenario we’ve seen many times.
At Two Small Fish Ventures, we are very thoughtful and experienced investors. We understand that starting to generate revenue—or choosing not to generate revenue—at the right time is one of the secrets to success that very few people have mastered. We practise what we preach. Over the past two years, all but one of TSF’s investments have been pre-revenue.
No revenue? No problem. In fact, that’s great. Bring them on!
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
Those who know me well would tell you I am a pretty boring person. I don’t have many hobbies, but one thing I do love is gadgets. For instance, I’m a big fan of DIY home automation. Practically every electronic device in my house is voice-controlled, automated, and Wi-Fi-connected—if it can be, it probably is. Here’s a fun example:
I love robots doing things for me because, frankly, I’m too busy.
At this rate, I might run out of IP addresses! Sure, I could change my network’s subnet to make room for more, but every time I tinker with my setup, I have to invest time getting everything right again—something I don’t have in abundance. Anyway, I digress.
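Digression within the digression, for fellow nerds: the subnet arithmetic looks like this, using the standard private ranges:

# A default home network is usually a /24, which tops out at 254 usable
# addresses. Widening the prefix makes room for more gadgets.
import ipaddress

for prefix in ("192.168.1.0/24", "192.168.0.0/22"):
    net = ipaddress.ip_network(prefix)
    print(prefix, "->", net.num_addresses - 2, "usable hosts")
# 192.168.1.0/24 -> 254 usable hosts
# 192.168.0.0/22 -> 1022 usable hosts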
One gadget I’ve wanted for years but hesitated to get is a home energy storage and backup system, like Tesla’s Powerwall. The Powerwall 2 has been around since 2016, but for years, the Powerwall 3 was “just around the corner,” with rumours of its launch “next month” seemingly every month. I didn’t want to invest in a device I planned to use for a decade only for it to become obsolete right after I bought it.
Finally, the wait is over. Powerwall 3 became available earlier this year, and I’m glad I waited. Its specs—peak power, continuous power, and efficiency—are significantly upgraded from Powerwall 2. That said, I was a little disappointed that its battery capacity remained unchanged.
I’m told this was the first Powerwall 3 installation in Canada, which is pretty exciting! It’s a beautiful piece of technology, though I don’t see much of it since it’s tucked away in the basement. Paired with solar panels, it should let me stay “off the grid” as much as possible.
As good as the Powerwall 3 is, it’s only part of the solution. While it handles storage and backup very well, it doesn’t provide fine-grained energy monitoring, let alone control. To address this, I also installed a Sense energy monitor. This device, connected to the electrical panel, collects real-time data from electrical currents to identify unique energy signatures for every appliance and device in the home. It’s a hack, a retrofit solution, and an imperfect one, but it’s probably the best option for someone like me, who is entrenched in the Tesla ecosystem.
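As a toy illustration of the signature idea, matching a step change in whole-home power draw to the nearest known appliance might look like the sketch below. Real devices such as Sense work from high-frequency current waveforms and machine learning; the wattages here are made up:

# Toy load disaggregation: attribute a jump in total power draw to the
# appliance with the closest known signature. Wattages are illustrative.
signatures = {"kettle": 1500.0, "fridge compressor": 120.0, "LED strip": 18.0}

def guess_appliance(delta_watts: float) -> str:
    return min(signatures, key=lambda name: abs(signatures[name] - delta_watts))

print(guess_appliance(1480.0))  # -> kettle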
The energy space hasn’t changed much in the past half-century. Take the electric panel, for example—it’s still essentially the same analog system I remember from my childhood. However, with the rapid acceleration of the energy transition, smarter energy systems are becoming critical as hardware and software converge to enable new possibilities.
A big thanks to James and Dave from the Borealis Clean Energy team for helping me with this project—and for arriving in style with Canada’s first Cybertruck. The project has so many moving parts, and their expertise made this journey much smoother.
Image: Unboxing PW3!
Image: Zooming in to the power electronics.
Image: The electricians are working hard. It is a big job!
Image: It is done!
Image: A big thank you to James.
Image: This is the Tesla Gateway, a separate box we need to install. It is a smaller box—roughly a quarter of the size of PW3—and where “the brain” is located.
Image: Adding Sense – the orange box – to my old-school electric panel to help me with device-level monitoring.
Image: First Cybertruck in Canada. This thing draws attention.
More than two decades ago, before I started my first company, I was involved with an internet startup. Back then, the internet was still in its infancy, and most companies had to host their own servers. The upfront costs were daunting—our startup’s first major purchase was hundreds of thousands of dollars in Sun Microsystems boxes that sat in our office. This significant investment was essential for operations but created a massive barrier to entry for startups.
Fast forward to 2006 when we started Wattpad. We initially used a shared hosting service that cost just $5 per month. This shift was game-changing, enabling us to bootstrap for several years before raising any capital. We also didn’t have to worry about maintaining the machines. It dramatically lowered the barrier to entry, democratizing access to the resources needed to build a tech startup because the upfront cost of starting a software company was virtually zero.
Eventually, as we scaled, we moved to AWS, which was more scalable and reliable. Apparently, we were AWS’s first customer in Canada at the time! It became more expensive as our traffic grew, but we still didn’t have to worry about maintaining our own server farm. This significantly simplified our operations.
A similar evolution has been happening in the semiconductor industry for more than two decades, thanks to the fabless model. Fabless chip manufacturing allows companies—large or small—to design their semiconductors while outsourcing fabrication to specialized foundries. Startups like Blumind leverage this model, focusing solely on designing groundbreaking technology and scaling production when necessary.
But fabrication is not the only capital-intensive aspect. There is also the need for other equipment once the chips are manufactured.
During my recent visit to ventureLAB, where Blumind is based, I saw firsthand how these startups utilize shared resources for this additional equipment. Not only is Blumind fabless, but they can also access various hardware equipment at ventureLAB without the heavy capital expenditure of owning it.
Image: Let’s see how the chip performs at -40C!
Image: Jackpine (first tapeout)
Image: Wolf (second tapeout)
Image: BM110 (third tapeout)
The common perception that semiconductor startups are inherently capital-intensive couldn’t be more wrong. The fabless model—in conjunction with organizations like ventureLAB—functions much like cloud computing does for software startups, enabling semiconductor companies to build and grow with minimal upfront investment. For the most part, all they need initially are engineers’ computers to create their designs until they reach a scale that requires owning their own equipment.
Fabless chip design combined with shared resources at facilities like ventureLAB is democratizing the semiconductor space, lowering the barriers to innovation, and empowering startups to make significant advancements without the financial burden of owning fabrication facilities. Labour costs aside, the upfront cost of starting a semiconductor company like Blumind could be virtually zero too.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
When it comes to watches, my go-to is a Fitbit. It may not be the most common choice, but I value practicality, and not having to recharge daily is a necessity for me. My Fitbit lasts about 4 to 5 days—decent, but still not perfect.
Now, imagine if we could extend that battery life to a month or even a year. The freedom and convenience would be incredible. Considering the immense computing demands of modern smartwatches, this might sound far-fetched. But that’s where our portfolio company, Blumind, comes into play.
Blumind’s ultra-low power, always-on, real-time, offline AI chip holds the potential to redefine how we think about battery life and device efficiency. This advancement enables edge computing with extended battery life, potentially lasting years – not a typo – instead of days. Products powered by Blumind can transform user behaviours and empower businesses and individuals to unlock new and impactful value (see our thesis).
Blumind’s secret lies in its brain-inspired, all-analog chip design. The human brain is renowned for its energy-efficient computing abilities. Unlike most modern chips that rely on digital systems and require continuous digital-to-analog and analog-to-digital conversions (which drain power), Blumind’s approach emulates the brain’s seamless analog processing. This unique architecture results in chips that could be up to 1,000 times more energy-efficient than conventional chips, making them ideal for power-sensitive edge AI applications.
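Some back-of-envelope arithmetic shows why a gain like that turns days into years. Every number below is an assumption for illustration, not a Blumind spec:

# Battery life for a hypothetical always-on inference workload.
battery_wh = 0.74                 # ~200 mAh at 3.7 V: a small wearable cell
digital_w = 0.020                 # assumed digital inference draw: 20 mW
analog_w = digital_w / 1000       # a 1,000x efficiency gain: 20 uW

print(battery_wh / digital_w, "hours on digital")              # ~37 hours
print(battery_wh / analog_w / 24 / 365.25, "years on analog")  # ~4.2 years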
Blumind’s breakthrough technology has practical and wide-ranging applications. Here are just a few use cases:
• Always-on Keyword Detection: Integrates into various devices for continuous voice activation without excessive power usage.
• Rapid Image Recognition: Supports always-on visual wake word detection for applications such as access control, enhancing human-device interaction with real-time responses.
• Time-Series Data Processing: Processes data streams with exceptional speed for real-time analysis in areas like predictive maintenance, health monitoring, and weather forecasting.
These capabilities unlock new possibilities across multiple industries, including wearables, smart home technology, security, agriculture, medical, smart mobility, and even military and aerospace.
A few weeks ago, I visited Blumind’s team at their ventureLAB office and got an up-close look at their BM110 chip, now in its third tapeout. Blumind exemplifies the future of semiconductor startups through its fabless model, which significantly lowers the initial infrastructure costs associated with traditional semiconductor companies. With resources like ventureLAB supporting them, Blumind has managed to innovate with remarkable efficiency and sustainability. (I’ll share more about the fabless model in an upcoming post.)
I’m thrilled to see where Blumind’s journey leads and how its groundbreaking technology will transform daily life and reshape multiple industries. When devices can go years without needing a recharge instead of mere hours, that’s nothing short of game-changing.
Image: Close-up view of BM110. It is a piece of art!
Image: Qualification in action. Note that BM110 (lower-left corner) is tiny and space-efficient.
Image: The Blumind team is working hard at their ventureLAB office. More on this in a separate blog post here.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
More than two decades ago, I co-founded my first company, Tira Wireless. The business went through several iterations, and eventually, we landed on building a mobile content delivery product. We raised roughly $30M in funding, which was a significant amount at the time. We even ranked as Canada’s Third Fastest Growing Technology Company in the Deloitte Technology Fast 50.
We had a good run, but eventually, Tira had to shut its doors.
We made numerous strategic mistakes, and I learned a lot—lessons that, quite frankly, helped me make far better decisions when I later started Wattpad.
One of the most important mistakes we made was falling into the “bridge technology” trap.
What is the “bridge technology” trap?
Reflecting on significant “platform shifts” over recent decades reveals a pattern: each shift unleashes waves of innovation. Consider the PC revolution in the late 20th century, the widespread adoption of the internet and cloud computing in the 2000s, and the mobile era in the 2010s. These shifts didn’t just create new opportunities; they also created significant pain points as the world tried to leap from one technology to another. Many companies emerged to solve problems arising from these changes.
Tira started when the world began its transition from web to mobile. Initially, there were countless mobile platforms and operating systems, each with its own idiosyncrasies. That fragmentation created a huge pain point, and Tira capitalized on it. But in a few short years, mobile consolidated into just two major players—iOS and Android. The pain point rapidly disappeared, and so did Tira’s business.
Most “bridge technology” companies perform very well during the transition because they solve a critical, short-term pain point. However, once the world completes the transition, their business disappears. For instance, numerous companies focused on converting websites into iPhone apps when the App Store launched. Where are they now?
Some companies try to leverage what they’ve built and pivot into something new. But building something new is challenging enough, and maintaining a soon-to-be-declining bridge business while transitioning into a new one is even harder. This is akin to the innovator’s dilemma: successful companies often struggle with disruptive innovation, torn between innovating (and risking their profitable products) and maintaining the status quo (and risking obsolescence).
As an investor, it makes no sense to invest in a “bridge” company that is fully expected to pivot within a few years. A pivot should be a Plan B, not Plan A. It’s extremely rare for bridge technology companies to become great, venture-scale investments. In fact, I can’t think of any off the top of my head.
We are currently in the midst of a tectonic AI platform shift. We’re seeing a huge volume of pitches, which is incredibly exciting. Many of these startups built great technologies and products. However, a significant number of these pitches also represent bridge technologies. As the current AI platform shift matures, these bridge technologies will lose relevance. Sometimes, it’s obvious they’re bridge technologies; other times, it requires significant thought to identify them. This challenge is intellectually stimulating, and I enjoy every moment of it. Each analysis informs us of what the future looks like, and just as importantly, what it will not look like. With each passing day, we gain stronger conviction about where the world is heading. It’s further strengthening our “seeing the future is our superpower” muscle, and that’s the most exciting part.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
In the early 2010s, when Wattpad began raising capital from Silicon Valley, Valley VCs didn’t ask me ‘if’ I would move the company or open a second office there; they asked ‘when.’ They argued that while Toronto had top engineering talent, it lacked great product people and scale-up leaders. At that time, it was common for Valley VCs to ask non-Valley companies to move to the Valley as a condition for funding.
But I told them, ‘I won’t move.’
While their argument had a point, Valley VCs failed to see my “big-fish-small-pond” advantages. I didn’t need to hire a million great people. After raising one of the largest funding rounds by a Canadian-based company at the time, I was absolutely sure we could hire “enough” great people to build a world-class company in Toronto, one of the most populous metropolises in North America. Paradoxically, staying could even work to our advantage. As one of Toronto’s biggest fish, we could hire the best; I couldn’t say the same if we moved to the Valley. Besides, building a company culture with a single office location was much easier.
It was a contrarian bet that few people saw, but it was so obvious to me. In hindsight, it was clear that it was the right call.
It all worked well until it didn’t. The Toronto ecosystem went from strength to strength during the 2010s, but that also meant talent competition became very fierce towards the end of the decade. The small pond became a much bigger pond, with a lot of big fish in it, including many Valley-based companies setting up shop here.
The tipping point for me came when someone bought the old building next to Wattpad HQ. We had no idea who was behind it until Google announced that it would turn the site into an office tower and hire a few thousand people. Where? Right next to Wattpad HQ.
My first-mover advantage had eroded. I had to figure out a new plan to regain my big-fish-small-pond advantage.
My solution was to establish a second HQ in a less populous city with a thriving tech ecosystem and an abundance of post-secondary institutions, where we could be the big fish again and still have enough talent to keep growing rapidly. It had to be a Canadian city because I wanted a few existing Wattpad employees to relocate there to help “seed” the culture; that would have been far harder to pull off across the border.
I toured around the country. I was impressed by what I saw. There were a handful of cities that met our criteria. I knew we could make it work.
At that time, I was already very familiar with Halifax, having been involved in the local ecosystem for a while. While there, I took advantage of the opportunity to grab dinner with Jevon McDonald, whom I had known for a few years. Nothing compares to talking to a local guru.
Jevon gave me the rundown of all the nuances I couldn’t find on Google search. But when I asked him to name one thing that he didn’t like about Halifax, this was our conversation:
Jevon: “I have a few employees in San Francisco. Going there is very painful as I have to catch a 5am flight to connect through Toronto first.”
Me: “So, there is no direct flight from Halifax to SF?”
“Nope.”
“Great!”
“What?!”
It’s a short flight between Toronto and Halifax. There are numerous daily flights between the two cities, so day trips are super easy. However, the lack of direct flights to the Valley means Valley-based companies won’t show up any time soon. An unfair disadvantage became my unfair advantage. The lack of direct flights became my talent moat.
The rest is history. Wattpad established its second HQ in Halifax. We hired a lot of fantastic people there. I have been the biggest champion of Atlantic Canada ever since, as I have encouraged other Toronto-based companies to do the same.
It was another contrarian bet that few people saw, but it was so obvious to me. It was the right call.
These are just a couple of examples. Wattpad made many more contrarian bets, like establishing a movie studio or investing in something unproven called AI more than a decade ago.
Similarly, some of our best investments in Two Small Fish Ventures, such as Sheertex or BenchSci, had a very tough time raising capital early on because very few people saw what we saw.
Of course, I am not suggesting that one should be contrarian for the sake of being contrarian. But when a contrarian bet results in a first-mover advantage in a big opportunity that no one else saw, that will almost always generate an amazing outcome with outsized returns.
Don’t tell anyone.
P.S. This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
For any entrepreneur launching an app, finding product-market fit is a lot like finding the Golden Ticket: it’s rare, but when it happens, it’s life-changing.
When you build a consumer app, your end-users can’t easily tell you what they want, unlike enterprise apps, which focus on solving a known problem or pain point for clients. Think about it this way: before the iPhone launched, no consumer research would have pointed to the need for a touchscreen, keyboardless device. Before Snapchat, no consumer would have said they wanted the ability to send ephemeral messages.
Because consumers can’t tell you what they want, building consumer products is a shot in the dark. There is no guarantee that product-market fit will ever be found, let alone when. It’s usually a long journey of continuous iteration.
And ongoing iteration is what gets you to product-market fit. Each iteration gives you one extra at-bat. Hitting a home run is easy if you can strike out 100 times instead of 3. Y Combinator’s Sam Altman said it best in a tweet.
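To put rough numbers on the at-bats idea, here is a toy calculation. The 5% hit rate per iteration is my own assumption, chosen purely for illustration:

```python
# Toy model of the at-bats argument: the probability of at least one
# "home run" in n independent iterations. The 5% per-iteration hit
# rate is an illustrative assumption, not a measured figure.

P_HIT = 0.05

def p_at_least_one(n: int, p: float = P_HIT) -> float:
    """P(at least one success in n tries) = 1 - P(zero successes)."""
    return 1.0 - (1.0 - p) ** n

for n in (3, 10, 100):
    print(f"{n:>3} at-bats -> {p_at_least_one(n):.0%} chance of a hit")
```

Under this toy model, three at-bats give you roughly a one-in-seven chance of a hit, while a hundred make one nearly certain. That is why iteration speed compounds.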
Finding product-market fit is hard. Look at how many consumer products Facebook and Google have shut down even with their massive resources (remember FB Paper, the FB Groups app, or the Google+ app?). Massive resources can help, but they’re not the most critical factor.
In the early days of Wattpad, despite having only a handful of employees, the product looked a bit different every day. We would implement a new concept in the morning, check it in the afternoon, measure overnight, and kill it the next morning if it didn’t work out. That’s how we found product-market fit in many areas. And that’s how we left our competitors in the dust.
Although finding product-market fit is freaking hard, it is also very fun and rewarding once you have figured it out.
The new year means a fresh start. With that in mind, I urge product managers, designers, engineers, and developers – anyone who helps develop a product, really – to think critically about the features they are designing. Have you thought about which features you’ll say goodbye to in January? Because killing features now means better business velocity for the rest of 2019.
As a product and its codebase grow, it is not uncommon to see an increase in technical debt. This debt may arise because usage of a feature has scaled beyond its original design (you can’t expect a Toyota Corolla to reach 300 km/h no matter how many turbochargers you add) or because a feature, and consequently its code, is used in more ways than originally intended (like a lawn mower turned into a snow blower – it works, but it shouldn’t). Often, technical debt accumulates simply because old or infrequently used features aren’t retired.
There is a cost to removing these old features, of course, but removing them is significantly cheaper in the long run than maintaining relic code. When you support outdated or unused features, you’re also inviting security, performance, and backwards-compatibility issues.
I remember reading an article claiming that 90% of Evernote’s features (and it has thousands of them) were used by less than 1% of its users. Eventually, the company’s velocity ground to a halt because every simple feature update required numerous discussions across the company before the change could be implemented.
So make no mistake, it is desirable and even essential to purge old product features. Here’s how in three steps:
1. Identify and measure. Identify a feature you think should be retired, then measure its usage (a minimal measurement sketch follows this list). The data won’t lie. If usage is low, proceed to step two.
2. Get the full context. The numbers may not tell you the whole story. Talk to some of the old-timers who have more context than you and understand why the feature existed in the first place. In many cases, you’ll be surprised by the reasons.
3. Decide and execute. Decide to purge, modernize, or maintain the status quo. Make a decision and then execute your action plan.
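For step one, the measurement can start very simply: count the distinct users who touched each feature and compare that share against a threshold. Here is a minimal sketch; the event-log shape and feature names are hypothetical, and in practice the events would come from your analytics pipeline:

```python
# Minimal sketch of step one: what share of active users touched each
# feature? The event log and feature names below are hypothetical.
from collections import defaultdict

events = [  # (user_id, feature_name) pairs from a hypothetical event log
    ("u1", "export_pdf"), ("u2", "search"), ("u1", "search"),
    ("u3", "search"), ("u3", "dark_mode"), ("u2", "dark_mode"),
]

users_by_feature = defaultdict(set)
all_users = set()
for user, feature in events:
    users_by_feature[feature].add(user)
    all_users.add(user)

THRESHOLD = 0.01  # flag features used by less than 1% of users
for feature, users in sorted(users_by_feature.items()):
    share = len(users) / len(all_users)
    flag = "  <- retirement candidate" if share < THRESHOLD else ""
    print(f"{feature}: {share:.0%} of users{flag}")
```

A real pipeline would segment by cohort and time window, but even this crude share is usually enough to shortlist candidates for step two.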
Years ago, I was part of a team that dedicated six months to finding bugs and purging unused features. On the surface, it seemed we were spending an inordinate amount of time and effort ‘looking in the rear-view mirror’ instead of working on things that took the product forward. In reality, though, those six months pushed the product much, much further ahead. By the end, the product ran faster, the UI was cleaner because many unused features were gone, and annoying glitches were finally addressed. The app went from 1-star to 5-star in a few months without adding anything new.
It’s a good reminder: Less is more. Simple is good.