We are super excited to share that Two Small Fish led YScope’s US$3.9 million financing, with Snow Angels (the Snowflake alumni investment syndicate), Next Wave NYC, UTEST, and other successful founders participating.
YScope was co-founded by University of Toronto Professor Ding Yuan, who also serves as CEO; Professor Michael Stumm; Dr. Kirk Rodrigues; Dr. David Lion; Yu (Jack) Luo; and Beverly Xu (Guangji Xu). It is a deeply impressive team building open-source logging infrastructure for the AI era, combining deep systems research with real-world production traction.
Its core technology, CLP (Compressed Log Processor), makes log storage, search, and analytics dramatically more efficient for both humans and AI, across cloud and edge environments.
We believe this is a massive opportunity. As the cost of intelligence collapses, AI agents, robots, autonomous vehicles, and other intelligent systems will generate orders of magnitude more machine-generated events. A robotic finger moves. A self-driving car makes a slight turn. An AI agent retries a task. Each action creates an event, and the infrastructure layer that can handle that explosion efficiently will matter enormously.
YScope is also a strong mutual fit for TSF. We invest in the next frontier of computing and its applications, and we know firsthand how painful logging becomes at scale. I have spent more hours with logs than I will ever get back. At Wattpad, logging every tap, swipe, and click could easily add up to billions of events a day. That is why YScope’s traction is so compelling, from powering Uber’s production logging platform to operating across more than 1.5 million connected electric vehicles and being used by Fortune 500 organizations.
Congrats to Ding, Michael, Kirk, David, Jack, Beverly, and the entire YScope team. Full blog post here.
OpenClaw, an AI agent that can operate a computer on your behalf, has taken the world by storm. Unless you have been living under a rock, you have probably either tried it already or at least wanted to find out what all the buzz is about.
Many, however, have failed to get past installation because it is so difficult. There is a reason why thousands of people lined up for help just to get OpenClaw installed on their machines. More importantly, using it without proper safeguards can create a real security risk.
From my perspective, three issues stand out in OpenClaw’s current form.
First, it is difficult to install, even for technical users. That matters more than many builders realize. A product does not become broadly useful simply because it is powerful. It becomes useful when people can actually get it running without friction or handholding.
Second, it can create a real security risk if not used properly. Tools that operate at the machine level can be compelling, but they also introduce a very different level of responsibility. Most users do not want to expose their full machine environment just to perform a simple task.
Third, it can become expensive quickly. Token bills can become material before users even realize it. A tool may look impressive in a demo, but if the economics do not work, adoption will eventually stall. In AI, performance matters, but efficiency matters just as much.
This is why, after looking at many options, I chose to use Crate, from our portfolio company Gensee, myself. I believe it is by far the best way to try OpenClaw.
It addresses all three issues directly: one-click install in 60 seconds, a secure sandbox that only accesses what you explicitly allow, and deep expertise from Dr. Shengqi Zhu and award-winning operating systems expert Professor Yiying Zhang, whose work on agentic optimization and efficiency is exactly what makes this possible. That expertise is also why they have been able to make Crate completely free to use.
In other words, it makes OpenClaw easy, safe, and completely free.
There is also a bonus. Crate comes with Gensee’s proprietary AI search engine built in. That search engine ranked #1 on Source Bench for finding the highest-quality web sources.
Another bonus is that Crate comes pre-installed with a set of common, useful skills vetted by the Gensee team for safety, while still allowing users to install additional skills themselves. That makes it both easier to get started and more flexible over time.
A final bonus is flexible control. Users can create multiple instances, pause and resume them, take snapshots, and roll back at any time. That means full control without the usual complexity.
So Gensee Crate is not just an easier and safer way to use OpenClaw. It is also a better one, and that points to where this market is going. The first wave of a technology shows what is possible; the next wave makes it practical for mainstream users. AI agents are now entering that phase. To become part of everyday workflows, they need to be easy to use, safe by design, and efficient enough to be economically viable. That is where adoption happens.
And that is why Gensee Crate is the best way to try out OpenClaw and why it is worth paying attention to.
If you are curious about OpenClaw, try Gensee Crate here.
Today, writing software is no more difficult than pressing a button. You describe what you want. In a few minutes, not a mockup but a fully functional application is ready to use.
I can testify to this personally. In 15 minutes, using AI, I have “written” more software than I did in a full year when I was writing software professionally. My old skill may now be obsolete, but that is wonderful: I can build faster than I ever could. This is the best of times!
So yes, in a narrow sense, the old software opportunity is dead.
The writing has been on the wall for a while. Shallow tech software has been democratized and, in many cases, is not investable. Public markets have finally figured out that a new wave of software is coming. They just do not really know what it is yet, so they sell indiscriminately. Generic business and financial skills do not work during a paradigm shift because disruption does not show up neatly on a spreadsheet full of ARR, EBITDA, and CAGR. Those are the wrong questions to ask when the underlying rules are being rewritten.
At the same time, the early phase of a paradigm shift is often the best time to invest. The people who have new specific knowledge and the courage to build for an AI native world will have a clear edge and, if they are right, capture outsized returns.
Now here is the twist.
When the cost of X collapses, the world does not get less of the thing. It gets flooded with it. That is Jevons Paradox in action. Make something cheaper and easier, and overall demand goes up significantly, often faster than the drop in price. We have seen versions of this before as humanity adopted electricity, personal computers, the internet, and now intelligence.
So software is not dead. We are about to have 10x, 100x, maybe 1000x more software than we have today.
We have seen a similar movie in content. Thanks to the internet and mobile devices, as the cost of content creation and distribution dropped, the amount of content exploded. That created giants that seized the opportunity. Fun fact, I co-founded a business two decades ago on that thesis and rode that wave myself, so yes, I have been there and done that.
Back to software.
The question now is how to capture the opportunity when the world has 1000x more software and the cost of creating software is approaching zero. Inevitably, the business model shifts because we move to a different part of the price elasticity curve when software becomes abundant. When code becomes cheap, value migrates to what stays scarce.
Shallow tech, run-of-the-mill software companies, including a lot of AI wrappers, are generally not investable from a VC perspective because they are so easy to build, copy, and replace. I have been saying this for many years, even before ChatGPT came out. If you still need more evidence, you are already behind. The button is not coming. The button is here.
This does not mean these companies cannot make money. Some will. But “can generate cash when bootstrapping” and “can return a venture fund” are not the same statement.
In contrast, deep tech software is a fantastic opportunity. There is a reason TSF shifted to deep tech investments years ago. That was not an accident. When the cost curve of intelligence collapses, businesses whose primary moat is “we can write this software” or “we spent 100 engineer years building it” need a rethink.
This is why we are unapologetically investing in deep tech.
Deep tech software is a completely different sport. In many cases, the moat is not in the software. The moat is the unique technology embedded in the software, plus the data and the system it connects to. The software is the container. The defensibility sits underneath.
People often ask how to draw the line between deep tech software and everything else. We have a definition, and it is more true than ever in this “software is abundant” era. More importantly, making that call takes specialized skill. That is why deep tech investing is reserved for trained eyes, as it requires engineering judgment, product instinct, operating experience, and recognition of a market gap that comes from building and commercializing disruptive opportunities. We can do deep tech because we are equipped to do so. Been there. Done that.
To be clear, of course, I am not suggesting the only software opportunity is deep tech. There is also a massive opportunity in bespoke software and disposable software.
For decades, companies bought off-the-shelf software because that was the only option that made economic sense, even when the software was not a perfect fit for their workflow. You ended up customizing your workflow around the software. Bespoke-built software was too expensive, too slow, and too hard to maintain.
Now the economics are changing.
We can now build software for problems that were previously too small to matter economically. We can now create personal tools designed for an audience of one. We can now ship internal workflows the way we send emails. We can now generate software that lives for a week, does its job, and disappears.
That is a massive opportunity. Much of it will look like a low-tech, large-scale service business. Some of it will become platforms and infrastructure for software generation itself. Some of it will become entirely new categories we do not have names for yet. Some of it will help make deep tech software even more defensible.
But the direction is clear. Software is becoming abundant, and the economics of software will be drastically different.
So, is software dead?
Yes, software as a scarce craft is dying.
Software-as-a-moat because “we spent 100-engineer-years building it” is dying.
But software as leverage is exploding. Software as the fabric of everything is exploding. The world is not losing software. The world is getting more of it than we can possibly imagine.
Back to the movie analogy. It is like the theatre business. The movie is not the only product. The experience is the product. The popcorn is the product. The atmosphere is the product. The movie is what gets you in the door.
For most of semiconductor history, progress was a simple loop. Shrink transistors. Fit more into the same area. Get faster compute as a byproduct.
That loop had a name. Moore’s Law. It traces back to Intel co-founder Gordon Moore. He observed in the 1960s that the number of transistors on a chip, and hence its capabilities, tended to double every two years. The industry turned that observation into a roadmap. It was never guaranteed to run forever. Now shrinking is harder because we are starting to hit many limits in physics and economics, and the cost of pushing the frontier keeps rising.
So if the curve is going to keep bending upward, the industry needs new scaling vectors beyond making everything smaller in two dimensions.
This is why Two Small Fish invested in Zinite in 2021 at the company’s inception. The thesis was simple then, and it is still simple now. Scale in the third dimension, using proprietary technology protected by patents to enable true 3D chips.
Zinite stayed deliberately stealth early on, focused on building the core and protecting it properly before saying too much. Five years after we invested, we can finally talk about it more openly.
The company is led by its CEO, Dr. Gem Shoute. Fun fact: her breakthrough was strong enough that her professors and industry veterans Dr. Doug Barlage and Dr. Ken Cadien, who helped create fundamental IP used in all chips since 2008, joined her as co-founders.
The Distance Tax
In a recent blog post, I used a factory analogy to explain why speed, latency, and energy are often bottlenecked by movement, not necessarily arithmetic.
In short, systems don’t lose because they can’t do math. GPUs are already very good at that. Systems lose speed because they can’t feed the math with data fast enough.
In many systems, moving data costs far more than doing the arithmetic. When movement is expensive, speed and energy efficiency get worse together.
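To make the distance tax concrete, here is a rough back-of-envelope sketch. The per-operation energy figures are assumptions in the spirit of widely cited estimates (a 32-bit floating-point operation costs a few picojoules, while fetching a 32-bit word from off-chip DRAM costs hundreds of picojoules); the exact values depend on the process node and memory technology.

```python
# Back-of-envelope comparison of compute energy vs. data-movement energy.
# The per-operation figures are illustrative assumptions (order-of-magnitude
# estimates often quoted for ~45nm-class silicon); real values vary with the
# process node, memory technology, and how far the data travels.

PJ = 1e-12  # one picojoule, in joules

energy_fp32_op   = 4 * PJ     # assumed: one 32-bit floating-point operation
energy_dram_word = 640 * PJ   # assumed: fetching one 32-bit word from off-chip DRAM

# A memory-bound kernel: one multiply-accumulate per word fetched from DRAM.
ops_per_word = 2  # multiply + add
compute_energy  = ops_per_word * energy_fp32_op
movement_energy = energy_dram_word

print(f"compute:  {compute_energy / PJ:.0f} pJ per word")
print(f"movement: {movement_energy / PJ:.0f} pJ per word")
print(f"data movement costs ~{movement_energy / compute_energy:.0f}x more than the math")
# -> roughly 80x in this sketch: the distance tax dominates the energy budget.
```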
AI inference exacerbates the problem because its workloads put a premium on memory behaviour. In many cases, the limiting factor is not arithmetic. It is how efficiently the system can move data. Bringing memory closer to logic matters because it directly reduces that movement.
Sensing fits in the same frame as logic and memory. Sensors generate raw data at high volume. If the system’s first step is to ship raw data far away before anything useful happens, it pays in bandwidth, latency, and power. The more intelligence that can happen closer to where data is produced, the less the system wastes just transporting information.
So the distance tax is one big problem showing up in three places at once. Logic. Memory. Sensing.
Why 3D Matters for Speed and Energy
When people hear 3D chips, they think density. More transistors per area. That matters. The bigger lever is proximity. Current 3D approaches to deliver more performance per area rely on advanced packaging, which is hindered by cost and the distance tax.
If memory can live closer to logic, the system avoids transfers that dominate both performance and power. If compute and memory can sit closer to sensing, the system avoids hauling raw streams around before doing anything intelligent.
Every avoided transfer is a double win. Speed improves because stalls go down and effective bandwidth goes up. Energy improves because fewer joules are burned moving bits instead of doing work.
That is the two birds, one stone result.
Five years after we invested, Zinite is far from just a concept. The company is doing exceptionally well, and it represents the kind of platform that can extend performance gains into the post-Moore era by reducing the distance tax, not by asking physics for more shrink, but by making data travel less.
We need new architectures to meet the speed, security, and energy demands of the next frontier of computing and its applications, which is the lens I used in The Factory Analogy.
Our portfolio company Applied Brain Research (ABR) just achieved a new milestone: ABR announced the successful closure of its oversubscribed seed funding round, including investment from TSF as a lead investor, with Eva Lau joining the board.
ABR created and patented a new type of AI model built on state space models, making AI smaller, faster, and more energy efficient than transformer models. State space models deliver real-time voice and time series intelligence without the cloud, built for privacy and efficiency. ABR’s first chip, TSP1, delivers real-time, fully on-device voice AI. Full-vocabulary speech-to-text and text-to-speech are now possible at under 30mW.
At the edge, every millisecond and every milliwatt count.
For context:
30mW is one-hundredth the power of a 3W LED lightbulb.
A data-center GPU lives in a different universe: an NVIDIA H200 NVL is up to 600W.
Now connect that to the three constraints that define the edge:
Speed: for voice and interaction, half a second is half a second too late. Cloud voice is “a terrible experience,” plagued by delays.
Security: shipping voice data to the cloud bakes in privacy risk by default — which is why we keep coming back to intelligence that stays close to the user, as Brandon argued in his post In Favour of Intelligence That Stays Put. ABR calls out “privacy concerns” as a core issue with cloud voice.
Energy: edge devices are constrained by battery life and on-device resources. ABR’s on-device voice numbers move this from “interesting” to “deployable.”
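To make the energy point concrete, here is a rough battery-life sketch. The battery size is an assumption chosen for illustration (a small wearable-class cell), not an ABR specification.

```python
# Rough battery-life sketch for always-on voice AI at ~30 mW.
# The battery figures below are assumptions for illustration, not ABR specs.

battery_capacity_mah = 300    # assumed: a small wearable/earbud-class battery
battery_voltage_v    = 3.7    # typical Li-ion nominal voltage
battery_energy_wh    = battery_capacity_mah / 1000 * battery_voltage_v  # ~1.1 Wh

power_w = 0.030               # ~30 mW for always-on, on-device voice AI
hours = battery_energy_wh / power_w

print(f"~{hours:.0f} hours of continuous voice AI on a {battery_energy_wh:.1f} Wh battery")
# -> roughly 37 hours in this sketch; at 3 W (LED-bulb territory) the same
#    battery would last well under an hour.
```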
This is why ABR enables numerous new use cases that weren’t viable before in categories like AR, robotics, wearables, medical devices, and automotive.
Imagine AR glasses (or other wearables) that respond to your command in real time without draining the battery. Imagine a robot that reacts with no hesitation. Imagine a medical device that can provide insight securely, without exporting sensitive data. Imagine a car that can respond to voice commands even when the network is unreliable. These are just a few examples. The list can go on and on.
Or as Eva put it in ABR’s announcement: sophisticated voice AI doesn’t require the cloud.
I spent a full day at Ontario Tech University in Oshawa a few weeks ago. It was my first time on campus, despite it being just over a 40-minute drive from Toronto, where I live. I arrived curious and left with a clearer picture of what they’re building.
Ontario Tech is still a relatively young university, just over two decades old. What’s less well known—and something I didn’t fully appreciate before the visit—is how quickly it has grown in that time, now serving around 14,000 students, and how deliberately it has established itself as a research university rather than simply a teaching-focused institution.
That research orientation shows up not just in output, but in where the university has chosen to build depth—areas that sit close to real systems and real constraints.
This came through clearly in conversations with Prof. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence, whose work focuses on trustworthy and ethical AI. The university has launched Canada’s first School of Ethical AI, alongside the Mindful AI Research Institute, and the work here is grounded in how AI systems behave once deployed—how humans interact with them, and how unintended consequences are identified and managed.
Energy is another area where Ontario Tech has built serious capability. The university is home to Canada’s only accredited undergraduate Nuclear Engineering program, which is ranked third in North America and designated as an IAEA Collaborating Centre. In discussions with Prof. Hossam Gaber, the emphasis was on smart energy systems, where software, sensing, and control systems are developed alongside the physical energy infrastructure they operate within.
I also spent time with Prof. Haoxiang Lang, whose work in robotics, automotive systems, and advanced mobility sits at the intersection of computation and the physical world.
That work is closely tied to the Automotive Centre of Excellence, which includes a climatic wind tunnel described as one of the largest and most sophisticated of its kind in the world. The facility enables full-scale testing under extreme environmental conditions—from arctic cold to desert heat—and supports research that needs to be validated under real operating constraints.
I can’t possibly mention all the conversations I had over the course of the day—it was a full schedule—but I also spent time with Dean Hossam Kishawy and Dr. Osman Hamid, discussing how research, entrepreneurship, and industry engagement fit together at Ontario Tech.
The day also included time at Brilliant Catalyst, the university’s innovation hub, speaking with students and founders about entrepreneurship. I had the opportunity to give a keynote on entrepreneurship, and the visit ended with the pitch competition, where I handed the cheque to the winning team—a small moment that underscored how early many technical journeys begin.
Ontario Tech may be young, but it is already operating with the structure and discipline of a mature research institution, while retaining the adaptability of a newer one.
Thank you to Sunny Chen and the Ontario Tech team for the time, access, and thoughtful conversations throughout the day.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
I wrote my master’s thesis on Code Division Multiple Access, or CDMA, a wireless communication technology that originated from military needs in World War II. CDMA uses a technique called direct sequence spread spectrum, which spreads a signal across a wide bandwidth so that it appears as random noise. This made it far better at resisting jamming and eavesdropping, and well suited to secure communication. Needless to say, it was perfect for military environments long before it found its way into everyday communication.
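For the curious, here is a minimal, purely illustrative sketch of the direct sequence spread spectrum idea: each data bit is multiplied by a faster pseudorandom ±1 chip sequence, and a receiver that knows the sequence recovers the bit by correlating against it. Real CDMA systems use carefully designed codes, much longer sequences, and many simultaneous users; this only shows the core mechanism.

```python
import random

# Minimal direct sequence spread spectrum sketch (illustrative only).
# Each data bit (+1/-1) is multiplied by a pseudorandom +/-1 "chip" sequence.
# A receiver that knows the sequence correlates against it to recover the bit;
# to anyone else, the transmitted signal looks like noise.

random.seed(42)
CHIPS_PER_BIT = 16
spreading_code = [random.choice([-1, 1]) for _ in range(CHIPS_PER_BIT)]

def spread(bits):
    """Spread each data bit across CHIPS_PER_BIT chips."""
    return [bit * chip for bit in bits for chip in spreading_code]

def despread(signal):
    """Correlate each chip block with the spreading code and take the sign."""
    bits = []
    for i in range(0, len(signal), CHIPS_PER_BIT):
        block = signal[i:i + CHIPS_PER_BIT]
        correlation = sum(s * c for s, c in zip(block, spreading_code))
        bits.append(1 if correlation > 0 else -1)
    return bits

data = [1, -1, -1, 1]
print(despread(spread(data)))  # -> [1, -1, -1, 1]
```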
A startup company called Qualcomm was beginning to commercialize CDMA. I spent countless hours studying their technical papers, which demonstrated how a technology with military grade robustness could also be applied to large scale commercial mobile networks. Working on that thesis in the 90s was also the first time I encountered the idea of dual use, the concept of a technology that can be used in both military and civilian environments, and one that has existed since the post–World War II era.
Geopolitics Has Recentered Dual Use
Fast forward to today. Geopolitics has returned to the foreground. Defence budgets around the world are rising. Countries are rethinking supply chains and rediscovering the importance of technological sovereignty. The focus is no longer only on wartime capability but also on the resilience of civilian systems that society relies on every day.
In this environment, dual use has moved from the background to the forefront of national strategy. In the AI era we are in, governments everywhere are looking for new technologies that strengthen national security and economic competitiveness at the same time. Technologies that once seemed far removed from defence are now recognized as essential.
A Tailwind for Deep Tech
For Two Small Fish Ventures, none of this comes as a surprise. Deep tech has always lived at the intersection of what is scientifically hard and what is societally important. Today, it naturally lends itself to dual use.
Breakthroughs in the five areas that TSF invests in — vertical AI platforms, physical AI, AI infrastructure, advanced computing hardware, and smart energy — were never designed to be solely military. Yet many of these technologies have clear applications in resilience, cybersecurity, automation, sensing, communication, and energy stability.
In other words, dual use does not narrow a company’s mission. It broadens it. It is the rare case where one innovation can truly kill two birds with one stone.
Defence Technology Is Not Only About Weapons
There is a common misconception that defence technology refers only to weapons. That has never been true.
Most technologies are neutral. I am certain our national defence department uses Microsoft Office, for instance. This is a reminder that much of what defence departments buy is not lethal but operational.
To be clear, we do not invest in companies whose sole purpose is military lethal weapons systems.
Our focus remains on building companies in the areas where we believe the next frontier of computing is taking shape. When those technologies also support national resilience, that is not mission drift. It is simply the nature of deep tech.
Deep tech requires scientific and engineering breakthroughs that are difficult to copy. In a dual use environment, this becomes an essential advantage.
A New Frontier for Founders
Founders often think of defence as a separate world. That is changing. Defence is a complicated beast, and anyone who believes they can simply walk in will be disappointed. But for those who understand the landscape and can navigate it, this is a generational opportunity waiting to be captured.
When I first studied CDMA decades ago, I never imagined that a communication technique developed for the battlefield would become the backbone of commercial wireless networks.
Today, many deep tech founders are standing at a similar moment. For founders and investors in deep tech, this is the beginning of an important cycle. And we are excited to support the innovators who will define what comes next.
I had the opportunity to join a panel at the Impact 2025 Summit in Calgary, moderated by Raissa Espiritu, with Janet Bannister and Paul Godman. Ironically, none of us are labelled as impact investors, and I explained on stage why Two Small Fish Ventures does what we do.
At Two Small Fish Ventures, we’ve never called ourselves an impact fund. That’s not because we’re indifferent to impact; in fact, it’s core to what we do. Our focus is on deep tech, the next frontier of computing, where innovation can create meaningful, long-term change. Specifically, we invest in five key areas: Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy.
We care deeply about scientific advancement, and more importantly, about turning those breakthroughs into real-world impact. That’s how meaningful progress happens.
Eva is our General Partner, and both of us are immigrants. Diversity isn’t a marketing point for us; it’s part of who we are. It naturally shows up in our portfolio: about half of our companies have at least one female founder, and many come from underrepresented backgrounds. That said, uncompromisingly, we back amazing deep tech founders who are turning their creations into world-class companies.
It’s actually rare that we talk about topics like women investing or investing in underrepresented groups in isolation. Not because we don’t care, quite the opposite. The fact that Eva is one of the few female GPs leading a venture fund, and that we’re both immigrants, already says a lot. Our actions speak volumes. We walk the walk and talk the talk.
We need to deliver results. Period. Our competition isn’t other venture funds; it’s every other investment opportunity available in the market. If we can’t perform at the highest level — top decile in everything we do — we can’t sustain our mission. Delivering some of the best results in the industry enables us to do what we love and make an impact.
That’s why I believe impact and performance are not opposites. The most powerful kind of impact happens when companies succeed, when they become world-class companies. Strong returns and meaningful impact can, and should, reinforce each other.
I also talked about the importance of choosing the right vehicle for the right purpose. When we made a 2 million dollar donation to the University of Toronto to establish the Commercialization Catalyst Prize, it wasn’t about investing. It was about supporting a different kind of impact — helping scientists and engineers turn their research into innovations that can reach the world. Not every kind of impact should come from the same tool.
At the end of the day, labels matter less than intent and execution. We don’t need to call ourselves an impact fund to make a difference. Our goal is simple: to back bold deep tech founders using science and technology to build a better future and to do it with excellence.
A big thank you to Raissa, George Damian, Sylvia Wang, and the entire Platform Calgary team for putting together such a thoughtful and well-run event.
When I was studying electrical engineering, I chose, out of curiosity, to take an elective course on quantum physics as part of advanced optics. It sparked a lasting fascination with quantum. The strange, abstract, counterintuitive rules, for example particles existing in multiple states or being entangled across distance, captivated me.
Error correction, closely related to fault tolerance in quantum systems today, is the backbone of telecommunications, one of the areas I majored in.
Little did I know these domains would converge in such a way that my earlier academic training would become relevant again years later.
For me, computing is not just my profession, it is also my hobby. As a science nerd, I actively enjoy following advances, and I keep going deeper down the rabbit hole of the next frontier of computing. That mix of personal curiosity and professional focus shapes how I approach both the opportunities and risks in the space. Over the past few years, I have gone deeper into the world of quantum. My academic and professional background gave me the footing to evaluate both what is technically possible and what is commercially viable.
From If to How and When
In June, I wrote Quantum Isn’t Next. It’s Now. We have passed the tipping point where the question is no longer if quantum technology will work, it is how and when it will scale.
This momentum is not just visible to those of us deep in the field. As the Globe and Mail recently reported, we at Two Small Fish have been following quantum for years, but did not think it was mature enough for an early-stage fund with a 10-year lifespan to back. This year, we changed our minds. As I shared in that article: “It’s much more investible now.”
The distinction is clear: when quantum was still a science problem, the central question was whether it could work at all. Now that it has become an engineering problem, the questions are how it will work at scale and when it will be ready for commercialization.
This shift matters for investors. Venture capital focuses on engineering breakthroughs, hard, uncertain, but achievable on a commercialization timeline. Fundamental science, which can take many more years to mature, is better supported by governments, universities, and non-dilutive funding sources. I will leave that discussion for another post.
One of Five Frontiers
At Two Small Fish Ventures, we have identified five areas shaping the next frontier of computing. Quantum falls under the area of advanced computing hardware, where the convergence of different areas of science, engineering, and commercialization is accelerating.
Each of these areas is no longer a speculative science experiment but a rapidly advancing field where engineering and commercialization are converging. Within the next ten years, the winners will emerge from lab prototypes and become scaled companies. Quantum is firmly on that trajectory.
How We Invest in Quantum
Our first principle at Two Small Fish is straightforward: we only invest in things we truly understand, through all three lenses of technology, product, and commercialization. That discipline forces us to dig deep before committing capital. And after years of study, it is clear to us that quantum has moved into investable territory, but only selectively.
Not every quantum startup fits a venture time horizon. Some promising projects will take too many years to scale. But we are now seeing opportunities that, within a 10-year window, can realistically grow from an early-stage idea to a successful scale-up. That is the standard we apply to every investment, and quantum finally has companies that meet it.
From Sci-Fi to Reality
Canada has played an outsized role in building the foundation of quantum science. Now, it has the chance to lead in quantum commercialization. The next few years will determine which teams turn breakthrough science into enduring companies.
For investors, this is both an opportunity and a responsibility. The quantum era is not a distant possibility, it is here now. What once sounded like science fiction is now an investable reality. And for those willing to put in the work to understand it, the frontier is already here.
Last year we invested in Axiomatic AI. Their mission is to bring verifiable and trustworthy AI into science and engineering, enabling innovation in areas where rigour and reliability are essential. At the core of this is Mission 10×30: achieving a tenfold improvement in scientific and engineering productivity by 2030.
The company was founded by top researchers and professors from MIT, the University of Toronto, and ICFO in Barcelona, bringing deep expertise in physics, computer science, and engineering.
Since our investment, the team has been heads down executing. Now they’ve shared their first public release: Axiomatic Operators.
What They’ve Released
Axiomatic Operators are MCP servers that run directly in your IDE, connecting with systems like Claude Code and Cursor. The suite includes:
AxEquationExplorer
AxModelFitter
AxPhotonicsPreview
AxDocumentParser
AxPlotToData
AxDocumentAnnotator
Why is this important?
Large Language Models (LLMs) excel at language (as their name suggests) but struggle with logic. That’s why AI can write poetry but often has trouble with math — LLMs mainly rely on pattern matching rather than reasoning.
This is where Axiomatic steps in. Their approach combines advances in reinforcement learning, LLMs, and world models to create AI that is not just fluent but also capable of reasoning with the rigour required in science and engineering.
What’s Next
This first release marks an important step in turning their mission into practical, usable tools. In the coming weeks, the team will share more technical material — including white papers, demo videos, GitHub repositories, and case studies — while continuing to work closely with early access partners.
Find out more on GitHub, including demos, case studies, and everything else you need to make your work days less annoying and more productive: Axiomatic AI GitHub
We’re excited to see their progress. If you’re in science or engineering, we encourage you to give the Axiomatic Operators suite a try: Axiomatic AI.
In 1865, William Stanley Jevons, an English economist, observed a curious phenomenon: as steam engines in Britain became more efficient, coal use didn’t fall — it rose. Efficiency lowered the cost of using coal, which made it more attractive, and demand surged.
That insight became known as Jevons Paradox. To put it simply:
Technological change increases efficiency or productivity.
Efficiency gains lead to lower consumer prices for goods or services.
The reduced price creates a substantial increase in quantity demanded (because demand is highly elastic).
Instead of shrinking resource use, efficiency often accelerates it — and with it, broader societal change.
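One way to see why that third step follows is with a stylized constant-elasticity demand curve. This is an illustration, not an estimate for any particular market, and the elasticity value below is an assumption: when elasticity exceeds 1, a falling price raises total consumption, and total spending, faster than the price drops.

```python
# Stylized constant-elasticity demand: Q = k * P**(-epsilon).
# When epsilon > 1 (highly elastic demand), cutting the price increases total
# spending, so resource use grows even as efficiency improves -- Jevons Paradox.
# The elasticity value below is an assumption chosen for illustration.

def quantity(price, epsilon, k=1.0):
    return k * price ** (-epsilon)

epsilon = 1.5            # assumed elasticity > 1
for price in [1.0, 0.5, 0.1]:
    q = quantity(price, epsilon)
    spend = price * q
    print(f"price {price:4.2f} -> quantity {q:6.1f}, total spend {spend:5.2f}")
# price 1.00 -> quantity    1.0, total spend  1.00
# price 0.50 -> quantity    2.8, total spend  1.41
# price 0.10 -> quantity   31.6, total spend  3.16
```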
Coal, Then Light
The paradox first appeared in coal: better engines, more coal consumed. Electricity followed a similar path. Consider lighting in Britain:
| Period | True price of lighting (per million lumen-hours, £2000) | Change vs. start | Per-capita consumption (thousand lumen-hours) | Change vs. start | Total consumption (billion lumen-hours) | Change vs. start |
|---|---|---|---|---|---|---|
| 1800 | £8,000 | — | 1.1 | — | 18 | — |
| 1900 | £250 | ↓ ~30× | 255 | ↑ ~230× | 10,500 | ↑ ~500× |
| 2000 | £2.5 | ↓ ~3,000× (vs. 1800) / ↓ ~100× (vs. 1900) | 13,000 | ↑ ~13,000× (vs. 1800) / ↑ ~50× (vs. 1900) | 775,000 | ↑ ~40,000× (vs. 1800) / ↑ ~74× (vs. 1900) |
Over two centuries, the price of light fell 3,000×, while per-capita use rose 13,000× and total consumption rose 40,000×. A textbook case of Jevons Paradox — efficiency driving demand to entirely new levels.
Computing: From Millions to Pennies
This pattern carried into computing:
| Year | Cost per Gigaflop | Notes |
|---|---|---|
| 1984 | $18.7 million (~$46M today) | Early supercomputing era |
| 2000 | $640 (~$956 today) | Mainstream affordability |
| 2017 | $0.03 | Virtually free compute |
That’s a 99.99%+ decline. What once required national budgets is now in your pocket.
Storage mirrored the same story: by 2018, 8 TB of hard drive storage cost under $200 — about $0.019 per GB, compared to thousands per GB in the mid-20th century.
Connectivity: Falling Costs, Rising Traffic
Connectivity followed suit:
| Year | Typical Speed & Cost per Mbps (U.S.) | Global Internet Traffic |
|---|---|---|
| 2000 | Dial-up / early DSL (<1 Mbps); ~$1,200 | ~84 PB/month |
| 2010 | ~5 Mbps broadband; ~$25 | ~20,000 PB/month |
| 2023 | 100–940 Mbps common; ↓ ~60% since 2015 (real terms) | >150,000 PB/month |
(PB = petabytes)
As costs collapsed, demand exploded. Streaming, cloud services, social apps, mobile collaboration, IoT — all became possible because bandwidth was no longer scarce.
Intelligence: The New Frontier
Now the same dynamic is unfolding with intelligence:
| Year | Cost per Million Tokens | Notes |
|---|---|---|
| 2021 | ~$60 | Early GPT-3 / GPT-4 era |
| 2023 | ~$0.40–$0.60 | GPT-3.5 scale models |
| 2024 | < $0.10 | GPT-4o and peers |
That’s a drop of more than two orders of magnitude in just a few years. Unsurprisingly, demand is surging — AI copilots in workflows, large-scale analytics in enterprises, and everyday generative tools for individuals.
As we highlighted in our TSF Thesis 3.0, cheap intelligence doesn’t just optimize existing tasks. It reshapes behaviour at scale.
Why It Matters
The recurring pattern is clear:
Coal efficiency fueled the Industrial Revolution.
Affordable lighting built electrified cities.
Cheap compute and storage enabled the digital economy.
Low-cost bandwidth drove streaming and cloud collaboration.
Now cheap intelligence is reshaping how we live, work, and innovate.
As we highlighted in Thesis 3.0:
“Reflecting on the internet era… as ‘the cost of connectivity’ steadily declined, productivity and demand surged—creating a virtuous cycle of opportunities. The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity… Like connectivity in the internet era, ‘the cost of intelligence’ is now rapidly declining, while the value derived continues to surge, driving even greater demand.”
The lesson is simple: efficiency doesn’t just save costs — it reorders economies and societies. And that’s exactly what is happening now.
If you are building an early-stage deep tech startup in the next frontier of computing, this is a generational opportunity: both traditional businesses and entirely new sectors are being reshaped, and white-collar jobs and businesses in particular will not be the same. We would love to hear from you.
The cost of intelligence is dropping at an unprecedented rate. Just as the drop in the cost of computing unlocked the PC era and the drop in the cost of connectivity enabled the internet era, falling costs today are driving explosive demand for AI adoption. That demand creates opportunity on the supply side too, in the infrastructure, energy, and technologies needed to support and scale this shift.
In our Thesis 3.0, we highlighted how this AI-driven platform shift will reshape behaviour at massive scale. But identifying the how also means knowing where to look.
Every era of technology has a set of areas where breakthroughs cluster, where infrastructure, capital, and talent converge to create the conditions for outsized returns. For the age of intelligent systems, we see five such areas, each distinct but deeply interconnected.
1. Vertical AI Platforms
After large language models, the next wave of value creation will come from Vertical AI Platforms that combine proprietary data, hard-to-replicate models, and orchestration layers designed for complex and large-scale needs.
Built on unique datasets, workflows, and algorithms that are difficult to imitate, these platforms create proprietary intelligence layers that are increasingly agentic. They can actively make decisions, initiate actions, and shape workflows. This makes them both defensible and transformative, even when part of the foundation rests on commodity models.
This shift from passive tools to active participants marks a profound change in how entire sectors operate.
2. Physical AI
The past two decades of digital transformation mostly played out behind screens. The next era brings AI into the physical world.
Physical AI spans autonomous devices, robotics, and AI-powered equipment that can perceive, act, and adapt in real environments. From warehouse automation to industrial robotics to autonomous mobility, this is where algorithms leave the lab and step into society.
We are still early in this curve. Just as industrial machinery transformed factories in the nineteenth century, Physical AI will reshape industries that rely on labour-intensive, precision-demanding, or hazardous work.
The companies that succeed will combine world-class AI models with robust hardware integration and build the trust that humans place in systems operating alongside them every day.
3. AI Infrastructure
Every transformative technology wave has required new infrastructure that is robust, reliable, and efficient. For AI, this means going beyond raw compute to ensure systems that are secure, safe, and trustworthy at scale.
We need security, safety, efficiency, and trustworthiness as first-class priorities. That means building the tools, frameworks, and protocols that make AI more energy efficient, explainable, and interoperable.
The infrastructure layer determines not only who can build AI, but who can trust it. And trust is ultimately what drives adoption.
4. Advanced Computing Hardware
Every computing revolution has been powered by a revolution in hardware. Just as the transistor enabled mainframes and the microprocessor ushered in personal computing, the next era will be defined by breakthroughs in semiconductors and specialized architectures.
From custom chips to new communication fabrics, hardware is what makes new classes of AI and computation possible, both in the cloud and on the edge. But it is not only about raw compute power. The winners will also tackle energy efficiency, latency, and connectivity, areas that become bottlenecks as models scale.
As Moore’s Law hits its limit, we are entering an age of architectural innovation with neuromorphic computing, photonics, quantum computing, and other advances. Much like the steam engine once unlocked new industries, these architectures will redefine what is computationally possible. This is deep tech meeting industrial adoption, and those who can scale it will capture immense value.
5. Smart Energy
Every technological leap has demanded a new energy paradigm. The electrification era was powered by the grid. Today, AI and computing are demanding unprecedented amounts of energy, and the grid as it exists cannot sustain this future.
This is why smart energy is not peripheral, but central. From new energy sources to intelligent distribution networks, the way we generate, store, and allocate energy is being reimagined. The idea of programmable energy, where supply and demand adapt dynamically using AI, will become as fundamental to the AI era as packet switching was to the internet.
Here, deep engineering meets societal need. Without resilient and efficient energy, AI progress stalls. With it, the future scales.
Shaping What Comes Next
The drop in the cost of intelligence is driving demand at a scale we have never seen before. That demand creates opportunity on the supply side too, in the platforms, hardware, energy, physical systems, and infrastructure that make this future possible.
The five areas — Vertical AI Platforms, Physical AI, AI Infrastructure, Advanced Computing Hardware, and Smart Energy — represent the biggest opportunities of this era. They are not isolated. They form an interconnected landscape where advances in one accelerate breakthroughs in the others.
We are domain experts in these five areas. The TSF team brings technical, product and commercialization expertise that helps founders build and scale in precisely these spaces. We are uniquely qualified to do so.
At Two Small Fish, this is the canvas for the next generation of 100x companies. We are excited to partner with the founders building in these areas globally, those who not only see the future, but are already shaping it.
A few years back, Eva met Dr. Scott Stornetta. Later, I did too. Alongside Dr. Stuart Haber, Scott is widely credited as the creator of blockchain. Blockchain is a technology built on a simple but radical idea at the time: decentralization. No single authority, no central point of control, just a trusted system everyone can rely on.
Now, these two scientists are teaming up again to start a new company, SureMark Digital. Their mission is to bring that same decentralized philosophy to identity and authenticity on the internet, enabling anyone to prove who they are, certify their work, and push back against deepfakes and impersonation. No middlemen. No central gatekeepers.
It took us about 3.141592654 seconds to get excited. We are now proud to be the co-lead investor in SureMark’s first institutional round.
At Two Small Fish, we love backing frontier tech that can reshape large-scale behaviour. SureMark checks every box.
Eva has written a deeper dive on what they are building and why it matters. You can read it here.
A swimming world champion, a cycling champion, and a marathon champion each tried their hand at a triathlon.
None of them even came close to the podium. All were easily defeated.
Why?
Because the swimming champion could not bike, nor could he run fast.
The cycling champion did not swim well.
The marathon runner was painfully slow in the water.
The winner?
It was someone who had been humbled by the swimming champion in the pool for years, finishing second in the world championships multiple times. He was an exceptional swimmer, yes. However, he could also bike fast and run hard. Not the best in any single discipline, but strong across all three. And that is what won him the race.
The takeaway:
To win in triathlon, you need to be competitive in all three disciplines.
The winner is often world class in one of them, but they must be very good if not great at the other two.
This is the same mistake many first time deep tech founders make.
They believe that superior technology alone is enough to win.
It is not.
While technology is crucial, and in fact it is table stakes and the foundation of innovation, it must be transformed into a usable product. If it does not solve a real problem in a way people can adopt and benefit from, its brilliance is wasted.
And even if you have built world class technology and a beautifully crafted product, you are still not done. Without effective commercialization, which includes distribution, pricing, sales, positioning, and partnerships, you will not reach the users or customers who need what you have built.
Neglecting any one of them is like trying to win a triathlon without training for the bike or the run.
Just like a triathlete must train in all three disciplines, a founder must excel across all three pillars:
Great and defensible technology
An excellent product
Execution on commercialization
You need all three.
That is how you win the world championship.
In the history of human civilization, there have been several distinct ages: the Agricultural Age, the Industrial Age, and the Information Age, which we are living in now.
Within each age, there are different eras, each marked by a drastic drop in the cost of a fundamental “atomic unit.” These cost collapses triggered enormous increases in demand and reshaped society by changing human behaviour at scale.
From the late 1970s to the 1990s, the invention of the personal computer drastically reduced the cost of computing [1]. A typical CPU in the early 1980s cost hundreds of dollars and ran at just a few MHz. By the 1990s, processors were orders of magnitude faster for roughly the same price, unlocking entirely new possibilities like spreadsheets and graphical user interfaces (GUIs).
Then, from the mid-1990s to the 2010s, came the next wave: the Internet. It brought a dramatic drop in the cost of connectivity [2]. Bandwidth, once prohibitively expensive, fell by several orders of magnitude — from over $1,200 per Mbps per month in the ’90s to less than a penny today. This enabled browsers, smartphones, social networks, e-commerce, and much of the modern digital economy.
From the mid-2010s to today, we’ve entered the era of AI. This wave has rapidly reduced the cost of intelligence [3]. Just two years ago, generating a million tokens using large language models cost over $100. Today, it’s under $1. This massive drop has enabled applications like facial recognition in photo apps, (mostly) self-driving cars, and — most notably — ChatGPT.
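Treating the figures above and in the footnotes as rough endpoints, here is a back-of-envelope sketch of how quickly each era’s atomic unit got cheaper on an annualized basis. The endpoints are approximations, so the output is indicative only.

```python
# Rough annualized cost-decline rates implied by the figures cited in this post
# and its footnotes. The endpoints are approximate, so treat the output as
# indicative rather than precise.

eras = {
    # name: (total cost drop, span in years)
    "computing (1981-1995, per unit of compute)":        (20, 14),       # ~20x at similar price, footnote [1]
    "connectivity (1998-2024, per unit of bandwidth)":   (100_000, 26),  # footnote [2]
    "intelligence (2022-2024, per million tokens)":      (100, 2),       # footnote [3]
}

for name, (drop, years) in eras.items():
    annual_decline = 1 - (1 / drop) ** (1 / years)
    print(f"{name}: ~{annual_decline:.0%} cheaper per year")
# -> roughly 19% per year, then ~36% per year, then ~90% per year:
#    each wave's cost collapse is faster than the one before it.
```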
These three eras share more than just timing. They follow a strikingly similar pattern:
First, each era is defined by a core capability: computing, connectivity, and intelligence, respectively.
Second, each unfolds in two waves:
The initial wave brings a seemingly obvious application (though often only apparent in hindsight), such as spreadsheets, browsers, or facial recognition.
Then, typically a decade or so later, a magical invention emerges — one that radically expands access and shifts behaviour at scale. Think GUI (so we no longer needed to use a command line), the iPhone (leapfrogging flip phones), and now, ChatGPT.
Why does this pattern matter?
Because the second-wave inventions are the ones that lower the barrier to entry, democratize access, and reshape large-scale behaviour. The first wave opens the door; the second wave throws it wide open. It’s the amplifier that delivers exponential adoption.
We’ve seen this movie before. Twice already, over the past 50 years.
The cost of computing dropped, and it transformed business, productivity, and software.
Then the cost of connectivity dropped, and it revolutionized how people communicate, consume, and buy.
Now the cost of intelligence is collapsing, and the effects are unfolding even faster.
Each wave builds on the last. The Internet era evolved faster than the PC era because it leveraged the PC era’s computing infrastructure. AI is moving even faster because it sits atop both computing and the Internet. Acceleration is not happening in isolation. It’s compounding.
If it feels like the pace of change is increasing, it’s because it is.
Just look at the numbers:
Windows took over 2 years to reach 1 million users.
Facebook got there in 10 months.
ChatGPT did it in 5 days.
These aren’t just vanity metrics — they reflect the power of each era’s cost collapse to accelerate mainstream adoption.
That’s why it’s no surprise — in fact, it’s crystal clear — that the current AI platform shift is more massive than any previous technological shift. It will create massive new economic value, shift wealth away from many incumbents, and open up extraordinary investment opportunities.
That’s why the succinct version of our thesis is:
We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
The race is already on. We can’t wait to invest in the next great thing in this new era of intelligence.
Super exciting times ahead indeed.
Footnotes
[1] Cost of Computing
In 1981, the Intel 8088 CPU (used in the first IBM PC) had a clock speed of 4.77 MHz and cost ~$125. By 1995, the Intel Pentium processor ran at 100+ MHz and cost around $250 — a ~20x speed gain at similar cost. Today’s chips are thousands of times faster, and on a per-operation basis, exponentially cheaper.
[2] Cost of Connectivity
In 1998, bandwidth cost over $1,200 per Mbps/month. By 2015, that figure dropped below $1. As of 2024, cloud bandwidth pricing can be less than $0.01 per GB — a near 100,000x drop over 25 years.
[3] Cost of Intelligence
In 2022, generating 1 million tokens via OpenAI’s GPT-3.5 could cost $100+. In 2024, it costs under $1 using GPT-4o or Claude 3.5, with faster performance and higher accuracy — a 100x+ reduction in under two years.
In the early 2000s, it was a common joke in the tech world that “next year is the year of the smartphones.” People kept saying it over and over for almost a decade. It became a punchline. The industry nearly lost its credibility.
Until the iPhone launched. “Next year is the year of the smartphones” finally became true.
The same joke has followed quantum for the past ten years: next year is the year of quantum.
Except it hasn’t been. Not yet.
And yet, quietly, the foundations have been built. We’re not there, but we’re far from where we started.
We’re getting closer. Much closer. I can smell it. I can hear it. I can sense it.
Right now, without getting into too much technical detail, we’re still at a small scale: fewer than 100 usable qubits. Commercial viability likely requires thousands, if not millions. The systems are still too error-prone, and hosting your own quantum machine is wildly impractical. They’re expensive, fragile, and noisy.
At this stage, quantum is mostly limited to niche or small-scale applications. But step by step, quantum is inching closer to broader utility.
And while these things don’t progress in straight lines, the momentum is real and accelerating.
Large-scale, commercially deployable, fault-tolerant quantum computers accessed through the cloud are no longer science fiction. They’re within reach.
I spent a few of my academic years in signal processing and error correction. I’ve also spent a bit of time studying quantum mechanics. I understand the challenges of cloud-based access to quantum systems, and I’ve been following the field for quite a while, mostly as a curious science nerd.
All of that gives me reason to trust my sixth sense. Quantum is increasingly becoming a reality.
Nobody knows exactly when the iPhone moment or the ChatGPT moment of quantum will happen. But I’m absolutely sure we won’t still be saying “next year is the year of quantum” a decade from now.
It will happen, and it will happen much sooner than you might think.
This is an exciting time and the ideal time to take a closer look at quantum, because the best opportunities tend to emerge right before the technology takes off.
How can we not get excited about new quantum investment opportunities?
P.S. I’m excited to attend the QUANTUM NOW conference this week in Montreal. Also thrilled to see Mark Carney name quantum as one of Canada’s official G7 priorities. That short statement may end up being a big milestone.
P.P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
Yet the ocean remains surprisingly underdeveloped — in fact, it’s the least developed.
Land transportation has been electrified. In space, payload costs have dropped drastically. Now, it’s time for marine to catch up.
Unlike cars, you can’t simply add an electric motor and battery to a boat and make it work. Why? One reason is that water is roughly 800 times denser than air (and far more viscous), so drag at a given speed is orders of magnitude greater. As a result, replacing a gas motor with an electric one would require a gigantic battery, making it impractical and, frankly, unusable. That’s why marine electrification has lagged.
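To put rough numbers on that, here is a minimal sketch using the standard drag equation, F = 0.5 * rho * Cd * A * v^2; the drag coefficient, area, and speed below are illustrative placeholders, not ENVGO figures.

```python
# Why a straight electric swap is hard on water: drag scales with fluid density.
# F = 0.5 * rho * Cd * A * v^2 (standard drag equation).
# Cd, A, and v below are illustrative placeholders, not ENVGO numbers.

RHO_AIR = 1.2       # kg/m^3, air at sea level
RHO_WATER = 1000.0  # kg/m^3, fresh water

def drag_force(rho: float, cd: float, area_m2: float, speed_ms: float) -> float:
    return 0.5 * rho * cd * area_m2 * speed_ms ** 2

speed = 10.0         # m/s (~36 km/h), same speed in both fluids
cd, area = 0.5, 1.0  # placeholder drag coefficient and frontal area

in_air = drag_force(RHO_AIR, cd, area, speed)
in_water = drag_force(RHO_WATER, cd, area, speed)
print(f"Drag in water is ~{in_water / in_air:.0f}x the drag in air at the same speed.")
# Roughly 830x, which is why hydrofoiling (lifting the hull out of the water) matters.
```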
Until now.
The “iPhone moment” of marine transportation has arrived. ENVGO’s hydrofoiling NV1 tackles these multidisciplinary complications head-on. Led by successful serial entrepreneur Mike Peasgood, the team brings together expertise in AI, robotics, control systems, computer vision, autonomous systems, and more. Leveraging their prior success as drone pioneers at Aeryon, they are now building a flying robot — on water.
It’s day one of a large-scale transformation of marine transportation. Two Small Fish is privileged and super excited to lead this round of funding, alongside our good friends at Garage, who are also participating. We can’t wait to see how ENVGO reimagines the uncharted waters — pun fully intended.
Read our official blog post by our partner Albert here.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
When entrepreneurs exit their companies, it is supposed to be a victory lap. But in reality, many find themselves in an unexpected emotional vacuum. More often than you might think, I hear variations of the same quiet confession:
“It should have been the best time of my life. But I felt lost after the exit. I lost my purpose.”
After running Wattpad for 15 years, I understand this all too well. It is like training for and running a marathon for over a decade, only to stop cold the day after the finish line. No more rhythm. No more momentum. No next mile.
Do I Miss Operating?
Unsurprisingly, people often ask me:
“Do you like being a VC?”
“Do you miss operating?”
My honest answer is yes and yes (but I get my fix without being a CEO — see below).
Being a founder and CEO was deeply challenging and also immensely rewarding. It is a role that demands a decade-long commitment to building one and only one thing. And while I loved my time as CEO, I did not feel the need to do it again. Once in a lifetime was enough. I have started three companies. A fourth would have felt repetitive.
What I missed most was not the title or the responsibility. It was the people. The team. The day-to-day collaboration with nearly 300 passionate employees when I stepped down. That sense of shared mission — of solving hard problems together — was what truly filled my cup.
Let’s be honest: the founders I work with call me especially when they believe I am the only one who can help them. Their words, not mine. And there have been plenty of those occasions.
That gives me the same hit of adrenaline I used to get from operating. At my core, I love solving hard problems. That part of me did not go away after my exit. I just found a new arena for it — and it is a perfect replacement.
A Playground for a Science Nerd
What people may not realize is that the deep tech VC job is drastically different from a “normal” VC job. As a deep tech VC, I am constantly stretched and go deep — technically, intellectually, and creatively. It forces me to stay sharp, push my boundaries, and reconnect with my roots as a curious, wide-eyed science nerd.
There is something magical about working with founders at the bleeding edge of innovation. I get to dive into breakthrough technologies, understand how they work, and figure out how to turn them into usable and scalable products. It feels like being a kid in a candy store — except the candy is semiconductors, control systems, power electronics, quantum, and other domains in the next frontier of computing.
How could I not love that?
Ironically, I had less time to indulge this curiosity when I was a CEO. Now I can geek out and help shape the future at the same time. It is a net positive to me.
You Do Not Have to Love It All
Of course, every job — including CEO and VC — has its less glamorous parts. Whether you are a founder or a VC, there will always be administrative tasks and responsibilities you would rather skip.
But I have learned not to resent them. As I often say:
“You do not need to love every task. You just need to be curious enough to find the interesting angles in anything.”
Those tasks are the cost of admission to being a deep tech VC. A small price to pay to do the work I love — supporting incredible entrepreneurs as they bring transformative ideas to life, and finding joy in doing so. And knowing what I know now, I do not think I would enjoy being a “normal” VC. I cannot speak for others, but for me, this is the only kind of venture work that truly energizes and fulfills me.
A New Season. A New Purpose.
So yes, being a VC brings me as much joy as being a CEO did — and arguably even more fulfillment (and I am surprised that I am saying this). I feel incredibly lucky. And I am all in.
It feels like all my past experience has prepared me for what I do today. I often describe this phase of my life this way:
Wattpad was my regular season. TSF is my playoff hockey.
It is faster. It is grittier. The stakes feel higher. Not because I am building one company, but because I am helping many shape the future.
Driven by rapid advances in AI, the collapse in the cost of intelligence has arrived—bringing massive disruption and generational opportunities.
Building on this platform shift, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.
Or, more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
Watch this 2-minute video to learn more about our approach:
Our Evolution: From Network Effects to Deep Tech
When we launched TSF in 2015, our initial thesis centred around network effects. Drawing from our experience scaling Wattpad from inception to 100 million users, we became experts in understanding and leveraging the exponential value and defensibility created by network effects at scale. This expertise led us to invest—often as the very first cheque—in massively successful companies such as BenchSci, Ada, Printify, and SkipTheDishes.
We achieved world-class success with this thesis, but like all good things, that opportunity diminished over time.
Our thesis evolved as the ground shifted toward the end of the 2010s. A couple of years ago, we articulated this evolution by focusing on early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new value. Within this broad focus, we zoomed in on three sectors: AI, decentralized protocols, and semiconductors. That thesis guided investments in great companies such as Story, Ideogram, Zinite, and Blumind.
But the world doesn’t stand still. In fact, it has never changed so rapidly. This brings us to the next and even more significant shift shaping our thesis.
A New Platform Shift: The Cost of Intelligence is Collapsing
Reflecting on the internet era, the core lesson we learned was that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. At its foundation was connectivity, and as “the cost of connectivity” steadily declined, productivity and demand surged, creating a virtuous cycle of opportunities.
The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity, cross-domain functionality, and decision-making. Like connectivity in the internet era, “the cost of intelligence” is now rapidly declining, while the value derived from intelligence continues to surge, driving even greater demand.
This shift will create massive economic value, shifting wealth away from many incumbents and opening substantial investment opportunities. However, just like previous platform shifts, the greatest opportunities won’t come from digitizing or automating legacy workflows, but rather from completely reshaping workflows and user behaviour, democratizing access, and unlocking previously impossible value. These disruptive opportunities will expand into adjacent areas, leaving incumbents defenceless as the rules of the game fundamentally change.
Intelligence Beyond Traditional Computing Devices
AI’s influence now extends far beyond pre-programmed software on computing devices. Machines and hardware are becoming intelligent, leveraging collective learning to adapt in real-time, with minimal predefined instruction. As we’ve stated before, software alone once ate the world; now, software and hardware together consume the universe. The intersection of software and hardware is where many of the greatest opportunities lie.
As AI models shrink and hardware improves, complex tasks run locally and effectively at the edge. Your phone and other edge devices are rapidly becoming the new data centres, opening exciting new possibilities.
Democratization and a New Lens on Defensibility
The collapse in the cost of intelligence has democratized everything—including software development—further accelerated by open-source tools. While this democratization unlocks vast opportunities, competition also intensifies. It may be a land grab, but not all opportunities are created equal. The key is knowing which “land” to seize.
Historically, infrastructure initially attracts significant capital, as seen in the early internet boom. Over time, however, much of the economic value tends to shift from infrastructure to applications. Today, the AI infrastructure layer is becoming increasingly commoditized, while the application layer is heavily democratized. That said, there are still plenty of opportunities to be found in both layers—many of them truly transformative. So, where do we find defensible, high-value opportunities?
Our previous thesis identified transformative technologies that achieved mass adoption, changed behaviour, democratized access, and unlocked unprecedented value. This framework remains true and continues to guide our evaluation of “100x” opportunities.
This shift in defensibility brings us to where the next moat lies.
New Defensibility: Deep Tech Meets Data Network Effects
Defensibility has changed significantly. In recent years, the pool of highly defensible early-stage shallow tech opportunities has thinned considerably. As a result, we have clearly entered a golden age of deep tech. AI democratization provides capital-efficient access to tools that previously required massive budgets. Our sweet spot is identifying opportunities that remain difficult to build and therefore hard to replicate.
As “full-spectrum specialists,” TSF is uniquely positioned for this new reality. All four TSF partners were engineers and startup leaders before becoming investors, with hands-on experience spanning artificial intelligence, semiconductors, robotics, photonics, smart energy, blockchain, and more. We are not just technical; we are also product people, having built and commercialized cutting-edge innovations ourselves. As a guiding principle, we only invest when our deep domain expertise can help startups scale effectively and rapidly cement their place as future industry-disrupting giants.
Moreover, while traditional network effects have diminished, AI has reinvigorated network effects, making them more potent in new ways. Combining deep tech defensibility with strong data-driven network effects is the new holy grail, and this is precisely our expertise.
What We Don’t Invest In
Although we primarily invest in “bits,” we will also invest in “bits and atoms,” but we won’t invest in “atoms only.” We also have a strong bias towards permissionless innovations, so we usually stay away from highly regulated or bureaucratic verticals with high inertia. Additionally, since one of our guiding principles is to invest only when we have domain expertise in the next frontier of computing, we won’t invest in companies whose core IP falls outside of our computing expertise. We also avoid regional companies, as we focus on backing founders who design for global scale from day one. We invest globally, and almost all our breakout successes such as Printify have users and customers around the world.
Where We’re Heading
Having recalibrated our thesis for this new era, here’s where we’re going next.
We have backed amazing deep tech founders pioneering AI, semiconductors, robotics, photonics, smart energy, and blockchain—companies like Fibra, Blumind, ABR, Axiomatic, Hepzibah, Story, Poppy, and Viggle—across consumer, enterprise, and industrial sectors. With the AI platform shift underway, many new and exciting investment opportunities have emerged.
The ground has shifted: the old playbook is out, the new playbook is in. It’s challenging, exciting, and we wouldn’t have it any other way.
To recap our core belief, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.
Or, more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
So, if you’ve built interesting deep tech in the next frontier of computing, we invest globally and can help you turn it into a product. If you have a product, we can help you turn it into a massively successful business. If this sounds like you, reach out.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
Fibra is developing smart underwear embedded with proprietary textile-based sensors for seamless, non-invasive monitoring of previously untapped vital biomarkers. The technology provides continuous, accurate health insights—all within the comfort of everyday clothing. Learning from user data, it then delivers personalized guidance, helping women track, plan, and optimize their reproductive health with ease. This AI-driven approach enhances the precision and effectiveness of health monitoring, empowering users with actionable information tailored to their unique needs.
Fibra has already collected millions of data points with its product, further strengthening its AI capabilities and improving the accuracy of its health insights. While Fibra’s initial focus is female fertility tracking, its platform has the potential to expand into broader areas of women’s health, including pregnancy detection and monitoring, menopause, detection of STDs and cervical cancer, and many more, fundamentally transforming how we monitor and understand our bodies.
Perfect Founder-Market Fit
Fibra was founded by Parnian Majd, an exceptional leader in biomedical innovation. She holds a Master of Engineering in Biomedical Engineering from the University of Toronto and a Bachelor’s degree in Biomedical Engineering from TMU. Her achievements have been widely recognized, including being an EY Women in Tech Award recipient, a Rogers Women Empowerment Award finalist for Innovation, and more.
We are thrilled to support Parnian and the Fibra team as they push the boundaries of AI-driven smart textiles and health monitoring. We are entering a golden age of deep-tech innovation and software-hardware convergence—a space we are excited to champion at Two Small Fish Ventures.
Stay tuned as Fibra advances its mission to empower women through cutting-edge health technology.
The Two Small Fish team is thrilled to announce our investment in Hepzibah AI, a new venture founded by Untether AI’s co-founders, serial entrepreneurs Martin Snelgrove and Raymond Chik, along with David Lynch and Taneem Ahmed. Their mission is to bring next-generation, energy-efficient AI inference technologies to market, transforming how AI compute is integrated into everything from consumer electronics to industrial systems. We are proud to be the lead investor in this round, and I will be joining as a board observer to support Hepzibah AI as they build the future of AI inference.
The Vision Behind Hepzibah AI
Hepzibah AI is built on the breakthrough energy-efficient AI inference compute architecture pioneered at Untether AI—but takes it even further. In addition to pushing performance per watt harder, it can handle training workloads such as distillation, and it provides supercomputer-style networking on-chip. Their business model focuses on providing IP and core designs that chipmakers can incorporate into their system-on-chip designs. Rather than manufacturing AI chips themselves, Hepzibah AI will license its advanced AI inference IP for integration into a wide variety of devices and products.
Hepzibah AI’s tagline, “Extreme Full-stack AI: from models to metals,” perfectly encapsulates their vision. They are tackling AI from the highest levels of software optimization down to the most fundamental aspects of hardware architecture, ensuring that AI inference is not only more powerful but also dramatically more efficient.
Why does this matter? AI is rapidly becoming as indispensable as the CPU has been for the past few decades. Today, many modern chips, especially system-on-chip (SoC) devices, include a CPU or MCU core, and increasingly, those same chips will require AI capabilities to keep up with the growing demand for smarter, more efficient processing.
This approach allows Hepzibah AI to focus on programmability and adaptable hardware configurations, ensuring they stay ahead of the rapidly evolving AI landscape. By providing best-in-class AI inference IP, Hepzibah AI is in a prime position to capture this massive opportunity.
An Exceptional Founding Team
Martin Snelgrove and Raymond Chik are luminaries in this space—I’ve known them for decades. David Lynch and Taneem Ahmed also bring deep industry expertise, having spent years building and commercializing cutting-edge silicon and software products.
Their collective experience in this rapidly expanding, soon-to-be ubiquitous industry makes investing in Hepzibah AI a clear choice. We can’t wait to see what they accomplish next.
P.S. You may notice that the logo is a curled skunk. I’d like to highlight that the skunk’s eyes are zeros from the MNIST dataset. 🙂
I’d like to extend my heartfelt congratulations to Richard Sutton, co-founder of Openmind Research Institute and a pioneer in Reinforcement Learning, for being honoured with the 2024 Turing Award—often described as the “Nobel Prize of Computing.” This accolade reflects his groundbreaking contributions, which have shaped modern AI across a wide spectrum of applications, from LLMs to robotics and everything in between. His influence resonates throughout classrooms, research, and everyday life worldwide.
As a self-professed science nerd, I’ve had the privilege and honour of working with him through the Openmind board. Rich co-founded Openmind alongside Randy Goebel and Joseph Modayil as a non-profit focused on conducting fundamental AI research to better understand minds. We believe that the greatest breakthroughs in AI are still ahead of us, and that basic research lays the groundwork for future commercial and technological innovations.
A core principle of Openmind—and a guiding philosophy of its co-founders—is a commitment to open research: there are no intellectual property restrictions on its work, ensuring everyone can contribute to and build upon this shared body of knowledge. Rich’s vision and dedication continue to inspire researchers and practitioners around the world to push the boundaries of AI and openly share their insights. This Turing Award is a well-deserved recognition of his transformative impact, and I can’t wait to see the breakthroughs that lie ahead as his work continues to redefine our understanding of intelligence.
“Deep Tech” is one of those terms that gets thrown around a lot in venture capital and startup circles, but defining it precisely is harder than it seems. If you check Wikipedia, you’ll find this:
Deep technology (deep tech) or hard tech is a classification of organization, or more typically a startup company, with the expressed objective of providing technology solutions based on substantial scientific or engineering challenges. They present challenges requiring lengthy research and development and large capital investment before successful commercialization. Their primary risk is technical risk, while market risk is often significantly lower due to the clear potential value of the solution to society. The underlying scientific or engineering problems being solved by deep tech and hard tech companies generate valuable intellectual property and are hard to reproduce.
At a high level, this definition makes sense. Deep tech companies tackle hard scientific and engineering problems, create intellectual property, and take time to commercialize. But what do substantial scientific or engineering challenges actually mean? Specifically, what counts as substantial? “Substantial” is a vague word. A difficult or time-consuming engineering problem isn’t necessarily a deep tech problem. There are plenty of startups that build complex technology but aren’t what I’d call deep tech. It’s about tackling problems where existing knowledge and tools aren’t enough.
In 1964, Supreme Court Justice Potter Stewart famously said, “I know it when I see it” when asked to describe his test for obscenity in Jacobellis v. Ohio. By no means am I comparing deep tech to obscenity—I don’t even want to put these two things in the same sentence. However, there is a parallel between the two: they are both hard to put into a strict formula, but experienced technologists like us recognize deep tech when we see it.
So, at Two Small Fish, we have developed our own simple rule of thumb:
If we see a product and say, “How did they do that?” and upon hearing from the founders how it is supposed to work, we still say, “Team TSF can’t build this ourselves in 6–12 months,” then it’s deep tech.
At TSF, we invest in the next frontier of computing and its applications. We’re not just looking for smart founders. We’re looking for founders who see things others don’t—who work at the edge of what’s possible. And when we find them, we know it when we see it.
This test has been surprisingly effective. Every single investment we’ve made in the past few years has passed it. And I expect it will continue to serve us well.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This is the picture I used to open our 2024 AGM a few months ago. It highlights how drastically the landscape has changed in just the past couple of years. I told a similar story to our LPs during the 2023 AGM, but now, the pace of change has accelerated even further, and the disruption is crystal clear.
The following outlines the reasons behind one of the biggest shifts we identified as part of our Thesis 2.0 two years ago.
Like many VCs, we evaluate pitches from countless companies daily. What we’ve noticed is a significant rise in startups that are nearly identical to one another in the same category. Once, I quipped, “This is the fourth one this week—and it’s only Tuesday!”
The reason for this explosion is simple: the cost of starting a software company has plummeted. What once required $1–2M of funding to hire a small team can now be achieved by two founders (or even a solo founder) with little more than a laptop or two and a $20/month subscription to ChatGPT Pro (or your favourite AI coding assistant).
With these tools, founders can build, test, and iterate at unprecedented speeds. The product build-iterate-test-repeat cycle is insanely short. If each iteration is a “shot on goal,” the $1–2M of the past bought you a few shots within a 12–18 month runway. Today, that $20/month can buy you a shot every few hours.
This dramatic drop in costs, coupled with exponentially faster iteration speeds, has led to a flood of startups entering the market in each category. Competition has never been fiercer. This relentless pace also means faster failures, and the startup graveyard is now overflowing.
For early-stage investors, picking winners from this influx of startups has become significantly harder. In the past, you might have been able to identify the category winner out of 10 similar companies. Now, it feels like mission impossible when there are hundreds—or even thousands—of startups in each category. Many of them are even invisible, flying under the radar for much longer because they don’t need to fundraise.
Of course, there will still be many new billion-dollar companies. In fact, I am convinced that this AI-driven platform shift will produce more billion-dollar winners than ever—across virtually every established category and entirely new ones that don’t yet exist. But with thousands of startups in each category, the sheer numbers make spotting them harder than ever.
If you’re using the same lens that worked in the past to spot and fund these future tech giants, good luck.
That’s why, for a long time now, we’ve been using a very different lens to identify great opportunities with highly defensible moats to stay ahead of the curve. For example, we’ve been exclusively focused on deep tech—a space where we know we have a clear edge. From technology to product to operations, we have the experience to cover the full spectrum and support founders through the unique challenges of building deep tech startups. So far, this approach has been working really well for us.
I guess we are taking our own advice. As a VC firm, we also need to be constantly improving and striving to be unrecognizable every two years!
There’s no doubt the rules of early-stage VC have shifted. How we access, assess, and assist startups has evolved dramatically. The great AI democratization is affecting all sectors, and venture capital is no exception.
For investors who can adapt, this is a time of unparalleled opportunity—perhaps the greatest era yet in tech investing. The playing field has been levelled, and massive disruption (and therefore opportunities) lies ahead. Incumbents are vulnerable, and new champions will emerge in each category – including VC!
Investing during this platform shift is both exciting and challenging. And I wouldn’t want it any other way, because those who figure it out will be handsomely rewarded.
The next frontier of AI lies at the edge — where data is generated. By moving AI toward the edge, we unlock real-time, efficient, and privacy-focused processing, opening the door to a wave of new opportunities. One of our most recent investments, Applied Brain Research (ABR), is leading this revolution by bringing “cloud-level” AI capabilities to edge devices.
Why is this important? Billions of power-constrained devices require substantial AI processing. Many of these devices operate offline (e.g., drones, medical devices, and industrial equipment), have access only to unreliable, slow, or high-latency networks (e.g., wearables and smart glasses), or must process data streams in real time (e.g., autonomous vehicles). Due to insufficient on-device capability, the only solution today is to send data to the cloud — a suboptimal or outright infeasible approach.
How does ABR solve this? ABR’s groundbreaking technology addresses these challenges by delivering “cloud-sized” high-performance AI on compact, ultra-low-power devices. This shift is transforming industries such as consumer electronics, healthcare, automotive, and a range of industrial applications, where latency, reliability, energy efficiency, and localized intelligence are essential.
What is ABR’s secret sauce? ABR’s unique approach is rooted in computational neuroscience. Co-founded by Dr. Chris Eliasmith, CTO and Head of the University of Waterloo’s Computational Neuroscience Research Group, ABR leverages the Legendre Memory Unit (LMU), a brain-inspired invention from Dr. Eliasmith and his research team. LMUs are provably optimal for compressing time-series data—like voice, video, sensor data, and bio-signals—enabling significant reductions in memory usage. Running the LMU on ABR’s unique processor architecture has created a breakthrough that “kills three birds with one stone” by:
1. Increasing performance,
2. Reducing power consumption by up to 200x, and
3. Cutting costs by 10x.
This is further turbocharged by ABR’s AI toolchain, which enables customers to deploy solutions in weeks instead of months. Time is money, and ABR’s technology allows for advanced on-device functions—like natural language processing—without relying on the cloud. This unlocks entirely new use cases and possibilities.
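For the technically curious, here is a minimal sketch of the core idea behind an LMU-style memory, following the published formulation in spirit: a small state vector whose linear dynamics, derived from Legendre polynomials, compress a sliding window of a time series. It is an illustration only, not ABR’s implementation.

```python
import numpy as np

# Minimal sketch of an LMU-style memory cell (after Voelker, Kajic & Eliasmith, 2019).
# A small state vector m compresses the last `theta` seconds of a 1-D signal u(t)
# using linear dynamics derived from Legendre polynomials. Illustrative only,
# not ABR's production implementation.

def lmu_matrices(order: int, theta: float, dt: float):
    """Euler-discretized (A, B) for the LMU delay dynamics: theta * dm/dt = A m + B u."""
    Q = np.arange(order)
    A = np.zeros((order, order))
    for i in range(order):
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    B = ((2 * Q + 1) * (-1.0) ** Q).reshape(-1, 1)
    # Simple Euler discretization; production systems typically use zero-order hold.
    A_bar = np.eye(order) + (dt / theta) * A
    B_bar = (dt / theta) * B
    return A_bar, B_bar

order, theta, dt = 6, 1.0, 0.01           # 6-dim memory of the last 1 second, sampled at 100 Hz
A_bar, B_bar = lmu_matrices(order, theta, dt)

m = np.zeros((order, 1))
for t in range(500):                       # feed a toy sensor stream
    u = np.sin(2 * np.pi * 0.5 * t * dt)   # placeholder signal
    m = A_bar @ m + B_bar * u              # a handful of numbers summarize the whole window

print("compressed window state:", m.ravel().round(3))
```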
At the helm of ABR is Kevin Conley, the CEO and a former CTO of SanDisk, alongside Dr. Chris Eliasmith. Together, they bring exceptionally strong leadership across both hardware and software domains—a rare but powerful combination that gives ABR a significant competitive advantage.
ABR’s vision aligns perfectly with our investment thesis and our belief that edge computing and software-hardware convergence represent the next frontier of opportunity in computing. We’re excited to see ABR power billions of devices in the years to come.
There are three distinct phases in the journey of building a great tech company: technology, product, and commercialization. These phases are sequential yet interconnected and sometimes overlap. Needless to say, mastering each is critical to the company’s eventual success. However, it’s important to recognize their differences.
• Building technology is about founders creating what they love. It’s driven by passion and expertise and often leads to groundbreaking innovations.
• Building a product is about creating something others love to use. This is where usability and solving real problems come into focus.
• Commercialization is about building something people will pay for and driving revenue. This phase transforms users into paying customers or finds someone else to pay for it, such as advertisers.
These phases are related but distinct. Great technology doesn’t guarantee anyone will use it, and a widely-used product doesn’t always lead to revenue. I’ve seen many technologists create incredible technologies no one adopts, as well as popular products that fail to commercialize effectively (though it’s rare for a product with tens of millions of users to fail entirely).
For deep tech companies, these phases often have minimal overlap and unfold sequentially. The technology might take years to develop before a usable product emerges, and commercialization may come even later.
In contrast, shallow tech B2B SaaS products often see complete overlap between the phases. For example, a subscription model is typically apparent from the outset, and the tech, product, and commercialization phases blend seamlessly.
Wattpad is also a good example of how these phases can play out differently. Initially, we built our technology and product hand in hand, creating a platform loved by millions of users. However, its commercialization—whether through ads, subscriptions, or movies, the three revenue models we had—was deliberately delayed. Many people assumed we didn’t know how to make money without understanding this counterintuitive approach (but of course, we purposely kept some of our strategies under wraps). This approach allowed us to use “free” as a potent weapon to dominate—and eliminate—our competitors in a winner-takes-all strategy. Operating for years with minimal revenue was clearly the right decision for the market dynamics and our long-term goals. More on this in a separate blog post.
Given this variability, asking, “What is your revenue?” must be thoughtful and context-specific. For some companies, the absence of revenue may be an intentional and brilliant strategy. For others, insufficient revenue could signal serious trouble. It all depends on the company’s stage, strategy, and goals. Understanding the sequence, timing, and specific needs of a business model is crucial for both investors and entrepreneurs. Zero revenue could be a blessing in the right context. On the other hand, pushing for revenue growth—let alone the wrong type of revenue growth—can be fatal, a scenario we’ve seen many times.
At Two Small Fish Ventures, we are very thoughtful and experienced investors. We understand that starting to generate revenue—or choosing not to generate revenue—at the right time is one of the secrets to success that very few people have mastered. We practise what we preach. Over the past two years, all but one of TSF’s investments have been pre-revenue.
No revenue? No problem. In fact, that’s great. Bring them on!
Those who know me well would tell you I am a pretty boring person. I don’t have many hobbies, but one thing I do love is gadgets. For instance, I’m a big fan of DIY home automation. Practically every electronic device in my house is voice-controlled, automated, and Wi-Fi-connected—if it can be, it probably is. Here’s a fun example:
I love robots doing things for me because, frankly, I’m too busy.
At this rate, I might run out of IP addresses! Sure, I could change my network’s subnet to enable more, but every time I tinker with my setup, I have to invest time getting everything right again—something I don’t have in abundance. Anyway, I digress.
One gadget I’ve wanted for years but hesitated to get is a home energy storage and backup system, like Tesla’s Powerwall. The Powerwall 2 has been around since 2016, but for years, the Powerwall 3 was “just around the corner,” with rumours of its launch “next month” seemingly every month. I didn’t want to invest in a device I planned to use for a decade only for it to become obsolete right after I bought it.
Finally, the wait is over. Powerwall 3 became available earlier this year, and I’m glad I waited. Its specs—peak power, continuous power, and efficiency—are significantly upgraded from Powerwall 2. That said, I was a little disappointed that its battery capacity remained unchanged.
I’m told this was the first Powerwall 3 installation in Canada, which is pretty exciting! It’s a beautiful piece of technology, though I don’t see much of it since it’s tucked away in the basement. Paired with solar panels, I hope to go “off the grid” as much as possible.
As good as the Powerwall 3 is, it’s only part of the solution. While it handles storage and backup very well, it doesn’t provide fine-grained energy monitoring, let alone control. To address this, I also installed a Sense energy monitor. This device, connected to the electrical panel, collects real-time data from electrical currents to identify unique energy signatures for every appliance and device in the home. It’s a retrofit hack and far from perfect, but it’s probably the best option for someone like me, who is entrenched in the Tesla ecosystem.
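As a toy illustration of what an “energy signature” can mean in its simplest form (this is not Sense’s actual algorithm), load disaggregation can start from detecting step changes in whole-home power draw and matching them against known appliance wattages:

```python
# Toy illustration of load disaggregation ("energy signatures"), not Sense's algorithm:
# detect step changes in whole-home power and match them to known appliance wattages.

known_loads = {"kettle": 1500, "fridge compressor": 150, "EV charger": 7200}  # watts, illustrative

def label_step(delta_watts: float, tolerance: float = 0.15) -> str:
    """Match a power step to the closest known appliance within a relative tolerance."""
    for name, watts in known_loads.items():
        if abs(abs(delta_watts) - watts) <= tolerance * watts:
            state = "on" if delta_watts > 0 else "off"
            return f"{name} turned {state}"
    return "unknown load"

# Simulated whole-home power readings (watts), sampled once per second.
readings = [400, 410, 1910, 1905, 405, 395, 7600, 7590]
for prev, curr in zip(readings, readings[1:]):
    delta = curr - prev
    if abs(delta) > 100:                    # ignore small fluctuations
        print(f"step of {delta:+d} W -> {label_step(delta)}")
```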
The energy space hasn’t changed much in the past half-century. Take the electric panel, for example—it’s still essentially the same analog system I remember from my childhood. However, with the rapid acceleration of the energy transition, smarter energy systems are becoming critical as hardware and software converge to enable new possibilities.
A big thanks to James and Dave from the Borealis Clean Energy team for helping me with this project — and for arriving in style with Canada’s first Cybertruck. The project has so many moving parts. Their expertise made this journey much smoother.
Photo captions:
• Unboxing PW3!
• Zooming in to the power electronics.
• The electricians are working hard. It is a big job!
• It is done!
• A big thank you to James.
• This is the Tesla Gateway, a separate box we need to install. It is a smaller box—roughly a quarter of the size of PW3—and where “the brain” is located.
• Adding Sense – the orange box – to my old-school electric panel to help me with device-level monitoring.
• First Cybertruck in Canada. This thing draws attention.
More than two decades ago, before I started my first company, I was involved with an internet startup. Back then, the internet was still in its infancy, and most companies had to host their own servers. The upfront costs were daunting—our startup’s first major purchase was hundreds of thousands of dollars in Sun Microsystems boxes that sat in our office. This significant investment was essential for operations but created a massive barrier to entry for startups.
Fast forward to 2006 when we started Wattpad. We initially used a shared hosting service that cost just $5 per month. This shift was game-changing, enabling us to bootstrap for several years before raising any capital. We also didn’t have to worry about maintaining the machines. It dramatically lowered the barrier to entry, democratizing access to the resources needed to build a tech startup because the upfront cost of starting a software company was virtually zero.
Eventually, as we scaled, we moved to AWS, which was more scalable and reliable. Apparently, we were AWS’s first customer in Canada at the time! It became more expensive as our traffic grew, but we still didn’t have to worry about maintaining our own server farm. This significantly simplified our operations.
A similar evolution has been happening in the semiconductor industry for more than two decades, thanks to the fabless model. Fabless chip manufacturing allows companies—large or small—to design their semiconductors while outsourcing fabrication to specialized foundries. Startups like Blumind leverage this model, focusing solely on designing groundbreaking technology and scaling production when necessary.
But fabrication is not the only capital-intensive aspect. There is also the need for other equipment once the chips are manufactured.
During my recent visit to ventureLAB, where Blumind is based, I saw firsthand how these startups utilize shared resources for this additional equipment. Not only is Blumind fabless, but they can also access various hardware equipment at ventureLAB without the heavy capital expenditure of owning it.
Photo captions:
• Let’s see how the chip performs at -40C!
• Jackpine (first tapeout)
• Wolf (second tapeout)
• BM110 (third tapeout)
The common perception that semiconductor startups are inherently capital-intensive couldn’t be more wrong. The fabless model—in conjunction with organizations like ventureLAB—functions much like cloud computing does for software startups, enabling semiconductor companies to build and grow with minimal upfront investment. For the most part, all they need initially are engineers’ computers to create their designs until they reach a scale that requires owning their own equipment.
Fabless chip design combined with shared resources at facilities like ventureLAB is democratizing the semiconductor space, lowering the barriers to innovation, and empowering startups to make significant advancements without the financial burden of owning fabrication facilities. Labour costs aside, the upfront cost of starting a semiconductor company like Blumind could be virtually zero too.
The history of computing has been a constant shift of the centre of gravity.
When mainframe computers were invented in the middle of the last century, they were housed in air-conditioned, room-sized metal boxes that occupied thousands of square feet. People accessed these computers through dumb terminals, little more than black-and-white screens and keyboards hooked to the mainframe through long cables. They were called dumb terminals because all the smarts lived on the mainframe.
These computers worked in silos. Computer networks were very primitive. Data was mainly transferred through (physical!) punch cards and tapes.
The business model was selling hardware. During that era, giants like IBM and Wang emerged, and many subsequently submerged.
Hardware was the champion.
Mainframe computers in the 50s. Image source: Wikipedia
The PC era, which started in the 80s and went into overdrive in the 90s, ended the reign of the mainframe. As computers became much faster while prices dropped by orders of magnitude, access to computing became democratized, and computers appeared on every desktop. We wanted these computers to talk to each other. Punch cards clearly no longer worked as there were millions of computers now. As a result, LANs (local area networks) were popularized by companies like Novell, which enabled the client/server architecture. Unlike the previous era, the “brains” were decentralized, with clients doing much of the heavy lifting. Servers still played a role, but for the most part, it was for centralized storage.
Although IBM invented the PC, the business models shifted, creating the duopoly of Intel (and, by association, companies like Compaq) and Microsoft, with the latter capturing even more value than the former. The software era had begun.
Software became the champion. Hardware was dethroned to the runner-up.
Then, from the late 90s to the 2010s, the (broadband) web, mobile, and cloud computing came along. Connectivity became much less of an issue. Clients, especially your phones, continued to improve at a fast pace, but the capability of servers increased even faster. The “brains” shifted back to the server, since that’s where the data was centralized. For the most part, clients were now responsible for the user experience, important but merely a means to an end (collecting data) rather than an end in themselves.
Initially, it appeared that the software-hardware duopoly would continue as companies like Netscape and Cisco were red hot, only to be dethroned by companies like Yahoo and AOL and later Google and Meta. Software and hardware were still crucial, but they became the enablers as the business model once again shifted.
Data became the newly crowned champion.
Fast forward to now: the latest—and arguably the greatest of all time—platform shift, powered by generative AI, is upon us. The ground beneath us is shifting again. On a per-user basis, generative AI demands orders of magnitude more energy. At a time when data centres already consume more electricity than many countries, that consumption is set to double again within two years, to roughly the electricity consumption of Japan. The lean startup era is gone. AI startups need to raise much more capital upfront than previous generations of startups because of the enormous cost of compute.
Expecting servers in data centres to do all the heavy lifting is not sustainable in the long term, for many reasons. The “brains” have once again started to shift back to the clients at the edge, and it is already happening. For instance, Tesla’s self-driving decisions are not going to make the round trip to its servers; otherwise, the latency would make those split-second decisions a second too late. Another example: most people may not realize it, but Apple is already an edge computing company, as its chips have had AI capabilities for years. Imagine how much more developers could do on your iPhone—at no cost to them—instead of paying a cloud provider to run some AI. That would be the Napster moment for AI companies!
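A quick back-of-the-envelope on the latency point: at highway speed, a cloud round trip eats real distance before any answer comes back. The numbers below are illustrative, not an actual latency budget.

```python
# Why split-second decisions can't make a cloud round trip: distance traveled
# while waiting for a response. Illustrative numbers, not Tesla's latency budget.

speed_kmh = 100.0                    # highway speed
speed_ms = speed_kmh * 1000 / 3600   # about 27.8 m/s

for round_trip_ms in (50, 150, 500):
    meters = speed_ms * (round_trip_ms / 1000)
    print(f"{round_trip_ms:>3} ms round trip at {speed_kmh:.0f} km/h -> ~{meters:.1f} m traveled")
# On-device inference removes that round trip entirely.
```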
Inevitably, now that almost every device can run some AI and is connected, things will be more decentralized.
In past eras, computing architectures evolved due to the constraints of—or the liberation of—computing capabilities, connectivity, or power consumption. The landscape has once again shifted. Like past platform shifts, there will be a new world order. The playing field will be levelled. Rules will be rewritten. Business models will be reinvented. Most excitingly, new giants will be created.
Every. Single. Time.
Seeing the future is our superpower. That’s why, a while ago, we at Two Small Fish Ventures revised our thesis. Now, it is all about investing in the next frontier of computing and its applications, with edge computing as an important part of it. Our recent investments have been all-in on this thesis. If you are a founder of an early-stage, rule-rewriting company that is taking advantage of this massive platform shift, don’t hesitate to reach out to us. We love backing category creators in massive market opportunities.
We are all engineers, product builders and company creators. We know how things work. Let’s build the next champion together!
Update: This blog post was published just before Apple announced Apple Intelligence. I knew nothing about Apple Intelligence at that time. It was purely a coincidence. However, it did validate what I said.