For most of semiconductor history, progress was a simple loop. Shrink transistors. Fit more into the same area. Get faster compute as a byproduct.
That loop had a name. Moore’s Law. It traces back to Intel co-founder Gordon Moore, who observed in the 1960s that the number of transistors on a chip, and hence its capability, tended to double roughly every two years. The industry turned that observation into a roadmap. It was never guaranteed to run forever. Now shrinking is harder: we are running into hard limits in both physics and economics, and the cost of pushing the frontier keeps rising.
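To get a feel for how steep that curve is, here is a minimal sketch of the doubling rule. It assumes a clean two-year doubling period and uses the Intel 4004’s roughly 2,300 transistors (1971) as a starting point; the real cadence varied.

```python
# Minimal sketch of Moore's Law as a simple doubling rule.
# Assumes a clean two-year doubling period; the real cadence varied.

def transistor_count(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, starting from `n0`."""
    return n0 * 2 ** (years / doubling_period)

# Roughly 2,300 transistors (the Intel 4004, 1971) projected forward 50 years:
print(f"{transistor_count(2_300, 50):,.0f}")  # ~77 billion, the right ballpark for 2021-era chips
```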
So if the curve is going to keep bending upward, the industry needs new scaling vectors beyond making everything smaller in two dimensions.
This is why Two Small Fish invested in Zinite in 2021 at the company’s inception. The thesis was simple then, and it is still simple now. Scale in the third dimension, using patented, proprietary technology to enable true 3D chips.
Zinite deliberately stayed in stealth early on, focused on building the core technology and protecting it properly before saying too much. Five years after we invested, we can finally talk about it more openly.
The company is led by CEO Dr. Gem Shoute. Fun fact: her breakthrough was strong enough that her professors, Dr. Doug Barlage and Dr. Ken Cadien, industry veterans who helped create fundamental IP used in all chips since 2008, joined her as co-founders.
The Distance Tax
In a recent blog post, I used a factory analogy to explain why speed, latency, and energy are often bottlenecked by movement, not necessarily arithmetic.
In short, systems don’t lose because they can’t do math. GPUs are already very good at that. Systems lose speed because they can’t feed the math with data fast enough.
In many systems, moving data costs far more than doing the arithmetic. When movement is expensive, speed and energy efficiency get worse together.
AI inference exacerbates the problem because its computational characteristics put a premium on memory behaviour. In many cases, the limiting factor is not arithmetic. It is how efficiently the system can move data. Bringing memory closer to logic matters because it directly reduces that movement.
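A back-of-the-envelope check makes the point. The matrix shape and accelerator figures in the sketch below are illustrative assumptions, not measurements of any particular chip:

```python
# Why inference is often memory-bound: compare how long the math takes with how
# long the data movement takes. All figures are illustrative assumptions.

def matvec_times(rows: int, cols: int, bytes_per_weight: int,
                 peak_flops: float, mem_bw: float) -> tuple[float, float]:
    """Return (compute-limited time, memory-limited time) in seconds for one
    matrix-vector multiply, as in single-token inference."""
    flops = 2 * rows * cols                       # one multiply + one add per weight
    bytes_moved = rows * cols * bytes_per_weight  # weights dominate traffic at batch size 1
    return flops / peak_flops, bytes_moved / mem_bw

# Assumed accelerator: 100 TFLOP/s of compute, 1 TB/s of memory bandwidth.
t_compute, t_memory = matvec_times(rows=8192, cols=8192, bytes_per_weight=2,  # fp16 weights
                                   peak_flops=100e12, mem_bw=1e12)
print(f"compute-limited: {t_compute*1e6:.1f} us, memory-limited: {t_memory*1e6:.1f} us")
# Prints roughly 1.3 us vs 134 us: the memory side is ~100x slower, so the math
# units spend most of their time waiting for data.
```

With an arithmetic intensity of about one operation per byte against a machine balance of roughly one hundred, the chip waits on memory, not math. That is the distance tax in numbers.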
Sensing fits in the same frame as logic and memory. Sensors generate raw data at high volume. If the system’s first step is to ship raw data far away before anything useful happens, it pays in bandwidth, latency, and power. The more intelligence that can happen closer to where data is produced, the less the system wastes just transporting information.
So the distance tax is one big problem showing up in three places at once. Logic. Memory. Sensing.
Why 3D Matters for Speed and Energy
When people hear 3D chips, they think density. More transistors per area. That matters. The bigger lever is proximity. Current 3D approaches deliver more performance per area through advanced packaging, but they are held back by cost and still pay the distance tax.
If memory can live closer to logic, the system avoids transfers that dominate both performance and power. If compute and memory can sit closer to sensing, the system avoids hauling raw streams around before doing anything intelligent.
Every avoided transfer is a double win. Speed improves because stalls go down and effective bandwidth goes up. Energy improves because fewer joules are burned moving bits instead of doing work.
That is the two birds, one stone result.
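Here is what that double win looks like in rough energy terms. The picojoule costs in the sketch below are order-of-magnitude assumptions rather than vendor figures, but the imbalance they illustrate is the point:

```python
# Rough energy budget for a single inference pass, split into "doing the math"
# and "moving the data". The per-operation costs are assumed orders of magnitude.
PJ_PER_MAC       = 1.0    # assumed cost of one on-chip multiply-accumulate
PJ_PER_DRAM_BYTE = 100.0  # assumed cost of moving one byte from off-chip DRAM

def energy_millijoules(macs: float, dram_bytes: float) -> tuple[float, float]:
    """Return (compute energy, data-movement energy) in millijoules."""
    return macs * PJ_PER_MAC * 1e-9, dram_bytes * PJ_PER_DRAM_BYTE * 1e-9

# A ~1B-parameter model at batch size 1: ~1e9 multiply-accumulates, streaming
# ~2e9 bytes of fp16 weights from DRAM.
compute_mj, movement_mj = energy_millijoules(macs=1e9, dram_bytes=2e9)
print(f"compute: {compute_mj:.0f} mJ, data movement: {movement_mj:.0f} mJ")  # ~1 mJ vs ~200 mJ
# Shortening how far the data travels attacks the dominant term, which is why
# bringing memory (and sensing) closer to logic pays off twice.
```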
Five years after we invested, Zinite is far from just a concept. The company is doing exceptionally well, and it represents the kind of platform that can extend performance gains into the post-Moore era by reducing the distance tax, not by asking physics for more shrink, but by making data travel less.
Driven by rapid advances in AI, the collapse in the cost of intelligence has arrived—bringing massive disruption and generational opportunities.
Building on this platform shift, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.
Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
Watch this 2-minute video to learn more about our approach:
Our Evolution: From Network Effects to Deep Tech
When we launched TSF in 2015, our initial thesis centred around network effects. Drawing from our experience scaling Wattpad from inception to 100 million users, we became experts in understanding and leveraging exponential value and defensibility created by network effects at scale. This expertise led us to invest—most as the very first cheque—in massively successful companies such as BenchSci, Ada, Printify, and SkipTheDishes.
We achieved world-class success with this thesis, but like all good things, that opportunity diminished over time.
Our thesis evolved as the ground shifted toward the end of the 2010s. A couple of years ago, we articulated this evolution by focusing on early-stage products, platforms, and protocols that transform user behaviour and empower businesses and individuals to unlock new value. Within this broad focus, we zoomed in on three sectors: AI, decentralized protocols, and semiconductors. That thesis guided investments in great companies such as Story, Ideogram, Zinite, and Blumind.
But the world doesn’t stand still. In fact, it has never changed so rapidly. This brings us to the next and even more significant shift shaping our thesis.
A New Platform Shift: The Cost of Intelligence is Collapsing
Reflecting on the internet era, the core lesson we learned was that the internet was the first technology in human history that was borderless, connected, ubiquitous, real-time, and free. At its foundation was connectivity, and as “the cost of connectivity” steadily declined, productivity and demand surged, creating a virtuous cycle of opportunities.
The AI era shows remarkable parallels. AI is the first technology capable of learning, reasoning, creativity, cross-domain functionality, and decision-making. Like connectivity in the internet era, “the cost of intelligence” is now rapidly declining, while the value derived from intelligence continues to surge, driving even greater demand.
This shift will create massive economic value, shifting wealth away from many incumbents and opening substantial investment opportunities. However, just like previous platform shifts, the greatest opportunities won’t come from digitizing or automating legacy workflows, but rather from completely reshaping workflows and user behaviour, democratizing access, and unlocking previously impossible value. These disruptive opportunities will expand into adjacent areas, leaving incumbents defenceless as the rules of the game fundamentally change.
Intelligence Beyond Traditional Computing Devices
AI’s influence now extends far beyond pre-programmed software on computing devices. Machines and hardware are becoming intelligent, leveraging collective learning to adapt in real-time, with minimal predefined instruction. As we’ve stated before, software alone once ate the world; now, software and hardware together consume the universe. The intersection of software and hardware is where many of the greatest opportunities lie.
As AI models shrink and hardware improves, complex tasks run locally and effectively at the edge. Your phone and other edge devices are rapidly becoming the new data centres, opening exciting new possibilities.
Democratization and a New Lens on Defensibility
The collapse in the cost of intelligence, accelerated further by open-source tools, has democratized everything, including software development. While this democratization unlocks vast opportunities, competition also intensifies. It may be a land grab, but not all opportunities are created equal. The key is knowing which “land” to seize.
Historically, infrastructure initially attracts significant capital, as seen in the early internet boom. Over time, however, much of the economic value tends to shift from infrastructure to applications. Today, the AI infrastructure layer is becoming increasingly commoditized, while the application layer is heavily democratized. That said, there are still plenty of opportunities to be found in both layers—many of them truly transformative. So, where do we find defensible, high-value opportunities?
Our previous thesis identified transformative technologies that achieved mass adoption, changed behaviour, democratized access, and unlocked unprecedented value. This framework remains true and continues to guide our evaluation of “100x” opportunities.
This shift in defensibility brings us to where the next moat lies.
New Defensibility: Deep Tech Meets Data Network Effects
Defensibility has changed significantly. In recent years, the pool of highly defensible early-stage shallow-tech opportunities has thinned considerably. As a result, we have clearly entered a golden age of deep tech. AI democratization provides capital-efficient access to tools that previously required massive budgets. Our sweet spot is identifying opportunities that remain difficult to build, ensuring they are not easily replicated.
As “full-spectrum specialists,” TSF is uniquely positioned for this new reality. All four TSF partners were engineers and startup leaders before becoming investors, with hands-on experience spanning artificial intelligence, semiconductors, robotics, photonics, smart energy, blockchain, and more. We are not just technical; we are also product people, having built and commercialized cutting-edge innovations ourselves. As a guiding principle, we only invest when our deep domain expertise can help startups scale effectively and rapidly cement their place as future industry-disrupting giants.
Moreover, while traditional network effects have diminished, AI has reinvigorated network effects, making them more potent in new ways. Combining deep tech defensibility with strong data-driven network effects is the new holy grail, and this is precisely our expertise.
What We Don’t Invest In
Although we primarily invest in “bits,” we will also invest in “bits and atoms,” but we won’t invest in “atoms only.” We also have a strong bias towards permissionless innovations, so we usually stay away from highly regulated or bureaucratic verticals with high inertia. Additionally, since one of our guiding principles is to invest only when we have domain expertise in the next frontier of computing, we won’t invest in companies whose core IP falls outside of our computing expertise. We also avoid regional companies, as we focus on backing founders who design for global scale from day one. We invest globally, and almost all our breakout successes such as Printify have users and customers around the world.
Where We’re Heading
Having recalibrated our thesis for this new era, here’s where we’re going next.
We have backed amazing deep tech founders pioneering AI, semiconductors, robotics, photonics, smart energy, and blockchain—companies like Fibra, Blumind, ABR, Axiomatic, Hepzibah, Story, Poppy, and Viggle—across consumer, enterprise, and industrial sectors. With the AI platform shift underway, many new and exciting investment opportunities have emerged.
The ground has shifted: the old playbook is out, the new playbook is in. It’s challenging, exciting, and we wouldn’t have it any other way.
To recap our core belief, TSF invests in the next frontier of computing and its applications, backing early-stage products, platforms, and protocols that reshape large-scale behaviour and unlock uncapped, new value through democratization. These opportunities are fueled by the collapsing cost of intelligence and, as a result, the growing demand for access to intelligence as well as its expansion beyond traditional computing devices. What makes them defensible are technology moats and, where fitting, strong data network effects.
Or more succinctly: We invest in the next frontier of computing and its applications, reshaping large-scale behaviour, driven by the collapsing cost of intelligence and defensible through tech and data moats.
So, if you’ve built interesting deep tech in the next frontier of computing, we invest globally and can help you turn it into a product. If you have a product, we can help you turn it into a massively successful business. If this sounds like you, reach out.
P.S. If you enjoyed this blog post, please take a minute to like, comment, subscribe and share. Thank you for reading!
This blog is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, redistribute, remix, transform, and build upon the material for any purpose, even commercially, as long as appropriate credit is given.
More than two decades ago, before I started my first company, I was involved with an internet startup. Back then, the internet was still in its infancy, and most companies had to host their own servers. The upfront costs were daunting—our startup’s first major purchase was hundreds of thousands of dollars in Sun Microsystems boxes that sat in our office. This significant investment was essential for operations but created a massive barrier to entry for startups.
Fast forward to 2006 when we started Wattpad. We initially used a shared hosting service that cost just $5 per month. This shift was game-changing, enabling us to bootstrap for several years before raising any capital. We also didn’t have to worry about maintaining the machines. It dramatically lowered the barrier to entry, democratizing access to the resources needed to build a tech startup because the upfront cost of starting a software company was virtually zero.
Eventually, as we scaled, we moved to AWS, which was more scalable and reliable. Apparently, we were AWS’s first customer in Canada at the time! It became more expensive as our traffic grew, but we still didn’t have to worry about maintaining our own server farm. This significantly simplified our operations.
A similar evolution has been happening in the semiconductor industry for more than two decades, thanks to the fabless model. Fabless chip manufacturing allows companies—large or small—to design their semiconductors while outsourcing fabrication to specialized foundries. Startups like Blumind leverage this model, focusing solely on designing groundbreaking technology and scaling production when necessary.
But fabrication is not the only capital-intensive aspect. Chips also need test and qualification equipment once they come back from the foundry.
During my recent visit to ventureLAB, where Blumind is based, I saw firsthand how these startups use shared resources for this additional equipment. Not only is Blumind fabless, but it can also access a wide range of shared equipment at ventureLAB without the heavy capital expenditure of owning it.
Let’s see how the chip performs at -40C!
Image: Jackpine (first tapeout).
Image: Wolf (second tapeout).
Image: BM110 (third tapeout).
The common perception that semiconductor startups are inherently capital-intensive couldn’t be more wrong. The fabless model—in conjunction with organizations like ventureLAB—functions much like cloud computing does for software startups, enabling semiconductor companies to build and grow with minimal upfront investment. For the most part, all they need initially are engineers’ computers to create their designs until they reach a scale that requires owning their own equipment.
Fabless chip design combined with shared resources at facilities like ventureLAB is democratizing the semiconductor space, lowering the barriers to innovation, and empowering startups to make significant advancements without the financial burden of owning fabrication facilities. Labour costs aside, the upfront cost of starting a semiconductor company like Blumind could be virtually zero too.
When it comes to watches, my go-to is a Fitbit. It may not be the most common choice, but I value practicality, and not having to recharge daily is a necessity for me. My Fitbit lasts about 4 to 5 days: decent, but still not perfect.
Now, imagine if we could extend that battery life to a month or even a year. The freedom and convenience would be incredible. Considering the immense computing demands of modern smartwatches, this might sound far-fetched. But that’s where our portfolio company, Blumind, comes into play.
Blumind’s ultra-low power, always-on, real-time, offline AI chip holds the potential to redefine how we think about battery life and device efficiency. This advancement enables edge computing with extended battery life, potentially lasting years – not a typo – instead of days. Products powered by Blumind can transform user behaviours and empower businesses and individuals to unlock new and impactful value (see our thesis).
Blumind’s secret lies in its brain-inspired, all-analog chip design. The human brain is renowned for its energy-efficient computing. Unlike most modern chips, which rely on digital systems and require continuous digital-to-analog and analog-to-digital conversions that drain power, Blumind’s approach emulates the brain’s seamless analog processing. This unique architecture is a natural fit for power-sensitive AI applications, resulting in chips that could be up to 1,000 times more energy-efficient than conventional ones, which makes them ideal for edge computing.
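To put that efficiency claim in perspective, here is some illustrative battery-life arithmetic. The battery capacity and power draws below are assumptions chosen for the example, not Fitbit or Blumind specifications:

```python
# Illustrative battery-life arithmetic for an always-on AI task.
# Capacity and power figures are assumptions, not product specifications.

def battery_life_days(capacity_mwh: float, avg_power_mw: float) -> float:
    """Days of runtime for a given battery capacity and average power draw."""
    return capacity_mwh / avg_power_mw / 24

CAPACITY_MWH = 600.0  # assumed small wearable battery (~160 mAh at 3.7 V)

# The always-on AI task alone, at an assumed 2 mW on a conventional digital chip
# versus roughly 1,000x less on an ultra-low-power analog design:
print(f"{battery_life_days(CAPACITY_MWH, 2.0):.0f} days")     # ~12 days
print(f"{battery_life_days(CAPACITY_MWH, 0.002):,.0f} days")  # ~12,500 days, i.e. decades
# In a real wearable the display, radio, and sensors still draw power, so the
# system-level gain depends on how much of the budget the AI workload consumes.
```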
Blumind’s breakthrough technology has practical and wide-ranging applications. Here are just a few use cases:
• Always-on Keyword Detection: Integrates into various devices for continuous voice activation without excessive power usage.
• Rapid Image Recognition: Supports always-on visual wake word detection for applications such as access control, enhancing human-device interaction with real-time responses.
• Time-Series Data Processing: Processes data streams with exceptional speed for real-time analysis in areas like predictive maintenance, health monitoring, and weather forecasting.
These capabilities unlock new possibilities across multiple industries, including wearables, smart home technology, security, agriculture, medical, smart mobility, and even military and aerospace.
A few weeks ago, I visited Blumind’s team at their ventureLAB office and got an up-close look at their BM110 chip, now in its third tapeout. Blumind exemplifies the future of semiconductor startups through its fabless model, which significantly lowers the initial infrastructure costs associated with traditional semiconductor companies. With resources like ventureLAB supporting them, Blumind has managed to innovate with remarkable efficiency and sustainability. (I’ll share more about the fabless model in an upcoming post.)
I’m thrilled to see where Blumind’s journey leads and how its groundbreaking technology will transform daily life and reshape multiple industries. When devices can go years without needing a recharge instead of mere hours, that’s nothing short of game-changing.
Image: Close-up view of BM110. It is a piece of art!
Image: Qualification in action. Note that BM110 (lower-left corner) is tiny and space-efficient.
Image: The Blumind team is working hard at their ventureLAB office. More on this in a separate blog post here.