The cost of new chip designs is growing exponentially. Will new tools like generative AI solve the gap?
Editor’s Note: After a gruesome earnings report from Intel on Thursday — leading to a drop of one-quarter of its value on Friday — as well as a general pullback in semiconductor stocks (the widely used PHLX Semiconductor Sector index is down 15.9% over the past month), I figured it was time to go deeper into the upside and downside risks of the chip sector in the decade ahead.
Lux general partner Shahin Farshchi and our summer associate Dario Soatto have brought together their collective wisdom on the future of automating chip design and how new technologies may contain the spiraling costs at ever-smaller chip nodes.
Semiconductors represent the bleeding edge of human ingenuity. They manifest our greatest mathematical insights and engineering prowess, powering everything from rockets and cars to hotel card keys. Chip fabs are the most advanced manufacturing facilities ever designed by mankind, and global politics are shaped by who has access to them — and who doesn’t.
The headlines are filled with geopolitical concerns around TSMC, one of the largest producers of semiconductors, and equipment makers such as ASML, the only company manufacturing the extreme ultraviolet lithography equipment required to make leading-edge chips. These are valid but overblown concerns: any country compromising TSMC will necessarily cause irreparable damage to its own economy.
Instead, we should focus on a less heralded but far more determinative factor in the future of the chip economy: the extraordinary amount of manpower and brainpower required to design chips, and their rapidly spiraling costs.
What you don’t hear about in the news is the pending labor shortage and rising wages in chip design. And I’m not talking about people in factories; I’m talking about people sitting behind desks. Designing even a single cutting-edge chip involves hundreds of full-time engineers working for months — long before a single transistor is fabricated.
As transistor densities increase with each technology node, automation in chip design is not keeping up with the laborious work of bringing a chip from idea to fab. Take a look at this chart, which roughly sketches how the various input costs of designing a new chip have increased exponentially over the past two decades:
Even as costs spiral, larger companies are looking to design their own chips, whether hyperscalers like Amazon and Google or established players like Intel and AMD serving a wider range of customers across desktop, mobile, connected devices and more. That demand for expert chip design talent has exacerbated a long-term shortage in the industry, raising salaries for workers but also increasing the cost of new chips and limiting the ambitions of companies seeking to expand their product lines.
Though not as sensational as China invading Taiwan or a single company like ASML failing and bringing the world to a halt, the impending brainpower shortage in chip design is a critical bottleneck for the future of the chip economy.
Automation in chip design today
Electronic design automation (EDA) tools in semiconductors are older than most people reading this article. These tools help chip engineers design a system architecture, test it for functionality, synthesize the logic and circuit schematics that implement a chip’s functions, lay out the transistors on silicon, and verify (again) that everything is doing what it’s supposed to be doing within very tight margins of error.
Currently, the EDA tool market is dominated by Cadence, Siemens EDA (through the acquisition of Mentor Graphics in 2017), and Synopsys. Each is worth many billions of dollars after having rolled up smaller EDA tool companies for many hundreds of millions of dollars. Collectively, these three firms represent over 70% of the global EDA market, with Cadence and Synopsys each controlling approximately 30% of the market, and Siemens around 15%.
These firms sell their tools to both chip design firms and manufacturers, who need to verify whether a design is feasible for production. The main line of revenue for each of these firms is selling software for designing and validating integrated circuit products. In addition, these firms sell pre-created blocks of IP that chip designers can use in their designs.
These companies have long histories — Cadence was formed through a merger back in 1988. Their longevity comes from their deep and trusted relationships with chip foundries, resulting in tools that can faithfully simulate production and help designers construct chips under realistic constraints. Solving a problem in design is exponentially cheaper than solving it in fabrication, which is why an oligopoly of EDA tool companies has remained dominant for decades.
Given the robust expansion of the semiconductor industry, they have also been good investments. Cadence’s stock is up 236% over five years while Synopsys’s is up 272%, matching the broader vertiginous growth in semiconductor stocks this decade (the PHLX index is up 205% over the same period).
Market dominance plus market growth is a great combination for financial success, but it also highlights the exponentially growing cost of designing chips. Those costs flow as profits to the EDA tools industry, which means the companies we most need to accelerate automation and cut expenses are precisely those profiting most from the industry’s slow pace of innovation. That’s a bad set of incentives — and a recipe for poor performance of the sector in the decade ahead.
Generative AI and the potential for future automation in chip design
What can be done? The most promising direction centers on using generative AI models to shrink the per-unit labor costs of chip design. Much of chip design is writing code that represents logic and then writing verification scripts to test that logic. For a chip to work, this code has to be written at multiple levels ranging from the most basic gate design all the way up to high-level architecture.
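To make that pattern concrete, here is a minimal sketch in plain Python (real teams would write this in a hardware description language like Verilog or VHDL, so treat it purely as an illustration): a one-bit full adder built from logic gates, plus a verification script that exhaustively checks it against a behavioral specification.

```python
from itertools import product

def full_adder_gates(a: int, b: int, cin: int) -> tuple[int, int]:
    """Gate-level implementation: a one-bit full adder from XOR, AND, OR gates."""
    s1 = a ^ b            # XOR: partial sum
    total = s1 ^ cin      # XOR: final sum bit
    c1 = a & b            # AND: carry from a and b
    c2 = s1 & cin         # AND: carry from partial sum and carry-in
    cout = c1 | c2        # OR: combined carry-out
    return total, cout

def full_adder_spec(a: int, b: int, cin: int) -> tuple[int, int]:
    """Behavioral specification: just do the arithmetic."""
    total = a + b + cin
    return total & 1, total >> 1

# Verification script: exhaustively compare implementation against spec.
for a, b, cin in product((0, 1), repeat=3):
    assert full_adder_gates(a, b, cin) == full_adder_spec(a, b, cin)
print("full adder matches spec on all 8 input combinations")
```

Real designs repeat this implement-then-verify loop at vastly larger scale, which is why both the implementation code and the verification code are natural targets for generative AI.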
Generative AI has shown strong early capabilities for writing code, so if you can automate code generation, you can automate a large share of chip design. There’s a key challenge, though: access to data. Unlike the billions of lines of code available in open-source repositories on sites like GitHub, chip designs and the code used to generate and verify them are closely guarded secrets. Unsurprisingly, this puts the big incumbents in EDA tools, as well as deep-pocketed tech companies like Google and Nvidia, at an advantage relative to disruptive startups.
Indeed, we’ve seen a flurry of announcements from these companies the past few years:
- Synopsys: Launched in 2020, DSO.ai (short for Design Space Optimization) uses reinforcement learning to search for optimization targets in large solution spaces. The software, which doesn't involve generative AI, helps design teams evaluate “floorplans” by analyzing a variety of potential design alternatives. Over time, Synopsys has developed a full-stack, AI-driven EDA suite of products that perform design space exploration, verification coverage and regression analytics, automatic test pattern generation, and analog design migration. In late 2023, Synopsys launched their generative AI copilot powered by Microsoft Azure. This copilot serves as a knowledge query system to answer questions from chip engineers, and it operates within the confines of a particular organization, allowing it to leverage proprietary data and adapt to unique workflows.
- Cadence: Cadence has built out a platform for generative AI-based chip design that includes Cerebrus Intelligent Chip Explorer, which aims to automate chip design flow optimization. After block engineers specify their design goals, Cerebrus optimizes the design to meet Performance, Power, and Area (PPA) goals. In addition, Cadence has developed AI-driven tools for analog design (Virtuoso Studio), debug and verification (Verisium), Printed Circuit Board (PCB) design (Allegro X AI), and multiphysics optimization (Optimality).
- NVIDIA: The company recently announced ChipNeMo, an AI system to accelerate chip design. To develop ChipNeMo, NVIDIA used Meta’s foundation model LLaMA 2, trained it on 24 billion tokens of chip design code and docs, and then performed supervised fine-tuning. Initial use cases for ChipNeMo include a chatbot for answering questions asked by engineers, generation of small snippets of code, and maintenance of descriptions of known bugs. This is a great example of an effort that will work alongside existing EDA tool companies.
- Google: DeepMind is exploring AI for logic synthesis. Their novel approach involves leveraging circuit neural networks, a type of neural network that turns edges into wires and nodes into logic gates and then learns to connect them together by optimizing for performance, power, and cost. Logic synthesis is a part of the overall chip design process — and an important one — and this project overlaps with some of the initiatives existing EDA tool companies have in their own pipelines.
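That “nodes into logic gates, edges into wires” representation is easy to picture. Here is an illustrative Python sketch (not DeepMind’s code, and with the learning component omitted entirely) of a netlist as a graph of gates, evaluated by propagating signals along wires, with gate count as a crude stand-in for the cost such systems optimize.

```python
# An illustrative netlist as a graph: nodes are logic gates, edges are wires.
# (This shows only the data structure; the learned optimization is not shown.)
GATES = {
    "NOT": lambda a: 1 - a,
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
}

# A 2:1 multiplexer: out = (a AND NOT sel) OR (b AND sel).
# Each entry maps an output wire to (gate type, input wires), in topological order.
netlist = {
    "nsel": ("NOT", ["sel"]),
    "n1":   ("AND", ["a", "nsel"]),
    "n2":   ("AND", ["b", "sel"]),
    "out":  ("OR",  ["n1", "n2"]),
}

def evaluate(netlist: dict, inputs: dict) -> dict:
    """Propagate input values through the gate graph, wire by wire."""
    wires = dict(inputs)
    for out_wire, (gate, in_wires) in netlist.items():
        wires[out_wire] = GATES[gate](*(wires[w] for w in in_wires))
    return wires

print(evaluate(netlist, {"a": 1, "b": 0, "sel": 0})["out"])  # → 1 (selects a)
print(len(netlist), "gates")  # crude area metric a synthesis tool would minimize
```

A synthesis tool searches over equivalent graphs like this one, trading gate count, timing, and power against each other, which is exactly the optimization loop these AI efforts aim to learn.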
While the incumbents have the data, the capital, the customers, and no end of buzzword-laden press releases, startups hope to run the classic Innovator’s Dilemma playbook and compete through disruptive innovation.
How are startups automating chip design?
In true Silicon Valley fashion, aspiring startups are putting traditional EDA tool companies in their crosshairs. While some founders plan to overturn the industry by incorporating AI into one or more chip design steps, most are instead aiming to augment the existing EDA tool establishment by removing some human labor.
Speaking as a venture capitalist, while audacious founders looking to overthrow an industry are always exciting, I expect the interlocking nature of the companies in chip design to stymie any frontal assault. Instead, I am especially keen on founders with deliberate, clever strategies for gaining an early foothold in the chip design process — likely through assiduous technology development in areas that incumbents overlook because of their misaligned incentives — and who then have a plan to expand and compete over the long run.
For those interested, here are nine startups I have been keeping my eye on and why they are interesting:
- VoltAI: VoltAI is building foundation models for semiconductors and electronics to “boost ease of use and productivity while building with semiconductor components.” CTO Erfan Rostami has written about the need for AI copilots across the semiconductor industry, including for generating Verilog and VHDL code, debugging error messages from EDA tools, and rapidly sifting through documentation. VoltAI came out of stealth, and we’re eager to see how their products shape up.
- Cadstrom: Cadstrom focuses on the back end of system design, aiming to minimize the need for PCB respins. They use AI to validate each step of PCB design and flag potential problems. Cadstrom was founded last year, and they've raised a pre-seed round. Existing EDA tool companies have similar tools and may cook up competing solutions.
- Silimate: Silimate is building an AI copilot for chip design. During register-transfer level (RTL) development, the stage at which a logic block is described in code before being synthesized down to transistors, Silimate identifies bugs, helps optimize for performance, power, and cost, and recommends fixes in real time. Silimate was also founded last year and has raised a pre-seed round. This is another tool that would likely be used alongside existing EDA options.
- Silogy: Silogy is building a web platform that manages the entire workflow for digital integrated circuit design verification by leveraging LLMs to assist chip developers and verification engineers. They've built a continuous integration tool that runs tests in the cloud and tracks each project’s test coverage and pass rate. Their LLM can also help write verification test plans. They're building additional features tailored specifically for verification engineers, including an orchestration framework, a waveform viewer, and a coverage report, and they're adding a collaboration system akin to Google Docs. Silogy was founded last year, and they've raised a pre-seed round. This tool brings leverage to backend verification, which itself represents a large amount of chip design spend and would work alongside EDA tool companies.
- Motivo AI: Motivo AI seeks to accelerate time to market through explainable/interpretable AI- and ML-powered design tools in IC design analytics and optimization. Their technology suite includes M-OPT (AI to implement design for manufacturability), M-DVP (AI to create a comprehensive set of tests for design validation), and M-TCO (AI to rapidly achieve timing convergence). Motivo was founded in 2015; they raised a $12M series A led by Intel Capital in 2021. With their focus on interpretability, Motivo may serve as a strong complement to chip designers’ existing platforms of tools.
- Quilter: Quilter is fully automating circuit board design using reinforcement learning informed by physics simulations. Designers input a schematic with their ECAD tool of choice, prepare a board file, and define constraints. From there, Quilter's design agent explores and optimizes the board across millions of placement, routing, and stackup combinations. Quilter was founded last year; they raised a $10M series A led by Benchmark in February. These offerings will likely compete directly with the AI tools already developed by the dominant EDA tool companies.
- Mitai AI: Mitai AI combines human expertise with AI-driven analysis to provide PCB design support that facilitates compliance with electromagnetic compatibility (EMC) regulations. Their product scans PCB design data for issues and provides in-depth solution reports. Mitai was incubated by Japan’s TDK Corporation and supported by Mach49. Mitai is targeting the niche of EMC compliance, and they will likely expand their offerings to broader design optimization.
- Chipstack AI: Chipstack AI uses a medley of LLMs on a proprietary software stack that allows for high design productivity, fewer RTL bugs, and faster development. They have begun deployments with early partners. Chipstack was founded last year; they haven't raised a large round yet. As with Quilter, Chipstack’s attempt to optimize design with an AI copilot locks it into competition with the major EDA players.
- CircuitMind: CircuitMind is building a platform that goes from architecture to schematic rapidly and optimally. CircuitMind's electronic design assistance platform, ACE, takes as inputs functional blocks, input/output signals, cost/size/power, availability requirements and mechanical constraints. It then searches through trillions of potential circuits to optimize for cost, size, power, function, availability, performance, and lifecycle before outputting schematic options along with a bill of materials, verification checks, form-factor analysis, and availability reports. CircuitMind was founded in 2018; they've received non-dilutive funding from Google for Startups through their Black Founders Fund. CircuitMind’s focus on analyzing the manufacturability and availability of chip designs provides a useful perspective on where future EDA tools are headed.
Clearly, founders are excited about the potential of further automating chip design and are attacking the problem of the industry’s labor shortage from multiple angles. While fighting against rich and entrenched incumbents is always challenging, the broad expansion of the chip industry this decade provides a tailwind for all new players — something that can make competition less of a zero-sum game.
Readying a whole new generation of chip designs – and chip designers
We are at another turning point in semiconductors. In the ’80s and ’90s, the factories that built chips for weapons and space missions were repurposed as foundries to produce chips for broad consumer use. Entrepreneurial professors and lab researchers packaged early EDA tools into software that ultimately led to the creation of the giants we know today. Moving chip design out of government-funded labs and into the broader commercial markets resulted in the telecom and compute revolution that made today’s Internet and cloud ecosystem possible.
Many more such ecosystems and revolutions are possible. Just in the past few years, we have seen traditional software companies like Amazon, Google, Microsoft and Meta entering the chip design game to upgrade their data center performance in everything from AI inference and training to video streaming and conversion. Tesla took matters into its own hands and designed its own chips to run its autopilot software, while Apple is designing custom chips for its iPhones, iPads, MacBooks and the recently launched Vision Pro. AI, automotive, virtual reality and more — the potential growth in new chip designs is dizzying and likely to exceed even the personal computer and internet revolutions.
Yet, we will never deliver on that promise if the cost of chip designs continues to increase exponentially. That’s why generative AI and other approaches to automation are so crucial. Lowering the cost of design will lower the barriers to entry for custom silicon, allowing more companies to harness this technology and build exceptional products with better efficiency and productivity for all of us.
For chip designers, this is a boon. Far from automating their jobs away, further design automation will enrich their work while dramatically expanding the import of their skills across more employers and more sectors. It’s a win-win for everyone.
At Lux, we often think about investing in capabilities and not just markets. An investment in chip design isn’t just about a singular return on an individual startup, but rather a bet that new capabilities can empower an entire new generation of companies — many of which may also be venture-backable. It’s that cascade of potential gains that brings our attention to this widely overlooked sector, and why we are searching every day for the next strategic founder looking to upend an otherwise entrenched industry.
Podcast: How many trillions in damage would an invasion of Taiwan cost global GDP?
Yes, you should direct your attention to automating chip design per Shahin and Dario above, but like a mosquito to a lamp, I can’t help but fly toward the scary headlines sometimes. A potential embargo or full-scale invasion of Taiwan by China has been discussed ad infinitum in America’s and the world’s halls of power, but what would the real cost of such an event be? The results surprised me and, I expect, will surprise you as well.
Joining me on the Riskgaming podcast this week was Gerard DiPippo, the Senior Geo-Economics Analyst for Bloomberg Economics, where his research centers on the Chinese and Taiwanese economies and their interlinkages with global value chains.
DiPippo and I walk through different scenarios of what could take place in the Taiwan Strait and how we might model the global economic costs of each scenario. We also discuss some of the second-order effects of any conflict in the Strait, from additional sanctions to what goods might substitute for those lost to conflict. Along the way, DiPippo highlights some surprising and counterintuitive findings from his macroeconomic analysis that change the calculus for all parties involved.
🔊 Listen to “How many trillions in damage would an invasion of Taiwan cost global GDP?”
The Orthogonal Bet: Building a Fractal Combinatorial Trope Machine
In this episode of our mini-series, Lux’s scientist-in-residence Sam Arbesman speaks with Hilary Mason, co-founder and CEO of Hidden Door, a startup creating a platform for interactive storytelling experiences within works of fiction. Hilary has also worked in machine learning and data science, having built a machine learning R&D company called Fast Forward Labs that she sold to Cloudera. Earlier, she was the chief scientist at Bitly and even a computer science professor.
Sam wanted to talk to Hilary not only because of her varied experiences but also because she has thought deeply about how to use AI productively — and far from naively — in games and other applications. She believes that artificial intelligence, including the current crop of generative AI, should be incorporated thoughtfully into software, rather than used without careful examination of its strengths and weaknesses.
🔊 Listen to “The Orthogonal Bet: Building a Fractal Combinatorial Trope Machine”
Lux Recommends
- Sam enjoyed Clive Thompson’s reflection in Wired on “Back to BASIC—the Most Consequential Programming Language in the History of Computing.” “It’s a language for noobs, sure, but back then most everyone was a noob. Throughout the ’70s and ’80s, BASIC sent a shock wave through teenage tech culture.”
- Shaq Vayda enjoyed this super-detailed and beautiful visualization dubbed “Calculating Empires: A Genealogy of Technology and Power Since 1500” designed by Kate Crawford and Vladan Joler. “Calculating Empires is a large-scale research visualization exploring how technical and social structures co-evolved over five centuries. The aim is to view the contemporary period in a longer trajectory of ideas, devices, infrastructures, and systems of power.”
- Harvard has been a linchpin of computer science for decades, and so Sam enjoyed this retrospective by emeritus professor Harry R. Lewis on “Mechanical Intelligence and Counterfeit Humanity.” “Users of batch-processing systems learned, like the Arabian Nights fisherman who released the genie from the jar, to be very careful what they asked for.”
- It is fantastically long, but Adam Ciralsky’s profile of Roger Carstens, the key man on all hostage negotiations for the U.S. government, is one of the most fascinating I have read in some time. More than a year of embedded reporting certainly helps! It’s particularly poignant this week given the simultaneous release of many hostages held by Russia, including Paul Whelan and Wall Street Journal reporter Evan Gershkovich.
- Finally, two fun hacks that delighted Sam, one on “Building Lego Machines to Destroy Tall Lego Towers” and the other by Thomas Geijtenbeek on simulating running with a human skeleton using deep learning.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.