Riskgaming

AI is spiking chip design costs – can it solve them too?

In chip design, the old adage of “If you build it, they will come” might better be rendered as, “You can’t build it, since they don’t exist.” The small but crucial profession of chip design used to be a quiet niche within the broader semiconductor market, with just a handful of companies hiring PhD grads. Now, with trillion-dollar companies like Apple, Google, Meta and more all looking to develop custom silicon, securing chip designers is suddenly an ultra-competitive business, and wages are soaring.

At its source is the rise of artificial intelligence and the need for custom silicon to improve the performance-to-power ratio in contexts ranging from mobile devices to data centers. Apple’s launch this week of its new iPhone 16 line is a case in point: years of design work have afforded Apple the ability to deliver its “Apple Intelligence” product with on-device inference at relatively minimal cost to battery life. Now, dozens more companies want to compete in this bubbly market and beyond.

Lux general partner Shahin Farshchi and host Danny Crichton talk about the evolution of chip design and how an incumbent oligopoly of electronic design automation companies is now facing new competition from AI-driven challengers. We talk about the history of the EDA market and why custom silicon is really a reversion to historical norms, why designing chips hasn’t changed much in decades and is now rapidly changing for the first time, how large tech companies are using chip design to vertically integrate, the exponentially growing complexity of modern chips, and finally, how startups are poised to have access to this market for the first time in a generation.

Produced by Christopher Gates

Music by George K

Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Shahin, thank you so much for joining us. You had a piece recently in the Riskgaming Newsletter, and the point of this piece was really to draw the attention of our audience away from chip fabs, these massive manufacturing facilities from TSMC, GlobalFoundries and others that really dominate the headlines. You see the multi-billion-dollar subsidies going to Arizona and to other places in the United States. The European Union has its subsidies. South Korea has its subsidies. But you're trying to redirect our attention, not exclusively away from fabs, but to bring up another point, which is that there's another bottleneck in the production of chips, and it's really around chip design and chip designers. Tell us a little bit about that aspect, because I don't think it gets a lot of attention from the press or think tankers, and it deserves a lot more.

Shahin Farshchi:
Well, Danny, it's just not sexy, unfortunately. It's very exciting to talk about how this 50-plus-billion-dollar fab is in the crosshairs of our adversaries. And when it comes down to talking about labor, it doesn't get as much attention, even though labor is just as important, if not more important, than factories. The irony is that semiconductors themselves are very, very much on the bleeding edge of technology, but the way they are made hasn't changed a whole lot for decades despite, again, the technology evolving very quickly. These processes that go into delivering that technology are very, very mature and people just don't want to touch them. For example, if you look at EDA tools, people complain about how EDA tools look like they came out of the '90s. In fact, I don't think EDA tools have changed much at all since I used them to design chips myself 20-plus years ago.
And there's a reason for that, and it's that there's not necessarily a need to update these user interfaces. And these dominant players, Cadence, Synopsys and Mentor (now part of Siemens), have been able to position themselves through their relationships with the ecosystem. And that ecosystem is the fabs, it is the semiconductor companies themselves, the chip companies themselves. There is a very, very entrenched ecosystem that's very difficult to change, and there really isn't much motivation to change. And so, a lot of startups come about and say, "Hey, I want to build a new EDA tool and I'm going to be able to win over engineers because I have this sexy user interface that incorporates AI." And my response to that is, "Well, that's interesting, but what about everything else that the EDA tool companies have done over the past 30, 40 years?" And that's very difficult to replicate.
Now, as venture capitalists, we're all optimists and we like to think glass half full. And I think the way to do this (when I say this, I mean incorporating AI to really relieve the pressure on the labor component of making new chip designs) is to do it in collaboration with the existing ecosystem of the fabs, the EDA tool companies, the IP companies and the chip companies. And I'm extremely optimistic. There's a handful of companies that are attempting it, impacting this ecosystem one way or the other. And I think the ones that are able to partner and synergize with these players the most will likely be the most successful. And in the end, all of us will benefit from better products and more security in the supply chain.

Danny Crichton:
So I want to dive into this issue of the labor market here, because when we think about chip designers, I don't know if anyone thinks about anything specific, right? "Okay, we're just designing a chip." Well, this is a really precise profession, and a relatively small one. There are a lot of different specialties that come together to get a chip from conception into a fab and into your hands in your smartphone or whatever device you're using. And chip designers are at the earlier part of this. They're the ones who are actually taking an idea for a new chip, something that's maybe more efficient, has new functionality, is faster or slower, depending on and constrained by the envelope that's available in the end-user device.
And they design it, and whether it's the architecture of ARM or something else, they have to fit all the constraints that are coming in from the product team. And it has to work in the fab, which means you don't want to run it multiple times; each run costs a lot of money, and the chip may not actually function in real life. And to me, one of the most interesting parts of your argument was that we need a lot more chip designers today because a lot more companies are starting to build chips that were never part of this ecosystem before. Talk a little bit about that expansion, because I think that's something that's new in the last few years that a lot of folks have overlooked.

Shahin Farshchi:
Chips have become very strategic to these companies. Companies like Google, Amazon, Facebook, even Oracle. The car companies, like Tesla. Consumer electronics companies, like Apple. These companies have realized that they cannot maintain a competitive advantage without getting more vertically integrated. And chips fall within that realm of being able to provide unique and novel performance that's unique to you, to your product. So, Tesla realized that if they wanted to create a unique autopilot experience, it would be very challenging to do that with existing semiconductors that are available off the shelf. Which was, by the way, the way car companies had been doing this forever. Car companies identify semiconductor suppliers and components off the shelf and they make systems around them. Tesla said, "Wait a minute. We want to take a new approach here. We want to start with the capability that we want to realize and then build the chips that are going to deliver those experiences."
And they went ahead and designed their own chips. And a lot of companies are now taking on the same strategy. Apple, for example: its new line of laptops all incorporate its own proprietary chips, as its phones have for probably the past 10-plus years. And for the hyperscalers, semiconductors are now a huge portion of their spend. In fact, there were a bunch of articles that went around over the past couple of weeks about the hyperscalers' spend on semiconductors, and they want to shore some of that up and, again, bring novel capabilities in-house.
And Danny, to your point, that creates a ton of demand for new designers to go out and build these things. I recall being surprised probably 10 years ago when I learned that Apple had hundreds and hundreds of chip designers in-house, even though they were relying on Broadcom to help them develop their TPU. And so, that brings up the whole question of, "Okay, well, in a world where a lot of engineers are focusing on becoming software developers, how are we going to satisfy our need for chip designers?" And that's where a lot of the arguments around automating a lot of this chip design work and engineering come in.

Danny Crichton:
And I'm going to highlight another piece here, which is that Apple launched its own chips. It has a video codec in hardware on the iPhone and the Mac, so if you're watching YouTube, instead of it running on a general-purpose chip like your old Pentium processor, that codec runs in dedicated hardware, which means you get better battery life; you can watch YouTube for 20 hours instead of four hours. It has all those unique features. And every one of these companies, as you've highlighted, is realizing that they have different needs with these chips. And so, the idea that Pentium shall rule them all, that everyone will have one chip from one company, doesn't make sense when we have heterogeneous end-user applications, all of which have different types of workflows, and we can actually design hardware to be much more efficient and optimal for those applications.

Shahin Farshchi:
By the way, Danny, this whole notion of custom chips is not new. The notion of custom chips dates back to the introduction of chips. In fact, most of the chips up until the '70s were built for custom use cases. The US military developed its own chips for guiding missiles; the communications companies developed chips that were used for communications infrastructure. And so, the notion of a commercial off-the-shelf part in the form of a processor or a transceiver or a sensor, these are relatively new concepts from the past few decades. And these companies' attempts to further differentiate themselves, coupled with their immense balance sheets, have pushed them to go back to the way it was many years ago at the dawn of chips: to making their own custom chips. Which is ironic, in my opinion, but I feel like it's the cycle that we're going into.

Danny Crichton:
Well, and I think there's a tick-tock. I mean, the old joke: mainframes, and then we went to decentralized computing, and then back to cloud, and we go back and forth in orchestration. I think that you're seeing the same thing in compute. You're going from specialized chips that were designed for very specific applications in the '60s, like missile targeting, to these general-purpose processors. And then you realize, "Well, there's all these unique applications. We have money again, let's go back to specialization, because that can be a competitive advantage."
And so you see the same tick-tock. To analogize to the artificial intelligence world: is it the general-purpose model that can do everything, or is it going to be the much more efficient but specialized model that can read a radiology report but can't answer your questions about zoo animals? And it'll be faster for inference, but it can't do everything. And so, where is that balance between specialization and generalization? And right now it looks like in chips we're heading back towards specialization and away from the Pentium or AMD one-chip-to-rule-them-all.

Shahin Farshchi:
Danny, I wouldn't say that it is swinging back entirely in that direction. I would nuance it with how we have these mega companies today that we didn't have before, that in their desire to become more vertically integrated and differentiated and maintain their competitive advantage, are going into the business of building their own chips. And the reason why this didn't exist before was one, it probably wasn't as necessary, and two, the companies just didn't have the resources. It's a combination of all these things that are creating this new requirement, but it doesn't necessarily mean that off the shelf chips are no longer used.
They are used; in fact, they're used in the same devices. The vast majority of the chips in an iPhone or in a Tesla car or in a laptop are off the shelf and will continue to be off-the-shelf components. However, for the reasons that we just talked about, there is going to be more custom silicon going into these devices as well. Which, by the way, going back to the topic of this whole discussion, creates more demand for the talent and the intellectual horsepower that's needed to actually design these things to begin with. And as chips get more and more complex, there is exponentially more burden on the human effort that has to go into delivering them.

Danny Crichton:
Essentially, we're producing more types of chips, but the number of chip designers isn't expanding rapidly. And so, it's just supply and demand. In the labor marketplace, wages are up, it's getting more competitive to hire these folks, and you're fighting big tech, which has the largest budgets. And so, in order to compete if you're up against Apple, against Tesla, up against the hyperscalers, you're looking for an advantage. You need to get into custom silicon, but it's really, really expensive to hire that talent. Plus, you have the secondary problem that node sizes are getting smaller and smaller for every one of these chips on the bleeding edge or leading edge. And so the costs of verification and testing, because of the number of transistors on these chips, are going up exponentially.
And I think in your post, you had an amazing chart showing the costs from the 65-nanometer node back in 2006 to today's two-nanometer node in 2024, and the cost has gone up from something like $25 to $50 million for a new chip all the way up to about $750 million for a chip today in 2024. And some of that is the chip design labor costs, but the rest of it is just automating all of this work around the transistors and the logic that underpins these chips. And so those two things are kind of combining and saying, "Look, it's becoming a little bit more monopolistic. Unless you have almost a billion dollars to produce a new piece of custom silicon, you're shut out from the cutting edge of technology."
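
As a back-of-the-envelope check on those chart figures (treating the $50 million and $750 million endpoints as rough approximations, not exact data), the implied growth rate works out to roughly 16% per year, compounding:

```python
# Rough implied annual growth rate of leading-edge chip design costs,
# using the approximate endpoints cited above.
start_cost = 50e6   # ~$50M for a 65nm-class design, circa 2006
end_cost = 750e6    # ~$750M for a 2nm-class design, circa 2024
years = 2024 - 2006

cagr = (end_cost / start_cost) ** (1 / years) - 1
print(f"{end_cost / start_cost:.0f}x increase over {years} years")
print(f"implied compound annual growth rate: {cagr:.1%}")
# -> 15x increase over 18 years; roughly 16.2% per year
```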

Shahin Farshchi:
That's absolutely right. And that's why you only see these massive players playing this game. Which behooves us as investors to look at this and say, "Wait a minute, does this have to be the case?" Does it have to be the case that you have to be a trillion-dollar company in order to build custom chips? Now, I would just caveat the observation from that chart: not all chips cost close to a billion dollars. A lot of the more complex chips that we're talking about, the ones used for AI training, for AI inference, for autopilot applications, those could be the more expensive ones. There are obviously cheaper ones, but those are definitely the more interesting ones, and those are the ones that we talk about more.

Danny Crichton:
As you pointed out, EDA, electronic design automation, has been around for decades. Cadence and most of the oligopoly here, the top three, were founded in the '80s, actually mostly out of Berkeley. It was a very tight cluster of entrepreneurs and labs and researchers who spun out to go build these tools. So automation has always been here, because there's way too much to do by hand; there's way too much in terms of verifying trillions of transistors. We could spend centuries and never do it manually, so we've always needed automation. Why is that automation not keeping up, and why is there this exponential curve?
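
The "centuries" intuition is easy to sanity-check with arithmetic. Assuming, purely for illustration, that a human could inspect one transistor per second without stopping:

```python
# Order-of-magnitude check on doing verification "manually".
seconds_per_year = 60 * 60 * 24 * 365

for transistors in (20e9, 1e12):  # ~a modern SoC vs. ~a wafer-scale part
    years = transistors / seconds_per_year
    print(f"{transistors:.0e} transistors -> {years:,.0f} person-years")
# -> 2e+10 transistors -> 634 person-years
# -> 1e+12 transistors -> 31,710 person-years
```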

Shahin Farshchi:
So the exponential curve is a result of exponential complexity. As for the question of why EDA tools haven't kept up: the EDA tools, in my opinion, have kept up. The issue is the nature of the problem. The added complexity is creating exponentially more input requirements. So the question becomes, "Well, how do you overcome this?" And I think it's a great time to be asking this question, because AI can now do a lot for you. A lot goes into what you just described: the definition of the architecture of a chip, the breakdown of that architecture into parts, the translation of those logical blocks into actual register transfer level code, or RTL, then going back and verifying that the RTL that's been generated reflects the intended functionality of the architecture. And then actually turning that into a physical design with physical transistors, and then going back and verifying that those physical transistors represent that RTL.
That all requires the creation of code and scripts. And one thing that we've learned in this whole AI revolution is that AI is really good at writing code. And so, if you can use AI to generate the code that performs the functionality that we talked about here, then voila, you've taken out a lot of the effort. And so instead of that trend being an exponential trend as chips get more complex, that trend may suddenly become a linear trend, and that would be a huge improvement if that happens. Then you may ask, "Why haven't people already solved this?" And the issue is that, unlike large language models, where you have the benefit of petabytes and petabytes of Reddit posts, many years of YouTube videos and other content that's already out there that you can scrape, that content is non-existent right now for semiconductors.
Not a single line of code that anybody has used to create, let's say, the Pentium chip or an Nvidia accelerator is out there for a third party to use to train a model that will generate these programs and scripts. And so the question becomes, "How do we find this data? And then, more importantly, how do we create enough data to be able to train a model?" That's what a lot of the startups now trying to solve this problem are exploring: generating synthetic data, since the actual data with the quality you would need to train a model that does this reliably and repeatably does not exist. And that's what we're looking for as a firm: teams that are making major inroads in figuring this out.
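
To make the verification step Farshchi describes a bit more concrete: in practice it involves hardware description languages and commercial simulators, but the core idea, checking that an implementation matches its intended functionality, can be sketched in plain Python, with both the specification and a stand-in for the generated RTL modeled as bit-level functions. Everything below is illustrative, not any particular tool's flow:

```python
import random

# Specification: what a 4-bit adder with carry-out is *supposed* to do.
def spec_adder(a: int, b: int) -> int:
    return (a + b) & 0x1F  # 5-bit result: 4 sum bits plus a carry bit

# Stand-in for generated RTL: a gate-level ripple-carry adder built from
# AND/OR/XOR operations, the kind of structure synthesis would produce.
def rtl_adder(a: int, b: int) -> int:
    carry, result = 0, 0
    for i in range(4):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i
        carry = (x & y) | (x & carry) | (y & carry)
    return result | (carry << 4)

# Randomized equivalence checking, a toy version of functional verification.
random.seed(0)
for _ in range(1000):
    a, b = random.randrange(16), random.randrange(16)
    assert rtl_adder(a, b) == spec_adder(a, b), f"mismatch at {a}+{b}"
print("implementation matches specification on 1,000 random vectors")
```

For a 4-bit adder you could simply check all 256 input pairs; for a real chip the input space is astronomically large, which is why verification consumes so much of the effort and cost discussed above.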

Danny Crichton:
Well, and I think it's really interesting, because the dynamics in the semiconductor industry are very, very competitive: very focused on trade secrets, very focused on preventing industrial espionage. Yes, there are open source initiatives like RISC-V and others. But when it actually comes to the verification and building of a chip, even in the RISC-V world, most of this is not public. You can't get access to it. There is really no corpus to build upon, which means that this is traditionally a place of incumbent advantage.
If you're Intel, theoretically, you could train an LLM against all of your proprietary data. If you're Cadence, you maybe could do it with your customers, et cetera. But from a startup perspective, if you're just a new founder looking to get into this space, there's a really high barrier and moat to cross. And so from your perspective, synthetic data is the best opportunity to sort of build your own bridge over that moat and cross the chasm.
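
As a toy illustration of the synthetic-data idea (a sketch, not how any particular startup actually does it): one can programmatically generate large numbers of small, known-correct hardware descriptions paired with natural-language specs, giving a model examples of the spec-to-RTL mapping. The module format and pairing scheme here are invented for illustration:

```python
import random

# Toy synthetic-data generator: (natural-language spec, Verilog-style RTL)
# pairs for simple combinational logic. A real effort would need to cover
# far richer constructs, plus sequential logic and verification collateral.
OPS = {"and": "&", "or": "|", "xor": "^"}

def make_pair(width: int, op: str) -> tuple[str, str]:
    spec = (f"A {width}-bit module that outputs the bitwise "
            f"{op.upper()} of inputs a and b.")
    rtl = (f"module {op}_{width} (\n"
           f"  input  [{width - 1}:0] a,\n"
           f"  input  [{width - 1}:0] b,\n"
           f"  output [{width - 1}:0] y\n"
           f");\n"
           f"  assign y = a {OPS[op]} b;\n"
           f"endmodule\n")
    return spec, rtl

random.seed(0)
dataset = [make_pair(random.choice([4, 8, 16, 32]), random.choice(list(OPS)))
           for _ in range(1000)]
spec, rtl = dataset[0]
print(spec)
print(rtl)
```

Because each generated pair is correct by construction, it can also be checked by simulation before entering a training set, which is precisely the guarantee that scraped data would lack.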

Shahin Farshchi:
That's right. And I would just expand what you said, Danny, to the incumbents as well. If you're Apple, the dozen or so, or maybe 50 or so, chips that you've developed over the past 10, 15 years, even the code that went into developing those chips probably would not be adequate to train a model that would design and verify those chips. And so, I think it's a problem that's presented not just to the startups but also to the incumbents. And by the way, the people that would use these kinds of tools to automate chip design will likely be these big companies anyway, because you have to be a big company to be in this game to begin with.

Danny Crichton:
And look, I know you put it in the newsletter, we don't have to go over the details, but Nvidia and Synopsys and Cadence have announced various generative AI features, which-

Shahin Farshchi:
That's right.

Danny Crichton:
It feels like following a trend. Maybe there's a little bit more depth to these than not, but certainly they want to get into the flow, get nicer valuations and argue for this. But in general, they're in good shape. I mean, all of these companies have done very, very well in semiconductors over the last couple of years with the spike of compute around AI. Synopsys and Cadence, two of the largest EDA companies, are both up about 250 to 300% over the last two, three years. I mean, great deals, great stocks. Why break the system when you sort of have it in place? But you also highlight that there are a dozen-plus new companies trying to enter this space with unique models, different strategies, different insertion points. We don't have to walk through all of them, but maybe give us some highlights on trends or patterns you're seeing among entrepreneurs who are trying to build something new here in this industry.

Shahin Farshchi:
So first of all, if I was an investor in any of those companies that you mentioned, these EDA tool companies or the fabs or even the incumbent chip companies, I wouldn't be losing sleep over this because those companies aren't going anywhere. These tools are going to make those companies more efficient. These tools will help them build better products and perhaps pave the way for new challengers to come into the market. But I believe that the rising tide will float all boats, and ultimately the outcome here will be better products, more products, bigger market, more returns to investors across the board, whether you're an incumbent company or in a startup.
To your comment on these existing incumbents putting out AI tools, there's certainly a lot of branding, or brand engineering, going on. But there are also some interesting, actual AI-enabled products that they're putting out there. For example, place and route, which is basically connecting transistors, or connecting leads on circuit boards. That's a capability that's been ML-driven for a long time. It hasn't been great, but these incumbent EDA tool companies are making it better using AI.
To your question on what the startups are doing, I would put them into two categories. One is around generating this register transfer logic. Historically, there have been tools and programming languages where you would write C code or something like it and compile it into the logic that would underlie a certain type of functionality. And what these companies are doing is using large language models to automate that step. Some of the incumbents are working on that as well. And I mentioned those in the article.
And then there are other companies that are trying to automate the physical design, the layout of these transistors within the layers of a chip, and perhaps even reorganize those within a certain footprint if you're going from one generation to the next. And so, we're seeing a lot of interesting work in the different steps that are involved. Some of it is through partnerships with the existing EDA tool companies, some of it tries to go around them, but I still think it's very much the early innings, and we have yet to see some really interesting companies try to solve this problem.
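
For a flavor of that physical-design problem: placement is classically formulated as an optimization, putting connected cells close together to minimize wire length, and the newer ML-driven approaches build on decades of heuristics like the simulated annealing in this toy sketch. The netlist, grid and cooling schedule are all stand-ins chosen for illustration:

```python
import math
import random

# Toy standard-cell placement via simulated annealing: arrange 8 cells,
# wired in a ring, on a 4x4 grid to minimize total Manhattan wirelength.
# Real placers juggle millions of cells plus timing and congestion.
NETS = [(i, (i + 1) % 8) for i in range(8)]  # ring: 0-1, 1-2, ..., 7-0
GRID = 4

def wirelength(coords):
    # Total Manhattan distance over all nets, a standard placement cost.
    return sum(abs(coords[a][0] - coords[b][0]) +
               abs(coords[a][1] - coords[b][1]) for a, b in NETS)

random.seed(1)
coords = [(x, y) for x in range(GRID) for y in range(GRID)]
random.shuffle(coords)  # coords[0..7] hold cells; 8..15 are empty slots

temp = 5.0
while temp > 0.01:
    a, b = random.sample(range(16), 2)            # two slots, cell or empty
    old = wirelength(coords)
    coords[a], coords[b] = coords[b], coords[a]   # propose a swap
    delta = wirelength(coords) - old
    if delta > 0 and random.random() > math.exp(-delta / temp):
        coords[a], coords[b] = coords[b], coords[a]  # reject: undo the swap
    temp *= 0.995                                 # cool the system down

print("final total wirelength:", wirelength(coords))
```

Accepting some cost-increasing swaps early on, while the "temperature" is high, lets the placement escape local minima; as it cools, the layout settles. Broadly speaking, the ML angle in modern tools is about learning better moves and cost predictors than this blind random search.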

Danny Crichton:
And to me, this is sort of where you end up with the argument, and you've already foreshadowed it. The opportunity here is that instead of $750 million to a billion dollars to produce a new chip, if new generative AI tools were to lower that price through automation, not only do you expand what's possible for chip designers, who are able to do more sophisticated work and automate some of the drudgery that comes with chip design, but companies that might otherwise have to use something off the shelf that's not ideal would suddenly have access to this.

Shahin Farshchi:
That's right.

Danny Crichton:
If the cost were lowered to $200 million, how many more companies that currently take something directly off the shelf would be able to build custom silicon for their own work? And I think of a lot of different hardware products, everything from the Humane AI Pin that we saw, to new textbook readers, to even a Peloton or any of these kinds of exercise equipment. I mean, sure, you can use general compute and it seems to work fine, but a lot of these companies were at scale at various points where, if there had been just a little bit more democratization around chip design, they might have been able to offer a much more compelling value proposition to consumers even on their first product. Which means they would've been more competitive and had a much greater chance of success long term.

Shahin Farshchi:
Yeah, absolutely. And by the way, again, I want to reinforce that you can make custom chips today through partnerships with folks like Marvell and Broadcom. This is part of their business: helping companies like Peloton or Google make their own custom chips. It just costs a lot of money and it takes time. And these kinds of tools will hopefully collapse the non-recurring engineering, if it still exists under the same name. But you can go in there and get on an existing run at any number of foundries for very, very little money.
The issue with that is that these fundamentally are not commercially viable products because of a lack of performance, a lack of reliability, cost issues. There's very likely something out there that is significantly better. I mean, it's like saying that, "Okay, can we go to our garages and build a car that would be interesting and competitive?" We probably could. It'll take us many weekends and many hours, our wives might divorce us, but the output of this effort will likely be a pretty decent automobile. Now, can we build businesses around this? Probably, but the consumer would be just as well off walking down the street into a Ford dealership.
In fact, it's funny, I rented a Ford Edge over the weekend, and I initially didn't take the vehicle very seriously, but I was amazed by the refinement, by the performance, by the features. It has lane keeping, it has adaptive cruise control, it has blind-spot checks, it has automatic braking, and this is an entry-level vehicle. It was just absolutely mind-blowing. And so, that was a long-winded way of saying that, yes, hobbyists have always been able to make their own chips, but they just weren't functional to the point where they would be mass consumable.

Danny Crichton:
Well, your Ford sounds like it was a leading-edge car.

Shahin Farshchi:
A leading-edge car, that's right.

Danny Crichton:
That was a chip design joke. Well, maybe we'll end on that. Shahin, thank you so much for joining us.

Shahin Farshchi:
Thank you. Thank you, Danny.