
I hope somebody with relevant knowledge can answer this question, please: what % of the costs is "physical cost per unit" and what % is maintaining the I+D, factories, channels...?

In other words, if a chip with 100x size (100x gates, etc.) made sense, would it cost 100x to produce or just 10x or just 2x?

Edit: providing there wouldn't be additional design costs, just stacking current tech.




There are many limiting factors... one is the reticle limit.

But most fundamental is the defect density on wafers. If you have, say, 10 defects per wafer, and you have 1000 chips on it: odds are you get 990 good chips.

If you have 10 chips on the wafer, you get 2-3 good chips per wafer.
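A toy way to check that arithmetic, in Python, using a simple Poisson defect model (defects land randomly and any one of them kills its die; this ignores clustering and on-die repair, so the exact counts are illustrative only):

    import math

    defects_per_wafer = 10

    for dies_per_wafer in (1000, 10):
        defects_per_die = defects_per_wafer / dies_per_wafer
        # Poisson model: probability that a given die catches zero defects
        yield_fraction = math.exp(-defects_per_die)
        good_dies = dies_per_wafer * yield_fraction
        print(f"{dies_per_wafer:>4} dies/wafer -> ~{good_dies:.0f} good dies "
              f"({yield_fraction:.0%} yield)")

With 1000 small dies you lose only the ~10 that catch a defect; with 10 huge dies the same 10 defects wipe out most of the wafer.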

Of course, there are yield-maximization strategies, like being able to turn off portions of the die if it's defective (for certain kinds of defects).

For the upper limit, look at what Cerebras is doing with wafer scale. Then you get into related, crazy problems, like getting thousands of amperes into the circuit and cooling it.


The way TSMC amortizes those fixed costs is to charge by the wafer, so if your chip is 100x larger it costs at least 100x more. (You will have losses due to defects and around the edges of the wafer.) You can play with a calculator like https://caly-technologies.com/die-yield-calculator/ to get a feel for the numbers.
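To get a feel for how that plays out, here is a rough Python sketch in the same spirit as the linked calculator (the wafer price, die sizes, and defect density are made-up illustrative numbers, and the gross-die and Poisson yield formulas are common approximations, not TSMC's actual model):

    import math

    wafer_price_usd = 10_000          # hypothetical price per 300 mm wafer
    wafer_diameter_mm = 300
    defect_density_per_cm2 = 0.1      # hypothetical defect density

    def cost_per_good_die(die_area_mm2):
        # Rough gross-die count: wafer area / die area, minus dies lost at the edge
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        gross_dies = (wafer_area / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
        # Poisson yield: a larger die is more likely to catch a killer defect
        yield_fraction = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100)
        return wafer_price_usd / (gross_dies * yield_fraction)

    for area in (100, 1000):   # mm^2
        print(f"{area:>5} mm^2 die -> ${cost_per_good_die(area):,.0f} per good die")

With these numbers a 10x larger die costs roughly 30x more per good die: you get fewer dies per wafer, and each one is more likely to be ruined by a defect.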


I've been out of that industry for a while, but back around the 45nm days, one of the biggest concerns was yield. If you've got 100x the surface area, the probability of a manufacturing defect that wrecks the chip goes up. Now, you could probably get away with selectively disabling defective cores, but the chiplet idea seems, to me, like it would give you a lot more flexibility. As an example, let's say a chiplet i9 requires 8 flawless chiplets, while a chiplet Celeron requires 4 chiplets that are allowed to have defects in the cache, because the Celeron is sold with a smaller cache anyway.

In the "huge chip" case, you need the whole 8x area to be flawless, otherwise the chip gets binned as a Celeron. If the chiplet case, any single chip with a flaw can go into the Celeron bin, and 8 flawless ones can be assembled into a flawless CPU, and any defect ones go into the re-use bin. And if you end up with a flawed chip that can't be used at the smallest bin size, you're only tossing 1/4 or 1/8 of a CPU in the trash.


>would it cost 100x to produce or just 10x or just 2x?

Why would 100x something only cost 2x to produce?

>what % of the costs is "physical cost per unit" and what % is maintaining the I+D, factories, channels...?

Without unit volume and a definition of the first "cost" in that sentence, no one can answer the question. But if you want to know the BOM cost of a chip, it is simply the wafer price divided by the total number of usable chips, where yield is a factor of both the current maturity of the node and whether your design allows defects to be corrected so the chip stays usable. Then add roughly 10% for testing and packaging.
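In Python, that formula looks roughly like this (the wafer price and chip count below are placeholders, not quotes):

    def chip_bom_cost(wafer_price, usable_chips_per_wafer, test_and_packaging=0.10):
        # BOM cost = wafer price / usable (yielded) chips, plus ~10% for
        # testing and packaging, as described above
        return (wafer_price / usable_chips_per_wafer) * (1 + test_and_packaging)

    # Hypothetical example: $10,000 wafer, 500 usable chips after yield loss
    print(f"${chip_bom_cost(10_000, 500):.2f} per chip")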


>Why would 100x something only cost 2x to produce?

If you create an app, the cost is mostly in developing it. Once you can sell one copy, you can sell 100x as many for more or less the same cost.

It's tricky for physical things. We tend to think that cost correlates with weight or volume, but that's wrong.

A typical 100 ml (3.4 fl oz) perfume sells for around $50 in shops, against an "official" price of $100.

The cost of the juice itself is just $3, maybe $5. But that's not the total cost: there's also the perfumer, the bottle, the box, shop markup, distribution markup, marketing, TV commercials, design, samples...

In this case, I guess I+D and machinery are a huge cost, and the price per unit is set not so much by the cost of producing one unit but by taking the whole cost, adding a markup, and dividing by the expected number of units. Hence my question.
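A sketch of that pricing logic in Python, with hypothetical figures (none of these numbers come from a real product):

    def unit_price(fixed_costs, cost_per_unit, expected_units, markup=0.30):
        # Spread the total cost over expected volume, then apply a markup,
        # as described above
        total_cost = fixed_costs + cost_per_unit * expected_units
        return total_cost * (1 + markup) / expected_units

    # Hypothetical: $50M of R&D and tooling, $20 to build each unit, 5M units sold
    print(f"${unit_price(50e6, 20, 5e6):.2f} per unit")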

By the way, nobody answered :-)

I know the responses answered it somehow, in an indirect way...


That is only the case for software, where the unit cost is nearly zero and does not increase. What you wrote about perfume and juice is not unit "cost", but "price". Unit cost (or BOM, or COGS) does not include the perfumer, the bottle, the box, shop markup, distribution markup, marketing, TV commercials, design, samples...

I am assuming the "I" here stands for initial investment, or CapEx. In the context of a foundry customer it is more like R&D, since they don't invest in machinery; the foundry (i.e. Intel, TSMC, Samsung) does that. The R&D cost, again, is not a percentage of "price" or "cost", since there is no expected volume to divide by in the first place, and it depends on the complexity of the chip as well as the node. On a leading-edge node, tooling becomes much more expensive: expect the cost of entry to be $300M+ for tools (that was for 7nm; definitely higher now for 5nm and 3nm), excluding masks and other steps. Making a chip that is 99.9% SRAM would have near-zero R&D cost except for the masks. The complexity of your chip and design dictates how many masks are required before final production. However, multiple SKUs (or die variations) can share the cost of a mask set, which runs into the double-digit millions per run.

And none of these includes packaging and testing.

R&D dominates the cost of a chip once you factor in engineering costs, which is why you often see Qualcomm (or any other fabless company) chasing volume even though it is often not in their best interest in terms of margin.

And none of this includes the cost of IP licensed from ARM, IMG, CEVA, Lattice, etc., or patents; most of these are also charged per unit. And there are probably other things I can't recall off the top of my head right now.


First, I apologize for the confusion: I+D wasn't translated to English; it means R&D.

>What you wrote about perfume and juice is not unit "cost", but "price".

Maybe "cost" has a precise definition in Economics that I'm not aware of, but "price" is definitely not right in this context either. In my example, I can't see how the bottle and the box are not costs.

For me (maybe not correct, but that's what I meant anyway) "cost" is any expense that you incur to put the product in the hands of the final buyer. And cost per unit is total expenses divided by number of units. Feel free to use other words instead of "cost" and "cost per unit" that fit in these definitions.

Let me clarify the original question so it makes more sense. Thirty years ago computers had a single processor. Then came multiprocessor machines and multicore CPUs. Now it seems miniaturization is reaching its hard limits, so chipmakers are wondering how to keep increasing computing power.

One of the logical solutions is to put more processors in the same computer, not on the same chip or wafer (so defect rate is irrelevant here), but more like a "cluster in a box". I guess that would bring a number of problems, like heat, but this kind of power doesn't seem like the kind you want in your pocket.

So my question was: would this make economic sense? If "cost per unit" is rigid, it wouldn't. If it's driven by market or R&D, it might make sense.


I'm not an expert, or even an amateur, here, but defects are inevitable. So if you _need_ 100x the size without defects, and one defect ruins the chip, the cost might be 10,000x to produce.
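To put rough numbers on that intuition, here is a naive Poisson yield model in Python with a made-up defect density and no redundancy or repair (real designs avoid the worst of this with spare rows, disabled cores, etc.):

    import math

    defect_density = 0.1   # hypothetical killer defects per cm^2
    base_area = 1.0        # hypothetical baseline die area in cm^2

    def cost_per_good_die(area_cm2):
        # Silicon cost grows with area; yield shrinks as exp(-D * A),
        # so cost per *good* die is proportional to area / yield
        return area_cm2 / math.exp(-defect_density * area_cm2)

    baseline = cost_per_good_die(base_area)
    for scale in (1, 10, 100):
        ratio = cost_per_good_die(base_area * scale) / baseline
        print(f"{scale:>3}x area -> ~{ratio:,.0f}x cost per good die")

Under these assumptions the cost per good die blows up far faster than linearly with area, which is why nobody ships a monolithic 100x die without redundancy or repair.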





