Samsung Foundry Forum announcements (samsung.com)
168 points by ittan on Oct 7, 2021 | 124 comments



> process technology migration to 3- and 2-nanometer (nm) based on the company’s Gate-All-Around (GAA) transistor structure

I believe GAA is the next-gen tech for the process node. Samsung is the first foundry to do GAA with 3nm, while TSMC is sticking with FinFET for their 3nm. It'll be interesting to see how 3nm FinFET compares to 3nm GAA.

https://www.anandtech.com/show/16041/where-are-my-gaafets-ts...

edit:

>After that, transistor structures begin to change. Samsung and TSMC are manufacturing chips at 7nm and 5nm based on today’s finFETs. Samsung will move to nanosheet FETs at 3nm. Intel is also developing GAA technology. TSMC plans to extend finFETs to 3nm, and then will migrate to nanosheet FETs at 2nm around 2024.

https://semiengineering.com/the-increasingly-uneven-race-to-...


I want to add some context for those who don't follow the semi industry closely.

Samsung Foundry has a history of over-promising and under-delivering. This 3nm launch will likely be similar to their EUV node, which they claimed as an industry first but which wasn't shipping in any real volume. So arguably they are not lying, but it is marketing spin.

TSMC is an extremely pragmatic company. It either works or it doesn't; there is no need to save face. No need to push an industry-first GAA or FinFET, just push whatever works within the timeframe with respect to yield and cost. What is the point of having the best tech in 2022 when they can't produce it in enough volume for any of their customers?

That is not to say Samsung Foundry is evil or anything; they are pushing very, very hard to try to catch up to TSMC and stay competitive in the market (look at what happened to GlobalFoundries). And now Intel is coming. Pat Gelsinger seems to be doing all the right things.


> TSMC is an extremely pragmatic company.

After listening in on their earnings call, I'd say their CEO really pushes aside quick profits (such as raising prices) and prefers long-term stability and keeping their existing clients happy. The investors wanted TSMC to raise chip prices and expand their operations.


> After listening in on their earnings call,

People should really listen to earnings calls; they show a company's true character, whether it's Nvidia, Intel, AMD or Apple. I have yet to see any media outlet do a decent job of translating them into mainstream news. Mostly because tech news sites aren't really focused on finance and only care about the tech, while finance/investor outlets don't care about or know enough about tech and won't cover it in the finance news.

> and prefers long-term stability and keeping their existing clients happy.

That is why people partner with TSMC; as Morris Chang calls it, the Grand Alliance. Unlike Samsung and Intel, TSMC is pure-play, which means they don't have chips that compete with their clients'. They are also transparent about their progress. TSMC originally put GAA on 3nm, but it was clear the tech wasn't going to be mature enough in time, so they changed it to something else, let their customers know, and let them decide whether to move to a more mainstream node like a cost-reduced 4nm or stick to the leading edge on 3nm FinFET.

Unlike Intel, which lied to all of their Custom Foundry partners, leaving some of them with a two-year-delayed 5G rollout. Or Samsung, where despite the cheapest price per wafer, the cost advantage doesn't translate well when you don't get enough of them for your inventory needs.


Counterpoint:

What is the point of having the best tech in 2021 if all capacity is allocated to a single customer?


"To be clear, wafer agreements are signed 2-3 years before the chip makes it into HVM and TSMC can build fabs faster than that so there will be no N3 shortages for anyone who signed a wafer agreement (apple, AMD, NVIDIA, QCOM, etc…). If they need more chips than what they signed up for, which happens, there may be shortages. This is how TSMC and the foundry business works. It’s all about the wafer agreements."

https://semiwiki.com/semiconductor-manufacturers/302408-tsmc...


Because that single customer paid in advance and booked that capacity?

That is how it works in literally every other industry's supply chain.


In practice, how is that different from not having the capacity to serve others?


That is not how capacity planning works. You don't build it and hope for them to come.

If others were willing to pay, take the risk and book in advance, they would have the capacity they need. But rarely is anyone willing to pay the price premium. They wait for it to become a little more mainstream.


Intel's switch to FinFET was equivalent to almost 2 node jumps. If they nail GAA, then it will probably be much better, but it will also be enormously more expensive to produce the chips.

Intel has loads of cash and would pay that money in a second, but I suspect that most other companies would rather hold off a bit longer in exchange for much cheaper products.


> I believe GAA is the next gen tech for the node process. Samsung is the first foundry to do GAA w/3nm while TSMC is sticking w/FinFET for their 3nm. It'll be interesting to see how 3nm FinFET compare to 3nm GAA.

Comparing apples to oranges, and doing the comparison on taste vs. size.

The device may well be awesome, but the first ICs using it, not so much.

The biggest advancements below 14nm were in the metal layers, not so much in the device, process, or materials.

Even if Samsung produces a better device design, they will still have to catch up to TSMC in many, many other areas.

Only a single-digit number of people on this planet will ever know the exact measurements of FinFET vs. GAAFET.

But one thing is for sure: Samsung saying that they pioneered a new device ahead of TSMC does indeed sound very impressive to a certain category of big-co people, regardless of actual performance.


Samsung has a dishonorable marketing department. 3nm is not actually 3nm. I'm fed up... OLED is superior, so they had to take the path of calling theirs QLED, which is actually just an LCD screen with phosphors on top of a blue backlight (and it's nothing new).


Their high-end phones have such dark patterns that I will flat-out not buy Samsung anything.

Even if you pay [€$]1000+ for one of their Smart Phones, you can look forward to:

* Non-removable cruft, as if they were a telecom and you were on a contract and had not just handed over a grand

* ...like a confusingly-similar-looking competitor to Google Contacts that will upload your info to their servers

* GDPR? LOL

* A hard button on the side of your phone located just below the "volume down" button, easy to press accidentally, that is hard-coded and unconfigurable, that will launch their AI assistant Bixby. Don't want to use Bixby? Tough shit. Nothing you can do about it.

* Constant badgering by the phone's native notification to sign up for "Samsung Members", a social media platform. No, you can't turn that off.

* Other, similar bullshit.

3nm? These are such sketchy practices I cannot imagine it won't affect, say, their high-end TVs (they would totally monitor your house and show you advertisements).

Seriously, avoid that company. No, paying for their high-end options will not insulate you from their nonsense.


In a world of garbage electronics they’re actually a name I trust on some level. They build pretty decent stuff, with stupid caveats.

My Galaxy Buds Plus are pretty good and have unparalleled battery life - but you can’t use the companion app on Android because it won’t work unless you give it access to your contacts.

My Samsung TV is quite snappy and, besides my model being a special edition that doesn’t come with Bluetooth and them not specifying it anywhere, it’s actually pretty alright. Cold-boots quickly, has a snappy UI, theoretically comes with all the smart features you want ... but it’s full of ads the moment you enable internet access, plus you know the spying allegations. I guess I’ll still have to figure out proper firewall rules.

I’m going to guess that their other appliances are similar. Pretty good hardware, pretty good software underpinnings, just severely held back by some anti-consumer software decisions.


> * Constant badgering by the phone's native notification to sign up for "Samsung Members", a social media platform. No, you can't turn that off.

Apple does something similar. Every now and then I get a notification about "try TV+/Arcade/Music for x months".

Or a few years back when Wallet was launched, daily notifications to add my card... in a country that doesn't support Apple Wallet.


Try saying no to the new iCloud terms and conditions.

The options are “I agree to everything” or “Keep bugging me until I agree to everything”.


You can at least turn off iCloud entirely on Apple devices. I have never enabled iCloud and I only get these screens when I update the OS to a new version.


> turn off iCloud entirely

Then what's the point of using an iPhone?


> A hard button on the side of your phone located just below the "volume down" button, ... Don't want to use Bixby? Tough shit. Nothing you can do about it.

That button is actually my favorite feature of the phone and it'll be very hard to give up. Obviously I'm not using it for Bixby. I have it mapped to play/pause on long press, toggle flashlight on double press and as a secondary unlock/lock button on simple press.

https://play.google.com/store/apps/details?id=com.jamworks.b...

Works fine on my S10, ymmv.

Edit: And yes, the pre-installed crap is super annoying. You can remove some, but not all of it via adb, it's a hassle. Samsung is hardly the only offender in this regard, though they may be among the worst (aside from Google and Apple which get a free pass).
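For anyone who wants to try the adb route, here is a rough sketch of what that looks like; the package name below is a hypothetical placeholder (real Samsung package names vary by model and region, and some packages refuse to be removed this way):

    # Sketch of the adb approach mentioned above. Requires USB debugging
    # enabled and the adb tool on your PATH. "pm uninstall --user 0" removes
    # a package for the current user only, no root needed.
    import subprocess

    def remove_for_current_user(package: str) -> None:
        subprocess.run(
            ["adb", "shell", "pm", "uninstall", "--user", "0", package],
            check=True,
        )

    # Hypothetical placeholder; look up the real package names for your model.
    remove_for_current_user("com.samsung.android.example")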


I feel like this somewhat misses how conglomerates, especially the Japanese and Korean kind, work. They're basically a bunch of hardly related corporations in a trench coat. Though you are probably right as far as TVs go; those likely come from closely related units.


Google uploads to their servers. They aren't anointed.

You can turn off sync. Same as Google.

Samsung software isn't worse than Apple or Google IMO.


Sure, you have a right to your opinion. IMO Samsung software is far worse than Apple's. Many reasons why, but for one, with an older Apple phone you will still get very timely upgrades to their latest software for many more years than you would with a Samsung phone.


You don't need full system updates on Android to update the browser and other apps.


It is far worse than the Google and Apple experience. They routinely make inferior versions of apps that would be ridiculously easy to clone. They need to realize they are a hardware company, not a software company.

And that goes for all of their product lines across the board. It's all garbage software. It's a shame they intentionally cripple excellent hardware with crap.

Until they stop that practice they are not even a potential option for me. I don't get why they can't save the money and just make a good, pure Android experience.


Nope, I use Camera, Notes, and Gallery every day and prefer them to the Google ones. Those aren't simple apps.


Google Contacts is a confusingly-similar-looking competitor to Samsung contacts.


I can take Google Contacts off my phone, while I cannot remove Samsung Contacts


Also, add that you can't believe any promise about updates for their phones. I did one time, when they launched their phones with Bada OS. Never again.


The only thing common between the Samsung that makes phones and the Samsung that fabs chips is the name.


> 3nm is not actually 3nm

It's a stretch to blame that on Samsung; process generations described in nanometres haven't been based on actual component size for years now, by any manufacturer.


I feel that if numbers were all wrong in, say, the automotive or aerospace industries, there'd be more of an upheaval about it.

Then again, the nanometer sizes aren't always completely indicative of performance, and they aren't something you ever need to know when actually using a computing device, so maybe it's not as bad.

Dishonest marketing, though? Most certainly.


GamersNexus on YouTube has a great video detailing all the issues with processor naming with regards to sizing.

https://www.youtube.com/watch?v=wxKGFxmwcDo


I agree calling their display tech QLED is pretty shady, and clearly just intended to confuse people. But I'm glad their high end display tech is not OLED yet, because nobody seems to have solved burn in satisfactorily yet (if you play a lot of games or use them with a PC).


I've been using an LG OLED for 5 years with an HTPC. Lots of gameplay and leaving it on all day with eg a web browser open. I haven't experienced any burn in.


Would you mind taking a photo of your TV showing a 50% gray screen?


https://imgur.com/a/mMuy5fj

Poor lighting in this room, taken w/ an iPhone 12 Mini. Just grabbed the first result of "50% gray screen" from youtube.

I can see some vertical stripes if I look closely, but that seems to be general aging and not burn in. At least I can't think of any images that could have burned in that pattern with what I've used the screen for. Tbh this picture makes it look worse than it appears but you'll just have to take my word on that. ¯\_(ツ)_/¯ This TV has gotten a lot of (ab)use, much more than a typical home I would think.

I'm glad you suggested this test because now I have some fuel for upgrading in 2022. ;) I wouldn't say the aging I can see from this test is noticeable in most cases, but it's a tool I'll use in the future to see how the striping worsens.


oh god, I shudder at the thought of trying this test on my old screen because I know what the result is going to be like.


My 3 year old 65” LG OLED is having the red pixels burn out already and it looks horrible. My 85” Samsung looks just as good to me (even better some times because it gets brighter).


That really sucks, especially given how expensive these things are. Mine is a 55" fwiw. :/

I went ahead and ran a pixel test and they're all working fine still. Maybe LG has been cutting costs as their OLED lines have matured? Or it could be bad luck of the draw.


There’s an LG OLED monitor soon to be released that “solves” the problem by basically underscanning and using the unlit pixels at the edges to allow for much more significant pixel shifts. It’s this one, but I can’t find the review. https://www.dpreview.com/news/0394947539/lg-new-32-4k-ultraf...


Oled tvs have had Screen Shift for years.


Yes but oled TVs don’t have the same spare pixels as the monitor.


I'm really starting to believe that mini-LED displays are generally better for long-term use.


What is the best metric nowadays? Dhrystones/MIPS/FLOPS per MHz/Watt per square inch? As a complete outsider, millions of transistors per square inch sounds like a very intuitive metric of how small things are.


Performance per watt is a pretty decent measure (and I would argue really the only measure that matters - assuming it's possible to get it "fast enough", and with the exception of IoT type stuff where you probably only care about getting power consumption as low as possible because pretty much any amount of processing power is enough).

You still need to somewhat segment into low/medium/high power chips when comparing them though as it's generally not possible to just take a very high perf/watt low power part and scale up the same design with the same level of efficiency.


If we are just looking at the fabrication process, as opposed to CPU architecture: MTr/mm² – million transistors per square millimetre. But transistor density can vary depending on the type of circuit, so the standard calculation uses a weighted average of two different cell types – NAND2 and SFF (scan flip-flop).

https://en.wikichip.org/wiki/mtr-mm%C2%B2
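A rough sketch of that weighted calculation, assuming the 60/40 NAND2/flip-flop weighting the linked page describes; the cell areas and the flip-flop transistor count below are made-up placeholder values, not figures for any real node:

    # 0.6 * NAND2 density + 0.4 * scan-flip-flop density.
    # MTr/mm^2 is numerically the same as transistors per um^2.
    def weighted_density_mtr_per_mm2(nand2_area_um2, sff_area_um2,
                                     nand2_transistors=4, sff_transistors=36):
        nand2_density = nand2_transistors / nand2_area_um2
        sff_density = sff_transistors / sff_area_um2
        return 0.6 * nand2_density + 0.4 * sff_density

    # With made-up cell areas of 0.03 and 0.3 um^2 this comes out around 128.
    print(weighted_density_mtr_per_mm2(0.03, 0.3))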


A while ago, I just tried taking any figure I could find for transistors per area, for various processes/chips, ignoring all the nuances, and taking the square root to get the implied linear density, and if I recall correctly, it was pretty consistently 1/10th of the usually quoted figure.

Whatever the ratio was, it seemed to be roughly consistent going back like 30 years, which surprised me.

So now I don't believe there actually has been a drift towards marketing and inaccuracy. And I believe the details of which kinds of cells are counted are just excessive precision.
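For anyone who wants to redo that back-of-the-envelope exercise, it is roughly this; the density figure is an assumed round number, not a quote for any particular process:

    import math

    density_mtr_per_mm2 = 100                              # assumed figure
    transistors_per_mm2 = density_mtr_per_mm2 * 1e6
    area_per_transistor_nm2 = 1e12 / transistors_per_mm2   # 1 mm^2 = 1e12 nm^2
    implied_pitch_nm = math.sqrt(area_per_transistor_nm2)
    print(implied_pitch_nm)  # 100 nm, an order of magnitude above a "7nm"-class label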


If you were comparing it to the Nnm figures, that actually checks out and is higher than I thought it would be. Nnm has been "feature size" not transistor size for eons now. It's the size of the smallest single shape the process node can do. So, like the width of a corner of a fin on a finfet.



I prefer bogomips.


> 3nm is not actually 3nm

To be fair, I read the Xnm labelling is pretty much pure marketing at this point - since 45nm; https://en.wikichip.org/wiki/technology_node#Meaning_lost


I'm a photonics/optics and I find the Xnm figure really fascinating. It is based on a real metric which is the smallest feature size they can make (roughly the resolution of the lithography process). It is absolutely amazing that we have been able to scale things down to image at that level.


Note that with the rise of finFETs we can pull some of those numbers into the 3rd dimension. The effective gate length is higher than the amount of length it takes on the flat plane of the chip.

https://semiengineering.com/moving-to-gaa-fets/


And cpus are just sand


3nm means the process is equivalent in density and performance to a 3nm planar transistor that Samsung designed. I don't see how that is dishonorable marketing, because every company designs its own reference planar transistors.


I thought the Q stood for quantum dot, which they actually use? (Not that that makes them unique nowadays).


Yeah, but do you really think they chose that acronym, with that uppercase Q, with no intention of making people think they were OLEDs or similar to OLEDs?

QLED

OLED


Yes. Occam's Razor would demand it


[flagged]


Manufacturers only started calling LCDs with LED backlights "LED TVs" when OLED tech started getting hyped up too. It's the same kind of bullshit marketing.


Actually, you're conveniently omitting a part of that history, which completely negates your point. First it was LCD and Plasma. Then Plasma went away completely, and you had fluorescent backlit LCD. What's the point of calling something LCD when it's all LCD? Was the Sony Trinitron called a CRT? No, it was called a Trinitron, because everything was CRT.

When LED started happening, it wasn't just a big backlight like the old gas backlit LCD. There were many LEDs, and they were selectively turned on or off to save energy and greatly improve contrast. Not just a dumb backlight like "LCD" in the days when it was competing with Plasma. So yes, LED, which provided the Luminance channel of the image, was a completely valid way to call it.

>LED backlights "LED TVs" when OLED tech started getting hyped up

well this is simply false. LED TVs came out over a decade before OLED TVs and were called LED TVs.

Since the Q applies to the LED, not the LCD part of the TV, QLED is exactly and only the proper way to call it.

It's the same kind of bullshit conspiracy theory as "covid is a hoax" and "the election was stolen." The QLED and LED names have zero to do with marketing. They literally describe the TV.


So it should be QLCD


No, that would be Quantum Liquid Crystal Display instead of Quantum (Dot) Light Emitting Diode


Quantum (dot) Liquid Crystal Display is an accurate description of the display technology, unless you think LED backlight is more important than the LCD tech that drives the pixels.


As much as I'd like everyone to be more honest in their marketing, I think it's a bridge too far to blame Samsung for how televisions have been marketed by all manufacturers and retailers alike for nearly two decades.


I do. Hence the importance of OLED.


> I'm fed up

Cool. I'm excited to get another Samsung phone. Suit yourself.


I might be excited about a Samsung phone if it didn’t have a lame version of Android on it.


The Samsung apps and interface are fine. Google apps cause me the most frustration - particularly Google Maps. I don't use the OK Google feature, so am fine that Bixby is occasionally getting in my way rather than OK Google.


I can never remember what xnm means as it varies between companies.

In this case,

Samsung's 3nm = Intel's 7nm

I am still waiting for a standard based on transistor density numbers!


It should be ahead of Intel 4 ("4nm"). Samsung 5nm density is approximately 127M gates/mmsq on paper. Samsung 4nm will scale to around 0.75x area according to their China conference earlier this year, to a transistor density of around 168M gates/mmsq. They had another conference the other day detailing 3nm, which will scale down another 25%, to around 224M gates/mmsq.

Intel 4 was estimated to be up to 200M gates/mmsq. I don't think we have exact numbers since Intel only released numbers for their previous 10nm plan, which were heavily revised for Tigerlake iirc. I think Intel 3 is a variant of Intel 4 so 3GAA will presumably be similar to Intel 3.

edit: slides of the 3nm conference yesterday https://twitter.com/stshank/status/1445924295121592321/photo...
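The arithmetic behind those numbers, for anyone checking; the 127M starting point and the 0.75x factors are the figures cited above, and small differences are just rounding:

    d_5nm = 127            # MTr/mm^2, cited figure for Samsung 5nm
    d_4nm = d_5nm / 0.75   # ~169, quoted above as ~168
    d_3nm = d_4nm / 0.75   # ~226, quoted above as ~224
    print(round(d_4nm), round(d_3nm))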


gates/mmsq seems like a way more usable metric than the nm marketing stuff?


It's a better ballpark, but it's still an ideal number measured by the manufacturer using their own tests. Problem is, companies rarely if ever source the exact same design to multiple foundries, so it's not easy to compare in practice.


Intel actually renamed their nodes in line with Samsung and TSMC, so Intel 7 (which used to be called 10nm) is roughly comparable to TSMC's 7nm and Samsung's 8nm.


For anyone who had not been aware (like me), apparently Intel has renamed their third gen 10 nm process to 7, and their 7 nm process (the one that they describe as being their first full use of EUV, that got pushed back to 2023) is now named 4. That puts Intel’s node naming roughly in line with TSMC and Samsung in terms of feature density.


There was a restaurant, A&W I think, which had a 1/3 lb burger. A whole bunch of people thought it was a ripoff - smaller than a quarter pounder. As it's impossible to explain to the masses that they need to repeat 3rd grade, the restaurant gave up on the idea.

Then there was Windows. Have you actually heard an elite hipster tell you, a professional, highly paid tech guy, that Mac is superior because Windows is behind on operating systems? Not because Unix is better. No, because Windows was "8" and Mac was "10." X is the Roman numeral for "10" you see, and 10 is a later version than 8. So MS ended up skipping Windows 9.

There was a defined standard for process sizes. A bunch of unethical marketing scammers used half-truths to scam people. It worked, because people can't even figure out a hamburger. And now the people who followed the long-established standard have to switch to the scammer's standard, because they still need that hamburger guy's money.

Here's the problem with customers... You need their money.


It doesn’t relate to any measurement of feature size. It’s just a frustratingly meaningless marketing term.


Why do we keep using it?! It’s so unscientific that it embarrasses me that intelligent people even talk about it.


Fabs have been increasing transistor performance without shrinking the size much; for example, a 22 nm FinFET has similar performance to a 14 nm planar FET so feature sizes don't tell you as much as they used to. This led fabs to invent "effective" feature sizes to convey improvements in the process.


It’s basically like a version number now I suppose.


>I can never remember what xnm means as it varies between companies.

Intel has renamed their nodes, so the industry has now pretty much standardised on naming, where all the 3nm nodes from Samsung, TSMC and Intel will have similar transistor density (but not similar performance or any other characteristic).

Officially Intel doesn't use 4nm or 3nm; they call it Intel 4 or Intel 3. But for the sake of easier comparison most people still use Intel 4nm to describe it.


There is also a big difference between the nm accuracy of lithography and the transistor size.

Creating chips with an accuracy of 1nm does not mean 1nm transistors.

But 1nm sounds good as marketing.


There are a few comments saying that 3nm is a marketing term and that the transistors are actually larger - how is this allowed? Isn't it misleading and deceptive?


Well, I hope nobody chooses a fab based on the headline number on their marketing material. Adapting your design to them is a long process that requires all kinds of details, and how well they will produce your circuit depends on those details as much as on the feature size.


There’s not really a standard; some feature is probably 3nm, but which feature is chosen has varied over time and between companies.


There are no features with lengths of 3 nm, 7 nm etc.

Some vertical distances, e.g. thicknesses, are indeed only a few nm, but the process size name always referred strictly to horizontal distances (i.e. parallel with the wafer surface), which are determined by lithography. Those are at least 10 times larger, in the range of 25 nm to 60 nm for modern processes.

There are a few horizontal distances that are not determined by lithography, i.e. they do not correspond to something drawn on the mask; instead they are determined by etch or diffusion rates, as the vertical dimensions are. But those also do not count for naming processes, because you could have such a distance of only, say, 5 nm even in a 180 nm process. The distances that are not determined by lithography do not influence the potential density of a circuit, only certain electrical characteristics.


Everyone is making excuses, but the simple answer is: yes, it’s misleading and deceptive.

It’s allowed, as many, many, many other bad things in this world are.


It's not supposed to be deceptive because the idea is that, say, the 3LPE process provides the same performance as an ideal classic 3 nm transistor.


The naming of the process does not provide any direct information about the performance of the transistors.

The process name is supposed to mean that if the size name is half as large, then in the same area you can pack 4 times more transistors.

So the 3LPE process is supposed to have a transistor density about one hundred times larger than the 32-nm processes that were used for Intel Sandy Bridge or AMD Bulldozer, or about 3600 times larger than the 180-nm process used for the first Pentium 4.
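In other words, the convention implies density scaling with the square of the node-name ratio. A quick check of those two figures (this is the nominal naming convention, not a measured density):

    def nominal_density_ratio(old_nm, new_nm):
        return (old_nm / new_nm) ** 2

    print(nominal_density_ratio(32, 3))    # ~114, the "about one hundred times"
    print(nominal_density_ratio(180, 3))   # 3600.0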

The performance of individual transistors is usually worse for smaller transistors (except for the energy consumed by switching a single transistor, which is obviously smaller), but that is compensated for by using more complex circuits with more transistors, though not many more, so there is still a gain in density by going to smaller transistors.


3 nm is not the size of these transistors, just like X nm has never been the size of transistors for any value of X, so it's not "misleading and deceptive".


Actually, it is all about marketing now; the customers to whom they market, though, are MNCs with immense cash piles to splurge on microchips. Nobody else can.

See, you rarely ever see so much marketing money spent on an industrial service, and Hollywood-level graphics on an obscure industry event keynote, like here, would have been more laughable than noteworthy 10 years ago.

Without these cash piles, there is no way to finance new fabs, and SEL is fighting for its survival here. Once you are out of the race in the semi industry, you can never catch up.


At what point does quantum tunnelling become a problem?


First though, "5nm", "3nm" and so on are just marketing names. There is nothing about "5nm" that makes it "5nm" other than the company in question saying it is. Some things are smaller than 5nm on a given 5nm node, and some are larger. I cannot recall exactly what node this started to be the case (there used to be an actual definition, one for DRAM, one for logic), but it was in the past two decades and got particularly ridiculous beginning around "28nm" up to now. The really concerning physical dimension for quantum tunneling to occur/not occur is "gate length," and that's been basically sitting around ~16nm (actual, real, literal 16nm), plus or minus a few nanometers (depending on the manufacturer and process in question), since about "45nm" (mid-late 2000s). So that one critical dimension isn't getting smaller. And there isn't much they can do about it right now. They are still shrinking other dimensions though, and things don't work like they used to. Powered off transistors aren't really off, and leak power. The workaround for this is that they just use bigger transistors in certain places for what's called "power gating". You get the benefits of having tons of small transistors, with a slight area penalty. In addition to power gating, they have made substantial improvements to the design of the transistors themselves. Gates now wrap around the channel on 3 sides, creating a device known as a Finfet. Silicon dioxide is no longer used as an insulator to the same extent -- hafnium dioxide preforms much better as an insulator. Gates are now metal instead of polysilicon. And there's an assortment of other changes that have occurred or are on the way. So performance has actually managed to improve somewhat, and things have still gotten smaller. The end is near... but not quite yet. Gate length is not going to budge much unless some miracle occurs, though.


>but it was in the past two decades and got particularly ridiculous beginning around "28nm" up to now.

I wonder if that is some kind of cultural shift that started around 2009, or if it's always been like this and I just never noticed.

BMW model numbers used to more or less accurately reflect engine sizes; not anymore, it's just numbers now.

2G, 3G, 4G used to mean something, not anymore.

I could add a remark about the federal reserve, but... I'll just stay away from that. Don't want to be too edgy/turn this into a political discussion (I just think it's interesting from a cultural perspective).

It's like we collectively decided that "it's just numbers, man."


>It's like we collectively decided that "it's just numbers, man."

People with no scruples realized it's easier/cheaper to confuse people and persuade them something is better than to actually produce something better, and that the conventional wisdom was wrong.

It's a lot easier to invest in propaganda that improves perceived value than in actual value. It's win-win: the consumer thinks they're happy, and the producer doesn't have to deal with the mess of real-world hurdles to continue making money. Conventional wisdom says people are smart and will see through your snake oil; meanwhile, empirical data says people will drink the snake oil if you tell them it's from the fountain of youth.


Waaaaay back in the day, the Electro Motive Division (EMD) of General Motors made railroad locomotives. They had the GP-20 with 2000 horsepower, and the SD-24 with 2400 HP. Then General Electric entered the business with the U-25, having 2500 HP. EMD's next model was the GP-30, with (ahem) 2250 HP. This was in 1961.

So, yeah. This is nothing new. Marketers gonna market.


What’s really disappointing is that we as a society chose to accept commercial prevarication. The fine print exception is sheer bullshit. Dishonest marketing should be treated the same as dishonest weights and measures.


Kind of makes sense. People probably didn't all of a sudden become more dishonest. Thanks.


>2G, 3G, 4G used to mean something, not anymore.

Did they really, though?

2G was accurately labeled (in my experience), sure, but I remember when Verizon started relabeling their HSPA+ (3G) stuff as 4G in my hometown. Not even "4G LTE" (which allowed them to get away with it since it's not actually 4G), they just straight up called it 4G on my Motorola Droid Turbo. When I rooted it, I found out exactly what it was connected with and learned it was all a lie. (When I actually did experience real 4G, the speed difference was shocking.)

AT&T relabeled their 4G network as 5G fairly early on. [0] Then, Verizon decided to copy them. [1] But this wasn't a trend that started with 5G.

[0]: https://www.theverge.com/2020/5/20/21265048/att-5g-e-mislead...

[1] https://telecoms.com/505584/verizon-told-to-stop-lying-about...


> >but it was in the past two decades and got particularly ridiculous beginning around "28nm" up to now.

> I wonder if that is some kind of cultural shift that is taking place that started around 2009, or if it's always been like this and I just never noticed.

> BMW model numbers used to more or less accurately reflect engine sizes, not anymore, it's just numbers now.

When was that and which number? Just looking at the 7 series (surely you didn't mean that number), the E32, built between '86 and '94, had engine sizes between 3 and 5 liters.

> 2G, 3G, 4G used to mean something, not anymore.

So what did the G mean? AFAIK it was generation, but that's a very vague term. Just look at human generations: people are born continuously, so you could have two people who are the same age but technically a generation apart, because one had very old parents and the other has very young parents/grandparents. Are they the same generation?

> I could add a remark about the federal reserve, but... I'll just stay away from that. Don't want to be too edgy/turn this into a political discussion (I just think it's interesting from a cultural perspective).

> It's like we collectively decided that "it's just numbers, man."


Advertisers ruin everything.


Advertising is cancer on modern society.

http://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html


"our code is fast, flexible, and easily modifiable"


The theoretical half pitch size limit for a single exposure EUV is Lambda * 2 or 26nm.

You can get arbitrarily small at the cost of an exploding mask count, i.e. double patterning needs 2x the masks, but quad patterning needs 8x. Octuple patterning is completely impractical.
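For context, these limits are usually estimated with the Rayleigh criterion, half pitch ≈ k1 × λ / NA. The values below are my assumptions (today's 0.33 NA optics and a conservative single-exposure k1), chosen to show where a ~26nm figure can come from; the hard theoretical floor is k1 = 0.25, which would give roughly 10nm:

    wavelength_nm = 13.5
    numerical_aperture = 0.33   # current EUV scanners; high-NA tools are 0.55
    k1 = 0.64                   # assumed conservative single-exposure value
    half_pitch_nm = k1 * wavelength_nm / numerical_aperture
    print(half_pitch_nm)        # ~26 nm; with the theoretical k1 = 0.25, ~10 nm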


Can't they use a smaller wavelength source? Nothing special about 13nm afaik


13nm is hard enough. ASML seems to have settled on a laser-driven tin plasma light source, which is miserably inefficient: https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithograph

>The required utility resources are significantly larger for EUV compared to 193 nm immersion, even with two exposures using the latter. Hynix reported at the 2009 EUV Symposium that the wall plug efficiency was ~0.02% for EUV, i.e., to get 200-watts at intermediate focus for 100 wafers-per-hour, one would require 1-megawatt of input power

The optical train is also tough. 13nm is getting close to soft x-rays, and photons that hot don't like reflecting, and the optics are rapidly degraded by exposure light:

>EUV collector reflectivity degrades ~0.1-0.3% per billion 50kHz pulses (~10% in ~2 weeks), leading to loss of uptime and throughput [...] Due to the use of EUV mirrors which also absorb EUV light, only a small fraction of the source light is finally available at the wafer. There are 4 mirrors used for the illumination optics, and 6 mirrors for the projection optics. The EUV mask or reticle is itself an additional mirror. With 11 reflections, only ~ 2% of the EUV source light is available at the wafer.
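That ~2% figure is consistent with the mirror count: Mo/Si multilayer mirrors peak at roughly 70% reflectivity at 13.5nm (an approximate published value, used here as an assumption), and compounding that over 11 reflections lands at about 2%:

    reflectivity = 0.70   # assumed per-mirror peak reflectivity (Mo/Si multilayer)
    mirrors = 11          # 4 illumination + 6 projection mirrors + the reticle
    print(reflectivity ** mirrors)   # ~0.02, i.e. roughly 2% at the wafer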



The design of EUV lasers is already completely absurd. It's an awesome piece of engineering, but it's no easier to push the laser wavelength downwards than anything else.


Does the lithography require the tight wavelength or other nice properties of lasers? I had thought they'd been using filtered synchrotron output for a while now.

Basically everything about the process is absurd; not sure why pushing on the light source is less feasible than any of the other knobs.


Diffraction limit becomes a real problem at these feature sizes even with short wavelength light, and the physical properties of short wavelength light start to damage equipment and cause serious other issues even at the current levels.

At some point it’s switching from ‘lots of wiffle balls in a stream’ to ‘high power machine gun fire’, and the physical properties of everything involved become very limiting.


13nm * 2 is 26nm.

Using a smaller wavelength is kind of useless for optical lithography, as below this photons will generate too many secondary electrons, which will reduce the effective resolution. This is the reason X-ray lithography went nowhere.

This is why the ultimate limit of 157nm lithography was also not so far from EUV's: also somewhere in the 25nm-30nm range.

This is also why some people suggest resurrecting 157nm: you get nearly the same half pitch without the maintenance and expensive tooling of EUV.


Quantum tunnelling has already become a problem in most computer components. NAND flash was the first, in maybe 2015, to start reporting issues?

Effects are moderately visible and have to be counteracted at 5nm. I heard some rumours about 7nm, but cannot confirm any countermeasures were taken to avoid quantum effects.

It should be noted that "3nm" is now purely for commercial reasons and has no relationship to the size of transistors on board.


It already has been a problem in terms of gate leakage, although largely mitigated by material improvements.

Gate leakage is the phenomenon of quantum tunneling through the gate dielectric barrier and started appearing as gate dielectrics became thinner and thinner. Gate leakage was mitigated by moving to higher k dielectrics (from silicon dioxide, SiO2, to more exotic materials that include other elements such as Hafnium).

Higher-k dielectrics allow for the same capacitance per unit area and channel control with a thicker physical gate dielectric compared to plain SiO2, reducing gate leakage. This technology change came along with metal gates (which used to be polysilicon), and the two were a combined advance that Intel incorporated a few years before TSMC, IIRC circa 2008.
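A rough way to see the benefit is through equivalent oxide thickness, EOT = t_physical × (k_SiO2 / k_high-k). The hafnium-oxide permittivity and film thickness below are assumed, textbook-ish numbers, just to illustrate the effect:

    k_sio2 = 3.9
    k_hfo2 = 22.0                 # assumed; reported values vary, roughly 18-25
    physical_thickness_nm = 3.0   # hypothetical high-k film thickness
    eot_nm = physical_thickness_nm * k_sio2 / k_hfo2
    print(eot_nm)   # ~0.53 nm: the capacitance of ~0.5 nm of SiO2, but with a
                    # 3 nm physical barrier, so far less direct tunneling leakage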

This is a circuit designer's perspective. Someone who actually understands device physics and material properties can chime in to correct me.


Your nickname rings familiar. Ex-Xilinx by any chance?


Do you have any books that you recommend on this topic? Would love to get my hands dirty to the degree that I can with this.


It's always been a problem to some degree. Keep in mind that 3nm is a misnomer and has no real physical meaning behind it.


From a few years ago until we perfect the QFET and can put it to use.


For silicon n00bs like myself, listen to Acquired's recent episode for some context on how hard this area of innovation is https://www.acquired.fm/episodes/tsmc.

Off the back of this episode, I can’t help but feel TSMC’s monopoly needs disrupting by players like Samsung, given Taiwan and China’s geopolitical tension.


> Samsung To Mass Produce 2nm Chips in 2025

https://www.tomshardware.com/news/samsung-foundry-to-produce...


Hmm, I wonder how this compares to TSMC/Intel in terms of transistor density?

Is transistor density the best measure of chip competitiveness these days?


At what point do we have to stop because we’ve hit physical limits? At 3nm we are talking transistors only a few dozen atoms wide, what’s the smallest theoretical transistor we can build?


With the old-school planar transistors built on the wafer's silicon, you need about 10nm of doped material so it won't completely mix with everything else.

With theoretical organic one-electron designs you can build a transistor out of 4 carbon atoms. But nobody knows how to mass-produce those.

We are somewhere in the middle: modern designs did break the 10nm barrier, but are not nearly as small as the numbers you see around.


The actual feature sizes are still (remember these shapes are 3D) quite a few scalings away from being even close to atoms.


3nm is a marketing term.


If their chip fab has the same commitment to QA as their TVs, I’ll pass.



