If you're interested in chip fabrication, I heartily recommend watching https://youtu.be/NGFhc8R_uO4 - it goes over the evolution of how they make transistors all the way up to 2009 state of the art, with a follow up talk in 2013 (https://youtu.be/KL-I3-C-KBk).
Both of them mention how Intel is sinking billions of dollars into ASML to try and get this process working, and how impossible everyone thinks it is, so I'm skeptical that they finally got everything squared away now :)
TSMC and Samsung are also exploring EUV, so it's not just Intel. New lithography is necessary to continue shrinking, otherwise mask costs will rise exponentially (literally).
Can you explain the exponential rise in mask costs with shrinking? My naive understanding is that you simply can't resolve features below some fraction of the wavelength. How do more expensive masks get beyond that?
I can split any small pitch exposure into separate, larger pitch exposures. For instance, if I need 40nm pitch, and my system is limited to 80nm, I can do 4 exposures. The next shrink requires 8.
To figure this out, draw a square grid, where each node is a separate exposure.
This process works forever in theory, but is limited in practice by the mutual registration of the exposures.
What you can do is basically use partially overlapping beams to energize an area smaller than any of the individual beams (think Venn diagram). It's a similar technique (except in two dimensions) to radiation therapy, where multiple beams of radiation are passed through healthy tissue such that their intersection point is on cancerous tissue. That way only the cancerous tissue receives a highly damaging dose of radiation while the healthy tissue receives a significantly lower dose.
That doesn't usually work, because the diffraction-limited PSF is usually Gaussian, and the overlapping region can therefore have lower fluence than the peak regions.
Edit: the exponential rise happens because to go 2x smaller I now need to double my # of masks. Of course, in practice overlay accuracy could kill you first.
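A toy sketch of that growth (just restating the doubling rule above in code; the starting numbers are the 80 nm / 40 nm example from earlier in the thread):

    # Toy illustration: if the tool's minimum pitch is fixed, each 2x
    # pitch shrink doubles the number of exposures (and masks) per layer,
    # so cost grows exponentially with the number of shrinks.
    pitch_nm = 40      # starting from the 40 nm example above
    exposures = 4      # on an 80 nm-limited tool, per the comment above
    for _ in range(4):
        print(f"{pitch_nm} nm pitch -> {exposures} exposures per layer")
        pitch_nm //= 2
        exposures *= 2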
> Both of them mention how Intel is sinking billions of dollars into ASML to try and get this process working, and how impossible everyone thinks it is, so I'm skeptical that they finally got everything squared away now :)
Intel claims EUV is now at production volume and ready for manufacturing introduction on the 7nm node, even at 75-80% EUV machine uptime.[1] At the kind of production volumes Intel is capable of sustaining, and ASML's projected machine throughput, that means they've got more than a dozen of these machines operating in Fab 42 (likely closer to 40-50 if they want to hit a projected 100k wafer starts/month target, with extra machines to make up for the excess downtime; UV litho machines can easily run with 90+% uptime). That should be an enormous win over the current quad patterning they're using at 10nm, cutting out tens of process steps.
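For a sense of where machine-count guesses like that come from, here's the kind of back-of-envelope arithmetic involved. Every parameter below (effective throughput, EUV layer count) is my own assumption rather than a figure from the article, and the answer swings widely with them:

    # Back-of-envelope EUV machine count for a fab. All parameters are
    # assumptions for illustration, not published figures.
    def euv_machines_needed(wafer_starts_per_month, euv_layers_per_wafer,
                            effective_wph, uptime):
        passes_needed = wafer_starts_per_month * euv_layers_per_wafer
        passes_per_machine = effective_wph * 24 * 30 * uptime
        return passes_needed / passes_per_machine

    # Example: 100k wafer starts/month (the target mentioned above),
    # ~10 EUV layers per wafer, ~60 wafers/hour effective throughput,
    # 77.5% uptime (midpoint of the 75-80% figure).
    print(round(euv_machines_needed(100_000, 10, 60, 0.775)))  # ~30 machines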
This stuff is incredibly challenging to get right, but they've been working on it for over a decade and the machines ASML are building now are production machines, not prototypes. And now from the linked article, it seems Trumpf's also made it to production equipment.
Once everyone's had more time to operate these machines at scale and really crunch out the bugs, we'll see how far we can continue to push physics on making these tiny etched transistors. 5nm shouldn't be an impossible step with EUV, but the stochastics after that node could make 3nm and smaller logic very questionable.
Reading this, I realized that I have no ability to differentiate actual in use fab technology from science fiction. What the engineers and scientists do in modern fabs is in my mind more impressive than the entire Apollo program.
A lot of people talk about how wide-reaching the technologies spun out from NASA in the Apollo era were, whereas I'd say the research that goes into making chips smaller has an equal if not greater reach. Considering how widespread the modern IC is, these incremental improvements need to become more and more novel, to the point where entirely new approaches have to be designed to combat the limits of physics itself.
Yeah, the Apollo program was a huge consumer of the first generation of ICs, with most of the rest going to the Minuteman ICBM, so (briefly) another aerospace application.
> The weird thing, in a sense, is that tech that seems close to nanotechology and nanomachines exists simultaneously with extremely mundane tech.
As a tangent, I think it's time for a reminder that life itself is nothing but molecular nanotechnology that we didn't invent and don't control yet. There are already nanomachines all around us, and we're all made of some. So whenever you use a piece of tech with an organic component (whether dead or alive), take a moment to ponder the extremely advanced nanotechnology involved ;).
Nails are cheap. Anything fancier saves you two minutes at a hideous cost per minute. Flying cars have that problem on top of being inefficient and unstable.
If people wanted to burn money on certain super futuristic products, there would be more of them. But they've been judged not worth it.
> I'm still hammering nails into the wall to support my coat hanger rod
There are glues for this nowadays if you want, but personally I don't mind the screw or nail. Screws and nails are a 'good enough' solution for the problems they solve, and anything else would have to be much better to really compete. The glue exists and you can buy it, but it has a shelf life and doesn't have the same shear resilience that a screw or a nail would.
Technology should first and foremost solve a problem.
Flying cars would create as many problems as they would solve. We barely manage with two dimensions; three would be a lot harder, especially with large numbers of vehicles in the air. I don't see that working at all from a physics perspective, but if we do somehow get it working to where it can compete favorably with regular vehicles on cost, then it will require central traffic control.
I think nanomachines are still purpose-built for solving small problems, and haven't been particularly useful outside the medical field yet because 'scaling is hard', in this case scaling enough to have noticeable effects. Seeing a household similar to The Jetsons' is still a ways off in my eyes.
Nanomachines are hardly being built at all from what I can tell - I doubt if any have progressed far beyond laboratories.
Meanwhile, micro-machines are everywhere these days: MEMS accelerometers, gyros, SAW filters, and MEMS microphones are in every cellphone made; every smartwatch has similar features; DLP TVs have gone out of style but were micromirror devices; MEMS barometers and altimeters are cheap and commercial drones commonly have one or more; MEMS cell sorters and biochips are actually being used in production medical labs; the list just goes on for ages...
(I could go on a long aside about the households comment, but I'll save that rant for another time. Suffice it to say, we're doing jack shit with technology in homes, and that alone is hugely depressing. Innovation in that space seems to be sorely limited to what you can bring in and plug in to a socket, rather than disrupting home designs to better fit what our technology is actually capable of pulling off today - just look at the absolutely pathetic state of residential air conditioning, vs what can be done with passive cooling and occupant-aware HVAC systems.)
I think the problem there is the churn. If you built a 'smart home' ten years ago, it would be worse than useless now. I do think we're due a shift away from 110v outlets everywhere, but I think a house that's in keeping with the technical possibilities will have to wait until those possibilities have stabilized.
> If you built a 'smart home' ten years ago, it would be worse than useless now.
I know this is nitpicking, but still: I disagree. I think that home would be more useful than a smart home built today, because basic sensors and actuators today are very similar to those of 10 years ago, they talk over the same protocol (you screw your LED bulb into the same socket you used to screw an incandescent one into before), and most importantly, none of that was tied up with third-party Internet services. Consumer IoT of today is complete garbage because every vendor wants to be a platform, and doesn't want to interoperate with anything else. Back before IoT was a thing, home automation stuck to standards.
If humans live another thousand years and avoid a major dark age which drags our tech level below a point from which we cannot bring it back up again, then we’ll probably still be hammering nails into our walls.
> A lot of people talk about how wide-reaching the technologies spun out from NASA in the Apollo Era were;
A lot of people spin nonsense about that.
Consider the hoary myth that integrated circuits are a spinoff of Apollo. What actually happened was that the early Apollo program ordered a bunch of chips from the early vendors and helped qualify them. They didn't invent ICs, they didn't make ICs, and they weren't even the biggest market for ICs (that was the Minuteman II missile, which used many more IC-based computers than Apollo did).
There is nothing particularly impressive about the Apollo program, at least not these days. You are essentially comparing today's frontier of high-tech research with 50-year-old "needs to work reliably" technology, which had different requirements even back then.
"Every now and again there is a moment that brings home how strange life in the twenty-first century can be. There I was in Brighton, England, holding a thin slice of glass and metal which was made in South Korea and ran American software, and which could show me the President of America threatening the Supreme Leader of North Korea."
John Higgs, Stranger Than We Can Imagine: An Alternative History of the 20th Century, p. 5
It's amazing when you remind yourself that your web request is actually causing something physical to happen on the other side of the world. That blows my mind every time I really think about it. It's all so seamless most of the time that it's easy to forget that it's an actual physical process that happens in the real world, with real EM waves travelling through the real physical world.
We should have had plenty of both by today. Nobody is saying smartphones aren't impressive, but there's an argument to be made that we messed up on the space exploration front.
A tidbit: one of the more lunatic proposals for next-gen fab design is to build the fab around a multi-megawatt cyclotron, and have the light source problem "solved for good".
The genuine problem with the laser-driven tin plasma source is its power efficiency. An early adopter can bear with 0.02% energy efficiency, but imagine, say, a few 20-line fabs and their power consumption.
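To put that 0.02% in perspective, a quick back-of-envelope; the ~250 W of EUV power at intermediate focus is my own assumed round number for a production-class source, not a figure from the comment above:

    # Wall-plug power implied by a 0.02% efficient EUV source.
    # Assumption: ~250 W of usable EUV power at intermediate focus.
    euv_power_w = 250
    efficiency = 0.0002          # 0.02%, as stated above

    wall_plug_per_source_w = euv_power_w / efficiency
    print(f"{wall_plug_per_source_w/1e6:.2f} MW per source")    # 1.25 MW

    # A few fabs with 20 EUV lines each, say 3 x 20 = 60 sources:
    print(f"{60 * wall_plug_per_source_w/1e6:.0f} MW total")    # 75 MW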
Another very important advantage of this design would be a more stable, easily tunable, and more narrowband light source. Tin plasma has around 1 nm of spread in its spectrum, while a cyclotron can get to picometres, at an arbitrary wavelength.
And with all of the above, you get a supremely tempting option to try diffractive optics, and do away with all the geometry-imposed nonsense of EUV reflective optics...
High-level descriptions of semiconductor fabrication are what sci-fi tech porn wants to be. Throwing an accelerator in there doesn't seem out of place. I wouldn't be surprised if they already used them for some other purposes.
Particle accelerators are used, though I'm not aware of anyone doing so for production semiconductors. They are used for doping and someone was using one for cleaving solar cells off of a silicon ingot, so there would be no kerf losses.
My father used to work for Intel, back in the '80s and '90s - he's mentioned this proposal before. Apparently this is not even remotely a new idea, and even back in the late 90s, they were expecting to have to move to this model eventually.
"User Interface LINUX Architecture
The system is delivered with proprietary ASML TWINSCAN software based on the Linux/Voyager
operating system. Separate computing hosts are used for data access and scanner control offering
the following automation interfaces: SECS, EDI, and EDA."
Not a useful comparison. A single neuron is far better networked to its neighbors and far more complex/nonlinear than a transistor. There are already chips out there with 1e11 transistors, but they don't run "ai" even a little bit.
Assuming you need something like 1e5 transistors to approximate one neuron, you'd need like 100m^2 of die area and interconnect to actually emulate a brain.
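Spelling out the arithmetic behind that 100 m^2 figure; the ~1e11 neuron count and the ~1e8 transistors/mm^2 density are my own assumed round numbers:

    # The arithmetic behind the ~100 m^2 estimate. Assumptions: ~1e11
    # neurons in a human brain, ~1e5 transistors per neuron (as above),
    # and ~1e8 transistors per mm^2 of modern logic.
    neurons = 1e11
    transistors_per_neuron = 1e5
    transistors_per_mm2 = 1e8

    area_mm2 = neurons * transistors_per_neuron / transistors_per_mm2
    print(f"{area_mm2:.0e} mm^2 = {area_mm2 / 1e6:.0f} m^2")   # 1e8 mm^2 = 100 m^2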
At sufficient photon energy the initial absorption kicks off secondary electrons which blurs the feature edges/limits effective resolution. This problem already exists to some extent with EUV but gets much worse at X-ray energy.
Just a guess as I am not in the industry but I would imagine has to do with the photoresists and masks - perhaps we don’t have ones that work with X-ray. Not much good using a wavelength that passes through or is unabsorbed by the resist material!
I think the masks can take it, but there's a ton of difficulty in inspecting the masks themselves for defects. IIRC the leading candidate at the time I was in the industry was to actually expose wafers and then scan the wafers for defects which originated on the mask, hoping not to confuse these with other process defects. Not sure how they ended up solving that one.
Yeah - typically you print the entire wafer, then compare dies within a field (usually there are multiple dies exposed in a single exposure) to see which defects are in the same place in each field. This won't work if there is only one die per field, as could be the case for an exceptionally large server chip. In that case you probably need to predict what the image will look like based on the design files and then compare with the actual image and look for differences. You could also e-beam scan the whole wafer, but this can take a while (like 30 days when I was around), might damage the wafer, and bumps up against the reliability of e-beam components. It wasn't being seriously considered.
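A toy sketch of that die-to-die comparison logic; the data here is entirely made up for illustration:

    # Toy die-to-die comparison: defects that show up at the same
    # intra-field coordinates in every field likely originate on the
    # mask; defects in only some fields are more likely random process
    # defects. Data below is invented for illustration.
    from collections import Counter

    # (field_id, (x_um, y_um)) per detected defect, relative to field origin.
    defects = [
        (0, (120.0, 45.5)), (1, (120.0, 45.5)), (2, (120.0, 45.5)),  # repeats
        (0, (300.2, 10.1)),                                          # one-off
        (2, (88.7, 210.0)),                                          # one-off
    ]

    num_fields = 3
    counts = Counter(pos for _, pos in defects)
    mask_defects = [pos for pos, n in counts.items() if n == num_fields]
    print("likely mask defects:", mask_defects)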
That would ruin the economics of EUV. A mask set costs millions of dollars, and for chip designers it is generally the most expensive part of the manufacturing process, so it must be amortized over as many units as possible.
The machines that create the masks are also not very fast. A mask is written with an electron beam, which is a sequential process.
If you were limited to one wafer per mask, then not only would each chip cost tens of thousands of dollars, you would also only be able to produce a single-digit number of wafers per year.
Diffractive lenses exist for that (Fresnel zone plates are being used) - the problem is that they're absorptive diffractive lenses instead of phase masks, and of course they have limited manufacturing precision. It's a contributing factor to the application of ptychographic methods (i.e. image synthesis from multiple diffraction patterns) to arrive at a high-resolution X-ray image (e.g. in X-ray microscopy).
Edit: Phase mask mirrors ('surface profile' Fresnel zone plates) do exist though.
I'll make an attempt here to compare that number to the complexity of programs in software. It's the first time I've tried to do this and I'm sure I don't know enough to make a good comparison, so anybody please correct me.
The binaries of Firefox on Debian Stretch (oldstable), when just counting the main program binary and .so files, amount to about 100 MB of compiled code. These are stripped hence will mainly consist of binary code and constants.
Making a flip flop in hardware requires 6 transistors IIRC. What if I compare 1 bit in software with 1 flip flop in hardware--OK, code is constant (could be etched as a PROM, but that's a useless comparison), but represents some complexity that probably needs more transistors to represent. E.g. an if statement (after evaluating a value to be dispatched on) needs a conditional jump assembly instruction (8 bytes?), and perhaps another jump instruction when the success branch is finished (another 8 bytes). This comes down to 128 bits, 768 transistors with my stupid calculation; enough to route data etc.?
So, encoding a program of the complexity of Firefox (binary part) as hardware would then need 100e6 * 8 * 6 = 4.8e9 transistors. Given 100e6 transistors per mm^2, this would need 48 mm^2 of chip area, a chip about 7mm x 7mm, not far from what CPUs use?
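The same back-of-envelope, spelled out (same assumptions as above: ~100 MB of binary, 6 transistors per stored bit, ~1e8 transistors/mm^2):

    # Back-of-envelope: Firefox's binary encoded directly as flip-flops.
    binary_bytes = 100e6          # ~100 MB of stripped binaries
    transistors_per_bit = 6       # 6-transistor flip-flop per bit
    transistors_per_mm2 = 100e6   # ~1e8 transistors per mm^2

    transistors = binary_bytes * 8 * transistors_per_bit   # 4.8e9
    area_mm2 = transistors / transistors_per_mm2           # 48 mm^2
    print(f"{transistors:.1e} transistors, ~{area_mm2:.0f} mm^2 (~7 mm x 7 mm)")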
Thus, today's web browsers and CPUs seem to be comparable in complexity? Or, a web browser could be encoded entirely as hardware and about fit on a chip? I find that unexpected and a bit surreal, loading a huge program like Firefox is just bringing in the same amount of complexity into the running system as there already is active in the CPU? Or, another line of expectation in my thinking is, CPUs are very small compared to the large programs, programs are being serially executed with a smallish set of instructions precisely because complexity in Hardware needs to stay small. Actually maybe that's not really true?
I don't know what the power output of 80s amorphous silicon PV cells (those used for pocket calculators) was. But I'd be very happy to see a tiny Linux terminal retrofitted to run on solar.
The switching faster bit has only been happening very incrementally with the breakdown of Dennard scaling. Thankfully the less energy bit seems to be part of a more fundamental process than Moore's Law[1] and there are nice, clear, theoretical limits on how power efficient a computation can be which we're nowhere near hitting[2].
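(For the curious: the theoretical limit usually cited in this context is Landauer's bound of kT ln 2 per irreversible bit operation. A rough sketch of the headroom; the "~1 pJ per 32-bit op" figure for current hardware is my own loose assumption, not a figure from the linked references:)

    # Landauer's bound: minimum energy to erase one bit at temperature T.
    import math

    k_B = 1.380649e-23            # J/K
    T = 300.0                     # room temperature, K

    landauer_per_bit = k_B * T * math.log(2)        # ~2.9e-21 J
    landauer_per_op = 32 * landauer_per_bit         # erase 32 bits

    current_per_op = 1e-12                          # ~1 pJ per 32-bit op (assumed)
    print(f"Landauer bound: {landauer_per_op:.1e} J per 32-bit erase")
    print(f"Headroom: ~{current_per_op / landauer_per_op:.0e}x above the bound")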
We have been, in various ways, for a long time. As you stack higher and higher it gets harder to get the heat out, and it's already really tough to get the heat out.
Take Intel's fastest gaming processor, the 9900K: the major difference is better heat-dissipation material between the die and the heat spreader.
Laptops and phones are mostly all thermally limited.
Die shrinks help with this, but only if you don't increase performance as you do it.
That would be colossally inefficient - essentially the size of the chip means that electrons would be taking multiple cycles to get from one side to the other. The solution would be localizing processing into distinct processing units on the one die. At that point you've reinvented multiple cores and it starts becoming cost effective to split them into separate chips to improve yields :)
The problem is the increased power usage of the additional caches that are necessary - modern CPUs already need a bunch of physically local caches in addition to the large L1/2/3/n caches because of the time it takes electrons to flow from A to B. At some point the benefit of a larger single die becomes minimal. The moment that happens you benefit from making separate chips because of the increased yield.
Most modern chips already use numerous clocks (aside from anything else, propagation delays for the clock signal are already a problem).
The problem is not simply "because clock cycle"; it is "if an electron takes X ns to get from one execution unit to the next, then that's X ns of functionally idle time". That at best means additional latency. The more latency involved in computing a result, the more predictive logic you need - for dependent operations the latency matters.
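Putting rough numbers on that; the propagation speed, die size, and clock rate below are my own illustrative assumptions:

    # How long does a signal take to cross a large die, versus one clock
    # period? Assumptions: ~0.5c effective propagation speed in on-chip
    # wiring, 30 mm die edge, 5 GHz clock.
    c = 3e8                       # m/s
    signal_speed = 0.5 * c        # very rough effective wire speed
    die_edge_m = 0.030            # 30 mm
    clock_hz = 5e9

    crossing_time_s = die_edge_m / signal_speed     # ~200 ps
    clock_period_s = 1 / clock_hz                   # 200 ps
    print(f"die crossing ~{crossing_time_s*1e12:.0f} ps "
          f"vs clock period {clock_period_s*1e12:.0f} ps")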
An asynchronous chip does not avoid the same problems encountered by a multistage pipelined processor; it's purely a different way to manage varying instruction execution times.
But this doesn't answer the killer problem of yield. The larger a single chip is, the more likely any given chip is to have errors, and therefore the fewer chips you get out of a given wafer after the multiple weeks/months that wafer has been trundling through a fab. Modern chips put a lot of redundancy in to maximize the chance that sufficient parts of a given core survive manufacture to allow a complete chip to function, e.g. more fabricated cache and execution units than necessary; at the end of manufacture, any components that have errors are in effect lasered out. If at that point any chip doesn't have enough remaining cache/execution units, or an error occurs where it can't be redundant, the entire chip is dead.
The larger a given die is the greater the chance that the entire die will be written off.
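The standard way to see this is a simple Poisson defect-yield model, yield ≈ exp(-D·A); the defect density below is an illustrative assumption, not a real fab number:

    # Simple Poisson yield model: fraction of dies with zero fatal defects.
    # D is defect density (defects per cm^2); A is die area in cm^2.
    import math

    def poisson_yield(defect_density_per_cm2, die_area_cm2):
        return math.exp(-defect_density_per_cm2 * die_area_cm2)

    D = 0.1   # defects / cm^2 (assumed)
    for area_cm2 in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(f"{area_cm2:>4} cm^2 die -> {poisson_yield(D, area_cm2):.1%} yield")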
That massive ML chip a few days ago worked by massively over-provisioning execution units. I suspect they end up with a much greater lost area on a given wafer than many small chips would, which directly contributes to actual cost.