Why can't I have a $1M machine sitting in the corner of my lab which turns out 800nm chips on demand? Can the technology which in 1990 was incredibly demanding now be turned into a product, thus vastly opening up the chip market, albeit for "obsolete" varieties of circuits?
Making chips requires a large, expensive fab plant. Using one to make something as obsolete as that would be more expensive than just making chips a couple of generations old, perhaps 30-50 nanometers. But no matter what they make, those plants are still expensive.
I think the point that the OP is missing is that silicon manufacturing is extremely complex, requiring thousands of steps, enormous amounts of water and electricity, and extremely pure materials. That's not something you can fit in a lab, let alone in a single machine inside the lab.
To give you a sense of scale, I ride past Intel's research fab (Ronler Acres) in Oregon and the Hillsboro airport on the way to work. The fab is bigger than the airport, and even that fab does not contain all the equipment necessary to manufacture chips for public consumption. There are separate plants for packaging and testing.
Considering the investment concentrated in such facilities and the stringent requirements of clean rooms, the 9/11 attacks would have done much worse economic damage by targeting those facilities.
You still would have had the TSA and the wars, plus an ongoing economic impact on the cost of chip production. I'm not saying that there wasn't a lot of economic impact. I'm saying there could have been more.
Look at this ~6000nm Intel plant from the late 1970s. It was extremely high tech at the time, but nothing there seems inherently like it cannot be fully automated, simplified and shrunk down to my $1M FAB machine.
You can get down much lower than $1 million if you're willing to live with a feature size of 10+ microns.
The ghetto method is to point a projector through the top of a microscope so that you don't have to bother with photomasks. That only covers the lithography; it doesn't count the other steps.
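To get a feel for what that buys you, here's a rough back-of-the-envelope sketch. The micromirror pitch, objective magnification, and the practical caveats are all assumptions on my part, not measured numbers:

    # Rough feature-size estimate for the projector-through-a-microscope trick.
    # Every number here is an assumption for illustration, not a measurement.
    projector_pixel_pitch_um = 7.6   # typical DLP micromirror pitch (assumed)
    objective_magnification = 4      # objective run in reverse as a reducer (assumed)

    # The objective de-magnifies the projected image, so one mirror maps to
    # roughly pitch / magnification on the resist, ignoring diffraction.
    pixel_on_resist_um = projector_pixel_pitch_um / objective_magnification
    print(f"~{pixel_on_resist_um:.1f} um per projected pixel")

    # In practice, focus error, stray light, and resist processing mean your
    # smallest reliable feature spans several pixels, which is how you end up
    # in the 10+ micron range mentioned above.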
irc.freenode.net #homecmos ... although they are taking a break for a while, so I also recommend ##hplusroadmap and #dlp3dprinting and #reprap for now.
(I love this question. I have a Ph.D. in the making of semiconductor devices, and I once worked as a troubleshooter in a factory that was making transistors with a twenty-year-old process.)
The first fallacy that's tripping you up is marginal cost. Just because it's cheaper to buy an 800nm-process chip today than it was in the 1990s doesn't mean that it's cheaper to build the factory, employ the packaging engineers, or source the materials (let alone stuff all those things into a refrigerator-sized box). The finished parts are cheaper because the R&D, factories, processes, and HR procedures were bought and paid for in the 1990s, and those things are all still there, so long as there's still a market. The workers are very happy to keep doing their jobs, and the marginal cost to keep them working is relatively low, particularly because the yield on a mature process can be really high.
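If it helps, here's a toy calculation of that difference, with every number invented purely to show the shape of the argument:

    # Toy comparison of marginal cost vs. all-in cost per chip.
    # All figures are made up purely for illustration.
    capex = 1_000_000_000            # fab construction + process R&D (paid off in the 90s)
    running_cost_per_wafer = 1_500   # materials, labor, power for one more wafer
    good_dies_per_wafer = 400        # mature process, high yield

    marginal = running_cost_per_wafer / good_dies_per_wafer
    print(f"marginal cost per chip: ${marginal:.2f}")    # a few dollars

    # Rebuilding everything from scratch for a modest run changes the picture:
    wafers = 10_000
    all_in = (capex + running_cost_per_wafer * wafers) / (good_dies_per_wafer * wafers)
    print(f"all-in cost per chip:   ${all_in:.2f}")      # hundreds of dollars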
The second fallacy is the physical-plant fallacy. You look at the factory and the machines and you think that's what it takes to make semiconductors. But if I gave you the keys to a shiny new Intel factory today, you would not succeed in making 80486 processors in a few weeks. Even if I gave you a new factory and its staff and the services of the world's leading experts in semiconductor devices and went back in time to arrange the delivery of a steady stream of raw materials, you would still not succeed in making working 80486 processors in a few weeks, although the Dream Team might manage to make some things that looked like working devices right up until you tried to turn them on... or until you tried to turn them on three weeks later.
The expensive part of manufacturing is the learning curve. Every one of those shiny machines has five hundred knobs, and every one of those knobs needs to be set correctly or the products won't work. Your experts can guess the approximate settings for everything, but the crucial final 5% needs to be dialed in by trial and error. You must exercise the factory, then correct for the mistakes.
That's expensive because the feedback is expensive. The difference between a broken part and a working part might take weeks to manifest, and it's literally microscopic, so you need an entire little team of highly trained QA scientists with thermal-cycling ovens and electron microscopes and Raman spectroscopes and modeling software and coffee in order to develop hypotheses about the problems with your process, hypotheses which must be tested by running more doomed wafers through that process.
(I've watched a few thousand people come within a hair of losing their jobs because we couldn't make this iteration converge fast enough.)
This is where economy of scale comes from: Practice. The Nth wafer coming out of a fab has high yield if and only if the (N-1)st wafer had high yield, so you have to bootstrap your yield up from zero one batch at a time. Your fab is only as valuable as the number of wafers it has made, or tried to make. The factory needs practice, and practice takes time, and time costs money.
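A minimal sketch of that bootstrapping dynamic, assuming yield climbs by a fixed fraction of the remaining gap with every batch (the starting yield, learning rate, target, and batch cost are all invented):

    # Toy model of yield learning: yield only climbs with accumulated practice.
    # Starting yield, learning rate, target, and batch cost are all assumptions.
    yield_now = 0.0          # fresh fab: nothing works yet
    target = 0.90            # yield at which the line is worth running (assumed)
    learning_rate = 0.05     # fraction of the remaining gap closed per batch (assumed)
    cost_per_batch = 250_000 # wafers, labor, and QA time burned per iteration (assumed)

    batches = 0
    while yield_now < target:
        # Each batch of mostly-doomed wafers buys feedback and one small fix.
        yield_now += learning_rate * (1.0 - yield_now)
        batches += 1

    print(f"{batches} batches and ~${batches * cost_per_batch:,} spent "
          f"before the fab produces anything worth selling")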
---
So, here's how your refrigerator-sized fab is going to work. You'll take delivery and set it up. Unfortunately, shipping being what it is, parts will have slipped or gotten bent or stretched. Your humidity and temperature cycles will be different than they were back in Shenzhen. Your ambient dust level will be different. The batch of photoresist that you pour into your hopper will have been manufactured on a different week than the batch that the manufacturer used to calibrate the machine, and your sputtering targets will contain a different mixture of contaminants.
All of these things can probably be calibrated out – if the knobs are well-built enough to stay where you set them, and your environmental controls are comprehensive enough that the conditions remain constant, and you aren't forced to change suppliers, and you have the operational discipline to resist the urge to get blind drunk and start twiddling settings at random while sobbing. But how do you know which experiment to run, on your microscopically-flawed parts, in order to converge on working parts? You need to order the optional "electron microscope" kit, which ships in a slightly smaller box. The box next to that one will contain the materials scientist that you ordered. Hopefully they remembered to drill the air holes!
It's a lot less than a million dollars. You can fab transistors at home with a bit of patience. A number of universities have done this as part of an undergraduate class.
It would be interesting to build a single metal layer process for 120mm wafers (DVD sized). Something like a 'makerbot' but for simple chips. The tricky bit is that you really want to use things like hydrofluoric acid, and getting one's hands on that stuff is time consuming. It's something that would have been possible with the regulatory liberties of the mid-20th century and the technology of the 21st, but not the other way around.
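For scale, the usual gross-die approximation gives a sense of what a 120mm wafer would hold; the die size below is an arbitrary assumption:

    import math

    # Gross die count on a 120mm (DVD-sized) wafer, standard approximation:
    #   dies ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
    # The die area is an arbitrary assumption for a simple single-metal-layer chip.
    wafer_diameter_mm = 120
    die_area_mm2 = 25        # e.g. a 5 mm x 5 mm chip (assumed)

    d, a = wafer_diameter_mm, die_area_mm2
    gross = math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a)
    print(f"~{int(gross)} candidate dies per wafer, before yield loss")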
I am working on a thought experiment for a holographic FPGA-type device that would be pretty much a DVD and could be programmed in a regular DVD burner.
FWIW, at USC in the early 80's I was in a lab that was doing (in part) holographic image convolvers. Basically trying to create holographic "lenses" that would effectively do an image convolution when light passed through them. It was a pretty remarkable concept but I don't recall it being all that successful. The reports should still be available from the EE department there.