Not directly related, but just last week I gifted two Intel 8088 machines, including one in a really beautiful case with its original "3 year warranty!" sticker, along with other stuff (period disks, Apple LC II, Apple II, SGI/SunOS/Solaris stuff, etc.) to http://acms.org.au/, who are looking for an exhibition location for their huge collection. Since they're a registered charity, space donations are tax deductible and can help meet council public space/cultural requirements for new developments. Requirements: Sydney or Canberra, 100-300 m2 exhibition space (really 200+), 500-1000 m2 storage space (500+ already in use).
In 2011, I donated a mass of Mac computers to a podcaster who wanted to set up a museum of Mac history. It was a lot of work.
Not news to HN readers, but yeah: computers used to be even more difficult to use than they are now. And of course getting all of the bits together for a working system takes a lot of time.
Very cool. As I recall, the 8080 needed multiple power supplies and an external clock generator, and the 8085 eventually brought those on-chip. The Z80 was usable with essentially no support chips at all, and I believe that was the primary motivation for the 8085.
When you look at the notion of the 8238 you can see the seeds of the 'southbridge' that was so prevalent in PC design. Intel systems broke apart as "memory" (northbridge), "compute" (CPU), and "I/O" (southbridge). That has been a pretty successful architecture, and one that ARM would do well to emulate for their non-SoC offerings.
Despite how simple the 8008 is compared to modern processors, I wonder if we'll ever see some sort of commercially-available "3D printer for circuits" that would be able to reproduce something of its caliber. Such a thing would finally enable truly open computing at every level, which, even if far less capable than high-end processors, would be sufficient for lots of tasks.
People have replicated silicon fab processes at home -- see e.g. [0], where Jeri Ellsworth makes homemade transistors. Of course, the effort she took to do that (while seriously impressive) probably isn't worthwhile compared to using an FPGA if you just want to see your CPU pipeline in Verilog come alive :-)
You can easily replicate a 4000-transistor microprocessor with the lowest-end FPGA for a few dollars - cheaper than any 3D-printing process for silicon that I can imagine. It would, of course, also be reprogrammable, making it very cheap for prototypes and experiments.
From a security perspective, how feasible is it for a state-level actor to usefully backdoor an FPGA? I imagine that part of the appeal of personally-fabricated hardware is that, if you validate the design, you have greater assurance about the code executing on your machine. Sort of like the difference between compiling from source and trusting a binary blob (though, just like compilers can be backdoored, I imagine there are theoretical attacks involving subverting the fab itself...).
It's quite feasible. State of the art FPGA chips are typically manufactured in third party fabs. These are primarily TSMC, GlobalFoundries, and UMC. I'm guessing Intel's acquisition of Altera will result in a more "trustworthy" FPGA. In a nutshell, if a state actor can access the fab, it can insert a backdoor.
With circuit design, fully independent verification has not been solved yet. The primary reason is that fabrication, especially at smaller process nodes, is extremely complex. So even if you had a "3D IC printer" at home, you would still need to trust the manufacturer of that printer, as well as the manufacturers of its key components. Taking that even further, you would need to trust that each of those components was manufactured by a trusted fab. It's turtles all the way down.
If I were to design such a printer from scratch, I would have a consortium of known companies oversee the design of the first printer and all of its components in a closed environment with 24/7 surveillance. All circuits would be fabbed using a manual, low volume process. Once the initial printer is complete, all the circuits of subsequent printers would be fabbed using this root printer. There are shortcomings with this technique too, but it's probably the best way to go about doing it (with current tech).
I agree that trusting the manufacturer of the fab would constitute a potential attack vector, but if the fab was capable of reproducing itself (in conjunction with other tools, of course) then persistent subversion of the fab would be analogous to a trusting-trust attack. And just like trusting-trust attacks can be satisfactorily mitigated (e.g. via diverse compilation), so could a trusted hardware toolchain eventually be produced. The question then is just whether such a thing is worth the large effort. :)
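For what it's worth, here is a minimal sketch of what that kind of diverse-compilation check (Wheeler-style diverse double-compiling) could look like. The compiler paths and source file name are hypothetical; it's just meant to show the shape of the check, not a real tool:

```python
# Hypothetical sketch of diverse double-compiling (DDC), the "diverse
# compilation" defence mentioned above. Paths and compiler names are
# made up for illustration.
import filecmp
import subprocess

def build(compiler: str, source: str, output: str) -> str:
    """Compile the compiler's own source with a given compiler binary."""
    subprocess.run([compiler, source, "-o", output], check=True)
    return output

SOURCE = "compiler_source.c"     # source of the compiler under test (hypothetical)
cA = "./suspect-cc"              # binary we want to verify (hypothetical)
cT = "./trusted-cc"              # diverse, independently built compiler (hypothetical)

# Stage 1: build the compiler source with both compilers.
stage1_A = build(cA, SOURCE, "stage1_A")
stage1_T = build(cT, SOURCE, "stage1_T")

# Stage 2: use each stage-1 result to build the source again.
stage2_A = build(stage1_A, SOURCE, "stage2_A")
stage2_T = build(stage1_T, SOURCE, "stage2_T")

# If compilation is deterministic and the source is honest, both stage-2
# binaries must be bit-for-bit identical; a trusting-trust trojan in cA
# would show up as a mismatch here.
assert filecmp.cmp(stage2_A, stage2_T, shallow=False), "possible trojaned compiler"
```

A trusted hardware toolchain would need some analogous cross-check, which is exactly why the "is it worth the effort" question matters.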
The raw resources alone - not considering the labor and technology - required to build a fab and maintain it are so staggering that I just can't see that happening anytime soon.
Besides, the fab itself isn't the issue. Rather, the equipment used during the fabrication process is the weakest link, so to speak. Further, if we consider the feasibility of insider attacks, the people who maintain the fab are potential targets as well.
So, I think achieving what you propose would require: 1) access to an immense amount of resources, 2) the capability to self-manufacture the tools required to manufacture ICs, and 3) an AI that can operate the fab.
The third point alone is so far off that I'd conclude it's out of reach for a long time.
Backdooring the device itself is just as hard (or easy) as backdooring any other form of IC. The question is: with what? Because it doesn't have a fixed function, it's almost impossible to set up any specific subversive behaviour in advance without knowing what might be run on it.
The most likely attack would be an override of the code protect/security fuse/anti-JTAG features. But that's only useful when the attacker has got hold of the device and is probing it.
> Because it doesn't have a fixed function it's almost impossible to set up any specific subversive behaviour in advance without knowing what might be run on it.
This is precisely the essence of my question, given that I have little knowledge of FPGAs and have no real clue how much each reprogrammed circuit has in common with any other. :)
> how much each reprogrammed circuit has in common with any other
Very little. An FPGA is a set of reprogrammable logic blocks (LUTs) of very small size, plus a number of special-purpose peripherals. The "layout" process of assigning functions to LUTs is usually done with simulated annealing and random perturbation. The compiler won't necessarily give the same output from the same input, let alone from slightly different input.
The fixed-function blocks and any embedded processors (e.g. Nios) are more targetable. But you could also e.g. set up the clock PLL to leak the FPGA configuration slowly via spread-spectrum modulation.
Simulated annealing isn't technically a genetic algorithm (nor is random perturbation). Simulated annealing jiggles things randomly (in this case, probably the locations of LUTs), with decreasing amplitude over time. The amplitude of the random perturbations is the "temperature," which decreases until the system has settled into a (hopefully global) optimum. So basically this system will start out moving the LUTs around a lot, keeping the most optimal results, and gradually move them around less and less until the result stops changing for a while.
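A toy sketch of that kind of annealing-based placement loop, in case it helps. The netlist, grid size, cost function and cooling schedule here are all invented for illustration (a real placer is far more involved):

```python
# Toy simulated-annealing placer: LUT positions on a grid are jiggled
# randomly, with both the move amplitude and the acceptance of worse moves
# shrinking as the "temperature" drops. The netlist, grid size and cooling
# schedule are invented for illustration (overlapping LUTs are ignored).
import math
import random

def wirelength(placement, nets):
    """Sum of Manhattan bounding boxes of each net -- the cost to minimise."""
    total = 0
    for net in nets:
        xs = [placement[lut][0] for lut in net]
        ys = [placement[lut][1] for lut in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(placement, nets, grid=32, t_start=10.0, t_end=0.01, alpha=0.995):
    temp, cost = t_start, wirelength(placement, nets)
    while temp > t_end:
        lut = random.choice(list(placement))
        old_x, old_y = placement[lut]
        # Random perturbation whose amplitude shrinks with the temperature.
        r = max(1, int(grid * temp / t_start))
        placement[lut] = (min(grid - 1, max(0, old_x + random.randint(-r, r))),
                          min(grid - 1, max(0, old_y + random.randint(-r, r))))
        new_cost = wirelength(placement, nets)
        # Always keep improvements; keep worse layouts only with a probability
        # that shrinks as the system cools.
        if new_cost > cost and random.random() >= math.exp((cost - new_cost) / temp):
            placement[lut] = (old_x, old_y)   # reject the move
        else:
            cost = new_cost                   # accept the move
        temp *= alpha                         # cool down
    return placement

# Example: three LUTs and two nets connecting them.
luts = {"lut0": (0, 0), "lut1": (31, 31), "lut2": (15, 0)}
nets = [("lut0", "lut1"), ("lut1", "lut2")]
print(anneal(luts, nets))
```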
A genetic algorithm, by contrast, encodes the system into a "string" (like a DNA strand), and then swaps pieces of strings between two "organisms," just as genetic mating does. The most optimal descendants are kept, the least optimal are discarded, and the process is repeated. This would be harder to implement for placement, as you would have to encode the locations onto a string and be able to swap pieces of strings while maintaining the functionality of the LUTs.
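And a matching toy sketch of the genetic-algorithm version, with LUT positions flattened onto a "string" so that pieces can be swapped between two parents. Again, all names, nets and parameters are invented just to show the contrast with the annealing loop above:

```python
# Toy genetic-algorithm placer for the same kind of problem: each "organism"
# is a flat string of LUT coordinates, and mating swaps a slice between two
# parents. All names, nets and parameters are invented for illustration.
import random

LUTS = ["lut0", "lut1", "lut2"]
NETS = [("lut0", "lut1"), ("lut1", "lut2")]   # pairs of connected LUTs
GRID = 32

def random_genome():
    # Genome: [x0, y0, x1, y1, x2, y2] -- locations encoded onto a "string".
    return [random.randrange(GRID) for _ in range(2 * len(LUTS))]

def decode(genome):
    return {name: (genome[2 * i], genome[2 * i + 1]) for i, name in enumerate(LUTS)}

def fitness(genome):
    # Shorter total Manhattan wire length = fitter organism.
    p = decode(genome)
    return -sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in NETS)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]                  # swap string pieces at a random cut

def mutate(genome, rate=0.05):
    return [random.randrange(GRID) if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]               # keep the most optimal, discard the rest
    population = survivors + [mutate(crossover(*random.sample(survivors, 2)))
                              for _ in range(40)]

print(decode(max(population, key=fitness)))
```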
A subverted FPGA could contain a whole different circuit which turns on based on whatever condition desired (timer, radio command, data pattern seen by the FPGA) and takes over the function of the FPGA in whatever way the subverter intended. As the post to which you are replying said, for good effect, the designer of the subverted circuit would need to know the function the FPGA is performing in your device and design a custom circuit for it. This is possible e.g. by examining the first batch of your devices, and inserting a subverted circuit in your supply chain for subsequent batches. Certainly a lot of trouble to go to, but it's possible theoretically, and people have been writing about the possibility.
Now, the FPGA is not necessarily an ideal place to insert your circuit; personally, I would put it in something like one of those flat ribbons which connect components inside so many devices, or a socket - they may have access to all the bus pins of the gadget, and they will be less conspicuous. I don't think 3D printing is the answer to this, since too many things would have to be 3D printed.
Now, I don't know how I would go about protecting from such attacks, or if this is even a real-life concern right now, but I would think that some kind of automated high-resolution X-ray imaging and analysis technology would be a more realistic direction.
> The most likely attack would be an override of the code protect/security fuse/anti-JTAG features. But that's only useful when the attacker has got hold of the device and is probing it.
I believe there was already an incident in this direction, where researchers discovered vendor master keys for these functions in FPGAs.
Sure. FPGAs, or any IC, could contain subversive/backdoor hardware placed there by a sophisticated actor. For that matter, so could ostensibly passive components like connectors, ports, flat ribbon cables... now that we have sophisticated circuits that can be powered by stray magnetic waves (RFID and its advanced friends), so can components not connected to power, such as cases, clips, screws. It is a fascinating thing to think about, actually, and I don't see how 3D-printing every single thing around you is a realistic solution.
And considering that regular color printers (inkjet and laser) have all been backdoored for ages - they print mostly invisible constellations of dots into every page which identify the individual printer - I would say that backdooring a "silicon printer" is also a definite possibility.
Most of the time octal causes me pain, but octal is definitely the way to understand Intel instruction sets. In hex you can sort of see some patterns, but in octal things are trivial.
The octal encoding goes back to the Datapoint 2200, which used BCD decoder chips (7442) for instruction decoding. These decoded three instruction bits at a time, so the instruction set of the Datapoint 2200 (and thus the 8008) was based on groups of three bits. Among other things, that's why the 8008 has 7 registers (A, B, C, D, E, H, L) - the 8th value was used to indicate a memory access.
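To make the octal point concrete, here's a tiny decoder for the 8008's register-load group (11 DDD SSS, i.e. octal 3DS). The register codes are from my reading of the 8008 documentation (A=0 through L=6, with 7 meaning a memory access through H/L), so treat it as a sketch rather than a reference:

```python
# Tiny illustration of why octal fits the 8008: the "load register" group
# is 11 DDD SSS, so in octal every such opcode reads as 3-D-S.
# Register codes here are from my reading of the 8008 docs (A=0 .. L=6,
# 7 = memory via H/L) -- verify against a real reference before relying on it.
REGS = ["A", "B", "C", "D", "E", "H", "L", "M"]   # M = memory access

def decode_load(opcode: int) -> str:
    group, dst, src = (opcode >> 6) & 7, (opcode >> 3) & 7, opcode & 7
    if group != 3:
        return "not a load/move instruction"
    return f"L{REGS[dst]}{REGS[src]}  (opcode {opcode:03o} octal = {opcode:02X} hex)"

# 0o310 = 11 001 000: load register B from A -> "LBA".
print(decode_load(0o310))   # the octal digits 3-1-0 read straight off as group/dest/src
for op in (0o301, 0o326, 0o372):
    print(decode_load(op))
```

In hex the same opcodes (C8, C1, D6, FA) look like noise; in octal the group, destination and source fall out digit by digit.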
Very cool. I wish university professors would use this article as a precursor to the computer architecture course material. It's much more interesting to study something when you have a historical perspective that you can relate to later on in the course.
When I was a teen, my older brother loaned me his copy of the book "State of the Art: A Photographic History of the Integrated Circuit", which included a die photo of the 8008. And thus began my love affair with computers.
Can anyone recommend a few books on chip manufacturing? I am completely blown away at the scale they are working with here. I'm particularly interested in a "how it's made" type book. This is obviously something that requires significant upfront investment. The scale makes this seem so difficult. I'm a total newb in this area, so I might not even be asking the right questions yet.
Back in the 80s I took a freshman lab course on this at Caltech. The course notes were available at the campus bookstore, but I don't think they were ever a proper book with an ISBN (and I hope there's something newer, though maybe there's not much new to say about the kind of crude small-scale ICs we got to make in the lab).
In ye-olde planar layouts (TTL era) they were routed quite similarly, like arteries and veins, with small 100 nF caps and the TTL chips as a sort of capillaries.
However, this kind of power supply routing (not sure what Americans call it idiomatically; some here call it a "ladder supply/layout") has poor electrical properties, making it unsuitable for operating logic beyond a couple of MHz.