It's fascinating how many great stories there are from this wild west period of the computer industry, thanks for a great writeup.
I have to pick a nit with footnote 10 though, "There's no need for a processor accessing bits in parallel to be little endian" – there is when your data bus is narrower than the address bus, such as an 8-bit CPU performing indexed or relative 16-bit addressing. The simple pipelining of the little endian 6502 allows it to calculate the lower half of the target address while it fetches the upper half from memory, saving a cycle compared to the big endian 6800.
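To make the pipelining argument concrete, here's a toy cycle-count sketch (the function and cycle breakdown are my own illustration, not taken from any datasheet) of indexed addressing on an 8-bit CPU with 16-bit addresses. The point is that when the low address byte is stored first, the ALU can add the index to it during the memory cycle that fetches the high byte:

```python
# Illustrative only: rough cycle accounting for computing an indexed
# effective address (base + index) on an 8-bit ALU, depending on which
# address byte arrives from memory first.

def indexed_address_cycles(little_endian: bool) -> int:
    cycles = 1  # fetch first operand byte of the address
    if little_endian:
        # Low byte arrived first: overlap the low-half add with the
        # fetch of the high byte.
        cycles += 1  # fetch high byte; ALU adds index to low byte in parallel
        cycles += 1  # propagate carry into high byte, issue final address
    else:
        # High byte arrived first: nothing useful for the ALU to do yet.
        cycles += 1  # fetch low byte
        cycles += 1  # add index to low byte
        cycles += 1  # propagate carry into high byte, issue final address
    return cycles

# Little-endian saves one cycle by overlapping the add with the fetch.
print(indexed_address_cycles(True), indexed_address_cycles(False))
```

This is of course a cartoon of what the 6502 and 6800 actually do, but it captures where the saved cycle comes from.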
Also, wouldn't little-endianness generally benefit 16-bit arithmetic operations performed on 8-bit ALUs? Since (correct me if I'm wrong) the carry always propagates from LSB to MSB, processing the LSB before the MSB lets you implement the carry mechanism more efficiently.
Yes, though few 8-bit CPUs supported 16-bit arithmetic. Most required two 8-bit operations with carry instead, and in that case there's no difference between little and big endianness: either way the LSB has to be calculated first.
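The two-operations-with-carry approach is easy to show. Here's a minimal sketch (the function names are mine, not any CPU's instruction set) of a 16-bit add built from two 8-bit ALU operations, low byte first so the carry can ripple into the high byte:

```python
# Sketch of 16-bit addition on an 8-bit ALU: the low bytes must be
# added first so their carry-out can feed the high-byte add. This
# ordering is required regardless of how the bytes are stored.

def add8(a, b, carry_in=0):
    """8-bit add with carry; returns (result, carry_out)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

def add16_via_8bit_alu(x, y):
    """16-bit add as two 8-bit adds: LSB first, then MSB plus carry."""
    lo, carry = add8(x & 0xFF, y & 0xFF)      # low bytes first
    hi, _ = add8(x >> 8, y >> 8, carry)       # high bytes, carry in
    return (hi << 8) | lo

print(hex(add16_via_8bit_alu(0x12FF, 0x0001)))  # carry ripples up: 0x1300
```

Little-endian storage just happens to hand the CPU the bytes in the order this computation wants them.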
The big endian 6809 supports 16-bit arithmetic, and is indeed punished for it: 16-bit ADD and SUB costs an extra cycle compared to a little endian design.
After carefully studying pictures of SNES cartridges, I think you're right. That's pretty funny that Boysel used game cartridges when creating the demo system for a patent lawsuit.
There's another TI processor that never really took off despite its use in arcade machines at the time: the TMS340 series (http://en.m.wikipedia.org/wiki/TMS34010). 32-bit in 1987. I assume it was just out of the price range of 8-bit and 16-bit machines. It was used in the Mortal Kombat and NBA Jam games, I believe.
I remember those (coincidentally 68020/2) back when I was working at Eicon's graphics group (now Eicon Networks). Ended up just using M68Ks with blitters for PostScript and HP PCL interpreters. In the end the HP version even lost the blitter chip and I had to write bitblt.s
I can't remember all the details now. The main interpreter was written in MS C; CodeView in text mode was awesome. The graphics routines were 68k assembly with pixel write modes, an optimized character blt, and a scale-up routine. The only reason the sw implementation came close to hw was that the hw required quite a bit of setup, which is a lot of overhead for the small patterns that were typical (e.g. patterned horizontal rules).
My opinion only, but I did at the time work for a company that created a TMS34010 product. It didn't take off because, frankly, it sucked. It was very expensive, relative to competing products, made worse by requiring lots of expensive VRAM memory (for both code and data memory) for decent performance. Memory bus was 16-bit, so if you were trying to do, say, 8-bit graphics, you could only grab 2 pixels per clock (~150ns-200ns depending on speed step). It was integer math only. I recall the C compiler was pretty buggy as well. There was a follow-on TMS34020 chip that was faster and had an FPU coprocessor and better 3D, but it was still prohibitively expensive to deliver product, and by then there were a boatload of competitors that were beating it on both performance and price.
The size difference and design technique between the Intel and TI chips is telling. Even though they are from the same generation, Intel was just years ahead in efficiency. The TI almost looks like a "will this work?" or "that will do" type of design.
Author here. One key difference between the Intel and TI layout is that Federico Faggin (who later created the Z80) brought silicon gate technology to Intel from Fairchild. This provided much better transistor performance. But it also provided a layer of polysilicon "wiring", which made routing much easier. (Kind of like routing a single-sided printed circuit board vs a double-sided board.) This gave Intel a huge advantage in layout. But even taking that into account, the TI layout is kind of terrible.
Yeah, and the vastly greater size fed directly into it not being commercially viable because of massively lower yield, all things being equal.
Add an unrealistic requirement for ultra clean power, and the thesis of the article is quite debatable; this chip "worked in the lab" but was not viable outside of it. Perhaps a good learning experience for TI, but hard to see more than that.
Was TI using the same feature size as Intel? The 4004 and 8008 were 10-micron, but the TMX1795 looks like it was a larger process - from the comparative photo I can nearly read the bits off the decode PLAs on it, whereas it's a lot harder with the 4004 and basically impossible on the 8008 die.
I was very surprised at that exact same thing as well. I thought I was the only one. I really thought for a second that it was some magazine from the 90s or even early 00s.
My takeaway from this is that the author clearly documented the AL1 as the world's first microprocessor. The arguments against are artificial; it doesn't matter that this wasn't the intended configuration, nor that one essentially had to program it in microcode. A microcode ROM is nothing more than a compression technique and doesn't make a processor any more capable.
All that said, it's all arbitrary and the 4004 is the first _successful_ microprocessor. (I personally find it a more interesting question which alternative architectures could have been produced given the technology and constraints of the 4004 and 8080).
So, can anyone make an open hardware version of this chip? Have the patents run out? It's been 45 years, and I understand they only last 25 years. A side question would be which CPU architectures are openly available now that are capable of running a modern Linux distro?
The patent situation is such that one could manufacture a 32-bit x86 processor without the various SIMD and virtualization extensions and not have to worry about patents on the abstract architecture. Implementation techniques up to and almost including the Pentium Pro (November 1995) would likewise be safe from patent threats. However, semiconductors get explicit treatment under copyright law (http://copyright.gov/title17/92chap9.html), so you have to do your own layout until such time as copyrights start expiring again.
I think every typewriter/office machine company tried their hand at computers at some point. Royal McBee, Burroughs, Smith Corona (I don't think they did full-blown computers, but they did do some dedicated word processors), Remington Rand...
I think that gives the wrong impression, that office machine companies tried making computers and failed. Remington Rand, for instance, was IBM's principal rival in the 1950's, selling the UNIVAC. Olivetti had transistorized mainframes before IBM. I think the interesting question is why IBM survived while the rest didn't. My highly oversimplified answer would be the IBM 360 crushed them.
Coincidentally, I'm reading "IBM's Early Computers" and this book says the first personal computer was the IBM 610 (originally called "Personal Automatic Calculator"), introduced in 1957! It consisted of a desktop keyboard with tiny CRT, along with a typewriter-style printer. (Off to the side was the freezer-sized cabinet full of vacuum tubes.)
Personally, I think it's hopeless to try to pick the "first personal computer", since the term can mean almost anything.
> The IBM 610 was the first personal computer, in the sense that it was the first computer intended for use by one person (e.g. in an office) and controlled from a keyboard.
I don't know if it counts as a personal computer if you need four burly men to pick it up. And the price was rather out of range for home users.