Texas Instruments’ Biggest Blunder: The TMS9900 Microprocessor (ieee.org)
223 points by nradov on June 23, 2017 | 125 comments



My first job out of college was to write a polygon fill routine for the TMS34010 graphics processor. I developed a love for its clean 32-bit RISC architecture... it was even bit addressable, with a flat address space. I actually loved programming in assembly language... you could fit most subroutine arguments in registers, and use any of the 32 registers you pleased.

Later I did some 8086 assembly programming. I learned why a generation of programmers hates assembly... only a few special-purpose registers, segmented memory, a long list of non-orthogonal instructions... yuck.

I blame the general aversion to assembly programming on the choice of the 8086 for PCs.


You could choose memory accesses at any bit size from 1 to 32, with both unsigned and sign-extended register loads. In fact, I think you got to pick two sizes, and for most normal operation you'd choose 16 and 32 bits. There were some load/store operations that always operated at 8 bits, so you got the usual complement of data sizes. A pretty useful feature for something that was intended to be a graphics processor. I've also seen the different bit sizes used for fast and simple Huffman decoding in gzip decompression.

The auto-increment/decrement addressing modes were aware of the bit size. Thus you could have polymorphic subroutines where the same code could be used to sum an array of bytes, words, 13-bit quantities, or whatever word size you wanted (up to 32 bits).
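
To make that concrete, here's a minimal C sketch of my own (not TI code) of what the hardware was doing for you: pulling arbitrary-width fields out of a bit-addressed buffer, something the 34010 did with a single auto-incrementing, field-size-aware load.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical software model of a bit-addressed load: read a `width`-bit
       unsigned field starting at absolute bit offset `bit` within `buf`. */
    static uint32_t load_field(const uint8_t *buf, size_t bit, unsigned width)
    {
        uint32_t value = 0;
        for (unsigned i = 0; i < width; i++, bit++)
            value |= (uint32_t)((buf[bit >> 3] >> (bit & 7)) & 1u) << i;
        return value;
    }

    /* "Polymorphic" sum: the same loop sums bytes, 13-bit fields, or 16-bit
       words, depending only on the width passed in. */
    uint64_t sum_fields(const uint8_t *buf, size_t count, unsigned width)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < count; i++)
            sum += load_field(buf, i * (size_t)width, width);
        return sum;
    }

On the 34010, as described above, the field size and the auto-increment stride came from the processor itself, so the loop body was just a load and an add.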

Since the processor had built-in circuitry to help drive graphics displays, it took a high input frequency of something like 40 or 50 MHz and divided that down to run the processor at around 10 MHz or so. The opposite of what we're used to now, and it made the darn things look scary (well, scarier) from an emulation programmer's perspective.

The memory interface was designed to work with shift-register VRAM. The idea there being that the VRAM chips had a built-in shift register 512 pixels wide. The display circuitry would make use of it by having each line from the frame buffer dumped into the shift register as the raster was moving down the screen. And then the shift register would clock out the pixels as the raster moved across each line.

During VBLANK (when the video circuitry is waiting for the raster to return to the top of the screen) you could use special instructions to load and store the shift register which would be as fast as any load/store operation. The entire frame buffer could be filled very quickly with any repetitive pattern or erased entirely by copying some fixed line to all the others.

Lotta cool stuff in that beast.


Yes, and then the TMS34020 came out, which had a trapezoid fill instruction. Essentially you would partition your polygon into trapezoids; for each trapezoid you would load the appropriate registers... the slopes of the edges used 16.16 fixed-point numbers. I remember how much you could do with fixed-point arithmetic... very few programmers even know how to program using fixed point anymore. A very useful lost art.
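
For anyone who hasn't used it, here's a minimal sketch of 16.16 fixed point in C -- illustrative only, not the 34020's actual edge-walking code:

    #include <stdint.h>

    typedef int32_t fix16;                  /* 16 integer bits . 16 fraction bits */

    #define FIX_ONE     (1 << 16)
    #define INT2FIX(i)  ((fix16)((i) * FIX_ONE))
    #define FIX2INT(f)  ((int32_t)((f) >> 16))  /* arithmetic shift: floors */

    /* Multiply/divide widen to 64 bits so the intermediate doesn't overflow. */
    static fix16 fix_mul(fix16 a, fix16 b) { return (fix16)(((int64_t)a * b) >> 16); }
    static fix16 fix_div(fix16 a, fix16 b) { return (fix16)(((int64_t)a << 16) / b); }

    /* Edge-walking flavour of use: step x along an edge by a fractional
       slope (dx/dy) once per scanline -- one add per line, no floats. */
    void walk_edge(int y0, int y1, fix16 x, fix16 dx_dy)
    {
        for (int y = y0; y < y1; y++) {
            int x_pixel = FIX2INT(x);   /* integer pixel column for this scanline */
            (void)x_pixel;              /* ...fill the span starting here... */
            x += dx_dy;
        }
    }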

There was also a built-in Bresenham line drawing instruction.

Then there was the TMS34082 floating point coprocessor. I essentially wrote a primitive OpenGL'ish pipeline API. That was fun.

Wow... I really wish a general-purpose processor like this had made it into the PC. The best stuff doesn't always win in the marketplace.


Last time I used fixed point arithmetic was for a J2ME game targeted at the Sharp GX20, for a game competition sponsored by Vodafone.


The 34010 was indeed a sweet chip. I once wrote a Forth compiler for it in assembler as an exercise. Coding graphics in Forth was a joy.

And yes, Intel's instruction sets are so butt-ugly they make everybody else's look beautiful by comparison. I really wish almost any other chip had won the war for that reason. I'm encouraged by the rising profile of ARM.


If you played any of the Williams Electronics or Midway arcade titles from the late 1980s through the mid-90s (NARC, Smash TV, Cruis'n USA/World, Mortal Kombat 1/2/3, Terminator 2, NBA Jam, etc) they were all done on the TMS34010 or 32031...all in assembly language.


Emulation of that processor was a bit of a challenge. Fortunately the fully general bit addressability was only used in gzip decompression, and the drawing instructions (circles, rectangles, that sort of thing) only came up in test mode. So not all of it had to be implemented, and only the normal 8/16/32-bit addressing modes had to be fast.


Not everyone hates x86.

I had lots of fun coding for it, initially with as86 and then TASM.

The only thing I hated with the x86 was trying to use AT&T syntax a few decades later.

I also coded for Z80, 68000 and MIPS.


I have learned three different syntaxes for the same x86 instruction set: [MBTN]ASM (DOS/WIN16/32), GNU assembly, and the Amsterdam Compiler Kit (Minix). All of them used different mnemonics and different notation for addressing, and they couldn't agree between MOV SRC,DST and MOV DST,SRC.

Then you had real mode vs. protected mode, which used the same segment registers in completely different ways -- a clever way to be backwards compatible -- long story. In real mode you had a variety of memory models: tiny, small, large, ... -- near and far pointers? -- ugh. 20-bit physical addresses from (segment << 4) + offset. Uggggllllyyyyy....
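
For the curious, here's that real-mode address calculation as a tiny C sketch (my illustration). The addition is the point: segments overlap every 16 bytes, so many different segment:offset pairs alias the same physical byte.

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: 20-bit physical address = segment * 16 + offset. */
    static uint32_t phys(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
    }

    int main(void)
    {
        /* Two different segment:offset pairs, same physical byte. */
        printf("%05X\n", (unsigned)phys(0xB800, 0x0010));   /* prints B8010 */
        printf("%05X\n", (unsigned)phys(0xB801, 0x0000));   /* prints B8010 */
        return 0;
    }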

Forget about the x87 floating point processor -- arguments go on an 8-level-deep stack -- really? I have to push and pop to do floating point?

Now you have AMD64. But all that old stuff is still there, taking up transistors!


Not really... the interface is still there, behaving like a backwards-compatible processor, but deep down there is no reserved silicon for it -- just pipelines executing microcode.


Naive architecture question: where is the microcode stored? Is it in some on-chip ROM?


You got me, hat tip. Look Up Tables :D


The TMS34010 wasn't RISC by any normal definition.

It was nice though, I wrote a backend for GNU binutils for it and wrote an X11 driver for the chip.


I guess I mean RISC in the sense that it had a small set of orthogonal instructions, a large set of general-purpose registers, and non-move instructions required operands to be in registers. A very simple and clean model from the programmer's perspective.


> I blame the general aversion to assembly programming on the choice of the 8086 for PCs.

I loved 8086 assembly after coming from Z80. And one of the reasons it was so fast was because of minimal registers.

And 8086 was released 10 years earlier.


> I blame the general aversion to assembly programming on the choice of the 8086 for PCs.

Well, nowadays you can use LLVM instead and achieve portability, at perhaps a small cost in efficiency.


The problem with writing LLVM is that you must use Static Single Assignment (SSA) for the pseudo-registers. If I had the time I'd like to write a non-SSA LLVM assembler that transpiled into SSA for you.


The same chip family included the TMS9918 Video Display Processor: https://en.wikipedia.org/wiki/Texas_Instruments_TMS9918

The TMS9918 was the first chip to refer to overlaid graphical objects as "sprites." It was used in a multitude of early computers and game consoles, like the MSX1, ColecoVision, Sega's SG-1000, and of course TI's own 99/4 (where it was paired with a TMS9900 CPU). It also served as the basis for the video controllers in the Sega Master System and Genesis/Mega Drive.

The TMS9918 is also interesting because it was one of the few off-the-shelf video generator chips ever produced. (There were a couple others, like the MC6847 used by the CoCo, but almost every other home computer/console of the era used custom silicon for video generation.)


And for $395 you could get a SuperSprite board for your Apple ][, which was a card containing a TMS9918 and an AY-3-8912 programmable sound generator, basically turning your Apple ][ into a ColecoVision. Except you'd have to write your own games, because no existing games supported it. I still wanted one!


The architectural Achilles' heel of the 9900 is its obtuse, excessive use of memory bandwidth. It specifies a workspace pointer, which points to an area of memory to be used as registers. That isn't actually so bad if you accelerate those architectural registers into on-chip physical registers. But they didn't. The 9900 squandered great amounts of memory bandwidth.


Wait, am I understanding you correctly? It used _main memory_ as registers? I feel like I must be misunderstanding this, because that seems absurd even for 1978.


> It used _main memory_ as registers?

Yes, that's exactly how it worked. The 9900 only had one internal register, which pointed at the current "register bank" in main memory. I worked at TI in those days and wrote code for the 9900. It wasn't a crazy idea when the chip was designed; after all it made context switches completely free. But after the chip went into production, the speed differences between CPUs and DRAM started becoming obvious.
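
A rough C model of the scheme, just to make it concrete (my own sketch, not TI's implementation): the "registers" are wherever the workspace pointer says they are, so a context switch is a single pointer swap rather than sixteen saves and restores -- but every register access becomes a memory access.

    #include <stdint.h>

    /* Simplified model of the TMS9900 register scheme: R0..R15 live in RAM,
       located by a single on-chip workspace pointer (WP). */
    static uint16_t ram[32768];   /* 64KB of memory, modelled as 32K words */
    static uint16_t wp;           /* workspace pointer: word index into ram */

    static uint16_t read_reg(unsigned r)              { return ram[wp + r]; }
    static void     write_reg(unsigned r, uint16_t v) { ram[wp + r] = v;    }

    /* "Free" context switch: no register file to save or restore -- each
       task's sixteen registers are just a different 16-word block of RAM. */
    static void switch_task(uint16_t new_wp)
    {
        wp = new_wp;
    }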


> because that seems absurd even for 1978.

At that time memory could be nearly the same speed as, or even faster than, the CPU; in fact the CISCs that were popular left the memory bus mostly idle while they executed instructions internally, which is what let the relatively memory-bandwidth-hungry RISCs become viable.

In fact I'd almost bet that, had memory always been slower than the CPU, RISC would've never been invented.


Case in point:

https://xania.org/201405/jsbeeb-getting-the-timings-right-cp...:

"So far so good - it seems unusual to our modern “memory is slow” mindset that the processor touches RAM every cycle, but this is from an age where processors and RAM were clocked at the same speed."

This page also shows that the 6502 happily did extra memory reads.

In fact, the memory had bandwidth to spare. http://www.6502.org/users/andre/osa/oa1.html#hw-io:

"The memory access is twice as fast as the CPU access, so that during Phi2 low the video readout is done and at Phi2 high the usual CPU access takes place"

I remember reading about a setup with two 6502s both running from the same single-ported RAM at full speed, but cannot find it.


> I remember reading about a setup with two 6502s both running from the same single-ported RAM at full speed, but cannot find it.

Commodore 8050


Sure, it's a decent idea. You can even make it fast if you cheat.

I believe that the PDP-10 (well, some versions) had the first few memory locations equivalent to registers.

The AT&T Hobbit (aka CRISP) chip had a stack pointer and aggressively cached memory around the stack, essentially. Once cached, stack-relative memory operations were as fast as registers. (The Apple Newton was going to use the Hobbit, but switched to ARM when it became clear that AT&T wasn't truly interested in committing to consumer-grade pricing of the CPUs.)


> I believe that the PDP-10 (well, some versions) had the first few memory locations equivalent to registers.

Essentially. Actually the registers were addressable as the first 16 addresses in memory on all models. For the PDP-6 and the first PDP-10 (the KA model) the registers were fast semiconductor devices (DTL, as I believe it predated TTL), while the rest of memory was literally core (convenient for when the power went out, as happened occasionally in Cambridge -- whatever process was running died, since the registers were lost, but everything else was in core, so the machine could just be restarted).

Since they were addressable you could run code out of them, like bootstrap routines (or some deranged TECO code I once wrote). On the other hand, any word of memory could be used as a stack pointer (two addresses fit in a word, so one half was the base and the other half the depth).

It was quite a RISC-like, highly symmetrical architecture for its time and a pleasure to program. I still miss it.


Huh! The first BeBoxes were built around the Hobbit as well. The switch to PowerPC happened when it was clear that its performance and addressing were always going to be weird. Not sure it even got to the point of discussing volume.


There was a PC modem board that had the Hobbit in it. I can't seem to find any references to it online, though.


Kind of a performance limiter if you actually do it. There are architectures where the programmer's reference manual is written as if the registers are in main memory, but the CPU brings the active registers into physical registers and fakes it. In olden days (of water chillers) I worked on CPUs like that. It essentially requires a look-aside buffer so that memory addresses that map to registers in the current context get redirected to the physical registers, and you have to watch out for ordering dependencies.

So not totally crazy, but a severe performance limiter unless you can afford the complexity of the standard trickery.

TI did make a personal computer that competed with the likes of the VIC-20. It was sloooooow, but had nice (for the time) color graphics.

(edit typos)


When I was in college, a 6800-derived embedded systems prototype board had 'really fast' in-package memory used for such a setup.

In the context of something vaguely like an SoC where you can make "0 page" memory registers fast with a small bank of high speed SRAM it can make sense, particularly for decoupling manufacturing defects or silicon production processes.

Of course for modern, potentially out of order and speculative branch predicting, pipelined instruction systems this is a horrid idea.


Well, actually, once you buy into everything you need for O-O-O execution with synchronous exceptions, a lot of stuff that seems difficult at first glance becomes cheap because you can build on the existing O-O-O infrastructure. Anytime you can belly-flop onto the reorder buffer scoreboard the hard stuff just falls out.


Damn, and I thought x86 was a bad architectural decision....


The PDP-10's register set occupied the first 16 memory address locations, although they were not implemented as memory.

This made instructions simpler, as register instructions did not need a distinct addressing mode.


Yes, a lot of machines of that era did. The Univac 1100 series did, and I believe the IBM 709 and 7090 machines did as well. The low-performance machines used actual memory; the higher-performance machines had backing registers in the CPU.


The UNIVAC 1100 machines didn't put registers in memory; they just allowed programs to reference them via memory addresses. This removed the need for register-to-register instructions.


The 1100/10 used main memory (plated wire) for the registers. The 1101, 1102, and 1108 may have too, but I can't say for certain. The 1100/80 definitely used registers in the CPU and redirected matching memory addresses to the registers.


That is what the two postings above said, too. Or at least that's how I understood them.


I owned a TI-99/4A as a kid, and at the time I was interested in the TMS9900 architecture.

I always thought the workspace-pointer-to-register-set could make for some easy multitasking context switches. You just change the workspace pointer and immediately you're working in another context.

In practice it was slow though compared to processors with real registers.


I had one too. The really shocking bottleneck, though, was this: just 256 bytes of RAM were CPU-addressable; the 16K bytes it was advertised to come with were video RAM. If you bought the "Mini Memory" 4K expansion and coded in assembler, the speed seemed competitive with other home computers of the time. Apparently it was coding in BASIC through the video memory bottleneck (and IIRC an extra level of interpretation? I think I read somewhere that their BASIC interpreter was written in an interpreted VM code) that made your programs so slow on the TI-99.

(Added: https://en.wikipedia.org/wiki/Texas_Instruments_TI-99/4A#VDP...)


Correct, but you had the option to use fast (external) SRAM for the area which held the registers. Even so, the 64K limit was the bigger problem.

Actually the 8088 had a huge advantage: it was easy to port CP/M apps to the IBM PC. Even if the 68K had been ready to go, the 8088 was probably the better choice.


One of the things I remember reading in the late '80s was that for IBM the 68000 was a no-go because it would have doubled the number of DRAMs needed for a bare-bones system. And Motorola didn't have the 68008 ready yet. Important, because back then memory was a huge part of the cost of the machine.


By "bare bones" system do you mean 64kb? The original Mac had a 68k and only 128kb of RAM. I guess we are talking about the late 70s here though. The Commodore 64 didn't come out until 82 and it got along just fine with 64kb.

IIRC the 68k was an expensive chip period. It was the chip you used if you had money to burn.


The 68000 has a 16-bit bus but the 8088 is only 8 bit. I don't know about today, but RAM chips of the time supplied 1 bit per address, with the capacity being a square (since column and row are addressed using the same pins). So if you need twice as many bits per cycle, your options are basically to have twice as many chips (of a quarter the capacity each... probably cost-effective, but now your system has half the RAM), or twice as many chips (of the same capacity each... and now it's twice the cost).

Fitting twice as many chips on the board is probably a pain too. (And suppose you go for the 2 x quarter capacity option - now you need 4x as many if you want the same amount of RAM!)


(My memory is fuzzy)

I think around 1979-1980 an Apple II with 16K was like $800. One with 48K was $1,900. And they weren't passing any markup on the memory. Much different from today, where the cost of DRAM is a much smaller fraction of the cost.


I remember finding a scan of an invoice from the late 1960s for a Univac mainframe somewhere on the Internet. The total invoice was ~1.5 million dollars: the CPU cost ~$500K, the RAM (~768 KB) ~$800K, and the rest ~$200K.


Obvious reversal: the 8086 has a 16 bit bus, and the 68008 only 8.

There is a slower kid in every family. :)

Note that both the 8086 (1978) and 68000 (1979) were introduced ahead of, respectively, the 8088 (1979) and 68008 (1982). Basically these 8-bitters were probably a cost reduction following a familiar pattern in the hardware industry: a product catches on, then customers want to put it into more and more things that are cheaper and cheaper, with simpler boards, where big MIPS aren't needed.


As the article points out, at least part of the reason to make the 8088 was existing peripheral devices that were not I/O-compatible with the 8086.

(Those reasons are not mutually exclusive, of course.)


It was pretty easy to use 8080 peripheral chips with the 8086, and a very few clones did just that. IBM itself had to deal with the problem on the PC AT, which had the same I/O chips but a 286 processor. It needed to replicate externally the circuit that the 8088 had internally, due to the need to be compatible with the old 8-bit cards as well as the new ISA ones.

The 68000 would have been more of a problem since it moved from the matched memory and clock cycle scheme of the 6800 to a four-clock-cycle scheme with a complicated handshake. A special memory mode and two extra pins made it talk just fine to the 8-bit I/O chips. There was no need to wait for the 68008 for that.

One huge mistake made in the 8088 and 68008 (and I'll assume the TMS9980 as well, though I haven't checked) was that they didn't have a simple way to take advantage of page-mode access in DRAMs like the original ARM did. If they had, the performance gap compared to the 16-bit bus models would have been smaller.


And the 1979-1982 period is exactly when the original PC was released, hence the problem!


The base IBM PC (in 1981) came with 16KB of memory.

The Mac came out three years later.


It's hard to imagine getting any actual work done on a machine with 16KB of main memory and no memory manager. A single page of text is 2k, and you can't go too fancy with mappers and memory paging schemes because the code would be too complicated to fit in memory...

One can see reasons for C's design tradeoffs if you're worried about machines like that.

But even after you abuse every trick in the book it's hard to see how that machine isn't hobbled by its lack of memory.


> only 128KB of RAM

I remember reading the first IBM PC came with 16k. A former colleague of mine once reminisced about programming on a PC in the late 1980s that only had 256k of RAM (although I think that must have been an old or low-end machine).


> 128KB of RAM.

Probably 16 64Kx1 DRAMs.

> IIRC the 68k was an expensive chip period.

Fuzzy memory, but the 68000 came in a 64-pin ceramic package. I remember comments that the IC testers of the day didn't have enough I/O to test them. That upped the cost as well.


The 6502 does something similar with its use of the "zero page"; remember, back then the disparity between CPU and RAM speeds was a lot less than it is today.


Yes, but INX is 2 cycles and the zero-page versions INC $55 and INC $55,X are 5 and 6 cycles, respectively. The registers are 2.5x to 3x the speed of external memory. So having no internal registers at all would just be a disaster.


The 6502 'zero page' was pretty much the same concept.

https://en.wikipedia.org/wiki/Zero_page


Only in a vague sense. The 256-byte window can't be relocated. Most instructions (boolean and ALU) only leave results in the accumulator, not RAM (inc/dec are an exception, but very slow). Internal registers are used to index the zero page because it can't index itself. Direct, arbitrary access to the 16-bit address space is also possible without updating a memory pointer via an additional write instruction. (The zero page is basically a parallel set of instructions with eight zeros in the upper 8 bits of the address.)


> The 256-byte window can't be relocated.

That's true, but that doesn't change the principle. Other processors (like the 6809 for instance) used the same model and could relocate the 'zero' page.

> Most instructions (boolean and ALU) only leave results in the accumulator, not RAM.

Yes, but that's the way this is supposed to work. A, X and Y are scratch with the real results held in 'quick access' zero page variables.

> (The zero page is basically a parallel set of instructions with eight zeros in the upper 8 bits of the address.)

Yes, and this was explicitly designed in such a way to offset the rather limited register file of the CPU.

In the 6809 it was called the 'direct' page, and in that form it was a lot more usable since you could do a complete context switch with a single load (which the 6809 operating system OS-9 used to good effect).


I don't know about the TMS99xx series, but on the 6502 a memory read could be as short as a single cycle. The 6502 had a concept of a "zero page" which treated the first 256 bytes of memory specially, with single-cycle access. So they could be used as a kind of register.

But of course there was only one ALU, and to use it you had to use the Accumulator register.


> On the 6502 a memory read could be as short as a single cycle…

No, it couldn't. Even a NOP was 2 cycles. Memory access is at least 3 cycles for a zero-page read (read opcode, read immediate byte, read data), or more for the more complicated addressing modes.


The 6502 definitely reads or writes every cycle! - consult the data sheet (or VICE's 64doc) for more info. This is also not hard to verify on common 6502-based hardware.

Suppose it executes a zero page read instruction. It reads the instruction on the first cycle, the operand address on the second cycle, and the operand itself on the third. 1 byte per cycle.

(For a NOP, it reads the instruction the first cycle, fetches the next byte on the second cycle, then ignores the byte it just fetched. I think this is because the logic is always 1 cycle behind the next memory access, so by the time it realises the instruction is 1 byte it's already committed to reading the next byte anyway and the best it can do is just not increment the program counter.)


Oh, it certainly accesses memory every cycle. But the effective "efficiency" of the memory access instructions is much lower -- if you were writing something like a memcpy(), for instance, they wouldn't be contributing to its transfer speed.


The Renesas 740 (M740) 6502 variant had a T processor status bit which would cause certain instructions to operate on $00,X instead of the accumulator.


If I'm not totally mistaken, similar concepts are still applied today. IIRC POWER 8 offloads some of its registers to L1 when running at higher SMT modes like SMT 8.


It might have been reasonable for an older minicomputer design, which would probably run memory at the same speed as the CPU, and would only read or write one register per cycle. For anything larger, though, it'd certainly be catastrophic.


My thought was that these early 16-bit machines were targeted at the baby-minicomputer market. But all the growth was in the personal computer market. In that market speed wasn't that big of a deal. However, users were rapidly bumping up against the limitations of a 64K address space, and that's why they needed 16-bit machines.


Actually, a pure 16 bit processor (like the TMS9900, MSP430, Z8000, etc) can only address 64KB (with byte addresses) or 128KB (with word addresses). Just like a pure 8 bit processor (like the Kenbak-1) can only address 256 bytes.

The solution is either to have a hybrid, such as 8-bit data and 16-bit addresses, or to use some kind of memory management unit. So the 8088/8086 had a segmented sort of MMU built in, while many 8-bit computers added external MMUs to break the 64KB barrier (MSX1 machines could have up to 128KB of RAM, while the MSX2, still Z80A based, could have up to 4MB per slot).
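
A minimal sketch of the external-MMU / bank-switching idea (illustrative, not any particular machine's mapper): split the 16-bit logical address into a page number and an offset, and let a small table of bank registers supply the extra physical address bits.

    #include <stdint.h>

    /* Toy banked MMU: a 64KB logical space split into four 16KB pages, each
       mapped by an 8-bit bank register into a 4MB physical space (256 x 16KB). */
    static uint8_t bank[4] = {0, 1, 2, 3};     /* logical page -> physical bank */

    static uint32_t translate(uint16_t logical)
    {
        unsigned page   = logical >> 14;        /* which 16KB window (0..3)  */
        unsigned offset = logical & 0x3FFF;     /* offset within that window */
        return ((uint32_t)bank[page] << 14) | offset;
    }

    /* The program still only "sees" 64KB at a time; writing a bank register
       swaps a different 16KB of physical RAM into one of the four windows. */
    static void map_page(unsigned page, uint8_t physical_bank)
    {
        bank[page & 3] = physical_bank;
    }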


I remember the Hitachi HD64180 and Zilog Z180 had a page-based MMU: 512K of memory.

And a lot of embedded 8051 designs used one of the 8 bit ports to extend the address space from 16 to 24 bits. I think both common C compilers for the 8051 supported that memory model.

Also, if I remember the 68000 correctly, indirect addressing was a 16-bit register + 32-bit constant. Definitely not a 'pure' memory model. Though the 808x was far uglier.


I think you're remembering the 68000 backwards. All of the registers were 32-bit, and some addressing modes supported a 16-bit immediate displacement.


I am remembering it backwards. Thinking back, I once found a compiler bug that had to do with the 16-bit offset being computed incorrectly for a large data structure (it overflowed).


Yes, it had an instruction set similar in feel to the PDP-11, using 16 general-purpose "registers" and about the same set of addressing modes, but you could change the base address in memory of the "register" set. (There was a BLWP instruction: branch and load workspace pointer. https://en.wikipedia.org/wiki/Texas_Instruments_TMS9900#Inst...)


In the late '80s and 1990s people tried to get the best of both worlds by implementing register windows: basically, more registers than were visible to the program at one time. Calling a function moved to the next window in turn.

The AMD 2900 went one better by having a huge register file; calling a function moved a pointer into a large stack of fast on-chip memory.


Are you sure you are not thinking of the AMD 29000 ? AMD 2900 series parts were used to build microcoded CPUs.


Yes, that was a typo. I meant the 29K


At that time memory was as fast as or faster than the CPU, so it mattered little. In particular the TI-99/4 and TI-99/4A had fast static RAM on board -- only 256 bytes of it, though! The rest had to be accessed through the video circuitry and was slow -- unless you bought the bulky, slow Peripheral Expansion Box and the 32K upgrade, which gave you more CPU RAM.

For its time the 9900 could have been revolutionary. But the architecture of the TI-99/4A was odd in its own right.


Great article. The 68000 went on to become the first processor in Sun workstations and the original Macintosh so I wouldn't call it a total loser.


Sun, Apple Lisa, Apple Macintosh (& descendants), Atari ST series, Commodore Amiga Series...

... the Motorola 68000 was a very successful chip!!


Also the Amiga and the NeXT. All of the above were influential computers, and obviously the Mac achieved some success, so I think calling it an "also-ran" is a bit of a stretch.


Also the Atari ST series, a whole whack of arcade games, the Sharp X68000, a bunch of Unix workstations, dedicated network hardware, etc. etc. And it is still manufactured in a modified form as the ColdFire.

But it still never approached the sales volume of x86. And it was abandoned by Motorola by the early '90s (when they started down the RISC road with the 88000 and then PowerPC).


The MSP430 instruction set is heavily influenced by the 9900, so although TI missed the PC revolution they sold a heck of a lot of silicon derived from this project.


What strikes me about this article is the disparity between how much TI-99/4A hackers love that machine and the TMS9900 vs. how much the TI leadership of the time seemed to hate it.

There's virtually no inside information written down about the history of this chip, so this was pretty fascinating.


The love/hate thing might have been related to the eventual fire sale pricing.

Towards the end, you could pick them up for $200, which was unheard of for that class of PC. Easy to see why that would draw an adoring audience while frustrating TI.


At the very end they could be had for under $100.


Mine was $50 in 1983 and I've met several people who remember getting it for that price.


From the article:

If IBM, and not Microsoft, had controlled MS-DOS, Windows, and so on, the computing world would now be a different environment.

-

IMHO probably a much worse world. Microsoft might have been the "evil empire", but IBM was the original.


True, if IBM's development tools were any indication. Circa 1992, IBM's OS/2 development environment was called C-Set, and, in spite of being a 32-bit compiler, was inferior to Microsoft's Programmer's Workbench (precursor to Visual Studio). For instance, after a compilation, if you double-clicked on an error message, it would open up the source code in a separate editor window, in a proportional typeface. You could not trigger a compilation from this window. You had to make your change, save & close, then go back to the compiler window. Backwards.

There was no Resource Workshop for designing dialog boxes visually. You had to write all the statements in a text editor, and wouldn't see the appearance until compiling and running the program. This slowed down development.

While debugging, I recall trying to step into the GUI part of the application code. This locked up the computer so badly we had to re-install OS/2, a tedious process requiring 22 diskettes.

The IBM documentation was so inscrutable that even for ver. 2 of OS/2, we had to refer to Microsoft's ver. 1 manuals. IBM's support gave an interesting insight into OS/2's downfall. My client had an expensive OS/2 support contract with IBM. The client's contacts were two guys in the PC support section. When I required technical support to answer some obscure and specialized questions about OS/2, it didn't make sense to get the client guys involved, even though they were supposed to be the only official go-between with IBM. So I called IBM with the client's authorization and impersonated one of the support techs. After a few calls I slipped and used my own name. The IBM guy was livid and tore a strip off me, saying that as an independent contractor I had no business blah blah. Here I was just trying to get a customer up and running.

Authors of their own misfortune.


That OS/2 2.0 debacle is my favorite topic, and this was just the beginning of the entire fiasco. That period was not long after the MS OS/2 2.0 SDK in 1990-1991 (which used MS compilers) and before some of the really unethical tactics MS used to attack OS/2.


SOM was much more advanced than COM; it even had support for metaclasses.

IBM also tried their luck at a Smalltalk-like IDE for C++, but it was too heavy for the typical hardware configurations of those days.

That was the fourth version of VisualAge for C++ for OS/2.


It's also wishful thinking. If IBM had controlled that, there wouldn't have been a big enough ecosystem around the PC platform for it to become the winner. IBM also tried to grab this back with PS/2 and OS/2 which also failed.

It's interesting to compare this to another article linked here about the downfall of the TI home computers. Apparently TI understood the value of software to the point where they threatened to sue unlicensed software vendors, which pretty much guaranteed that those vendors chose to write software for other platforms. Perhaps they would have had a better chance if they had opened up their platform?


People complain about Apple, but they are the only company left from those days, when each manufacturer controlled the whole stack.

Had IBM done that, we would be in a similar position regarding PCs.

On the other hand, maybe thanks to that single event, Atari ST or Amiga variants would still exist.

However, the industry seems to be turning back to those days.

Even FOSS won't help here, because each OEM just packages their own OS and SDK flavour and locks down the hardware.


Commodore/Amiga/Atari couldn't capture a big enough market.

Apple (Mac) was never open enough.

None of the UNIX companies would have ever made anything cheap enough to hit the mass-market until after Windows 95 came out.

Without Microsoft we could have ended up with some weird IBM world running OS/2. They might have even switched to Motorola eventually.

But Microsoft and Compaq made things open and cheap enough to get us to where we are today.


But that was kind of my point, without Microsoft, the PC would just be yet another platform like the Commodore/Amiga/Atari/Mac.

None of them would have gotten a big enough piece of the pie.

The PC only happened due to the way the OEM market was created, which set off a race to the bottom in computer components.

My other point being that given the current state of computer market with iDevices, Android, Chromebooks, IoT, hybrid tablets (aka netbooks), TVs, Watches,..., those OEMs are now trying to turn the remaining of the market into that vision.

So besides their proprietary OSes, we get customised versions of their own forks of open source OSes (e.g. Huawei Linux, LG BSD, put your flavour here) on locked-down hardware, like those old computer systems with their OSes written in ROMs.

Microsoft is still here, but the day they actually do lose the PC market, don't expect the "Year of Linux/BSD" to happen.


What made PCs successful was the hordes of cheap IBM PC compatible machines. Without it, the world of home computers would be a lot different.


Dunno. Seems to me that the 68K lacking anything like "real mode" made upgrading in place a painful proposition.

Having the ability to temporarily suspend protected mode while running older software allowed a smoother transition on the x86 path.


Commodore (Amiga) and Atari (ST) would have destroyed themselves anyway regardless of the competition. Their management teams were just completely incompetent.


Maybe, but since I am playing "what if", it could also have happened that in spite of their incompetence they would have survived, because the IBM PC market would not have turned out as widespread as it did.

In this alternative world, only IBM would be producing PCs, just like Atari and Commodore produced their own products; there wouldn't be OEMs driving costs down and creating commodity computers.


It's always interesting to hear stories of when a hodgepodge of companies like TI, Commodore, and Radio Shack were vying to become dominant stewards of computer technology, from the OS down to the processors. It doesn't seem like that will ever happen again, but who knows?


That tends to happen with most new disruptive technologies. So we probably won't see the same thing happen again with microcomputers, but maybe with AR goggles or whatever comes next.


Seems to be happening with Lidar right now. Granted, a much narrower space.


So Intel won with the ugliest instruction set and Microsoft won with the ugliest OS. "Ugly" is apparently an overrated design disinclination.


Yes, for better or worse ugly is definitely a 2nd-order predictor of success.

I guess "ugly" is a fair assessment of the 8086 at the time. Certainly the 68000 was a much cleaner and orthogonal architecture. On the other hand, I'd rate the 8086 at least as good as if not better than other contemporary microprocessors such as the Z-80 and 6502. The 6809 was sweet but a 16 bit address space rooted it in the previous direction and the 68000 make it clear that the 6809 wasn't in Motorola's future plans. Sure, had IBM chosen the 6809 there surely would have been a compatible follow-on but I can't imagine even the stanchest IBMer to have that kind of hubris.

But calling MS-DOS ugly at the time would have been unfair. It was as capable as any other microcomputer OS of the time in the home computer space. It was widely proclaimed to be a rip-off of CP/M, so we might take that as a compliment, and if you look at TRS-DOS, Apple DOS, or whatever PETs used, it was just fine. It'd be unrealistic to suggest any mini or mainframe OS was an option, and Unix just wasn't there yet. If IBM had given Microsoft more lead time they might have gone with Xenix, which they did have out in 1981 for the Z8001. I'm not so sure IBM would have been interested, though, as they wouldn't have had an exclusive license to the OS.

Not to mention that the overhead of the operating system was an important consideration. The machines didn't have much capacity to waste and whatever you picked it still had to perform well on a system with only floppy drives. Maybe that in itself doesn't rule out Unix but it sure cramps the design space.

In both cases, CPU and OS, the ugliness really took off with backward compatibility to maintain. The 80286 was already being designed so it drove that deeper into the weeds and there was no way of bypassing MS-DOS compatibility once it anchored the marketplace. The only way forward was to improve the OS while keeping MS-DOS programs running and the whole OS/2 debacle only helped to delay that upward path.

I mean, fair enough to say "ugly won" but some consideration should be given to the lay of the land when these long-term trends were set in motion.


I have wondered about that. I have programmed. I am not skilled enough to warrant calling me a programmer. I did so out of necessity, I kinda hated it most of the time.

But, I hear this about Intel's instruction set - at a number of tech sites.

Intel got their start making memory (SRAM). They had Moore and Noyce, as I recall.

Anyhow, I am not qualified to opine on instruction sets. I am curious as to where it went 'wrong.' I accept your declaration as being likely true - I've seen it echoed elsewhere. But, if you know, do you happen to know where (perhaps even why) it went wrong?

I have tried the mighty Google, by the way. It was no help. I may have not had the correct search terms. I am a mathematician, I only programmed (retired now) out of necessity. In fact, I hated computers for the longest time. I am just curious about the history of where they went so wrong that so many people complain about it.


This is a basic summary of why it sucks. https://news.ycombinator.com/item?id=276471


That tells me why it sucks from the perspective of a compiler writer in 2017. But it doesn't tell me whether it sucked in 1979, in an era when asm was much more frequently hand-written and there were many more physical constraints and trade-offs. x86 seems often to be judged by later innovations like RISC. But that wasn't a thing in 1979, and it sucks for hand-writing anyway -- perhaps you want more complex and non-orthogonal instructions in 1979.


Someone who is familiar with the 70s and 80s will have to chime in, but I'm not sure how popular it was with assembly writers then (I don't think it was).


As someone who wrote a reasonable amount of assembler, my first exposure to RISC (SPARC) nearly blew my mind; so easy to learn and much less struggle to write well. I never understood the attachment to arcane ISAs.


Thank you. I understood most of that. I'm still curious as to where it all went wrong. If such info exists, of course.


It didn't; it's ugly because of backwards compatibility. It was cheap (transistor- and engineering-wise) for Intel to add instructions in the '80s-to-'90s timeframe, since it reduced instruction size, so they added a tonne of them. Now it looks ugly, but Intel can't trim it back because some obscure program might break.


Alright, thanks.


They both benefitted from piggybacking on the success of the Z80 and CP/M


My sister's school bought a bunch of TI99/4As and I got to play with hers once when visiting. She said it was a little flakey, as I recall. One interesting feature of the 99/4A was the speech synthesizer module, which used the same LPC technology that TI used in the Speak & Spell toy. That was a fairly unusual peripheral for that generation of home computer.


Well, I remember a lot of fun with my father's Spectrum. He attached a board with an SPO256 chip and we had the Speccy talking to us with a weird accent (we are Spaniards).


Pretty bizarre that they call the 68000 an "also-ran" (twice!). But I guess this is more of a personal story than an attempt at objective computer history.


One of the links on that page points to this fascinating memoir by Gary Kildall: http://www.computerhistory.org/atchm/in-his-own-words-gary-k...

As an erstwhile CP/M user, I have of course known Kildall's name for decades, but never had a sense of who he was as a person. This somewhat autobiographical recounting is very illuminating in that regard.


The first personal computer to use the TMS9900 was the Marinchip 9900. [1] It dealt with the problem of using a 16-bit CPU on an 8-bit bus by using external logic that turned one TMS9900 memory access into two bus accesses.

Everything ran much slower back then.

[1] http://www.s100computers.com/Hardware%20Folder/Marinchip/990...


This is an intriguing article, and the comments posted here are that, squared. This happens quite a lot on HN, so much so that asking "did you read such-and-such on HN last week, and what did you think of the discussion" would be an excellent interview question for students applying to a computing program at university. Showing an interest in such threads would be a good indication of future potential for jobs above the code-monkey level.


One of the comments has a pointer to a fascinating article in InfoWorld, reprinted from Texas Monthly, recounting the rise and fall of the TI 99/4. I can't figure out how to copy and paste the 500-character URL but it's very much worth following the link in the comment.



Bit.ly, goo.gl, etc?


What a great story - and now I know about Lubbock, TX.

Google Street View has confirmed the author's opinion.


If you are not from the US, I cannot express the desolation, wind, dust, and smells of West Texas. Hell, even if you are from the US. Imagine a windier, drier, more right-wing Mongolia, with feed lots and oil wells everywhere.


West Texas holds a certain primitive charm to me as a native Texan. But if I ever moved back to Texas, I'd want to live near Austin. It's the only part of Texas that has urban amenities, educated people, and relatively tolerable weather.


Yeah, the Llano Estacado is where the world keeps all the spare sky when it's not using it.


The only lesson is that IBM, being a huge monopoly at the time, got to make the wrong decisions for everyone else. So, the lesson is: please the big monopolies?



