The Architecture of the Burroughs B-5000 (1982) (smecc.org)
84 points by rutenspitz on Jan 1, 2016 | 23 comments



I worked as a test technician on B6800 systems when Burroughs had a manufacturing plant in Mission Viejo, CA. It was cool. All MSI components on PCBs plugged into a wirewrapped backplane, big squirrel cage fans for cooling, and a power rack with 1" x 2" thick copper bars sticking out to supply the beast. All bolted together, it was the size of an automobile.

I would wear headphones with a radio in them due to the noise of the fans, and if I tuned to an empty spot on the AM band while running diagnostics, I could tell by the pattern of electrical noise whether the machine was running properly or was going to fail.

Debugging was so interesting; using just an oscilloscope we would walk along the backplane and probe the internals of the ALU, accumulator, memory bus, etc.

The most common failures were a wirewrap that was too tight cutting through the insulation and causing a short, stray wire clippings in the backframe, and MSI components failing at speed.

That last one was the most challenging and interesting to find, usually requiring entering some microcoded routine to run the failing sequence of instructions, handle the exception, and loop back again, then go around and probe with the scope to find the failing component.

There were also more dramatic failures now and then, like exploding capacitors - they were the size of coffee cans and exploded like an M80 when they shorted. There were a couple of machines that caught fire, too.

Relevant to this article was the microcoding that we did when we had to write debug routines. The tagged words and stack architecture seemed bizarre to me at first, but once I got the hang of it I could whip up small routines quickly. They were entered by toggling bits via a slick test and maintenance processor panel. This was accomplished via a built-in serial test scan mode, similar to IBM's LSSD but a different implementation.
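For anyone who never touched one of these machines, here is a rough C sketch of the tagged-word idea (my own toy model, not the actual Burroughs word format or microcode): every stack cell carries a tag alongside its value, and an arithmetic op refuses to run unless both operands are tagged as plain data.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy model of a tagged-word stack machine; the real Burroughs
       word format and tag values were different. */
    enum tag { TAG_DATA, TAG_DESCRIPTOR, TAG_CONTROL };

    struct word { enum tag tag; int64_t value; };

    static struct word stack[64];
    static int top;

    static void push(enum tag tag, int64_t value) {
        stack[top].tag = tag;
        stack[top].value = value;
        top++;
    }

    static struct word pop(void) { return stack[--top]; }

    /* ADD faults unless both operands are tagged as data. */
    static void add(void) {
        struct word b = pop(), a = pop();
        if (a.tag != TAG_DATA || b.tag != TAG_DATA) {
            fprintf(stderr, "tag fault: ADD on non-data word\n");
            exit(1);
        }
        push(TAG_DATA, a.value + b.value);
    }

    int main(void) {
        push(TAG_DATA, 2);
        push(TAG_DATA, 3);
        add();
        printf("%lld\n", (long long)pop().value);  /* 5 */

        push(TAG_CONTROL, 0);  /* e.g. a return control word */
        push(TAG_DATA, 1);
        add();                 /* tag fault, program stops */
        return 0;
    }

On the real machines the tags also distinguished descriptors, control words, and so on, which is what made stray pointers and stack corruption so much harder to pull off.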


For those interested in the old Burroughs stuff, the folks at Unisys have announced a hobbyist program for the ClearPath MCP Express systems:

http://www.unisys.com/offerings/high-end-servers/clearpath-f...


There's also an excellent open-source project with an emulator and a lot of software available to run:

https://github.com/pkimpel/retro-b5500/


Thanks, I just pulled it down. It's so amazing how small the compilers and the MCP were. Thanks very much for posting these links.


That's awesome. Thanks for the link.


Anyone who likes the concept should look at the SAFE and CHERI projects, especially SAFE. They're applying lessons from old tagged and capability systems to modern CPU design for security. Similar techniques improve reliability.

http://www.crash-safe.org/papers.html

(See esp "SAFE: A clean-slate architecture for secure systems")

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/

As far as old stuff goes, there are at least three more greats along Burroughs' style: System/38 (later called AS/400 & IBM i), Intel iAPX 432, and the Flex Machine w/ Ten15. The sole survivor of those is System/38, in the form of IBM i. It lived up to the future-proofing, reliability, and relative security its architecture intended. Like with Burroughs (later ClearPath MCP), (slow clap) "Bravo to the designers." :)

iAPX 432 and System/38 here: http://www.cs.washington.edu/homes/levy/capabook/index.html

Flex and Ten15 here: https://en.wikipedia.org/wiki/Flex_machine

For availability, NonStop deserves a good mention for a series of ground-up designs that did great: https://en.wikipedia.org/wiki/Tandem_Computers

Still looking for proper docs, but OpenVMS's system-level decisions and clustering were so good by the '90s that they're still worth copying to some degree: http://h71000.www7.hp.com/openvms/whitepapers/high_avail.htm...

So, there's old and new, with the old stuff still being better than a lot of new stuff in integrity and availability. Makes me smirk at all the young, hot-shot coders in the cloud market acting like they coded the best thing ever. I'll be impressed when they top 1960's-80's era technology at their own game. Although, I'd rather they learn the lessons and exceed them given it's (censored) 2015. ;)


This is the first time I've heard i432 described as "great". What parts of that architecture do you think Intel got right?


- Clean-slate attempt to clean up the mess that was the x86 family

- Microcoded for easier updates at ISA level

- Scheduling, IPC, and memory/storage management in CPU for acceleration and protection

- Dedicated I/O processor like mainframes have

- Multiprocessing support early on

- CPU-level support for and some enforcement of abstract data types (objects)

- Capability-based addressing for its fault-tolerance and security benefits (a rough sketch of the idea follows this list)

- Segments do POLA (principle of least authority) at high speed and fine granularity

- Protected procedures via domains

- Garbage collection support

- Consistency in hardware and software

- OS and apps written mostly in safe, high-level language (Ada)
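To make the capability/POLA items above concrete, here is a rough C sketch of the general idea (a toy model, not the actual iAPX 432 object-descriptor format): a reference is a descriptor carrying base, length, and rights; every access is checked against all three; and a caller can hand a callee a narrower descriptor than it holds itself.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy model of a capability: base + length + rights, checked on
       every access. Not the actual iAPX 432 descriptor layout. */
    enum { CAP_READ = 1, CAP_WRITE = 2 };

    struct capability {
        uint8_t *base;
        size_t   length;
        unsigned rights;
    };

    static uint8_t cap_load(struct capability c, size_t offset) {
        if (offset >= c.length || !(c.rights & CAP_READ)) {
            fprintf(stderr, "protection fault on load\n");
            exit(1);
        }
        return c.base[offset];
    }

    static void cap_store(struct capability c, size_t offset, uint8_t v) {
        if (offset >= c.length || !(c.rights & CAP_WRITE)) {
            fprintf(stderr, "protection fault on store\n");
            exit(1);
        }
        c.base[offset] = v;
    }

    int main(void) {
        static uint8_t object[16];

        /* The caller holds full rights but hands out a read-only,
           8-byte view: only the authority the callee actually needs. */
        struct capability full   = { object, sizeof object, CAP_READ | CAP_WRITE };
        struct capability rdonly = { object, 8, CAP_READ };

        cap_store(full, 0, 42);
        printf("%u\n", (unsigned)cap_load(rdonly, 0));  /* 42 */
        cap_store(rdonly, 0, 7);                        /* protection fault */
        return 0;
    }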

Those are a few. The 75% performance drop due to bad implementation hurt it a lot. However, subsequent papers showed relatively few changes to the design would've eliminated most of its problems. Even calling it overkill, the subsequent i960 incorporated its object-descriptor architecture and fault-tolerance mechanisms into an otherwise RISC CPU. Imagine how much easier correct or secure apps would've been on either vs x86.

Yet market forces ensured x86 won out and became dominant, and a long series of problems followed. Compared to its architecture, the iAPX 432 and, to a lesser degree, the i960 were great.


"The 75% performance drop due to bad implementation hurt it a lot."

I know a number of people who were on the team that tried to "implement" that architecture, and it's highly unfair to them to call it "bad implementation".

For example, there was this insanity:

   The instruction set also used bit-aligned
   variable-length instructions (as opposed to
   the byte or word-aligned semi-fixed formats
   used in the majority of computer designs).
   Instruction decoding was much more complex
   than in other designs.
In the available Intel fab process of the time, just the CPU portion needed to be split into two chips to be "implemented". That's not bad implementation, that's oblivious architects "playing in the sand" without any regard for the practical ramifications of their architectural decisions.

There was a reason that the successor i960 was much simpler, and why Glen Myers was brought in from IBM to bring some adult supervision to that architect's sandbox.

The point is, it doesn't matter just how forward looking or elegant or innovative the 432 was, if it wasn't possible to build it.

https://en.wikipedia.org/wiki/IAPX432#The_project.27s_failur...


"I know a number of people who were on the team that tried to "implement" that architecture, and it's highly unfair to them to call it "bad implementation"."

"The point is, it doesn't matter just how forward looking or elegant or innovative the 432 was, if it wasn't possible to build it."

Let me be clear that I'm talking about the design decisions plus the implementation. Subsequent work showed that changes to the design would've greatly boosted performance while keeping the overall scheme of things. Summary in the first link, with details in the other two.

http://people.cs.clemson.edu/~mark/432.html

https://dl.acm.org/citation.cfm?id=17367&dl=ACM&coll=DL&CFID...

https://www.princeton.edu/~rblee/ELE572Papers/Fall04Readings...

What are your thoughts on Colwell et al's analysis and suggestions for improving i432?

"There was a reason that the successor i960 was much simpler, and why Glen Myers was brought in from IBM to bring some adult supervision to that architect's sandbox."

I agree with you there. Colwell says the i432 was a nice experiment with significant contributions that were forgotten as it failed. I can see that, except that it was intended to be a product. Myers understood that products and experiments were two different things. Hence, the better results with the i960. Very unfortunate that it was done in by politics, like the i860, etc.

Matter of fact, I saw a writer recently mocking the F-35 because it uses obsolete, legacy i960MX's, without realizing how much more advanced, reliable, and secure they could be vs 2015 ARM chips. People don't appreciate the wisdom in the past and fail to learn from it. At least SAFE and CHERI are exploring these things, among others.


"What are your thoughts on Colwell et al's analysis and suggestions for improving i432?"

You're obviously more in tune with the project than I am. I never worked at Intel, but a lot of my friends did (mostly in implementation rather than architecture).

As for Colwell's critique, Intel must have liked it so much that they hired him. :) I attended a number of Colwell's public presentations on various x86 chips, and he seemed like a smart, approachable, down-to-earth guy.

As for improving the 432, my friends were worn out from struggling with it, so they were overjoyed that the 960 was a relatively simple RISC. The only concession they needed to make to its legacy was the 33rd memory bit, the tag bit.

It would have been interesting if i960 had done better. But, even at the time, what I mostly heard about BiiN was "billions invested in nothing". Apparently the Siemens group that was initially involved wasn't their main computer people, it was some other division. So I'm sure there was lots of internal politics going on at Siemens also.

One other thing that confuses me with various wiki entries of all this is that Wiki mentions Fred Pollack more prominently, whereas the name I heard more was Justin Rattner.

"mocking ... i960MX's without realizing how much more advanced, reliable, and secure they could be"

Now that Intel seems to be struggling to produce meaningful improvements to the x86 (whoopie, 15% faster!), perhaps something more architecturally advanced should be considered. Who cares if it's only 25% of the performance of the x86? The tradeoffs in terms of security etc. might make it worthwhile. At least for high-reliability applications.

https://en.wikipedia.org/wiki/Intel_iAPX_432

https://en.wikipedia.org/wiki/Intel_i960

https://en.wikipedia.org/wiki/BiiN


"You're obviously more in tune with the project than I am. I never worked at Intel, but a lot of my friends did (mostly in implementation rather than architecture)."

Nah, I just read the Wikipedia articles, the chapter from the capability systems book, and the other papers I linked. I'm digging history out piece by piece like anyone else.

"As for Colwell's critique, Intel must have liked it so much that they hired him. :) I attended a number of Colwell's public presentations on various x86 chips, and he seemed like a smart, approachable, down-to-earth guy."

Oh hell! Didn't know he got hired. That's cool. Good to hear that about his character, too, because that makes him a worthwhile consultant on clean-slate chips if he's still alive and in the field.

"So I'm sure there was lots of internal politics going on at Siemens also."

Politics and trying to do everything at once, from what I gather. Along with again picking a safe language (Ada) that the market rejected. Gotta at least support whatever is popular. Additionally, there was the backward-compatibility effect: the need to virtualize the prior ISA so apps aren't thrown away. IBM's capability system (System/38 -> AS/400) was doing that while Intel's didn't.

So, a number of reasons these things failed.

"One other thing that confuses me with various wiki entries of all this is that Wiki mentions Fred Pollack more prominently, whereas the name I heard more was Justin Rattner."

That is weird. Someone should try to get to the bottom of that sometime. Meanwhile, a quick Google led to this:

https://en.wikipedia.org/wiki/Justin_Rattner

Tells us little, but notice that the patents' functionality corresponds to subsystems of the i432. The first three at least, since the fourth is ambiguous. He could've been one of the bright HW people on the team coming up with the tech while Pollack led the project.

"perhaps something more architecturally advanced should be considered. Who cares if it's only 25% of the performance of the x86. The tradeoffs in terms of security etc might make it worth while. At least for high reliability applications."

You see, I agree, but they've tried and failed at this before. The only one you didn't link was Itanium: the third attempt to clean-slate CPU's for higher performance, reliability, and security. Itanium had enough RAS features for use in mainframe-style SMP systems, and its security features could have been used for very secure systems. One company did; its CTO being the Itanium designer probably had something to do with that. ;)

https://www-ssl.intel.com/content/dam/www/public/us/en/docum...

Anyway, the VLIW part and the disconnect from legacy meant little uptake despite around $200 million spent. At this point, Intel may have been burned too many times by the market to try too hard. The next step is to go for the legacy + accelerator model, which has paid off for them with GPU's and custom instructions. Dedicated hardware or onboard FPGA's can give a performance boost while supporting legacy apps.

I'd also consider mixing clean-slate and legacy CPU's in the same system, where legacy apps run on one and accelerators run on the other, with smooth, function-call-level integration. They could have the same endianness, ABI, data types, etc. to make it easier. The rest is different for better parallelism, less bloat, enhanced security (e.g. pointer protection), fault-tolerance, and so on.

Many possibilities but Intel will be careful given billions invested in nothing so far.


"Didn't know (Colwell) got hired. That's cool."

Colwell went to Multiflow first, and then went to Intel after Multiflow fell apart. He played a very key role at Intel, as the architect of many x86 chips:

   He was the chief IA-32 architect on the
   Pentium Pro, Pentium II, Pentium III,
   and Pentium 4 microprocessors.[1]
One of the talks that Colwell gave had to do with the P4, and how the marketing people forced the engineers to prioritize clock rate above all else, during the Megahertz Wars. Needless to say, that did not turn out well.

Colwell wrote a book, The Pentium Chronicles, but I can't recommend it because I haven't read it.

[1] https://en.wikipedia.org/wiki/Robert_Colwell


That is quite the track record. He also seems to have made all the chips a modern effort must compete with. I might try to get the book just to see what issues they encountered and worked around. It's a darn guarantee that the next effort will run into similar problems, given they'll have to follow the same trajectory from old fabs to mid-grade ones to cutting edge, with various verification and optimizations along the way.


Hi there, is this the same B5000 that inspired Dr. Kay? On the whole OOP paradigm, if I remember the quote correctly, he said that the B5000 almost removed the notion of data, and he wanted to duplicate this behavior with OOP.

Can anyone more knowledgeable about this step in, please, so we can all learn more about it?


This is the same B5000 that Dr. Kay mentions in several of his talks.

Kay has often said that he saw the same idea at least three different times before he fully appreciated the concepts that he later labeled 'OOP'. Besides the B5000 (designed by Bob Barton), other proto-OOP systems were Sketchpad (Ivan Sutherland), Simula, and even the proto-Internet (then known as the ARPAnet).


You can always start out with the Wikipedia article. Donald Knuth, Edsger Dijkstra, Bob Barton and others worked on the software.

I think the quote you are looking for is: "My best results have come from odd takes on ideas around me -- more like rotations of point of view than incremental progress. For example, many of the strongest ingredients of my object-oriented ideas came from Ivan Sutherland's Sketchpad, Nygaard & Dahl's Simula, Bob Barton's B5000, the ARPAnet goal, Algebra and Biology. One of the deepest insights came from McCarthy's LISP. But the rotational result was a new and different species of programming and systems design that turned out to be critically useful at PARC and beyond."


Holy Deja Vu, Batman! I used these systems when I went to university. Thanks, Hoff, for posting the link to the hobby version of ClearPath; it will be cool to go back over 40 years. Wonder how much Algol I really remember.


I learned Algol on a Burroughs B5500 at University of California at San Diego. I really liked programming in Algol, and I liked Pascal even more (I learned UCSD Pascal).


If the first machines which had segmentation existed in the early '60s, why did it take so long to discover that memory segmentation is an anti-feature?

AFAIK segments were put in the 386 because Intel thought people would like them, but they didn't. Now segmentation only exists on x86 hardware for backward compatibility, and all modern OS's essentially disable segmentation in software (basically putting all of memory into a single segment) and use paging to implement process isolation, no-write / no-execute, shared memory, and the like.

I'm not talking about 8086 / 8088 style segmentation, which was a different beast, a somewhat reasonable way to allow a 16-bit processor to access more than 64k of memory.
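For anyone who hasn't stared at this in a while, here is a back-of-the-envelope C sketch of the two schemes being contrasted (my own simplification, not real MMU code): 8086 real mode forms a 20-bit address from segment*16 + offset, while the 386 protected-mode "flat model" sets every segment's base to 0 and its limit to cover all of memory, so the segmentation stage becomes a no-op and paging does the actual isolation.

    #include <stdint.h>
    #include <stdio.h>

    /* Back-of-the-envelope sketch of the two translation schemes,
       not real MMU code. */

    /* 8086 real mode: 16-bit segment and offset combine into a 20-bit
       linear address, which is how a 16-bit CPU reached past 64K. */
    static uint32_t real_mode_linear(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;
    }

    /* 386 protected mode: a segment has a base and a limit. The "flat
       model" used by modern OS's sets base = 0 and the limit to cover
       all of memory, so this stage is a no-op and paging does the
       actual isolation, no-execute, sharing, etc. */
    struct segment { uint32_t base; uint32_t limit; };

    static uint32_t protected_mode_linear(struct segment s, uint32_t offset) {
        if (offset > s.limit) {
            fprintf(stderr, "general protection fault\n");
            return 0;
        }
        return s.base + offset;
    }

    int main(void) {
        /* Real mode: 0xB800:0x0000 -> linear 0xB8000. */
        printf("%05X\n", (unsigned)real_mode_linear(0xB800, 0x0000));

        /* Flat protected mode: linear address == offset. */
        struct segment flat = { 0, 0xFFFFFFFFu };
        printf("%08X\n", (unsigned)protected_mode_linear(flat, 0x00401000));
        return 0;
    }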


I don't think it's so much an anti-feature as something that's just not used in the dominant ecosystems, so today it is superfluous complexity. Not in the IBM 360; not used as such by MS-DOS for the 8088 or by Windows, which were originally designed for the 8088 and the 286, since their segments were too small; not for UNIX(TM) and its derivatives, including NeXTSTEP -> OS X and other originally 680x0-based operating systems; not for anything ARM-based.


The old times of hardware designed both for safer assembly coding and for support of simple OSes & compilers thanks to low-level safety... as Obi-Wan would say, "an elegant weapon for a more civilized age".


Strange thing to say, since this particular system was designed specifically for high-level languages like COBOL and ALGOL-60. I don't think it even supported assembly language.



