IBM System 38 (1984) [pdf] (washington.edu)
64 points by brudgers on July 2, 2015 | 58 comments



I cut my teeth on an S/38 as an operator.

I was the worst data entry person. I have dysgraphia, cannot touch type, and am a world-class champion of bad spelling. Not a good combination for transcribing acuities from intraocular lens follow-ups.

My supervisor was on vacation and the manager needed a report for the FDA so, eager to help, I set about trying to convince Paradox (a 4GL) to give me what was required.

I got the report to print but the sorting was all wrong. It was in date order and not grouped by model. Finally, by looking through the in-line help system (1992, no googling), I figured it out and got the report to my manager.

His reaction was not what I expected; he looked exceedingly perplexed.

"Something wrong?"

"It's grouped!", he replied.

"Isn't that the way it should be?", I responded.

He then proceeded to explain two things: he had been asking for the report this way for months, and I was going to be terminated that day.

The best part was that, unbeknownst to me, they were shipping the S/38 from Virginia and needed an operator to work from 12 noon till 3 am and—instead of being fired—I was just about to be offered a new job!


I don't get it - why did he fire you for doing what he wanted/needed for months?


He wasn't being fired for incompetence; he was about to be laid off as a data entry clerk, presumably because the machine was moving. But by figuring out how to sort the damn output, he got offered a sysadmin position.


Keep reading... ;-)

  and—instead of being fired—I was just about to be offered a new job!


Ah, he was going to be fired because he was a poor data-entry operator, but then re-hired because he was an awesome sysadmin. Got it. Makes more sense to me now .. and I'm pretty sure a lot of us who cut our teeth on that metal in the 70's and 80's experienced similar things. For my part, I was hired as a junior developer at a new company once, in my late teens, and told to "figure out how to programmatically reboot the workstations from a distance" .. took me 10 seconds with DEBUG.COM and I'd done a job nobody there had been able to figure out for weeks. An advantage of having nothing but DEBUG.COM to play with on my new PC for a while, I suppose .. ;)


How did you do the "from a distance" part of the reboot?

(I once had to rebuild a partition table by hand using nothing but DEBUG.COM and Peter Norton's book..)


Partition rebuilding .. heh, yup, been there too.

I created "REBOOT.COM" and then had our master control application call it on command. The hardest part was working out how to programmatically reboot the PC - for some reason the mainframe guys couldn't work out that a simple "JMP FFFF" was all we needed. Scored points that day. :)


Nice.

I recall writing REBOOT.COM myself by using EDIT.COM and writing two bytes (0xCD 0x18 by typing Alt-205 Alt-24, if I recall right) and saving it. That executes interrupt 0x18 which should start the BASIC interpreter in the ROM. But most PCs didn't have one, so instead it rebooted. I discovered this by experimentation :-)

http://www.delorie.com/djgpp/doc/rbinter/id/50/22.html


Great stuff.

I miss the days when solutions could be that tight. You could have written that .com file with a hex editor back then. Now an ELF header and symbol table are bigger than EDIT.COM


Well, this was in the very early days of DOS, when it was being used as a front-end for other, bigger systems, so it was considered not much better than a dumb terminal, albeit a re-programmable one .. on these "workstations" that needed REBOOT.COM installed, we didn't even have DEBUG.COM - only the master control program (quite literally, a .BAT file), which didn't have a facility to put new apps on all the little DOS machines - admins had to do it manually.

So we all got used to using COPY CON: C:\REBOOT.COM and some sort of Alt-key combo for "JMP FFFF", which defeats me since I haven't thought about it in 30 years or so .. but yeah. It was the last manual-install we did as an admin/dev team, as the reboot was needed so that we could finally add "Remotely administer Workstation Base Image" to the master control program/.BAT file and save ourselves endless late nights. ;)


It's always amazing to me to see all the hype around containerization, build once run anywhere, and a lot of the tech trends happening now, and realize that we've been reinventing the wheel, since products like the AS/400 (a.k.a. System/38, a.k.a. iSeries) have been doing it since before PCs were ever a thing. It's perhaps even more funny when the reimplementation is inferior to the original product (which seems to happen with some regularity). It's a shame mid-range IBM iron is both expensive and sort of vendor-locked, as it's really often a rather nice system to build for; the System/38 really was revolutionary and has had more influence than I think we often give it credit for.


I grew up with computing in the 70's and 80's, and I truly think the 90's were a period of devolution in the computer industry - desktop (Microsoft) computing came along and gave computing to the masses, but the cost was that a lot of technical advances that were being made in the industry as a whole got swept under the covers - for various reasons: either it confused users, or Microsoft just didn't have the technology chops to get it into their OS, or whatever.

But I definitely think we make many, many advances in computing that get washed away by the mass-market mechanics required to get things out there in a way that satisfies the bean counters.

I remember being able to freeze/defrost processes on MIPS RISC/os back in the day .. I even used it as part of my development/test process, since security policies in the computer rooms I was working in during the 80's meant that some (operational) systems were not allowed to have compilers - the only way I could test my new build was to run the process, freeze it to disk, send it over to the test-ops machines, and defrost it there. I did this so often that it just became commonplace - yet the ability to do this just faded during the 90's when I moved to other environments. I still can't do this easily on any of our existing systems, although there's CryoPID and company - but it strikes me as amusing that these sorts of capabilities are being touted as new and ground-breaking. More often than not, the "discoveries" of a unicorn comp-sci grad who just spent 5 years on them turn out to be re-inventions of things that were standard all along but ignored, in the rush to push the past aside and no longer deal with dinosaurs.

It's a funny business we're in. I look forward to the next cycle of devo/evo-lution.


That's called checkpointing or checkpoint/restore these days, and it's sadly mostly limited to HPC and cluster environments. That said, there are some projects (at least for Linux) like DMTCP and CRIU which try to offer it as userspace tools, if somewhat finicky.
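
For example, with CRIU the freeze/defrost workflow looks roughly like the sketch below (Python just wrapping the CLI; it assumes criu is installed and run with sufficient privileges, and the pid and image directory are placeholders):

  # Rough sketch: freeze a process tree to disk with CRIU, thaw it later.
  import subprocess
  def freeze(pid, image_dir):
      # Dump the process tree rooted at pid into image_dir.
      # --shell-job lets CRIU handle a job started from a terminal.
      subprocess.run(["criu", "dump", "-t", str(pid),
                      "-D", image_dir, "--shell-job"], check=True)
  def defrost(image_dir):
      # Restore the previously dumped process tree from image_dir.
      subprocess.run(["criu", "restore", "-D", image_dir,
                      "--shell-job"], check=True)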

DragonFly BSD has had kernel-level checkpointing through sys_checkpoint(2) since 2005, but it has limitations with multithreaded programs.


Thanks for the pointer about Dragonfly, I'm not at all surprised that it has solid checkpointing, since its developers are definitely hard-metal types.

For my part, I treat my Docker files as my own little server-farm, and production is of course managed else-wise, so it's all just an amusing analog of how things worked 'in the good old days' ..

CryoPID on Linux looked interesting for a while, but I guess it's not really relevant as a feature in this age of hardware. In the good old days, it was necessary to checkpoint to get out of the way of the other jobs the computing facility had to perform .. endless tapes of checkpoints, hanging on the wall, waiting to be spooled, re-spooled, etc.


I guess it is one of those flukes of history.

Those early home desktops were not the most potent of computers. Remember that there was a distinction made between PCs and workstations.

Thus I fear many who grew up with a PC in the home ended up with something akin to mental blinders.

And by the time they hit the higher educational tiers, the labs etc. had transitioned to using PCs on networks rather than terminals tied to big iron.


I believe you are correct, and it's an interesting aspect of technology that progress in one realm can mask/obfuscate/negate progress made in another ..


Well, not just technology.

Many a recent discovery in science has been thanks to someone glancing over the mental partitions between fields of study and going "hey, I recognize that. We have a decade-old solution for it".


The problem is that IBM shot themselves in the foot on this (just as Cisco seems to be hell-bent on doing). As near as I can tell it goes like this:
(1) Companies want hardware they can find people with experience on.
(2) The only way to get IBM big iron experience is to have big iron hardware (ongoing scuffles with emulation: https://www.google.com/search?q=ibm+sues+emulator https://en.wikipedia.org/wiki/Hercules_%28emulator%29).
(3) IBM mainframes are expensive compared to commodity hardware.
(4) Universities don't choose to run IBM mainframes.
(5) A tiny minority of people coming out of university actually have experience.
(6) Companies are worried about the lack of inexpensive employees with experience.

They have amazing features in their OS's, for sure, but no amount of marketing budget makes up for "I can throw this on my PC in some fashion so that I can learn about those features."


Bingo. And that's why you see the likes of Apple, Adobe and Microsoft bend over backwards to offer students deep discounts.

Heck, MS has been using their personal computing market share in their business sales pitches. This comes in the form of "total cost of ownership": more specifically, that people will already be accustomed to MS interfaces from their home use, so less training is needed for new employees.


I agree, though a correction: PCs very much did exist by the time of the AS/400, and particularly by the time OS/400 (IBM i) introduced LPARs, though analogues had been around in plenty of IBM's prior systems.


A predecessor of the AS/400, the System/36 PC, was available in a chassis very similar to the IBM AT's. It used an IBM 5150 (aka PC) as the console, with a special interface card.


Yeah, you're right. I rather loosely equated the System/38, which I think predates the IBM PC, with the AS/400, which showed up in the late 80's.


... and they're still called LPARs today on the modern boxes :)


Correct, and since 2007 LPARs have been able to be transferred between hosts in a fashion similar to VMotion.

http://www.redbooks.ibm.com/redbooks/pdfs/sg247460.pdf

Additionally, there are WPARs, which are essentially application/workload management containers :-)

https://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/s...


Even their architecture of using a kernel JIT with a portable format for the executables.


I think it happened because, as you said, the irons were vendor locked and expensive.

Heck, thinking about it I kinda get the feeling that an x86 server rack is a breadboarded big iron.


I currently work on an iSeries. While there is some vendor lock, we run Unix programs through PASE, and we run a full suite of web-enabled applications, some of which reuse code that has been around forever. Even RPG has gone modern, with fully free-form code, procedures, embedded SQL, and, with ILE, the ability to be bound with modules written in any other supported language.

The deployed base is very large and spread across a great many industries because of resilience, ease of programming, the minimal support staff required, and simply because, as with most big systems, it just works. It's not flawless, but DB2 is far easier to manage than Oracle, and the iSeries we have operate with much smaller staffs than the AIX/Oracle setups.

Overall the biggest failing is IBM's lack of direction. They tend to push systems which generate the biggest kickback and such, usually meaning their AIX fare.

No matter how many times efforts were put in place to move off the i or even the z, it just came down to: it works, it's modern, and for businesses, having something you know is many times better than going with the newest thing.


A lot of this goes back to the System/3, which was released in 1969.

System/3 => System/32 => System/36 => System/38 => AS/400 => iSeries.


Well, only sort of. The 38 was a radical departure, and is the true root of this series.


Chronologically, it was actually S/32 => S/34 => S/38 => S/36. Though S/36 was more a successor to S/34, built on more modern hardware but didn't really offer anything new in terms of OS architecture.


Ok, so here is a trick question: Who invented virtual memory?


It wasn't IBM - IBM actually made fun of virtual memory while their engineers caught up with the competition.

I know that the RCA Spectra series had virtual memory in the 70's. The Spectra series was a 360 clone with the same instruction set. RCA developed virtual memory and converted their TSOS (Time Sharing Operating System) to VMOS. Univac bought the Spectra line and converted VMOS to VS/9. I worked for Univac at the time.

Virtual memory may have been used earlier by the Scientific Data Systems (SDS) Sigma series, later bought by Xerox. I was once a peripheral device to a Sigma, hooked up for an ECG.


I believe it was Burroughs who first did virtual memory. This was before the IBM Model 67 and the research at University of Michigan.

So my first major gig was a startup called Telemed located first in Park Ridge then in Chicago, in an office building at the end of runway 27 L. We built this system of collecting ECG data over the phone line in three-channel FM, converted to digital and then processed by Sigma 5! Sounds like the same system that you were hooked up to. How about that.

The Sigma 5 didn't have virtual memory, but the Sigma 7 did. But this was well after both Burroughs and IBM had commercial offerings.

The Sigma series was produced by Scientific Data Systems, later bought by Xerox. That made Max Palevsky the largest Xerox shareholder until Xerox bought University Microfilms.


I think the Atlas was the first machine to support virtual memory (https://en.wikipedia.org/wiki/Atlas_%28computer%29), but I am not entirely sure when Burroughs first released a machine with virtual memory. Atlas was - according to Wikipedia - first operational in 1962.


I do believe you are right. Burroughs was the first US manufacturer to provide it.


Manchester Atlas?


Every IBM AS/400 (and its predecessors, the S/36 and S/38) came with all the manuals, which took up more than one bookshelf (IIRC, back in the day this made IBM the largest publisher in the world). In high school I had this minimum wage office job where my responsibilities were not entirely clear, but I spent a lot of time reading those manuals, especially the "Structured Query Language/400" and RPG IV ones, and eventually negotiated a programming job because I was able to completely automate the tasks I was given.

Thinking back, I think those "systems" (as IBM referred to them) were pretty cool. You could upgrade any part of it, including the central processing units, without taking it down, which is pretty remarkable even today. As I recall, a "reboot" was referred to as an IPL (Initial Program Load), and you only did those when IBM said it was necessary - those things never crashed.


All I remember from our S/36 were the first chapters of each manual: "How to read this manual". And amazingly, those 20-some pages were needed. At least for me they were.

As far as IPLs go, I can't remember a more breathtaking moment than when I had to IPL the system and it came up successfully. And a more horrific event than when the 10" floppy used to IPL went bad. LOL, good times.......


Reminds me of happy times getting deliveries of Sun workstations to our lab in the early 1990s - they would come with vast amounts of documentation. The only catch was that you had to put it all into the ring binders by yourself!

Another nice thing was getting copies of the Adobe PostScript books (I think we got all 3 of the Red, Blue and Green books) - these came with OpenWindows for NeWS.


VAX manuals - rows of orange folders everywhere!!!


Which is why IBM created SGML which led to XML and then HTML...


I was localizing for IBM back in the day. Actually, the breakthrough for reducing publication costs at IBM was the introduction of GML (https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Languag...), the predecessor of SGML.


Actually, IBM didn't "invent" SGML; they participated. GML, which was invented at IBM in the person of Charles Goldfarb (https://en.wikipedia.org/wiki/Charles_Goldfarb), was a major influence on SGML.

And HTML came before XML.


What does IBM use internally for publishing nowadays?


They use SGML and DITA.


This is chapter 8 from Capability-Based Computer Systems, a book by Henry Levy published by Digital Press. Here's the TOC:

http://homes.cs.washington.edu/~levy/capabook/

I remember reading this chapter in the mid 80's when I was starting to develop business software for this minicomputer. The System/38 hardware and OS were not well understood outside of IBM, and Levy's book gave me some sense of what was going on under the hood to make the system secure.


Searching for descriptor-based computer architecture after reading your comment (it reminded me that I had been meaning to) gave me a link to chapter 10 of the book: https://homes.cs.washington.edu/~levy/capabook/Chapter10.pdf


It's interesting to note how so many of the innovative research OSes of the 80s and 90s were based on firmly object-oriented architectures that enabled their features. This is in spite of OO being much maligned by contemporary FP evangelists. I'm not aware of what research has been done on FP (pure or impure) operating systems. I know of House, but it doesn't actually have any interesting features per se beyond the novelty of being written in Haskell.


The much-maligned 286 MMU was object based: segments were referred to by handle and had a byte-granular length, so out-of-bounds accesses were impossible.

I once saw a Smalltalk which used the 286 MMU like this. Each Smalltalk object had its own segment descriptor, effectively exploiting the MMU as a fast and cheap way to implement the object pointer table. The Smalltalk garbage collector could move things around behind the scenes and everything still worked, there being no pointers to update.

Of course, with segment sizes limited to 64kB and a limit of (I think?) 2^13 different segments it wouldn't scale to modern machines, but it was still pretty nifty for a 16-bit system. It's a shame it never got used to its full potential.


The System/38 is object-based, which is slightly different from what we tend to associate with object orientation, in that there is no inheritance mechanism by which objects can be extended with additional or different functionality.


The Lisp machine OSes were pretty amazing. Everything was dynamically updatable, stuff printed to the console remained live and interactive, and the user could configure or modify the OS or programs interactively and easily.

While industrial Lisps may or may not be "functional" compared to today's view of the topic, these implementations were at least based around first-class functions.


Many of those anti-OOP evangelists kind of forget that all successful FP languages are actually hybrids, and that there is more than one way of doing OOP.

Haskell type classes are no different than interfaces, at least conceptually.

Lisp had Flavors, then CLOS.

The O in OCaml actually stands for Objective, to imply its support for objects, which its predecessor Caml Light did not have.


We run a System i where I work (the current name of the AS/400 and iSeries). The old crusty bits are still underneath, but you'd be surprised at what runs on it.

Ruby, PHP, NodeJS, Apache... it's all there. Zend (http://www.zend.com/en/solutions/modernize-ibm-i) has a completely supported PHP platform on the i. PowerRuby (http://www.powerruby.com/) has Rails and DB2 support.

Check out www.youngiprofessionals.com for more


There's also a Stack Overflow ibm-midrange tag [1] that has some moderate activity.

[1]: https://stackoverflow.com/questions/tagged/ibm-midrange


The design of the S/38 was original and remarkable. However, it was marred by that horrible language, RPG (Report Program Generator). RPG was based on a plugboard tabulator paradigm.

Our local IBM reps said that IBM refused to release an Assembler for the S/38, keeping the internals a big secret.


Not sure about the S/38 but IBM i (AS/400) documentation for programming MI (machine interface) is available [1].

I once wrote an XMODEM CRC-16 routine in MI because I didn't have the C compiler available and I couldn't get RPG to calculate it fast enough to saturate the modem line.

[1]: http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/a...
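
For reference, XMODEM's CRC is CRC-16 with polynomial 0x1021 and initial value 0 (no reflection, no final XOR). A quick sketch of the same calculation in Python - obviously not the MI routine, just to show what it computes:

  # Bit-by-bit CRC-16/XMODEM: poly 0x1021, init 0x0000.
  def crc16_xmodem(data):
      crc = 0
      for byte in data:
          crc ^= byte << 8
          for _ in range(8):
              crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
              crc &= 0xFFFF
      return crc
  assert crc16_xmodem(b"123456789") == 0x31C3  # standard check value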


At the time I was a hardware engineer and I installed and upgraded so many System/34/36/38s I couldn't count them all. It was, by modern standards, dinosaur-era hardware but I loved working on them every time.

To add the 2nd frame onto an S/38 - the L-shaped frame on the left, which held more I/O cards and I think four more 62ED disk drives (64MB each, IIRC) - there were data cables that ran from the far corner of the 2nd frame to the extreme lower right corner of the card cage in the main unit. The cables ran through every cable channel and card gate in the system, and if they weren't run exactly the way IBM wanted them to be run, they'd come up about half an inch short and you'd have to re-run them again.

Miserable? Nah. Good times.


Only on topic in that it concerns interesting 1970s OS design, but does anybody have a reference with a similar level of technical detail to this one for the ICL (now Fujitsu) VME operating system?



