Mainframe Programming vs. Cloud Computing (slashdot.org)
85 points by dolmen on Feb 1, 2021 | 41 comments



That's about right. I spent most of 1999 rewriting a payroll application in VB6 + SQL Server + ASP; it was one of the last remaining things on an IBM crate that they wanted to get rid of because it wasn't trendy any more. I ended up basically rewriting some of the stuff verbatim. The only gain was that they got a web GUI rather than a terminal emulator, and I'm not sure that was a gain for that class of application, as the users had muscle memory for the old screens.

Roll on 20 years: at last check the mainframe was still in there, and the payroll system had been rewritten three times over since, with virtually no functional change. The mainframe still does internal stock control and reporting.

I think the last rewrite was in Node.js, and the lack of a strong decimal type in a financial application gives me the shivers.
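To make that concrete, here is a minimal illustration (not from the thread; the 22% tax rate is made up) of why binary floating point and payroll don't mix, plus the common workaround of keeping amounts in integer cents -- BigInt is one option, a decimal library is another, and the truncating division below is deliberately naive:

    // IEEE 754 doubles cannot represent most decimal fractions exactly.
    console.log(0.1 + 0.2);          // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);  // false

    // Common workaround: keep money as integer cents, format only at the edges.
    const grossCents = 123456n;                  // $1,234.56
    const taxCents = (grossCents * 22n) / 100n;  // hypothetical 22% tax, truncated
    const netCents = grossCents - taxCents;

    function formatCents(c) {
      const sign = c < 0n ? '-' : '';
      const abs = c < 0n ? -c : c;
      return sign + '$' + (abs / 100n) + '.' + (abs % 100n).toString().padStart(2, '0');
    }

    console.log(formatCents(netCents));          // $962.96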


I began writing software in the early 1980s in BASIC, Assembly, and RPG II, eventually moving to C, Bash, Perl, C++, and PHP along with SQL, and more recently JavaScript/Node.js, C#, and Python. The truth is, these languages have continually become more complex, more memory-bloated, slower, harder to learn, and less efficient to code in. Frameworks have enabled people who cannot write software from scratch to write applications too, but those applications are very restricted and poorly performing.

COBOL is still easy to argue as the best overall language for business data processing: easy to learn, easy to read and debug, and very fast in execution.

Next up might be Python, except Python is very slow and, although it has nice data structures, it is restrictive and clumsy with them. For example, you are kept from manipulating a list while looping through it, and you're often forced to add extra logic to deal with None values and such. It's also super annoying to have to type myvar['subvar']['subsubvar'], etc.

Third up I would say is Node.js. Performance and object/data-structure flexibility are very good. The only problem--and it is a major problem--is that asynchrony is forced upon you even when working with SQL databases. This very much complicates data processing, to the point of not being practical at all for any kind of fluid business process (where you have to read and make changes on a regular basis).
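As a rough sketch of what that forced asynchrony looks like with a typical driver (node-postgres here; the table and column names are invented for illustration): even a simple read-then-update has to be threaded through async/await, because there is no blocking query call.

    // Sketch with the 'pg' (node-postgres) driver: every round trip to the
    // database returns a Promise, so the whole call chain must be async.
    const { Client } = require('pg');

    async function adjustBalance(accountId, deltaCents) {
      const client = new Client();   // connection details come from env vars
      await client.connect();
      try {
        await client.query('BEGIN');
        const res = await client.query(
          'SELECT balance_cents FROM accounts WHERE id = $1 FOR UPDATE',
          [accountId]
        );
        const next = res.rows[0].balance_cents + deltaCents;
        await client.query(
          'UPDATE accounts SET balance_cents = $1 WHERE id = $2',
          [next, accountId]
        );
        await client.query('COMMIT');
        return next;
      } catch (err) {
        await client.query('ROLLBACK');
        throw err;
      } finally {
        await client.end();
      }
    }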

I have also spent a few years writing in Microsoft JScript.NET. While almost nobody else uses this language, I found it to be far superior to C#, Java, or really any other language for business data processing, except for performance. It performs as well as C# or Java and is far more readable and efficiently writable, but the Microsoft .NET CLR (Common Language Runtime) that underlies all .NET languages has no real support for variant data types. Instead it instantiates a generic object for each one, which is very expensive in memory and performance. This experience is what originally got me interested in Google's V8 JavaScript JIT compiler and eventually its derivative, Node.js. However--and even though I use Node.js heavily--this also leaves me with a rather large feeling of tragedy that the Node.js maintainers are so dead set against synchronous access to SQL databases... It is a huge loss.


There is a quote about mainframe programming vs. minicomputer (server) programming that I love (paraphrasing):

"Mainframe programmers write code as if the computer is reliable, Minicomputer programmers know better"

The thing about a mainframe is that its I/O is an order of magnitude faster, and you can literally remove half or more of the computer from the computer and the thing will keep processing. That's the advantage of it.


To be fair, mainframes did clustering long before anyone else, too.

In the '60s hardware was not very reliable, so they had to do something.


IBM ASP (Attached Support Processor) implemented scheduling similar to what one does today with Kubernetes and the like, back in 1967. It used a smaller S/360 computer as a controller for a larger one, with the larger one running only the application workload and the smaller one running all the housekeeping tasks. It was soon extended to handle multiple worker nodes, and was rewritten for MVS as JES3.


This is a selective reframing of mainframe and cloud computing. Some points to add:

- CICS was largely used to run online apps; using it to process back-end functions grew in popularity during/after the client-server period

- COBOL has become a fairly complex language; e.g., you can write a GUI app for Windows (or, at the time, OS/2 PM) with it

- Cloud has many dimensions other than shared allocation of computing resources

This high-level comparison isn't much different from, or more worthwhile than, saying a Facebook datacenter is their modern computer/application platform. Yes, you can draw parallels, but how does this help anyone? Recognizing the differences and making appropriate choices requires details, not hand-wavy similarities.


CICS is a TP monitor, and there were minicomputer and Unix implementations of such systems, including Encina, which IBM bought and which became CICS for RISC.

I worked on a TP monitor for VAX/VMS from 1982-1988, which had many of the features of CICS on Mainframes.

In 1995, I started a company called WebLogic, and wrote the first Java app server. Many of the features and techniques made their way into the WebLogic app server.

The rest is history.


"- The CPU in the node isn't important. The binary format for IBM mainframes is a virtual machine language and has to be just in time compiled on the node where it will run if a binary wasn't already cached from earlier transactions."

Maybe I am nitpicking, but this sounds more like the System/38 and AS/400 than the mainframe. Is the author referring to some particular technology that runs on top of the mainframe?


I think it's just poorly worded (or they indeed confused it with the AS/400), and what they really mean is that, while IBM mainframes do technically run machine code that has been kept compatible for decades, today lots of old instructions are microcoded and the overall CPU architecture is so different that it amounts to recompilation in hardware. So the particular choice of assembly language doesn't really matter anymore.


I'd suspect it's a misinterpretation of how microcoded computers work, as on first contact they do sound like virtual machines in that sense.


I've heard of true mainframe systems (i.e. S/360, not that minicomputer nonsense that people call mainframes for some reason) that will compile the code to run just in time, on the equivalent of an API call. Sort of like if lambdas ran a CI job for the code they were about to execute whenever it hadn't been built since the code was last updated.


I don't think that's quite true. I think the closest thing to that on the mainframe is that databases can dynamically (JIT) compile programs in query languages (SQL, DL/I) that used to be compiled statically (AOT).


I've definitely heard of shops that compile COBOL and PL/1 in response to CICS transactions if the stored source is out of date.


Worth noting even its creators say JCL is the worst language ever invented.


> Worth noting even its creators say JCL is the worst language ever invented.

I don't dispute that, but I'd kinda like to hear the specifics of their reasoning.

I've never actually used JCL, but I'm willing to cut it a little slack because, IIRC, it literally started in the 60s as a way to specify job-execution information on punched cards for batch-processing systems that had as little as 8K of memory. What I've read about very early personal computers with similar amounts of memory makes me think it requires serious compromises to implement anything that actually works on such a system.


Like everything on mainframe, JCL is old and has lots of warts. But it is, essentially, what is known today as "infrastructure as code".


As someone who doesn't know what JCL is, this is an enlightening statement. Anyone who's used IaC-oriented languages knows they suck too.


> old and has lots of warts

It was still the worst programming language ever created even when it was new and wart free.


Some programs have warts before the first line of code is written.


JCL was not exactly fun to write, but M4 gives it a run for its money.


dnl And the comment syntax is nuts!


I think anything that becomes a rite of passage (editing the m4 file for sendmail) has to have a mention when discussing bad implementations.


I had to learn some JCL to do the Master the Mainframe challenge; other than the syntax being really odd, it's not awful.


Yeah. JCL's been that way for a long time. That's why it's recommended not to get too creative with JCL. It should be easy for the operator to understand what to do if a job fails: re-run from the top, or restart from a particular step.

If you don't make things complicated, it's easy.


> That's why it's recommended not to get too creative with JCL.

That's solid advice for any language. Clever code actually isn't.


I worked with mainframes long ago, for about a year.

When it came to JCL, the advice I got and what everyone I worked with did, was to copy working JCL and modify it. I never wrote it from scratch, nor did I know anyone who did.


Do you have a source for that? I'd like to show it to some mainframe fanboys at work who praise that abomination.


Fred Brooks led IBM 360 software development. From his oral history at the Computer History Museum [1] (on p.33 of 49):

That’s the worst mistake we made was JCL.

Yeah. Well the existence of JCL was a mistake. Building it on a card format was a mistake. Building it on assembly language was a mistake. Thinking of it as only six little control cards instead of a language was a mistake so it had no proper sub-routine facilities, no proper branching facilities.

It kind of grew. It kind of grew but when you end up with your data definitions doing all the verbiage things because you’ve limited yourself to six verbs that’s a language mistake.

We didn’t see it as a language was the fundamental problem. We saw it as a set of control cards.

And lo and behold they’re still around, the dusty decks that nobody dare touch because they run and nobody knows what’s inside

Incredibly complicated, the keyword parameters and the set goes on and on and on and on ...

[1] https://archive.computerhistory.org/resources/access/text/20...


This does not sound like they are saying the language itself is bad for the users.

It sounds more like they are saying it’s a nightmare for the creators of the language to maintain.


It sounds to me like they think it's a terrible language for the users because they never put in the engineering rigor that they normally would for a fully new language. It grew 'organically', to its own (and its users') detriment.


> And lo and behold they’re still around, the dusty decks that nobody dare touch because they run and nobody knows what’s inside

Any language that allows its users to do that to themselves is a very bad language. Good languages encourage good practices.


I have to say it's the kind of impression I get from Terraform. I get the feeling it evolved from a JSON-like syntax to something that has conditionals, then variables, then some form of iteration, then some form of functions, but they are more like macros because the language is declarative... At every stage it seems it was impossible to step back and think it through.


I challenge them to try out DOORS DXL.


If you go read the original Borg paper, it's clear that they're heavily inspired by IBM mainframes. They even call the job description language 'BCL'. I remember thinking at the time "oh, that's cute, they must have had the task 'what changes if you make a single z/OS cluster run on a million cores'"


It was a marketing war: IBM, Hitachi, NCR and other mainframe vendors vs. Sun, HP, and later Microsoft and Oracle on the distributed-computing side. As usual, it wasn't about whose tech was better but about who was going to get the customer's money. Back in the late 90s I saw IBM terminals doing some impressive graphics -- and that was just the built-in stuff for the UI. I built my career on distributed computing, first Sun and then Oracle on Red Hat, with all the FOSS in between like Apache and MySQL. But I'd take a 360 running Linux in an LPAR over any cluster of servers in a data center. AWS, for one, changed everything and has led to an explosion of alternatives. Can't imagine ever going back.


I often wonder what an open source OS designed to fulfill the role of mainframes would actually look like.

I'm going to describe this really badly, but I see things like Excel and HyperCard, and I really wonder what something acting more like a mainframe, but with the input sensibilities of user-oriented programs like Excel/HyperCard, would look like. Maybe I played with Orchestrations in BizTalk a bit too much.


Mainframes? It's easy. Some people might feel trapped; it's the green screen and text. They call it legacy, but z/OS gets upgraded, and every 5-10 years there's a whole new machine. All the code on it stays backward compatible. A few years back it used to be just COBOL, Rexx, Db2, JCL and CICS, but now Git, Hadoop and Python have gotten in.

Things are getting a bit interesting...

Mainframe is the first cloud...


> it's the green screen and text.

Nonsense! 3279's could do 8 colors! ;-)

Now, on a more serious note, 3270s were the '70s browsers. They had forms, sent them to the computer, and got back other pages. They were pretty smart.


They cost about a million dollars... Although you could amortize that by hooking up four terminals to the controller. Very close to modern day text browsers. Just Really Expensive.


They were incredibly well built. It's a shame so few are left.


Proprietary cloud APIs are the new mainframe.



