
I grew up with computing in the '70s and '80s, and I truly think the '90s were a period of devolution in the computer industry. Desktop (Microsoft) computing came along and gave computing to the masses, but the cost was that a lot of technical advances being made in the industry as a whole got swept under the covers, for various reasons: either they confused users, or Microsoft just didn't have the technical chops to get them into their OS, or whatever.

But I definitely think we make many, many advances in computing that get washed away by the mass-market mechanics required to get things out there in a way that satisfies the bean counters.

I remember being able to freeze/defrost processes on MIPS RISC/os back in the day. I even used it as part of my development/test process, since security policies in the computer rooms I was working in during the '80s meant that some (operational) systems were not allowed to have compilers. The only way I could test my new build was to run the process, freeze it to disk, send it over to the test-ops machines, and defrost it there. I did this so often that it just became commonplace, yet the ability to do this just faded during the '90s when I moved to other environments. I still can't do this easily on any of our existing systems, although there's CryoPID and company, but it strikes me as amusing that these sorts of capabilities are being touted as new and ground-breaking. More often than not, the discoveries of a unicorn comp-sci grad turn out to be re-inventions of things that were standard all along but got ignored in the rush to push the past aside and no longer deal with dinosaurs.

It's a funny business we're in. I look forward to the next cycle of devo/evo-lution.




That's called checkpointing or checkpoint/restore these days, and it's sadly mostly limited to HPC and cluster environments. That said, there are some projects (at least for Linux) like DMTCP and CRIU which try to offer it as userspace tools, if somewhat finicky.
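For the curious, the CRIU flavor of that old freeze/defrost workflow looks roughly like this (a sketch only: the PID and image directory are made up, CRIU needs root and a Linux kernel built with CONFIG_CHECKPOINT_RESTORE, and terminal-attached processes need the --shell-job flag):

```shell
mkdir -p /tmp/ckpt

# Freeze: dump the process tree rooted at PID 12345 to image files on disk.
sudo criu dump -t 12345 --images-dir /tmp/ckpt --shell-job

# (The image directory could now be shipped to another, similar machine,
# much like the old "send it over to the test-ops box" step.)

# Defrost: bring the process back to life from the saved images.
sudo criu restore --images-dir /tmp/ckpt --shell-job
```

In practice this is where the "finicky" part comes in: open TCP connections, device files, and shared resources all need extra options or simply can't be restored, which is why it works best for self-contained batch-style jobs.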

DragonFly BSD has had kernel-level checkpointing through sys_checkpoint(2) since 2005, but it has limitations with multithreaded programs.


Thanks for the pointer about DragonFly; I'm not at all surprised that it has solid checkpointing, since its developers are definitely hard-metal types.

For my part, I treat my Docker files as my own little server farm, and production is of course managed elsewhere, so it's all just an amusing analog of how things worked 'in the good old days' ..

CryoPID on Linux looked interesting for a while, but I guess it's not really relevant as a feature in this age of hardware. In the good old days, it was necessary to checkpoint to get out of the way of the other jobs the computing facility had to perform .. endless tapes of checkpoints, hanging on the wall, waiting to be spooled, re-spooled, etc.


I guess it is one of those flukes of history.

Those early home desktops were not the most potent of computers. Remember that there was a distinction made between PCs and workstations.

Thus I fear many who grew up with a PC in the home ended up with something akin to mental blinders.

And by the time they hit the higher educational tiers, the labs etc. had transitioned to using PCs on networks rather than terminals tied to big iron.


I believe you are correct, and it's an interesting aspect of technology that progress in one realm can mask/obfuscate/negate progress made in another ..


Well, not just technology.

Many a recent discovery in science has been thanks to someone glancing over the mental partitions between fields of study and going "hey, I recognize that. We have a decade-old solution for it."


The problem is that IBM shot themselves in the foot on this (just as Cisco seems to be hell-bent on doing). As near as I can tell it goes like this:

(1) Companies want hardware they can find people with experience on.

(2) The only way to get IBM big-iron experience is to have big-iron hardware (ongoing scuffles with emulation: https://www.google.com/search?q=ibm+sues+emulator https://en.wikipedia.org/wiki/Hercules_%28emulator%29 ).

(3) IBM mainframes are expensive compared to commodity hardware.

(4) Universities don't choose to run IBM mainframes.

(5) A tiny minority of people coming out of university actually have experience.

(6) Companies are worried about the lack of inexpensive employees with experience.

They have amazing features in their OSes, for sure, but no amount of marketing budget makes up for not being able to say "I can throw this on my PC in some fashion and learn about those features."


Bingo. And that's why you see the likes of Apple, Adobe, and Microsoft bend over backwards to offer students deep discounts.

Heck, MS has been using their personal-computing market share in their business sales pitches, in the form of "total cost of ownership": specifically, that people will be accustomed to MS interfaces from their home use, so less training is needed for new employees.





