
If you're referring to structured records, I saw the mainframe, I used the mainframe, and I was unimpressed.

As for unstructured text communication, say what?!? Every good UNIX engineer knows: build in a -m switch for versioned, machine-readable output, and if possible, make that output a stable interface. That's clear, at least to me. That isn't clear to you?
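A sketch of the idea (a hypothetical "iflist" tool, names my own invention, but the pattern is the standard one):

    # Hypothetical "iflist": -m selects versioned, machine-readable output.
    # The first line names the format version so consumers can treat the
    # rest as a stable interface; default output stays friendly to humans.
    import sys

    LINKS = [("eth0", "up", 1500), ("lo", "up", 65536)]

    if "-m" in sys.argv[1:]:
        print("iflist-format:1")                    # versioned interface
        for name, state, mtu in LINKS:
            print(f"{name}\t{state}\t{mtu}")        # stable, tab-separated fields
    else:
        for name, state, mtu in LINKS:
            print(f"{name} is {state}, mtu {mtu}")  # free-form, for humans only

Scripts pin themselves to "iflist-format:1"; the human output can then change every release without breaking anyone.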

And I hope by structured text, you don't mean garbage like JSON, one of the most inconsistent and idiotic formats I have ever seen?

Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.
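To make the complaint concrete: the same XML record can be legally reflowed across any number of lines, so line-oriented tools can't see it, and you're pushed into a full parser (Python here, purely for illustration):

    # Well-formed XML may split one logical record across lines however it
    # likes, so grep/sed/awk (all line-oriented) can't match it reliably.
    import xml.etree.ElementTree as ET

    doc = "<users><user id='1'>\n  <name>alice</name>\n</user></users>"
    # `grep '<user.*alice'` misses this record entirely; a parser doesn't:
    for user in ET.fromstring(doc).iter("user"):
        print(user.get("id"), user.findtext("name"))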




>Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.

I'd prefer a type system so I can use these tools like a library. Most of them only work on piped data or files.

A recent example is that I needed to diff files. There are existing programs, and I didn't want to reinvent the wheel; I just needed that particular wheel to build something else.

To use the existing programs I had to write to a file, which is too slow for my use case. It would be much easier if I could hand these tools a pointer to my in-memory data structures and get the diff back in another structure.
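Python's difflib has roughly the shape I wanted (not the tool I was actually wrapping, but it shows the library form): you hand it in-memory sequences and get the diff back as data, no files or subprocesses involved.

    # difflib diffs in-memory sequences directly: no temp files, no pipes.
    import difflib

    old = ["alpha\n", "beta\n", "gamma\n"]
    new = ["alpha\n", "beta!\n", "gamma\n"]

    # The diff comes back as an iterator of lines, i.e. another data
    # structure I can feed into whatever I'm building.
    print("".join(difflib.unified_diff(old, new, fromfile="a", tofile="b")))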

This is one reason why we often see libraries replicating /bin. PowerShell did a good job of solving this (but was too flawed in other ways).


Textual interfaces enforce decoupling. In a lispy system, with richer interfaces, you can couple your apps and functions as tightly as you want. In Unix, the textual interchange limits you.

However, if you have more complex data to send, text may be problematic. And if you're going to send structured data via text, you need a standard, easily parsed format so that people can consume your data without having to roll their own, incredibly buggy, parser. JSON and DSV are both easy to parse, and so those are the formats people use, like it or not. And no, it's not inconsistent. It wouldn't be so easy to parse if it were.
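For instance, with stock libraries both formats come apart in a line or two (Python here, but most languages ship equivalents):

    # JSON and DSV both have off-the-shelf, well-tested parsers.
    import csv, io, json

    record = json.loads('{"name": "eth0", "state": "up", "mtu": 1500}')
    print(record["mtu"])                 # -> 1500

    for name, state, mtu in csv.reader(io.StringIO("eth0,up,1500\nlo,up,65536")):
        print(name, state, mtu)          # fields arrive pre-split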

Also, I have never seen a tool with -m. Maybe it's because I'm running Linux.


"If you're referring to structured records, I saw the mainframe, I used the mainframe, and I was unimpressed."

You saw a mainframe. I saw a number of them that were quite different from each other. The parent said "a step back," though, not mainframes or a specific mainframe. There were many architectures that came before or after UNIX with better attributes, as I list here:

https://news.ycombinator.com/item?id=10957020

If we're talking minimal hardware, let's look at two other approaches. One was Wirth's. They define an idealized assembly language to smooth over hardware and portability issues. It's very fast due to being close to bare metal, and simple enough that amateurs can implement it. They design a safer system language that's consistent, easy to compile, type-checks interfaces, can insert e.g. bounds checks, and compiles to fast code. They write the whole system in that. Various functions are modules that directly call other modules. The high-level language, rapid compilation, and low debugging load mean that two people crank out the whole system & tooling in about 2 years. Undergrads repeatedly extend or improve it, including ISA ports, in 6mo-2yr per person. A2 Bluebottle runs insanely fast on my 8-year-old hardware despite little optimization and the OS running in a garbage-collected language.

Brinch Hansen did something similar in parallel with Solo OS, except he eliminated data races at compile time with his Concurrent Pascal. He later did a Wirth-style system on the PDP-11, called Edison, with similar benefits.

On the functional end, various parties created the ultimate hacker language in LISP. Important properties were easy DSL creation, incremental compilation of individual functions, live updates, the ability to simulate any development paradigm, memory safety, and being higher-level in general. The LISP machines implemented most of their OS's and IDE's in these languages. Imagine REPL-style coding of an application that runs very fast, whose exceptions, even at the IDE or OS level, could be caught, analyzed in source form, and patched while it was running. Holy. Shit. They targeted large machines, but Chez Scheme (8-bit) and PreScheme (a C competitor) showed many benefits could be had on small machines. Jonathan Rees even made a capability-secure version of Scheme which, combined with the language's safety benefits, made it one of the most powerful for reliability or security via isolation. A project to combine the three concepts could have amazing potential.

So, yeah, UNIX/C was a huge step back in compiler speed/consistency, speed/safety tradeoffs in production, flexibility for maintenance, integration, debugging, reliability, security, and so on. Tons of architectures or languages were better on each of these, with some having easier programming models. That Thompson and Pike's chosen set of language features for a C replacement (Go) was collectively an Oberon-2 clone is also an implicit endorsement of the competing system. Plenty of nails in the coffin. Sociology, economics, and luck are the reasons driving it. The tech is horrible.


UNIX was the best thing at the time. It had good interfaces for IPC, could run on most systems, not just big, expensive ones, and was relatively portable. And sometimes, Worse really is Better. Wirth's architecture was late, and more expensive computationally. Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, with Lisp machines all implementing their own version of the language: more elegant, less practical.

Unix was and is successful because it was good enough, and far more platform, language, and technique agnostic than the competition. Unix recommends a lot, but ultimately prescribes little.


"Wirth's architecture was late"

You're missing the point: abstracting some machine differences behind a system module, then building on it in a safer, easy-to-compile language with optional efficiency/flexibility tradeoffs. Thompson and Ritchie could've done that given the prior art, but they wanted a trimmed-down MULTICS with that BCPL language Thompson had a preference for. Around 5 years later, Wirth et al had a weaker system to work on and did what I described with much better technical results. His prior work, Pascal-P, got ported to around 70 architectures, ranging from 8-bit to mainframes, in about 2 years by amateurs. Imagine if UNIX had been done the Wirth way and then spread like wildfire. Portability, safety, compiles, modifications, integrations... all would've been better. Safety checks off initially where necessary due to the huge impact on performance, but gradually enabled as a compiler option as hardware improved. As Wirth et al did. I included the Edison System reference because Hansen did the Wirth style on a PDP-11, proving it could've been done by the UNIX authors.

"Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, and lispms all implementing their own version of the language: more elegant, less practical."

Choices of the authors. Similar to the above, they could've done what the PreScheme and Chez people did in making an efficient variant of LISP, with or without GC. Glorified, high-level assembly if nothing else. PreScheme could even piggy-back on C compilers, given they were prevalent at the time it was written. It took till the 90's before someone was wise enough to do that, although I may have missed one in LISP's long history. They also formally verified it for correctness down to x86, PPC, and ARM. That would've benefited any app or OS written in it later. Pulling that off for C took a few decades... using Coq and ML languages. :)

"Unix reccomends a lot, but ultimately perscribes little."

My recommendations do that by virtue of being simple, functional or imperative languages with modules. Many academics and professionals were able to easily modify those compilers or systems to bring in cutting-edge results due to tractable analysis. UNIX is the opposite. It prescribes a specific architecture, style, and often language that made high-security or high-integrity improvements hard to impossible in many projects. The likes of UCLA Secure UNIX failed to achieve the objective even on a simple UNIX. Most of the field just gave up, with the result being some emulation layer or VM running on top of something better to get the apps in there. That's also the current approach in most cloud models leveraging UNIX stacks. It wasn't until relatively recently that groups like CompCert, Astree, SVA-OS, or Cambridge's CHERI started coming up with believable ways to get that mess to work reliably & securely. It's so hard that people are getting PhD's for pulling it off, vs undergrads or Masters students for the alternatives.

So, yeah, there's definitely something wrong with that approach given that alternatives can do the same thing with less labor. Hell, ATS, OCaml, and Scheme have all been implemented on 8-bit CPU's with their advantages intact. You can run OpenVMS, MCP, Genera LISP, or MINIX 3 (self-healing) on a desktop now, directly or emulated. You can get the advantages I mentioned today with reasonable performance. Just gotta ditch UNIX and pool FOSS/commercial labor into better models. Also improve UNIX & others for interim benefits.


You can't run Genera in anything like a sane manner. I've tried.


You can run it, though, which is the point. It doesn't require a supercomputer or mainframe. It could be cloned with a combo of a dynamic LISP (flexibility/safety) and a static LISP (low-level/performance), where the latter might use Rust-style safety as in Carp. You can still isolate drivers and/or app domains in various ways for reliability, as in JX OS. The necessary components are there for a modern, fast, desktop LISP machine with its old benefits.

People just use monoliths in C instead & call it good design/architecture despite the limitations. Saying "it's good enough for my needs" is a reasonable justification for inferior technology. It's just not good to pretend it's something it isn't. When you don't pretend, you get amazing things like the BeOS or QNX desktop demos that did what UNIX/Linux desktop users might have thought impossible at the time. Since UNIX/Linux were "better." ;)


Who said writing monoliths was a good idea? Because that wasn't me. Monoliths are bad. And yeah, you shouldn't write your app in C.



