"Wirth's architecture was late"

You're missing the point: abstract some machine differences behind a system module, then build on it in a safer, easy-to-compile language with optional efficiency/flexibility tradeoffs. Thompson and Ritchie could've done that given the prior art, but they wanted a trimmed-down MULTICS written in that BCPL-style language Thompson preferred. Around 5 years later, Wirth et al. had a weak system to work on, did what I described, and got much better technical results. His prior work, Pascal-P, got ported to around 70 architectures, from 8-bit micros to mainframes, in about 2 years, largely by amateurs. Imagine if UNIX had been done the Wirth way and then spread like wildfire: portability, safety, compile times, modifications, integrations... all would've been better. Safety checks could stay off initially where the performance hit was too big, then be enabled gradually as a compiler option as hardware improved, which is exactly what Wirth et al. did. I included the Edison System reference because Brinch Hansen did the Wirth style on a PDP-11, proving it could've been done by the UNIX authors.
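To make that layering concrete, here's a rough OCaml sketch (all names invented for illustration, not anyone's actual code): the machine dependencies live behind one SYSTEM-like signature, everything else is written against it in a checked language, and the safety checks are a compiler-level knob.

    (* Hypothetical sketch of the "system module" idea: all machine
       dependencies sit behind one signature; the rest stays portable. *)
    module type SYSTEM = sig
      type address
      val word_size : int                       (* machine word in bytes *)
      val offset    : address -> int -> address (* address arithmetic    *)
      val peek      : address -> int            (* raw memory read       *)
      val poke      : address -> int -> unit    (* raw memory write      *)
    end

    (* Porting the system means supplying a new SYSTEM implementation,
       not rewriting the callers. *)
    module Kernel (Sys : SYSTEM) = struct
      let copy_words ~src ~dst ~n =
        (* Bounds/overlap checks would go here -- compiled out at first
           for speed, re-enabled as an option as hardware improves. *)
        for i = 0 to n - 1 do
          let off = i * Sys.word_size in
          Sys.poke (Sys.offset dst off) (Sys.peek (Sys.offset src off))
        done
    end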

"Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, and lispms all implementing their own version of the language: more elegant, less practical."

Those were choices by the authors. As above, they could've done what the PreScheme and Chez people later did: make an efficient variant of LISP, with or without a GC. Glorified, high-level assembly if nothing else. PreScheme could even piggy-back on C compilers, given how prevalent they were by the time it was written. It took until the '90s before someone was wise enough to do that, though I may have missed an earlier attempt in LISP's long history. There are also LISP implementations formally verified for correctness down to x86, PPC, and ARM, which would've benefited any app or OS written on top of them. Pulling that off for C took a few decades... using Coq and ML languages. :)
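For flavor, a toy OCaml sketch (my own invented mini-language, nothing to do with PreScheme's real implementation) of the "piggy-back on C" idea: restrict the dialect so every form maps straight to C, then let an existing C compiler do the heavy lifting, with no runtime or GC needed.

    (* Tiny restricted LISP-ish AST: only forms with an obvious C translation. *)
    type expr =
      | Int of int
      | Var of string
      | Add of expr * expr
      | Let of string * expr * expr   (* (let ((x e)) body) *)

    (* Emit a C expression; Let uses a GCC statement-expression purely to
       keep the sketch short. No heap allocation, so no GC. *)
    let rec to_c = function
      | Int n -> string_of_int n
      | Var x -> x
      | Add (a, b) -> Printf.sprintf "(%s + %s)" (to_c a) (to_c b)
      | Let (x, e, body) ->
          Printf.sprintf "({ int %s = %s; %s; })" x (to_c e) (to_c body)

    let () =
      (* (let ((x 40)) (+ x 2))  ->  ({ int x = 40; (x + 2); }) *)
      print_endline (to_c (Let ("x", Int 40, Add (Var "x", Int 2))))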

"Unix reccomends a lot, but ultimately perscribes little."

My recommendations do that by being simple, functional or imperative languages with modules. Many academics and professionals could easily modify those compilers or systems to bring in cutting-edge results because the analysis was tractable. UNIX is the opposite: it prescribes a specific architecture, style, and often language that made high-security or high-integrity improvements hard to impossible in many projects. The likes of UCLA Secure UNIX failed to achieve their objective even on a simple UNIX. Most of the field just gave up, the result being some emulation layer or VM running on top of something better just to get the apps in there. That's also the current approach in most cloud models leveraging UNIX stacks. It wasn't until relatively recently that projects like CompCert, Astrée, SVA-OS, or Cambridge's CHERI started coming up with believable ways to get that mess to work reliably & securely. It's so hard that people earn PhDs pulling it off, versus undergrads or Master's students doing it for the alternatives.

So, yeah, there's definitely something wrong with that approach given that alternatives can do the same thing with less labor. Hell, ATS, OCaml, and Scheme have all been implemented on 8-bit CPUs while keeping their advantages. You can run OpenVMS, MCP, Genera, or MINIX 3 (self-healing) on a desktop now, directly or emulated. You can get the advantages I mentioned today with reasonable performance. Just gotta ditch UNIX and pool FOSS/commercial labor into better models, while improving UNIX & others for interim benefits.




You can't run Genera in anything like a sane manner. I've tried.


You can run it, though, which is the point. It doesn't require a supercomputer or mainframe. It could be cloned with a combo of a dynamic LISP (flexibility/safety) and a static LISP (low-level/performance), where the latter might use Rust-style safety as in Carp. You can still isolate drivers and/or app domains in various ways for reliability, as in JX OS (see the sketch below). The necessary components are there for a modern, fast, desktop LISP machine with its old benefits.
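Something like this, in a loose OCaml sketch (interfaces invented for illustration, not JX's actual API): a driver or app domain only gets the capabilities handed to it, so isolation comes from the language and type system rather than the MMU.

    (* A driver domain sees only these capabilities, nothing else. *)
    module type BLOCK_DEV = sig
      val read  : block:int -> bytes          (* returns a fresh buffer *)
      val write : block:int -> bytes -> unit
    end

    module type CONSOLE = sig
      val log : string -> unit
    end

    (* The filesystem "domain" is parameterized over exactly the
       capabilities it needs; a buggy driver can't reach anything else. *)
    module Make_fs (Dev : BLOCK_DEV) (Con : CONSOLE) = struct
      let superblock () =
        Con.log "reading superblock";
        Dev.read ~block:0
    end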

People just use monoliths in C instead & call it good design/architecture despite the limitations. Saying "it's good enough for my needs" is a reasonable justification for using inferior technology. It's just not good to pretend it's something it isn't. When you don't pretend, you get amazing things like the BeOS or QNX desktop demos that did what UNIX/Linux desktop users might have thought impossible at the time. Since UNIX/Linux were "better." ;)


Who said writing monoliths was a good idea? Because that wasn't me. Monoliths are bad. And yeah, you shouldn't write your app in C.



