
begin

this is terrible news;

is there a better source than twitter (edit: https://lists.inf.ethz.ch/pipermail/oberon/2024/016856.html thanks to johndoe0815);

wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand; now only hoare and moore remain, and moore seems to have given the reins at greenarrays to a younger generation;

young people may not be aware of the practical, as opposed to academic, significance of his work, so let me point out that begin

the ide as we know it today was born as turbo pascal;

most early macintosh software was written in pascal, including for example macpaint;

robert griesemer, one of the three original designers of golang, was wirth's student and did his doctoral thesis on an extension of oberon, and wirth's languages were also a very conspicuous design inspiration for newsqueak;

tex is written in pascal;

end;

end.




> wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand;

And yet far from the last. Simple, correct, and beautiful software is still being made today. Most of it goes unnoticed, its quiet song drowned out by the cacophony of attention-seeking, complex, brittle behemoths that top the charts.

That song never faded; you just need to tune in.


who are today's new great minimalists?


In no particular order: 100r.co, OpenBSD (& its many individual contributors such as tedu or JCS), Suckless/9front, sr.ht, Alpine, Gemini (&gopher) & all the people you can find there, Low Tech Magazine, antirez, Fabrice Bellard, Virgil Dupras (CollapseOS), & many other people, communities, and projects - sorry I don't have a single comprehensive list, that's just off the top of my head ;)


I would add Jochen Liedtke (unfortunately he passed away already more than 20 years ago) as inventor of the L4 microkernel.

Several research groups continued work on L4 after Liedtke's death (Hermann Härtig in Dresden, Gernot Heiser in Sydney, a bit of research at Frank Bellosa's group in Karlsruhe and more industrial research on L4 for embedded/RT systems by Robert Kaiser, later a professor in Wiesbaden), but I would still argue that Liedtke's original work was the most influential, though all the formal verification work in Sydney also had significant impact - but that was only enabled by the simplicity of the underlying microkernel concepts and implementations.


agreed, though i think l4 was more influential than eumel (which is free software now by the way) even though eumel preceded l4


i... really don't think kris de decker is on niklaus wirth's level. i don't think he can write so much as fizzbuzz

fabrice bellard is wirth-level, it's true. not sure about tedu and jcs, because i'm not familiar enough with their work. it's absurd to compare most of the others to wirth and hoare

you're comparing kindergarten finger paintings to da vinci

you said wirth was "far from the last" apostle of simplicity. definition of apostle: https://en.wikipedia.org/wiki/Apostle


> it's absurd to compare most of the others to wirth and hoare

You're the one trying to directly compare achievement, not me. If you're looking for top achievers, I'd have to name PHP or systemd, and THAT would be out of place ;)

I even said "in no particular order", because I don't think any two can be easily compared.

My main criterion for inclusion was the drive for simplifying technology, and publishing these efforts:

> An apostle [...], in its literal sense, is an emissary. The word is [...] literally "one who is sent off" [...]. The purpose of such sending off is usually to convey a message, and thus "messenger" is a common alternative translation.

Every single project, person, or community I've named here has some form of web page, blog, RSS feed, papers/presentations, and/or source code, that serve to carry their messages.

Achievement can be measured, simplicity can only be appreciated.


I'd also mention https://www.piumarta.com


ooh, yeah, now that's a possibility


uxn ftw <3


uxn is cool but it's definitely not the same kind of achievement as oberon, pascal, quicksort, forth, and structured programming; rek and devine would surely not claim it was


to quote gp, "beautiful software is still being made today". It's not a competition.


you don't get to be described as an 'apostle of simplicity' just because you like simplicity. you have to actually change the world by creating simplicity. devine and rek are still a long way from a turing award


From your Wikipedia link about the meaning of the word Apostle:

“The term [Apostle] is also used to refer to someone who is a strong supporter of something.[5][6]“

So I would call many people, and myself (as someone who started studying Computer Science with Assembler and Modula-2), Apostles of simplicity.

No need for techno-classism.


you don't get to dictate who does or doesn't get recognized for creating awesome works that influence and inspire others. take your persistent negativity elsewhere.

btw, uxn is absolutely the exemplification of "software built for humans to understand" and simplicity. I mean...

> the resulting programs are succinct and translate well to pen & paper computing.

> to make any one program available on a new platform, the emulator is the only piece of code that will need to be modified, which is explicitly designed to be easily implemented

https://100r.co/site/uxn_design.html

how one can frame this as trivial is beyond me.


i don't think uxn is trivial, i think it's a first step toward something great. it definitely isn't the exemplification of "software built for humans to understand"; you have to program it in assembly language, and a stack-based assembly language at that. in that sense it's closer to brainfuck than to hypertalk or excel or oberon. it falls short of its goal of working well on small computers (say, under a megabyte of ram and under a mips)

the bit you quote about uxn having a standard virtual machine to permit easy ports to new platforms is from wirth's 01965 paper on euler http://pascal.hansotten.com/niklaus-wirth/euler-2/; it isn't something devine and rek invented, and it may not have been something wirth invented either. schorre's 01963 paper on meta-ii targets a machine-independent 'fictitious machine' but it's not turing-complete and it's not clear if he intended it to be implemented by interpretation rather than, say, assembler macros

i suggest that if you develop more tolerance for opinions that differ from your own, instead of deprecating them as 'persistent negativity' and 'dictating', you will learn more rapidly, because other people know things you don't, and sometimes that is why our opinions differ. sometimes those things we know that you don't are even correct

i think this is one of those cases. what i said, that you were disagreeing with, was that uxn was not the same kind of achievement as oberon, pascal, quicksort, forth, and structured programming (and, let me clarify, a much less significant achievement) and that it is a long way from [meriting] a turing award. i don't see how anyone could possibly disagree with that, or gloss it as 'uxn is trivial', as you did, unless they don't know what those things are

i am pretty sure that if you ask devine what he thinks about this comment, you will find that he agrees with every word in it


Hi kragen, hi amatecha!

Someone sent me this thread so I could answer, and I do agree. I for one think uxn is trivial; it was directly inspired by the VM running Another World and created to address a similar need. It's not especially fast, or welcoming to non-programmers; it was a way for my partner and me to keep participating in this fantastic universe that is software development, even as our access to reliable hardware was becoming uncertain. It's meant to be approachable to people in a similar situation and with related interests, and possibly inspire people to look into assembly and stack machines -- but it has no lofty goals beyond that. We're humbled that it may have inspired a handful of developers to consider what a virtual machine designed to tackle their own needs might look like.

A lot of our work is owed to Wirth's fantastic documentation on Oberon, to the p-machine and to Pascal. Niklaus' works have influenced us in ways we are unlikely to be able to pass forward. I'm sad to hear of Niklaus' passing; there are people alive today who inspire me in similar ways and whom I look up to for inspiration, but to me, Wirth's work will remain irreplaceable. :)

-- Devine

http://wiki.xxiivv.com/site/devlog


There wasn't a single place I asserted that uxn is specifically novel or unprecedented. In fact, Devine's own presentation[0] about uxn specifically cites Wirth and Oberon, among countless other inspirations and examples. I'm saying it's awesome, accessible, simple and open.

I don't need to "develop more tolerance for differing opinions" - I have no problem with them and am completely open to them, even from people who I feel are communicating in an unfriendly, patronizing or gatekeeping manner. rollcat shared some other people and projects and you took it upon yourself to shoot down as much as possible in that comment - for what purpose? No one said De Decker is "on Wirth's level" when it comes to programming. We don't need him to write FizzBuzz, let alone any other software. I'm sorry you don't recognize the value of a publication like Low-Tech Magazine, but the rest of us can, and your need to shoot down that recognition is why I called your messages persistently negative.

Further, when I give kudos to uxn and recognize it as a cool piece of software, there's absolutely no point in coming in and saying "yeah but it's no big deal compared to ____" , as if anyone was interested in some kind of software achievement pissing contest. The sanctity and reverence for your software idols is not diluted nor detracted from by acknowledging, recognizing and celebrating newer contributors to the world of computing and software.

I have to come back and edit this and just reiterate: All I originally said was "uxn ftw" and you found it necessary to "put me in my place" about something I didn't even say/assert, and make it into some kind of competition or gatekeeping situation. Let people enjoy things. And now, minimizing this thread and never looking at it again.

[0] https://100r.co/site/weathering_software_winter.html


Yeah, these younguns have a lot to learn. :-) The notion that there's something innovative about using a small VM to port software is hilarious. BTW, here is a quite impressive and effective use of that methodology: https://ziglang.org/news/goodbye-cpp/

Dewey Schorre and Meta II, eh? Who remembers such things? Well, I do, as I was involved with an implementation of Meta V when I was on the staff of the UCLA Comp Sci dept in 1969.


nice! which direction did meta-v go meta in?

i recently reimplemented meta-ii myself with some tweaks, http://canonical.org/~kragen/sw/dev3/meta5ixrun.py

i guess i should write it up


Heh, no way do I remember the details. I just remember that we were rewriting it in 360 assembler but I left before that was completed (if it was), and that I wrote an Algol syntax checker in Meta V that was obliquely referenced at the end of RFC 57.


oh, that's too bad. i guess memories fade

it's interesting to see programming language advancement cited as a major contribution to the development of a network protocol


I think it was a very minor contribution ... they wrote a pseudo-Algol program as a form of documentation of their network protocol and were concerned about checking the grammar/syntax of the program (people actually cared about the quality of documentation back then), and I wrote a syntax checker for it in Meta V, as it was on hand because it was written at UCLA (I don't know whether it was Dewey (Val) who designed and implemented Meta V or someone else) and was used by people in the Comp Sci Dept. to design programming languages. But the dept was shifting to networking at the time (the IMP had just arrived) under the direction of Leonard Kleinrock and through the efforts of pioneers Steve Crocker, Vint Cerf, and Jon Postel (all of whom had attended Taft High School together) ... this is why the authors of that network protocol were visiting UCLA. I got involved because I worked for Crocker, under the direct management of Charley Kline, who was the fellow who made the first ever networked login.


Kartik Agaram [1]. The paper about his Mu project [2] presents a rationale for a particular kind of minimalism, and its realization.

1. https://akkartik.name

2. https://akkartik.name/akkartik-convivial-20200315.pdf


I use Suckless's terminal, st. It's great. To install it, I compile it from source. It takes a couple seconds. Take a look at their work.


do you think writing st is an achievement that merits a turing award

because i've also written a terminal emulator, and it compiles from source in 0.98 seconds https://gitlab.com/kragen/bubbleos/blob/master/yeso/admu-she... screenshot of running vi in it at https://gitlab.com/kragen/bubbleos/blob/master/yeso/admu_she...

(steps to reproduce: install dependencies; make; rm admu-shell admu-shell-fb admu-shell-wercam admu admu.o admu-shell.o admu_tv_typewriter.o; time make -j. it only takes 0.37 seconds if you only make -j admu-shell and don't build the other executables. measured on debian 12.1 on a ryzen 5 3500u)

i wrote pretty much all of it on october 12 and 13 of 02018 so forgive me if i don't think that writing a terminal emulator without scrollback is an achievement comparable to pascal and oberon etc.

not even if it were as great as st (and actually admu sucks more than st, but it can still run vi and screen)


Last I checked, st was around 8k lines. It's not bad (xterm's scrollbar and button handling is in a similar LOC range), but I'd argue it's not minimalist, so even if writing a terminal had qualified, st isn't it.

WRT the scrollback, it seems like they're going overboard in being difficult about a feature that adds little code but has much impact, while not paying close attention to their dependencies. There are things they could've done without: the copy on my system links in libz, libpng, libexpat (presumably for fontconfig, which is itself a giant steaming pile of excessive complexity), and even libbrotlicommon... I'm pretty sure I have no brotli images on my system that st has any business touching.

I used st until I replaced it with my own, and I can't fault it for much in terms of usability, though. Other than the box drawing - it's not pixel perfect (I only bring that up because I bikeshedded a pixel-perfect override for the box-drawing characters for my font renderer when I was bored a while back, so it's the one place where I can crow about mine being better than st ;) - in every other area mine still has warts to clean up.


yeah, i don't mean to bag on st here at all

it would be interesting to see what an even more minimalist and more usable terminal emulator looked like. both your work and st are constrained by having to support terminal control languages with a lousy strength to weight ratio, something oberon opted out of


Yeah, there's a ton of cruft. On one hand I find it fun to see how much of vttest I can make it through; on the other hand, supporting DECALN (the DEC service test pattern - it just fills the screen with capital E) is just a box-ticking exercise, and while that's one of the dumbest ones, there are also dozens that are hardly ever used, or that are used in rare cases but don't really need to be.

That is one area where st's "tmux copout" on scroll somewhat makes sense - it would be a reasonable option to define a clean, sufficient subset that lets you run enough stuff, and to tell people to just run anything that breaks under tmux/screen or a separate filter.

But from what I see with terminals, there's a lot of reluctance to do this, not because people believe all these codes are so important but because it seems to become a bit of a matter of pride to be as precise as possible. I admit to having succumbed to a few myself, like support for double-width and double-height characters, as well as "correct" (bright/dim rather than on/off) blink and support for the nearly unsupported rapid blink... There is also a pair of escape codes to enable and disable fraktur. This is fertile ground for procrastinating terminal developers to implement features used by one person in the 70's sometime.

At the same time I sometimes catch myself hoping some of these features will be used more... A very few I'll probably add support for because I want to use them in my text editor; e.g. differently coloured underlines and squiggly underlines are both easy to do and actually useful.

I think with a cleaner set of control codes, though, you could certainly fit quite a few of those features and still reduce the line count significantly...


I have used DECALN, which is sometimes useful for testing purposes (especially in full screen, although sometimes even if it isn't).

I will want to see support for the PC character set, for EUC character sets (including EUC-TRON), the TRON-8 character code, bitmap fonts (including non-Unicode fonts), Xaw-like scrolling and xterm-like selecting, the ability to disable receiving (not only sending) 8-bit controls (which should be used to switch between EUC-JP and EUC-TRON, as well as for other purposes), a "universal escape" sequence (recognized anywhere, even in the middle of other sequences), and some security features (I have some ideas that I don't even know are possible on Linux or on BSD, such as checking the foreground process, and being able to discard any data the terminal emulator has sent to the application program that has not yet been read, which can prevent a file or remote server from sending answerbacks that execute commands in the shell, if you add a cancellation code into the shell prompt, etc.)


It's a one-liner to fill your screen, and the fraction of people who even know DECALN exists is so small I wouldn't be surprised if the total invocations of DECALN by terminal users in recent decades is smaller than the number of indirect invocations by terminal implementers via vttest.
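
For reference, DECALN is just the three bytes ESC # 8, so invoking it is something like this minimal Python sketch (assuming your terminal actually implements it):

    import sys
    sys.stdout.write('\x1b#8')   # ESC # 8 = DECALN; fills the screen with E on supporting terminals
    sys.stdout.flush()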

It made sense as a service tool on a physical terminal, not so much now. It's not that it's a problem - it's trivial to support. It's just an example of one of hundreds of little features that are box-ticking exercises: maybe a dozen people worldwide would shrug if they noticed they weren't there, then do the same thing a slightly different way and not think about it again.

Some of the ones you list are useful some places, but many of them don't need to be in every terminal. I want more, but smaller, options built from generic reusable components.

E.g. most of my terminal does not care what your character set is, or what type of fonts you want to use, or how your scrolling works, or if you have scroll bars, or whether there is a shell, or whether there's a program being run by a terminal vs. a program embedding the terminal, or if it's running in a window, or whether it has GUI output at all or is entirely headless. This is true for most terminals. Yet these components are rarely separated and turned into reusable pieces of code.

Most of the features you list are features I don't need, won't implement, and don't care about. But what I do care about is that, with some exceptions (e.g. the Gnome VTE widget), most terminals reinvent way too much from scratch (and frankly most users of terminal widgets like VTE still reinvent way too much other stuff from scratch) and put too much effort into supporting far too many features that are rarely used, instead of being able to pick up generic components and mostly just pull them in as a starting point.

The result is a massive amount of code that represents the same features written over and over and over again and sucking time out of the bits that differentiate terminals in ways useful to users.

E.g. just now I've been starting to untangle the bits in my terminal that handle setting up the PTY and marshaling IO between the shell or other program running in it, and the bits that handle the output to the actual terminal. The goal is to make it as easy for casual scripts to open a terminal window and control it as it was on the Amiga, without having to spawn a separate script to "run in it".

On the Amiga you could e.g. do "somecommand >CON:x/y/w/h/Sometitle" to redirect somecommand's output to a separate terminal window without any foreground process, with the given dimensions and title, and assorted other flags available (e.g. "/CLOSE" would give you a close button, "/WAIT" would keep the window open after the process that opened it went away, etc.).

If you've written a terminal, then part of your terminal represents 99% of the code to provide something almost like that.

Beyond that, I'm going to rip the escape code handling out too, so code that doesn't want to do escape codes can still pretend it's talking to a terminal through a somewhat ncurses-y interface, but with the freedom to redirect the rendering or render on top of it, or whatever. That makes "upgrading" from a terminal UI to something of a GUI far easier (the Amiga, again, had a lot of this, with apps that'd mix the same system console handler used for the terminal with a few graphical flourishes; it lowered the threshold to start building more complex UIs immensely).

Then I'm going to extract out the actual rendering to the window into a separate component from the code that maintains the (text) screen buffer, so that I can write code that uses the same interface to render either to a terminal or directly to a window.
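
To make that concrete, here's a rough sketch of the kind of split I mean (Python for brevity, and with made-up names - this just shows the shape of the interfaces, not my actual code):

    from typing import Protocol

    class ScreenBuffer:
        """Maintains the text grid; knows nothing about fonts, windows, or escape codes."""
        def __init__(self, cols, rows):
            self.cols, self.rows = cols, rows
            self.cells = [[' '] * cols for _ in range(rows)]

        def put(self, x, y, ch):
            self.cells[y][x] = ch

    class Renderer(Protocol):
        """Anything that can display a ScreenBuffer: a window, an outer terminal, or nothing at all."""
        def draw(self, buf: ScreenBuffer) -> None: ...

    class HeadlessRenderer:
        def draw(self, buf): pass        # e.g. for tests or a detached session

    class AnsiRenderer:
        def draw(self, buf):             # re-emit the buffer to whatever terminal we're running in
            for row in buf.cells:
                print(''.join(row))

The point being that the state-machine side only ever talks to the Renderer interface, so swapping the output target doesn't touch the buffer logic.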

Same for e.g. font-handling - I've decided I don't care about bitmap fonts, but the actual bit of my terminal that cares about any kind of fonts is ~40 lines of code. To most of my terminal's components it doesn't matter whether you output anything anywhere, and even of the remaining code actually dealing with GUI output, 3/4 doesn't care, or know, about fonts at all (managing a window, clearing, filling, and scrolling take up more). Making it pluggable, so someone could plug in either a client-side bitmap font renderer or code to use the old X11 text drawing calls, is trivial.

Because with all of these things broken out into components it doesn't matter much if my terminal doesn't support your feature set, if "writing another terminal" doesn't mean writing the 95% of the code that implements shared functionality over and over again.

You could write literally half a dozen of custom tiny terminals like that before even approaching the line count of xterm's mouse button handling code alone.


That is good, to have separate components of the code that can then be reused. Do you have terminal emulator code with such things? Then we can see, and it can easily be modified.


i'd forgotten about bright/dim blink

i've been thinking that maybe nested tables would be better than character cells, for example, accommodating proportional fonts and multiple sizes much better


> i'd forgotten about bright/dim blink

Everyone has... I'm not sure it's a big loss, but I found it funny to "fix" and the fun of tweaking tiny things like that lies at the core of a whole lot of terminal bikeshedding...

> i've been thinking that maybe nested tables would be better than character cells, for example, accommodating proportional fonts and multiple sizes much better

A lot of simplicity could easily go away if it's not done well, but I like the idea. I want to eventually support some limited "upgrades" in that direction, but it will take some cleanup effort before that'll be a priority.


No better source yet, I think.

But it is the real account of Bertrand Meyer, creator of the Eiffel language.


Niklaus Wirth's death was also announced (by Andreas Pirklbauer) an hour ago on the Oberon mailing list:

https://lists.inf.ethz.ch/pipermail/oberon/2024/016856.html


thank you

dang, maybe we can change the url to this instead? this url has been stable for at least 14 years (http://web.archive.org/web/20070720035132/https://lists.inf....) and has a good chance of remaining stable for another 14, while the twitter url is likely to disappear this year or show different results to different people


yeah, and i hope meyer would know

but still, it's twitter, liable to vanish or block non-logged-in access at any moment


Since Twitter is suppressing the visibility of tweets that link outside their site I think it would be perfectly fair to block links to twitter, rewrite them to nitter, etc. There also ought to be gentle pressure on people who post to Twitter to move to some other site. I mean, even I've got a Bluesky invite now.


bluesky seems like the site for people who think that the problem with twitter was that the wrong billionaire gets to decide which ideas to suppress

(admittedly you could make the same criticism of hn; it certainly isn't decentralized and resilient against administrative censorship like usenet was)


Well I didn't mean to just endorse Bluesky but call it out as one of many alternatives.

I'm actually active on Mastodon but I am thinking about getting on Instagram as well because the content I post that does the best on Mastodon would fit in there.


it won't surprise you to learn that i like mastodon but haven't used it in months


Do you know about Pixelfed?


Yep. I interact w/ Pixelfed users on Mastodon all the time.


Please don't make dang do more work.

https://news.ycombinator.com/item?id=38847048


Martin Odersky, creator of the Scala language and Wirth's student, also seems to believe it: https://twitter.com/odersky/status/1742618391553171866


I've been a massive fan of the PhD dissertation of Wirth's student Michael Franz since I first read it in '94. He's now a professor at UC Irvine, where he supervised Andreas Gal's dissertation work on trace trees (what eventually became TraceMonkey)


thank you, i definitely should have mentioned franz's work, even though i didn't know he was gal's advisor

perhaps more significant than tracemonkey was luajit, which achieves much higher performance with the tracing technique


> wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand; now only hoare and moore remain

Also Alan Kay still with us.


in the neat/scruffy divide, which goes beyond ai, wirth was the ultimate neat, and kay is almost the ultimate scruffy, though wall outdoes him

alan kay is equally great, but on some axes he is the opposite extreme from wirth: an apostle of flexibility, tolerance for error, and trying things to see what works instead of planning everything out perfectly. as sicp says

> Pascal is for building pyramids—imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms—imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place.

kay is an ardent admirer of lisp, and smalltalk is even more of an organism language than lisp is


I had wanted to interview Val Schorre [1], and looked him up on a business trip because I was close. Died 2017, seems like a righteous dude.

https://www.legacy.com/us/obituaries/venturacountystar/name/...

[1] https://en.wikipedia.org/wiki/META_II


yeah, i wish i had had the pleasure of meeting him. i reimplemented meta-ii 3½ years ago and would recommend it to anyone who is interested in the parsing problem. it's the most powerful non-turing-complete 'programming language' i've ever used

http://www.canonical.org/~kragen/sw/dev3/meta5ixrun.py

(i mean i would recommend reimplementing it, not using my reimplementation; it takes a few hours or days)

after i wrote it, the acm made all old papers, including schorre's meta-ii paper, available gratis; they have announced that they also plan to make them open-access, but so far have not. still, this is a boon if you want to do this. the paper is quite readable and is at https://dl.acm.org/doi/10.1145/800257.808896


Ok, so I've been taking a crack at this. Can you help me understand something? On page 8, figure 5, the production for the entire program starts

    '.SYNTAX' .ID .OUT('ADR' *) ...
but I'm having trouble understanding what the ADR code is supposed to do. By my understanding, that line should instead read something like

    '.SYNTAX' .ID .OUT('CLL *') .OUT('HLT') ...
where HLT is some code that causes the machine to halt, or possibly

    '.SYNTAX' .ID .OUT('B *') ...
using the otherwise-unused unconditional branch code, and then defining R on an empty call stack as a machine halt.


i think adr is the equivalent of '.long' in gcc or 'dw' in masm, though the description of the adr pseudo-operation is not very clear. it says it 'produces the address which is assigned to the given identifier as a constant'. on a stack/belt machine or an accumulator machine, 'produces' could conceivably mean 'pushes on the stack/belt' or 'overwrites the accumulator with', but the meta ii machine doesn't have an operand stack, belt, or accumulator; it has a return stack with local variables, an input card, an output card, and a success/failure switch, so it doesn't make sense to read 'produces' as a runtime action. moreover, 'adr' is not listed in the 'machine codes' section; it's listed along with 'end' in a separate 'constant and control codes' section, which makes it sound like a pseudo-operation like '.long'. i suspect 'end' tells the assembler to exit

i agree, it would make much more sense to say

    '.syntax' .id .out('cll ' *) .out('hlt')
and thus eliminate the otherwise-unused adr, or simply to put the main production of the grammar at the beginning of the grammar, which is what i did in meta5ix. i think they do define 'r' on an empty call stack as a machine halt, btw

they do mention this startup thing a bit in the text of the paper (p. d1.3-3)

> The first thing in any META II machine program is the address of the first instruction. During the initialization for the interpreter, this address is placed into the instruction counter.

so i think the idea is that their 'binary executable format' consists of the address of the entry point, followed by all the code, and the loader looks at the first word to see where to start running the code. this sounds stupid (because why wouldn't you just start running it at the beginning?) but elf, a.out, and pe all have similar features to allow you to set the entry point to somewhere in the middle of the executable code, which means you have total freedom in how you order the object files you're linking. so even though it's maybe unnecessary complexity in this context, it's well-established practice even 60 years later, and maybe it already was at the time, i don't know
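
(a toy sketch in python of that loading convention, not from the paper; made-up contents stand in for the assembled code:)

    # word 0 of the image holds the entry-point address; the loader copies it
    # into the instruction counter instead of just starting at the top
    def start_address(image):
        return image[0]              # the word emitted by .out('adr' *)

    image = [2, 'code for ex2', 'code for program']   # entry point is word 2
    assert start_address(image) == 2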

i hope this is helpful! also i hope it's correct, but if not i hope it's at least helpful :)


It is a good paper, and I give the ACM much respect for opening up their paywall of old papers. They even stopped rate limiting downloads. I'd like to think my incessant whining about this had some effect. :) It is such a wonderful thing for curious people everywhere to be able to read these papers.

I haven't reimplemented meta-ii yet, but I will.

You might like https://old.reddit.com/r/rust/comments/18wnqqt/piccolo_stack...

And https://www.youtube.com/@T2TileProject/videos


thanks! i like lua a lot despite its flaws; that's what i wrote my own literate programming system in. lua's 'bytecode' is actually a wordcode (a compilation approach which i think wirth's euler paper was perhaps the first published example of) and quite similar in some ways to wirth's risc-1/2/3/4/5 hardware architecture family

i hope they do go to open access; these papers are too valuable to be lost to future acm management or bankruptcy


Are you familiar with the 'Leo' editor? It is the one that comes closest to what I consider to be a practically useful literate programming environment. If you haven't looked at it yet I'd love it if you could give it a spin and let me know what you make of it.

https://leo-editor.github.io/leo-editor/


i read a little about it many years ago but have never tried it. right now, for all its flaws, jupyter is the closest approximation to the literate-programming ideal i've found


Yes, Jupyter is definitely a contender for the crown, it's a very powerful environment. I've made use of a couple of very impressive notebooks (mostly around the theme of automatic music transcription) and it always gets me how seamless the shift between documentation and code is. I wish the Arduino guys would do something like that; it would make their programming environment feel less intrusive and less 'IDE'-like (which mostly just gets in the way with endless useless popups).


What are lua's flaws in your opinion? Sincere question.


there are a lot of design decisions that are pretty debatable, but the ones that seem clearly wrong to me are:

- indexing from 1 instead of 0;

- the absence of a nilness/nonexistence distinction (so misspelling a variable or .property silently gives the wrong answer instead of an exception);

- variables being global by default, rather than local by default or requiring an explicit declaration;

- printing tables by default (with %q for example) as identities rather than contents. (you could argue that this is a simplicity thing; lua 5.2 is under 15000 lines of code, which is pretty small for such a full-featured language, barely bigger than original-awk at 6200 lines of c and yacc plus 1500 lines of awk, and smaller than mawk at 16000 lines of c, 1100 lines of yacc, and 700 lines of awk. but a recursive table print function with a depth limit is about 25 lines of code.)

none of these are fatal flaws, but with the benefit of experience they all seem like clear mistakes
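
for scale, here's a rough sketch in python of the kind of depth-limited recursive printer i mean; the lua version is about the same size:

    # recurse into dicts and lists, truncating anything below a given depth
    def show(value, depth=3):
        if depth == 0:
            return '...'
        if isinstance(value, dict):
            return '{%s}' % ', '.join('%s: %s' % (show(k, depth-1), show(v, depth-1))
                                      for k, v in value.items())
        if isinstance(value, list):
            return '[%s]' % ', '.join(show(v, depth-1) for v in value)
        return repr(value)

    print(show({'a': [1, 2, {'b': {'c': {}}}]}))   # {'a': [1, 2, {...: ...}]}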


Thanks!


sure! what do you think?


I agree with all the points.

It's been a long time since I last used lua and only positive memories remain :) I used it for adding scripting in the apps I worked on and the experience was very good -- sandboxed from the start, decent performance.

Perhaps having 0-based indexes would've been bad for our users but I don't think they used arrays at all.


> wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand

Absolutely!

And equally important was his ability to convey/teach CS precisely, concisely and directly in his books/papers. None of them have any fluff or unnecessary obfuscation in them. These are the models to follow and the ideals to aspire to.

As an example see his book Systematic Programming: An Introduction.


> tex is written in pascal;

Just thought about that when Donald Knuth's Christmas lecture https://www.youtube.com/live/622iPkJfYrI led me to one of his first TeX lectures https://youtu.be/jbrMBOF61e0 : If I install TeX on my Linux machine now, is that still compiled from the original Pascal source? Is there even a maintained Pascal compiler anymore? Well, GCC (as in GNU Compiler Collection) probably has a frontend, but that still does not answer the question about maintenance.

These were just thoughts. Of course researching the answers would not be overly complicated.


> If I install TeX on my Linux machine now, is that still compiled from the original Pascal source?

If you install TeX via the usual ways (TeX Live and MiKTeX are the most common), then the build step runs a program (like web2c) to convert the Pascal source (with changes) to C, then uses a C compiler. (So the Pascal source is still used, but the Pascal "compiler" is a specialized Pascal-to-C translator.) But there is also TeX-FPC (https://ctan.org/pkg/tex-fpc), a small set of change (patch) files to make TeX compilable with the Free Pascal compiler (https://gitlab.com/freepascal.org/fpc/).

For more details see https://tex.stackexchange.com/questions/111332/how-to-compil...


> Is there even a maintained Pascal compiler anymore?

Of course

https://www.freepascal.org/


Nice. Although the "latest news" is from 2021.


Free Pascal does very infrequent releases, though it is under active development and even has a bunch of new features, both in the compiler (e.g. a wasm backend) and in the language itself. There are always multiple daily commits in the git log by several developers.


The discussions in Gitlab are still active: https://gitlab.com/freepascal.org/fpc/source/-/issues

Of course, you can always build the latest version from Git.


It's not like Pascal changes a whole lot.


> wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand; now only hoare and moore remain…

No. There is another.

https://en.m.wikipedia.org/wiki/Arthur_Whitney_%28computer_s...


Some would dispute the "built for humans to understand". Whitney's work is brilliant, but it's far less accessible than Wirth's.


The point of Whitney's array languages is to allow your solutions to be so small that they fit in your head. Key chunks should even fit on one screen. A few years ago, Whitney reportedly started building an OS using these ideas (https://aplwiki.com/wiki/KOS).


I'm aware of the idea. I'm also aware that I can read and understand a whole lot of pages of code in the amount of time it takes me to decipher a few lines of K, for example, and the less dense code sticks far better in my head.

I appreciate brevity, but I feel there's a fundamental disconnect between people who want to carefully read code symbol by symbol, who often seem to love languages like J or K, or are at least better able to fully appreciate them, and people like me who want to skim code and look at the shape of it (literally; I remember code best by its layout and often navigate code by appearance without reading it at all, and so dense dumps of symbols are a nightmare to me).

I sometimes think it reflects a difference between people who prefer maths vs languages. I'm not suggesting one is better than the other, but I do believe the former is a smaller group than the latter.

For my part I want grammar that makes for a light, casual read, not something I have to decipher. I want to be able to get a rough understanding at a glance, and gradually fill in details, not read things start to finish (yes, I'm impatient).

A favourite example of mine is the infamous J interpreter fragment, where I'd frankly be inclined to prefer a disassembly over the source code. But I also find the ability to sketch out such compact code amazing.

I think Wirth's designs very much fit in the "skimmable and recognisable by shape and structure" category. I can remember parts of several of Wirth's students' PhD theses from the 1990s by the shape of the procedures in their Oberon code to this day.

That's not to diminish Whitney's work, and I find that disconnect in how we process code endlessly fascinating, and regularly look at languages in that family because there is absolutely a lot to learn from them, but they fit very different personalities and learning styles.


> I sometimes think it reflects a difference between people who prefer maths vs languages. I'm not suggesting one is better than the other, but I do believe the former is a smaller group than the latter.

This dichotomy exists in mathematics as well. Some mathematicians prefer to flood the page with symbols. Others prefer to use English words as much as possible and sprinkle equations here and there (on their own line) between paragraphs of text.

The worst are those that love symbols and paragraphs, writing these dense walls of symbols and text intermixed. I’ve had a few professors who write like that and it’s such a chore to parse through.


i keep hoping that one day i'll understand j or k well enough that it won't take me hours to decipher a few lines of it; but today i am less optimistic about this, because earlier tonight, i had a hard time figuring out what these array-oriented lines of code did in order to explain them to someone else

    textb = 'What hath the Flying Spaghetti Monster wrought?'
    bits = (right_shift.outer(array([ord(c) for c in textb]),
                              arange(8))).ravel() & 1
and i wrote them myself three months ago, with reasonably descriptive variable names, in a language i know well, with a library i've been using in some form for over 20 years, and their output was displayed immediately below, in https://nbviewer.org/url/canonical.org/~kragen/sw/dev3/rando...

i had every advantage you could conceivably have! but i still guessed wrong at first and had to correct myself after several seconds of examination

i suspect that in j or k this would be something like (,(@textb)*.$i.8)&1 though i don't know the actual symbols. perhaps that additional brevity would have helped. but i suspect that, if anything, it would have made it worse

by contrast, i suspect that i would have not had the same trouble with this

    bits = [(ord(c) >> i) & 1 for c in textb for i in range(8)]
however, as with rpn, i suspect that j or k syntax is superior for typing when you're immediately evaluating expressions rather than writing a program to maintain later, because the amount of finger typing is so much less. but maybe i just have a hard time with point-free style? or maybe, like you say, it's different types of people. or maybe i just haven't spent nearly enough time writing array code during those years


The k solution is ,/+|(8#2)\textb

k doesn't have a right shift operator, but you don't need that, you can use the base encoding operator instead

Personally I think this is clearer than both the array-ish python and the list comp.

https://ngn.codeberg.page/k/#eJwrSa0oSbJSCs9ILFEA4gyFkoxUBbe...


I'm sitting here with the K reference manual, and I still can't decode this.

I'm wildly guessing that your approach somehow ends doing something closer to this:

    textb.bytes.flat_map{_1.digits(2)}
Which, I must admit, took me embarrassingly long to think of.


If you go to the link and press 'help' you'll see some docs for ngn/k

The relevant line is

    I\ encode        24 60 60\3723 -> 1 2 3        2\13 -> 1 1 0 1

So (8#2)\x encodes x in binary, with 8 positions. And because k is an array language, you don't need to do any mapping, it's automatic.

,/+|x is concat(transpose(reverse(x))) (,/ literally being 'reduce with concat')
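
Roughly, in numpy terms (my reading of the k, so treat it as a sketch rather than a spec):

    import numpy as np

    textb = 'What hath the Flying Spaghetti Monster wrought?'
    codes = np.array([ord(c) for c in textb])

    # (8#2)\codes : eight rows of base-2 digits, most significant digit first
    rows = np.array([(codes >> (7 - i)) & 1 for i in range(8)])

    # | reverses the rows (least significant first), + transposes, ,/ concatenates,
    # which, if I've read it right, matches the bit order of your original
    bits = rows[::-1].T.ravel()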


> If you go to the link and press 'help' you'll see some docs for ngn/k

Almost as impenetrable as the code unless you already know the language. But that's ok - I guess that's the main audience...

E.g. trying to figure out what "\" means, in that help is only easier now because you gave me the whole line, as there are 64 occurrences of "\" in that doc and I wouldn't have known what pattern to search for to limit it...

It's back to the philosophical disconnect of expecting people to read start to finish/know far more detail inside out rather than skimming and relying on easy keyword lookups... (yes, we're lazy)

> 'reduce with concat'

So "flatten" in Ruby-speak, I take it (though "flatten" without an argument in Ruby will do this recursively, so I guess probably flatten(1) would be a more direct match).

> you don't need to do any mapping, it's automatic.

After these pointers (thanks!), here's - mostly for my own learning - what I ended up with. It's not an attempt to get closer to the line noise (we could do that with a horrific level of operator overloading that'd break most of the standard library, though we can't match k precisely). Please don't feel obliged to go through this unless you're morbidly curious; I just had to, but I'm sure you'd suffer going through my attempt at figuring out what the hell k is doing...:

    textb="What hath the Flying Spaghetti Monster wrought?"

    # Firstly, I finally realised after a bunch of testing that 1) "(8#2)" does something like this.
    # That is, the result of 8#2 is (2 2 2 2 2 2 2 2), which was totally not what I expected.
    def reshape(len, items) = Array.new(len, items)

    class Integer
      # For the special case of (x#y) where x is a positive integer, which is frankly the only one
      # I've looked at, we can do:
      # So now 4.reshape(2) returns [2 2 2 2] just like (4#2) in ngn/k
      def reshape(items) = Array(items)*self

      # Now we can do something somewhat like what I think "encode" is
      # actually doing - this can be golfed down, but anyway:
      # With this, "a".ord.encode(8.reshape(2)) returns [0,1,1,0,0,0,0,1],
      # equivalent to (8#2)\ "a" in ngn\k
      def encode(shape)
        rem = self
        Array(shape).reverse.map do |v|
          val = rem % v
          rem = rem / v
          val
        end.reverse
      end
    end

    # Now we can break Array too.      
    class Array
      # First a minor concession to how Ruby methods even on the Array
      # class sees the focal point as the Array rather than the elements.
      # E.g. `self` in #map is the Array. If the focus is to be on applying the
      # same operation to each element, then it might be more convenient
      # if `self` was the element. With this, we can do ary.amap{reverse}
      # instead of ary.map{|e| e.reverse} or ary.map{ _1.reverse}. 
      # To get closer to k, we'd have needed a postfix operator that we could
      # override to take a block, but unfortunately there are no overridable 
      # postfix operators in Ruby. E.g. we can hackily make
      # ary.>>(some_dummy_value) {a block} work, but not even
      # ary >> (some_dummy_value) { a block} and certainly not
      # ary >> { a block }
      #
      def amap(&block) = map { _1.instance_eval(&block) }

      # If we could do a "nice" operator based map, we'd just have left it
      # at that. But to smooth over the lack of one, we can forward some
      # methods to amap:
      def encode(...) = amap{encode(...)}
      # ... with the caveat that I realised afterwards that this is almost certainly
      # horribly wrong, in that I think the k "encode" applies each step of the
      # above to each element of the array and returns a list of *columns*.
      # I haven't tried to replicate that, as it breaks my mind to think about 
      # operating on it that way. That is, [65,70].encode(2.reshape(10))
      # really ought to return [[6,7],[5,0]] to match the k, but it returns
      # [[6,5],[7,0]]. Maybe the k result will make more sense to me if I
      # take a look at how encode is implemented...

      def mreverse = amap{reverse}
    end

    # Now we can finally get back to the original, with the caveat that due to
    # the encode() difference, the "mreverse.flatten(1)" step is in actuality
    # working quite differently, in that for starters it's not transposing the arrays.
    #
    p textb.bytes.encode(8.reshape(2)).mreverse.flatten(1)

    # So to sum up:
    #
    # textb            ->   textb.bytes since strings and byte arrays are distinct in Ruby
    # (8#2)           ->   8.reshape(2)
    # x\y               ->   y.encode(x) ... but transposed.
    # |x                ->   x.mreverse
    # ,/+x             ->   x.flatten(1)   .. but really should be x.transpose.flatten(1)
    #
    # Of course with a hell of a lot of type combinations and other cases the k
    # verbs supports that I haven't tried to copy.


i can't decode it either but i think you're right. note that this probably gives the bits in big-endian order rather than little-endian order like my original, but for my purposes in that notebook either one would be fine as long as i'm consistent encoding and decoding


I got a few steps further after lots of experimentation with ngn/k and the extra hints further downthread:

https://gist.github.com/vidarh/3cd1e200458758f3d58c88add0581...

The big caveat being that it clicked too late that 1) "encode" is not "change base and format", but "for each element in this array, apply the modulo to the entire other array, and pass the quotient forward", and 2) encode returns a list of columns of the remainders rather than rows (the output format really does not make this clear...).

So you can turn a list of seconds into hour, minute, seconds with e.g.: (24 60 60)\(86399 0 60), but what you get out is [hour, minute, second] where hour, minute, second each are arrays.

If you want them in the kind of format that doesn't break the minds of non-array-thinking people like us because the order actually matches the input, you'd then transpose them by prepending "+", because why not overload unary plus to change the structure of the data?

   +(24 60 60)\(86399 0 60)
Returns (23 59 59 0 0 0 0 1 0)

Or [[23,59,59], [0,0,0], [0,1,0]] in a saner output format that makes it clear to casuals like me which structure is contained in what.

Now, if you then want to also flatten them, you prepend ",/"

I feel much better now. Until the next time I spend hours figuring out a single line of k.


It would do if the bits weren't reversed, which is done with |


thank you for the correction! and the explanation


thank you very much! which k implementation does this page use?

oh, apparently a new one called ngn/k: https://codeberg.org/ngn/k


As much as it's a struggle to read, you've got to love that the people implementing this are dedicated enough to write their C as if it was K.


I think the K would likely be both simpler and harder than your first example by reading very straightforwardly in a single direction but with operators reading like line noise. In your case, my Numpy is rusty, but I think this is the Ruby equivalent of what you were doing?

    textb = 'What hath the Flying Spaghetti Monster wrought?'

    p textb.bytes.product((0...8).to_a).map{_1>>_2}.map{_1 & 1}
Or with some abominable monkey patching:

    class Array
      def outer(r) = product(r.to_a)
      def right_shift = map{_1>>_2}
    end

    p textb.bytes.outer(0...8).right_shift.map{_1 & 1}
I think this latter is likely to be a closer match to what you'd expect in an array language in terms of being able to read in a single direction and having a richer set of operations. We could take it one step further and break the built in Array#&:

    class Array
      def &(r) = map{_1 & r}
    end

    p textb.bytes.outer(0...8).right_shift & 1
Which is to say that I don't think the operator-style line-noise nature of K is what gives it its power. Rather that it has a standard library that is fashioned around this specific set of array operations. With Ruby at least, I think you can bend it towards the same Array nature-ish. E.g. a step up from the above that at least contains the operator overloading and instead coerces into a custom class:

    textb = 'What hath the Flying Spaghetti Monster wrought?'
    
    class Object
      def k = KArray[*self.to_a]
    end
    
    class String
      def k = bytes.k
    end

    class KArray < Array
      def outer(r) = product(r.to_a).k
      def right_shift = map{_1>>_2}.k
      def &(r) = map{_1 & r}.k
    end

    p textb.k.outer(0...8).right_shift & 1
With some care, I think you could probably replicate a fair amount of K's "verbs" and "adverbs" (I so hate their naming) in a way that'd still be very concise but not line-noise concise.


that all seems correct; the issue i had was not that python is less flexible than ruby (though it is!) but that it required a lot of mental effort to map back from the set of point-free array operations to my original intent. this makes me think that my trouble with j and k is not the syntax at all. but conceivably if i study the apl idiom list or something i could get better at that kind of thinking?


I think you could twist Python into getting something similarly concise one way or other ;) It might not be the Python way, though. I agree it often is painful to map. I think in particular the issue for me is visualizing the effects once you're working with a multi-dimensional set of arrays. E.g. I know what outer/product does logically, but I have to think through the effects in a way I don't need to do with a straightforward linear map(). I think I'd have been more likely to have ended up with something like this if I started from scratch even if it's not as elegant.

    p textb.bytes.map{|b| (0...8).map{|i| (b>>i) & 1} }.flatten
EDIT: This is kind of embarrassing, but we can of course do just this:

    textb.bytes.flat_map{_1.digits(2)}
But I think the general discussion still applies, and it's quite interesting how many twists and turns it took to arrive at that


well, simplicity anyway, arguably (like moore) to an even higher degree than wirth


As far as I know, Henry Baker is still with us. I had a dream where I interviewed Wirth for like 20 hrs so we could clone him with an LLM. We need to grab as many video interviews from folks as possible.


henry baker has made many great contributions, but last time i talked to him, he was waiting for somebody to start paying him again in order to do any more research

but i'm sure he'd agree his achievements are not in the same league as wirth's


I wasn't trying to compare them in any way other than that Henry Baker is still with us.


cool, sorry if i came across as confrontational


It's OK on Hacker News to dis a reputable news source now?



