
(disclosure: I work on Quarto) I 100% agree with you. It's partly why tools from the Quarto lineage (knitr, rmarkdown) work hard to make popular latex features like crossreferences work in HTML and other forward-facing formats. At the same time, if you haven't tried Typst, my opinion is that doing so is an afternoon well spent. It's an impressive system even early in its development stage. I'm hoping it finally displaces LaTeX --- and I'm a former academic having written about a hundred papers in LaTeX!


I have high hopes for Typst, but I'm very disappointed they didn't design for accessibility from the start (and nice HTML would have done the job). It's shocking how inaccessible latex's output is, and yet Typst manages to be even worse!

Quarto's html output on the other hand is generally lovely for accessible output.


Quarto dev here, happy to answer specific questions you might have!


Hello, I have a rather specific question.

I want to write a detailed tutorial (as HTML page) and a condensed version of it (as Reveal JS slides) from a single document.

I have found this suggestion[1] to specify the separate output file name for the slides in the header, and `quarto render myfile.qmd` will generate both.

Is there a way to include content (long form text, code, or images) that will only be exported in the HTML page but not in the slides (where space is more limited)?

[1] https://github.com/quarto-dev/quarto-cli/discussions/1751


Take a look at our conditional content support.

You'll want to use something like {.content-visible unless-format="revealjs"}

https://quarto.org/docs/authoring/conditional.html
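
For example, in a .qmd file (a minimal sketch; the prose inside the div is made up):

    ::: {.content-visible unless-format="revealjs"}
    This long-form discussion appears in the rendered HTML page,
    but is dropped from the revealjs slides.
    :::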


Thanks! I'll give it a try.


I recently did this by putting long form text into the talking notes of the revealjs presentation


Is there support for pluto notebooks? It doesn't seem so from searching online. If not, are there plans for supporting pluto notebooks?


Excellent question.

My understanding is that Pluto has its own execution engine outside of Jupyter, and so would require the creation of a new "engine" in our codebase. We are a pretty lean team that has other priorities for 2024, but we would very much love to see Pluto running in Quarto.

There is an open PR right now (https://github.com/quarto-dev/quarto-cli/pull/8645), from the developers of Makie, to add a Julia-native engine to Quarto, which we hope to merge soon. I don't think that will provide instant Pluto support, but it will certainly make it easier for other Julia-native folks to build on.


Thanks for working on Quarto!

I greatly enjoy using Quarto in RStudio but in VS Code I prefer Jupyter notebooks because of the GUI and excellent integration with Data Wrangler. Do you know if there is a public roadmap for your VS Code extension? It would help me decide whether I should consider transitioning from notebooks to Quarto.


Not that I'm aware of.

With that said, Quarto works very well _with_ Jupyter notebooks. You can develop in them and then use them directly as inputs to our system. This is how, for example, Jeremy Howard and Rachel Thomas from fast.ai use it (https://www.fast.ai/).
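
For example (a minimal sketch; the notebook name is a placeholder):

    quarto render notebook.ipynb --to html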


I wish there was a better way to run quarto as a script, as in, as fast as `source` in R and `include` in Julia. Current behavior:

1. Has scoping rules that make it difficult to debug.

2. Has high latency, making it frustrating for debugging.


Thanks for the feedback.

The scoping rules are by design and match .ipynb workflows in the case of multiple documents, so we're unlikely to change them.

The render latency of quarto is definitely higher than we'd love, but we have a plan and have been steadily improving it. Quarto 1.4 is generally about 20% faster than 1.3, and we have performance regression infrastructure to make sure we don't slip on it.


Thanks for the feedback! Just to re-state my case in clearer terms:

For my personal workflow (others may differ), compiling to html is only done once at the end of a session, and the latency wouldn't matter if it could execute like a script. Weave.jl^[1] has a great feature called `include_weave` which has the features I like.

But take my feedback with a grain of salt. I generally just save things in folders and compile a pdf separately with many tables and figures.

[1] https://weavejl.mpastell.com/stable/usage/#include_weave


I think this is going to be right up your alley, then:

https://quarto.org/docs/computations/render-scripts.html


Perfect! So glad this exists!


I asked on the Mac OS mailing forum, but no response.

How well does this work w/ a TeX-oriented editor? Say TeXshop?


> I asked on the Mac OS mailing forum, but no response.

I don't know what forum you're referring to. We monitor our GitHub discussions very closely, though: https://github.com/quarto-dev/quarto-cli/discussions/

> How well does this work w/ a TeX-oriented editor? Say TeXshop?

Quarto can produce .tex output from .ipynb or .qmd inputs, which can then be further edited directly in your text editor of choice (TeXshop, or even something like overleaf) should you want to.
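
For example (a sketch; the file name is a placeholder):

    quarto render mydoc.qmd --to latex

which leaves you with a .tex file to open and edit in TeXshop or wherever you like.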


My apologies, it was on the Mac OS X TeX mailing list.

My question was whether one could use a TeX-oriented editor in lieu of VS Code --- would this be a reasonable option, or do the advantages/capabilities conferred by VS Code pretty much require its use?

I'm trying to work up an environment for a largish project which I'm currently doing on Gitbook:

https://willadams.gitbook.io/design-into-3d/

and I'm just not finding any tools which are a good fit yet, and when I move, I'd like to have a bit of familiarity to begin, and TeXshop is one tool in this space which I am familiar with.


Quarto files are just markdown files, so any text editor will work. There is nice support for Neovim, JupyterLab, and RStudio too, if any of those are more familiar.

That said, the Quarto VS Code extension has some very nice features that I think would be awfully useful in a big project:

- A visual editor to allow a simpler editing experience (could be a pro or con ;-)).

- Completions for document-centric things like cross-references and bibliographies, and completions for YAML configuration options.

- Live preview of LaTeX math, Mermaid, and Graphviz diagrams.

- Syntax highlighting for markdown and embedded languages.

- A nice preview workflow.

I'm not sure if those are enough to overcome the lack of familiarity, but thought I'd highlight some of the benefits. Neovim and RStudio both have very strong support for most or all of the above. Our JupyterLab extension is more minimal, really only helping with markdown rendering.


That is _very_ persuasive. Thanks!

Downloading VS Code now.


What's the monetization strategy?


I work for Posit. We're a PBC, and developing open-source software is quite literally our mission. You can read more here: https://posit.co/about/pbc-report/ (we used to be RStudio, so that's the term you're going to find in that 2021 report)

We're for profit, but here's the relevant paragraph: "Together, RStudio’s open-source software and commercial software form a virtuous cycle: The adoption of open-source data science software at scale in organizations creates demand for RStudio’s commercial software; and the revenue from commercial software, in turn, enables deeper investment in open-source software, which benefits everyone."


Time to break out John Baez's checklist: https://archive.org/details/TheCrackpotIndex


The first author is Stuart Kauffman: https://en.wikipedia.org/wiki/Stuart_Kauffman

Not a crackpot.


Never heard of it, but it seems to give a good digest of the format. Unfortunately, it seems aimed at non-academic crackpots. It doesn't even award points for vacuously mentioning Schrödinger or Kant.


I was really turned off by the mention of Kant at first, but reading on it's relevant to the discussion. This is a real scientist applying Kant's idea to real phenomena in a useful way. Please read the paper in detail.


I had skimmed it, and the part where they make a surprise move to Kant is ... not very convincing. The way they describe it doesn't make sense. It's not as if there's any proof that a "Whole" is life or vice versa. It's just one of their assumptions.

If we take their writing as some form of evidence for it, they claim children inherit your Parts, but that's not true. They also imply that Parts cannot exist outside the Whole, which is patently false when taken literally. But in the loose sense in which they seem to use it, I could totally see a piano as a Whole: keys, strings, hammers, sound board; none of it makes sense outside a piano. Also, notable features of life aren't included or implied by the concept of a Whole.

They also call Collectively Autocatalytic Sets an "established mathematical theory", but it's a mathematical property that can be true of some domain. It doesn't prove anything. There aren't any proofs involving that property in the paper either. Later they call it a "chemical reaction system," which seems to be more to the point, but there are so many of those.

It's just another idea, and not an original one either. Wikipedia: "Autocatalytic sets constitute just one of several current theories of life." That Autocatalytic sets by itself isn't enough to explain life may be a point, but there's no reason to assume they've found the magical ingredient in Kant.


Maybe a better, more readable link: https://math.ucr.edu/home/baez/crackpot.html


that's great.

it gets steep from 29 onward. and for a reason.


> not _better enough_

Wirth was such a legend on this particular aspect. His stance on compiler optimizations is another example: only add optimization passes if they improve the compiler's self-compilation time.

Oberon also (and also deliberately) only supported cooperative multitasking.


>His stance on compiler optimizations is another example: only add optimization passes if they improve the compiler's self-compilation time.

What an elegant metric! Condensing a multivariate optimisation between compiler execution speed and compiler codebase complexity into a single self-contained meta-metric is (aptly) pleasingly simple.

I'd be interested to know how the self-build times of other compilers have changed by release (obviously pretty safe to say, generally increasing).
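
For concreteness, a toy Python sketch of the acceptance rule (the compiler binaries and source file here are hypothetical):

    import subprocess, time

    def self_compile_seconds(compiler, source):
        """Wall-clock time for a compiler binary to compile the given source."""
        start = time.perf_counter()
        subprocess.run([compiler, source], check=True)
        return time.perf_counter() - start

    # Wirth's rule: keep a new optimization pass only if the compiler,
    # rebuilt with that pass, compiles its own source faster than before.
    baseline = self_compile_seconds("./compiler-old", "compiler.src")
    candidate = self_compile_seconds("./compiler-new-pass", "compiler.src")
    accept_new_pass = candidate < baseline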


Hmm, but what if the compiler doesn't use the optimized constructs, e.g. floating point optimizations targeting numerical algorithms?


Life was different in the '80s. Oberon targeted the NS32000, which didn't have a floating point unit, let alone most of the other modern niceties that could lead to a large difference between the CPU features used by the compiler itself and the CPU features used by other programs written with it.

That said, even if the exact heuristic Wirth used is no longer tenable, there's still a lot of wisdom in the pragmatic way of thinking that inspired it.


Speaking of that, if you were ever curious how computers do floating point math, I think the first Oberon book explains it in a couple of pages. It’s very succinct and, for me, one of the clearest explanations I’ve found.


Rewrite the compiler to use an LLM for compilation. I'm only half joking! The biggest remaining technical problem is the context length, which severely limits the input size right now. Also, the required humongous model size.


Simple fix: floating-point indexes to all your tries. Or switch to base π or increment every counter by e.


That’s not a simple fix in this context. Try making it without slowing down the compiler.

You could try to game the system by combining a change that slows down compilation with one that compensates for it, but I think code reviewers of the time wouldn't accept that.


probably use a fortran compiler for that instead of oberon


His stance should be adopted by all language authors and designers, but apparently it's not. The older generation of programming language gurus like Wirth and Hoare were religiously focused on simplicity, hence they never took compilation time for granted, unlike the authors of most popular modern languages. C++, Scala, Julia, and Rust are all behemoths in terms of language-design complexity and hence have very slow compilation times. Popular modern languages like Go and D are a breath of fresh air with their lightning-fast compilation, due to the inherent simplicity of their design. This is to be expected, since Go is just a modern version of Modula and Oberon, and D was designed by a former aircraft engineer, a field where simplicity is mandatory, not optional.


You cannot add a loop skew optimization to the compiler before the compiler itself needs one. Which it never will, because it is matrix-operation code (like a loop skew optimization itself) that needs loop skewing, and the compiler contains none.

In short, the compiler is not an ideal representation of the user programs it needs to optimize.


Perhaps Wirth would say that compilers are _close enough_ to user programs to be a decent enough representation in most cases. And of course he was sensible enough to also recognize that there are special cases, like matrix operations, where it might be wirthwhile.

EDIT: typo in the last word but I'm leaving it in for obvious reasons.


Wirth ran an OS research lab. For that, the compiler likely is a fairly typical workload.

But yes, it wouldn’t work well in a general context. For example, auto-vectorization likely doesn’t speed up a compiler much at all, while adding the code to detect where it’s possible will slow it down.

So, that feature never can be added.

On the other hand, it may lead to better designs. If, instead, you add language features that make it easier for programmers to write vectorized code, that might end up serving them better: they would have to write more code, but they would also have to guess less about whether their code would end up being vectorized.


perhaps you could write the compiler using the data structures used by co-dfns (which i still don't understand) so that vectorization would speed it up, auto- or otherwise


Supported cooperative multitasking won in the end.

It just renamed itself to asynchronous programming. That's quite literally what an 'await' is.
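
A minimal Python sketch of that claim: each await is a point where a task voluntarily hands the processor back to the scheduler, exactly like a cooperative yield.

    import asyncio

    async def task(name):
        for i in range(2):
            print(name, i)
            await asyncio.sleep(0)  # cooperative yield point

    async def main():
        # the event loop interleaves the tasks only at the awaits
        await asyncio.gather(task("a"), task("b"))

    asyncio.run(main())  # prints: a 0, b 0, a 1, b 1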


It hasn't won. Threads are alive and well and I rather expect async has probably already peaked and is back on track to be a niche that stays with us forever, but a niche nevertheless.

Your opinion vs. my opinion, obviously. But the user reports of the experience in Rust are hardly even close to unanimous praise, and I still say it's a mistake to sit down with an empty Rust program and immediately reach for "async" without considering whether you actually need it. Even in the network world, juggling hundreds of thousands of simultaneous tasks is the exception rather than the rule.

Moreover, cooperative multitasking was given up at the OS level for good and sufficient reasons that I see no evidence that the current thrust in that direction has solved. As you scale up, the odds of something jamming your cooperative loop monotonically increase. At best we've increased the scaling factors, and even that just may be an effect of faster computers rather than better solutions.


in the 02000s there was a lot of interest in software transactional memory as a programming interface that gives you the latency and throughput of preemptive multithreading with locks but the convenient programming interface of cooperative multitasking; in haskell it's still supported and performs well, but it has been largely abandoned in contexts like c#, because it kind of wants to own the whole world. it's difficult to add incrementally to a threads-and-locks program

i suspect that this will end up being the paradigm that wins out, even though it isn't popular today


I was considering making a startup out of my simple C++ STM[0], but the fact that, as you point out, the transactional paradigm is viral and can't be added incrementally to existing lock-based programs was enough to dissuade me.

[0] https://senderista.github.io/atomik-website/


nice! when was this? what systems did you build in it? what implementation did you use? i've been trying to understand fraser's work so i can apply it to a small embedded system, where existing lock-based programs aren't a consideration


It grew out of an in-memory MVCC DB I was building at my previous job. After the company folded I worked on it on my own time for a couple months, implementing some perf ideas I had never had time to work on, and when update transactions were <1us latency I realized it was fast enough to be an STM. I haven't finished implementing the STM API described on the site, though, so it's not available for download at this point. I'm not sure when I'll have time to work on it again, since I ran out of savings and am going back to full-time employment. Hopefully I'll have enough savings in a year or two that I can take some time off again to work on it.


that's exciting! i just learned about hitchhiker trees (and fractal tree indexes, blsm trees, buffer trees, etc.) this weekend, and i'm really excited about the possibility of using them for mvcc. i have no idea how i didn't find out about them 15 years ago!


Then you may be interested in this paper which shows how to turn any purely functional data structure into an MVCC database.

https://www.cs.cmu.edu/~yihans/papers/concurrency.pdf


thank you!


Sounds nifty. Did this take advantage of those Intel (maybe others?) STM opcodes? For a while I was stoked on CL-STMX, which did (as well as implementing a non-native version with the same interface).


No, not at all. I'm pretty familiar with the STM literature by this point, but I basically just took the DB I'd already developed and slapped an STM API on top. Given that it can do 4.8M update TPS on a single thread, it's plenty fast enough already (although scalability isn't quite there yet; I have plenty of ideas on how to fix that but no time to implement them).

Since I've given up on monetizing this project, I may as well just link to its current state (which is very rough, the STM API described in the website is only partly implemented, and there's lots of cruft from its previous life that I haven't ripped out yet). Note that this is a fork of the previous (now MIT-licensed) Gaia programming platform (https://gaia-platform.github.io/gaia-platform-docs.io/index....).

https://github.com/senderista/nextdb/tree/main/production/db...

The version of this code previously released under the Gaia programming platform is here: https://github.com/gaia-platform/GaiaPlatform/blob/main/prod.... (Note that this predates my removal of IPC from the transaction critical path, so it's about 100x slower.) A design doc from the very beginning of my work on the project that explains the client-server protocol is here (but completely outdated; IPC is no longer used for anything but session open and failure detection): https://github.com/gaia-platform/GaiaPlatform/blob/main/prod....


this is pretty exciting! successfully git cloned!


> in the 02000s there was a lot of interest

I read that as octal; so 1024 in decimal. Not a very interesting year, according to Wikipedia.

https://en.wikipedia.org/wiki/1024


> in the 02000s there was...

So sometime between "02000" and "02999"?


i meant between 02000 and 02010; is there a clearer way to express this that isn't ridiculously prolix


Meanwhile, in JS/ECMAScript land, async/await is used everywhere and it simplifies a lot of things. I've also used the construct in Rust, where I found it difficult to get the type signatures right, but in at least one other language, async/await is quite helpful.


Await is simply syntactic sugar on top of what everybody was forced to do already (callbacks and promises) for concurrency. As a programming model, threads simply never had a chance in the JS ecosystem because on the surface it has always been a single-threaded environment. There's too much code that would be impossible to port to a multithreaded world.
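
A sketch of that sugar in Python terms (same mechanics as JS; the names are made up): the callback version and the await version express the same control flow.

    import asyncio

    # callback style: you hand over a continuation
    def fetch_cb(loop, callback):
        fut = loop.create_future()
        loop.call_later(0.1, fut.set_result, "data")
        fut.add_done_callback(lambda f: callback(f.result()))

    # await style: the same thing, written sequentially
    async def fetch():
        await asyncio.sleep(0.1)
        return "data"

    async def main():
        fetch_cb(asyncio.get_running_loop(), lambda d: print("callback got", d))
        print("await got", await fetch())
        await asyncio.sleep(0.2)  # give the callback time to fire

    asyncio.run(main())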


It has mostly won for individual programs, but very much not for larger things like operating systems and web browsers.


Mostly won for CRUD apps (yes and a few others). Your DAW, your photo editor, your NLE, your chatbot girlfriend, your game, your CAD, etc might actually want to use more than one core effectively per task. Even go had to grow up eventually.


It's moving in more and more.

A core problem is that it's now clear most apps have hundreds or thousands of little tasks going, increasingly bound by network, IO, and similar. Async gives nice semantics for implementing cooperative multitasking, without introducing nearly as many thread coherency issues as preemptive.

I can do things atomically. Yay! Code literally cooperates better. I don't have the messy semantics of a Windows 3.1 event loop. I suspect it will take over more and more into all walks of code.

Other models are better for either:

- Highly parallel compute-bound code (where SIMD/MIMD/CUDA-style models are king)

- Highly independent code, such as separate apps, where there are no issues around cooperation. Here, putting each task on a core, and then preemptive, obviously wins.

What's interesting is all three are widely used on my system. My tongue-in-cheek comment about cooperative multitasking winning was only a little bit wrong. It didn't quite win in the sense of taking over other models, but it's in widespread use now. If code needs to cooperate, async sure beats semaphores, mutexes, and all that jazz.


Async programming is not an alternative to semaphores and mutexes. It is an alternative to having more threads. The substantial drawback of async programming in most implementations is that stack traces and debuggers become almost useless; at least very hard to use productively.


Indeed; however, the experience with crashes and security exploits has proven that scaling processes, or even distributing them across several machines, scales much better than threads.


preemptively scheduled processes, not cooperatively scheduled


Ah, missed that.


In the last 15 to 20 years, asynchronous programming --- as a form of cooperative multi-tasking [1] --- did gain lots of popularity. That was mainly because of non-scalable thread implementations in most language runtimes, e.g. the JVM. At the same time, the JS ecosystem needed to have some support for concurrency. Since threads weren't even an option, the community settled first on callback hell and then on async/await. The main reason for asynchronous programming's alleged win is currently being reversed: the JVM has introduced lightweight threads that have the low runtime cost of asynchronous programming and all the niceties of thread-based concurrency.

[1]: Asynchronous programming is not the only form of cooperative programming. Usually cooperative multi-tasking systems have a special system call yield(), which gives up the processor, in addition to IO-induced context switches.


In .NET and C++, asynchronous programming is not cooperative: it hides the machinery of a state machine mapping tasks onto threads, it gets preempted, and you can write your own scheduler.


But isn't the separation of the control flow into chunks, either separated by async/await or by separation between call and callback, a form of cooperative thread yielding on top of preemptive threads? If that isn't true for .NET, then I'd be really interested to understand what else it is doing.


No, it is a state machine that generates an instance of a Task from the Task Parallel Library, and automates the Run()/Get() invocations from it.

Assuming your type isn't an Awaitable, with magic methods to influence how the compiler actually generates the state machine.


async/await has the advantage over cooperative multitasking that it has subroutines of different 'colors', so you don't accidentally introduce concurrency bugs by calling a function that can yield without knowing that it can yield

i think it's safe to say that the number of personal computers running operating systems without preemptive multitasking is now vanishingly small

as i remember it, oberon didn't support either async/await or cooperative multitasking. rather, the operating system used an event loop, like a web page before the introduction of web workers. you couldn't suspend a task; you could only schedule more work for later


And these fancy new names aren't there just for hiding the event loop? :)


Sort of and sort of not.

The key thing about 2023-era asynchronous versus 1995-era cooperative multitasking is code readability and conciseness.

Under the hood, I'm expressing the same thing, but Windows 3.1 code was not fun to write. Python / JavaScript, once you wrap your head around it, is. The new semantics are very readable, and rapidly improving too. The old ones were impossible to make readable.

You could argue that it's just syntactic sugar, but it's bloody important syntactic sugar.


I never left 1991 and I haven't seen anything that has made me consider leaving ConcurrentML except for the actor model, but that is so old the documentation is written on parchment.


> You could argue that it's just syntactic sugar, but it's bloody important syntactic sugar.

Yes, of course you could, since everything beyond, uh, paper tape, next-state table, and current pen-position (or whatever other pieces there are in a theoretical Turing machine) is basically syntactic sugar. Or, IOW, all programming languages higher than assembly are nothing but syntactic sugar. I like syntactic sugar.

(But OTOH, I'm a diabetic. Gotta watch out for that sugar.)


does this mean that lisp programmers are syntactic diabetics

i hear they're concerned about cancer of the semicolon


Exactly. The way I think about it, the "async" keyword transforms function code so that local variables are no longer bound to the stack, making it possible to pause function execution (using "await") and resume it at an arbitrary time. Performing that transformation manually is a fair amount of work and it's prone to errors, but that's what we did when we wrote cooperatively multitasked code.
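
Python generators show that transformation directly (a toy sketch): the local lives in the suspended frame rather than on the call stack, so the function can be paused and resumed.

    def task():
        x = 1                        # survives suspension in the frame
        yield "paused"               # the pause point 'await' desugars to
        yield f"resumed, x is still {x}"

    t = task()
    print(next(t))  # runs up to the first yield
    print(next(t))  # resumes exactly where it left off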


> when we wrote cooperatively multitasked code

That's my point, we still do that. And based on your phrasing we're forgetting it :)


Sure, that's a good way to look at it. Another way to look at it: because the process of transforming code for cooperative multitasking is now much cleaner and simpler, it's fine to use new words to describe what to do and how to do it.


cooperative multitasking, as i use the term, keeps you from having to transform your code. it maintains a separate stack per task, just like preemptive multitasking. so async/await isn't cooperative multitasking, though it can achieve similar goals

possibly you are using the terms in subtly different ways so it appears that we disagree when we do not


Cooperative multitasking is "The illusion of simultaneously executing code paths by having said code paths pass control to each other fast enough."

If the OS forcefully switches control it's preemptive.


that definition is different from the definition i'm using; it covers both what i'm calling 'cooperative multitasking' and things like async/await, the npm event handler model, and python/clu iterators

in the mac os 8 documentation, explaining how mac os 8 only has 'cooperative multitasking', the term is defined in the way i'm using it (https://developer.apple.com/library/archive/documentation/Ca...):

> In programming, a task is simply an independent execution path. On a computer, the system software can handle multiple tasks, which may be applications or even smaller units of execution. For example, the system may execute multiple applications, and each application may have independently executing tasks within it. Each such task has its own stack and register set.

> Multitasking may be either cooperative or preemptive. Cooperative multitasking requires that each task voluntarily give up control so that other tasks can execute. (...)

> The Mac OS 8 operating system implements cooperative multitasking between applications. The Process Manager can keep track of the actions of several applications. However, each application must voluntarily yield its processor time in order for another application to gain it. An application does so by calling WaitNextEvent, which cedes control of the processor until an event occurs that requires the application’s attention.

that is, this requirement that each task have its own stack is not just something i made up; it's been part of common usage for decades, at least in some communities. the particular relevant distinction here is that, because each task has its own stack (or equivalent in something like scheme), multitasking doesn't require restructuring your code, because calling a normal function can yield the cpu. in the specific case of macos this was necessary so that switcher/multifinder/process-manager could multitask mac apps written for previous versions of macos that didn't have multitasking

what term would you propose for what i'm calling 'cooperative multitasking', like forth and mac os 8 and windows 3.1 (https://softwareengineering.stackexchange.com/questions/3507... https://retrocomputing.stackexchange.com/questions/791/how-d...)? this terminology is not absolutely standardized, and i'd be happy to use different terminology in order to be able to communicate productively

also could you please answer my request for clarification in https://news.ycombinator.com/item?id=38861074


> it covers both what i'm calling 'cooperative multitasking' and things like async/await, the npm event handler model, and python/clu iterators

Those are implementation details. What's actually happening in all cases is my definition.

> also could you please answer my request for clarification in

Yes, your examples or the 5 million other implementations of event loops. You forgot to add gtk's for example :)

> in the mac os 8 documentation

... and I see no mention of stacks on Wikipedia:

https://en.wikipedia.org/wiki/Cooperative_multitasking

Which lumps in both the explicit event loop style and the syntactic sugar that got added later.

We could throw definitions around till kingdom come like this. And it's not the exact definition that's my problem.


thanks! but here we were discussing specifically the distinction between the approaches to concurrency that require you to explicitly structure your code around yield points, like async/await, and the kinds that don't, like preemptive multitasking and what i'm calling cooperative multitasking. this is unnecessarily difficult to discuss coherently if you insist on applying the term 'cooperative multitasking' indiscriminately to both, which i've shown above is in violation of established usage, and refusing to suggest an alternative term

i'll see if i can flesh out the wikipedia article a bit


Where did I mix preemptive and cooperative multitasking?

And why do you think that in the case of an explicit event loop you don't have to yield? You do have to, and have to sort out some way to continue on your own. Which makes the new 'syntactic sugar' approaches much easier of course. Doesn't mean the principle isn't the same and they don't deserve the same name.


you didn't and i don't


Coroutines are better than both. Particularly in reasoning about code.


which kind of coroutines do you mean and how are they better


if the implied contrast is with cooperative multitasking, it's exactly the opposite: they're there to expose the event loop in a way you can't ignore. if the implied contrast is with setTimeout(() => { ... }, 0) then yes, pretty much, although the difference is fairly small—implicit variable capture by the closure does most of the same hiding that await does


Not asking about old JavaScript vs new JavaScript. Asking about explicit event loop vs hidden event loop with fancy names like timeout, async, await...


do you mean the kind of explicit loop where you write

    for (;;) {
        int r = GetMessage(&msg, NULL, 0, 0);
        if (!r) break;
        if (r == -1) croak();
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
or, in yeso,

      for (;;) {
        yw_wait(w, 0);
        for (yw_event *ev; (ev = yw_get_event(w));) handle_event(ev);
        redraw(w);
      }
async/await doesn't always hide the event loop in that sense; python asyncio, for example, has a lot of ways to invoke the event loop or parts of it explicitly, which is often necessary for integration with software not written with asyncio in mind. i used to maintain an asyncio cubesat csp protocol stack where we had to do this

to some extent, though, this vitiates the concurrency guarantees you can otherwise get out of async/await. software maintainability comes from knowing that certain things are impossible, and pure async/await can make concurrency guarantees which disappear when a non-async function can invoke the event loop in this way. so i would argue that it goes further than just hiding the event loop. it's like saying that garbage collection is about hiding memory addresses: sort of true, but false in an important sense
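
for example, ordinary synchronous code driving the loop explicitly, rather than going through asyncio.run() (a minimal sketch):

    import asyncio

    async def step():
        await asyncio.sleep(0)
        return 42

    # a non-async caller invoking the event loop directly
    loop = asyncio.new_event_loop()
    try:
        print(loop.run_until_complete(step()))
    finally:
        loop.close()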


What worries me is we may have a whole generation who doesn't know about the code you posted above and thinks it's magic or, worse, real multiprocessing.


okay but is that what you meant by 'hiding the event loop' or did you mean something different


(To set the tone clearly - this seems like an area where you know a _lot_ more than me, so any questions or "challenges" below should be considered as "I am probably misunderstanding this thing - if you have the time and inclination, I'd really appreciate an explanation of what I'm missing" rather than "you are wrong and I am right")

I don't know if you're intentionally using "colour" to reference https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ? Cooperative multitasking (which I'd never heard of before) seems from its Wikipedia page to be primarily concerned with Operating System-level operations, whereas that article deals with programming language-level design. Or perhaps they are not distinct from one another in your perspective?

I ask because I've found `async/await` to just be an irritating overhead; a hoop you need to jump through in order to achieve what you clearly wanted to do all along. You write (pseudocode) `var foo = myFunction()`, and (depending on your language of choice) you either get a compilation or a runtime error reminding you that what you really meant was `var foo = await myFunction()`. By contrast, a design where every function is synchronous (which, I'd guess, more closely matches most people intuition) can implement async behaviour when (rarely) desired by explicitly passing function invocations to an Executor (e.g. https://www.digitalocean.com/community/tutorials/how-to-use-...). I'd be curious to hear what advantages I'm missing out on! Is it that async behaviour is desired more-often in other problem areas I don't work in, or that there's some efficiency provided by async/await that Executors cannot provide, or something else?


> I ask because I've found `async/await` to just be an irritating overhead

Then what you want are coroutines[1], which are strictly more flexible than async/await. Languages like Lua and Squirrel have coroutines. I and plenty of other people think it's tragic that Python and JavaScript added async/await instead, but I assume the reason wasn't to make them easier to reason about, but rather to make them easier to implement without hacks in existing language interpreters not designed around them. Though Stackless Python is a CPython fork that adds real coroutines, also available as the greenlet module in standard CPython [2]; amazing that it works.

[1] Real coroutines, not what Python calls "coroutines with async syntax". See also nearby comment about coroutines vs coop multitasking https://news.ycombinator.com/item?id=38859828

[2] https://greenlet.readthedocs.io/en/latest/
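
For a taste of the difference, a minimal greenlet sketch (assumes `pip install greenlet`): control moves between plain, uncolored functions via explicit switch() calls.

    from greenlet import greenlet

    def ping():
        print("ping")
        gr_pong.switch()     # hand control to the other coroutine
        print("ping again")  # resumed here by pong's switch back

    def pong():
        print("pong")
        gr_ping.switch()

    gr_ping = greenlet(ping)
    gr_pong = greenlet(pong)
    gr_ping.switch()         # prints: ping, pong, ping again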


Bliss 36 and siblings had native coroutines.

We used coroutines in our interrupt-rich environment in our real-time medical application way back when. This was all in assembly language, and the coroutines vastly reduced our multithreading errors to effectively zero. This is one place where C, claimed to be close to the machine, falls down.


interesting, i didn't even realize bliss for the pdp-10 was called bliss-36

how did coroutines reduce your multithreading errors


I’m working up a blog post and will let you know when it is ready. Not at my desk just now


that sounds awesome! i hope to have a chance to read it!


well some of the things i know are true but i don't know which ones those are; i'll tell you the things i know and hopefully you can figure out what's really true

yes! i'm referencing that specific rant. except that what munificent sees as a disadvantage i see as an advantage

there's a lot of flexibility in systems design to move things between operating systems and programming languages. dan ingalls in 01981 takes an extreme position in 'design principles behind smalltalk' https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....

> An operating system is a collection of things that don't fit into a language. There shouldn't be one.

in the other direction, tymshare and key logic's operating system 'keykos' was largely designed, norm hardy said, with concepts from sigplan, the acm sig on programming languages, rather than sigsosp

sometimes irritating overhead hoops you need to jump through have the advantage of making your code easier to debug later. this is (i would argue, munificent would disagree) one of those times, and i'll explain the argument why below

in `var foo = await my_function()` usually if my_function is async that's because it can't compute foo immediately; the reasons in the examples in the tutorial you linked are making web requests (where you don't know the response code until the remote server sends it) and reading data from files (where you may have to wait on a disk or a networked fileserver). if all your functions are synchronous, you don't have threads, and you can't afford to tie up your entire program (or computer) waiting on the result, you have to do something like changing my_function to return a promise, and putting the code below the line `var foo = await my_function()` into a separate subroutine, probably a nested closure, which you pass to the promise's `then` method. this means you can't use structured control flow like statement sequencing and while loops to go through a series of such steps, the way you can with threads or async

so what if you use threads? the example you linked says to use threads! i think it's a widely accepted opinion now (though certainly not universal) that shared-mutable-memory threading is the wrong default, because race conditions in multithreaded programs with implicitly shared mutable memory are hard to detect and prevent, and also hard to debug. you need some kind of synchronization between the threads, and if you use semaphores or locks like most people do, you also get deadlocks, which are hard to prevent or statically detect but easy to debug once they happen

async/await guarantees you won't have deadlocks (because you don't have locks) and also makes race conditions much rarer and relatively easy to detect and prevent. mark s. miller, one of the main designers of recent versions of ecmascript, wrote his doctoral dissertation largely about this in 02006 http://www.erights.org/talks/thesis/index.html after several years working on an earlier programming language called e based on promises like the ones he later added to js; but i have to admit that, while i've read a lot of his previous work, i haven't read his dissertation yet

cooperative multitasking is in an in-between place; it often doesn't use locks and makes race conditions somewhat rarer and easier to detect and prevent than preemptive multitasking, because most functions you call are guaranteed not to yield control to another thread. you just have to remember which ones those are, and sometimes it changes even though your code didn't change

(in oberon, at least the versions i've read about, there was no way to yield control. you just had to finish executing and return, like in js in a web page before web workers, as i think i said upthread)

that's why i think it's better to have colored functions even though it sometimes requires annoying hoop-jumping


> async/await guarantees you won't have deadlocks

You will get them in .NET and C++, because they map to real threads being shared across tasks.

There is even a FAQ maintained by the .NET team regarding gotchas like not calling ConfigureAwait with the right thread context in some cases where it needs to be explicitly configured, like a task moving between foreground and background threads.


(it arguably needs to be updated, so that people stop writing single line 'return await' methods which waste performance for no reason (thankfully some analyzers do flag this))


thank you! i haven't tried using .net more than a tiny bit


I always knew my experience with RISC OS wouldn't go to waste!


Not in Java's, .NET's, and C++'s case, as it is mapped to tasks managed by threads, and you can even write your own scheduler if so inclined.


Also (AFAIK) not in JavaScript. An essential property of cooperative multitasking is that you can say “if you feel like it, pause me and run some other code for a while now” to the OS.

Async only allows you to say “run foo now until it has data” to the JavaScript runtime.

IMO, async/await in JavaScript is more like one-shot coroutines, not cooperative multitasking.

Having said that, the JavaScript event loop is doing cooperative multitasking (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Even...)


>Supported cooperative multitasking won in the end.

Is this the same as coroutines as in Knuth's TAOCP volume 1?

Sorry, my knowledge is weak in this area.



Thanks, will check that.


The quick answer is that coroutines are often used to implement cooperative multitasking because it is a very natural fit, but it's a more general idea than that specific implementation strategy.
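
A toy illustration of that fit, using Python generators as the coroutines (all names made up):

    from collections import deque

    def worker(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # cooperatively give up the processor

    def run(tasks):
        queue = deque(tasks)           # a round-robin run queue
        while queue:
            task = queue.popleft()
            try:
                next(task)             # run the task to its next yield
                queue.append(task)     # requeue it behind the others
            except StopIteration:
                pass                   # task finished; drop it

    run([worker("a", 2), worker("b", 2)])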


interesting, i would have said the relationship is the other way around: cooperative multitasking implies that you have separate stacks that you're switching between, and coroutines are a more general idea which includes cooperative multitasking (as in lua) and things that aren't cooperative multitasking (as in rust and python) because the program's execution state isn't divided into distinct tasks

i could just be wrong tho


Yeah thinking about it more I didn’t intend to imply a subset relationship. Coroutines are not only used to implement cooperative multitasking, for sure.


well, i mean, lua's 'coroutines' are full tasks with their own stacks, unlike, say, python's 'coroutines'. so arguably it isn't that one can be used to implement the other; it's that they're two names for the same thing

lua's coroutines aren't automatically scheduled (there isn't a built-in run queue) but explicitly resumed, which is a difference from the usual cooperative-multitasking systems; arguably on that basis you could claim that they aren't quite 'cooperative multitasking' on their own

the last time i implemented a simple round-robin scheduler for cooperative multitasking was in july, as an exercise, and it was in arm assembly language rather than lua. it was 32 machine instructions and 64 lines of code (http://canonical.org/~kragen/sw/dev3/monokokko.S), plus 14 lines of example code to run in the threads. when i went to go look at that just now i was hoping to come up with some kind of crisp statement about the relative importance or complexity of the stack-switching functionality and the run-queue maintenance facility, but in fact there isn't a clear separation between them, and that version of the code creates all the tasks at assembly time instead of runtime. a more flexible version with start, spawn, yield, and exit calls, which respects the eabi so you can write your task code in c (http://canonical.org/~kragen/sw/dev3/einkornix.h et seq.), is 53 lines of assembly and 34 machine instructions, but similarly has no real separation of the two concerns


> arguably on that basis you could claim that they aren't quite 'cooperative multitasking' on their own

Right, I think this is where I am coming from. Generators, for example, can be implemented via coroutines, but I would not call a generator "cooperative multitasking."

That's very cool! Yeah, I have never done this myself, but in my understanding implementations in assembly can be very small.

> when i went to go look at that just now i was hoping to come up with some kind of crisp statement about the relative importance or complexity of the stack-switching functionality and the run-queue maintenance facility, but in fact there isn't a clear separation between them

That's fair, but I don't think that's the final say here, as you were building a system for cooperative multitasking explicitly, with no reason to try and separate the concerns. When a system is very simple, there's much less reason for separation.

Actually, this makes me realize why I probably have this bias for thinking of them separately: async/await in Rust. The syntax purely creates a generator, it is totally inert. You have to bring along your own executor (which contains a scheduler among other things). Separating the two cleanly was an explicit design goal.


while python-style generators aren't cooperative multitasking (by the usual definition in which cooperative multitasking maintains a separate stack for each task), they can be implemented using cooperative multitasking, which is (arguably!) what happens if you use lua coroutines to implement generators

it certainly isn't the final say! it's just an analysis of how my own code turned out, not any kind of universal lesson

the implementation in monokokko, which reserves the r10 register to always point to the currently running task, is five instructions

            .thumb_func
    yield:  push {r4-r9, r11, lr}   @ save all callee-saved regs except r10
            str sp, [r10], #4       @ save stack pointer in current task
            ldr r10, [r10]          @ load pointer to next task
            ldr sp, [r10]           @ switch to next task's stack
            pop {r4-r9, r11, pc}    @ return into yielded context there
interestingly, what you say of rust's generators is also sort of true of monokokko

> The syntax purely creates a generator, it is totally inert. You have to bring along your own executor (which contains a scheduler among other things).

the above five instructions, or arguably just ldr r10, [r10], is the executor. the in-memory task object consists of the saved stack pointer, the link to the following task, and then whatever variables you have in thread-local storage. but from a different point of view you could say that the in-memory task object consists of the saved stack pointer, a pointer to executor-specific status information (which for this executor is the following task, or conceptually the linked list of all tasks), and then other thread-local variables. i think the difference if you were to implement this same executor with rust generators is just that you probably wouldn't make the linked list of all tasks an intrusive list?


I'm gonna have to mull over "implement generators using cooperative multitasking" a bit :)

> i think the difference if you were to implement this same executor with rust generators is just that you probably wouldn't make the linked list of all tasks an intrusive list?

You still could, and IIRC tokio uses an intrusive linked list to keep track of tasks. There's no specific requirements for how you keep track of tasks, or even a standardized API for executors, which is why you'll hear people talk about why they want "to be generic over runtimes" and similar.


That's fascinating. I'd imagine there are actually two equilibria/stable states possible under this rule: a small codebase with only the most effective optimization passes, or a large codebase that incorporates pretty much any optimization pass.

A marginally useful optimization pass would not pull its weight when added to the first code base, but could in the second code base because it would optimize the run time spent on all the other marginal optimizations.

Though the compiler would start out closer to the small equilibrium in its initial version, and there might not be a way to incrementally move towards the large equilibrium from there under Wirth's rule.


Do you happen to remember where he said that? I've been looking for a citation and can't find one.

I think that some of the text in "16.1. General considerations" of "Compiler Construction" is sorta close, but it does not say this explicitly.



The author cited, Michael Franz, was one of Wirth's PhD students, so what he relates is an oral communication from Wirth that may very well never have been put in writing. It does seem entirely consistent with his overall philosophy.

Wirth also had no compunction about changing the syntax of his languages if it made the compiler simpler. Modula-2 originally allowed undeclared forward references within the same file. When his implementation moved from the original multi-pass compilers (e.g. Logitech's compiler had 5 passes: http://www.edm2.com/index.php/Logitech_Modula-2) to a single-pass compiler http://sysecol2.ethz.ch/RAMSES/MacMETH.html he simply started requiring that forward references be declared (as they used to be in Pascal).

I suspect that Wirth not being particularly considerate of the installed base of his languages, and not very cooperative about participating in standardization efforts (possibly due to burnout from his participation in the Algol 68 process), accounts for the ultimately limited commercial success of Modula-2 & Oberon, and possibly for the decline of Pascal.


In the past, this policy of Wirth's has been cited when talking about Go compiler development.

Go team member Robert Griesemer did his PhD under Mössenböck and Wirth.


Note that Oberon descendants like Active Oberon and Zonnon do have preemptive multitasking.


Another happy, years-long FastMail customer here.


Although the above comment might sound negative and harsh, it is a perfect distillation of modern research-oriented academic environments. (I was a moderately-successful professor in these environments. I woke up one day and simply couldn't do it anymore.)


I 100% agree with everything you said, I just wanted to make this point even more forcefully:

> Saying nature is mathematical is just saying that nature is consistent in the laws it follows.

It's more like "wherever nature isn't mathematical, we don't think about it as being mathematical", so saying "nature is mathematical" is very strongly tautological.

It's the same kind of situation as "why is everything linear?", or "why is everything an oscillator". The answer is more like "the things that aren't can't be easily described mathematically, and so we don't. What shakes out is mostly linear or quadratic (oscillators)".


Physics is just one (non-)choice of axioms. Math that works in the world is physics; it's all discovered. You can't invent math and expect physics to follow.

Other math is invented through the choice of axioms, and then we discover the theorems that follow from those.


MacLeod’s The Star Fraction is still unsurpassed for me in political scifi. It’s the kind of novel that can only be written by someone born in Europe, with the lived experience of wars being reflections of reflections of reflections of old conflicts, warped and folded and turned inside out. It’s a wild read.


Reading MacLeod's Fall Revolution books as a teen 20-ish years ago blew my fucking mind. I had barely half a clue about the historical and political references he was building on (and making what I assume were very clever wry jokes about) but it took me somewhere else--somewhere else entirely. Somewhere very different from the chrome-plated Jonathan Swift kind of stuff I'd got my hands on before then (modulo some outliers like Monica Hughes, who tended to be more "abandon technology, return to monke").

The first Ken MacLeod book I bought circa 1998 was The Cassini Division, solely because it had a shiny embossed cover of a robot shooting lasers, which of course had no relation to the book's contents.


(disclosure: I'm a quarto dev. But I'm also a big fan of typst)

You're right that typst is _very good_ at extensions, and likely will always be superior to quarto when it comes to that. The fundamental advantage typst has is that it's a "greenfield" project, and a very well-designed one at that, especially when compared to TeX.

> Looks like there are extensions that can be programmed, but they are more like second class citizens that you are not suppose to use normally.

We take quarto extensibility pretty seriously! "Simple" customization is available without need to program extensions, mostly through metadata configuration and classes and attributes in the document. This covers the basics like CSS, layout, document listings, etc.

For slightly more sophisticated extensions, you can create "filters" that operate directly on the document AST, either using the built-in Lua extension API or reading/writing a JSON representation (these are both built on top of Pandoc's capabilities, which quarto leverages extensively).

For reusable, packageable functionality, the extension system as it exists today is simple but certainly meant to be used "normally". It's how custom formats (the common, concrete use case is to provide different styles for particular academic journals) are defined and used.


Roughly, the sets of computational problems that people used (use?) MPI for. Things like numerical solvers for sparse matrices that are so big that you need to split them across your entire cluster. These still require a lot of node-to-node communication, and on top of it, the pattern is dependent on each problem (so easy solutions like map-reduce are effectively out). See eg https://www.open-mpi.org/, and https://courses.csail.mit.edu/18.337/2005/book/Lecture_08-Do... for the prototypical use case.
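
To make the pattern concrete, a minimal mpi4py sketch (assumes the mpi4py package and an MPI runtime; run with something like `mpiexec -n 4 python ring.py`):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # toy node-to-node communication: pass each rank's value around a ring
    dest = (rank + 1) % size
    src = (rank - 1) % size
    received = comm.sendrecv(rank, dest=dest, source=src)
    print(f"rank {rank} got {received} from rank {src}")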

