
It hasn't won. Threads are alive and well, and I rather expect async has already peaked and is back on track to being a niche that stays with us forever, but a niche nevertheless.

Your opinion vs. my opinion, obviously. But user reports of the Rust experience are nowhere close to unanimous praise, and I still say it's a mistake to sit down with an empty Rust program and immediately reach for "async" without considering whether you actually need it. Even in the network world, juggling hundreds of thousands of simultaneous tasks is the exception rather than the rule.

Moreover, cooperative multitasking was given up at the OS level for good and sufficient reasons, and I see no evidence that the current thrust in that direction has solved them. As you scale up, the odds of something jamming your cooperative loop monotonically increase. At best we've increased the scaling factors, and even that may just be an effect of faster computers rather than better solutions.




in the 02000s there was a lot of interest in software transactional memory as a programming interface that gives you the latency and throughput of preemptive multithreading with locks but the convenient programming interface of cooperative multitasking; in haskell it's still supported and performs well, but it has been largely abandoned in contexts like c#, because it kind of wants to own the whole world. it's difficult to add incrementally to a threads-and-locks program

i suspect that this will end up being the paradigm that wins out, even though it isn't popular today
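
for concreteness, this is roughly what that programming interface looks like in haskell with the stm package (a toy sketch i just made up for illustration, not code from any real system):

    import Control.Concurrent.STM

    type Account = TVar Int

    -- composable: no lock ordering to reason about; the runtime retries
    -- the whole transaction if a conflicting commit wins the race
    transfer :: Account -> Account -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)       -- blocks (retries) until funds suffice
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO b >>= print          -- 40

note that nothing outside atomically can touch the TVars, which is exactly the "owns the whole world" property: shared state has to live inside the transactional types.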


I was considering making a startup out of my simple C++ STM[0], but the fact that, as you point out, the transactional paradigm is viral and can't be added incrementally to existing lock-based programs was enough to dissuade me.

[0] https://senderista.github.io/atomik-website/


nice! when was this? what systems did you build in it? what implementation did you use? i've been trying to understand fraser's work so i can apply it to a small embedded system, where existing lock-based programs aren't a consideration


It grew out of an in-memory MVCC DB I was building at my previous job. After the company folded I worked on it on my own time for a couple months, implementing some perf ideas I had never had time to work on, and when update transactions were <1us latency I realized it was fast enough to be an STM. I haven't finished implementing the STM API described on the site, though, so it's not available for download at this point. I'm not sure when I'll have time to work on it again, since I ran out of savings and am going back to full-time employment. Hopefully I'll have enough savings in a year or two that I can take some time off again to work on it.


that's exciting! i just learned about hitchhiker trees (and fractal tree indexes, blsm trees, buffer trees, etc.) this weekend, and i'm really excited about the possibility of using them for mvcc. i have no idea how i didn't find out about them 15 years ago!


Then you may be interested in this paper which shows how to turn any purely functional data structure into an MVCC database.

https://www.cs.cmu.edu/~yihans/papers/concurrency.pdf
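
The rough idea, as I understand it (this is just my own toy Haskell sketch, not the paper's construction, and real MVCC also needs writer conflict handling, which this skips): keep the data purely functional and publish each committed version by swapping a single root pointer, so readers get cheap immutable snapshots that later writes never disturb.

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as Map

    -- the "database" is a mutable pointer to an immutable map;
    -- every committed version remains a usable snapshot
    type DB k v = TVar (Map.Map k v)

    snapshot :: DB k v -> IO (Map.Map k v)
    snapshot = readTVarIO

    commit :: DB k v -> (Map.Map k v -> Map.Map k v) -> IO ()
    commit db f = atomically (modifyTVar' db f)

    main :: IO ()
    main = do
      db  <- newTVarIO Map.empty
      commit db (Map.insert "x" (1 :: Int))
      old <- snapshot db                -- old version stays readable
      commit db (Map.insert "x" 2)
      print (Map.lookup "x" old)        -- Just 1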


thank you!


Sounds nifty. Did this take advantage of those Intel (maybe others?) STM opcodes? For a while I was stoked on CL-STMX, which did (as well as implementing a non-native version of the same interface).


No, not at all. I'm pretty familiar with the STM literature by this point, but I basically just took the DB I'd already developed and slapped an STM API on top. Given that it can do 4.8M update TPS on a single thread, it's plenty fast enough already (although scalability isn't quite there yet; I have plenty of ideas on how to fix that but no time to implement them).

Since I've given up on monetizing this project, I may as well just link to its current state (which is very rough: the STM API described on the website is only partly implemented, and there's lots of cruft from its previous life that I haven't ripped out yet). Note that this is a fork of the previous (now MIT-licensed) Gaia programming platform (https://gaia-platform.github.io/gaia-platform-docs.io/index....).

https://github.com/senderista/nextdb/tree/main/production/db...

The version of this code previously released under the Gaia programming platform is here: https://github.com/gaia-platform/GaiaPlatform/blob/main/prod.... (Note that this predates my removal of IPC from the transaction critical path, so it's about 100x slower.) A design doc from the very beginning of my work on the project that explains the client-server protocol is here (but completely outdated; IPC is no longer used for anything but session open and failure detection): https://github.com/gaia-platform/GaiaPlatform/blob/main/prod....


this is pretty exciting! successfully git cloned!


> in the 02000s there was a lot of interest

I read that as octal; so 1024 in decimal. Not a very interesting year, according to Wikipedia.

https://en.wikipedia.org/wiki/1024


> in the 02000s there was...

So sometime between "02000" and "02999"?


i meant between 02000 and 02010; is there a clearer way to express this that isn't ridiculously prolix?


Meanwhile, in JS/ECMAScript land, async/await is used everywhere and it simplifies a lot of things. I've also used the construct in Rust, where I found it difficult to get the type signatures right, but in at least one other language, async/await is quite helpful.


Await is simply syntactic sugar on top of what everybody was forced to do already for concurrency (callbacks and promises). As a programming model, threads never had a chance in the JS ecosystem because, on the surface, it has always been a single-threaded environment. There's too much code that would be impossible to port to a multithreaded world.



