Hacker News

This is a wildly editorialized and misleading title. I clicked through just to see how it was going to rationalize the fact that people demonstrably have been building scalable concurrent Go software, with goroutines, at truly huge scale. But of course, the paper says nothing of the sort; it makes an aside about how an earlier design of the Go runtime was less scalable than the current one, and that's it.

This is a textbook example of why people shouldn't editorialize titles.

The right title here is "Fibers Under A Microscope".




> But of course, the paper says nothing of the sort; it makes an aside about how an earlier design of the Go runtime was less scalable than the current one, and that's it.

The paper makes two references to Go: one discussing split stacks, and one about the FFI (specifically, cgo). Cgo absolutely still has large overhead. That was true when the paper was written, and it's true today. It's inherent to the M:N small-stack design: any FFI call that needs a big stack requires switching from the small stack to a big one. You can't get that overhead down to zero; it's a fundamental tradeoff of the design.

Go's solution here is to try to minimize the amount of FFI usage. As the paper points out, that may work for Go, but will absolutely not work for many other use cases.


A weird hidden benefit is that Go ends up having a lot of fundamental libraries re-implemented in Go itself. And as a user, it can be convenient to have all your batteries in Go. Assuming someone else does the hard work, obviously.


Given the amount of subtle bugs that this has caused and continues to cause in Go, I don’t really consider this a benefit.


I just changed it and then came here to see this comment.

Submitted title was "Microsoft: fibers/goroutines inappropriate for scalable concurrent software[pdf]".

Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize." This is in the site guidelines (https://news.ycombinator.com/newsguidelines.html).


Sorry, I agree it's not a great title. I was worried the paper's title was misleading (it just sounds like it's about textiles), so I chose a representative statement from the author's abstract + conclusion.

The link comes via https://devblogs.microsoft.com/oldnewthing/20191011-00/?p=10... which summarized the PDF as """a fantastic write-up of the history of fibers and why they suck. Of particular note is that nearly all of the original proponents of fibers subsequently abandoned them [...] fibers are basically dead""".

By restricting itself to the TIOBE top 10, the paper also misses a discussion of BEAM, which successfully offers M:N threading.


Oh, in that case let's just switch the URL to that blog post and let it make its point directly.

Changed from http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p136... above.


I had to scroll through all the comments to find this and understand why the comments don't match the article. I think you should have left it; now it's very confusing.


That's why I posted https://news.ycombinator.com/item?id=21230286 and pinned it to the top.

Experience has shown that switching to a better URL is generally better for discussion, though there can be a lag before the thread catches up.


I am far from being an Erlang expert, but isn't the Erlang concurrency model also M:N threading?

If so, it is also a successful use of fibers/green threading in a highly concurrent environment.


From what I understand, Erlang has a naive memory model: processes can't share memory efficiently, whatever they do.

It's made to handle thousands of 1-to-1 calls that each fit on one core, not one 1000-to-1000 call that needs all cores at the same time.

Parallelism that isn't forced into an "intra-CPU, inter-core" scope has never been hard: just spin up more machines.

Put another way: "if your parallel tasks don't share memory intensely enough to run into cache-miss problems, couldn't you just as well use separate computers?"

If the answer is yes then fibers and coroutines are meaningless.

This is the problem of this decade: if we can't solve the bottleneck of cache misses on shared memory (running one task in parallel across cores efficiently), we have no reason to scale the number of cores at all.

And if that's true, we have hit peak Moore's law for transistor computers: in performance, years ago, and in energy efficiency, probably around 8nm.

Just to illustrate the problem one last time: Naughty Dog converted its engine to fibers to hit 60 frames per second, but controller-input-to-frame latency increased by several frames (because the multiple cores cooperating on each frame meant frames had to be pushed back, since memory is slow), so the benefit was actually reduced.

You get lots of smooth bells and whistles (that look good when you don't play the game) but the only meaningful metric (how fast the character acts on your reactions, which is the definition of gameplay) actually degrades.

That said I'm looking forward to TLoU 2 as much as the next guy, but I'm not expecting any technical improvements except visuals.



