A Taste of JavaScript's New Parallel Primitives (hacks.mozilla.org)
237 points by faide on May 5, 2016 | 103 comments



You know, I'm not entirely sure how I feel about this. On the one hand: yeah, I get that having really multithreaded stuff is pretty handy, especially for certain computationally-bound tasks.

On the other hand, I quite like the single-threadedness of javascript. Promise-based systems (or async/await) already give us basically cooperative multitasking, letting us break up long-running (unresponsive) work without worrying about mutexes and semaphores. I understand exactly when and where my javascript code will be interrupted, and I don't need to wrap blocks in extraneous atomic-operation markers.

I've written plenty of multithreaded code, starting with old pthreads stuff and eventually moving on to Java (my own experience with threaded stuff is limited mainly to C and Java), and it can be a real pain. I guess limiting shared memory to explicitly named blocks means you don't have as much to worry about vis-a-vis non-reentrant code messing up your memory space.

That said, it is a pretty useful construct, and I see where this can benefit browser-based games dev in particular (graphics can be sped up a lot with multicore rendering, I bet).


[I'm a colleague of the OP and Mozilla/TC39 member, i.e. someone who cares a lot about the JS programming model :)]

I'm enthusiastic about SharedArrayBuffer because, unlike threads in traditional languages like C++ or Java, we have two separate sets of tools for two very separate jobs: workers and shared memory for _parallelism_, and async functions and promises for _concurrency_.

Not to put too fine a point on it, shared memory primitives are critical building blocks for unlocking some of the highest performance use cases of the Web platform, particularly for making full use of multicore and hyperthreaded hardware. There's real power the Web has so far left on the table, and it's got the capacity to unleash all sorts of new classes of applications.

At the same time, I _don't_ believe shared memory should, or in practice will, change JavaScript's model of concurrency, that is, handling simultaneous events caused by e.g. user interface actions, timers, or I/O. In fact, I'm extremely excited about where JavaScript is headed with async functions. Async functions are a sweet spot between, on the one hand, the excessively verbose and error-prone world of callbacks (or often even hand-written promise-based control flow) and, on the other hand, the fully implicit and hard-to-manage world of shared-memory threading.

The async culture of JS is strong and I don't see it being threatened by a low-level API for shared binary data. But I do see it being a primitive that the JS ecosystem can use to experiment with parallel programming models.


Yes, the thing I in particular worry about is the event dispatch system. The last thing we need there is multithreaded event dispatch, where multiple handlers fire at the same time, possibly resulting in race conditions on state managing objects.

But on closer inspection of the post, this implementation seems to be highly targeted at certain kinds of compute-bound tasks, with just the shared byte-array-based memory. It's well partitioned from the traditional UI/network event-processing system in a way that makes me optimistic about the language.


I'm curious about 2 things:

1. How is the accidental modification of random JS objects from multiple threads prevented - that is, how is the communication restricted to explicitly shared memory? Is it done by using OS processes underneath?

2. Exposing atomics greatly diminishes the effectiveness of automated race detection tools. Is there a specific rationale for not exposing an interface along the lines of Cilk instead - say, a parallel for loop and a parallel function call that can be waited for? The Mandelbrot example looks like it could be handled just fine (meaning, just as efficiently and with a bit less code) with a parallel for loop using what OpenMP calls a dynamic scheduling policy (i.e., an atomic counter hidden in its guts).

There do exist tasks which can be handled more efficiently using raw atomics than using a Cilk-like interface, but in my experience they are the exception rather than the rule; on the other hand parallelism bugs are the rule rather than the exception, and so effective automated debugging tools are a godsend.

Cilk comes with great race detection tools, and these can be developed for any system with a similar interface. The thing enabling this is that a Cilk program's task dependency graph is a fork-join graph, whereas with atomics it's a generic DAG; the number of task orderings an automated debugging tool has to try with a DAG is potentially very large, while with a fork-join graph it's always just two orderings. I wrote about it here: http://yosefk.com/blog/checkedthreads-bug-free-shared-memory... - my point isn't to plug my own Cilk knock-off that I present in that post, but to elaborate on the benefits of a Cilk-like interface relative to raw atomics.
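To make the comparison concrete, here is a rough sketch of that "atomic counter hidden in its guts" scheduling on top of the proposed API (worker-side; computeMandelbrotPixel and the message fields are hypothetical placeholders):

    // Hypothetical worker: rows are claimed one at a time via a shared counter.
    onmessage = function (e) {
      var counter = new Int32Array(e.data.counter); // one int32 slot in a SharedArrayBuffer
      var output = new Int32Array(e.data.output);   // shared result buffer
      var width = e.data.width, height = e.data.height;
      for (;;) {
        var row = Atomics.add(counter, 0, 1);       // atomically claim the next row
        if (row >= height) break;                   // no rows left
        for (var x = 0; x < width; x++)
          output[row * width + x] = computeMandelbrotPixel(x, row); // placeholder
      }
      postMessage('done');
    };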


1. You can't ever get a reference to regular objects that exist in other threads (workers). Communication with workers is limited to sending strings, copies of JSON objects, transfers of typed arrays, and references to SharedArrayBuffers.

2. I assume it was done at a low level so that multi-threaded C++ could be compiled to javascript (asm.js/WebAssembly).
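A rough sketch of those boundaries from the main thread's side (the worker script name and message fields are placeholders):

    var worker = new Worker('worker.js');                  // placeholder script

    // Structured clone: the worker receives a *copy* of this object.
    worker.postMessage({ kind: 'copy', config: { size: 1024 } });

    // Transfer: zero-copy move; ta.buffer is unusable on this side afterwards.
    var ta = new Float64Array(1024);
    worker.postMessage({ kind: 'transfer', data: ta.buffer }, [ta.buffer]);

    // Share: both sides can now read and write the same bytes.
    var sab = new SharedArrayBuffer(1024);
    worker.postMessage({ kind: 'share', data: sab });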


For (1) does this mean that everything in the global namespace barfs when called from a worker thread?

(2) sounds like it might need a larger set of primitives, though I'm not sure.


1. Web workers don't share a javascript namespace or anything with the parent page. They're like a brand new page (that happens to not have a DOM). Outside of SharedArrayBuffer, there's no shared memory.


As someone who doesn't know much about how parallelism primitives are implemented, I have to ask: why does SharedArrayBuffer need a length to be specified? From my layman's viewpoint, this seems too low-level to be used for casual everyday applications.


> On the other hand, I quite like the single-threadedness of javascript.

Douglas Crockford's strategy of taking a language, identifying a subset of it, calling it "The Good Parts", and sticking to it is a great motivation to welcome new features and let them evolve, but keep your distance from them until they're fleshed out. This has pretty much been the M.O. of Javascript and IMO has worked great.


> I understand exactly when and where my javascript code will be interrupted

That's why callbacks, promises, async/await and all that are neither multitasking nor multithreading. They are all about control, while multithreading is all about parallelism and is essentially a very low-level, specialized thing that nobody should be using unless absolutely necessary.


> multithreading is all about parallelism

This just isn't true. Why do you think people wrote multi-threaded applications back when almost all machines had just one processor and just one core? Threads give you concurrency as well, even if you don't want or need parallelism.


Of course they do. Anything can give you concurrency. But pretty much anything makes concurrency easier than threads do.

> Why do you think people wrote multi-threaded applications back when almost all machines had just one processor and just one core?

Almost none did. Popular networking servers were either preforking, forking, or asynchronous. Desktop GUIs were event-driven. Threads weren't even very usable on most systems until a decade and a half ago or so, were they?


Threads weren't very usable on most systems until around 2001? No, I don't know where you've got that idea from but it's not the case.

Java had threads in 1996. The Win32 API had threads from at least Windows 95. Windows NT had them since 1993. I don't know when Linux got threads and couldn't find anything, but I would presume it was the mid 90s at the very latest. In fact, I don't think any of these threading APIs have even changed much since the mid 90s. They weren't new ideas at the time either!

Look at this thread programming manual from 1994 which on page 3 lists five benefits of using threads at the time, only one of which is utilising the relatively rare multiprocessors. http://www4.ncsu.edu/~rhee/clas/csc495j/MultithreadedProgram...


Linux got threads in 1996 when kernel 2.0 introduced the "clone" syscall, allowing multiple processes to share address space. LinuxThreads was built on top of this to implement the pthreads api.


I was using threads on Windows NT in 1998 (on single core, single processor machines). They were perfectly reliable.


Multi-threading was far more popular on Windows than on other OSs because starting new processes was so damned expensive.

On many other systems the overhead of bringing up a new process was so much closer to that of bringing up a new thread that you only needed threads for really high performance parallel code and/or when you needed fast shared memory and/or were very memory constrained. Any time the individual tasks were fairly independent, had noticeable bottlenecks other than CPU, and worked on data larger than the process itself (say, a web server), processes were more than adequate and you didn't need to worry about certain potential concurrency issues.


Yes, this is all true. But also many threading implementations (especially Linux) back then were pretty bad. Solaris and Windows were the only places where it made sense to use them.

See also "Green Threads" in early Java implementations.


Exactly. Threading actually enables a simple asynchronous blocking programming model through locks (at the expense of introducing loads of potential locking hazards).


Blocking presumes synchronicity. Locks are by definition synchronization primitives.


Great point. I find myself reminding people about this all too often.


Well, I'm old enough to remember coding for Mac OS 8, where "multitasking" was indeed cooperative - I had to say "oh, you can interrupt me here, if you want" at different places in my code, which meant bad actors could lock the system of course. It wasn't great.

On the other hand, in the uncommon event I do have some weird javascript thing that's going to take a long time (say, parsing some ridiculously-big JSON blob to build a dashboard or something), I know I can break up my parse into promises for each phase and that I won't be locking up other UI processing as badly during that process. So: not exactly multitasking / threading as you say, but still a handy thing to think about.


I'm still totally ignorant of the new primitives in the original link, so maybe that's why I'm confused, but: are you saying that as of today, wrapping a big parsing job into a promise frees up the event loop? I really don't think that's the case, is it? JSON.parse is gonna be blocking/synchronous whenever it happens.

Can you explain a bit more of the implementation you're describing?


You can use setTimeout to "free up the event loop". Using setTimeout(fun, 0) will run fun after the event loop has been freed up IIRC. NodeJS has a function called setImmediate that does exactly that.

JSON.parse as implemented is going to be blocking. But it's possible to implement an asynchronous, non-blocking JSON parser.

See also : http://stackoverflow.com/questions/779379/why-is-settimeoutf...

Edit: requestAnimationFrame is a better alternative to setTimeout(fun, 0), as it allows the browser to update the UI.
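A minimal sketch of that pattern: process a large array in bounded slices so the event loop (and the UI) gets a turn between chunks. The handler names are hypothetical:

    function processInChunks(items, chunkSize, handleItem, done) {
      var i = 0;
      function step() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) handleItem(items[i]); // a bounded slice of work
        if (i < items.length) setTimeout(step, 0); // yield to the event loop
        else done();
      }
      step();
    }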


Not the parsing part, but the processing part. Assume I've got a big pile of data and am calculating stuff like correlations on it. If I break the process up into chunks, I can go chunk.then(nextChunk).then(afterThat) etc etc. JSON.parse still blocks, but it's the post-processing I'm talking about.
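One caveat: plain .then() chaining alone runs as microtasks, which are drained before the browser paints, so each chunk also has to defer to the event loop for this to keep the UI responsive. A sketch (the phase functions are hypothetical):

    // Wrap each phase so it starts on a fresh event-loop turn.
    function chunk(work) {
      return new Promise(function (resolve) {
        setTimeout(function () { resolve(work()); }, 0);
      });
    }

    chunk(parsePhase) // hypothetical phase functions
      .then(function (rows) { return chunk(function () { return correlate(rows); }); })
      .then(function (stats) { return chunk(function () { return render(stats); }); });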


I disagree that parallelism is inherently a low-level specialized thing. There are a lot of operations, like handling any sort of media, that are naturally parallel. If I'm parsing through an image file, I know exactly where it begins and ends, and with very few bits that are dependent on other bits, it just makes sense to be able to spread that operation through the many cores that exist in a modern machine.


Welp, the idea is portability. This is a bridge between other platforms into JS, and -- as you mentioned -- its usage is largely specialized.

Most people don't really know what typed arrays are, but they're in ES6 nevertheless.


A multi-threaded JavaScript also means this becomes more troubling:

https://github.com/nodejs/node/issues/5798

Node.js uses OpenSSL instead of the operating system's CSPRNG. The biggest argument for "WONTFIX" is "Node.js is single-threaded so OpenSSL fork-unsafety isn't a concern for us".

If JavaScript becomes multi-threaded, it's not unreasonable to expect Node.js to follow. If it does follow, expect random numbers to repeat because of OpenSSL's broken RNG.


> "Node.js is single-threaded so OpenSSL fork-unsafety isn't a concern for us"

I don't see this quote within your linked issue and, as far as I can tell, there's no discussion of multi-threading.


I use quotes differently than journalists. I use them to indicate "this is a separate sentence that expresses an idea mid-sentence" and to indicate tone shift, not as a quote for a specific person. I use a > prefix for direct quotes.

That exact string isn't from the Github issue, it's a summary of one argument dismissing some of the OpenSSL RNG's worst issues.

Here are two direct quotes if that's what you want:

> forking is not an issue for node.js

> The bucket list of fork-safety issues that would have to be addressed is so long that I think it's safe to say that node.js will never be fork-safe.

There was also off-ticket discussion on IRC where similar arguments were made.


Forking and threading are different things. Forking creates new processes and duplicates memory. It raises entirely different issues from multi-threading, which does neither. See: http://stackoverflow.com/q/2483041/331041

They discussed forking, but did not discuss multi-threading.


So for OpenSSL's RNG vs. the operating system's CSPRNG is there a difference between forking and multi-threading?


Yes. The problem with an in-process RNG and forking is that the RNG state is duplicated, so both processes get the same sequence of numbers. Multithreading just needs locking to prevent corruption because the state is shared.


Thanks for taking the time to explain.


Meh. Operating system CSPRNGs can be slow, whereas userspace CSPRNGs seeded from the OS CSPRNG can be fast, fast, fast. Explain how the OpenSSL RNG is broken, and why it's a bad idea to rely on it.


That's explained in painstaking detail in the Github issue I linked to.


Okay. I read the whole thread (ugh). The arguments for using the kernelspace CSPRNG basically boil down to this:

1. Kernelspace CSPRNGs generally don't change, are well audited, and are generally accepted to be secure.

2. Userspace CSPRNGs don't provide any additional security benefit.

3. OpenSSL is a questionable security product with its history of vulnerabilities.

So, with that said, let's look at each of them.

For the first point, I don't fully agree. The Linux kernel CSPRNG has changed from MD5 based to SHA-1 based. I have heard chatter (I don't have a source to cite this) that it should move to SHA-256 with the recent collision threats of SHA-1. There is also a separate movement to standardize it on NIST DRBG designs (CTR_DRBG, Hash_DRBG, HMAC_DRBG- https://lkml.org/lkml/2016/4/24/28). Starting with Windows Vista, Microsoft changed their CSPRNG to FIPS 186-2 or NIST SP 800-90A (depending on Windows version), which could be hash-based or AES counter based. OpenBSD changed from using Yarrow and arcfour to ChaCha20. So, no, kernelspace CSPRNGs change all the time.

For the second point, I greatly disagree. First, you need a specific RNG to compare to. It's considered "unsafe" to use MD5 as a CSPRNG, although that would require pre-image attacks on MD5, against which it still remains secure. Additionally, a userspace AES-256-CTR_DRBG is theoretically more secure than an AES-128-CTR_DRBG design. While that matters little in terms of practical use, the reality is that AES-256 has a larger security margin than AES-128, as I understand it. Same for using SHA-256-Hash_DRBG instead of SHA-1-Hash_DRBG. Userspace CSPRNGs can be more secure than kernelspace.

Finally, as far as I know, attacks on OpenSSL have been overwhelmingly CBC padding oracle attacks, in one form or another. There have been a couple RNG vulnerabilities with OpenSSL (https://www.openssl.org/news/vulnerabilities.html), but same with say the Linux RNG (https://github.com/torvalds/linux/commit/19acc77a36970958a4a...). So, I'm not sure this is a valid point.

The biggest reason why you should use a userspace CSPRNG is performance. System RNGs generally suck. The Linux kernel can't get much faster than about 15-20 MiBps. Similarly with Mac OS X and FreeBSD. OpenBSD can get about 80 MiBps (on testing with similar hardware), but that's just painful for a single host trying to serve up HTTPS websites, when the HDD (nevermind SSDs) can read data off at 100 MiBps without much problem. The kernelspace CSPRNG can't even keep up with disk IO.

Userspace CSPRNGs can get into 200-300 MiBps without much problem, and with AES-NI (provided that you're using AES-128-CTR_DRBG), 2 GiBps (https://pthree.org/2016/03/08/linux-kernel-csprng-performanc...).

But, I do agree with one very serious concern on using userspace RNGs in general: they can introduce bugs and vulnerabilities that don't exist with the system CSPRNG. Expecting a developer to get this right, especially one who is not familiar with cryptographic pitfalls, can be a massive challenge. But this isn't the case with the OpenSSL RNG.

So, I guess I don't see the point to move the node.js CSPRNG dependency from OpenSSL to kernelspace.


> Microsoft changed their CSPRNG to FIPS 186-2 or NIST SP 800-90A

It's changed once, in Vista SP1. Since then it's only used AES256 in CTR mode as a DRNG as specified in NIST 800-90. So I'm not sure it's fair to say it changed that much. Linux's CSPRNG has also not seen much change other than to make it more resilient in certain conditions (there was some paper on it IIRC) and to add hardware RNG support (e.g. rdrand).

> 3. OpenSSL is a questionable security product with its history of vulnerabilities.

I don't think this is the (main) argument against its CSPRNG although it may be one of them. My understanding is the main argument against it is that it's overly complicated by design (e.g. entropy estimation, how it's initialized (especially on Windows)). You could also probably argue that it may be showing its age with its use of SHA1 but you could say the same for the Linux kernel as well.

If you want to look at a userspace CSPRNG done right (or what I believe to be one done right) just take a look at BoringSSL's[1]. In the case where there is a hardware RNG it will create a ChaCha20 instance, keyed with the OS's CSPRNG, and use that ChaCha20 instance to filter the rdrand output (as to not use it directly or xor it). If there isn't a HW RNG then it will just use the OS CSPRNG directly.

There's no entropy estimation, no way to seed it, and by design it's simple and fast. You're correct that the system's CSPRNG may not be fast enough; in fact the BoringSSL devs mentioned this[2], citing TLS CBC mode. This is probably more a problem on Linux than Windows due to the design of the CSPRNG (Linux's is pretty slow).

So with everything being said I would argue that it's always the correct choice to use the system CSPRNG unless it otherwise can't satisfy your needs. In which case just use BoringSSL then.

As a side note if you really need to generate A LOT of random numbers just use rdrand directly. You should be able to saturate all logical threads generating random numbers with rdrand and the DRNG (digital RNG) should still not run out of entropy.

[1] https://boringssl.googlesource.com/boringssl/+/master/crypto...

[2] https://www.imperialviolet.org/2015/10/17/boringssl.html (under the "Random number generation" section)


> If you want to look at a userspace CSPRNG done right (or what I believe to be one done right) just take a look at BoringSSL's[1].

BoringSSL just uses /dev/urandom directly. It's not a userspace CSPRNG. And as you pointed out, for GNU/Linux systems, it's slow. This is why userspace designs such as CTR_DRBG, HMAC_DRBG, and Hash_DRBG exist: so you can have a fast userspace CSPRNG with backtracking resistance.

Case in point. On my laptop:

    $ pv < /dev/urandom > /dev/null
    1.02GB 0:01:20 [13.3MB/s] [ <=> ]
    $ openssl enc -aes-128-ctr -pass pass:"sHgEOKTB8bo/52eDszkHow==" -nosalt < /dev/zero | pv > /dev/null
    2.13GiB 0:00:11 [ 198MiB/s] [ <=> ]

And on a server with AES-NI:

    $ pv < /dev/urandom > /dev/null
    2.19GiB 0:01:06 [ 20MiB/s] [ <=> ]
    $ openssl enc -aes-128-ctr -pass pass:"sHgEOKTB8bo/52eDszkHow==" -nosalt < /dev/zero | pv > /dev/null
    31.9GB 0:00:34 [ 953MB/s] [ <=> ]

I've seen other hardware with AES-NI that can go north of 2 GiBps, as I already mentioned. Although not backtracking resistant, those are fast userspace CSPRNGs that are clean in design.

I've designed userspace CSPRNGs that adhere to the NIST SP 800-90A standards. They're seeded from /dev/urandom on every call, and perform much better than relying on /dev/urandom directly. I won't say they're bug free, but if you read and follow the standard (http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A...), it's not too terribly difficult to get correct, and PHP, Perl, Python, Ruby, and other interpreted languages can outperform the kernelspace CSPRNG.


> BoringSSL just uses /dev/urandom directly.

Only if there's no hardware RNG support which I admit can happen (it's not perfect, I freely admit that). I suspect that for Google's use on their servers it's a non-issue (assuming where they use it and need the high(er) speed stuff they'll always have rdrand support). If there is rdrand support then it will only reseed from /dev/urandom after every 1MB (or 1024 calls) of generated random data (per thread).


I know very little about the subject, but when multi-core processors were first introduced, I remember that the general wisdom was that server software could benefit, thanks to its highly concurrent nature, but games and graphics would not, because they are generally single-threaded. I wonder if the general wisdom at the time was wrong, or if there has been a big shift to take advantage of multiple cores. I'm guessing the latter.


Maybe for your standard web app, but try porting Unity or Unreal Engine over without better parallelism.


I'm excited about the `SharedArrayBuffer` addition, but quite meh on `Atomics.wait()` and `Atomics.wake()`.

I think CSP's channel-based message control is a far better fit here, especially since CSP can quite naturally be modeled inside generators and thus have only local-blocking.

That means the silliness of "the main thread of a web page is not allowed to call Atomics.wait" becomes moot, because the main thread can do `yield CSP.take(..)` and not block the main UI thread, but still simply locally wait for an atomic operation to hand it data at completion.

I already have a project that implements a bridge for CSP semantics from main UI thread to other threads, including adapters for web workers, remote web socket servers, node processes, etc: https://github.com/getify/remote-csp-channel

What's exciting, for the web workers part in particular, is the ability to wire in SharedArrayBuffer so the data interchange across those boundaries is extremely cheap, while still maintaining the CSP take/put semantics for atomic-operation control.


> if we want JS applications on the web to continue to be viable alternatives to native applications on each platform

This is where I disagree with the direction Mozilla has been going for years. I don't want the web to be a desktop app replacement with HTTP as the delivery mechanism. I'm fine with rich single page web apps, but I don't understand the reason why web apps need complete feature parity with desktop apps.

Why not let the web be good at some things and native apps be good at others?


I don't know if there's a uniform Mozilla position on this, but here's mine! :) The main reason I care about the Web is because it's the world's biggest software platform that isn't owned. If someone can deliver their app to the world without submitting it for review by an app store and without paying a company a %-age of the revenue, and if they can market it through the viral power of URLs, then they have a lot more control over their own destiny. That's why I think it's important for the Web not to give up on hard but solvable problems.

But also I think there's a false dichotomy between "the Web should just be for documents" and "the Web should just be for apps." The Web is simultaneously an application platform and a platform that blows all others out of the water for delivering content. First, there's a reason why so many native apps embed WebViews -- despite its warts, CSS is the result of hundreds of person-years of tuning for deploying portable textual content.

But more importantly, you just can't beat the URL. How many more times will we convince the entirety of humanity to know how to visually parse "www.zombo.com" on a billboard or in a text message? It's easy to take the Web for granted, it's fun to snark about its warts, and there's a cottage industry of premature declarations of its death. But I personally believe that the humble little hyperlink is at the heart of the Web's power, competitive strength, and longevity. It was a century-old dream passed on from Vannevar Bush to Doug Engelbart to Xerox PARC and ultimately to TBL who made it real.


> But more importantly, you just can't beat the URL.

URLs are great, but they don't have to be limited to the web. Or rather, the thing on the other end of the URL doesn't necessarily need to be something the browser handles directly.

I'd like to see something developed that lets you do something like:

    x11://myapp.example.com
where clicking on that link in a browser launches the remote app and then renders the UI locally using X11 remoting - as opposed to trying to render the application UI in the browser.

OK, I know, go ahead and say it.. X11 sucks, X11 remoting doesn't work on WAN links, etc. To which I say:

a. Fine, let's invent something better that still avoids the need to pack every ounce of functionality in the universe into a web browser.

and

b. That doesn't jibe with my experience anyway. Just earlier this week I was playing around and decided to launch a remote X app using X forwarding over ssh, over a public Internet link. Worked like a champ. In fact, it reminded me of how fucking awesome X11 remoting really is, and makes me long for either a resurgence of interest in it, OR (see a above) the invention of a newer, better version that everybody can be happy with.

There's also a lot to be said for delivering applications using Java Web Start as well. JWS is wicked cool technology that is tragically under-utilized. IMO, anyway. :-)


That would require plugins, which are being phased out, and for good reason.

Namely, because they allow something like Adobe Flash Player - which doesn't come close to supporting all the platforms the Web runs on - to become a de facto standard, thus restricting a large portion of the Web to just the few platforms Adobe wants to support.


That's one way of looking at it. OTOH, not allowing plugins arbitrarily restricts web users to the lowest common denominator of technologies that are supported by browser vendors. Why should I care to have my technology choices dictated by browser vendors any more than I care to have them dictated by Adobe?

That said, nothing about what I'm proposing specifically requires plugins. All it would require would be for the browser vendors to work together (cough, cough, I know, cough cough) to implement a standard mechanism for doing this. Actually, it might not even take that. Browsers already have a way to set up handlers for unsupported content types and what-not, so it might be possible to build what I'm thinking about largely on top of that. Of course, it would mean that if you wanted to run an "application" UI you'd have to have a suitable platform (an X server, or something like an X server) running alongside your browser. So you maybe wouldn't be able to run OpenOffice on your smartphone. OK, personally, I can live with that. Not all devices are equivalent, and there's no reason to expect everything to work on every device.


That's true, you don't actually need a browser to do what you're talking about. In fact, I'm pretty sure X11 already does this.


> without paying a company a %-age of the revenue

It may not be a %-age of revenue, but you definitely can't host a non-trivial webapp for free either.

You could even argue that in many webapps scaling costs are proportional to revenue, which makes it awfully similar to an app store.

> But also I think there's a false dichotomy between "the Web should just be for documents" and "the Web should just be for apps."

Yeah, I don't have a clear idea on where the web should "end", but wow... web pages able to eat all my cores and have data races seems like a line to be crossed with great caution and care.


> web pages able to eat all my cores

We're already there:

    var code = "while(true){}";                        // each worker just spins forever
    var Blob = window.Blob;
    var URL = window.webkitURL || window.URL;
    var bb = new Blob([code], {type : 'text/javascript'});
    code = URL.createObjectURL(bb);                    // blob: URL serving the script
    for (var i = 0; i < 8; i += 1) new Worker(code);   // one busy-looping worker per core


> Yeah, I don't have a clear idea on where the web should "end", but wow... web pages able to eat all my cores and have data races seems like a line to be crossed with great caution and care.

We crossed that line long ago (well, not precisely with data races, but with message/event ordering races...)


That presumes that this freedom of self-marketing is based on freedom of the internet. There are many countries where the internet is restricted, and URLs are a part of the restriction filter.


For me, it's because "native apps" still fail at the first step, installation.

Whether it's a website or a web app, "installing" it is as easy as going to a URL. You can go to that same URL on your PC, phone, tablet, your friend's computer, etc. and it will run the same. It's easy to share, easy to remember, and if it takes more than 5 seconds from the time you hit enter to the time you are using it, we consider that a mistake on the creator's part. Plus the ability to discover new applications is extremely easy, and interoperability with other web applications is easy for both the developer and the user.

Compare that to desktop applications, where installation is still an "event", and you are lucky if it can be done in under a few minutes. Plus there are portability issues (oh, that doesn't have a Windows version?), it's difficult to share (try explaining how to install software to a non-technical person...), there is DRM all over the place, and they are significantly less secure.

Even most mobile applications take 30+ seconds to install on my android phone, and have all the same issues with discoverability, cross-platform issues, vendor lockin, and permissions/security issues.


Support for more advanced features like WebGL, DRM, codecs, etc vary by browser and platform. Even the protocols underlying the web - HTTP and TLS - have varying levels of support that affect app developers and portability. Not to mention sharing a link between mobile devices and desktops doesn't always work very well.

Don't get me wrong: I appreciate the portability of the web. I just worry the focus on making it more native-like will introduce more of what makes the native ecosystem so frustrating.


Regarding the more advanced features:

The one thing that the web has that native doesn't is a standards body that works to unify and standardize these features so they can eventually work across all platforms.

There have been hiccups in the past (probably the most notable one being WebSQL vs IndexedDB, but luckily that's mostly sorted out now), but for the most part it's been pretty smooth sailing.

Yeah, you definitely need to wait longer to use new features that can't be "polyfilled" or "shimmed", but the "reward" is that the entire platform is at least moving in the same direction.

While I agree that native has some pretty massive upsides, I personally feel that they should be reserved for things that need them, much like how assembly is treated when compared to higher level languages. And bringing those upsides to the web in a safe and consistent manner isn't a bad thing.



>I don't want the web to be a desktop app replacement

That ship has sailed a long time ago.

>I'm fine with rich single page web apps, but I don't understand the reason why web apps need complete feature parity with desktop apps.

Because ...?


> Because ...?

Not the poster you're replying to, but I'll share my thoughts:

1. Trying to make the browser ideal for both browsing content, and rendering rich application UIs bloats the browser.

2. Time spent trying to make the browser a poor imitation of an X server is time that could go into making the browser better at, ya know, browsing. FSM only knows, Firefox could use a LOT more developer time spent on improving performance and reducing the memory footprint. (Yeah, I know, sometimes those goals overlap. But not always, which is the point.)

3. For all the talk about how X11 remoting doesn't work over the Internet, I've done it and it worked just fine. YMMV, but it certainly can work just fine in at least some situations.

4. Trying to create a rich experience in the browser inevitably leads to conflicts that don't exist in a desktop app. For example, typically the F1 key is the "Help" key. So if I'm sitting in a web application and I hit F1, what happens? Do I get help for my application, or for the web browser? Likewise, can my app easily use the F11 key? No. And look at the UI consistency between web "apps". There's none. With desktop apps, most apps adhered (mostly) to one of a relatively small set of standards... CUA, or whatever. With web apps, the experience is all over the damn place.

I'm sure there are other good reasons, but those jump out to me.


>Trying to make the browser ideal for both browsing content, and rendering rich application UIs bloats the browser.

Maybe, maybe not - depends on how you define bloat. Modern operating systems, mobile and desktop, are magnitudes larger than their counterparts from years ago, are they bloated because they are bigger?

Having a more feature-rich browser doesn't make a browser 'bloated' - it makes it more feature-rich.

>Firefox could use a LOT more developer time spent on improving performance and reducing the memory footprint.

That's already happening, except browser performance is now focused on improving rendering engines and js engines to enable rich applications, because every modern browser is already very good at rendering 'simple', non-RIA, pages.

>For example, typically the F1 key is the "Help" key. So

Woah woah. Giving web applications capabilities that match those of native desktop applications, doesn't mean mimicking every desktop convention. The Web doesn't work with function keys, that's a convention that arose largely from platform limitations - so F1 doesn't mean 'Help' in the Web, just like F1 doesn't mean help in your CLI either. What's wrong with that?

>For all the talk about how X11 remoting doesn't work over the Internet

Anything can work - the problem is that there is no universal platform accessible to consumers to enable X11 streaming. On the other hand, almost every computing device these days provides an HTML/CSS/JS rendering engine. That's why developers want to build on top of the web-stack. So through a quirk of history that's what we ended up with - in an alternate universe, maybe the standard for rich web applications are Java Applets, or ActiveX, or X11, or SSH, or whatever. That's not the world we live in.

>With web apps, the experience is all over the damn place.

Sure, and you can go crazy, but there are conventions. For example, the way links look and behave is intuitive. I've built desktop and web applications, and there is a core 'feel' that all web-applications share because the browser takes care of so much of the behaviour and provides the base widgets and the framework. Things like the way links look, that they are underlined when you hover over them and your cursor changes, that you can right-click on a link and open in a new window/tab, how cut-and-paste and text-highlight is handled, drag-and-drop, (browser) zoom, how fullscreen works. Yes, any given page can override some of those of behaviors, but then again, so can a 'native' app.


One reason might be that native app distribution platforms are getting progressively more closed, whereas the web is getting/remaining open.


I might get downvoted for this, but I think it is an important point to really consider. The claim that the web is getting/remaining open is somewhat dubious. As JavaScript and web browsers get increasingly complex, it is harder for anybody to just start up and write a usable, compliant web browser from scratch, which in fact entrenches the status quo.

And why do we simply trust the existing browsers? Google has very specific goals to monetize you and Chrome can be leveraged to help that goal. Microsoft's historic fight with the web and now their current changing business goals are a reminder that their web browser goals can always change, and their current model seems more like Google. Apple is always Apple. And why should we blindly trust Mozilla? They depend on external funding to keep the foundation going to pay for a lot of the complex engineering that goes into Firefox. I'm not accusing them of anything wrong, but you can look up prior controversies about their funding sources and decisions and see people don't agree it is all rosy.

I'm suggesting the increasing technical complexity is not necessarily working towards the goal of an open web because it is entrenching the gatekeepers that can make the web browsers.


Distribution. It's what copyrights are about. GPL and open source licenses are all concerned with terms of distribution. App stores and walled gardens are about controlling and charging rent for distribution. It's basically all about trade.

The web for software distribution is free or very near free in a lot of cases.

Your point about browser vendors monetizing their users is valid, but that is unrelated to distribution of software that targets the browser.

I choose Mozilla; I trust them the most of all of the browser vendors and I appreciate how they continue to drive so many web standards forward.


I'm not sure that being able to write a competing browser is a requirement for the platform to be open. Just that anybody can write and deploy code to it.

(Full disclosure: I'm employed by Mozilla, but these opinions are entirely my own.)


Closed has benefits. "Curated" is a nice euphemism for closed.

It's much easier to childproof a curated app store than The WWW for example.

I'm not saying closed is better -- they're just different and that's ok. In fact I like how different they are. It means each has its own unique strengths and doesn't have to worry about trying to do it all.


IMO what you want are curated views into open platforms. Some arbiter deciding what is/isn't acceptable for the platform is very far from ideal. Consider all the great games that have been prevented from going on iOS.

Admittedly, something like this seems difficult to do for the web.


So, you're pretty negative on WebGL then?


Pretty meh. I'm sure there are some excellent uses of it that I encounter and enjoy without realizing it, but few of the advanced demos work for me (Chrome on Linux w/Intel gfx) or bring my computer to its knees rendering at very low FPS.

It feels like something that was more interested in competing with native than offering a constrained and portable approach.


Quake 3 doesn't work for you? :)

http://media.tojicode.com/q3bsp/


A low resolution static subset of a game from 1999 does seem to work in my browser... I'm not super impressed.


This is the last piece needed to bring multi-threaded code with shared state to emscripten [0]. A very good thing indeed.

[0] http://kripken.github.io/emscripten-site/docs/porting/guidel...


The saving grace of JavaScript's everything-is-async, single threaded model was that it was just slightly less difficult to reason about than multiple threads and shared state. (Though I'd say that's debatable...)

My guess is that, despite the sugar coating that JavaScript's async internals have received of late, writing stable multi-threaded code with JavaScript is going to be hard.

JavaScript now has the safety of multi-threaded code with the ease of asynchronicity!


SharedArrayBuffer only allows plain typed (byte) arrays to be shared at least. Arbitrary javascript objects can't be shared, so there's a very clear division about what can get affected by other threads and what can't. You don't have to worry about whether existing libraries are thread-safe, etc.


Not only that, but if you don't need "Shared" array buffers (meaning more than one thread using them at once), you can use "Transferable" [1] ArrayBuffers.

It's just a zero-copy transfer to a worker (or from a worker) but it makes sure the "sender" doesn't have access to the memory any more.

It's incredibly easy to use, avoids all the common issues and pitfalls with shared memory, and being zero-copy it's stupidly fast.

Obviously it's not a replacement for true shared memory, but i've used it in the past to do some image processing in the browser (broke the image into chunks, and transferred each chunk to a worker to process, then return and stitch them all back together).

[1]https://developer.mozilla.org/en-US/docs/Web/API/Transferabl...
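The handoff itself is a one-liner; a minimal sketch, assuming the worker already exists:

    var buf = new ArrayBuffer(16 * 1024 * 1024);  // e.g. 16 MB of pixel data
    worker.postMessage({ pixels: buf }, [buf]);   // second argument: the transfer list
    console.log(buf.byteLength);                  // 0 - the sender's buffer is now neutered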


That's pretty rad - the vast majority of what I would see myself wanting to do in a multithreaded javascript world would be limited to something with transferrable arraybuffers. Like: "hey, worker, go do some work and lmk when you're done". Moving big chunks of memory around in ways that atomically only ever have one allowed accessing thread would be plenty.


I thought this too, but the fact that you can't send only a chunk of an array buffer is a huge limitation. It basically limits you to using a background thread to do something, instead of dividing the work among many (well, you can, but you lose most of the benefit of transferables).


Well you can still divide the work among many workers, you just need to incur the cost of copying/splitting the buffer before you start sending them off.

In most cases you know how many workers you want at the start of the program, so that cost of splitting/merging only happens once (and you can do that splitting/merging in a worker to avoid hanging the main thread) and then you can pass those chunks around freely.
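A sketch of that split, assuming the worker pool already exists: slice() pays the copy once up front, then each postMessage is a zero-copy transfer.

    function fanOut(buffer, workers) {
      var chunkBytes = Math.ceil(buffer.byteLength / workers.length);
      workers.forEach(function (w, i) {
        var chunk = buffer.slice(i * chunkBytes, (i + 1) * chunkBytes); // the one copy
        w.postMessage({ index: i, chunk: chunk }, [chunk]);             // zero-copy move
      });
    }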


For workloads like graphics, the cost of splitting/merging happens each time.


Until the point that someone makes the SharedJSON library to store JS objects in binary that is!


Why is everything-is-async so entrenched in JavaScript? I'd love to have pre-emption and blocking IO in JavaScript but it seems to have been thoroughly excluded by design. What is the reasoning behind this?


If you've ever used EventMachine in Ruby, or Twisted in Python, you've probably encountered the reason.

Blocking has simple semantics, but it's hard to scale. Everything-is-async is a little more complicated, but scales nicely. The problem is when you combine the two, you only get the worst of both approaches.

If most things are async, and you've only got one thread, and you block that thread, then your whole app is blocked. Now you've got the scaling problems that come with blocking, and the complexity that comes with async.


Well, preemption and blocking should go hand in hand. Whether or not that counts as multi-threading is more of a semantic argument. I'd consider them the amino acids that the multi-threading protein is made of.

The entirely async model means that if something accidentally loops endlessly, it kills everything. The only reason we get the "This script is taking a long time" prompt is that some preemption is provided as magic beyond the scope of the JavaScript itself.


Unless you're using web workers, JavaScript runs in the UI thread. If you do blocking IO in the UI thread, things freeze.


That's the problem in a nutshell. It needn't be that way.

Blocking IO is just one aspect. Anything that is not instantaneous is blocking. Without the ability to preempt something taking a long time you will always be prone to the user interface freezing.

Methods like Array.map() are specifically designed to perform bulk operations synchronously, which, without preemption, is at odds with a design where the user interface should perform with the lowest latency.


On my grossly overpowered workstation, I can only crank the number of workers in the Mandelbrot demo to 20 [1]. Attempting to go beyond 20, the console reports:

    RangeError: out-of-range index for atomic access

That said, 20 workers is about 11x faster than the single-threaded version.

[1] https://axis-of-eval.org/blog/mandel3.html?numWorkers=20


I keep hoping that JS would evolve to support the actor model, a la Erlang/Elixir, with their process-based persistence, concurrency via message passing, etc. It just seems so much simpler and more tractable than this proposal.


I've seen projects to compile Erlang to JS, but has anyone experimented with a JS compiler that targets Erlang's BEAM VM like Elixir?

JS is an approachable language but Node has problems with scaling and error handling of non-blocking IO. Erlang solves those problems but the language is not approachable and has a smaller ecosystem than JS. I'm imagining something like Node with "micro-workers" so developers could reuse their existing JS code, but not have to worry about scaling or non-blocking APIs.


Please implement!


That wouldn't do much for the use cases they want it for (multimedia, games, number-crunching, etc., as in the example in TFA).


In some ways web workers feel a bit like actors. Granted, a poor man's actor.


> This leads to the following situation where the main program and the worker both reference the same memory, which doesn’t belong to either of them:

If only Mozilla had some technology that could deal with ownership of memory...

Seriously, if Rust doesn't have an asm.js-optimized target yet, it really should.


I would be very surprised if Rust doesn't target WebAssembly sometime in the future. It's just such a good fit. If I remember correctly there are already talks about doing MIR->WASM.



We Rust folk have definitely got plans in this space. :)


If Rust can be compiled to LLVM IR, then emscripten can be the backend for asm.js.


The issue is emscripten uses a different LLVM version than we do; so it can work, but it's got some rough edges.

We want both compile to JS and compile to wasm to work well, the work just isn't done yet.


>Consider synchronization: The new Atomics object has two methods, wait and wake, which can be used to send a signal from one worker to another: one worker waits for a signal by calling Atomics.wait, and the other worker sends that signal using Atomics.wake.

Having not yet played with this myself: is anyone familiar with what kind of latency overhead is involved with signaling in the Atomics API? I'm not very familiar with the API yet, so I've no idea how signaling is implemented under the hood.

The MessageChannel API by contrast (i.e. postMessage) can be quite slow, depending. While you can use it within a render loop, it usually pays to be very sparing with it. Typical latency for a virtually empty postMessage call on an already-established channel is usually 0.05ms to 0.1ms. Most serialization operations will usually balloon that to well over 1ms (hence the need for shared memory). Plus transferables suck.
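For comparison, the wait/wake handshake from the article looks roughly like this (a sketch only; Atomics.wait can't be called on the main thread, and sab is a SharedArrayBuffer both sides already hold):

    // In a worker - block until another thread signals:
    var ia = new Int32Array(sab);
    Atomics.wait(ia, 0, 0);          // sleep as long as ia[0] === 0
    console.log('woken, ia[0] =', ia[0]);

    // On the signaling side (another worker, or the main thread):
    var ia2 = new Int32Array(sab);
    Atomics.store(ia2, 0, 1);        // publish the new value
    Atomics.wake(ia2, 0, 1);         // wake at most one waiter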

>Finally, there is clutter that stems from shared memory being a flat array of integer values; more complicated data structures in shared memory must be managed manually.

This is probably the biggest drawback to the API, at least for plain Javascript. It really favors asm.js or WebAssembly compile targets for seamless operation, whereas plain Javascript can't even share native types without serialization/deserialization operations to and from byte arrays.
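At best you can view the same shared bytes through different typed lenses; anything richer than numbers still has to be marshalled by hand. A minimal sketch:

    var sab = new SharedArrayBuffer(1024);
    var asFloats = new Float64Array(sab); // 128 doubles over the shared bytes
    var asBytes = new Uint8Array(sab);    // the same 1024 bytes, viewed as bytes
    asFloats[0] = 3.14;                   // immediately visible to any worker holding sab
    // ...but a JS object graph still needs manual encoding into these cells.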


I'm excited to see progress in the area of JS concurrency, but I'm not sure how useful this is going to be. It lets me share ArrayBuffers between workers, but all of my data is in the form of Objects, not primitive arrays.

One place where I would like to use this is for collision detection, like in this example: http://codepen.io/kgr/pen/GoeeQw

But I'm relying on objects with polymorphic intersects() methods to determine if they intersect with each other, and once I encode everything up as arrays, I lose the convenience and power of objects.


Here's a typed objects system for JS which uses ArrayBuffers for backing storage, in future it will also support SharedArrayBuffer - https://github.com/codemix/reign (disclaimer: I wrote this).


If only we did not have mutable data structures, there would be few or no problems to find in this.

Concurrency isn't hard - try Clojure's core.async and you will find out. Shared mutable state is mind-bogglingly hard.


If the problem that this is trying to solve is that `postMessage` is slow and you can't transfer slices of arrays, then perhaps they should solve it by speeding up `postMessage` and making array slicing cheap? Forcing a shared-memory concurrency model into JavaScript seems like a bit of an overreaction.


Are JavaScript workers implemented using real OS threads or green threads? How heavyweight is a worker?


They are real OS threads in all the implementations I've seen.

As for how heavy, they are definitely a bit heavier than I'd like. But rather than me trying to describe it, [1] is a really good benchmark with results that you can run yourself if you want.

[1]https://github.com/gmarty/web-workers-benchmark


I really want WebCL. Worker threads are just so lame compared to what GPUs can do.



