They say running the hello world example "is as simple as this:"
wasmer run wasmer/winterjs --dir=. serviceworker.js
But ... what does that mean?
After I turned on a new computer, I certainly can't type that command and have it running "a JavaScript Service Workers server written in Rust, that uses the SpiderMonkey runtime to execute JavaScript" as they put it.
What is the background here? Say I have a newly installed Debian 12 machine - how do I get to use this thing?
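For context, here is roughly what that one-liner assumes on a fresh Debian box. This is a hedged sketch, not from the post: the install script URL is the one Wasmer's own docs use, and the rest is standard shell setup.

```shell
# 1. Install the Wasmer runtime using their documented install script:
curl https://get.wasmer.io -sSfL | sh

# 2. Open a new shell (or source the env file the installer points at)
#    so that the `wasmer` binary is on your PATH.

# 3. Put a serviceworker.js in the current directory, then run it.
#    `wasmer run wasmer/winterjs` fetches the winterjs package from the
#    Wasmer registry; --dir=. grants the sandbox access to this directory
#    so the script can be read:
wasmer run wasmer/winterjs --dir=. serviceworker.js
```

The point is that "as simple as this" quietly assumes an installed runtime, a PATH update, and a script file already on disk.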
On a tangent here: many IT people really suck at explaining things from the start.
I sometimes wonder if they fail to remember a time when they themselves didn't know, e.g., what a console actually is and how it can be used, before giving the person opposite a non-explanation like: "You just have to do $CLI-APPLICATION-TASK".
Not that I have anything against command line applications (quite the opposite), but if you, the trusty IT person, recommend that someone do X, you had better check whether they know the things required to even understand what X is.
The majority of programmers wouldn't know how to effectively help a family member understand how to open a command prompt and enter a command, yet they would all be capable of it. Some even did it before, when that's all computers were!
I recommend grabbing a bucket of patience and sitting down with an older but interested non-technical person and telling them to do some normal everyday things on their PC. You will quickly learn to start tutorials with: "First, make sure you have git installed. Then clone this repo by opening a terminal like so: ..."
To be fair, it depends on the audience. As a dev, I don't want to wade through a git tutorial for every project I look at; I already know how to use git. But if your audience isn't other software devs, then yeah, you should probably start from the beginning. This is why we have installers: no one wants to do the work of installing by hand, or of explaining how to, either.
Yeah of course. This is always a balance. But my observation was that IT people in general tend to lean towards not explaining enough, rather than overexplaining things.
> I sometimes wonder if they fail to remember a time where it was them that didn't know e.g. what a console actually is and how it can be used
To be fair, the reason I'd fail to remember such a time is the same reason that I can't remember anything else from that time. Most people don't remember being a baby.
Unless you think knowing the word "console" is the important part there. I would have just said "DOS".
So you learned to use a command line interface when you were a baby?
I have been programming for half of my life since I was a teenager, but I can still remember how frustrating it was that some things just were assumed. I still remember when software without a GUI was an unpleasant thing for me, I can still remember struggling to understand how all the web technologies tie together and so on.
Understanding that what you know isn't automatically self-evident or obvious even to a person that might be more intelligent than yourself is an important skill in the IT sector if you want to communicate effectively.
I learned to use a command line interface before I was old enough that I can summon any conscious memories of the time now.† Like I said, "the same reason". I would have been less than three years old.
The family computer had Windows, but I would not use it other than to play Solitaire, because it functioned worse than DOS. The default UI was DOS Shell (https://en.wikipedia.org/wiki/DOS_Shell; the page image there is not a perfect match to our installation, but it's close), so most often I'd be using what amounted to a very primitive graphical interface, but I would also use straight DOS for reasons I do not recall.
† Almost. I vaguely remember the large boards that my mother prepared in order to teach me to read, things like a magazine photograph of a horse juxtaposed with a large letter H, but I don't remember being so instructed, and that memory is subject to later reinforcement because those boards were kept for a long time. But it stands to reason that I would have needed to learn to read before learning to use a computer, so there's a chance some of those board memories predate that knowledge. I can remember my mother singing to me, but there's no way to date those memories precisely. Any memories that still maintain concrete details come from well after I could use a computer.
I think this is for people who want to run their own Cloudflare Workers (sort of), and since nobody wants to run full Node for that, they want a small runtime that just executes JS/Wasm in an isolated way. But I wonder why they don't tell me how I can be sure this is safe, or how it's made safe. Surely I can't just trust them, and the post explicitly mentions that it still has file IO, so clearly there is still work I need to do to customize the isolation further. The reason they don't show more info about this is probably that they don't really want you to run it on your own; they are selling you on running things on their edge platform, called "Wasmer Edge".
So that's probably why this is so light on information: the motivation isn't to get you to use this yourself, just to use it via their hosted edge platform. But then I wonder why I wouldn't just use https://github.com/cloudflare/workerd which is also open source. Surely that is fast enough? If not, shouldn't they show some benchmarks?
I suppose the performance claim is here:
https://wasmer.io/products/edge#:~:text=Cloud-,Cold%20Startu...
The documentation is lacking, but the important context is that you can run this through the Wasmer WebAssembly runtime, which provides complete isolation.
It would only be able to access host directories that you explicitly grant access to (--mapdir from the CLI)
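A sketch of what granting access looks like from the CLI. The `guest:host` ordering of `--mapdir` is my reading of the Wasmer CLI, not something stated in the post; double-check against `wasmer run --help` before relying on it.

```shell
# With no filesystem flags, the sandboxed module is preopened no host
# directories at all, so it likely can't even read the script file.
wasmer run wasmer/winterjs serviceworker.js

# --dir mounts a host directory at the same path inside the sandbox:
wasmer run wasmer/winterjs --dir=. serviceworker.js

# --mapdir mounts a host directory under a different guest path
# (here the current directory appears as /app inside the sandbox):
wasmer run wasmer/winterjs --mapdir=/app:. /app/serviceworker.js
```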
Cloudflare workers support wasm via JavaScript. At least I believe that if you write a worker in rust what happens is that CF wraps your wasm in the correct js to provide environment bindings.
In practice it makes little to no difference, but CF Workers are indeed JS-first.
The way I understand it, it's a JS interpreter (SpiderMonkey) compiled to WASM that supports the Service Worker API[1].
So you basically get an unlimited number of sandboxed JS environments that you can spawn very quickly, like browser tabs; with Node.js you'd have to do a cold start every time.
And on top of that, there's another layer of sandboxing thanks to WASM, but that's just because they already run everything on WASM.
So to summarize, it gives you speed and process isolation plus a standard API.
You just described what’s wrong with a lot of the WASM world.
I understand what WASM is and how it works at the base level but the ecosystem seems fragmented and loaded with technologies described in terms that would only make sense if I knew the rest of the ecosystem.
How do I port something to WASM? Is that even possible? What problem does it solve?
There seems to be no entry point.
The only area where WASM seems straightforward is things like Dioxus where you compile Rust or some other language to run in the browser as an alternative to JS. That makes sense. But server or “edge” WASM?
It solves the problems of cold starts, isolation/security, and portability. Basically, WASM-based cloud offerings will be able to offer more performant, cheaper, and more scalable compute that works everywhere regardless of the underlying architecture. Basically, Docker 2.0.
Here is an ELI5 explanation of the Wasmer blog post announcing WinterJS:
WinterJS is a new way to run JavaScript Service Workers that Wasmer created. Service Workers are little programs that run in your web browser to make websites work better.
The normal way to run Service Workers is using Node.js. But Node.js can be slow and heavy. Wasmer made WinterJS using the Rust programming language. Rust makes it very fast!
WinterJS also uses the same JavaScript engine that Firefox uses, called SpiderMonkey. This makes it work the same way as a web browser. So WinterJS lets you run Service Workers in a fast and lightweight way, without needing Node.js. You can run it on your computer with a simple command. WinterJS is special because it can also be compiled to WebAssembly. This means it can run in Wasmer Edge or even directly in your web browser!
> The normal way to run Service Workers is using Node.js.
This is false, and is not claimed in the original post. Service Workers run in the browser, and Node.js is not compatible with the service worker API. By which I mean that in Node, one cannot write `self.addEventListener("fetch", event => ...)`
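To make the distinction concrete, here is a minimal sketch of the kind of script a Service-Worker-style runtime executes. The handler body is made up; the part Node lacks is the `self.addEventListener("fetch", ...)` registration.

```javascript
// What the worker does with a request (illustrative only).
async function handleRequest(request) {
  return new Response("Hello from a service worker!", {
    status: 200,
    headers: { "content-type": "text/plain" },
  });
}

// The Service Worker API surface: register a handler for "fetch" events.
// Browsers (and WinterJS-style runtimes) define `self` and fire these
// events; plain Node.js does not, which is the incompatibility above.
if (typeof self !== "undefined") {
  self.addEventListener("fetch", (event) => {
    event.respondWith(handleRequest(event.request));
  });
}
```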
It’s all to do with the latest trends of edge computing. If you’re not familiar with that then a lot of the principles here wouldn’t make much sense. But equally, if you’re not interested in edge computing then this is going to be of zero value for you regardless.
So one is meant to deploy these services workers on this runtime at edge nodes?
Like, user sends message to their geographically close edge node --> service worker on there caches it and routes it to appropriate main server? Or something?
> Service workers are like adding extra threads when running JS in your browser. People use them in client-side web app code to get around the single-threadedness of JavaScript. [...]
Aren't you just describing workers in general here (without the "service" part)?
To me, a "Service Worker" is a worker purposed specifically for proxying requests made by the client, on the client, and acting upon them (for example, by returning a version cached by the JS). The main use case I've seen is implementing an offline mode.
When you just want to unlock multi-threading capabilities in JavaScript, you rely on "Web Workers", another worker mechanism without the proxy part. I've never seen those referred to as "Service Workers".
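The proxy/offline role described above boils down to a cache-first lookup. A minimal sketch (cache name and wiring are illustrative, not from any post here):

```javascript
// Cache-first strategy: answer from the cache if possible, else hit the network.
async function cacheFirst(request, cache) {
  const hit = await cache.match(request);
  return hit ?? fetch(request);
}

// Inside a real browser service worker, this gets wired to "fetch" events,
// intercepting every request the page makes:
if (typeof self !== "undefined" && typeof caches !== "undefined") {
  self.addEventListener("fetch", (event) => {
    event.respondWith(
      caches.open("offline-v1").then((cache) => cacheFirst(event.request, cache))
    );
  });
}
```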
With that written, I don't get what Service Workers have to do with being on the server side, though :/
> I don't get what Service Workers have to do with being on the server side, though
Yeah, this is extremely confusing. Other posts talk about running your own CloudFlare workers... but I fail to see any potential connection between service workers and code that executes on edge nodes.
I think they're calling these edge workers "service workers" which is an unfortunate name as it clashes with the name of the Service Worker web API. This is just some JS running under a reduced set of runtime APIs (vs Node or the browser).
Also I think the project's goal is to extend the Wasmer platform (which appears to have the limitation of only running WASM assemblies) with the ability to run JS workers. Hence the need for a JS interpreter and runtime compiled to WASM.
I actually tried writing something like this one weekend last year. I really like the idea of the https://wintercg.org/ group this runtime is named after, and I wanted a non-Node, non-Cloudflare, non-X version just to explore what it took to implement some of the APIs they've standardised on.
It was based on QuickJS and Tokio and never got far. Instead of implementing builtins in C, as the article suggests you would with QuickJS, I was implementing them in JS. I think I got to the point where the very simplest hello world would work.
It is weird that this is so high up on the frontpage, yet no one in the comments understands what it is. Hopefully someone from the project team sees this and comes up with an understandable description.
I am still confused, it's a JavaScript runtime intended to be deployed to JavaScript/Wasm runtimes? Why does a JavaScript runtime need a JavaScript runtime?
> I am still confused, it's a JavaScript runtime intended to be deployed to JavaScript/Wasm runtimes?
Seemingly.
> Why does a JavaScript runtime need a JavaScript runtime?
Because if you want to create a Service Worker server for CloudFlare Workers and other JavaScript/Wasm runtimes, that's the only option for doing that AFAIK.
> The WinterJS server is published in Wasmer as wasmer/winterjs.
Struggling with the assumed knowledge here, and having to hop across a number of pages. What is Wasmer? Why is this published there and not on npm?
I think the wasm world introduces a new set, or layer, of terminology, with some terms very close to each other, which nobody wants to explain. That leads to confusion when trying to understand a project like this.
You need to be careful when looking at wasmer.
They build their own stuff, like WASIX, on top of the Wasm and WASI standards.
Services built against WASIX may not be compatible with other WASI runtimes.
Furthermore, they offer their own package manager to serve Wasm modules, which is why it is published there.
They are putting a lot of effort into speeding up development around Wasm, but you need to know what you can use with your runtime and what the "real" Wasm/WASI standard is.
What does "blazing fast" mean? Every new thing is blazingly fast nowadays: deno, bun, turbopack, etc. I don't even know what it means to me as a user. Is it really super fast, or is it just some fancy words? "Blazingly fast" compared to what?
Yes. "It just works" is a famous Apple meme. Others using "tm" are memeing on companies that come up with stupid marketing names for every little thing that doesn't require any branding at all.
Something that helps me: instead of reading "blazing/blazingly fast" as "impressively fast", read it as literally blazing. As in, it is fast at doing whatever it's doing, as are almost all entrants in... whatever this stuff is, but it is doing it in a way that is figuratively analogous to a large conflagration. Whatever is running on it will end up a blackened, twisted pile of wreckage. Perhaps after achieving its purpose, perhaps a bit before.
I'm not saying that's what they're saying, but I am saying it's a valid interpretation of what they're saying. It's worth speculating about which interpretation better fits the subject at hand.
Yeah this kind of statement drives me up the wall. It's even more annoying when they do show some benchmark stats, but don't provide any details on how to run the benchmark yourself.
Especially considering that SpiderMonkey is definitively not blazing fast compared to V8 or JSC, for common web workloads anyway. At most it’s comparable. Somehow doubt adding a layer of wasm indirection makes it faster. But I suppose everything is indeed blazing fast when compared to no target at all.
I assume they mean startup speed (cold start). Since they're not using a JIT (WASM doesn't support it), the runtime performance will be miles behind V8 and JSC.
They use Rust (I like Rust), so they have to somehow convince themselves that their mess of overly abstracted code strewn across 2000 crates is, in fact, very very fast and it's all worth it.
I think people use a language like Rust because they hear it's got potential for performance, but it's told to them like so: "Rust is blazing fast!"
So they perpetuate that idea, and just hope to god that their program, even if it's horribly cache-inefficient, allocates aggressively, context switches constantly, etc., is still fast enough that it feels fast.
Also, of course, on dev machines, which often have high-end hardware, it probably does feel very fast. "I can't even see any time between input and result, it's so fast!", said the programmer running his script on an overclocked i9-9900K with nothing else open.
To be clear, nothing about Rust or C or C++ is inherently fast. They allow you to write faster code than most other languages, some more easily than others, but you can easily write the slowest code imaginable in any of them. Try doing a bunch of very expensive copies in C# or Java: you'd have to go out of your way. In C++ it's the default behavior.
> their mess of overly abstracted code strewn across 2000 crates
Have you actually taken the time to read the source code of this project to see how many crates are used and how it is architected, or are you just piling shit on them for no reason?
I use leetcode as my way of learning new programming languages, since if you already know the concepts, you can just focus on language ergonomics (what does this language look like when solving a problem I've seen before?). In general my naive Rust code is 2x faster and also leaner on memory than my Go code, and my Go code has the same relationship to Node.
But I wouldn't say this is a special quality of Rust. I'd expect similar wins had I chosen any other non-GC compiled language like, say, Zig. Just saying that the well-known tiering of languages has held true for me.
I’m struggling to understand what you mean. Why on earth would you compare whatever poorly optimised concept you have in your head with .Net or the JVM? Like really? How is that helpful or even remotely related to the problem at hand? The idea here is to write service workers which are more efficient than nodeJS service workers, and winterJS seems to succeed very well at this. It doesn’t really matter what goes into the build process, it’s what happens once it’s executed we care about and if you have any Java or C# code to rival or surpass WinterJS then I really think you should share it.
I’m sorry if you’re one of the developers who’s stuck on Java or C# and feeling frustrated for whatever reasons. The JVM is fast and will continue to power much of the world for decades to come, and C#, well, it’s probably not going anywhere either. But what you’re writing about here doesn’t really make a lot of sense in the greater scheme of things. Right now, the client side of things is JavaScript and while we build it in a lot of different ways it’s hard to argue against the need for better service workers until we have a suitable replacement, which we likely won’t have in our careers.
I'm saying languages with a runtime and everything-is-a-(nullable)-reference don't have this issue of programmers being able to write super awful code. There is a clear limit to how slow simple-looking code can be in C# or Java, whereas languages like C++ (in which I have a few years of experience) definitely allow you to write the world's slowest code without the code looking like it.
Maybe you expected me to have a combative point due to my introduction, but I don't. I'm just pointing out that people use languages like Rust and C++ thinking they are fast, when really an inexperienced programmer in either will write slower code than an inexperienced programmer in a lot of other languages.
They feel they don't need benchmarks, because their language is "fast".
I still don't think your point makes a lot of sense. C# and Java aren't going to protect you from writing bad code. I've had to do enough rewrites to remove RestSharp from codebases to know that C# developers aren't exactly "good" at using C# or its standard tooling. So in that sense, it's very similar to Rust.
You're right, of course. It's very easy to write shitty code with much further-reaching consequences in C++, but I've never really experienced it. Like, we use Typescript for basically everything (not because we necessarily like Typescript but because it allows us to share developer resources across basically everything), but when we need performance we turn to C or C++. We also use C and C++ for embedded, but that's mostly because we kind of have to. We'll eventually turn to Rust exactly because it offers some of the benefits you talk about from Java or C# without any of the downsides.
I know both Java and C# have large fanbases, and for Java at least there is good reason behind that, but even if you went that road in 2023, I'm not sure why you wouldn't follow the good people at places like Lunar and simply turn to Go. Unless of course you're stuck with some huge legacy codebase, which in itself is a very good reason to stick with Java.
> They feel they don't need benchmarks, because their language is "fast".
It's not though. Even if you use Java or C#, you're eventually going to need to turn to C and C++ for compute-heavy work. I know a lot of medium-sized C# houses never get there, but maybe there is a reason C# is basically only used by companies that stagnate at a certain size?
> There is a clear limit to how slow simple looking code can be in C# or java
Java/C# code can become incredibly slow if you allocate way too many objects, use Streams/LINQ or touch URL#equals. Why are managed languages specifically better here?
I suppose that for the platform provider it's easier to ship only one runtime and run everything on top of it; it also might be better in terms of security because of the sandboxed nature of WASM.
Once WASI gets stabilized, you will also have the option to mix JS with the other languages that already target WASM, so your infra could be built on top of different languages, and you could change them according to the requirements. Just think: you build your entire system in JS, compile it into modules, and deploy them as components; then you can swap one component at a time when necessary.
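That mixing is already visible in miniature with the plain WebAssembly JS API: JS loads any module regardless of the language it was compiled from. A toy sketch — the bytes here are just the minimal empty Wasm module, not a real component:

```javascript
// The smallest valid Wasm module: the "\0asm" magic bytes plus version 1.
const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// A module compiled from Rust, Go, C, etc. is loaded the same way,
// which is what lets JS glue components from different languages together.
async function load(bytes) {
  const { instance } = await WebAssembly.instantiate(bytes);
  return instance.exports; // empty for this toy module
}
```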
CPU architectures and hardware are incompatible, so OSes provide a layer across them. OSes are incompatible, so browsers and their virtual machines (i.e. JavaScript runtimes) provide a layer across them. Browsers are incompatible (in some ways more than others), so wasm provides a layer across them. Wasm is cool, so we backport wasm + the VM + the browser + the OS to a new program, which abstracts all that again. Cross-platform is hard and hasn't gotten any easier.
I'm sure we are on a good path, sure, but unless there is a monopoly, there will always be a need for cross-compatible software. We should build software like that, like we've been doing in C++ and Rust and languages which are built to acknowledge that platforms are different.
I don't think that's the intended use-case for this.
If you want to do this, I think something like ComponentizeJS[0] is what you would be looking for. As far as I can tell you'd also need to create WIT (Wasm Interface Type) definitions for the interface(s) that mod_wasm expects for its WASM modules, as they don't provide them themselves.