Py2wasm – A Python to WASM Compiler (wasmer.io)
193 points by fock 7 months ago | hide | past | favorite | 49 comments



> py2wasm converts your Python programs to WebAssembly, running them at 3x faster speeds

This is clearly written in the article, but I hope that the impatient readers will understand that this is 3 times faster than the CPython wasm, not the native CPython.


even more specifically, 3 times faster than wasmer's build of cpython (whatever that is), running on their runtime.

i'd be curious to see this benchmark extended, as in my own experience toying with python-like interpreters, you get ~2x slowdown (like their end result) from just compiling to wasm/wasi with clang and running in any 'fast' runtime (e.g. node or wasmtime).


Hey, I'd love to reproduce the ~2x slowdown you mentioned when running the workload in native CPython compared to Wasm CPython (in any runtime, browser or otherwise).

Any tips would be helpful so we can debug it. If your claims are accurate, we can easily get py2wasm running even faster than native CPython!

Note: we benchmarked on an M3 Max laptop, so maybe there's some difference there?


Here is a trivial example, also on an M3 Max:

  cat hello.py
  print("Hello, Wasm!")

  time python3 ./hello.py
  Hello, Wasm!

  ________________________________________________________
  Executed in   26.86 millis    fish           external
     usr time   16.37 millis    0.13 millis   16.24 millis
     sys time    7.25 millis    1.14 millis    6.11 millis

  time wasmer hello.wasm
  Hello, Wasm!

  ________________________________________________________
  Executed in   84.77 millis    fish           external
     usr time   50.26 millis    0.14 millis   50.12 millis
     sys time   28.97 millis    1.21 millis   27.76 millis

  time wasmtime hello.wasm
  Hello, Wasm!

  ________________________________________________________
  Executed in  141.72 millis    fish           external
     usr time  120.86 millis    0.13 millis  120.72 millis
     sys time   16.65 millis    1.20 millis   15.45 millis


note that i'm not claiming 2x slowdown for cpython.

but here's one test i've run just now:

using your test file, i've run this one-liner on my x86_64 linux laptop in https://pyodide.org/en/stable/console.html in chromium (v8) and natively (pyodide's urllib doesn't handle https for some reason):

   import requests;exec(requests.get('https://gist.githubusercontent.com/syrusakbary/b318c97aaa8de6e8040fdd5d3995cb7c/raw/1c6cc96cf98bd7bd41c81ba9d10dc4d19b1c3e53/pystone.py').text)
native: ~470k

pyodide: ~200k

i.e. ~2.4x slowdown
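for anyone who wants to reproduce this kind of comparison without the full pystone script, a minimal stand-in harness (a hypothetical workload, stdlib only, not the actual pystone code) could look like:

```python
import time

def workload(n: int) -> int:
    # Stand-in for pystone: a tight pure-Python loop where
    # interpreter overhead dominates the runtime.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
result = workload(1_000_000)
elapsed = time.perf_counter() - start
print(f"{result} in {elapsed:.3f}s")
```

run the same script under native CPython and under a Wasm build of CPython (however you invoke it); the ratio of the two `elapsed` values is the slowdown figure being discussed.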


Thanks!

That should help us a lot in investigating the timing difference further. I remember trying the py2wasm strategy about 4 years ago, and I got it running faster than native CPython, so there must be something we are missing!


Yeah more honest would have been "2x slower"!


Even more honest would be “on one particular microbenchmark that everybody optimizes for”.

That said, this seems like a really cool project that could have some real value! It still fully depends on having the full CPython runtime environment, but that means it could work correctly with most existing code and libraries (including numpy and scipy).


From what I could tell, Nuitka dynamically loads extension modules, so this wouldn't work with WebAssembly modules, which are statically linked. If it could be enhanced to automatically build wheels of packages like numpy and statically link them, that could bring extensions to Wasm and be very cool. IIRC, Google does this with their Python apps, so the concept does exist, but I don't think I've seen any OSS tooling trying that.


So I had a look at the repo/branches at https://github.com/wasmerio/py2wasm. This might be a nit, but in the spirit of OSS: if I'd done this work, I'd have contributed it to upstream Nuitka. I definitely would not have forked a whole new GitHub repo and given it a completely different name.

What's the rationale for creating a new project and calling it py2wasm? Am I missing something?


Thanks for the feedback! I'm Syrus, main author of the work on py2wasm.

We already opened a PR into Nuitka to bring the relevant changes upstream: https://github.com/Nuitka/Nuitka/pull/2814

We envision py2wasm being a thin layer on top of Nuitka, as also mentioned in the article (not a fork, as it is right now, although forking got us to the proof of concept a bit faster!).

From what we gathered, we believe there's value in having py2wasm as a separate package, as py2wasm also needs to ship the precompiled Python distribution (3.11) for WASI (which will not be needed for the other Nuitka use cases), apart from shipping other tools that are not directly relevant to Nuitka.


On a different note, will there be some technical documentation on how exactly to use it, e.g. exporting functions etc.?

By the way, is it an alpha release, usable for testing, or something in between?


Reading the blog post, they are contributing it upstream.


Well, reading the blog post, they're announcing a whole new compiler. I'd encourage you to have a look at the PR their post links to.


My first thought upon reading Nuitka's name in the article was along the lines of "I hope they've contributed some of this to Nuitka". Not a very nice trend.


What trend?

Forking happens all the time. It's one of the benefits of open source. I've often made PRs on GitHub to projects while shifting the projects I work on to my fork, since upstreaming can take months or may never happen.

Granted, Nuitka is an active project, but wasmer.io is going to have a much more focused agenda with changes they can deploy themselves, while getting them upstream will have to go through rounds of review & adjustments.

https://github.com/rustwasm/wasm-pack/pull/937 this small change took over 2 years to merge. Open source takes time


Nice--it's great to see more options for interoperability between Python and WASM. I was aware of Pyodide [1], but Py2wasm seems to be a tool for compiling specific scripts to WASM rather than putting an entire Python distribution in the browser. Please correct me if I'm misunderstanding the difference between the two, however.

[1]: https://pyodide.org/en/stable/


That's my understanding as well. And Pyodide, while a great piece of engineering, still lags behind compiled Wasm languages in footprint at least, and sometimes in performance, unfortunately.


What's up with the botspam in this thread? Looks like hundreds of sex bot accounts are being created. Pages seem really slow to load currently... someone should notify dang.


How come the CPython interpreter is so slow when compiled to WebAssembly? According to the figures in the article, it runs at 23% the speed of the native version.

Wasmer, which they're using in the benchmarks, should run code "at near-native speeds" according to the docs[0]. Apparently it has multiple compiler backends, so maybe the choice of backend affects the result?

[0] https://docs.wasmer.io/runtime


With WasmGC finalized, I hope we see more compilers that target it instead of including their own GC.

It could be a new interpreter, or maybe a version of CPython that uses WasmGC structs for all Python objects, or a compiler similar to this one but that targets WasmGC directly.


WasmGC is not really what you think it is. Think of it as a full-on replacement for the linear memory model, not as a way to add GC to it.

It's exceptionally difficult to port a language runtime over to WasmGC -- it doesn't even offer finalizers, so __del__ can't be implemented correctly (even ignoring the compatibility issues that existing __del__ semantics in Python already lead to). It doesn't support interior pointers, so you can't have a pointer into the middle of an array.

There's no easy way to port Python to WasmGC.
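To illustrate the finalizer point: CPython's reference counting runs __del__ deterministically, which is exactly the behavior a WasmGC port would have no hook to reproduce. A minimal demonstration (plain CPython):

```python
import gc

log = []

class Resource:
    def __del__(self):
        # On CPython, refcounting calls this as soon as the last
        # reference disappears; current WasmGC exposes no
        # finalization hook that could trigger it.
        log.append("finalized")

r = Resource()
del r          # refcount hits zero, __del__ runs immediately
gc.collect()   # not even required here on CPython
print(log)     # ['finalized']
```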


Current WasmGC is missing a lot of critical features for production grade GC, like finalization and interior pointers. It's really promising though, so I think once the gaps get filled in you'll see a bunch of runtimes start moving over - maybe .NET, etc.


Nice. Maybe pie in the sky, but considering how prevalent Python is in machine learning, how likely is it that at some point most ML frameworks written in Python become executable in a web browser through WASM?


You can already run inference of many modern ML models in-browser via, e.g., https://huggingface.co/docs/transformers.js/en/index .


Last we checked for one of our use cases around sandboxing, key pydata libraries were slowly moving there, but it takes a village.

At that time, I think our blockers were Apache Arrow, Parquet readers, and startup time. There were active issues on all three. The GPU & multicore thing is a different story, as that is more about Google & Apple than WASM wrt browsers, and I'd be curious about the CUDA story server-side. We didn't research the equivalent of volume mounts and shmem for fast read-only data imports.

For Louie.AI, we ended up continuing with serverside & container-based sandboxing (containers w libseccomp, nsjail, disabled networking, ...).

It's hard to motivate wasm on the server for general pydata sandboxing in 2024 given startup times, slowdowns, devices, etc.: most of our users who want the extra protection would also rather just self-host, which shrinks the threat model. Maybe look again in 2025? It's still a good direction, just not there yet.

As a browser option, or using with some more limited cloud FaaS use cases, still potentially interesting for us, and we will keep tracking.


Pyarrow is giving me a headache trying to get it compiled with Emscripten.


An irony here is that we donated the pure JS/TS implementation of Arrow to Apache.


Python in ML acts as a glue language for loading the data, the interface, the API, etc. The hard work is done by C libraries running in parallel on GPUs.

Python is quite slow and handles parallelization very badly. This is not very important for data loading and conditioning tasks, which would benefit little from parallelization, but it is critical for inference.


Depending on the underlying machine learning activity (GPU/no-GPU), it should already be possible today. Any low-level machine learning loops are raw native code or GPU code already, and Python's execution speed is irrelevant there.

The question is do the web browsers and WebAssembly have enough RAM to do any meaningful machine learning work.


WebGL, WebGPU, WebNN

Which has better process isolation: containers, or Chrome's app sandbox at pwn2own? How do I know what is running in a WASM browser tab?

JupyterLite, datasette-lite, and the Pyodide plugin for vscode.dev ship with Pyodide's WASM compilation of CPython and SciPy Stack / PyData tools.

`%pip install -q mendeley` in a notebook opened with the Pyodide Jupyter kernel works by calling `await micropip.install(["mendeley"])` IIRC.

picomamba installs conda packages from emscripten-forge, which is like conda-forge: https://news.ycombinator.com/item?id=33892076#33916270 :

> FWIU Service Workers and Task Workers and Web Locks are the browser APIs available for concurrency in browsers

sqlite-wasm, sqlite-wasm-http, duckdb-wasm, (edit) postgresql WASM / pglite ; WhatTheDuck, pretzelai ; lancedb/lance is faster than pandas with dtype_backend="arrow" and has a vector index

"SQLite Wasm in the browser backed by the Origin Private File System" (2023) https://news.ycombinator.com/item?id=34352935#34366429

"WebGPU is now available on Android" (2024) https://news.ycombinator.com/item?id=39046787

"WebNN: Web Neural Network API" https://www.w3.org/TR/webnn/ : https://news.ycombinator.com/item?id=36159049

"The best WebAssembly runtime may be no runtime" (2024) https://news.ycombinator.com/item?id=38609105

(Edit) emscripten-forge packages are built for the wasm32-unknown-emscripten build target, but wasm32-wasi is what is now supported by cpython (and pypi?) https://www.google.com/search?q=wasm32-wasi+conda-forge


The actual underlying models run in a lower-level language (not Python).

But with the right tool chain, you already can do this. You can use Pyodide to embed a Python interpreter in WASM, and if you set things up correctly you should be able to make the underlying C/FORTRAN/whatever extensions target WASM also and link them up.

TFA is compiling a subset of actual raw Python to WASM (no extensions). To be honest, I think applications for this are pretty niche. I don't think Python is a super ergonomic language once you remove all the dynamicity to allow it to compile down. But maybe someone can prove me wrong.
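A few of the dynamic features in question, which an ahead-of-time compiler must either carry the full runtime for or restrict (illustrative snippet, not py2wasm's actual supported subset):

```python
# Runtime code evaluation: the source isn't known until execution.
print(eval("x * 2", {"x": 21}))          # 42

# Attributes invented at runtime: no static shape for a compiler.
class Bag:
    pass

b = Bag()
setattr(b, "weight", 3)
print(b.weight)                          # 3

# Classes constructed on the fly.
Point = type("Point", (), {"dims": 2})
print(Point.dims)                        # 2
```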


We implemented an in-browser Python editor/interpreter built on Pyodide over at Comet. Our users are data scientists who need to build custom visualizations quite often, and the most familiar language for most of them is Python.

One of the issues you'll run into is that Pyodide only works by default with packages that have pure Python wheels available. The team has developed support for some libraries with C dependencies (like scikit-learn, I believe), but frameworks like PyTorch are particularly thorny (see this issue: https://github.com/pyodide/pyodide/issues/1625 )

We ended up rolling out a new version of our Python visualizations that runs off-browser, in order to support enough libraries/get the performance we need: https://www.comet.com/docs/v2/guides/comet-ui/experiment-man...


For popular ML/DL models, you could already export the model to ONNX format for inference. However, the glue code is still Python, and you may need to replace that part with the host's language.


It may be a stupid question, but is there already some DOM access, or a dedicated package that allows writing web applications in this compiled Python?


Yes. Here is an example of how to integrate Python into a React front end:

https://blog.pyodide.org/posts/react-in-python-with-pyodide/

Similar examples for raw DOM manipulation should be available for Pyodide as well. It has bindings to all JS code, so it can do everything that JS can do.


Thanks. I know about the one for Pyodide. Is it confirmed that it works for this new py2wasm compiler?


> We worked on py2wasm to fulfill our own needs first, as we want to accelerate Python execution to the maximum, so we can move our Python Django backend from Google Cloud into Wasmer Edge.

I was under the impression that Django was stateful and not meant to be run in a serverless/edge cloud. Is that not the case or are you planning to do a special setup to support that?


I want the opposite. The Python VM is everywhere. Can't I just run my WASM on Python? (The WASM runtime must be pure Python, installing C/C++ extensions is against the spirit of this.)
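Pure-Python Wasm interpreters do exist, I believe (the pywasm project is one example, IIRC). As a taste of what "WASM on Python" involves, here's the very first step any such runtime would take: validating the 8-byte module preamble (magic bytes plus version 1, per the Wasm binary format spec):

```python
import struct

def is_wasm_module(data: bytes) -> bool:
    # Every WebAssembly binary starts with "\0asm" followed by a
    # little-endian uint32 version field (currently 1).
    return (len(data) >= 8
            and data[:4] == b"\x00asm"
            and struct.unpack_from("<I", data, 4)[0] == 1)

EMPTY_MODULE = b"\x00asm" + struct.pack("<I", 1)  # smallest valid module
print(is_wasm_module(EMPTY_MODULE))   # True
print(is_wasm_module(b"\x7fELF"))     # False
```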


Hmm, what I want is to run WASM using a CPython extension. CPython extensions are vital to Python. I'm curious why you say an extension is against the spirit of it.


I mean, the performance would be abysmal.


Would it be too late to call it PyToWasm?

I know Python 2 has been EOL for over four years now, but you still see it (much to the chagrin of decent engineers everywhere) in the wild; the "2" in the name could generate confusion.


What's the hello world wasm file size?


  ls -lh hello.wasm
  -rw-r--r--@ 1 tcole  staff    24M Apr 22 14:18 hello.wasm
This is on macOS. That is hello.py compiled to wasm with Py2wasm.


Indeed, this can be optimized much further. For example, you can probably run wasm-strip and wasm-opt passes that would leave the wasm file at ~5-10 MB. Still very big, but a bit more reasonable.

The good thing is that, thanks to Nuitka, you could actually do some tree shaking and include only the imported files rather than everything in between (although this might break other behavior such as eval and the like).


You got it to work? For me it complains about some missing header files and breaks the Nuitka installed on the system.


I wonder what the point of doing this is, if a hello world program needs a 20M download before it can run?


Wow! 24M makes it even less practical than interpreted pyodide.


So does this mean we can run FastAPI inside a browser?



