Speeding up Electron apps by using V8 snapshots in the main process (github.com/raisinten)
58 points by raisin10 4 months ago | 55 comments



What's old is new again. Emacs has long done snapshots a bit like this, albeit in a hacky way [1][2].

[1]: https://lwn.net/Articles/673724/ (2016)

[2]: https://lwn.net/Articles/707615/ (2016) and discussion here: https://news.ycombinator.com/item?id=13073566


The JVM has had a similar thing called AppCDS (https://docs.oracle.com/en/java/javase/17/vm/class-data-shar...) since JDK 10, and you can also use CRIU (https://criu.org) for this; at least I've used it for some JVM apps before.
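For reference, a minimal AppCDS workflow looks roughly like this (JDK 13+ flags; the archive, jar, and class names are illustrative):

```shell
# First run: record every loaded class into an archive when the JVM exits
java -XX:ArchiveClassesAtExit=app.jsa -cp app.jar com.example.Main

# Subsequent runs: memory-map the pre-parsed class metadata from the archive
java -XX:SharedArchiveFile=app.jsa -cp app.jar com.example.Main
```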

Don't forget that Common Lisp and Smalltalk implementations have had this since the beginning of time :D


IBM J9 also had explicit support for sharing the code heap between multiple applications, and there were ways to preload code into Oracle JDKs older than 10 by abusing, IIRC, the bootstrap classpath?


There’s also OpenJDK Project CRaC (based on CRIU) and AWS Lambda SnapStart (leveraging CRaC).


Lisp I in 1960 did this, too.


Android uses a similar approach to launch apps quickly. When you request to start an app the zygote process forks and the child does the work of starting the app. It allows the children to share all the upfront initialization that is done once per boot.


Chrome does the same thing for (per ~tab) renderer processes!

Arguably at least as important as the speed/CPU usage improvements is the fact that this lets all processes share all "initialize once read never" memory that would otherwise have to eventually be swapped out once per process (or, in the case of Android, hang around until the app gets killed, since I think Android does not swap).


iOS does this too now.


Do you know what Apple calls their version? I’m curious how they implement it.


Not the parent, but maybe they're referring to "State Restoration"?


dyld closures and prewarming actually


Thank you! That sounds pretty different in implementation from zygote processes though:

Zygote processes do go through the initialization once, but then clone that pre-initialized state via forking. That makes launches faster, but importantly also reduces memory usage by sharing effectively read-only dirty memory across all processes (e.g. everything read into memory by each process via read() instead of memory mapping).

dyld closures seem to be caching process-specific linking precalculations, and prewarming just opportunistically (pre-)launches an app in case it's needed later. Same effect (faster launch time), but probably no memory savings, and a very different implementation.


Yes to be clear when I said "iOS does this" I meant "kinda similar to the whole freeze and restore thing" (which is the general topic here) not "the exact thing Android is doing"


There's so much goodness available from tapping VMs as such, as virtual machines that we can do cool checkpoint/restore tricks on.

Java has had some similar abilities to rebuild the pre-compiled stdlib with whatever else you want already included, so each Java instance both boots faster and has less overhead. I'm forgetting the name though! Node has options too: https://nodejs.org/api/cli.html#--build-snapshot


Problem with Node is that the snapshot functionality has several severe limitations.

Copy pasted from NodeJS docs:

---

Currently the support for run-time snapshot is experimental in that:

1. User-land modules are not yet supported in the snapshot, so only one single file can be snapshotted. Users can bundle their applications into a single script with their bundler of choice before building a snapshot, however.

2. Only a subset of the built-in modules work in the snapshot, though the Node.js core test suite checks that a few fairly complex applications can be snapshotted. Support for more modules are being added. If any crashes or buggy behaviors occur when building a snapshot, please file a report in the Node.js issue tracker and link to it in the tracking issue for user-land snapshots.

---

(1) is fine as you can use a bundler to work around it. (2) can't, as far as I know, be worked around.
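For the single-file case that does work, the flow looks roughly like this (experimental Node flags; file names and the expensive-init example are illustrative):

```shell
# snapshot.js runs once at build time; the resulting V8 heap is frozen into the blob
cat > snapshot.js <<'EOF'
const { setDeserializeMainFunction } = require('node:v8').startupSnapshot;
const table = Array.from({ length: 1e6 }, (_, i) => i * i); // expensive init, build time only
setDeserializeMainFunction(() => console.log(table[2000])); // runs on every restore
EOF

node --snapshot-blob snapshot.blob --build-snapshot snapshot.js

# Restore: boots from the frozen heap, so the table is never rebuilt
node --snapshot-blob snapshot.blob
```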


Will this make Slack work fast? That piece of crap can't even resize efficiently on an M3 Pro Mac.


I see Electron get a lot of hate, and don’t get me wrong, I empathise with your issue, but has anyone ever documented an analysis of whether these sorts of things are caused by Electron itself, or just by bad application logic that happens to be running under Electron?


As someone who's made a few Electron apps in the past, I can say that it has a lot more to do with what you're putting into the app itself than the runtime shell you're working with. It's not impossible to make an Electron app that's very efficient, to the point where people probably wouldn't realize it uses Electron unless they opened the app itself and poked through the source code. That said, due to Electron being a layer above the actual application stuff that's doing the "real" work, it's a lot harder to sniff out performance problems and especially to reproduce them on all platforms. So I wouldn't call this "bad application logic", I would say that it probably has more to do with using older (and deprecated) APIs to handle certain things like resizing the window, which could be replaced with better ones if the devs at Slack were able to easily see that this was causing problems for people. Unfortunately, Electron does make that a bit harder to do.

Basically...I'd say this has more to do with the fact that Slack's development team doesn't get enough control over their applications in order to make things like this happen faster.


Yeah - people use Electron because web-style UI toolkits are what they know and work cross platform, but those toolkits make it easy to write very un-performant applications that do block/repaint/block/repaint cycles which feel awful.


Everyone knows the electron apps even if they perform well - they’re the ones that don’t integrate properly with basic system services and don’t use native controls.


Ah yeah, common examples include Premiere Pro, Blender, Photoshop, Fusion 360 and a host of other Electron applications because there is no integration + they don't use native controls.


The Adobe apps integrate fine (on macOS at least). I don't know about any of the others, but would assume that a cross-platform UI solution leads to a bad experience.


Are the Adobe applications using the native toolkits on macOS? Because on Windows they certainly are not, and I'm having a hard time imagining macOS has completely different UIs than the Windows versions.


Unclear, they certainly respond to the kinds of system standards that I would expect though. Things like emacs bindings.


Maybe it's just not a 'pit of performance'.

What I mean is: perhaps the path of least resistance (and thus the cheaper way) does not lead to good performance by default.

If only platforms and frameworks would make the fast way easy, and the slow way hard, it would be simpler to have faster software (not that it's easy to do).

Now, if only all programs had performance tests on all commits (like browsers do with Speedometer 3, tracking the trend over time, because they know it matters to users).


After using VS Code for a while coming from vim, I’m inclined to say it’s bad application programming.


There are many places VIM/Neovim are a lot slower / less responsive than VSCode.

- Syntax Colorization of large source files.

- Auto-Completion Performance

- Rendering on Navigation when line lengths are very long.

The only thing Vim/Neovim is faster at is start-up speed.

If you actually care about speed, try Kate or Zed.


I think the parent was calling vscode good from a performance perspective.


Indeed I was. I mentioned vim because I’m used to that level of performance and VS Code is on par. I have experienced sluggishness with Slack, so I meant Slack likely has performance issues due to its application code rather than its Electron dependency.


I'd say that it's a great synergy of wasted time, money, and energy:

A bloaty runtime (primarily because it's not shared across processes, as every app brings its own private copy of Electron) enables running the horrors of modern web development, i.e. pulling in thousands of libraries, sometimes each in several versions.


I use localslackirc at work, so I don't get overwhelmed. But it's certainly not for everyone. I'm the only one in my company who uses it (AFAIK).


Slack is just a website you can open in a tab/window. You don’t need to add Electron to your browser diet to use it.


It’s a desktop app that’s built on Electron.

Although, yes, when I’m on resource-constrained machines, I open it in a Firefox tab.


I run Slack in a Firefox window, and I had to add an extension to reload the page every 15 minutes due to memory leaks.

Would cause FF to eat tens of GB, making FF sluggish until it crashed...


Hopefully this applies to VSCode as well. I try to avoid Electron apps at all costs, but sometimes there's no other choice.


VS Code surprises me, to be honest. Besides slow starts, it doesn't seem to suffer from any major slowdown for me. I’d expect that thing to get bloated pretty far, really fast.


VS Code does become slower for me on my MacBook M1. I typically don't restart the app unless I have to. After a few days of usage, scrolling the code area becomes so slow (<20fps) that I have to restart the app. I haven't experienced such slowdown in other IDEs, such as Xcode or Qt Creator.


I wonder if there are any other native IDEs with the same level of functionality. I feel like electron is forcing me to compromise on performance in exchange for features.


What projects do you typically work on? I would say VS code only excels at JavaScript or perhaps newer languages with a robust LSP.

What sort of features do you find most important from VS Code?


I work on full stack typescript applications.

The most important features for me would be responsiveness (I usually have multiple projects open simultaneously, and performance tends to degrade after a few days) and memory consumption.


Do you actually care about the vscode startup time? That's the only part that's really improved here. I restart my IDE basically once every update, so that time barely registers.


How about making all installed Electron applications faster by using a single shared runtime?


Could work, except Electron changes very often. Electron is about 10 years old at this point, with 1,512 releases listed on GitHub, i.e. roughly 151 releases per year, or 12 per month. Only applications that use the same version would be able to actually take advantage of this, so I'm not sure how big an impact it would have.

Tauri (an Electron-like framework), by contrast, does it that way by default, using whatever webview you have installed instead of shipping its own runtime. Might be interesting if you're looking for a sleeker alternative to Electron (but made with Rust instead of mostly C++ and TypeScript).


Electron could also figure out a way to change less often, like JVM does ;)


This is the JS world, if it ain't updated in the last week or two, people think the project is abandoned :)


Electron's release cycle is highly impacted by Chromium's release cycle because of how tightly coupled it is, so unfortunately this feels unlikely to happen.


A lot of Electron applications in the Arch Linux package repositories use a system electron package, it's nice. They have to be split by major version, though.


Is there an advantage to doing this versus using something like ncc to just package everything into a single js file with zero dependencies?


Let's say you're initializing a huge array on boot that takes 100ms. Packaging everything into a single .js file won't change that. Taking a snapshot of the state after boot and using that for subsequent boots will allow you to skip that.
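A sketch of that, using Node's experimental `--build-snapshot` flag (adapted from the Node.js CLI docs; the file names and the lookup-table init are illustrative):

```shell
# Build time: the array is constructed once and frozen into the snapshot blob
echo "globalThis.lookup = Array.from({ length: 1e6 }, (_, i) => i * i)" > snapshot.js
node --snapshot-blob snapshot.blob --build-snapshot snapshot.js

# Boot time: index.js starts with the array already materialized in the heap
echo "console.log(globalThis.lookup[1000])" > index.js
node --snapshot-blob snapshot.blob index.js
```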


Surprised that vercel/ncc is still supported but vercel/pkg is not... Wonder why that is.


Thanks, this really helps! We are planning to improve the performance of our Electron app.


can we do this for react native? asking for a friend...


Not the same runtime, but IIRC parsing etc. is done at an earlier stage in Hermes/RN, so some parts of the initialization process are already handled that way, if I'm not mistaken.


Isn’t that kind of the idea of Hermes?


I think all the sources are pre-parsed and compiled, but it's not an outright snapshot, as in nothing is executed ahead of time.



