
As a developer shipping executables to macOS, Windows, and Linux today, this is the only assumption I can make. I cannot assume that dependencies exist or are the correct version; I can't assume the user knows package managers exist, or that the dependencies shipped by package managers are correct; I can't assume anything exists on PATH or LD_LIBRARY_PATH; and I can't even assume that the libc or libc++ on a user's system will work with the executable I compiled on mine.

Meanwhile 16GB of memory is becoming common and 1TB SSDs cost less than $100. So taking up a little bit more space saves me time and money and reduces the cost to ship to my customers. Oh well.




I understand your point; however, I still think it's unacceptable for just two Electron applications sitting idle to consume 1-1.5 GB of RAM.


That's not the baseline, though; the baseline RAM consumption for an Electron app is about 100MB. If you feed garbage code in, it doesn't matter which stack executes it, you'll get garbage performance out.
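
If you'd rather measure than take my word for it, here's a rough sketch from the main process (going by recent Electron versions I've used, where getAppMetrics() reports workingSetSize in kilobytes):

    // Rough sketch, Electron main process (API shape per recent Electron versions).
    const { app } = require('electron');

    app.whenReady().then(() => {
      // One ProcessMetric per process: browser, renderer(s), GPU, etc.
      const metrics = app.getAppMetrics();
      // workingSetSize is reported in kilobytes.
      const totalKB = metrics.reduce((sum, m) => sum + m.memory.workingSetSize, 0);
      console.log(`${metrics.length} processes, ~${Math.round(totalKB / 1024)} MB working set`);
    });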


Even 100MB is crazy. I know RAM is cheap but the absurdity of that amount is beyond me.


It's a lot for sure, but it's not quite that simple:

- First of all, the amount of RAM consumed actually depends on how much free RAM you have: Electron, thinking it's a browser, gobbles up some extra RAM "just in case", a trade-off that probably works better for actual browsers than for the average Electron app.

- Secondly, while displaying "hello world" costs you ~100MB of RAM, the RAM required doesn't scale nearly that fast as the app grows in complexity. You may very well work on your app for a year and still need about ~100MB of RAM for it; you'd have to write very inefficient code (or keep a lot of data in memory) for it to require 1GB for the JS heap or something crazy like that.

Still, ~100MB is a lot for sure. I think much of it could be trimmed away if the developers really tried to lower memory usage significantly; maybe a much more efficient "Electron Mini" could be built with some effort.
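
To see for yourself how little of that baseline is actually JS heap, the standard Node process.memoryUsage() call (which works in Electron too) splits it out; a minimal sketch:

    // JS heap vs. total process footprint (standard Node API, works in Electron).
    const { heapUsed, heapTotal, rss } = process.memoryUsage();
    const mb = (n) => (n / 1048576).toFixed(1);
    console.log(`heap: ${mb(heapUsed)}/${mb(heapTotal)} MB, rss: ${mb(rss)} MB`);

On a hello-world app the heap numbers stay small; most of the ~100MB baseline sits outside the JS heap.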


How many Electron applications do you honestly have running on your computer at any given time such that 100MB each is so concerning?

Right now I'm running VS Code and PyCharm, each with one open project and one open editor. PyCharm is eating 1.8 GB while VS Code is only eating 130 MB. Funnily enough, I see people complain about VS Code being a resource-hungry Electron app all the time, but I've never seen anyone gripe about the resource usage of JetBrains IDEs.

This isn't an excuse to Electron all the things, but browser-based GUIs do have their place.


It’s not crazy - go run a blank Cocoa app in Xcode and see how much memory it takes.

People have unrealistic expectations on this.


Honestly, I could try to ignore the memory consumption if the apps were at least snappy. The few Electron apps I use show visible lag whenever I do something that forces a large redraw.

That's a problem I didn't have with GTK apps running on a computer with an order of magnitude less computing power and RAM two decades ago. VS Code on the first computer I used to program would probably be unusable (if it even launched), yet we had full-fledged IDEs back then. I'm not talking about advanced language server plugins here, just basic usage.

The great side effect of this is that if you avoid all this wasteful crap and keep using old school technology, computers are snappier than ever. My terminal always pops up instantly, nvim fires up faster than I can press return, rg lets me search a huge codebase with barely noticeable latency. My IMAP mail client is faster, more configurable and more ergonomic than any webmail I've seen.

Life is good when you avoid the www.


The problem is that every app developer is starting to believe the same thing, so we just end up with more and more bloat. Computer hardware has never been more powerful but because of bloated apps actual performance feels -- to this user -- the same as it was in 2010.


It isn't your fault that you have to do this; it's that our industry hasn't created enough high-quality, easy-to-use, cross-platform GUI libraries for popular languages, such that the only viable place to turn is a whole web browser.


We've had about 60 years and multiple huge attempts at it, and IMHO HTML & CSS _are_ the easy-to-use cross-platform GUI platform. Nothing else stuck--Flash, Java Swing, Qt, etc. were all super capable and promising but failed to gain mass usage. HTML & CSS is the way. Nothing in existence has as much capability, developer usage and talent pool, accessibility and internationalization features, etc. A good GUI system is much more than just getting boxes and text on the screen.


I suppose this depends on your point of view. Qt has achieved broad acceptance as "the" cross-platform solution among open source, Linux-first applications, as well as in much of the open source ecosystem more generally. (Open source being what it is means that there are many exceptions, obviously.)

For example, on my computer right now I have 55 applications that depend directly on qt5-base, not counting libraries and parts of Qt itself. That's also not counting a ton of applications that depend on Qt indirectly, including nearly every KDE desktop application, which depends on it through KDE's frameworks.

So while Qt may not have caught on in commercial software development, I'd say calling it a failure depends very much on what software ecosystem you're in. You might argue that HTML has achieved "universal usage" for desktop apps in a way that Qt has not, but I would have to disagree. I don't have a single HTML UI or Electron app on my computer, and I don't feel as though I've given anything up. In fact, I simply haven't come across any of these apps that I felt I needed.

So I might say that HTML has failed to gain mass usage on the platforms that matter to me. :-)


I don't think HTML+CSS is the way because of any innate goodness of its own, but because the web led to so many people becoming expert at it, and so many tools being built around it, including a cross-platform browser or two that are top-notch at rendering it all.

At the very least, a proper GUI library could precompile all of that stuff so you aren't literally parsing HTML and CSS at runtime to render things, and HTML and CSS parsers wouldn't need to be part of your running code. Nor would a JavaScript JIT, runtime, etc.


I don't disagree either, to be honest, but the reality is we live in a world where almost every business depends on a presence that shows up in web browsers. This puts enormous pressure, and real dollars spent, on making web browsers and the HTML & CSS ecosystem super fast, accessible, etc. Thousands of developers are trained every day in making basic HTML & CSS experiences. Any UI system that throws all that away and tries to build up a similar ecosystem from scratch is going to be churning and churning for years.

Don't forget browsers are pretty spectacular runtimes. V8 and its JIT compiler is arguably one of the best runtimes of any language in the world. Sure the very first view of a page is going to do some parsing, etc. but as it runs it gets faster and faster with core functions and components compiled on the fly into platform machine code. The sandbox and security and encryption support in browsers is top notch and supremely battle-tested and hardened. With WASM now pretty mainstream we're starting to see entirely new frontend UIs coded in languages like C++, Go, Rust, etc. that are incredibly fast too. If you squint hard enough the browser is really no different than the JVM or .NET CLR these days--it just has 20 more years or so and an order of magnitude more developers working on improving it.
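
To give a flavour of the WASM path, this is roughly all the ceremony involved on the JS side (the module name and its add export here are made-up placeholders; the .wasm itself would come out of a C/Rust/Go toolchain):

    // In a browser <script type="module">. 'add.wasm' and its 'add' export
    // are hypothetical stand-ins for whatever your toolchain produces.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch('add.wasm'), // compiled ahead of time, runs as machine code
      {}                 // import object; none needed for this module
    );
    console.log(instance.exports.add(2, 3)); // -> 5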


I think you're right, but I'm not sure about "easy to use". It's only easy to use because everybody uses it, so as you point out there's a massive talent pool and massive resources.

As somebody who only started developing for the web a few years ago after a long time working with Qt, WinForms, GTK and other "old school" native toolkits, I really don't find the web superior in terms of simplicity outside of maybe a few niches. You end up having to resort to dozens of external libraries to emulate the base functionality of something like Qt. And unless you want to go the transpiler route (which, admittedly, is incredibly common these days) you have to do it all in JavaScript, which is easy to pick up but a pretty huge liability in the long run IMO. It's just not a very good language, even if you stick to "the good parts".


> were all super capable and promising but failed to gain mass usage

Depends where you look. Plenty of hardware devices use Qt for their UI: modern Mercedes-Benz and Ford cars, LG TV UIs, things like the Remarkable tablet or the Telegram chat app... The nice thing being that you can test the UI on whatever OS you're running. Also, i18n on the web? Come on. It's terrible compared to Qt tooling.


I don't think this is entirely true. Have you tried wxWidgets or even Qt?

In my experience, this is more about resources and economics ("reuse existing code" vs. having to learn something new).


I didn't say I was shipping GUI software ;)


> Meanwhile 16GB of memory

Oh, if only that was enough. My Safari is currently using 25.33GB¹ and it regularly goes over 30.

> 1TB SSDs cost less than $100

Ah, if only it was that easy.

¹ (According to iStat Menus; it's harder to see in Activity Monitor due to the separate processes.)


You can't buy an M1 Mac with more than 16GB of memory, so it seems like you have bigger problems.


My Mac isn't an M1, but it does only have 16GB of RAM. Yes, it goes into swap when Safari grows like that. It's mostly ok even with several GB of swap used, but can slow down.

My solution is to restart Safari when it gets too bad, as it's obviously leaky.

For a long time I used Firefox, was annoyed at how slow it would get on a busy browsing day, and didn't realise the memory consumption of Safari (also open) was overloading the poor machine. One day I saw the stats and realised what was happening. Now I open only one browser at a time, and everything is much nicer.

If I decide to get another Mac (undecided), I'm holding out for an M1X or whatever with more RAM; 16GB isn't comfortable for my work any more. I'm not the kind of person who casually buys new expensive machines, so I won't be getting a 32GB x86 machine as an intermediate step I don't really need; I think it's better to end up with both an x86 (which I already have) and an ARM machine going forward. I'm into code generation and portability, so that's better for me. And I like the idea of less fan noise!


I was going to comment on this elsewhere, but workstation-class problems require workstation-class solutions. If 16GB isn't sufficient for your work, you should upgrade. Unfortunately, Apple has stopped making competitive workstation laptops.

As an example, the new MacBooks are not very competitive (for performance) with the latest XPS series from Dell.


Unfortunately, the XPS would be useless for the part of my workstation-class problems that targets the Apple ecosystem.

Like almost everyone, I'm financially constrained as well as space constrained, so buying multiple expensive machines, or a high-end Mac Pro or something, is not on the table as an option.

So it's a compromise.

My compromise at the moment is to use an MBP for Apple things, do smaller Linux and Windows things in a VM on it, and do big Linux and Windows things on cost-optimised rented servers much more powerful than Dell's XPS. That seems to be a better use of the resources I have for the workstation-class problems I'm choosing to solve.

An additional target of my interests is the M1-class processor with its ARM-plus-extensions architecture.

So I will wait and see what the next high-end, ARM-based MBP from Apple is like. By all accounts the M1 is an excellent and powerful processor, competitive with Intel-based laptops, so its successor may be a good match for my needs. It might not be, in which case I'll need to revisit my strategy, but until it's announced we don't know, and it doesn't make sense to buy an XPS now for what might be just a few months of marginal discomfort. I have my servers, after all.


Is the M1 really not competitive with the latest XPS?


16GB of memory isn't enough for me anymore.[1] Although that is mostly due to IntelliJ and the Web at large (my browser).

[1] https://news.ycombinator.com/item?id=26120743




I've never had a customer report performance issues relating to memory usage, nor seen a market analysis suggesting we should reduce the memory or disk space used by the binaries we ship.

What I have gotten are expensive bills for network bandwidth and cloud platforms when backends are too chunky or download sizes too large.

Which of the two do you suggest we optimize for: the one that costs money and gets complaints, or the thing people argue about on HN and Reddit but that mysteriously never materializes in reports?


I think it has changed since Moore's law pretty much stalled on single core. Windows 10 runs fine on an old Core 2 Duo with 4GB of RAM and an SSD.



