Goodbye, Native Apps (medium.com/the-innovation)
101 points by delvin0 on Oct 31, 2020 | 230 comments



How is this getting upvoted? Is nobody reading the article? A few paragraphs in and I can see the author has terrible writing skills and doesn't know what he's talking about.

>In fact, writing software with C/C++ was hard because developers had to work with different operating system API(Application Programming Interface).

Seriously? Writing C++ apps is hard because you had to use different APIs? That's the least problematic part of C++ app development, behind, say, the fact that your app can segfault when dealing with strings... (especially pre-C++11), or that you could take lunch breaks during builds, which kills GUI iteration.

I feel like this doesn't belong on the front page, and people upvoting this are doing the site a disservice by upvoting based on the title alone.


"Goodbye, Medium articles". That's the article I want to see trending...


I’ll upvote that


Hey, Jeff Bezos' article was pretty good.


I agree the article in and of itself doesn't deserve front-page placement, but the discussion it is generating does. Sometimes I upvote or favorite mainly because I found the comments engaging and informative.


I mean, this is a dead horse beaten over and over weekly, but even if it weren't, I don't expect to read blog spam when I click on front-page stuff on HN - if this becomes the norm the site will lose value for me.


My thoughts exactly. The one problem with native apps is the resources and time you need to develop the same app for several platforms. That's all.


That's really a modern problem that appeared after mobile became mainstream - but by the time that happened nobody was pushing C++ for app development anymore - Android was Java-only for the frontend and iOS was Objective-C.

If you did apps in the days when C# and Java took over from C++, mobile still wasn't a thing and macOS was super niche (especially in enterprise) - few people really cared about cross-platform - demonstrated by C# being Windows-only and still growing in popularity during that time.

The Visual Studio dev experience was just better than anything comparable at the time (Borland's stuff was dead by the C# era), and C++ was just an inferior language for app development (especially pre-modern C++, where you couldn't even rely on the STL being implemented correctly across platforms and people regularly rolled their own containers).


That seems to me to be what the Medium author was saying.


It was a bit hard to read, honestly. A bit of a rehash, but that is most of the content on the internet anyway. Felt like someone writing for a school assignment.


It reads like something generated by GPT-3.


This should tell you something about the quality of patrons on this site.


It is a shit article.


As much as anything else the growth of hybrid apps is a symptom of a couple of things:

1) people now expect apps to run on a million different devices and nobody has the time or resources to develop four or five native apps with feature parity between them

2) progress on UI frameworks is pretty much stalled. Just looking at .NET, because I'm familiar with it and traditionally desktop software has been a big area of concern for it, Windows Forms will work for forever but hasn't been touched in a long time, WPF has some sharp edges that make it feel unfinished but isn't being touched, and WinRT was basically stillborn because of their attempts to tie it in with the whole "new" Windows 8 ecosystem. Meanwhile, browsers are getting new capabilities constantly. It's hard to blame anyone who looks at that mess and says to hell with it and goes with a Web browser-based solution.


Indeed, my prediction is that native apps will make a comeback when the UI toolkits catch up. Flutter is in this vein. And some of the Rust UI ecosystem looks promising (although very early stage).


Agreed, I think the web was simply first in being a highly versatile, capable, and relatively cheap to develop with cross platform UI toolkit.

I’m much more optimistic about the future of native apps, the best ideas from the web are already making their way into Rust and the recently announced .NET MAUI, and with WebAssembly I think we’ll start to see native performance on the web instead of the other way around.


> with WebAssembly I think we’ll start to see native performance on the web instead of the other way around.

I wouldn't be so sure about that. WebAssembly doesn't do anything to help the rendering bottleneck. To do that you really have to replace (or innovate in some way) the DOM.


I think the approach many are taking is to just put up a canvas and draw on top of it. Internally, you may end up creating another DOM-like structure to store all your objects. The worst thing about this approach is that you lose all of the browser's native support, like accessibility. I think Flutter is using a combination of DOM, CSS and canvas.


Sure, but at that point it's not really "web" anymore. You're just writing to a WebGL or WebGPU canvas, which you could do just as well from a "native" app.


So we've basically reinvented Java applets - 25 years later.


> And some of the Rust UI ecosystem looks promising (although very early stage).

Can you recommend any projects to check out?


Iced https://github.com/hecrj/iced and Druid https://github.com/linebender/druid

Iced is taking the "get something working now" approach, whereas Druid is taking the longer term "build it right" approach. I wouldn't recommend either of them for serious projects yet though.

I would also recommend Raph Levien's blog https://raphlinus.github.io/.



Can they catch up? I feel the reason web apps took off is that so much money has been spent on the underlying technologies (HTML, CSS, etc.) that they have become extremely powerful.

I’ve used a lot of UI frameworks, and getting them to do anything complex is always much harder than doing the equivalent in HTML/CSS/etc.


I think so. The web has the advantage of already catering to literally every layout & style use case (grid, multiple columns, floats, inline elements, etc. etc. etc.). But Flutter has the advantage of being designed from scratch, so it can avoid all the idiotic mistakes that CSS made (e.g. box-sizing, the insane difficulty of centering things, etc.).

I don't think it will take too long for the niceness of a sane design to outweigh the lack of features.

The only thing I'm not convinced about is that Flutter-web will ever be viable for most web pages (i.e. non-app ones). It kind of works but it's big and slow. Probably eventually people will do a native website, and then use Flutter for mobile and desktop.


Something that one can easily do even with Motif and Windows Forms, just learn the layout APIs to start with.



> Windows Forms will work for forever but hasn't been touched in a long time

WinForms very recently got HiDPI support, better accessibility features, and was ported to .NET Core where they also fixed bugs in some controls.

WPF in the meantime has been arbitrarily declared stable and Microsoft refuses to fix anything. Everybody is supposed to switch to UWP, which they already deprecated, or WinUI, which hasn't been released yet.


Windows Forms remains supported, yeah, but there are no marquee features to make it easier to develop for, really. That effort seems more focused on getting people to bring their existing WinForms work to .NET Core.


The "marquee feature" of Windows Forms to my mind is the staggeringly large number of commercial component packs available. To this day, getting something working quickly that works as it should for the target operating system (provided that is Windows) is vastly easier than any other alternative.


Of all the Windows GUI frameworks, WinForms has to be the easiest to develop for, for the majority of developers.

Sure, writing large apps is not going to be straightforward, but WinForms is super simple to get started with.


I agree, and it's still pretty much the one I'll use, but it's in a weird limbo where it doesn't seem like they really want you to use it.


They completely revamped the WinForms designer for .NET 5 in Visual Studio 2019. It's buggy as hell (as is the rest of VS2019), but they tried to make it better and are definitely not doing nothing.


UWP and WinUI are marketing terms for the same technology stack that keeps being developed.


Agreed. As the developer of an Electron app, I would jump ship in a heartbeat if a viable alternative emerges. Until then, I'd rather use my time to add new features rather than wrangle multiple implementations of the same app. The core app logic is the same on every operating system, so why on earth should I have to rewrite it three or more times?


I know of people doing the C++ backend as an Electron extension. The C++, which can be compiled on almost any OS, does the non-GUI stuff.


Then why still use Electron? Why not just a C++ backend with an embedded HTTP server, with a frontend that runs in the browser?


For many users that can be an alien experience. For most non-technical people, the browser is still primarily for accessing content on the web, and switching them to a user experience where you have to start a backend server locally and then access it via the web browser can just feel hacky and even a bit off-putting.


You can do that perfectly transparently; the executable just starts the user's standard browser.

It was a standard practice for some applications in the .com days.

Electron is just another example of the disease making Web == ChromeOS.


I would assume for the same reason that they don't just use a normal web app. They want the packaging of a native app.


Package it as a native app; it starts the user's browser pointing at localhost, where its built-in server is listening. There were plenty of examples during the '90s, e.g. the CUPS printer management tools.
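
A minimal sketch of the pattern, in Node/TypeScript rather than C++ (the port, the served page and the per-platform launcher commands are just illustrative):

  // start a loopback-only server, then point the user's default browser at it
  import { createServer } from "http";
  import { exec } from "child_process";

  const server = createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<h1>Hello from the local backend</h1>");
  });

  server.listen(0, "127.0.0.1", () => {
    const { port } = server.address() as { port: number }; // ephemeral port
    const url = `http://127.0.0.1:${port}/`;
    const launcher = process.platform === "darwin" ? "open"
      : process.platform === "win32" ? 'start ""' : "xdg-open";
    exec(`${launcher} ${url}`); // launches the user's standard browser
  });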


I'm not sure the app ESR famously blew up for having a horrible UX is the example I'd point to to say it's not confusing.


There's a lot of people proficient in Electron. It also handles I/O, I assume.


AKA Chrome APIs.


Thanks, I haven't worked with it but assumed.


Security? Seems like any webpage could make calls to your program, no?


That's just standard CSRF stuff, isn't it?
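
For a localhost backend that usually means checking the Origin header and requiring a secret that only your own served page knows; a rough sketch of the idea (port, header name and token scheme are made up for illustration):

  // refuse requests that don't come from our own UI
  import { createServer } from "http";
  import { randomBytes } from "crypto";

  const token = randomBytes(16).toString("hex"); // handed to the UI when it is served

  const server = createServer((req, res) => {
    const origin = req.headers.origin;
    const sameApp = origin === undefined || origin === "http://127.0.0.1:8137";
    if (!sameApp || req.headers["x-app-token"] !== token) {
      res.writeHead(403);
      res.end("forbidden"); // an arbitrary web page can't read or forge the token
      return;
    }
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
  });

  server.listen(8137, "127.0.0.1");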


That is the proper way of doing it, but then some lazy devs would rather code against Chrome APIs instead of web standards.


If you take a look at the GTK+ ecosystem...it's kinda broken, too.

Simple use case: Make a sidebar fade in and fade out, while changing the dimensions of the right box. Pretty close to impossible to implement in a clean manner, even within glade.

And then, try to support a mobile device in a responsive way with libhandy.

Now you throw the towel and just get on with 20 lines of CSS and literally two HTML elements.

CSS should not be underrated when it comes to layout. It is super flexible, and UI frameworks will always lag behind due to their architectural patterns.


I never coded GTK apps, but isn't the theming done with CSS since GTK 3?


Yes, with global CSS, which is literally one file for every single application, extension, GNOME Shell, terminal and UI window on your whole system.

That is why so many apps don't work bug free with other themes. There's no way to predict how your app will behave on another system with another theme.

So personally I think the mess of overused dummy CSS classes in the GTK ecosystem is really bad design. They could've gone with custom, namespaced UI elements instead of that box.something.something shit.

They reinvented divitis. Quite literally.


As a Linux user, I love this though. Brings me a lot more.


I have Lazarus up and running... it's currently taking 32 Megabytes of RAM. It compiles in the blink of an eye, has one of the best possible 2 way GUI builders in the open source world, and I can reach back 30 years into the libraries I wrote in the days of Turbo Pascal 7/MS-DOS and pretty much use them intact.

It amazes me how many people went with the .NET bloatware and all that follows it. Of course, 95% of programmers out there are newer than me and don't have knowledge of this type of efficiency to compare against.


You have to thank Borland's management for that.

Delphi and C++ Builder are still around, but now only some lucky enterprise employees get to play with them.

.NET Native and C++/CX were finally shaping up to be Microsoft's proper version of what .NET and Visual C++ should have been all along.

However, they are the most recent victims of the whole Reunion reboot: .NET Native now has an uncertain future, while C++/CX got replaced by C++/WinRT, with tooling at about the same level as doing C++ with ATL 2.0 in 2000.

Still, there is a certain guarantee of the underlying platform and respective languages being around.

Borland's mismanagement taught me the hard lesson to only use tools from platform vendors. Not only did they decide to leave indie developers behind, they were always late providing bindings to Microsoft SDKs.


> but now only some lucky enterprise employees get to play with them.

LOL, "lucky enterprise" my ass. The only poor souls who still work with that shitty bug ridden stone-age IDE from Embarcadero have to do that because they never managed to get rid of VCL (which might have been nice 20 years ago. Today it's just bad compared to modern frameworks).

Stay away from Embarcadero, don't become dependent on such vendor lock-in.


At least there is a free Community Edition now. It is one release behind (so it doesn't have the latest goodness, e.g. the LSP server for code completion), but it does let you build apps for non-commercial use or pre-income startups for free, using Delphi/C++, the VCL, etc. I.e., if you want to encourage people to use tools like Delphi, it is much more open to indie devs than it used to be.

Link: https://www.embarcadero.com/products/delphi/starter


They are hardly making it easy to get started when there is a whole bunch of information I have to give them in order to download even a trial/community edition. Contrast that to most other programming languages / development environments, where you usually can just download and run it.

  Field required:  First Name 
  Field required:  Last Name 
  Field required:  Email 
  Field required:  Password 
  Field required:  Verify Password 
  Field required:  Company 
  Field required:  Phone 
  Field required:  I have read the Community Edition End User License Agreement and confirm that my usage of the Community Edition version complies with its terms and conditions.
  Field required:  I have read, understand and agree to Embarcadero's Terms and Conditions & Privacy Statement
  Field required:  Yes, I would like to receive marketing communications regarding Embarcadero products, services, and events. I can unsubscribe at any time.


Contrast that to most other programming languages / development environments, where you usually can just download and run it.

There are, sadly, other offenders too. Microsoft is possibly the worst among them. Gone are the days of being able to use a free edition of Visual Studio to develop Windows applications with no strings attached. And good luck even figuring out what the privacy policy is, a problem that also applies with VS Code. I mean, why should desktop software even need a privacy policy?! Oh, right, telemetry, the plague of 21st century software. And then you have the mobile platforms and the offensive conditions and financial cut demanded by their gatekeepers, keeping Microsoft company in the obnoxious developer experiences department.

Meanwhile, OSS development tools and open platforms seem to be blowing away much of the proprietary stuff in administrative and business terms (as well as often in technical terms) now. Too many greedy platform owners trying to lock everyone in, not realising that Ballmer was right all along and without developers their platform is worthless anyway. And discussions like this, and the emphasis today on cloud-hosting (usually running FOSS) and web apps, are the result.


While bringing developers to the stone age before RAD tooling was a thing.

Because when one designs languages over weekends and late nights, state-of-the-art GC, JIT and GUI tooling are at the very bottom of the roadmap.

So thank you very much, but I will keep enjoying Java, .NET and C++ based tooling.


I think newer languages like Rust and Go are obvious counterexamples to your stereotype there. Heck, even JS and Python are. The runtimes, tools, libraries and overall developer experience for languages like these are easily on par with the Java or Microsoft ecosystems today, and in some respects far superior. They are all freely available without any strings attached, and they all work well on open platforms like Linux as well as Windows or Apple desktops.


I'm confused

Why would someone both give up the ease of use and power of VB6 and/or Delphi / Lazarus, and want the bloat of Java, .NET, C++, etc., which just complicate things for no good reason?

It's like the programmers of the world went insane somewhere around 2002.


What is so complicated about .NET or Java?


Having had to use Embarcadero's Delphi, does anyone actually like it? It may be resource-friendly as if it were still 2000, but the rest of the experience is stuck there too. Pascal is ageing and it shows, Embarcadero's IDE is the worst I've ever had to deal with, and is click-and-drag GUI building really so pleasant to work with? I find it easier to lay out a Qt app, visually or in code.


I am fully aware of it, but the damage has been done and very few will give it a second look, which is quite a productivity loss - but so is the circle of "innovation".


I've tried it a few times. The installation was very strange; I remember having to reinstall a few times in order for it to work correctly.


> always late providing bindings to Microsoft SDKs

I remember Delphi adding many Windows features before Visual Studio. Windows Vista Aero support, Support for building native apps for the Microsoft Store, etc.

Delphi never shipped with bindings for 100% of the APIs, but the beauty of Delphi was I could create my own bindings with only a little code from Delphi, so it wasn't a roadblock. That is the huge difference between Delphi and non-native development tools: You aren't held back by lack of libraries or API bindings.


Delphi never had proper support for Metro, initially they faked it with VCL styles.

https://stackoverflow.com/questions/9653260/resources-for-na...

Also how come Embarcadero supports Windows features before Microsoft does?

I am a big Object Pascal/C++ Builder fan, but I also acknowledge the reality of their actual support.


.NET Native is an absolute nightmare in practice. Glacial compile times (I’ve heard many people complain that their Azure DevOps CI times out after an hour), and so many bugs that you only discover at runtime. I gave up on .NET Native after discovering that it can’t even handle the ultra-popular Dapper ORM.

While I like the idea of .NET AOT, the execution left so much to be desired. The developer experience is so bad that I’m shocked that it’s still required for Store UWP apps.


Well, if you have been following the blogs, community videos and GitHub issues, most likely by the time .NET 6 comes out the stack will look like the Windows 7 development model never went away, and the only improvement was replacing COM with the improvements brought by UWP.

With the store sandboxing Win32 apps instead.

https://github.com/dotnet/designs/blob/main/accepted/2020/fo...

https://github.com/microsoft/ProjectReunion

My biggest gripe with .NET, since the 2001 alphas for MSFT partners, was it not being AOT like Delphi (NGEN was never meant for anything other than fast startups).


Why is it we always keep going in circles? When .NET was seemingly, finally, on the right path, they changed again, and it seems to be going back in the old .NET direction.


What is the point of AOT? xcopy + .NET Framework worked perfectly for what I used them for.


Borland dragging their feet on 64 bit support helped me win quite a few customers from one of my competitors.


> It amazes me how many people went with the .NET bloatware and all that follows it. Of course, 95% of programmers out there are newer than me and don't have knowledge of this type of efficiency to compare against.

It doesn't amaze me. It's very productive to work in and the bloat just doesn't matter that much in most environments where .NET is even on the table. Now that prevailing trends are different, you can build leaner .NET Core apps.


It's funny to see this comment, because Delphi and such were actually considered rather bloated back in the day --- a basic windowed app with not much in it (like a "Hello World") taking up several hundred KB was not unusual, compared to tens of KB for the MFC equivalent, and a few KB for pure Win32.

...and a computer having 32MB of RAM was considered outrageously luxurious.


They were using static linking by default though. You could always switch the linker to dynamic linking and get those sweet 20-something KiB binaries.

MFC with C++ was using dynamic linking by default, but you could always switch to static and see the binary bloat to several MiBs in size.

So if you do a proper comparison, there's really not much difference between them.


I did Delphi 7 back in the days, but not sure I understand your complaint: .NET desktop apps do compile and launch in seconds.


Here's the thing. Apple, Google, etc are trying to lock developers into specific platforms.

This is why Apple has been so hostile towards PWAs.

It's totally possible to build a PWA that behaves like a native app but Apple actively tries to destroy them.

There will never be a viable hybrid app platform.


> It's totally possible to build a PWA that behaves like a native app but Apple actively tries to destroy them.

It may look the part but it rarely feels like it.

Something like OmniGraffle would not end up feeling the same. Even Microsoft's apps on mac feel better than their web-based counterparts in O365.


Yeah, proponents of cross-platform frameworks love to ignore the second half of "look and feel". It's ridiculous to claim that a web app can behave like a native app when even Qt apps still tend to have obvious tells.


The important question is, does that matter?

Slick web apps are taking over the software world, whether the likes of Apple want them to or not. Looking like any specific native platform is less important, if it's even relevant at all, in an era when users are working with different web apps each with their own look-and-feel all the time anyway.


Does it matter? Yes, of course. The desire for consistency and usability may not be strong enough to reverse the current trend away from native apps, but that doesn't remove their advantages.

And I find it very telling that even your own comment continues the trend of over-emphasizing look over feel. I'd be happy with apps randomly launching in night mode, if they would just have all the right controls in the right places, and have the performance and memory footprint of native apps.


The problem with that argument is that you're assuming consistency with native platform standards is the dominant consideration. I contend that, today, it often is not. Many users are spending much of their time using web sites and applications rather than native ones; this trend does not appear to be in dispute here. Moreover, the basics of how web sites (and by extension web apps) work have a longer history of established conventions than any of the major native platforms; the likes of Jakob Nielsen were making the case for consistency and following user expectations more than two decades ago, long before the likes of iOS and Android and whatever we're calling Windows' UI today were glints in their respective creators' eyes.

Also, web technologies can perform just fine the vast majority of the time. Modern JS engines have excellent performance, obviously not rivalling expertly coded C and assembly, but certainly comparable to your average native application for most purposes. Modern browsers also have good support for hardware acceleration and can render UIs that respond quickly enough to user interactions that again for most purposes there is no perceptible delay or jank. Today we're looking towards WebAssembly as a vehicle for potentially more efficient languages and runtimes, though clearly that technology is still in its infancy and its future is far from certain. Now, you can undermine all of that potential if you bloat your web site/app with tens of megabytes of junk scripts that are all competing for resources and blocking stuff and phoning home and whatever, but that's not really the fault of the web technologies, it's just bad developers and/or bad managers creating a bad application.


> Moreover, the basics of how web sites (and by extension web apps) work have a longer history of established conventions than any of the major native platforms;

That "and by extension web apps" bit is completely wrong. The long-standing conventions of how web sites work are mostly irrelevant to fancy web apps, and to the extent that they are relevant, web apps break them left and right. Just look at how many web sites/apps hijack or break scrolling or the back button or the ability to middle-click on a link and get a new tab or the ability to highlight text. Web apps are all about breaking the usability standards for web sites and replacing them with a bastardized version of the usability conventions from various native OS toolkits. But in spite of that, nobody ever really expects drag and drop or rich copy and paste or any other data exchange mechanism to work between web apps.


Just look at how many web sites/apps hijack or break scrolling

Hardly any? That's been a minor trend in web sites for a while, but changing scroll behaviour for no good reason is widely regarded as an antipattern by UI professionals. I don't recall ever seeing normal scrolling behaviour subverted in anything I'd call a web application.

or the back button

This is a tricky one from a usability perspective, because some users see URLs as shortcuts to particular parts of a web application and expect the back button to behave accordingly as they navigate information in the app, while others think of the whole application as being a single page and expect the back button to just leave everything. But there is a whole set of browser APIs for managing that behaviour, and it's something a well-designed application will at least present in a consistent and logical way.
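
To be concrete, a tiny sketch of what I mean using the standard History API (the view names and render helper are illustrative): pushing one history entry per view keeps back/forward behaving the way users expect.

  // give each app view its own history entry so the back button behaves
  window.addEventListener("popstate", (event) => {
    renderView((event.state && event.state.view) || "inbox");
  });

  function openView(view: string): void {
    history.pushState({ view }, "", "/" + view); // updates the URL, no page reload
    renderView(view);
  }

  function renderView(view: string): void {
    document.querySelector("main")!.textContent = "Showing: " + view;
  }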

or the ability to middle-click on a link and get a new tab

This sounds like you're talking more about web sites than applications again, and again it also sounds like you're talking about bad design that web UI professionals would universally disagree with. It's usually caused by newbies who read some style-over-substance tutorial and decided that making links or buttons with elements other than the designated anchor and button ones that exist for that purpose was a good idea. After the first few glaring usability problems, they'll learn better.

or the ability to highlight text.

I don't really understand this one at all. The only times you wouldn't be able to highlight text in a web application would be if the designers have actively prevented it, for example to prevent selecting UI labels along with the content of a text field. This typically works exactly the same way as any native desktop application.

But in spite of that, nobody ever really expects drag and drop or rich copy and paste or any other data exchange mechanism to work between web apps.

I don't know what you mean here, either. Dragging and dropping text between web applications typically works fine. If by "rich copy and paste" you mean other more complicated data types, what happens is obviously highly context specific, but once again, this is the same story on native desktop applications. It's not as if you can copy a selection of spreadsheet cells and paste them into a drawing package with obvious and meaningful results either.

As a final comment, the examples you're talking about here all seem very "meta". As such, they're not particularly interesting to me, because as a UI developer working on a web app you generally get the expected behaviour by default with these kinds of things. Sure, some people can and do break them. They're just bad UI designers. Some people make native applications with shocking pink skins over the normal window dressing, too, but that doesn't mean all native apps are bad.


I keep hearing the "web technologies are fine, you're just using them wrong" refrain over and over again, but it's ultimately not very convincing. If it's true, then where are all the good web applications? I've certainly never seen any of them. Can you give an example?

If nobody can get Web technologies to deliver a good experience, then I'm not sure it matters that much whether or not it's possible in theory. Delivering a theoretically great user experience won't get you anywhere unless it also translates to practice.


What do you consider a good application? If you have never seen anything that would qualify on the Web then I have to ask what standards you are seeking and whether any software actually meets them.


At least for the purposes of comparing to web apps, the criteria would be: responsive/low-latency (responds to input quickly), fast (completes tasks quickly), uses resources proportional to the functionality it provides, and doesn't often hang or spend noticeable amounts of time waiting for a network request before responding to an action.

Applications that meet these criteria include: Thunderbird, KiCAD, VLC, Vim, tmux, Blender, evince, Handbrake, and Pidgin.


responsive/low-latency (responds to input quickly), fast (completes tasks quickly), uses resources proportional to the functionality it provides, and doesn't often hang or spend noticeable amounts of time waiting for a network request before responding to an action

That seems like a fairly low bar to clear. Aside from the network request issue, I think every web application I've worked on for a decade or more would tick all of those boxes, from intranet tools to browser-based interfaces embedded in device firmware. I'm sure there are many others in the industry who could say the same. A lot of the things I'm thinking of are for internal use, but in terms of public examples, you can just look at most of the successful big-name business SAAS applications, and they tend to be strong on these requirements as well. Ease of use is a huge selling point for attracting customers, and no-one is winning points by being clunky in their web GUI in 2020.

Network speed and reliability is a different issue, and obviously many web applications are particularly vulnerable to problems there because they have such a strong communication element. But then the same is true of native applications that are for communication or a front-end to a client-server system like a central database.

Applications that meet these criteria include: Thunderbird, KiCAD, VLC, Vim, tmux, Blender, evince, Handbrake, and Pidgin.

That's an interesting set of examples. The other point under discussion was about whether web applications cause usability problems by deviating from native platform UI conventions. I can't help noticing that several of the native applications you mentioned there do exactly that.

For example, Blender's UI was infamous for being so unusual that anyone coming from other 3D modelling software found it hard to use, and for looking and behaving nothing like a conventional native application on a platform like Windows. Eventually, that became so much of a problem that they basically rewrote the whole UI layer to work in more conventional ways.

Handbrake has its good points, but its interface looks like a GUI from the early 2000s where the designer just threw as many different types of control onto a form layout as they could manage.

Thunderbird also has its good points, but its UI is incredibly glitchy in some areas (dragging and dropping comes to mind) and it definitely fails to meet your fast and responsive criteria at times.


While there are really good-looking examples, to me most web apps do not look slick at all. They're rather extremely unergonomic.


I agree with that. I've been struggling with that on a web app I've been working on.

Part of the problem is a new twist on the old problem. I don't have to develop for Mac and Windows and Linux, I have to develop for Desktop PCs, Tablets, and Phones. My options are to create a GUI for each of them or use something like Bootstrap to build one that runs on them all, and that has limits and trade offs, but it's still pretty good.

One of the things I've done with this upgrade is provide a way for the user to store their data in the CouchDB native app running on their desktop PC. It's a "local-first" and "offline-first" web app so once it's installed it doesn't need or use an internet connection and while I have no way of making a comparison it appears to me to run pretty close to native app speeds.

When CouchDB is installed on the user's desktop PC any web app configured to use it can use it. It only requires the user fills out a simple web form to set up a user and database for the app.

Taken together, a modern web browser and CouchDB come pretty close to fully featured client side runtime environment for desktop PC web apps. It wouldn't be too hard to create a web app that looks and feels very much like a native app when running full screen on a Mac and Windows.

A client side runtime for web apps is something I've been thinking we need since I built my first web app and that was before they were even called "web apps". I'm not the guy to make that, but I think we need it. CouchDB and a web browser come pretty close.
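
For what it's worth, the plumbing for this is small. A rough sketch of the setup I'm describing, assuming PouchDB in the browser syncing to the CouchDB on the user's machine (database name, credentials and port are illustrative):

  // local-first: writes land in the in-browser store, then sync to the
  // CouchDB instance on the user's own desktop whenever it is reachable
  import PouchDB from "pouchdb";

  const local = new PouchDB("app-data"); // backed by IndexedDB in the browser
  const desktop = "http://app_user:app_pass@127.0.0.1:5984/app-data";

  local.sync(desktop, { live: true, retry: true })
    .on("error", (err) => console.warn("sync paused", err));

  async function saveNote(text: string): Promise<void> {
    await local.put({ _id: new Date().toISOString(), text });
  }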


I don't disagree, but I think you could say the same about native apps too. For this discussion, I think the most important thing is what good examples of each type can achieve, since presumably those are the ones that most people will choose to use.


>"I don't disagree, but I think you could say the same about native apps too"

Of course I could. I was just countering the original point, which sounded like web apps are slick just because they're web apps. Making a good GUI is hard (well, I'd exclude Hello World here).


I was just countering the original point that sounded that web apps are slick just because they're web apps.

Sorry, maybe that was written ambiguously, because that wasn't the intended point at all. The point I was trying to make was that the web applications that are slick are taking over the world. Being polished and easy to use is a big advantage, and IMHO there's been a lot more progress on this front in web development recently than in native applications.

In contrast, the mobile platforms are far too much style-over-substance and have glaring usability problems as a result.

Most desktop applications haven't really changed their basic form for decades; they just show up with flat icons and kindergarten levels of bright colours these days. That does mean there is a level of consistency and familiarity, which is valuable. However, it also means most of them aren't benefitting from decades of further experience and research in UI design and from newer UI patterns coming out of that experience that have proved to be effective in other contexts.


OmniGraffle could be fine; hasn't Figma shown us the way?


Figma falls into the trap of all other claimed high performing web apps, conflating maintaining a 60fps refresh rate with responsiveness. Try dragging any object around (even simple shapes) and you'll see it trail the mouse cursor in Figma. Does not happen in OmniGraffle. The result is a distinct difference in feel.


Can any person on the planet edit the same document as you in real time, for free, running hardware up to 5 to 10 years old, within 3 minutes of receiving a link, with OmniGraffle?

Because that is the main feature of Figma. The mouse latency issue is a good trade-off from my POV.


Fair point! I might file this under 'latency' which is often ignored for improvement so long as it's tolerable.


It easily could feel like it, given a very reasonable amount of support by platform owners.

There is nothing magical about Ui powered by compiled vs interpreted code. It is limitations in the platforms that are the roadblocks.


I always hear about these theoretical web apps that are just as good, nay, better, than native apps. Where are they?! Certainly no web app I ever used would qualify...


I think that can often be due to a lack of implementation skill.

Our users (https://usebx.com) seem to appreciate the native performance and feel of our web app.


You should try Figma. Will open your eyes a bit.


Yeah, you can only use it when you got internet


The fact you can just run it without installing anything makes up for a lot of faults (doubly so when the vendor isn't Microsoft but somebody nobody's ever heard of).


On the flip side, most web apps don't offer their full feature set without requiring you to go through a sign-up process that is at least as onerous as downloading and unzipping a Mac app bundle. Even when a web app outsources authentication to something like Facebook, there are still at least as many clicks.


It's not about clicks so much as sandboxing.


Mac apps are sandboxed.


I'd nevertheless sooner visit an unknown Web site than install an unknown application.


Isn't that curious then, that Apple is forcing some developers to turn to web-based solutions [1,2] instead of encouraging them to actually create native locked in experiences?

Seems to me that it's less "Apple is purposefully malicious towards PWAs because of their master plan for platform lock-in" and more "Apple has a very long history of being completely incompetent on the web (see: iCloud as a whole, Safari now slowing standards adoption, newer web endeavors like the Apple Music web app), which in turn is slowing PWA adoption because they now own one of the most dominant mobile platforms."

[1] https://www.businessinsider.com/microsoft-xbox-game-pass-app...

[2] https://www.theverge.com/2020/9/25/21455343/amazon-luna-appl...


> It's totally possible to build a PWA that behaves like a native app but Apple actively tries to destroy them.

Ugh, no. No it's not. At least not yet.

Accessibility features alone are almost always woefully crap in web-apps, compared to what native apps have access to, at least on macOS (and SwiftUI is amazing in how it lowers the barriers in implementing accessibility in your app from the get-go.)

Shit like Electron and PWAs seem to be championed by user-hostile developers that just want things to be easy for themselves, without considering what's best for the users and their hardware resources.


Shit like Electron and PWAs seem to be championed by user-hostile developers that just want things to be easy for themselves, without considering what's best for the users and their hardware resources.

There is a standard counter to that argument at this point. Developers on native platforms might be able to achieve a better experience on that platform than a web app given the same time and resources. However, if the time and resources have to be split N ways to build a native app on each of N platforms, compared to investing everything into polishing a single web application, the outcome might be very different. While each native developer is still worrying about whether they're aligning and labelling a button in the platform-standard way, the web team is already refining their UI using the results of their third round of usability testing and has determined that the button shouldn't have been there in the first place and designed and implemented a more intuitive UI. And since they launched their version 1 two months earlier than any of the native apps did, they even have the extra revenue in the bank to pay for those usability tests, too.


> While each native developer is still worrying about whether they're aligning and labelling a button in the platform-standard way, the web team is already refining their UI using the results of their third round of usability testing and has determined that the button shouldn't have been there in the first place and designed and implemented a more intuitive UI

Your hypothetical scenario seems to be treating the app's UI as if it exists in a vacuum, rather than existing alongside other apps.

If there's a platform-standard UI convention that applies to a button, then UI testing in the context of that platform is probably not going to tell you to remove that button entirely—you shouldn't be surprising users by removing functionality they expect to find present. And if there is a UI convention that tells you how to position that button, you probably shouldn't A/B test the positioning of that button and should focus your usability testing on the UI elements that are not dictated by the platform's standards and conventions.


In God we trust; all others must bring data.

(Origin unknown)

Your arguments use the word "probably" a lot. As someone who does a lot of UI design professionally, I prefer to rely on the kind of user testing you apparently dismiss, precisely because prior expectations about what works well so often turn out to be inaccurate. Indeed, there have been plenty of native platform standards that have awful usability in recent years, which have rightly been criticised for it by professionals wielding empirical evidence.


> Indeed, there have been plenty of native platform standards that have awful usability in recent years, which have rightly been criticised for it by professionals wielding empirical evidence.

Platform native UI conventions are very often sub-optimal, if only because they're old. But sub-optimal standards are very often preferable to unpredictable, and are definitely preferable to having to juggle multiple conflicting UI conventions at the same time when multitasking. That's why we still have QWERTY, and why all the surviving scrollbars are on the right, and why pie menus never caught on.

You say you test UI designs professionally, and claim the higher ground of having empirical evidence. But it still sounds like you're using worthless methodology by focusing only on your one app at a time and ignoring how it fits into its environment and the user's broader workflow. Is that correct, or have you actually quantified the overall productivity loss an app introduces by violating the user's expectations and habits?


Platform native UI conventions are very often sub-optimal, if only because they're old. But sub-optimal standards are very often preferable to unpredictable

The reason that junk like flat design and derivatives like Material Design are awful for usability has nothing to do with being old and everything to do with being unpredictable. Often, a user literally can't tell what parts of an interface are interactive or how they work, because affordances barely exist. It's like the old mystery meat navigation meme for web sites, except they actually did it seriously and thought it was good.

But it still sounds like you're using worthless methodology by focusing only on your one app at a time and ignoring how it fits into its environment and the user's broader workflow. Is that correct, or have you actually quantified the overall productivity loss an app introduces by violating the user's expectations and habits?

Well, firstly, a testing methodology is literally the opposite of worthless if it gives you an objective measure of the increased financial value generated by a change under consideration.

Secondly, you assert without evidence that the kind of change we're talking about does violate the user's expectations and habits, and you further imply that this causes a loss of productivity. As I have argued in earlier comments, the assumption that the user's expectations are governed primarily by their native platform's conventions is not necessarily valid any more, because users spend so much of their time inside a browser using online facilities instead of other native applications.

Moreover, the answer to your other question is yes, we have done many tests over the years that compared options including the native approach on various platforms with some other options we were considering. In the nature of such tests, the outcomes varied. In some cases, we did end up going with presentation similar to the native conventions on one or more platforms; often this coincided with cases where the native conventions across major platforms were similar as well. In other cases, we went with a completely different presentation style, as performance with the native conventions was significantly worse.

The point of all of this is still that ideally you don't want to make UI decisions based on assumptions or dogma if you could try different possibilities with real users and make your decisions based on objective evidence instead.


> The reason that junk like flat design and derivatives like Material Design are awful for usability has nothing to do with being old and everything to do with being unpredictable.

I'm surprised to see you mentioning the flat design trend as something you consider "old" in any way. I see it as a fad that is past its peak but still far too prevalent to regard as being in the past. And when I was talking about platform native UI conventions, I definitely had older stuff in mind than Windows 8.

> Well, firstly, a testing methodology is literally the opposite of worthless if it gives you an objective measure of the increased financial value generated by a change under consideration.

See, this is the biggest problem here. I'm talking about usability and value to the user. You're talking about optimizing the UI to exploit the users for your maximum profit. Those two motivations are obviously not well-aligned, and if you're on the side of that divide where the ad-tech stuff is, then you're not even trying to have the same conversation I'm having. Your incentives are to maximize the user's engagement with your product, so of course you don't care about how well it fits into their multitasking workflow; you want to monopolize the user's time.


I'm talking about usability and value to the user. You're talking about optimizing the UI to exploit the users for your maximum profit. Those two motivations are obviously not well-aligned

I could not disagree more strongly. I have built a career, in no small part, on a simple business model of creating software that users like because it's easy and works well, and consequently attracting and retaining happy (and paying) customers. This has absolutely nothing to do with ad-tech, which I generally regard as a toxic business model for exactly the reasons you're arguing.


PWAs are extremely user friendly in one dimension: they frequently take 1-2 orders of magnitude less space. This is very relevant for low-end devices, and has been one of their biggest selling points since inception.


You really think accessibility for native apps will always be better than a web app that follows a11y?


I think so. If we're talking about "out of the box" or what you'd find on average. As an Android dev for a few years now I think we get a lot out of the box and I get that web devs also use libraries or frameworks that have similar benefits. At that point it's comparing framework A's a11y vs framework B's a11y vs Android a11y.

Not an Android fanboy, but I'm going to assume that the bigger (widely used/constantly iterating) "platform" (for lack of a better term) has better a11y. And if there is a web framework that provides this (React?), do the majority of websites use it, like they do native APIs for native apps?

(and this isn't even talking about the api's accessible to native vs web app)


Doesn't really pass the smell test.

Google is developing cross-platform Dart/Flutter.

Apple is developing cross-platform Swift/SwiftUI.

Facebook made React and then cross-platform React Native.


Apple is definitely not developing cross-platform. Swift the language is cross-platform, but SwiftUI is only "cross-platform" within Apple's own devices. A truly cross-platform framework would let a SwiftUI app work on Android.

Native OSs can be built to be "cross-platform". For example, the OpenGL API: the same API works on different OSs, so developers can code against it once and it works everywhere. In an ideal world, maybe there could be a standard API for presentation controls, UI drawing, animation, 2D/3D graphics, networking, filesystem access, threading and more. Each OS would implement the same API and add additional platform-specific APIs to differentiate itself. The key is that application developers would have a common core set of APIs and a common language to implement the 80% that is business logic and UI logic.

Note, this is essentially what HTML, JS and CSS are doing. But the web platform is creating a runtime that exposes APIs to do a lot of different things. A CSS transform is a single API that causes the DOM to animate in a specific way. There are thousands of these APIs, and the runtime implements all of them. This is why the web runtime itself is heavier and it takes ~100MB just to show something simple on the web platform.

For Flutter, the core engine is just a 2D drawing surface; the APIs it exposes are just for drawing shapes. Each widget is self-contained (rendering, various settings), and the application pulls in only the widgets used in the app. This makes the runtime smaller. Flutter is more efficient because the abstraction is lower and the core runtime is trying to do fewer things. On the scale of level of abstraction, Flutter is on one end and the web platform is on the other. For our ideal OS platform, we could select the right level of abstraction to balance performance, standardization, and flexibility.

But in the real world, all of this requires collaboration between OS vendors. Apple's business model is to try to sell more iPhones, Macs, Apple Watches and iPads. They make the argument that Apple's platform has the best apps, ones that aren't available on Google's or Microsoft's platforms. And this actually works. Why are Android tablets not taking off while the iPad Pro is? People buy the iPad Pro for apps like Notability, Photoshop and more. People still buy Windows and not ChromeOS because it's got native Photoshop and Matlab. These apps are coded using Apple's or Microsoft's languages, frameworks and APIs. And that is exactly what prevents these apps from appearing on Android and ChromeOS easily, and what keeps people needing to buy Apple's and Microsoft's devices. While these vendors may not say they are actively trying to lock in developers, they definitely don't want the developers who coded a complex application for their platform to be able to easily move it to another platform. If that transition cost is too low, it doesn't play into their business model. Their business model pushes them to differentiate their platform against others, and as a side effect, it increases the barrier and transition cost.

This is a tug of war between application developers' desire to have all platforms be as similar as possible, and platform owners' desire to differentiate their platforms and prevent other platforms from obtaining the same capabilities.


Are you suggesting Javascript has access to all hardware functionality exposed to C/ObjC/Swift?


With a proper permission system in place, sure.

Desktop apps got the permission model all wrong. You run a program, and that grants it access to everything.

The web's model is closer to that of mobile apps. It asks for a permission at the time that it's needed. Not all sites get this right, but browsers are starting to crack down on requests the moment you enter a site.
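
For example (a small sketch; the helper is made up), camera access only triggers the browser's prompt at the moment the feature is actually used:

  // nothing is granted up front; the permission prompt appears on first use
  async function startCamera(video: HTMLVideoElement): Promise<void> {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      video.srcObject = stream;
      await video.play();
    } catch (err) {
      console.warn("camera permission denied or unavailable", err);
    }
  }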


I've heard of Ionic/Cordova which gives JS apps the ability to access native APIs, but haven't used it to see if it's truly capable of all functionality


There are many local markets across the globe where Apple devices are irrelevant, so trying to block PWAs won't work as much as they would like to.


If it were so easy, why is Postman pure garbage compared to Paw on mac?


This is why Apple should be forced to allow multiple web browser rendering engines on iPhone. Cause they're so damn anti-competitive and being forced to develop for Safari makes developing web apps horrible


I feel like writing a 'goodbye goodbyes' medium piece which talks about never listening to a 'goodbye' article again.


Can you find an example of a tech goodbye article which has been wrong?



VSCode is a poor example. It's a fully featured IDE, and sure, it would be lighter without Electron, but it still wouldn't be a lightweight piece of software. I use JetBrains, which is not Electron-based, but it also uses its fair share of RAM.


To be fair, JetBrains runs on the JVM which arguably has the same problems.


Yeah but if it didn't, would I be able to run it with no differences on Linux, OS X, and Windows?


But maybe you shouldn’t be able to run it with no differences on Linux, macOS, and Windows.


Well... why not, exactly? I have work to do on all three, as do many others, and it seems like a win both for me and Jetbrains for that not to matter.


Because the platform conventions are different.

Think of it this way: if you have a system full of apps that conform to platform conventions, then you only need to learn the platform conventions and you’re suddenly more or less an expert in every new app you encounter.

If you need to learn every. single. apps. stupid rules and UI all over again, then sure, you can transfer skills in that one app over to another platform, but that’s it. You want an email app? Good luck learning every keyboard shortcut and ridiculous UI decision all over again.

Let’s just say this paradigm shift has not been driven by people with OCD (or good taste, for that matter).


Ok but another way to think of it is I've been using IntelliJ 30+ hours a week for years and don't want to throw away those skills because my new job wants me to use a different OS. I'm sure power users of, e.g., Excel or Emacs or vi have similar feelings.


I wonder if forcing IntelliJ to run with ZGC would help since it releases unused memory back to the OS after some time.

I did this with the Eclipse Language server and my memory usage cut in half on the same project.
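
If anyone wants to try it: via Help | Edit Custom VM Options you can add something like the lines below (assuming the bundled runtime is JDK 11-14, where ZGC still sits behind the experimental flag; I haven't measured the effect on IntelliJ myself):

  -XX:+UnlockExperimentalVMOptions
  -XX:+UseZGC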


Most of the core functionality of VSCode is powered by native extensions written in C++.

It's essentially only using HTML and JS as the UI engine.


This is false. A very small amount of the core functionality of VSCode is powered by native code; for example, find-in-files shells out to ripgrep. The majority of the other usages are mainly just exposing various native APIs to JS (like ptys, fast process trees, Windows-specific APIs not surfaced by Electron, etc.). The regex engine used for parsing TextMate grammars is a special case that uses WASM.


Around 1997 I was able to carry around a copy of Homesite on a floppy and run it in many places, on machines in the neighborhood of 32MB of RAM. It understood multiple languages, you could define additional ones and your own tag fillers using a custom XML syntax, connect to a database to inspect schemas, etc. Not as full featured as VSCode, but it had an amazing amount of functionality.


Visual Studio -- itself a fully featured IDE, even back in the early 2000s -- used to run on machines with less than 1 GiB of RAM.


It also didn't support as many languages/features and was super slow on contemporary hardware.

Even my 9 year old laptop can run VSCode just as fast as my 3 year old main machine.

Back in 1998 when Visual Studio 6.0 (actually just the second publicly released version) was released, it took ages to load and used up quite some RAM. 6 years later, when I switched to VS.NET 2003, VS6 ran super fast on my PC.

The difference was, however, that my 1998 PC was a 300MHz Pentium II with 64MB of RAM, whereas my 2003 machine was an overclocked AMD Athlon XP at more than 2500MHz with 2GB of RAM.

Now compare that with my 2011 laptop vs. my 2017 laptop: my 2011 machine has a dual core 2.5GHz CPU with 8GB of RAM. My 2017 laptop is a dual core 3.5GHz CPU with 16GB of RAM.

22 years ago, 5 years of progress meant a 10x increase in RAM and 8x increase in CPU speed. During the past 10 years we saw a doubling in RAM (barely, TBH) and maybe 1.5x in CPU speed (at the same core count).

That's why the old software felt so fast - because we used it during a time when a major PC upgrade actually meant something. If you used a software package for five years, you could actually see more than a doubling in performance (e.g. Core2Duo E8600, 2008 vs Core i3-4350, 2013 [1]).

That's just not the case anymore and skews our perception regarding performance heavily towards "lean" vintage apps.

[1] https://www.techspot.com/article/1039-ten-years-intel-cpu-co...


I don't remember Visual Studio 6 (VC++) ever being slow at the time on 1996-era hardware - quite the opposite (other than the dreaded "updating intellisense..." which would hang things for a while).

I distinctly remember subsequent versions (.NET 2002, which I think had the UI re-written) being a lot slower than 6, and me still using version 6 when I could because of this.

In fact, I can remember VS 6 opening in seconds, compared to later versions being much slower.


I do remember using my enterprise edition with C++, Visual Basic, SQL Server T-SQL and Architecture Modelling tools.


Nope, it's actually quite a good example, because one can compare it to Emacs, vim, or Sublime Text loaded with a similar set of plugins.

And it isn't an IDE, rather a programmer's text editor.


For decades people complained that Emacs was huge, bloated, etc. Now there are bigger web pages.


“Eight Megabytes And Constantly Swapping” must be an alien sentiment to so many younger techies.


Emacs is an alien phenomenon to younger techies. Who would use an editor that's older than their dad?


There are two kinds of fools. One says, "This is old, therefore it is good"; the other says, "This is new, therefore it is better."

William Ralph Inge wrote that in 1931, but it has never been more true than when applied to modern technologies today.


They are using a Unix descendant, probably as old as their grandpa, in their pockets.

TTYs date back to the Victorian era. AUX cables date back to the 19th century, too.

Most silverware designs at home date back to 1800-1900.


Except I don't need 2020 hardware to run an application that replicates my Emacs experience on 1995 hardware, with hardly any improvement worth mentioning.


It seems a bit unfair if your computing environment is faster than a 0.0125 MIPS KA-10, though, doesn't it?


I use Emacs exclusively, but VS Code is way snappier when dealing with JSX syntax, at least.


I haven't compared RAM usage, but VS Code feels (subjectively, obviously) very slow by comparison to me. IntelliJ is a beast of an application in terms of features, also.


I like to joke/not really joke that (insert JetBrains product)’s main feature is converting laptop battery charge into heat and fan noise ;)


In my opinion, VS Code is not a "fully featured IDE". In fact, I would argue that it is even not an IDE. Simply because, "I" ("Integrated") in IDE implies lack of external dependencies or, in other words, self-sufficiency. Install PyCharm, Rider or any other real IDE, for that matter, and you have a truly fully featured development environment (yes, you can install some optional plugins, but it is largely not needed). Install bare-bones (i.e., without any extensions) VS Code and you have just a nice development-focused editor, but definitely not an IDE.


Interesting that you mention PyCharm. VS Code has a superb integrated experience for TypeScript out of the box, with language, refactoring, and IntelliSense support, etc.; no plugins or additional language servers needed.


Well, this is not terribly surprising, considering TypeScript's origins at Microsoft. Not adding full IDE support for their core languages (.NET) to VS Code is understandable as well, though - the company does not want to jeopardize sales of Visual Studio, their commercial true IDE. In any case, you could argue that VS Code has IDE-level functionality for TypeScript, but I still stand by my general argument.


Electron is only the latest in a long line of almost good enough cross-platform solutions. Java for example was supposed to solve this problem but the widgets never looked quite right and Microsoft threw a wrench in the works so Java-on-the-desktop never quite caught on.

What Electron gets mostly right is the UI, since everybody is used to browsers. What it gets wrong is insisting on shipping an entire browser rather than using the platform's native webviews, so the result is ridiculous bloat.


Or native apps will go through a revival due to new languages, compiler optimizations and access to intrinsics and C libraries.


As a developer of a native Mac email client, I sincerely hope that this is true. At least in the Apple ecosystem, there are new technologies like SwiftUI and Mac Catalyst that should make this easier, particularly for the army of iOS developers looking to bring apps to the Mac.

Before starting my app, I did briefly consider going the cross-platform route, and I realize that it's possible to build a decent app that way. But personally, I could always tell when I used one. Little things like swipe gestures and drag and drop didn't quite work right. The extra polish, consistency, and speed of going native is so nice for things you use all day every day. With the shift to remote work, people might notice this extra 5-10% of polish more than they did a year ago.


It's a double-edged sword. I like that no matter what OS I'm using I can set up IntelliJ to work exactly the same, down to non-native keymaps.


Technology still matters. It's not native vs not-native, there are many points along this scale. An app written in Swift on an iPhone should run faster than a similar app in Java on Android. If they run about the same, remember that the iPhone can do it with less memory and battery consumption. The Java Android app is more native than a React-Native app executing js.


Yep, so the goal should be to compile everything down to light binary images the first time, and not require mobile devices to work it all out at runtime.


Modern Android versions have multiple tiers.

Interpreter written in Assembly for fast startup, if the application was never executed.

Followed by JIT compilation to native code; when the device is idle, the PGO data collected by the JIT is used to produce a binary for direct native execution.

As of Android 10 those PGO files are uploaded to the stores and then if a similar device installs the application, they will get the PGO data as well and thus achieve a relatively fast result for their initial compilation.
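If you want to see that last step without waiting for the idle maintenance job, I believe you can force it on a development device with something along these lines (the package name is a placeholder):

    adb shell cmd package compile -m speed-profile -f <your.package.name>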


I am unsure we're going to see a performance increase there; nobody has really made the highest performance languages faster by more than an order of magnitude in the last couple decades.


As an avowed skeptic of all things deep learning, I do wonder if there is a role for AI-generated native code from some form of pseudocode.

I know that a lot of old school code used very little memory due to everything sharing one set of libraries. I wonder if in the future, “installing an application” could be flipping a bit in a registry and your device being delivered a highly optimized monolithic system image.


> I do wonder if there is a role for AI-generated native code from some form of pseudocode.

I think that's pretty much what a good compiler is meant to do?

smaller binary images would be amazing though, especially when OS dependencies are required


I hate this trend so much. I just dug around in Time Machine for the old version of Evernote, after being frustrated by the Electron trash they are calling version 10. It's missing so many little touches that just magically happen when you make a real application instead of having to fake everything from inside a browser.


I’ve been tempted to do a native clone of an early version of Evernote for a long while now. Learning it's turned into Electron trash might just push me over the edge to actually do it.


Have you tried Notational Velocity?


It’s been abandoned for almost a decade now. It survived changes until now.

In Catalina it’s crashing left, right, and center. Moved to nvAlt which, it seems, is abandoned now as well. The author is focused on something called nvUltra. No idea what that is.

Besides, for people who use Evernote, nv isn't a replacement at all.


searches

No sync, looks like it won’t let me put images/PDFs in, no way to collaborate with my co-writer on projects, last version is from 2011. Nope. Haven’t tried it, don’t think I will, I dunno who it’s for but it’s not for me.


I mean, that Pentium III he shows up top probably cost $900, and even with inflation, a Windows laptop with 4GB of RAM costs $250 off Amazon.com right now. The "low-end PCs" that the author complains are being left out by Electron apps... a lot more people have PCs capable of running VS Code than had PCs that could run the old Borland IDEs back then. Computing is more accessible now, not less.


The browser is the new OS. For most users, a Chromebook-like PC is all they'll ever need. On mobile, most native apps should be web apps or PWAs. Native code is now unnecessary in terms of hardware/OS feature access and performance.

In order to run such stacks we need to devote a couple of GB's of RAM for the OS and browser. Not a bad deal if you compare cost to benefits.


I don't think it's impossible to have (relatively) lightweight hybrid apps. The article focuses entirely on electron, which ships a whole browser. There are other options, such as pywebview, that use the system native browser. Still means the underlying JS bundle has to perform, which goes the same for websites too.

That said, I normally seek native apps where possible (Ripcord for Slack, Sublime), but that's also pretty hypocritical of me given Kanmail!


Well, HTML/CSS/JS apps can fit into 5MB even when they contain the HTML/CSS/JS engine linked in.

I've started publishing Sciter.JS builds to prove that:

https://github.com/c-smile/sciter-js-sdk

Those binaries contain HTML5, CSS 2.1 + some Level 3 modules, and a full ES6 engine.

4-5MB of binaries is comparable with the hello-worlds of purely native Qt, wxWidgets, etc.


Unfortunately, Sciter is not opensource. Qt and wxWidgets are.


There are also "chromeless" apps, which are just web apps (no Electron) that run in the default browser, but without the address bar, tabs, etc. You can run the "app" by, for example, adding the "--app" flag for Chrome, or -k for iexplore. Then there is also "add to home screen" (A2HS).
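For example, something like this (the exact binary name differs per platform, and example.com is just a stand-in):

    chrome --app=https://example.com

gives you a plain window with no address bar or tabs, backed by whatever the site serves.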


Paw is a far better app than Postman, on a Mac, for no other reason than it is native and behaves as you would expect.


Last time I checked, this app felt native: https://hoppscotch.io/

(Note that you can use it after a single click; how can I do the same with Paw?)


It feels like a browser app. It’s snappy, but not as snappy as native.

You also need to expend several more clicks to install a browser extension to get the same level of functionality that Paw has.


I wonder if there is really a big difference between using an Electron-based application and a progressive web application (PWA). My current guess is that the biggest advantage of PWAs is that you use the same browser for several applications. But I never really tested my guess.


An advantage/risk of Electron is inheriting the user's filesystem and device privileges.


The cheapest laptop I could find on walmart.com right now has 4GB of RAM. "Low-end" isn't what it used to be! Only techies care about things like how much RAM a program is using. End users, from my experience at least, don't really care - as long as it works.


You might be right with respect to non-techie users. But, it's not about whether it merely works. It's about what we could be doing with modern hardware if we used it as efficiently as old software had to use its hardware.

What kinds of wild things could we accomplish on this hardware if we weren't bogged down in gigabytes and teraflops of bloat?


> What kinds of wild things could we accomplish on this hardware if we weren't bogged down in gigabytes and teraflops of bloat?

Not that many: An early 1990s PC platform could be thoroughly described in a 200 page book and you could write a boot loader for the CPU, a VGA driver, and drivers for the most common peripherals from scratch in a few weeks.

In fact, games of that era shipped with their own audio drivers, (C/E/V)GA libraries and peripheral support.

Today this would be a) impossible because many manufacturers (cough NVIDIA cough) don't even release OSS drivers and specs and b) individual programs don't own the hardware anymore - the OS does. Also the multitude of target platforms (CPU types, core counts and speeds, graphics cards, peripherals, etc.) makes it virtually impossible to ship code that is optimal for each of even the most common combinations of hardware.

The final nail in the coffin of the "super lean no bloat why-not-just-unikernel-everything-for-maximum-performance" idea can be summed up in one word: cost.

Development costs would be insane if we started optimising every aspect of every program for performance (on every possible platform, no less), memory use, and (binary-) size.

And that's even ignoring the fact that it's more often than not outright impossible to optimise for binary size, runtime performance, and memory footprint all at the same time.

Plus interactions between programs (plugins, {shell-}extensions, data formats, clipboards, etc.) require "bloat" like common interfaces and "neutral" protocols.

Most of the myth of great "old software" comes from the fact that functionality was severely limited compared to modern apps and that many folks simply weren't around to actually see and feel how much some of them actually sucked.

Sure, Visual Studio 6.0 runs incredibly fast on a vintage 3.2 GHz Pentium 4 with 2GiB of RAM using Windows 2000 - but when it was released in 1998, many PCs had a 60MHz Pentium 1 or a 100MHz 486DX4 with 64MiB of RAM, and it ran like a three-legged dog with worms on those machines compared to the DOS-based Borland C...

Speaking of which, remember when, sometime around the 2000s, all Borland Pascal programs stopped working because CPUs had become too fast (>200MHz IIRC)? That was because their runtime used a loop to determine how fast the CPU was, which caused a divide-by-zero on fast machines.

Good times indeed...


Eh, BIOSes had an API-like interface, accessible via assembler macros under DOS. It was relatively easy to do stuff directly with the hardware.

>many PCs had a 60MHz Pentium 1 or a 100MHz 486DX4 with 64MiB of RAM and it ran

By 1998 most people had switched to a Pentium because of the huge performance gain. And by 2000, everyone had a Pentium II with ~96MB of RAM.


> And by 2000, everyone had a Pentium II with ~96MB of RAM.

That's a bold claim! The PII was released in 1997 and you basically just asserted that everybody buys the latest CPU as soon as it's released.

The reality is that most PC users never upgrade their machine and buy a new one instead. The average age of a PC is about 5 years and no, aside from enthusiasts, nobody buys the latest and greatest as soon as it gets released.

Businesses in particular hold on to their assets for some years due to depreciation (which incidentally is 5 years for PC class devices).

So in 2000, the average PC was 1995-level hardware.


I was there. In 2000 the average PC was 1997-era hardware... with a Pentium II, AMD K7, or a Celeron overclocked to ~450MHz making a great alternative to a Pentium II or a Pentium III @ 450.

Windows 98 was at its peak, and the Pentium MMX was often horrendously slow to start things up. It was fine with Windows 95, but by 2000 everyone was onto 98/SE because of its good additions and easy PnP support.

W98SE was used even after XP got released, and for a few years more.

Also, your statement about the P4 with that huge amount of RAM (2GB) is even more unusual than a PII in Y2K.

When I had an AMD Athlon in 2003, I barely had 256MB of RAM. I stayed with that up to 2009, with Debian 4 DVDs. I tried some Fedora releases and they were a huge no-no on my machine, and Solaris was impossible.


Machines running too fast...

Reminds me of the Turbo button on PCs in the 90s, which all my school mates and I at the time thought was for a speed-up.

Au contraire!

https://en.m.wikipedia.org/wiki/Turbo_button


I have a one-word counter-response: Winamp.


Funny you mention Winamp - I stopped using Winamp ages ago precisely because its 2002(?) rewrite was garbage and didn't support the one feature I was actually using at the time (SHOUTcast). The whole AOL/Time Warner sellout debacle didn't help either.


Probably fewer, because we'd be spending more time pulling our hair out trying to make cross-platform C++ apps work instead.


They may not care about how much RAM it is using, but the blatant disregard for resource usage manifests in other ways. Most people I know are just resigned to the belief that their 4GB RAM laptop will be obsolete in a few years. When things "get slow" it means chuck the whole computer. I do think that the recklessness with which we use our virtual resources contributes to e-waste, which is a physical problem.


Sure, but why can't I replace the battery in my phone easily? Why does the computer that controls my fridge cost as much as a new fridge to replace (if you can even get one)? Why is a cheap printer about the same price as the cartridges? Our throwaway, built-in-obsolescence approach to product development is definitely a problem, but I think it's pretty obvious that the causes reach far beyond the use of resource-intensive UI frameworks.


Normal users don't care how much RAM their programs use, because they don't know what RAM is. They think that computer hardware has a single one-dimensional property called "speed" and that if their software performs badly, it's because their computer is too slow, not because the software is bloated.

My daily driver has 4GB of RAM, and it's only usable because I'm extremely frugal with my software choices. I run Debian with XFCE and pipe memory usage to my panel so I can always see whether I'm in danger of swapping. I use a Firefox extension that prevents me from opening too many tabs. I stick to the terminal for as many tasks as possible. I categorically refuse to use Electron apps. And despite all that, I still end up OOMing and having to hard reset every few days. For a normal person who doesn't know what RAM is and has an antivirus constantly running in the background, the 4GB are going to be used up almost immediately, their machine will start swapping, and then they'll get the impression that their 4GB machine is "slow," despite it being faster than the high-end computers people used to do exactly the same things on ten years ago.


Any tips on getting stuff to run fine on low-end machines? I'm currently running Lubuntu on a very cheap laptop and while it's usable, most applications get pretty unresponsive. I've also tried running some BSDs on it with very bare-bones WMs (e.g. ctwm) and that became unusable quickly. Although I think the issue there was the lack of proper graphics drivers and being stuck with the framebuffer driver.


As a techie I mostly do not care either, as long as it works, if I am not writing it myself (and sometimes even then, but I care more now than years ago). However, things do not work a lot of the time: my wife is a writer, almost all my friends are non-technical, and they complain a lot about how crap everything is. And when I check, the culprit is always lack of memory, and it is always Chrome (the browser and in Electron) that is eating all of it. It is miserable IMHO; techies can pretend the end user does not care; maybe they should ask.

I know how to fix it for myself (I have scripts to do it automatically) but most people do not and just reboot when the system becomes too unresponsive. Not so great for 2020.


Does the end user care enough to switch? Because if all you are doing is complaining but it does not alter behaviour (specifically usage or purchasing behaviour), then for all business purposes they do not care.


They don't know enough to even know what to blame, so there's little benefit to resolving the issue, I think.


Agreed, companies often make anti-user decisions for good "business purposes."


> And when I check the culprit is always lack of memory and it is always chrome (the browser and in Electron) that is eating all of it.

Are you sure it's not the "150 IQ I-have-20-tabs-open-at-any-given-time" usage pattern that's actually causing this? I just checked out of curiosity and Edge (for lack of an installed Chrome) used "just" ~380MiB for a rather big website.

Sure, websites (and especially ads!) taking up unnecessary amounts of memory and performance play into this, too, but the expectation that you can just leave 10 bloated websites open on a glorified netbook from 2014 is more to blame than anything else.


> just leave 10 bloated websites open on a glorified netbook from 2014 is more to blame than anything else

Sometimes yes, but;

And how are non-technical users supposed to know they should not do this? A lot of people do not know how bookmarks or even ‘windows’ work, so they leave open the websites they visit so they do not forget them or have to open them again... How would they know that this is bad? The browser supports it and no one told them. For them there is no correlation between tabs and a slow computer. Just as many people, when you say ‘your memory is full’, start throwing away pictures from their drive. I think you vastly overestimate computer users... I am ‘the computer guy’ of the village because ‘I do stuff with computers’ and I run into the strangest things all the time. There is, for instance, a large stack of perfectly fine laptops in my house because some people just bought new ones because the old one ‘was slow’ and they were fed up.

Also, what do tabs open have to do with IQ?


> Also, what do tabs open have to do with IQ?

It's a meme.

> And how are non technical users supposed to know they should not do this?

For the same reason you need a license to legally drive a car. While I'm generally not a fan of the RTFM-attitude, I have absolutely no patience for people who are unwilling to even learn about the very basics of the complex machine they're operating.

Why on earth doesn't "the computer guy" tell them where to learn the basics instead? Teach a man to fish and all that...

> There is, for instance, a large stack of perfectly fine laptops in my house because some people just bought new ones because the old one ‘was slow’ and they were fed up.

There's several reasons for that to happen - sometimes it starts at the point of simply buying the wrong product. Leaving the whole why-even-a-laptop-in-the-first-place aside, I have been convinced for the past 10 years now that 90% of all laptop users would be better served with a tablet (preferably an iPad).

First of all, non-technical users cannot make an informed purchase decision and way too often buy garbage products (e.g. low-tier CPU with not enough RAM) in order to save maybe 10%.

Secondly, instead of learning about the product they own and its limitations, they pile on crapware on top of bloatware and not once even manage to do basic maintenance (like disk clean-up, which is literally just a button press away).

"The computer guy" shouldn't "fix" their machines but advice them to just get an iPad instead - problem solved for both sides.

It's pretty short-sighted to blame a particular set of software packages for a whole pile of problems that stack upon each other:

• underpowered hardware

• complex operating systems that require knowledge and manual maintenance to keep them running smoothly

• increasingly bloated websites and ads (just test it yourself - HN allocates single-digit MB per tab, while a news site easily causes >200 MB allocations)

• computer illiterate users that are unwilling to learn even the very basics about their machine (Sapere aude!)

• an inability of users to judge their own needs and requirements vs their options in terms of technology (i.e. desktop PC vs laptop vs Chromebook vs tablet vs smartphone)

• the failure of the industry to communicate their target audience (i.e. you don't need a laptop to watch some Netflix, browse the web, do photo editing, etc.)

• not all operating systems are the same - sometimes all it takes is to make the switch (be that Linux or MacOS)

But no! That's way too much thinking and way too differentiated a view point! It's so much easier to just blame Chrome. Or Electron. Or JavaScript. Or Windows.

Anything but daring to actually form a complete picture...

But hey, that's basically today's Zeitgeist in a nutshell anyway I suppose.


This is why building for the web first is underrated – let people access everything within their browser; it's naturally cross-platform.

'But Larry Page and Google were not interested in application software. “We like the web,” he is said to have told Lars Rasmussen, one of the Gordon’s fellow co-founders. And he set the team a deadline to get their idea working in a web browser.'

Source: https://medium.com/@lewgus/the-untold-story-about-the-foundi...


My main grievance with solutions like Electron is that the app doesn't adapt to the platform.

Personally, I don't see much benefit in Electron or non-native apps when they don't mix well with the OS UI interaction model. In the end you need to implement a lot of parts twice for Windows and macOS due to the different interaction models.

In that case, just sharing the business logic and writing the UI separately works perfectly fine. I have written apps that run on the web and natively on Android, macOS, iOS, and Windows where the only change is the UI layer and data storage (e.g. iCloud on Mac, etc.).


The thesis that native apps are being supplanted by hybrid apps seems wildly overstated.

I have about 500 applications on my Mac (465 in the Applications folder and at least a few dozen others scattered about), and I believe only two of them are hybrid apps (Visual Studio Code and Discord). There might be another two or three I'm forgetting about.

In other words, somewhere between 99% and 99.6% of all my applications are native, and I see zero reason to believe that percentage will decline very much in the next few years.


how do you have so many applications? Do you just accrue them over time? How many have you used in the last 6 months?


I was honestly shocked just now when I checked the number. Quite a lot of them are small utilities and menu bar apps. Many others were acquired via bundles (e.g., BundleHunt, Paddle, etc.) and a fair number of those I've probably never used.

Here's a list of about 60 apps that I've used in the last month:

Activity Monitor, Adobe Digital Editions 4.5, AppCode, Bartender 3, BBEdit, BitBar, calibre 4.23, Carbon Copy Cloner, ColorSnapper2, Console, Dash, Discord, Downie 4, EtreCheckPro, Evernote, FastScripts, Firefox, Font Book, GoLand, Google Chrome, HazeOver, Highland 2, IntelliJ IDEA, iTerm, iTunes, Keyboard Maestro, KeyCue, Launchpad, Mactracker, Magnet, Messages, Microsoft Excel, Microsoft Outlook, Microsoft Word, OneDrive, Parallels Toolbox, Paste, Paw, PopChar, PopClip, Preview, Safari, Script Editor, Scrivener, shayre, SnapNDrag Pro, Snappy, SQLPro Studio, SQLPro for SQLite, Terminal, TextSoap, TG Pro, Typinator, Vienna, Visual Studio Code, VLC, WebStorm, Window Tidy, Xcode, Xojo.


To be fair, calling the JetBrains IDEs "native" might be going a bit too far ;)


They're not what I would call native, but they are native by the definition of the article being discussed, i.e., they're not "hybrid" apps.


I know the article is talking about "truly" native apps, but React Native is quite an interesting project. Microsoft has been heavily investing into React Native for Windows and macOS, so combined with RNWeb (which is still a work in progress, but quite usable), a developer can target iOS, Android, Windows, macOS and web with the same code base and have a native UI for each platform. It's pretty magical when it all works.


To be fair, writing apps that try to throttle memory use is harder, as you might have to spill unused data to disk instead of reading and forgetting, or process data little by little instead of all at once.

So, as soon as devs realize their apps fit in 1GB or so these days, when even mid-range phones have 4GB of RAM, they see no reason to optimize further.


"Older native software used resources in a well organized manner"

That made me laugh a bit. Yes, I tend to agree with the other criticisms, although in some ways this type of increased resource usage has been going on forever, so I'm not sure it can be laid purely at the feet of hybrid programs.


> Hence, there were no intermediate layers like run-times in order to execute native applications; just binary content was there.

That's not true. Even with "native binaries" you have the OS as an intermediary.


No company that tries to have the biggest possible user base will drop old devices running Android 4.2 or the iPhone 4. The author lives in a dream 10 years from now.


Memory usage is not that important to users. They would way rather have feature parity and familiar UI across all devices. That’s kinda why the browser is cool...


Memory usage is important to users!

I recently wasted two weeks starting a new contract at a tier-one, multi-billion-dollar organisation, unable to do any work because their standard-issue laptops were 8GB Windows machines and they had no other machines available. It took two weeks of pointing out that we couldn't do any work, the contractor rates they were paying us to do nothing, and the approaching deadline for it to make its way up the chain for senior execs to organise machines with 16GB.

If you need to run Docker with a few containers, that's several gigs immediately unavailable to the system. Start MS Teams, another gig; Visual Studio Code / IntelliJ, another gig; a browser, another gig or so.

Even without running developer tools, on an 8GB machine (which seems to be the standard-issue laptop at most office-based companies and for the average non-techy person off the street) you very quickly hit the memory limits and start swapping to disk once you have a handful of small Electron applications open.

Memory usage is extremely important to users; it's just that most users, being non-technical, don't know it and just think their machine is slow.

Running applications in 2020, with the ubiquitous use of Electron, my desktop apps seem less responsive, I can run fewer of them, and at any sign of a bad internet connection my system comes to a halt, because all the apps, even if only sending metrics, block until API requests complete or time out rather than running them in the background without blocking UI functionality.


I think the basic principle at work here is that new programs will only ever be optimized enough to be functional on a typical (< 5 years old) computer.


This is really poorly written and argued. The topic isn't exactly novel either. I'm wondering if these upvotes are organic.


As an example I haven't seen elsewhere of why web apps suck (maybe this could be coded around?): I can receive a notice from Slack of a new message, with who it's from and a snippet preview, as I'm leaving my office's WiFi, and then the actual Slack app won't show me the actual message until I return to WiFi :-(


Sounds more like an implementation detail to me. The notification doesn’t contain the entire message (it goes through a different channel altogether). By the time the app tries to poll to retrieve the message (I assume here that it’ll wait until you try and view it) there’s no connection.


What a confusing title.


Why not just golang+http in one binary as the backend and the browser for the GUI frontend? That seems like it could solve all the GUI problems.

Of course, you will need a modern browser, which is everywhere these days.



I'm aware of that, and yes, that's an interesting project.


It doesn't solve the GUI problems of the web in general.


What is it missing then? More specifically, what can't be done this way compared to Electron or a Qt5-style GUI?

Yes, for corner cases Qt/Electron have their place, but I feel 95% of GUIs these days can be done with browser + backend, which is basically what Electron is doing (Chromium + Node.js). I just don't want to run a browser and Electron in parallel, since both are memory hungry, so why not just the browser plus a backend (in C, golang, nodejs, or whatever)?
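Rough sketch of what I mean, untested - the port, page, and /api/time handler are made up for illustration, and a real app would need error handling. The binary serves the UI over a local port and points the default browser at it:

    package main

    import (
        "fmt"
        "net/http"
        "os/exec"
        "runtime"
        "time"
    )

    // A trivial "UI" served by the same binary that hosts the backend.
    const page = `<!DOCTYPE html>
    <html><body>
      <h1>Hello from the Go backend</h1>
      <button onclick="fetch('/api/time').then(r => r.text()).then(t => alert(t))">
        What time is it?
      </button>
    </body></html>`

    // openBrowser asks the platform's default opener to show the local UI.
    func openBrowser(url string) {
        switch runtime.GOOS {
        case "darwin":
            exec.Command("open", url).Start()
        case "windows":
            exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
        default:
            exec.Command("xdg-open", url).Start()
        }
    }

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, page)
        })
        http.HandleFunc("/api/time", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, time.Now().Format(time.RFC1123))
        })
        // In a real app you'd wait for the listener before opening the browser.
        openBrowser("http://localhost:8123")
        http.ListenAndServe("localhost:8123", nil) // single binary: backend + UI
    }

Whether that solves "all" the GUI problems is another question, but it avoids shipping a second browser with every app.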



