I agree it's disappointing to think I've found a native app only to realize it's just an Electron app. But I don't need the idea to die, I just want better transparency in app stores so I can know ahead of time whether an application is native or just a wrapped webapp.
> This browser-in-a-box cancer needs to die a painful death.
That depends on what you think the result would be.
What do you think companies who use Electron, CEF, embedded web views, etc. today would do if those technologies all died tomorrow? For example, do you think GitHub, WordPress, Figma, Discord, WhatsApp, Slack, Trello, Skype, or Spotify would hire native Windows desktop development teams? (Or even Mac/Linux desktop development teams?)
Personally I doubt that there would be any increase in native app development. Any developer who cares about this is already making native applications; Electron just makes web apps slightly more convenient.
> But I don't need the idea to die, I just want better transparency in app stores so I can know ahead of time whether an application is native or just a wrapped webapp.
That’s a great idea. Or a simple “Native App” badge for native apps.
> Because your average Joe doesn't know the difference and the race to the bottom affects everyone.
Would it be possible that the average Joe is more familiar with a web stack vs a native or cross platform desktop stack?
Is it not possible that the difference therefore might be between:
- having a questionably performing app built on web technologies
- vs a buggy one that's built on a native/cross-platform stack, or even not having one altogether because they can't build with that tech
as opposed to:
- having a questionably performing app built on web technologies
- having an awesome native/cross-platform app that runs better and respects the OS design
Or, who knows, maybe it's just cheaper to use web tech and those other options have failed to make themselves as easy to get started with and work on, especially when you're looking for good cross platform options that would run nearly everywhere and be popular enough to have tooling and tutorials.
It's the same as how something like Rust might be a good fit for writing correct web applications, but very few people actually use it for that and might instead reach for something like Python because that lets them iterate faster, even if neither the type system nor the performance is great.
Actually, who knows, maybe the problem is not that there's not enough "good software" out there, but rather that different people have wildly different views on what matters, in addition to there just being too much software in general.
> Your notion that browser-based apps are somehow bug-free is absurd.
My notion is that more people are familiar with using the web stack, than any other alternative.
Out of curiosity, I perused some local job boards: of the roughly 50 technical role ads I looked through, 4 were for embedded or desktop development, some were DevOps and ML related roles, and the majority were web development.
If that's the set of technologies and the languages that people are familiar with (high abstraction level, no manual memory management), then attempting to use these "performant" options obviously wouldn't turn out well, due to a lack of skill, familiarity and/or user experience of that tooling.
I mean, in an ideal world, GUI software would be even easier to create than it was with Lazarus back in the day (the RAD approach), but sadly the greatness that was the LCL is mostly lost to time because nobody cares: https://en.wikipedia.org/wiki/Lazarus_Component_Library
> Using tools not made for the job yields crappy results, who would have known.
Depends on what the goals are. If they are to take people from the job market that currently exists (lots of webdevs) and build software that is good enough and ship it to earn $$$, then clearly they've succeeded, no matter how much people complain about the inefficiencies or how suboptimal the tools might be considered.
Wine requires no more installation than Electron. There are lots of "pirated" releases for macOS/Linux with a bundled copy of Wine (sometimes with third party patches or some libraries swapped out).
I can build a basic app with HTML, JavaScript, etc. that supports many platforms in an evening. How long do you think it would take to support Windows, macOS, iOS, Android, Xbox, Linux, etc.? Much more than 50x the time, even if you are already familiar with development on all those platforms.
It's the same in any other industry where cheap plastic overtakes all, except an average user can't tell the difference, other than on a "my PC is slow" level.
It is everyone's responsibility to not make shit software.
Too many gotchas with LGPL for it to become a universal solution. I wish that GTK was more stable across all platforms. I have a few GTK apps on macOS and some are less than... stable... compared to on Linux.
This too, shall pass. The minimum amount of memory for a double buffered fullscreen surface at 8bpc on a 4k monitor is 48MB. RAM is meant to be used, and if it was not for memory hogs, the DRAM industry would be a decade behind where it is today.
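The 48MB figure above is easy to sanity-check; here is a quick back-of-the-envelope sketch, assuming 3 bytes per pixel (8 bits per channel, RGB, no alpha or padding) and two buffers:

```rust
// Back-of-the-envelope check of the "48MB for a double buffered
// fullscreen surface at 8bpc on a 4k monitor" claim.
// Assumes 3 bytes/pixel (8-bit RGB, no alpha, no row padding).
fn framebuffer_bytes(width: u64, height: u64, bytes_per_pixel: u64, buffers: u64) -> u64 {
    width * height * bytes_per_pixel * buffers
}

fn main() {
    let bytes = framebuffer_bytes(3840, 2160, 3, 2);
    // 3840 * 2160 * 3 * 2 = 49,766,400 bytes, roughly 47.5 MiB.
    println!("{} bytes = {:.1} MiB", bytes, bytes as f64 / (1024.0 * 1024.0));
}
```

With a 4-byte RGBA pixel format the same two buffers would be about 63 MiB, so the exact number depends on the pixel layout, but the order of magnitude holds.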
And you won't get Skype, Slack, Teams, and many other applications people use on a daily basis on Linux, at all. Features on Mac will be limited compared to the Windows version, because the team prioritizes work on the platform with the largest userbase. There will be more inconsistencies in features and UI between the "desktop" version and the web version, if companies still bother to maintain two versions.
That's the future you want to see, huh?
Have you developed a cross-platform desktop application in the last few years and made sure everything works on every platform? Probably not. If there were a way to make it easy, cheap, and reliable to support multiple platforms while using few system resources, I'm sure everyone would want to do that. Until that happens, stop wasting your time complaining about the Electron mode of making apps. This will not change. It is the only thing that makes business sense and makes developers' lives easy at this time.
Or to put it simply, are you going to pay for the extra cost used for developing "native" applications for each platform? Put your money where your mouth is.
Telegram manages just fine, and looks the same on all platforms. I even ran it on FreeBSD for a few weeks with no issues. Their desktop client is developed pretty much by one person in C++ with Qt. It's really feature rich these days and actually works, unlike some of the things you've listed.
Seems a bit weird to use Skype as your example, since (a) it already had a native application for Linux, (b) AFAIK it was the same (Qt) codebase as Windows and Mac, so no feature discrepancy, and (c) Skype also developed clients for other operating systems like Symbian, Android, Blackberry, etc. as well as a Java-based client for other mobiles.
If anything, it's easier to develop cross-platform native applications these days, since the mobile space has mostly collapsed to just Android + iOS.
Yep, but it seems to be a PITA. They endure whatever changes happen on Firefox UI, which are well tested on Firefox, but not on Thunderbird and Thunderbird has much more UI to manage than Firefox. See this interesting Thunderbird talk at FOSDEM on visual change that mentions this issue [1].
You also kinda have to fork Firefox to do this. It would be good to be able to #include <gecko-embedded-framework.h> and build the UI from there. XULRunner seemed nice too.
Using Gecko when you are not Firefox is such a pain that
- all alternative browsers based on Gecko that are not forks of Firefox have been abandoned: they stopped being maintained, or switched to WebKit or Blink, which is a shame.
- all apps based on XUL / Gecko, like Songbird, have mostly disappeared.
It needs to be easier.
Gecko seems like a drag for Thunderbird. It shouldn't be. For this, it needs to be a proper toolkit, with stability guarantees, proper support for third-party apps, and easy reusability. That's not the focus for Firefox devs though.
> all alternative browsers based on Gecko that are not forks of Firefox have been abandoned: they stopped being maintained, or switched to WebKit or Blink, which is a shame.
Which browsers? Pale Moon, Basilisk, K-Meleon are still being developed.
They are all kind of pre-multiprocess/Rust Firefox forks. It seems Pale Moon has forked Gecko into Goanna and made it embeddable (which is neat!) and that's what K-Meleon uses too. Which I didn't know.
Is Goanna on par with web standards? Maintaining what is basically a fork of an old Gecko must be hard.
It also kinda validates my point: using Gecko elsewhere is a PITA. You have to work hard to make it embeddable.
To answer your question, Gnome Web / Epiphany was once based on Gecko. It switched to WebKit because using Gecko was getting harder and harder. Konqueror optionally allowed you to use Gecko, but that stopped being possible a long time ago for the same reason. Galeon and Camino both died a long time ago.
Brave, Vivaldi & Co picked Chromium instead of Gecko. With Eich coming from Mozilla, I think Brave considered Gecko but that was deemed too hard.
They have a video of Servo running on a Raspberry Pi 400 faster than Chromium. However there are no downloads or build instructions specifically for the Raspberry Pi in the repository on GitHub or in the issues. Maybe it's just built for Linux.
Googling servo and raspberry together gives a lot of hardware projects with motors, even when including mozilla in the query.
The plan is to see how viable Servo is as an alternative to WebView. If it works well I expect Tauri to provide an option to use Servo when building the app.
I'm currently suffering the pains of developing a Tauri app that relies on the system WebView (which is the default for Tauri). It's unreliable (especially on Windows, where people love to mess around and run "debloat" scripts), and causes slight differences on each platform. Tauri lets you bundle the WebView, but this causes the installer to grow by about 150 MB. I presume this alternative would be a lot smaller.
After messing around with various cross platform desktop toolkits, including a big Electron app and some smaller Tauri apps, I've settled on Flutter. It's not perfect but the results I'm getting are so far much better than anything I was able to achieve using repurposed web tech.
AFAIK bundling the WebView is only available with AppImage. We also have a big Tauri app and it is a PITA to develop. I actually opened an issue to support bundling a Chromium à la Electron.
That's what happened, but if I remember correctly, it was supposed to be an entirely new engine. I had a lot of hope for it, as the demo looked really promising at the time. It really was what Mozilla needed to get back on track, because to be honest, Firefox was pretty sucky when compared to Chrome at that time. I also liked the idea of inventing a whole programming language for that purpose (Rust), it reminded me of C/UNIX.
In the end, Firefox got better, and we have Rust, a great language on its own, but I think it could have been even better. And I was particularly disappointed when Mozilla laid off the Servo team, I feel they let go of the most important thing they had.
> And I was particularly disappointed when Mozilla laid off the Servo team, I feel they let go of the most important thing they had.
Gotta pay out those CEO bonuses somehow.
In a sick plot twist Mozilla gets shocked back to its senses, re-hires the team, restarts the effort to replace Gecko with Servo, and Firefox finally lives up to its potential. (I wish.)
Servo didn't need to be dropped, and Rust could have been handled oh so much better. Mozilla is not being run very well. They need to cut their "social" activities and focus on the browser, promoting free software through example rather than preaching; just throw some of that Google money at the EFF/ACLU instead. I am kind of neutral on Pocket and their other activities, and I don't see how they could go wrong with partnerships like the one with Mullvad. Focus, focus, focus is what they need, as well as a new CEO.
Not really, no; it was more to explore a greenfield web engine design, and also to use Rust to do that.
Not for user facing features at all; for one thing, Servo barely has a UI.
No, it was eventually to be a replacement for the Gecko engine. It was used as a testbed for a while before Mozilla killed it, and it was then forked as an open source project. Huge difference.
Wasn't Servo supposed to be a super nice thing that would allow better multithreading through Rust's power, compared to the old, ancient C++ that everyone and their neighbour says is so bad?
What happened exactly to Servo? Why was it discontinued?
Servo was always intended as a way to prove out certain technologies without the restrictions of a full browser engine like Gecko, so they could integrate them into Firefox/Gecko later if they panned out. They did, and things got integrated into Firefox with https://wiki.mozilla.org/Quantum.
Then Mozilla had a sustainability crisis and - imho unwisely - decided that one of the things that they could do without in the future was the Servo team.
Without funding, Servo was effectively put into sleep mode, since people need to eat. Then it got donated to the Linux Foundation, got new funding, and progress has started again.
Not sure what's a scam about Mozilla's VPN offering? AFAIK it's powered by Mullvad, one of the VPN providers with the best reputation when it comes to privacy.
This isn't true. Servo was originally intended to become the rendering engine in FF.
As it became clearer that a full engine wouldn't be complete any time soon (if ever), they pivoted to using Servo to gradually upgrade the existing engine.
I hope that being at Igalia forces the team to have laser focus on being a real embeddable solution for developers. The last I checked, maybe a year ago or more, it wasn't.
I commented over the years how Servo isn’t a real alternative because they don’t actually provide any API surface comparable to using CEF or full Chromium or WebKit, and as a result it’s a nonstarter.
I think someone working on it had mentioned they were looking into creating a CEF-like API for embedding, but if the project says it's an embeddable engine before anything else and it can't even be used for that purpose, I have no idea what that team is focusing on other than rendering itself. I'd be more interested in even just a partially compliant engine whose primary focus was actually embedding.
It might be OK if you want to build a Firefox? It’s not if you want to use it as an actual embedded renderer.
An operating system that isn't written in Rust has to contend with browser remote code execution vulnerabilities. Having the browser implemented in Rust to mitigate most remote code execution vulnerabilities stops the problem at the root.
Mozilla tried multiple times to parallelize CSS style calculation in Gecko, which is written in C++, and failed each time. When they tried again in Servo with Rust, they succeeded on the first attempt.
They integrated Rust-written parallel CSS style calculation to Gecko. As a result, to this day, Firefox is the only web browser which can parallelize CSS style calculation, and beats every other browser in CSS style calculation performance.
The meme that Rust is easier to parallelize is true.
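To make the claim concrete, here is a minimal sketch (not Stylo's actual code, just an illustration of the pattern) of the kind of data-parallel work Rust lets you write safely: each thread gets an exclusive mutable slice of the data, and the borrow checker proves at compile time that no two threads can touch the same element.

```rust
use std::thread;

// Toy example: double every element in parallel. `chunks_mut` hands
// out non-overlapping &mut slices, so the compiler can verify there
// is no data race; scoped threads let us borrow `values` directly.
fn parallel_double(values: &mut [u32]) {
    let n_threads = 4;
    let chunk = ((values.len() + n_threads - 1) / n_threads).max(1);
    thread::scope(|s| {
        for part in values.chunks_mut(chunk) {
            s.spawn(move || {
                for v in part {
                    *v *= 2;
                }
            });
        }
    });
}

fn main() {
    let mut data: Vec<u32> = (0..8).collect();
    parallel_double(&mut data);
    println!("{:?}", data); // [0, 2, 4, 6, 8, 10, 12, 14]
}
```

If you accidentally handed two threads overlapping mutable slices, the program simply would not compile; in C++ the equivalent mistake compiles fine and fails intermittently at runtime, which is the usual explanation for why the parallel-Stylo attempt succeeded where the C++ attempts did not.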
I don't think this has anything to do with the language itself. If anything, you could claim the same for C++, since "easier to parallelize in Rust" is derived from the fact that Rust models pretty much everything as a shared-ptr, so many gotchas you would normally have in multithreaded (but not concurrent) code disappear. Since you have shared-ptrs in C++ too, you can achieve pretty much the same, and also quite easily.
So I think that the programming language as an underlying reason is most likely a wrong premise to start with. IMO there's a huge difference between "here's several MLOC with all of its 20-years legacy/baggage, and now make N% of that non-trivial work to run faster" and "here's a greenfield project with 0 LOC, no legacy and no baggage, no code to learn, and now please write the algorithm from the ground up". I think this is much more likely to be closer to the truth.
Rust doesn't model everything as a shared_ptr, it gives you a choice of tools that fit different use cases - just like C++ does. The difference is if you mess up, it is massively more likely to detect it at compile time.
I agree that starting from scratch can make a huge difference, but if you're starting from scratch anyway why not use the language that will prevent you from making mistakes in your design?
I did not mean "everything" in the broader context but in the context when it comes to writing "easy" multithreaded programs. Pretty much everything in that case becomes modeled through a shared-ownership or message-passing semantics.
Since those same mechanisms are available in C++, and other languages too, making an argument that some specific XYZ algorithm re-implementation from scratch was more successful only because it was written in Rust, doesn't hold water. It was successful, for the arbitrary definition of success, in its major part because it was a greenfield project.
I believe that suggesting otherwise is plain wrong and misleading.
You might be right, but you're stating this without any evidence, so I don't think it's clearly "wrong or misleading". There are many cases of software rewrites failing, so I'm not sure you can take for granted that "greenfield project" implies higher success rate, and even if you did, I don't see how you can judge how much of this was due to it being rewritten from scratch vs that it was in Rust to claim "major part".
It's common sense, what I said. It applies across the industry regardless of the programming languages used. On the contrary, where's the evidence suggesting that Rust is what made the Gecko rewrite succeed? Has there been any rewrite from scratch with some other programming language?
There were two previous attempts at parallelizing CSS layout in Firefox. Both were in C++. Both were abandoned after being unsuccessful. The Servo folks credited Rust's safety guarantees as the reason why they were able to be successful on the third attempt.
I mean, the Rust version was also put into the existing codebase, so it's not clear to me what distinction you're making.
But this presentation was made seven years ago, and the attempts it's talking about are even older, and I wasn't involved with them. So I don't know the answer to your question.
The distinction is whether you're rewriting something from scratch, carrying no baggage from the warts of the existing system, or you keep organically growing existing code to meet the new requirements. The latter is usually much, much harder. I hope it's clear now.
No, I don't think there's any concrete evidence either way. I'm not trying to argue that it was Rust that made it succeed - I'm sure in reality it was some mixture of both, as well as other factors.
Having to work and learn through the codebase to make some substantial improvements often requires substantial effort and even rewiring the code architecture itself. That's enough of the evidence for me.
https://mykzilla.org/2017/03/08/positron-discontinued/