One thing that makes me use a web-app rather than a native one on PC is that I know for a fact that the browser app will be very limited in tampering with my computer, something I'm not so sure about when installing native applications.
Often I see Nvidia's or Microsoft's or Discord's own services snooping through my installed applications, but it can go much further with, e.g., applications like Grammarly that have access to my current text buffer and clipboard virtually everywhere.
Another example: if I use Discord on the web, it has no real way of knowing what I'm doing on my PC. But when I used the desktop client, it publicly notified half the world that I was on Fortnite (even though I was actually trying the Verse programming language in Fortnite's editor), which means Discord tracks my activity on my computer.
Thus, honestly, I'd choose the web version of most applications 9 times out of 10. I simply have no clue what native applications know or do, and it's hard for me to debug what data they are sending and receiving, something that's quite transparent if I open the dev tools.
Absolutely. For anything internet-based (social media, chat, meetings etc.) web apps are vastly superior for this reason. In fact, anything which doesn't explicitly need the richness of a true desktop GUI (e.g. Photoshop or games) should probably be a web app.
In addition to the massive benefit of running browser-sandboxed, web apps are also:
1. Easier to start and stop (open/close a tab)
2. Less permanent (clear your browser's local storage and no data is left to stagnate on your hard drive)
3. Portable - no installation required
4. Cross-platform by nature of being a website
And you're totally right about companies pushing desktop apps because they're more invasive. It's outrageous that a desktop app can access virtually anything on your system in a totally opaque way.
Desktop computing needs a massive push towards finely-grained privacy/security controls in a similar way to how browsers work.
> Desktop computing needs a massive push towards finely-grained privacy/security controls in a similar way to how browsers work.
Sure. That’s iOS app sandboxing. And people perpetually complain about the inconvenience of having each application in its own file space and having to click a button (or, gasp, go to the files app) to make data appear in another sandbox.
If you can’t make the UX seamless, as in “the OS can read your mind and knows which things to make available in the sandbox,” then people will complain. And if you weaken it and start letting more through, then you lose the benefits of sandboxing. It’s the same battle SELinux has faced: secure configurations are often quite inconvenient.
IMO the solution is to give users the tools to run apps in a sandbox without forcing them to or making it the default. If an operating system like macOS were to ship with a user-friendly UI for changing what a given app has access to, then even relatively non-technical users might decide to, e.g., turn off WhatsApp's access to various things if they happen to read a news article about the app's data collection.
In addition if an OS provides a way for sandbox settings of apps to be changed, then something like "ublock origin for the OS" could be created, where someone maintains a list of what capabilities should be granted to various apps, and users can just install something once and forget about it to harden their system.
Most users trust most of the apps installed on their computer, but might have a few things that they are suspicious of — such as things they have to install for school or work. Providing a way for users to restrict the capabilities of specific apps when they feel like they need it could be valuable.
Linux already effectively has this because it has stuff like user namespaces and better support for controlling other processes using ptrace than other OSes. This enables users to use things like proot and bubblewrap to apply sandboxing when needed.
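As a sketch of what bubblewrap makes possible today: the following runs a command inside a throwaway sandbox with a read-only `/usr`, a private `/tmp`, and no network. This assumes `bwrap` is installed; the flags are standard bubblewrap options, but the exact mount layout is illustrative, not a recommendation.

```shell
# Minimal bubblewrap sketch: the child process sees a read-only /usr,
# a private tmpfs at /tmp, and has its own (empty) network namespace.
if command -v bwrap >/dev/null 2>&1; then
  out=$(bwrap --ro-bind /usr /usr \
              --symlink usr/bin /bin \
              --symlink usr/lib /lib \
              --symlink usr/lib64 /lib64 \
              --proc /proc --dev /dev --tmpfs /tmp \
              --unshare-net --unshare-pid \
              /bin/echo "sandboxed: no network, private /tmp") \
    || out="bwrap unavailable in this environment (sketch only)"
else
  out="bubblewrap not installed (sketch only)"
fi
echo "$out"
```

The point being: no root, no daemon, no repackaging of the app — the user applies the restriction from the outside, exactly the opt-in model described above.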
It occurs to me that there exists a fairly elegant UI trick to hide this: RISC OS had no save or load dialogs; everything was drag and drop to or from the file browser.
> In fact, anything which doesn't explicitly need the richness of a true desktop GUI (e.g. Photoshop or games) should probably be a web app.
I'm going to throw a bit of shade at Photoshop... remember how up through CS3 they actually did have a native interface? From CS4 and beyond it turned into cross-platform web gunk.
Always disappointing to see. CS2 and CS3 felt so fluid on Mac OS X.
This misses that the browser sandbox doesn't include sandboxing internet access. I can't be sure that the application will never send what I enter into it out to the server that's hosting the app. Also any changes to the app's code are not obvious. I could be served different code each time I refresh the page, or it could be pulled in the background and evaled.
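For the "served different code each refresh" part, the web does have a partial mitigation in Subresource Integrity, which pins fetched scripts to a known hash — though it notably doesn't cover the HTML document itself, so it doesn't fully answer the objection. A sketch (the URL and hash are placeholders):

```html
<!-- If the server starts serving a different bundle.js, the hash no
     longer matches and the browser refuses to execute it. The page's
     own HTML is still trusted on every load, which is the gap above. -->
<script src="https://app.example.com/bundle.js"
        integrity="sha384-PLACEHOLDER_HASH"
        crossorigin="anonymous"></script>
```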
> This misses that the browser sandbox doesn't include sandboxing internet access. I can't be sure that the application will never send what I enter into it out to the server
Aside from the factual inaccuracy (Firefox has a "Work Offline" menu item), this is an irrational demerit. Other apps can do exactly the same things (and more); you're applying a double standard.
Even mobile apps have much more finely grained privacy and security controls and sandboxing. It feels like we should have multiple classes of desktop apps: consumer apps (Discord, Slack, Spotify, Steam, etc.) that are more mobile-like in their explicit permissions, and dev tools that carry warnings about their largely unbounded behavior. Or perhaps even have a “dev mode” on your machine that opens up access to system processes etc., one that is both opt-in and timeboxed (i.e., enable dev mode for 8 hours).
It’s also weird how I still go to (to me) random websites to download these apps. I trust Spotify.com, but having users google “Spotify” to find an app is ripe for supply-chain attacks. The only time I ever visit some of these websites is to download their desktop app. In some sense, gaming has sort of figured this out, as almost everything comes through one of a handful of platforms nowadays (Steam, GOG, Epic, etc.).
I’m sure there are lots of solutions here but I think the current state definitely could be significantly improved.
Centralizing everything into app stores benefits our corporate overlords more than anyone else. It gets even worse when every approved app has to depend on the platform's API. The privacy and security issues still exist, but now developers and users have to pay for an untrusted app.
Maybe wasm+wasi(x) would allow us to build an open "app store" platform, or a sort of package manager for desktop apps?
It could provide a cross-platform, sandboxed alternative to shipping Chromium, while still supporting web technologies to build the GUI (powered by the system's native web view?).
I don't like having a centralized platform, but I guess we could establish a standard file format (like .deb or AppImage?) or have a federated platform? I really know very little about all this, though...
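The appeal of WASI here is that it's deny-by-default: a module only touches what the host explicitly grants. A sketch using the wasmtime CLI (the module name `app.wasm` is hypothetical, and wasmtime may not be installed):

```shell
# WASI capability model sketch: app.wasm can only see the directory
# explicitly preopened with --dir; everything else is invisible to it.
if command -v wasmtime >/dev/null 2>&1 && [ -f app.wasm ]; then
  wasmtime run --dir=./app-data app.wasm \
    && out="ran sandboxed against ./app-data only" \
    || out="module failed to run (sketch only)"
else
  out="wasmtime or app.wasm missing (sketch only)"
fi
echo "$out"
```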
Unless you want to touch USB (WebUSB sometimes works, sometimes doesn't) or Bluetooth. Good luck on iOS, where Apple deliberately doesn't support a BLE interface, but on Windows and Android the situation is not much better. Even ChromeOS can't run Web Bluetooth properly: supported on paper, but it doesn't work anyway.
May I ask you to elaborate on what is not working for you with WebUSB and Web Bluetooth? I'm surprised to hear ChromeOS can't run Web Bluetooth properly.
> One thing that makes me use a web-app rather than a native one on PC is that I know for a fact that the browser app will be very limited in tampering with my computer, something I'm not so sure about when installing native applications.
I see this as more of a failing on the part of desktop OSes than anything. I know a lot of more technically inclined folks are used to the programs they run having access to all the same things they do and may even prefer that, but it’s been proven countless times over that large commercial devs will abuse this access however they can.
It’s because of this that I’m not nearly as upset by things like macOS app sandboxing as some seem to be. It’s nice to have the assurance that these apps can’t poke their noses anywhere that I haven’t explicitly allowed them to.
Unfortunately I don’t know when sandboxing will become the norm on the dominant desktop platform (Windows) where it’d make the most positive impact. Sandboxing breaks all sorts of dev assumptions and thus backwards compatibility, which is something of a sacred cow in the Windows world.
It's a failure of our security model still being based on room-sized machines from half a century ago. There, many users shared the same machine, so protecting them from each other's files was the main focus. But it was assumed that any program run by the user was fully trusted: either they were an expert programmer, or an office worker who had been given a prescribed set of trusted programs for their duties.
Smartphones have been beneficial in that they've shown an alternative model built essentially from a clean slate (their non-original kernels are of little relevance here), proving that a more fine-grained permissions model does work in the "real world" for most "ordinary person" use cases, though advanced users will likely always need escape hatches at times. And now we have technologies like Flatpak and distributions like Fedora Silverblue that are slowly but surely bringing this model to the desktop.
Also, for Windows I think the transition can be done. It doesn't need to be a big bang. Imagine something like Flatpak-style isolation: first introduce it as an enterprise feature, requiring explicit enabling by the admin for each program. The first users would therefore be sysadmins, who know more about what they're doing and will also see the most benefit. Then roll it out to general users, perhaps allowing developers to add their own programs to a default-enable list (the incentive to do so is an open question). Then eventually move to a fully opt-out model where you just disable it for problematic programs (and keep another list of known ones). And also have a registry flag to globally disable it, for the peace of mind of skeptics.
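For a sense of what the per-program opt-out end state looks like, Flatpak already does this on Linux: a user can tighten one app's sandbox after the fact without touching the app itself. A sketch using real `flatpak override` flags but a hypothetical app ID (`com.example.ChatApp`):

```shell
# Per-user override sketch: take away home-directory access and
# network access from one specific app, leaving everything else alone.
if command -v flatpak >/dev/null 2>&1; then
  flatpak override --user \
      --nofilesystem=home \
      --unshare=network \
      com.example.ChatApp 2>/dev/null \
    && out="override written for com.example.ChatApp" \
    || out="flatpak refused the override (sketch only)"
else
  out="flatpak not installed (sketch only)"
fi
echo "$out"
```

An admin-managed default-enable list could be exactly this kind of override file, shipped centrally instead of written per user.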
> I see this as more of a failing on the part of desktop OSes than anything.
I agree, and the reasons are ones that I think are also a failing:
- Sandboxing is used by OSes to take control back from users (whether that’s stopping them from building their own non-brand system or stopping them from ripping bit-perfect copies of copyrighted audio)
- Proper sandboxing is not implemented because it can be used to circumvent the anti-piracy measures that app developers use (specifically offline-friendly native apps). If all data can be wiped, so can the record of the trial expiration, for example.
- The commercial OS makers want a “no one can do it except us” policy and it’s been shown time and time again that any amount of “except us” becomes an attack vector.
- Clawing back permissiveness either requires a lot of expensive dev time or breaks older software (or both) and they don’t seem to be prioritizing users when going down that road.
And as much as iOS is becoming the standard for how to sandbox apps, did everyone know that iOS apps can communicate with each other (and persist data even after an uninstall) through Safari’s local storage/cookies?
I support the idea of using a web app over a native app to avoid tampering, but it’s a bit of a leap to say “Discord tracks all of my activity on my computer”.
Typically, that type of game notification is the game reporting it. Games want to have that Discord integration.
Frankly, I have more confidence that a web app will actually work, which is why I usually default to them if I can.
I can't tell you how many times I've installed an Android app by a big company and it was fundamentally broken in some way. This would happen even if I was using a non-rooted phone, though on my current rooted phone the problem is slightly worse when an app somehow detects that I'm rooted and prevents me from using it. Say what you want about the "clunkiness" of webpages, but I've used tons of native apps that aren't even using webviews that are extremely clunky and slow. The web loads plenty of unneeded crap, but native developers don't seem to give a f*** if their app bundles are half a gigabyte. Maybe it's because they all need to ship AI models now or whatever.