I took a peek at the "Explorer" app code base. PHP + JS + jQuery, but surprisingly readable (though plenty of the practices in there are not exactly considered "best" today). Perhaps we have sunk so deep into the SPA/API rabbit hole that we can no longer appreciate simplicity.
I hear PHP is a pretty good language nowadays. Now, if we could just optionally drop the $ sign from the beginning of every variable and use it only for string templating ;).
They do, but very well-defined ones: little placeholders that just concern themselves with formatting. Including the whole variable name makes it brittle, since random bits of string data now have to be kept in sync with actual code, and it's a lot more unwieldy to format anything in a way that isn't the default representation.
Those do look quite nice, fairly far away from the nightmares I've dealt with before. They also seem to avoid the problem of all but summoning a whole separate archaic syntax just for jamming variables into a string.
I still identify with the “frontend developer” title, even though my skill set has grown beyond it. Best practices are framework dependent, the tooling one should choose is dependent on their problem space, and we do appreciate simplicity.
No one is forcing people to use React, Svelte, Vue, etc. The sentiment of wishing for a simpler time has never felt right to me, because no one is policing what people use. If you feel like people are forcing you into frameworks, I'm sorry, and I have a piece of advice.
If you truly believe a framework isn’t worth it, tell your team why. You might be right, and you might be wrong.
I've worked at large and small companies; discussions about what technologies to use have always been welcome. I've made the case for "simpler solutions" and won, when it makes sense.
All that said, if you think using a framework === complexity for the sake of complexity, you're wrong. OP, I'm not saying you think this, but I have to imagine it's the sentiment of at least some of the folks who made this the top comment.
SPAs are here due to organisational, not technical reasons.
For large organisations it's easier to manage a team of workers with narrow specialisations than a single, but almost equally productive, developer like the one here.
It's particularly visible in banking applications, where modules are almost completely independent and built by non-overlapping teams, so they take forever to load due to the limitation on concurrent requests in browsers.
I've been evaluating some self-hosted/open source company management systems this weekend.
Most of them are classic multi-page PHP apps. It's terrible. I can't stand how it always reloads each time I click on something. I remember it didn't bother me much in the past when everything was like that, but today we have much better options.
I went for a SaaS built as a SPA. When I click on something, for example to create a new entity, I get a modal/side-panel immediately. When I want details, I get them much quicker and can return to the listing immediately.
> It's particularly visible in banking applications, where modules are almost completely independent and built by non-overlapping teams, so they take forever to load due to the limitation on concurrent requests in browsers.
They just can't code SPAs well. It's perfectly possible to have both quick loading times and very separate modules.
> I can't stand how it always reloads each time I click on something.
And yet here you are, using HackerNews just about every day, which works exactly the same way. No SPA. No 'forever loading', just a classic, fast SSR webapp that treats individual pieces of content as documents, just like how the web was designed to work.
This is a forum, not a management system where I need to open dozens of listings and hundreds of different detail pages and make modifications there.
It also doesn't bother me much on here because HN is much faster than these systems - no wonder, since it's just a forum. But try updating the details of 50 employees, each separated into 5 subpages, when you need to wait 750ms to have each page load. And you need to load that page multiple times - first to get into the detail, then to put it into "edit" mode, then to save it - and then you can switch to a different subpage of the detail.
Amdahl's Law would suggest that much of the 750ms latency is likely attributable to I/O operations (mainly DB ops) and not CPU; if, say, 600 of those 750ms are spent waiting on the database, no amount of CPU speedup gets you below 600ms. Also, without knowing anything about your setup: upgrading to PHP 8 and/or enabling the opcode cache may help. All that said, my original quibble was with development time, not run time overheads.
Beauty of that though is that you can CMD+click on each one and open all 50 into separate tabs, then work on each one whilst the others load in the background. Want to cross-reference? Easy, just shift a tab to a new window. All likely not possible in an SPA, either because links don’t work like links, or because it’ll muck up state.
There's no AJAX call in a SPA if all I did was close a detail side-pane to get back to the listing. The AJAX call goes on in the background if I saved a modification, and I can continue to use the app while it runs. And when I open a detail, at least I'm not watching the whole damn app reload and reconstruct itself from scratch for the 1000th time, and I only need to download the data, not the whole bloated HTML mess.
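A minimal sketch of that "save in the background" pattern, with the endpoint, payload shape and showToast helper all made up for illustration:

    // Fire the save without blocking the UI; the listing stays interactive while
    // the request is in flight, and only a failure interrupts the user.
    function saveEmployee(employee, showToast) {
      fetch(`/api/employees/${employee.id}`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(employee),
      })
        .then((res) => {
          if (!res.ok) showToast("Save failed, please retry");
        })
        .catch(() => showToast("Network error, changes not saved"));
    }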
To play Devil's advocate, while I agree with you, in reality it has happened to me multiple times (the last one 2 minutes ago, while writing a comment on another HN entry) that I'm writing a response and then want to have a peek at the chain of comments I'm responding to.
On the phone that's very annoying. I have to long-press the entry's link in the header, open it in a new tab, recheck the message trail, and go back to the first tab to finish writing.
So, simple as that, I could think of lots of times where an SPA-style dropdown editor box, inserted in the conversation view, would be more useful than the current HN reply page.
If the reply page contained the whole discussion, though, the problem would be equally well solved. So I'm not saying that an SPA would be 100% better, just mentioning an example that occurred to me.
Agreed on all points, it's just that a large part of why they got a foothold is that you can fire one of your three frontend developers on a moment's notice, whereas a single so-called web developer doing their work and more will have leverage.
When done correctly, they're visibly snappier than server-rendered frontends, or even frontends with JS sprinkled on top, but large organisations generally don't do them correctly.
You really don't need an SPA for this. You can do the same UX with Turbo from Hotwire (and before it Turbolinks) and traditional server-side rendering.
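A minimal sketch of what that looks like, assuming an otherwise plain server-rendered page (the button id and URL here are made up):

    // Importing Turbo activates Turbo Drive: link clicks and form submissions are
    // intercepted, fetched in the background, and the new <body> is swapped in,
    // so server-rendered pages navigate without a full reload.
    import * as Turbo from "@hotwired/turbo";

    // Programmatic navigation goes through the same mechanism instead of a reload.
    document.querySelector("#open-employee")?.addEventListener("click", () => {
      Turbo.visit("/employees/42");
    });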
There's a massive difference in how I use a forum and how I use a company management system. MPA is fine for a forum, not for a company management system. This is the web indeed, and the company management app is just using the web browser as an app platform. No reason to conflate the two.
We are, indeed, in a world where jQuery is now the preferable choice. Frontend development is going through what video games went through in the 80s: so much new shovelware every day that people are eventually going to get tired of it.
jQuery is completely unnecessary now; you can use the built-in element selectors to do all the same stuff without pulling a huge lib in. The value in jQuery was acting as a shim to smooth over IE's deficiencies back in the day. Now, it's a waste of bandwidth.
In my professional life I use jQuery daily, in my personal projects almost never. BUT that doesn't mean jQuery doesn't have certain niceties that plain old JavaScript lacks. Implementing them in modern JavaScript is only a few lines of code, though, so I usually end up with my own micro-jQuery by the end of a project, because I'll reimplement a feature from jQuery I like that doesn't exist natively (or at least, that I don't know about).
I usually do something like:
const $ = function(selector) { };
at the start of every project, wrap document.querySelectorAll, then return an object with a handful of functions like on(event, callback), off(event), one(event, callback), attr/data/prop(name[, value]).
All of them are basically just tiny <10 LOC shims over built-in functionality, but that way I still get the niceness and terseness of jQuery without the bloat and overhead of pulling in the whole library.
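A minimal sketch of that kind of micro-jQuery wrapper; the exact method set here is illustrative, not anyone's actual code:

    const $ = function (selector) {
      const elements = Array.from(document.querySelectorAll(selector));
      return {
        elements,
        // Attach an event listener to every matched element.
        on(event, callback) {
          elements.forEach((el) => el.addEventListener(event, callback));
          return this;
        },
        // Remove a previously attached listener.
        off(event, callback) {
          elements.forEach((el) => el.removeEventListener(event, callback));
          return this;
        },
        // Attach a listener that fires at most once per element.
        one(event, callback) {
          elements.forEach((el) => el.addEventListener(event, callback, { once: true }));
          return this;
        },
        // Get (one argument) or set (two arguments) an attribute.
        attr(name, value) {
          if (value === undefined) return elements[0]?.getAttribute(name);
          elements.forEach((el) => el.setAttribute(name, value));
          return this;
        },
      };
    };

    // Usage:
    $("button.save").on("click", () => console.log("saved"));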
There are only a few cases where that's true today. Probably 90% of the functions that helped a lot before have been implemented by browsers natively today.
> So? The jQuery code still works regardless of when the browser decided to catch up.
So, it's not needed anymore. I also started using jQuery in order to get cross-browser support for handy stuff, but since that stuff is now cross-browser without using any 3rd party libraries at all, there is no need to use extra bandwidth and CPU cycles from your users to download something that basically already exists.
Most of the world's bandwidth is being used to stream Netflix and Youtube. I think the web will be OK if my site sends a network request for an 87KB script every now and then.
I never understood the fuss over bandwidth and jQuery either... it's only 30k once behind mod_deflate. To put it in perspective, vue.global.js is 463k, or 104k once gzipped.
There are many reasons not to use jQuery for greenfield projects now, but bandwidth is not one of them.
The "fuss" is not about the specific bandwidth numbers but more about the mindset of not using more resources than absolutely necessary. And yeah, the breed of us who care about this seems to be getting scarcer and scarcer out there, but I promise, we still exist.
Made a mostly minimal status-page app thing for some service-based development, and ripping out the jQuery used during prototyping cut the total load by almost half. There are some use cases where skipping even small but unnecessary inclusions can make a measurable difference.
jQuery was a browser normalization layer and one heck of an API for DOM manipulation.
The first is thankfully not needed anymore, the second hasn’t been matched in simplicity and power by the native one, unfortunately. Not by a long shot.
If you don’t believe me, check out one of the “You Might Not Need jQuery” sites. They read much like “you might not need firelighter, stones and sticks are all you need”.
to be honest, I don't use jquery either, but https://youmightnotneedjquery.com/ seems to be telling me that I might just need jquery if I want the code to be concise and readable.
EDIT: oh, I just realized that's exactly what you were saying!
That's larger than all of the JS business logic in most of my applications and comparable to some of my stylesheets. For mobile users it all adds up, and not everybody has fantastic reception all the time.
You're not wrong about logos, but I find that webp will often squish that sort of thing down to 10-15k.
GP isn't saying you should use React, they're saying that if you don't need React you probably don't need jQuery either and you might as well just use pure Javascript. Most of the functions people used jQuery for are natively supported in every major browser that runs Javascript, and the few that aren't have decent polyfills.
I mean, let's be real; most of what people were using jQuery for was stuff like `querySelector`, `closest`, `classList`, etc... For those people, pure JavaScript is jQuery now. If you don't want a framework and you don't want reactive/declarative interfaces, then you probably don't need a library.
I would say the same thing (maybe to a lesser degree) about "standard" libraries like Lodash. Native JavaScript has a `reduce` function now, and it's widely supported. And if you have to target any of the very few browsers that don't have it, then polyfilling is (usually) the better choice. See also Bluebird and promises; just use JS promises, they're fine.
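To make that concrete, a small illustration of the native DOM and Array APIs covering the everyday jQuery/Lodash calls (the selectors and class names are made up, and assume the elements exist):

    const panel = document.querySelector(".details-panel"); // $(".details-panel")
    const row = panel.closest("tr");                        // .closest("tr")
    row.classList.toggle("selected");                       // .toggleClass("selected")
    document.querySelectorAll(".row").forEach((el) =>       // $(".row").each(...)
      el.addEventListener("click", () => el.classList.add("seen"))
    );

    // Lodash-style aggregation with the built-in Array.prototype.reduce:
    const total = [12, 7, 30].reduce((sum, n) => sum + n, 0); // 49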
I really enjoy using alpine.js. It gives me dead-simple lightweight DOM manipulation with much less code than vanilla JS or jQuery and doesn't require any build systems.
I agree. There is no bandwidth wasted when using a popular CDN; just drop it in and enjoy the simplicity it still adds on top of the already nice native JavaScript methods.
It’s not a silly point. It’s clarifying a misconception that was long used as an argument for using CDNs to deliver script assets like jQuery, which was being repeated here in context.
I feel like if you are worried about 30kb, you're probably also not using custom fonts; or you're at least not blocking rendering behind them (you're accepting FOUT).
It's a reasonable argument to make that minimizing 3rd-party library use is premature optimization for most websites, but if you get to the point where you need that optimization, then it is 30kb, it won't be cached (CDN caching is a myth; CDNs are only useful for minimizing server load/response time, since browser caches aren't shared between domains), and it will block your initial app logic.
In which case it makes even less sense for someone to complain about reducing JavaScript bloat just because there happens to be a web font the developer doesn't control and has no input over.
I am sympathetic that worrying about 30kb is overkill for a lot of apps. But if you are in a position where you're really optimizing for speed to the point where even megabytes matter, then those 30kb are not cached and they block page logic from executing until they're downloaded. And that remains true even if there's another performance change that would be higher impact that you're not allowed to make.
If anything, not being allowed to reduce an asset's size for your website would make efficient JS bundling matter more, since it's one of the few resource bundles you'd be allowed to do something about, and it would be pulling double duty to slightly cover for the inefficient fetches you can't change.
General question: is there an actual use for these kinds of projects, beyond showing what can be done? I have seen many different "web desktop" projects, and while most of them were impressive in their own (technical) ways, I could never see an actual use case for them.
My Synology NAS uses something like this for its navigation, and even if it's strange at first, you instantly miss it when you access it from your phone and the UI turns into this nested list which is not intuitive at all.
I personally hate this about Synology. While their apps are meaningfully structured and look good, it's just a natural mismatch to put sub-windows into a parent window. Anyone remember the multi-sub-window apps in the Microsoft Office suite in the 90s/00s? That was also horrible, even without a browser/HTML involved.
StarOffice (precursor to LibreOffice) used to take this one step further and replicated the taskbar and start menu inside the application window: https://winworldpc.com/product/staroffice/5x
Yes, and from day one Opera (when it was still called Multitorg) used it to give us something like "tabbed browsing". I think that was in the mid-1990s ;)
Yeah, same. I have a synology NAS and the desktop-of-windows-in-a-window aspect of it just seems archaic to me. I have a browser with tabs, thanks. If I really need two things going, I'll open two tabs or two browser windows.
I also like the Synology UI. It is helpful to see multiple apps at the same time and is especially useful when moving files around to have two or more file browser windows open.
We talk about having a consistent desktop experience everywhere. This provides exactly that. Your desktop is everywhere you go.
What sucks is that you need a whole browser for it. What sucks more is that I think that's the only way to do it: a full-on VM to make it real.
I would use this if the browser stack had more promise of not changing for 50 years and weren't owned by Google. Unfortunately our last place of freedom is native execution.
Personally I can see myself using this to access my homelab from work without having to open extra ports. Since I can't install a personal VPN client on my work machine, a web-based desktop behind a login would give me most of what I want.
Guacamole was actually my first choice and it's what I used a year ago. In that time span I've screwed up enough that my lab actually needs a whole rebuild, so I'm looking at alternatives to see what could be swapped over. As it is, I think Guacamole remains my first choice. I was just responding to the question of what this could be used for.
As someone who has spent years working on one of these, that is a very interesting question. I do it as a passion/side project so it's not as important to me, but I also like to think of it like the "Field of Dreams" philosophy of "If You Build It, They Will Come". I hope one day if I add enough features and interconnect enough things, the applicability will show itself.
I'd use this to store files on an encrypted Digital Ocean droplet, better than fiddling around with Nextcloud. I treat cloud storage as a USB stick; file management is 100% manual, no sync.
As in basically any web app development... mostly independently of the chosen implementation language. PHP has long since shed its insecure beginnings (like injecting variables using GET arguments...).
Went to the demo, instinctively used `cmd-w` to close one of the "desktop" windows, and was quickly reminded of one of the limitations of building an application like this inside a browser.
Note to developer: you may want to add an `onbeforeunload` handler
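Something along these lines, as a minimal sketch of the standard beforeunload prompt:

    // Ask the browser to show its "leave site?" confirmation before the tab
    // closes or navigates away, instead of silently losing the "desktop".
    window.addEventListener("beforeunload", (event) => {
      event.preventDefault();
      // Some browsers still require returnValue to be set for the prompt to show.
      event.returnValue = "";
    });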
I heard you can fix that with Alt+F4. Seriously though, this is my biggest gripe with building for the web. You are limited in interactions to what the browser allows, and at the end of the day it's an app inside a sandbox-ish container with an app inside a shadow DOM. WASM full-screen, full-hardware experiences have yet to take root. It's getting there.
I wonder how things would have been if we had JavaScript-based WMs and DEs. I'd have loved an HTML/CSS-based DE; imagine the possibilities! And seeing how amazing the performance of web-based DEs is, I really hope we can get a Linux distro with an HTML/CSS/JS-based DE.
This is what I've been working on with LiveG OS, whose desktop environment, gShell, is written in HTML/CSS/JS: https://liveg.tech/os
The OS is also designed to run on mobile devices, so there's some cross-platform consistency between the system on desktop and mobile devices.
It's currently designed to run web apps, but the hope is to also include support for typical Linux GTK and Qt apps (or any app that uses Xwindows) soon to aid daily driveability.
The current release can't do much yet, but we'll be releasing our next Alpha version in about a week or two, which includes a few more useful features on top of what's already available on our site!
Of course, yeah, Discord is a bit of a closed-source walled garden, but we're considering setting up a Matrix.org server at some point soon as an open replacement for our current community.
We currently release all Alpha versions onto our site at https://liveg.tech/os/get, so there's no need to 'wait' as such to try out the system in its current state! As for being notified as to when we'll release the next version, we normally let everyone know through Mastodon, as well as on other social networks such as YouTube and even (dare I say it) Twitter — sorry, I mean X.
But it might be worth us setting up some sort of email-based notifier to let people know when new releases are available, so I'll take your advice onboard!
the underlying system has to be something - it can't be HTML all the way down to the CPU microcode. There was also the short-lived Firefox OS, which lives on as https://en.wikipedia.org/wiki/KaiOS, and countless kiosks.
> the underlying system has to be something - it can't be HTML all the way down to the CPU microcode
Exactly what I was stating to GP (or implying, now that I read it over): pretty sure Flutter renders to OpenGL or <canvas> or the like, and doesn't output to HTML or some DOM like React et al. do.
I really like the idea of a web based desktop, but I would like to see innovation in the user interfaces of computers.
One idea I am playing with is "thunking" behaviours on a graphical user interface by chaining together operations, like a typed run queue. You can interact immediately with what is on the screen, but you are presented with options as if the operation had already succeeded, and then you can queue up the next thing based on the result of doing that.
EDIT: The idea is that we decouple updates of the GUI from execution, using the types of the return values of the previous operation. In other words, the types of every operation are known by the system in advance, so you know what data is available at any given point in time.
A desktop which is like a real time strategy game.
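One loose reading of that idea as a sketch; the names and the operation set here are invented for illustration, not taken from the comment:

    // Every operation declares its result type up front, so the GUI can offer
    // the follow-up actions for that type and queue them immediately, while
    // actual execution catches up later.
    const ops = {
      openFolder: { resultType: "FileList", run: async () => ["a.txt", "b.txt"] }, // stubbed
      pickFile:   { resultType: "File",     run: async (files, name) => files.find((f) => f === name) },
      preview:    { resultType: "Preview",  run: async (file) => `Preview of ${file}` },
    };

    // The queue is built from the declared types alone; nothing has executed yet.
    const queue = [["openFolder"], ["pickFile", "a.txt"], ["preview"]];

    // Execution is decoupled from the interaction that built the queue.
    async function runQueue(queue) {
      let result;
      for (const [name, ...args] of queue) {
        result = await ops[name].run(result, ...args);
      }
      return result;
    }

    runQueue(queue).then(console.log); // "Preview of a.txt"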
One of my biggest pet peeves with open source projects is this bizarre tendency not to put screenshots in the main description, especially for projects that are mainly UI/UX-based.
I realize there's a demo, but what if I'm on mobile and I'm just curious as to what the project looks like; or what if I just want to see what it looks like before I try it out?
Is it a nerd thing, where we're just kind of wrapped up so far in our own projects that we forget such a basic thing? Why does this happen so often? Genuinely asking.
It looks as if it remains constant as long as you remain connected, but if you disconnect and then reconnect, you will reconnect to a new/different session than the one you disconnected from - I'm assuming the 'old' one is lost as part of the reset.
Because this is not actually useful in any way whatsoever. The point of Citrix is accessing legacy desktop applications running on some server via a light desktop client. The whole point of that is not having to replace those legacy applications and not having to install them locally. The business is letting companies with ancient stuff continue to use it. It's worth a lot of billions because lots of companies just have decades' worth of software that their businesses depend on.
You are right that Citrix should be able to dumb down what they have to basically a glorified web application that accesses their servers. Which apparently is a thing, for obvious reasons (as in, they thought of that too, many years ago).
I suspect the complicated bit of Citrix isn't the client but rather the server end, managing multiple instances of apps running efficiently - where these apps are designed to expect they own everything in the environment.
Everything that runs inside this "desktop" environment is made for it. So it is definitely not competing with Citrix & co. which are made to run existing stuff under existing operating systems...
well, given that everything is made for the web, it seems like it would be pretty straightforward to jam most existing SPA webapps into something like this.
Not exactly the same thing, but I had good experience with KasmVNC.
Despite having VNC in its name, it isn't fully compliant with the VNC protocol and doesn't support regular VNC clients. Instead, it exposes a web client that you can access to connect to the machine. It also felt surprisingly snappy and nice to use compared to my previous experiences with regular VNC.
I love seeing more web desktops as I think it's an amazingly fun side project. It's interesting to me seeing how far people will go with these. I've been working on mine for several years now. Also open source if anyone is interested in checking it out.
Check out Synology DSM. When all your access to a machine you use every day is over the web, having a GUI is really nice. And the web is the most obvious way to deliver this GUI to you, given there's no video card on the machine.
https://gitlab.com/hsleisink/orb/-/blob/master/public/apps/e...