X is almost 40yo, stable, reliable, difficult to maintain. Wayland is 15yo, barely usable. Any old timers remember 1999 when X was 15? Wayland is like the IPv6 of desktops.
Anyone who believes that XFree86 was stable in 1999 was apparently running a different XFree86 to me. The sysrq key was mostly useful at the time for using the SAK shortcut to kill all processes on the current terminal, which with luck included your wedged XFree86 which would then get respawned by xdm. I don't miss those times.
XFree86 in 1999 might not have been, but in 1994 when I went to university in the UK and was using Sun computers which were solely X servers to display programs running on the "main" shared computer in the corner of the room, X and all the associated software was all stable and worked fine.
X11 became really stable in the late 00s. And since the introduction of DRI3 it is at the same technological level as Wayland concerning the efficient buffer swap mechanism. X11 is however still miles ahead on the architectural level. Parts of the system like window managers or compositors can fail or be replaced at runtime without affecting running clients. Wayland lacks the appropriate standardized interfaces for that, and there are no plans, if not outright refusal, to actually implement them.
The choices of architecture seem to reflect different priorities; there are things that the Wayland architecture does better than X and vice versa, aren't there? It's hard to say X11 is ahead architecturally if you pick features that Wayland may not consider important. I've seen commentary from people who have worked intimately on both protocols and implementations of both who consider Wayland to have the better architecture.
> I've seen commentary from people who have worked intimately on both protocols and implementations of both who consider Wayland to have the better architecture.
This link comes up in literally every Wayland thread and it is even more bullshit now than it was in 2013 when it was first posted (and it was bullshit then too). It is titled "the real story" but it is quite the opposite.
A few key points:
1) he laughs at how X has a bunch of extensions. https://wayland.app/protocols/ hypocrites much? In 2013, since Wayland was completely unusable, it probably didn't have many. But it turns out real-world use leads to "useless" features being reimplemented.
2) he complains about how X.org has broad hardware compatibility. As if that's a bad thing. Meanwhile Wayland, even now, still doesn't work reliably on half the graphics chips on the market.
3) He complains that certain X features are not fully network transparent. True, but most are, and you can detect that at runtime and gracefully degrade. Wayland "fixes" this by just dropping the whole feature.
4) He flat-out lies, saying the X server does nothing yet is somehow so much hard-to-maintain code. The core X protocol provides backward compatibility and is rock solid (and really easy to implement from scratch btw, someone did it in Javascript for a tutorial for crying out loud). Meanwhile the Wayland compositor keeps accumulating everything because of point 1. Need a screenshot? Add it to the compositor. Need a hotkey? Add it to the compositor. Need drag and drop? Add it to the compositor. Need a notification icon? Add it to the compositor. In X, all those are peer to peer. Graphics are actually a relatively small part of a graphical user interface, something Wayland is still slow to learn.
5) He complains that certain applications are written inefficiently with blocking calls, which is inefficient over a network connection. Wayland's calls are ALL blocking, and it just has no network connection.
6) Complains that X may draw things unnecessarily. Indeed... but there's an extension to disable that. Easy fix. Wayland even uses the same drivers!
> Graphics are actually a relatively small part of a graphical user interface, something Wayland is still slow to learn.
This I think is a key insight. I talk about this in another post, but I've been working on "porting" parts of Xfce to Wayland, and there are so many things missing in Wayland that have nothing to do with "graphics" that Xfce+Wayland will be missing a lot of useful features until/unless Wayland protocols are invented or extended to make them work.
I don't do Windows development, but I remember that used to be the rebuttal from game devs: that Linux missed the point of all the ease-of-use features that Microsoft provided.
Although all the other Direct services are dead, have they not been replaced with new versions?
And it was true at the time, though the rebuttal would be "use SDL then" -- it did provide equivalent functionality, packaged into a single library.
They were replaced not by new versions, but by new or even existing libraries/subsystems, that are not marketed as Direct anything anymore. Just like alsa or pulse are not marketed as SDL or Open anything either.
None of Waylands internal functions between a client and compositor are blocking or synchronous to my knowledge. Generally you fire off a message and will later receive an event back from the compositor, which is happening in an event loop.
There is an exception, which is the explicitly blocking "roundtrip" function(s), but that is meant for special cases only.
Being asynchronous was a design goal for Wayland from the very beginning.
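For anyone who hasn't looked at the client API, here is a minimal sketch of that model, assuming the standard libwayland-client C API (the compile line is an assumption; error handling is omitted). Requests such as getting the registry are just queued; events come back later through listener callbacks; wl_display_roundtrip() is the explicit blocking escape hatch mentioned above.

    /* Minimal sketch of the libwayland-client request/event model.
     * Build (assumption): cc wl-demo.c -lwayland-client */
    #include <stdio.h>
    #include <wayland-client.h>

    /* Events arrive asynchronously through listener callbacks. */
    static void registry_global(void *data, struct wl_registry *registry,
                                uint32_t name, const char *interface, uint32_t version)
    {
        printf("global %u: %s (version %u)\n", name, interface, version);
    }

    static void registry_global_remove(void *data, struct wl_registry *registry, uint32_t name)
    {
        /* unused in this sketch */
    }

    static const struct wl_registry_listener registry_listener = {
        .global = registry_global,
        .global_remove = registry_global_remove,
    };

    int main(void)
    {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display)
            return 1;

        /* These calls only queue a request / register callbacks; nothing blocks here. */
        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &registry_listener, NULL);

        /* The explicitly blocking part: flush requests and wait until the
         * compositor has processed them and the resulting events are delivered.
         * A real client would instead run wl_display_dispatch() in its event loop. */
        wl_display_roundtrip(display);

        wl_display_disconnect(display);
        return 0;
    }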
This isn't that different from X, though, which also has the fire-off-a-message-and-receive-an-event-back model. Very few of the functions actually make you wait (even in Xlib, which wraps the protocol itself to be a bit easier to use from C, the majority of functions are still async - much to the chagrin of newbies trying to decipher error messages).
Async X11 was tried with xcb, and it failed. In the above linked video, this is illustrated in the "Bad IPC" part, with the example of gedit startup (gedit uses gtk and gtk uses xcb) and where it blocks.
He claims gedit does 130 blocking InternAtom calls, 34 blocking GetProperty calls, and 116 property change requests. None of those are actually using the available async functions (or even the available XInternAtoms call to batch all those 130 into a single request).
Since this video is coming up on ten years old, it might have been true at the time, and gtk/gedit have since changed the implementation. But regardless, if the video is accurate, they didn't use the non-blocking calls. If the video is not accurate, it is meaningless anyway.
It isn't really Wayland's calls itself, I prolly stretched too much there, but the main thing is the async discussion usually comes up in the context of running applications remotely, which Wayland simply doesn't support. If you do a loop of XInternAtom (which btw you shouldn't do, even in xlib - notice that there's also XInternAtoms, plural, for that kind of thing which batches them to reduce wait time...) locally, it is unlikely to matter. The time spent there is near zero anyway. But remotely, now you might be looking at several ms per iteration and that adds up fast. Now the blocking aspect can become problematic.
Since Wayland doesn't support this kind of network remote use anyway, the distinction doesn't matter. Local X vs Wayland are both fast.
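To make the batching point concrete, here is a minimal Xlib sketch, assuming the usual libX11 API (the atom list and compile line are purely illustrative): the naive loop pays one blocking round trip per atom, while the plural XInternAtoms interns the whole list with a single request and reply, which is what matters on a high-latency link.

    /* Sketch: interning atoms one by one vs. batched with XInternAtoms.
     * Build (assumption): cc atoms.c -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        /* Example atom names, purely for illustration. */
        char *names[] = { "WM_PROTOCOLS", "_NET_WM_NAME", "_NET_WM_STATE" };
        const int count = sizeof(names) / sizeof(names[0]);

        /* Naive version: each XInternAtom blocks for its own reply,
         * so a remote client waits for the network latency `count` times. */
        for (int i = 0; i < count; i++) {
            Atom a = XInternAtom(dpy, names[i], False);
            printf("%s -> %lu\n", names[i], a);
        }

        /* Batched version: one request, one reply, regardless of count. */
        Atom atoms[3];
        if (XInternAtoms(dpy, names, count, False, atoms))
            for (int i = 0; i < count; i++)
                printf("%s -> %lu\n", names[i], atoms[i]);

        XCloseDisplay(dpy);
        return 0;
    }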
> It isn't really Wayland's calls itself, I prolly stretched too much there,
Do you mean simply incorrect i.e., that Wayland calls are not all blocking? Or do you mean that in practice there are some important high-level situations that do require a round trip despite the low level purporting to be mostly asynchronous?
> Since Wayland doesn't support this kind of network remote use anyway, the distinction doesn't matter. Local X vs Wayland are both fast.
Wayland can run to remote displays though, right? You could argue it's not complete or well supported or nicely integrated into the core protocol or whatever, but you can literally use it today and probably have a package to do it available in any Linux distro you're using (e.g., waypipe). So it's hard to see what you're getting at. Wayland may not have been made with transparent networking support foremost in the protocol but AFAIK the idea was always that you'd be able to do remoting by forwarding buffer contents with the protocol.
> The core X protocol provides backward compatibility
I had a bit of fun a while back trying to get an old X/11 terminal to work with a modern Linux machine and was somewhat surprised I was able to make it work. Sort of at least. Many display managers didn’t implement the proper protocols, but XDM did and I was able to get it to work at least a few times.
1) no, he complains that X11 has a big core and then extensions. Extensions are fine, but they were unable to kick out parts of the core, because it is the core and something somewhere assumes it is there. So they had to maintain it, despite not being used in practice, except by that little something that nobody can point their fingers at.
2) he talks about obsolete hardware. There's really no point in supporting an S3 Trio at the expense of support for modern hardware, which works in a vastly different way.
3) That graceful degradation is in practice the same as just using Wayland. Ever tried to use a modern X11 app over the network? RDP is a vastly better experience (and RDP support is wip in wayland).
4) This is so wrong so I won't even react to it.
5) Wayland calls do not wait for reply. You rapid fire requests and then collect responses as they come. Heck, you can even get a response you didn't ask for ;)
> That graceful degradation is in practice the same as just using Wayland. Ever tried to use a modern X11 app over the network? RDP is a vastly better experience (and RDP support is wip in wayland).
I do, in fact, use modern X11 apps over the network literally every day. Some are better than others - if the programmer made the effort to actually degrade gracefully, it can be a considerably better experience than the ones that just shoot a constant stream of bitmaps down the wire (those do work better on RDP; I remember once upon a time I'd ssh to my Linux box, set up port forwarding to a Windows box on my LAN so I could Remote Desktop to it, then run Xming from there... it's absurd that that actually worked better) - but if you do it well, remote X is very nice to use.
The seamless integration of windows from multiple computers is a thing to behold. Remote Desktop is great and I like a lot about it, but even the "Seamless" rdp doesn't work as nice as X.
If you don't mind answering, what do you do that uses GUI over network? Not looking to argue or try to claim <something> would work better or anything, just curious what people use it for seriously these days.
One common example is running data analysis/reduction remotely on a big fast box and then looking at the products on that machine via X11 forwarding.
On a fast low-latency connection there is hardly any noticeable difference between running the application locally or remotely (it's just another window locally, as opposed to another window in another desktop in a window).
I use tons of things on it. When I use my laptop, I'll often run a local browser (though sometimes I run remote browsers too - Chromium's core actually works surprisingly well over a remote X link - but the bigger problem there is that Chromium doesn't support multiple instances, so if I left the browser open on the desktop, the cookies and history aren't shared with the remote instance on the laptop, which defeats the purpose of reusing the instance anyway, though the shared passwords are still nice to have), with most everything else being run remote.
Among the specific applications are my developer tools, image viewers, music editors, the apps I'm actually working on, etc. Of course, some of these also work fine on ssh terminals and I do plenty of that too, but there's just no need to be limited and I'll run whatever I want to.
The last time I had to use remote X11, it was to run Oracle's dbca. Everything else was possible to do in some other way, usually more comfortable.
RDP does support integration of windows from multiple computers, in the form of RemoteApps. Even if you are running GUI apps inside WSL2 locally, you are using it.
> he complains that X11 has a big core and then extensions. Extensions are fine, but they were unable to kick out parts of the core, because it is the core and something somewhere assumes it is there.
The thing that I find ridiculous about this attitude is that you have two choices:
1) Remove parts of the core that some (mostly old, unmaintained) applications rely on, which will break them. You'd probably have to call it "X12" now, but that's fine: most X11 applications would continue to work with no (or very few) modifications.
2) Throw out the entire system and build a new one from scratch, that literally no applications will work on until new toolkit backends are written and some applications themselves are rewritten or at least fixed up. Those same old, possibly unmaintained apps that would stop working in #1 are still not working, but now it's along with literally everything else too.
> RDP support is wip in wayland
If I had a dollar for every time I heard "$IMPORTANT_FEATURE is WIP in Wayland", I'd be able to get several pizzas delivered.
3) Build a new system that supports the old system reasonably well and doesn't prevent people using and improving the old system until the new one meets their needs.
> If I had a dollar for every time I heard "$IMPORTANT_FEATURE is WIP in Wayland", I'd be able to get several pizzas delivered.
It's true that there have been features missing. It's a good thing that they are being worked on, though, no? The X protocol and the Xorg implementation are both abandonware, so your comment actually comes across as positive, because a missing $IMPORTANT_FEATURE in X/Xorg is not WIP.
Most of the criticisms in this talk are solved with DRI3. Also, this guy makes money with a consulting agency that mainly works on Wayland and indirectly profits from shitting on X11. This is not a neutral source.
Right? I really feel like not enough attention is paid to the question of: Who is paid to work on "the successor to X" and what influences are at work there?
Seems to me that would be an obvious place to look for "why Wayland sucks," given that unfortunately, "paid" sometimes leads one to "exclusivity."
XFree86 worked fine for me since 1994. I sometimes had problems when exiting a video game like Doom or Quake. SVGAlib would give problems but that wasn't X.
It would help if you did not give easily verifiable facts that reveal the inaccuracy of your recall. You may have been playing Doom in 1994, but you certainly were not playing Quake. If this is an accurate memory, what year is it from?
I installed Linux on my PC in September 1994 at the start of my sophomore year of college. I remember playing Doom during that school year from Sept 1994 to May 1995.
The Quake Wiki says Qtest was released Feb 24, 1996 and I remember playing that the week it was released.
I always felt that X11 on Linux was just as stable as X11 on SunOS, Solaris, Ultrix, HP-UX, IRIX, and AIX.
Yep, my point was that my experience in 1999 with XFree86 was on Linux, which was less stable than it is today; before that I only used Xsgi on SGI workstations, which was proprietary and already HW accelerated.
XFree86 is the commonly-used but bastard stepchild of X. X started on DEC, Sun, HP, and SGI Unix boxes; Linux came later.
Linux is and was awesome, I still recall installing it using boot/root floppy disks, but X was around well before the '386 machine in the corner being used for CD creation was recognized as "Hey, this is actually useful"...
It's the default on several distros. I regularly play AAA games on my gentoo gaming PC, using proprietary NVIDIA drivers, on KDE Plasma, with little or no performance differences compared to X11.
Even the Steam Deck, arguably the most popular linux PC, runs its default UI on Wayland.
A little weird to classify the Steam Deck as a "personal computer". It's a special-purpose handheld gaming device that just happens to run Linux & Wayland under the hood.
While I wouldn't call Android phones "personal computers" either, they're much closer to being the "most popular Linux PC" than a Steam Deck is.
The Steam Deck is literally a PC, no doubts about that. It's even more "P" than many other "C"s, like most Apple hardware for instance. But I think that the notion that it somehow validates Wayland as mature tech is kinda weak.
It appears that the justification for switching to Wayland in many distros (just like systemd before it) was so that the devs could actually abstract away work on these components and invest time into their flagship features (WM, filesystems, shells, look & feel, the works). For what it's worth, it looks like this was achieved.
I have tried Wayland several times over the last few years, but I have many persistent niggles caused by third parties' refusal to fully support it. For one, Discord doesn't support push-to-talk VOIP on Wayland (I believe it requires them to create a daemon for global keypresses). I also recently 'upgraded' my RX 5700 XT to a 3070, because I received the 3070 for free, and Nvidia just refuses to support Wayland.
It's not that Wayland is bad. It's that companies, whose primary focus is Windows, just refuse to develop for it.
Wayland screen sharing works fine (in Chrome, Webex...). It doesn't work in Teams, but that's Microsoft's issue, since they didn't bother with implementing it.
The Linux Teams client has been unceremoniously abandoned. The debs have been removed from the ms-teams repository. The download instructions haven't been updated; it's managed shockingly poorly.
I'm on Arch on my work machine and it started working with PipeWire about 3 months ago... no config necessary, it just works as intended, both screen and window. There are some others at work on Ubuntu LTS, but that is because Ubuntu is using ancient packages.
Ah, but the reply was about defaults. Do any of those distros ship with Pipewire as the default? If so, I'm sure it hasn't been since 2019 as the original poster claimed.
Wayland is default on several distributions and has some pain points, much like X did in 1999 (it had many back then, and in fact still does today). But it’s definitely usable, it’s my daily driver here.
The main difference is in 1999 X did not have real alternatives, so if you wanted a graphical desktop, you had to fix the X bugs period. Here if you don’t want to deal with wayland issues you can fall back to X. It makes for slower development…
Wayland is just a protocol and there are multiple implementations in different compositors. Instead of focusing on a single great implementation, there are multiple average and weak implementations.
It puts quite a bit of pressure on desktop environment developers and it seems like they don't really care about many of the defined protocols. https://wayland.app/protocols/
I wonder if the Linux desktop would be in a better state now if Wayland had also come with a new and advanced compositor used by new DEs, and not just a reference example one.
> It puts quite a bit of pressure on desktop environment developers and it seems like they don't really care about many of the defined protocols.
The problem is that the existing protocols are missing key functionality that desktop environment developers need (source: I am one).
Want to list all the toplevel windows that are on a particular workspace so you can write a pager or taskbar widget? Nope, can't do it. The foreign-toplevel protocol and ext-workspace protocol (the latter of which has been in standardization purgatory for 2-3 years now) don't know anything about each other, so you can't do that.
Want to write a widget that lists windows and lets you do a bunch of operations on them? Well, you can, if all you care about is minimize, maximize, fullscreen, and close. If you want to pin/unpin windows, move them between workspaces, resize them, or move them around on the current workspace... nope, can't do it.
That's just two examples off the top of my head. Looking at the list of Wayland protocols, and then digging in to figure out what they provide shows them to be immature and severely lacking.
I also want to point out that you will likely get a better Wayland experience on KDE. They are much more active in implementing Wayland protocols and overall it feels more solid.
This makes me sad because I really like how Gnome looks, but Gnome developers don't seem to care as much about Wayland stuff.
I'd also confirm this. Gnome started out ahead by virtue of being the target or idealization of most of Wayland's features, but it's stagnated. Nothing is ever improved and they refuse to fix the technical problems holding them back, like the lack of I/O being separated from drawing in Mutter. They claimed to have improved it several times, but it's just as bad as ever. Either they're hesitant, or they just don't have the time and resources to break and rebuild it properly.
Meanwhile, KDE/Plasma's kwin finally stepped past them. And there's also the kwinft project that's rebasing kwin idiomatically as a wlroots compositor.
We may finally get full Wayland adoption, but I don't think Gnome is going to be prominent in it anymore.
I have been a KDE user since 2010. I started a habit a few years ago that every time I upgrade my desktop distro's release, which is twice per year (first Kubuntu, now NixOS), I try out the Wayland mode for a while, until I find the bugs too annoying. A few years ago I switched back after a couple of hours. The last time I tried, I remained there for 3 months. So yes, a lot has improved, but I'm still back on X11.
This was absolutely true, until I switched to pipewire. Once I did, this started to just work, with no issues whatsoever. (No configuration required, just followed Debian's package dependencies switching to pipewire and it started working in Firefox.)
ChromeBooks were outselling Macs from 2017-2021, although the pandemic meant hundreds of millions of people suddenly needed new computers for remote working and the kids to use for remote schooling, so sales spiked and have since collapsed.
But they sold in the region of 100 million units per year for several years.
It's a relatively standard distro up until the GUI layer, based on Gentoo.
I'd agree that Android is something else, but ChromeOS is mostly the usual GNU + Linux stuff, and a weird display server which is Chrome rendering direct to the screen.
I do. My only ChromeOS device at present is an old Thinkpad T420 with a Core i5 and 6GB of RAM. It runs very quickly with ChromeOS Flex. Currently I have Firefox ESR running on it, and DOSemu. Inside DOSemu I have MS Word for DOS.
It's a Linux. It runs Linux stuff thanks to a built-in feature, and since Flex came out, the Linux support has improved visibly: so for instance Firefox now works properly with either a full titlebar or none, and this depends on the settings within Firefox not on ChromeOS.
It works, it runs, it's useful, now, today.
Personally I don't give a toss about any of the other things you mention. It plays videos smoothly, it's fast and responsive, and my webcam works. I've tried Skype, Whatsapp, Facebook Messenger and Zoom in ChromeOS and all worked fine.
Sure it may be limited. I am not denying that. But it's selling well up against Windows and Mac, which is something no other Linux distro has ever managed to do. Despite all their fancy acceleration features and being free, ordinary consumers are not interested, even though they are FREE.
ChromeOS is not free: you can only get the full version by buying it on custom hardware, and you need a Google account to use it. Those are significant drawbacks compared to every free distro...
And yet 10x more Chromebooks sell per year than all the free distros put together can GIVE AWAY.
That is not just noise. That is no rounding error. That is massive.
The year of Linux on the desktop came, over half a decade ago now, and the Linux world was too busy with infighting and squabbling over Snap vs Flatpak and other pointless nonsense to even notice that the mainstream consumer world has adopted Linux bigtime.
A bit late to this party, but you are woefully, badly misinformed as to what ChromeOS is, and what you can run on it. Well over a decade ago, I was running a full suite of gnu-linux devtools, natively, within chromeos.
You don't even need to install a chroot from another distro. Just get a gcc (chromebrew was the first to package this), and the rest is just gentoo linux (with portage ripped out - and in the early days, you could run a shell script which PUT PORTAGE BACK IN).
And if you take the time to understand the weird partitioning layout, a couple of bind mounts in the right places is all you need to get the ChromeOS GUI file manager (which is rather crappy, btw) to see your stuff.
Oof, that hits hard, but you're probably right: I expect there are more people who primarily (or even exclusively) use a smartphone or tablet than a general-purpose computer running a general-purpose OS.
There are libre mobile devices, but approximately no-one uses them, just like approximately no-one uses Linux on the desktop. Unless some large vendors start shipping Linux by default on all their devices, or the current mainstream OS vendors become really obnoxious and people go searching for alternative devices with Linux installed by default, that isn't going to change since people really don't want to have to do installs themselves.
No, but the average Linux user has probably been bitten by nvidia compat issues so much in the past that they're even more likely than the average generic laptop user to be using Intel or AMD graphics.
This isn't nearly as true as it used to be. I regularly game on KDE Plasma on Wayland on a 3080 Ti with the proprietary NVIDIA drivers, and it works fine.
As I recall, none of these distros change existing installs; also Plasma (KDE) is often still on X11. So it depends on the proportions of new users + new installs + selected DE + video card...
I remember creating a new modeline and adding it to the list. Then using ctrl+alt and +/- to cycle through the modes. I would get to the new mode and the monitor would start buzzing and clicking and the image would flicker. I would quickly toggle to the next mode that was "safe" then go back and edit the modeline and try again.
I did manage to get my monitor to run at 1280x1024 @72Hz but couldn't make it go 75Hz
That NCD terminal wasn't mine, but I had the use of it while I worked at a customer's site for 6 months or so. One of my all-time favourite devices. I wrote so much code on that thing. Thanks!
I ended up leaving with a couple of generations of X terminals, my favorite of which could PXE-boot and had a CompactFlash-like slot on it.
The weirdest one was an Apollo we had from an internal auction, the strangest Unix system I've ever run. But the B&W screen was very high resolution for the time and quite crisp.
I'm not sure if it was 1999, but I remember Mir being the golden future of display servers and Wayland being a pipedream. That didn't work out very well.
> I run Hyprland just fine with Wayland, I seriously doubt it is barely usable.
Same. I wasn't convinced about Wayland until I tried Hyprland. It's just great!
The design of X11 meant many window managers, which were mostly average. But that was OK, as the issues could be addressed by separate tools.
Wayland had 3 issues: 1) not many tools (now there's wev, ydotool...), 2) they were limited in what they could do, as the keys to the kingdom are mostly given to the compositor, and 3) outside sway (with its own issues) the compositors were not so great.
So if you didn't have a good one, or if it was missing essential options, you suffered until you went back to X: to prep my laptop for uni in the late 2010s I evaluated wayland but returned to X as it was simpler to get a better experience.
Now with hyprland, I love wayland: I can script again very precise behaviors with hyprctl and wlrctl. The foot terminal emulator is great. Edge works fine with the right wayland options.
Much has changed since I first discovered Wayland in 2016: I'd put 50% of that on hyprland (it's seriously wonderful) and the other half on the availability of more wayland-compatible tools.
I'm eagerly waiting for the patches for wine on wayland: not just because I love Office, but because for a long time it was said to be impossible to have a good wine experience on wayland.
Well, these patches prove it wasn't impossible, just a bit hard, and old people are stuck in their ways and hate change even for better tools.
It's like how systemd was so unpopular at first, except it had most of everything ready. Wayland in comparison was missing many small tools that are only important for very few people (ex: for scripting) but about everyone had one thing they couldn't do on Wayland.
I don’t know that much. Why does ping6 matter? Isn’t most internet traffic HTTPS? Like when I check my cloud logs they’re all coming from ipv6 addrs, so I figured it was working
In terms of usability - definitely. But it's built on top of a similar unfulfilled promise: we're running out of IPv4 addresses VS X11 is unmaintainable legacy full of security bugs. 20 years later IPv4 is still dominant, Wayland is still unusable and X11 just works.
Wait, what have I been doing my work on then? Someone should inform Ubuntu and Fedora, too. Who knew it hasn’t been possible to use the most popular Linux distro out of the box for two years?
I remember in the early 2000s still needing to write to a config file to get X to work on my desktops at the time. Was it 30 years old at that time? I've been running Wayland crash free for a few years already, without the need to touch a single config file. Does that mean it reached better usability at least twice as fast compared to X?
I honestly don’t get people, like were these people just extremely lucky and never had to tweak anything? Like, I remember times when I had to blindly log into my user and try to fix my setup from there, purely by muscle memory.
> I honestly don’t get people, like were these people just extremely lucky and never had to tweak anything?
Yeah, that's about how I feel about Wayland. I'm not sure quite why there are two groups of people with such wildly different experiences talking past each other, but I suspect it comes down to what each user wants the software to do and what hardware they're running on.
X11 cannot handle the average user connecting an average laptop to an average external monitor (no usable per-screen scaling). Maybe this was good enough in the 80s, but today it is seriously holding back the platform.
Sigh. X11 actually can. Even in its original release it supported different PPIs for different displays. But a lot of Linux programs (especially GTK ones) will flat out assume the specified monitor PPIs are just wrong and will use 96 PPI. Plus, with the way modern multi-monitor usage works in X11 (by faking one giant single screen), that doesn't work anymore. But when it works, it works.
But even with that it's STILL possible with the right setup using xrandr where you essentially render at a higher res and then downscale. Ubuntu's X11 version of Gnome has had this out of the box since I think 20.04 and it works very well. IIRC upstream Gnome refused it because that's what Wayland is supposed to do...
> Even in its original release it supported different PPIs for different displays
Yes and no.
X11 screens could have different resolutions, color modes and pixel densities, but windows could not span multiple screens or be moved from one screen to another. The only way the user/the application could move a window to a different screen would be to open a new connection to that screen (denoted by that familiar DISPLAY=:0.x environment variable) and recreate all the resources there.
There was exactly one application that was capable of doing that at runtime (XEmacs). For all the others it meant restarting the application with a new DISPLAY env var.
Hence Xinerama. It joined all the different physical displays into a single screen, which allowed moving windows around, but came with limitations, like the same color modes or DPI for all displays -- since it was logically a single screen.
> But even with that it's STILL possible with the right setup using xrandr where you essentially render at a higher res and then downscale.
You actually don't even have to do this for updated programs - hidpi aware applications scale themselves, vector graphics style, so there's no up then down scaling going on. And applications can easily read the xrandr config and adjust their factor when moved to a different monitor. However, xrandr's ppi factor is not the scale factor you likely want, so this isn't really standardized; each toolkit might do it a bit differently. But all the pieces are there.
Non-aware applications might be bitmap scaled if needed though. (Of course, wayland just breaks all legacy applications anyway so that sets the compatibility bar low regardless)
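As a rough illustration of "the pieces are there": a minimal XRandR sketch, assuming libXrandr (the compile line is an assumption), that reads each connected output's pixel and millimetre dimensions and derives a per-monitor DPI. A toolkit would then map that to whatever scale factor it uses, which is exactly the part that isn't standardized.

    /* Sketch: deriving a per-output DPI from RandR data.
     * Build (assumption): cc dpi.c -lX11 -lXrandr */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));

        for (int i = 0; i < res->noutput; i++) {
            XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            if (out->connection == RR_Connected && out->crtc && out->mm_width > 0) {
                XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
                /* 25.4 mm per inch; a toolkit would turn this into its own scale factor. */
                double dpi = (double)crtc->width * 25.4 / (double)out->mm_width;
                printf("%s: %ux%u, %lu mm wide -> ~%.0f DPI\n",
                       out->name, crtc->width, crtc->height, out->mm_width, dpi);
                XRRFreeCrtcInfo(crtc);
            }
            XRRFreeOutputInfo(out);
        }

        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }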
This is my situation. I use sway for this reason and it’s just about okay if you’re not on Nvidia.
Screen sharing is the main gripe, it’s a pain to configure, involves a lot of different bits of software which have to be orchestrated and none of them are mature enough not to break occasionally in an unexpected fashion.
Well, most providers worldwide still use IPv4, which means that the cost of switching to IPv6 is higher than the cost of maintaining workarounds for 10 years, and who knows, maybe it will be the same for another 10 years.
But it's really not. About 85% of the entire public IPv4 range is currently announced.
Transfering IPv4 prefixes comes with a 2 year transfer restriction period.
IPv4 addresses currently cost about $50, and you have to buy an entire subnet at once, minimum /24.
It makes perfect sense to start charging for them if you're running out of them and don't want to buy additional prefixes.
Not to mention that there's also additional cost involved in case someone was using that particular address (or even worse multiple addresses from the same /24) for spam or malicious purposes, because that means that the entire prefix is currently trashed and there's some effort involved for it to be removed from all the independently maintained blacklists etc.
Yes, having everyone involved learn IPv6 is more expensive than burying your head in the sand and adding another 4 at the end of NAT444444444444444444444, but at some point it’ll just be too ridiculous.
You absolutely can. I’m single stacking IPv6 on my fresh hetzner box.
Cloudflare in front of web services and it just works(tm).
For everything else - Argo can do arbitrary TCP (requires cloudflared though) and then you can start bugging your ISP about the very real need for IPv6.
Really? I thought that one of the complaints about R6RS was that it was too broad and overspecified, while Wayland is too small so every other compositor is reinventing half of X11 by itself. I'd say it's more like R7RS (hash tables? No. Procedural records? No. Enumerations? Nope).
EDIT: This comment may age poorly when R7RS Large is complete.
One of my problems with R6RS was it was a profound shift in philosophy. The consensus criterion was changed from unanimous to simple majority, for instance, and there was much more emphasis on building a practical software engineering language rather than a language core intended to be built upon.
But it kind of failed at that because its standard library had only a small fraction of the features that "practical languages" like Python or Java had! ("> 1 line to send an email -- NON-STARTER!") So it was kind of a fuck you to the existing Scheme user base, in favor of new users that had yet to materialize, and yet it didn't deliver what those new users wanted! (R7RS Small was kind of a return to form for Scheme. I really appreciate the Small vs. Large profiles, akin to C's freestanding vs. hosted implementation profiles.)
And Wayland is kinda the same. "Fuck everything about X11" is as significant a rationale for Wayland as any, but there's a lot of things X11 users need that it didn't do very well until fairly recently. But literally everyone with the know-how to work on X11 backs Wayland instead... so unlike the R6RS situation it's kind of a fait accompli. Enough Scheme implementers were assmad about R6RS that there had to be a compromise.
X11 is actually not really difficult to maintain. There is just no corporate funding anymore. Wayland is designed to be used in car entertainment systems. At least this is where most of the funding is coming from. As such it is completely unsuitable for any desktop use and has had zero community mind share in that space for a very long time.
I think in both cases, there's a similar switch of philosophy. We went from "there are a bunch of components glued together, and with enough glue you can solve anything (except an excess of glue)" (shell scripts for sysvinit vs declarative unit files for systemd; X11 apps that can do almost anything, vs the privileged compositor for wayland), to a model of "we wanted things to just work without having to install glue, so we integrated more functionality people wanted, if you want to swap it out you have to swap out the whole thing and keep up enough with all the other features people want" (e.g. you have to actually get the things you want into unit file directives, or get the features you want into the compositors people use; or you have to build a completely separate compositor or a separate init).
That's a tradeoff. For my part, I'm thrilled that so many more things just work out of the box; however, it's discouraging for people whose features aren't covered yet, since they have to go work on integration rather than writing a specialized tool and encouraging people to glue that tool in. But the benefit of that is that once something is integrated, it just works, with no glue required.
You are presenting this in a certain way that I think veers into inaccurate and misleading, in an attempt to smooth things over and be nice.
What we had before systemd was -
90% glue code, reimplemented quite badly across X distributions.
That glue code was, in practice, extremely brittle and very very unfun to attempt to keep even simple daemons running "portably" distribution to distribution.
The other 10% was increasingly aging and ill maintained c code snippets.
That was not a nice world for people actually using it.
For people making stuff up about "the old days" that didn't actually participate in the misery of making basic systemv scripts, yeah it was composable and we lost something.
The internals of systemd are just as brittle, and the model of unit file configuration does not really apply cleanly beyond the simplest cases. So editing unit files becomes an undocumented dark art.
Chiming in here to say that systemd seems to cover the 80% (or even 90%) case pretty nicely, however that last 10-20% is now really difficult.
Anyone who has really delved into systemd knows this but they get shouted down as halting progress and hugging bash scripts, which is disingenuous as bash scripts (as per sysvinit) were painful and had great difficulties in areas such as determinism and parallel execution.
If you ever want an example of what I mean: look at how systemd starts mysql. Someone (not me) spent at least a man month making that work.
I do begrudge the all or nothing approach that systemd is taking (even if it claims to be modular), but I will admit openly: that 80% case is a lot nicer.
> If you ever want an example of what I mean: look at how systemd starts mysql. Someone (not me) spent at least a man month making that work.
I was curious, so I cracked open the mariadb.service unit that ships with Arch Linux. Other distros might ship different unit definitions, but this is the one I'm looking at.
It's large, yes, but very well commented and seems quite clear to me.
Much of the complexity seems to be around sandboxing: PrivateNetwork, CapabilityBoundingSet, PrivateDevices, ReadWritePaths, ProtectHome, PrivateTmp. These are all settings to do with hardening the service. They're totally optional, and can be removed without impacting functionality.
There is some extra complexity in the ExecStartPre and ExecStartPost commands. This appears to be something to do with the Galera cluster functionality. I'm not entirely sure what's happening with those, but I imagine these commands would also be present in a SysV init script implementing the same functionality.
The rest is pretty standard stuff: the user/group is set, along with the umask and some ulimits. LD_PRELOAD is set to load jemalloc. There's also some start/stop timeouts configured, with a comment explaining that these same timeout values were used in the SysV init scripts in the past.
Essentially, I'm not really sold that any of this complexity is caused by systemd. The hardening strikes me as a little unusual, and at a guess I'd say that this probably wouldn't be present in a SysV init script. If it were, the configuration would live in executable code, not in strictly declarative config directives.
> The internals of systemd are just as brittle, and the model of unit file configuration does not really apply cleanly beyond the simplest cases. So editing unit files becomes an undocumented dark art.
I call bullshit on that.
We removed a few thousand lines of SysV init scripts, which we had basically kept fixing for subtle errors, from our configuration management when we migrated to systemd. There are very few cases that aren't handled by very simple systemd units; in fact, I'd dare you to give an example that would be easy in SysV and hard in systemd.
A few fun cases:
Doing /etc/init.d/servicename start and then status immediately afterwards returned "service stopped". That confused service managers like Pacemaker, which thought the service had failed to start. Why?
The app did start and then saved its pid file. It was a Java app, so that took a second.
The script just forked the app into the background. So if you ran status right after start, the pidfile was not there yet and it showed the service as stopped. Not a problem in systemd.
Another
(IIRC) The MySQL init script put its pid file in /var/run/, like everything else. It was a somewhat old install, so /var/run wasn't a separate partition or tmpfs.
The MySQL init script also didn't try to start if it found that PID already existing in the system. It didn't check *what* was running under it, though.
So in a crash scenario, the server started back up, the MySQL init script went "oh, there is an apache daemon using the PID I had last reboot, clearly that means mysql is working" and exited. MySQL status reported MySQL as working. Not a problem in systemd.
Another:
The script just... sent a signal and exited on stop. The app could take some minutes to shut down. stop -> start failed, because the pid was lost: the script removed the pidfile right after sending the signal, while the app was still shutting down... Similar problem with multi-process apps whose scripts didn't kill all the children. Systemd's "just have a cgroup and mark the service as stopped once everything dies" fixes that. You can ask for that behaviour if you tell systemd not to kill processes on stop, but that's pretty much "purposefully using non-default settings", so it can't really be done by accident.
IIRC, all or most of that is behaviour SysV init scripts should not have had according to the standard; the scripts were simply buggy.
Those scripts were in popular packages in "enterprise" Linux distros. If even those maintainers can't write a "simple SysV script", then maybe SysV scripts aren't as "simple" as some people claim they are.
X11 offers a lot of functionality on the server - a lot of it unused, but still. Wayland technically specifies little but the protocol. So here the philosophy switch is the other way around.
Simply the fact that systemd turned logs into a binary format that can't be read with standard tools, and that this was not (easily? at all?) possible to change, made me really strongly dislike it.
systemd has its own tool to read its binary log format, but I've already seen it corrupt its own logs and fail to read it.
And did they do the binary format for efficiency? Get this: I've never seen anything be inefficient due to logging before systemd. Shortly after Arch Linux switched to it, something was being super slow. Sure enough, it was systemd not being able to handle the amount of logs something produced.
I think systemd is very opinionated, and something that opinionated should not be such a basic piece of the Linux landscape. There should be choices of individual components.
I like some of the things systemd does, but the binary format is a travesty. And I do not mean "the fact that it is binary is bad"; the format itself is. Example:

    strace systemctl status haproxy 2>&1 | grep /var/log/journal | wc -l
    356

356 files opened (the whole logging dir is around 800MB) only to tell me:

    CGroup: /system.slice/haproxy.service
            ├─2428 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
            └─2434 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock

    Warning: journal has been rotated since unit was started, output may be incomplete.

Yes, to tell me there are no logs for the app. And it takes multiple seconds (I tested it on a NAS with spinning rust).
The binary log format has a huge advantage: it allows for an arbitrary number of fields containing arbitrary content. Unlike syslog, journald is trivial to parse unambiguously, and you can even dump binary data like crash logs into it if you really want to. It also parses universally -- you always know what the timestamp is, you don't need to craft a per-service regex.
I'm not sure what would be a better alternative. I guess it could have gone with dumping JSON, but then you have issues with newlines, or escaping, and a single bad character can break parsing. At that point you might as well go binary, IMO.
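To illustrate the arbitrary-fields point, here is a small sketch using libsystemd's sd_journal_send() (the DEPLOY_ID and REQUEST_PATH field names are made up for the example; the compile line is an assumption). Each argument is a FIELD=value pair, and journalctl can later filter on any of them without a per-service regex, e.g. journalctl DEPLOY_ID=2024-01-demo.

    /* Sketch: structured logging with arbitrary fields via libsystemd.
     * Build (assumption): cc log.c -lsystemd */
    #include <syslog.h>
    #include <systemd/sd-journal.h>

    int main(void)
    {
        /* MESSAGE and PRIORITY are standard journal fields; DEPLOY_ID and
         * REQUEST_PATH are made-up, application-specific fields.
         * The argument list is NULL-terminated. */
        sd_journal_send("MESSAGE=request failed",
                        "PRIORITY=%d", LOG_ERR,
                        "DEPLOY_ID=%s", "2024-01-demo",
                        "REQUEST_PATH=%s", "/api/v1/items",
                        NULL);
        return 0;
    }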
If an application provides some standard command line tool that allows its internal data to be output as formatted text, then it in fact can be read with standard tools and manipulated with shell pipelines and scripts.
Plain text files are nice where possible, but binary formats are not automatically going against the Unix Way.
A well-designed, well-implemented binary log format? Quite possibly so.
Let's create a logging daemon that creates Apache Parquet files, for instance. Or at least uses a well-established, well-tested binary row format, with existing tools capable of easily unpacking it and working with it.
Maybe journald has a good enough API to stick to it. I hope it's going to be replaced with a saner implementation of that API, like Pipewire did with Pulse Audio.
redhat seem to have tried their best to use systemd to turn Linux into Windows NT
- over-complicated service manager
- binary logs that are difficult to find
- incomprehensible task scheduler
- hidden caching dns resolution service
- disk manager
- network manager
- login management
- crappy ntp client
all it needs is svchost.exe
something that was 100% reliable is now about 98% reliable, and when it inevitably breaks it's completely un-introspectable by standard tooling
I have yet to see a sysadmin write a 100% correct systemd service unit file
But look, having a stack that replaces many Unix services, and which you basically completely control, is very convenient. It allows, say, to drastically limit third-party userland in embedded systems, where Red Hat is big.
Systemd is trying to replace Unix, a bit more successfully than GNU Hurd.
The whole "sysv init is so slow because of shell scripts" thing was a scapegoat. Yes, bash is relatively slow. And yes, dash is the answer to that problem, not systemd.
they who control systemd now control Linux as an OS. not as an API (that's kernel/libc) but as an OS. how you manage it, run it, suspend it, initialize it, turn it off, everything
Only if we ignore the fact that commercial UNIXes already did this before systemd came to be.
Also in the Cloud OS world, classical UNIX doesn't even matter that much, we only need something to run those containers or managed runtimes on top of.
Wayland has a massive loss in functionality (in the name of "security" and off-loading implementation details to compositors/window managers) compared to X11 though.
Stuff like xdotool, screen sharing, clipboard sharing, etc. is much harder.
It works fine on Arch with the Hyprland compositor: moving the mouse from the terminal to waybar, you can still type in the terminal.
When was the last time you really tried wayland? My first (and last) time was 2017. Much has changed!
In a few years, when wine support is perfected, I think people dissing wayland will be seen as quaint as those insisting on a distribution "unsoiled" by systemd are seen today :)
I tried it up to last month in Ubuntu 22.04. I changed back because I got tired of the broken focus-follows-mouse and Ubuntu tracks the bug but doesn't fix it.
Neither. Screensharing was a massive issue for a while, but it's now far easier than X11 because the server/portal is responsible for a lot of the more fiddly bits.
The underlying problem is that the protocol (ABI, really) has to be specified and implemented for these things to work. Screensharing was a prominent early example of something which hadn't been through that process. It was a very visible issue, an easy wound for hardcore X11 fanatics to pull at, and - apparently - continue to bash on long after it has been solved.
Clipboard woes have likewise been solved.
Mouse and keyboard injection has not yet been solved, but I recall there being some draft spec to that effect.
Ultimately, if Wayland does 100% of what you need, you should be using it because it generally does those things better. If it doesn't do what you need, then stick with X11 until it does (and support for that protocol is widespread).
Why not both? It's a loss in functionality because it's much harder and the Linux desktop environment ecosystem is too niche, fragmented and underfunded to make the transition complete in a way that satisfies all camps so we end up in this traditional linux situation where several solutions coexist in parallel forever.
Time heals all wounds. If it is hard but not impossible eventually the gaps will be filled. Also the people who maintain X11 can't live forever sadly. Eventually there will only be Wayland.
We only have to wait a few more decades at most.
The problem is that systemd vs sysv-init is a false dichotomy.
Systemd took over a ton of important non-init functionality, like DNS, logging, and interactive sessions. That could be fine if systemd did so in a nice and rock-solid way, but it was unpleasantly bug-ridden for years after being thrust on mainstream distros via a hard Gnome dependency.
SysV-init sucks in many ways, it's well known, and I can personally attest to that. I don't want SysV-init to be perpetuated.
There were, and are, viable alternatives to both, which are a less radical rework of the well-established Unix approaches around the area, do not overreach well past the init system scope, and are sane and well-functioning. For examples, see [upstart], [openrc], [s6], [runit].
I think that the prevalence of systemd was mostly achieved not through its technical merits (which undeniably exist) but through Red Hat's strongarming, because Red Hat wants certain things work in a way convenient for their business, and they have a powerful battering ram under their control, the Gnome DE.
Fortunately Wayland is not being force-fed in such a way, because, much like systemd when it was introduced, it's still in many important regards not exactly ready. I, with my 25 years of running Linux on the desktop, will gladly migrate to a better graphics architecture when it becomes adequate for my purposes, if the whole current Unix architecture is not obsoleted and replaced wholesale by that time.
openrc: I don't believe openrc had a process supervision story 8-9 years ago. From what I recall, at the time openrc was just a slightly better sysvinit.
runit: I have a lot of experience with runit, and while it works, it has many footguns and is hard to use correctly.
s6: a better runit with better footguns
A great thing that more people should read is https://lwn.net/Articles/578210/ from one of the Debian developers who voted to switch to systemd. It touches on things like the lack of adoption of upstart, and openrc not solving the problems they had.
The largest issue with Wayland is it has "Linuxisms". That means no work was done by the Wayland people to make it portable to other UN*X. So the BSD folks (and other UNIX people) have a lot of work to get it going.
And there still seems to be confusion about whether Wayland will require systemd; from what I have seen, there's no 100% clear direction from anyone.
This used to be true, but not anymore. We have upstream support for FreeBSD in libwayland now. Other BSDs have MRs which are only waiting for CI support.
Wayland itself has nothing to do with systemd, many users are running Wayland without systemd.
Yes I can confirm that Wayland runs fine without systemd!
I am on Gentoo and using openrc and sway runs fine. The only thing that might kind of rely on systemd is sway-idle which wants logind (or to run as root iirc).
I know you know this, being the sway and wlroots maintainer and all but I figured I would show a concrete example of this working fine.
FWIW, swayidle only has an optional dependency on logind. It can be built without. In fact, we're even discussing removing the logind-related features from swayidle. (swayidle never required running as root though.)
I mean, is it really wrong for linux people to focus on linux? What's the point of these different OS's if they all support the same software and software has to be made for lowest common denominator?
What a typical Linux user mindset! That IS one of the largest issues for non-Linux users. Linux is by large margin a niche OS in desktop market share. So it is no issue if all programs are developed only for Windows and/or Macintosh. Right?
I admit that appealing to capitalism isn't likely to advance the Unix desktop but the "we're all in this together" excessive portability attitude didn't produce good software either.
For me personally X11 just got to the point where everything I want works out of the box so switching to wayland seems like entirely pointless experience.
I'm also not a fan of the attitude of moving support for even basic functionality like "take a screenshot" or "handle mouse/trackpad properly" into higher layers; it just feels like a lot of work duplication and moves that repeated work to window managers.
Yeah, I'm a big fan of systemd but I can accept that when it first came out it was probably a lot worse than it is today. Similarly, Wayland has a bunch of good ideas, along with a bunch of functionality that isn't there yet. Of course, Wayland also has the problem that it's intentionally excluding some useful features that X had, like global hotkeys and screensharing. I think systemd and Wayland are actually opposites in this regard, where systemd was disliked at first and then turned liked, while Wayland was liked at first but is now turning to disliked
> Of course, Wayland also has the problem that it's intentionally excluding some useful features that X had, like global hotkeys and screensharing...
Screen sharing was always a weak point in X11, actually. VNC servers like x11vnc use absurd, inefficient hacks like periodically capturing fragments of the screen and checking for changes [1] -- while an extension to provide change notifications exists, it's been unreliable for ages [2] and the standard advice is to disable it.
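For context, the change-notification mechanism being referred to is the XDamage extension. A minimal sketch of how a VNC-style server would use it (assuming libXdamage; the compile line is an assumption) looks roughly like this; whether those notifications arrive reliably is exactly the dispute.

    /* Sketch: listening for screen changes via the XDamage extension
     * instead of polling. Build (assumption): cc damage.c -lX11 -lXdamage -lXfixes */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xdamage.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        int damage_event, damage_error;
        if (!XDamageQueryExtension(dpy, &damage_event, &damage_error))
            return 1;

        /* Ask for one event per damaged rectangle on the root window. */
        XDamageCreate(dpy, DefaultRootWindow(dpy), XDamageReportRawRectangles);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == damage_event + XDamageNotify) {
                XDamageNotifyEvent *de = (XDamageNotifyEvent *)&ev;
                /* A VNC-style server would re-read and re-send only this area. */
                printf("damage: %dx%d at %d,%d\n",
                       de->area.width, de->area.height, de->area.x, de->area.y);
            }
        }
    }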
> Yeah, I'm a big fan of systemd but I can accept that when it first came out it was probably a lot worse than it is today.
The first "large" distro to switch from sysvinit to systemd was Arch, and that switch happened over ten years ago. The switch itself was quite rocky (the upgrade path was not particularly seamless, and while Arch users tend to be more tolerant of that sort of thing, it's worth mentioning).
That said, even in 2012-2013, the end result once you completed the upgrade was significantly better than collectively expected. The original plan was to support both systemd and sysvinit (at least for a period of time), but that was quickly abandoned because not enough people wanted to actually maintain sysvinit packages, so support[0] ended up getting dropped very quickly.
[0] Arch is a community project, so "support" is different from what you'd expect in (e.g.) RHEL, but it still has separations of what's considered supported and what's not.
> The first "large" distro to switch from sysvinit to systemd was Arch
No. In terms of released to users, the first was Fedora; the second was Arch; then Mageia; then openSUSE. In terms of integrated into the distribution, the first was Fedora; the second was Mageia; then openSUSE; then Arch.
> Of course, Wayland also has the problem that it's intentionally excluding some useful features that X had, like global hotkeys and screensharing
Screensharing has been supported for some time already. Some apps support it, some don't. It is up to the apps to use the respective APIs; the days of free rein over the framebuffer are over.
I mean, you can get screen sharing to work. But there are like 3 different incompatible "standards" for how to do it. There's no single simple answer to "how to record the screen in Wayland". This has been the state of screen sharing on Wayland for at least 7 years.
I'm also very curious about what's envisioned for global hotkeys. Surely we don't expect people to manually go to their system settings and configure some command to run which talks to Discord over some IPC mechanism, to start sharing their voice when they press their push-to-talk button and stop when they release it? But "global hotkeys should be configured on a system basis, not an application basis" seems to be the reigning philosophy, despite being incredibly user and developer hostile.
There is one standard and a single simple answer: xdg-desktop-portal with PipeWire. Some compositors may have implemented their own private APIs, but those are by definition not a standard.
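To make that concrete, here's a minimal sketch in Python (assuming dbus-python and a running xdg-desktop-portal); it only probes the ScreenCast portal, with the actual capture handshake summarized in the comments rather than implemented:

    # Sketch: ask the xdg-desktop-portal ScreenCast interface what it offers.
    import dbus

    bus = dbus.SessionBus()
    portal = bus.get_object("org.freedesktop.portal.Desktop",
                            "/org/freedesktop/portal/desktop")
    props = dbus.Interface(portal, "org.freedesktop.DBus.Properties")

    # Bitmask of what the compositor's portal backend can hand out:
    # 1 = whole monitors, 2 = individual windows, 4 = virtual outputs.
    sources = int(props.Get("org.freedesktop.portal.ScreenCast",
                            "AvailableSourceTypes"))
    print("ScreenCast portal version:",
          int(props.Get("org.freedesktop.portal.ScreenCast", "version")))
    print("can capture monitors:", bool(sources & 1),
          "| windows:", bool(sources & 2))

    # The real capture flow (not shown): CreateSession -> SelectSources ->
    # Start (each returns a Request object whose Response signal carries the
    # result, and Start is where the user consent dialog appears), then
    # OpenPipeWireRemote hands back an fd you feed to PipeWire to pull frames.

As far as I know, OBS and the major browsers go through this same portal/PipeWire path on Wayland, which is why screen sharing behaves the same on GNOME, KDE, and wlroots compositors that ship a portal backend.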
Global shortcuts are a bit more thorny, exactly for the reason you mentioned. You present one POV; another is that application-defined shortcuts are incredibly hostile, as they let applications stomp on each other in the better case, or hijack global state in the worse one. Some other operating systems do not allow it either, for the same reasons. The long-term solution could be defining an API that allows applications to advertise global actions, and letting the user configure shortcuts that might (or might not) trigger them, in some user-friendly way.
> A portal frontend service for Flatpak and possibly other desktop containment frameworks.
When it comes to global shortcuts, I'm not saying it has a super easy solution, but it's something that's essential to support. Wayland intentionally doesn't, and I can't see that changing in the short term (as you also seem to agree).
It is a D-Bus API, and it works across namespaces (i.e. Flatpak containers too). There's no harm in using it in non-Flatpak apps; at the very least you'll be ready if your app ends up in a Flatpak.
Wrt global shortcuts, I see that there is some work being done. The intentional part isn't malice, as in not being willing to implement it at all; it's about not implementing a quick-and-dirty temporary solution and then being stuck supporting it for the next 50 years.
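In fact there is already a portal interface aimed at exactly this, org.freedesktop.portal.GlobalShortcuts (present in newer xdg-desktop-portal releases; whether the desktop's backend implements it varies). A quick sketch, again assuming dbus-python, just to check whether it's exposed on a given session:

    # Sketch: check whether the GlobalShortcuts portal is exposed here.
    # What shows up depends on the xdg-desktop-portal version and on the
    # desktop's portal backend.
    import dbus

    bus = dbus.SessionBus()
    portal = bus.get_object("org.freedesktop.portal.Desktop",
                            "/org/freedesktop/portal/desktop")
    xml = dbus.Interface(portal,
                         "org.freedesktop.DBus.Introspectable").Introspect()

    if "org.freedesktop.portal.GlobalShortcuts" in xml:
        print("GlobalShortcuts portal is available")
        # Rough flow: the app calls CreateSession, then BindShortcuts with a
        # list of named actions; the desktop lets the user pick (and remember)
        # the actual keys and sends Activated/Deactivated signals back.
    else:
        print("no GlobalShortcuts portal on this setup")

Which is pretty much the "advertise global actions, let the user bind them" model described above, so the disagreement is more about adoption speed than about whether a design exists.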
It took Golang 12 years to attain generics. Wayland is already 14 years old.
To tell the truth, X11 took roughly 10 years (1986 to 1996) to get to a pretty usable, if still imperfect, state and to largely dominate Unix desktops.
I don't see why there cannot exist a daemon that handles this IPC in a standard way (e.g. over a pipe) and which can be politely asked by apps to map particular hotkeys. An API / UI should be available for the user to review and customize the mapping, and to resolve possible conflicts.
This is basically the approach that exists in macOS for many years; I haven't heard a ton of criticisms towards it.
As you can see with screen capture, even when the API is available it will take years to be adopted, with some intentionally dragging their feet -- it is different after all, and the old way worked for me, etc., etc.
The macOS ecosystem moves much faster in this regard; Mac users expect rapid adoption of new APIs, and they don't have 20+ year old bash scripts that must continue to work untouched.
> Screensharing has been supported for some time already. Some apps support it, some don't. It is up to the apps to use the respective APIs; the days of free rein over the framebuffer are over.
See, that's the problem: it puts unnecessary load on everything else that uses it.
It should have that API, plus an API that lets a dedicated app grant or deny permission to use it, instead of putting everything on the WM to duplicate more and more code for no good reason, making WM developers' lives harder.
For the past few months I've been investigating and working on porting parts of Xfce to be usable under Wayland. It's astonishing the number of features that are just not implementable at all on Wayland, at least not without inventing new non-standard Wayland protocols. (Another option is refactoring the desktop environment so all the individual components run in the same process as the compositor, and have access to the compositor's internals, but that's unacceptable for what are hopefully obvious reasons.)
Even after over a decade, Wayland still seems quite immature and poorly thought-through. The protocol standardization seems geared toward satisfying GNOME's use-cases and ignoring everyone else's. The wlroots camp has gone their own way on a bunch of things, which is fine, but fractures the landscape a bit.
Making Xfce fully Wayland means turning the window manager, xfwm4, into a Wayland compositor. For someone already familiar with xfwm4's code base, that's a year or more of work (and a requirement we'd have is that it would have to support both X11 and Wayland, further complicating things).
Even if fixing inherent problems in X11 (security, graphics rendering, etc.) would require compatibility breaks, personally I would find that preferable to throwing the entire thing out and having to build (and build on) an entirely new system. As much as I disagree with JWZ's attitude on a lot of things, his description of most open source projects as a "Cascade of Attention-Deficit Teenagers" seems to be pretty accurate, at least here. X11 hasn't been improved and fixed because no one wants to maintain and improve X.org anymore, and the people who used to maintain it would prefer the fun of chasing and working on new shiny things, even if it means decades of new make-work for anyone working in the Linux GUI space.
Don't get me wrong, I am the first one to cut off anyone at the knees who feels they are entitled to tell open source developers what to do with their time (though it gets a bit more complicated when many of those developers are employed by corporations to do this work). But I think it's pretty shitty to push the desktop in this direction and essentially force desktop and toolkit and application developers to choose between stepping up to maintain and build on X.org (something well out of most people's wheelhouse), or spend a huge amount of time porting to a new display system.
Having said that, Wayland does have promise to be a better system than X11, even though it will likely take another decade to achieve feature parity with what we already have. So I'll continue to work on getting there eventually, even though I resent the fact that I have to learn an entirely new display system so I can work on reimplementing the same features again instead of building new features, fixing bugs, and making things more polished.
> Even if fixing inherent problems in X11 (security, graphics rendering, etc.) would require compatibility breaks, personally I would find that preferable to throwing the entire thing out and having to build (and build on) an entirely new system.
Why?
X11 in the current state would need to be radically redesigned anyway.
There's a bunch of stuff that long stopped making sense, like XDrawLine and similar. The networking protocol sucks horribly and just doesn't perform, even on modern, high end connections, and there's a bunch of baked in assumptions that don't match modern hardware.
Yeah, you could make X12 break compatibility, throw out all the cruft, and redesign the protocol, but at that point, what is even the point? It'll break pretty much every single application in existence anyway.
I wouldn’t say that screens (protocol term) in X11 are totally useless. In fact, this is how I would implement a more secure screensaver (this would require cooperation from the server, though): the screensaver client somehow securely authenticates to the server so it’s entitled to be the screen locking/unlocking process. It’s hosted on a separate X11 screen. When there’s a command to lock the machine, the server switches to that screen. Upon unlock, the server returns to the default screen.
Now if you kill the screensaver, or it segfaults because you hit a lot of keys, or a butterfly causes an EMI disturbance and the screensaver dies, you are left with an empty screen and not with your work exposed like in the current scheme of things.
IIUC this is more or less how the secure screen lockers work. GNOME will switch to the Display Manager (GDM) screen and ask it to validate the user's credentials before switching back to the session screen. If GDM crashes there is nothing to see and it is restarted.
Who, or what groups, get paid or make a living to work on "the successor to X?"
And more importantly, where do THEIR incentives lie, especially regarding the question of "playing nice with the old stuff and other devs trying to build things here."
My best guess is this is where you'll find ALL the answers to "why wayland sucks."
(It may not even be "nefarious," simply "not worth our time," or "why not just use OUR preferred DE instead of the one you're working on?" -- okay, maybe that is nefarious. :)
I think it's fair, since every one of those distros ships an X11 fallback. I desperately want Wayland to succeed but exaggerating its current wins won't get us there.
I don't want this post to focus on the negative, though, so I'll suggest a more positive argument: the people who would have been responsible for a hypothetical X12 instead decided to make Wayland. I can't think of a body of experts more likely to make a correct decision, so I have confidence in Wayland as the path forward.
Fair. I'll admit there are a few rough edges, mainly caused by some apps (Slack) having older versions of certain libraries that make some functionality (like screen sharing) break.
Dudemanguy wrote about its deficiencies on 2022-06-11 [0], e.g. lack of feature parity with X11 and self-imposed limitations like only allowing integer scaling (i.e. to get 1.5x scaling it has to combine 3x and 2x integer scaling). For some perspective, consider checking other HN readers' reactions to that post [1].
To be fair, I'm on a five-year-old laptop with NVIDIA, and since last year it almost works well enough to be a daily driver. For some weird reason Chromium doesn't render at all, even though Chrome does. That's the only remaining bug of significance.
Whereas when I tried a year before I had to bail after an hour because many applications would just have a black screen.
It kind of feels like it will take only one more year for this to work well enough (except by then the laptop might be so old that hardware support ends up lacking).
I have to disable hardware compositing on X11 to get a reliable desktop (and HW rendering in individual apps like firefox). I'm not sure if something similar is possible on Wayland.
Obviously it doesn't work if your workaround is disabling it. It is either bad hardware or a buggy driver. For the latter it would have to be some obscure hardware; popular hardware would have had it fixed by now.
I have been using linux for over 20 years and reliable hardware acceleration has always been more "miss" than "hit." This goes all the way back to having to disable hardware cursors on my very first linux setup. I hear the amdgpu driver is pretty solid, and the i915 driver I use on my laptop is great. Nvidia is just a mess (nouveau and the nvidia binary drivers are differently buggy) and the radeon driver is complete garbage.
My first Linux machine was a 386 with a Trident 9000, running Slackware, so I'm pretty aware of how Linux hardware support developed over time. Maybe I was lucky in picking my hardware, but buggy basic functionality was a big exception (minor bugs were there, like the amdgpu cursor not picking up the same LUT as the framebuffer and being a jarring white against a redshifted desktop; incidentally, the Windows driver had the same issue at the same time).
Not-yet-implemented functionality - sure. I never got TV-out running on a Radeon 7500 (RV200) in the early 2000s, for example. But broken basic functionality (today), like texture mapping freezing on hardware that has a 3D driver shipped with the distro - no. But then again, maybe I was lucky in my picks.
Looks like it started in 2007 [0], though the content was pretty different, there were some updates over the years, and the current version is indeed from 2013 [1]
Right, the name or any kind of lineage does not matter, Wayland is the X11 successor because it is the current seriously developed and improved open display protocol. Nothing about Wayland stole from X11, has any less legitimacy to be the post-X11 protocol than something called X12, or prevents anybody else from improving X11 or from working on an alternative they call X12 or anything else.
I don't know why people get so hung up about this. People including its creators may have been over-optimistic about Wayland, but it was never going to be a case of a weekend hacking binge producing something useful out of the gate (X11 had that distinction because it was entering a very small and very green field). X11 has been used for so long because it is very well supported, robust, and has been extended and improved for decades and most of the remaining problems it has are very hard to solve. And that's exactly why we also didn't see something called X12 happen overnight either.
The fact that Wayland has been created and worked on and blessed by a number of X11 alumni did lend it a good amount of credence early on, and one might say gives it the right to be spiritual successor to X11. But really if you censor the names and look at the practical reality rather than sentimentality, Wayland creators and developers have been going down the long difficult road of coming up with something better and nobody else did, so that is really why it is the next X11. Progress may seem slow but it does not stop. Features continue to be added, implementations continue to improve, support continues to expand and its takeover seems almost inevitable at this point.
Someone might mention "network transparency" at this point. From the outside looking in, that was never the fundamental requirement of the protocol as far as I can tell. Non-transparent extensions like DRI were still considered X11 by the X11 architects and developers who wrote and merged them, showing they have always been quite willing to step out of rigid dogma and embrace practical application. The original X announcement never said the project was a network-transparent window system; it said it was a good-but-not-perfect window system, and if a good window system today does not require a network-transparent protocol (because users don't care as much as they did back then, or because networking can be achieved at other layers), then there is no reason it couldn't be X12. The X12 page linked here enumerates some other X11 features that could be dropped too; I've never seen a reasonable argument for why a network-transparent protocol is the be-all and end-all of X.
Good point. And for those who need to connect over the network, there are now FOSS implementations of RDP, for example. (And VNC, but the performance there is probably not so good.) Then there are also proprietary solutions such as BeyondTrust and NoMachine.
Yeah, I know RDP was developed in Redmond, but from my layman's perspective it's one of the best protocols for accessing a graphical desktop environment over a network. If you worry about security, just tunnel it over WireGuard.
> I don't know why people get so hung up about this.
People get hung up on this because they believe that free software somehow entitles them to dictate that others perform infinite labour on whatever schedule, projects, or features that they deem fit, irrespective of whether the people they are dictating to want to, think that it's a good idea, or whatever.
That's quite a stretch, Wayland and X (any X, 11 or hypothetical 12) are so vastly different that calling Wayland "the new X" is like calling Quartz "X for macOS".
It's not as far fetched as you'd think. My understanding is that Wayland is a result of X11 developers going together to design a new protocol based on the thoughts and ideas they've had for an "X12" throughout the decades.
Wayland doesn't inherit any architecture from X. XWayland is just an X display server hosted by Wayland, the same as Exceed or Xorg running on MS Windows. This is why X will never die: it can always be run on top of other graphical systems.
100% of the developers who know anything about how graphics works under Linux are focusing on Wayland. Development on Xorg is moribund, with only Xwayland getting significant attention.
Hint: X was optimized for 1980s graphics, which was 90% simple blits, line draws, and fills mediated by the CPU perhaps with special fixed-function accelerators for those operations.
In 2023, graphics is done with the GPU -- period. You post draw commands, geometry, and textures to the GPU via shared memory and let it do the work. Programmable shaders open up vast amounts of capability that X11's graphics primitives just don't get you.
So you may be right that Wayland isn't good at what X11 is useful for. But nobody's doing what X11 is useful for today. What people are actually doing, Wayland is excellent at. You will be running a Wayland desktop soon, because toolkit maintainers and distro packagers will simply drop support for X. The Gtk maintainers are already talking about dropping support for X in Gtk+5.
Sure, but GTK and the GNOME world are highly politicized by RedHat. KDE and Qt are much further away from cutting X11 off. I see the case for Wayland, but similar to X11 it is already showing its age in poorly conceived design decisions. Desktop sharing, for example, should be a first-class citizen in 2023.
Xorg is maintained... by Red Hat. As steponlego pointed out, they're already binning up chunks of the code base, but that's just preparing for when they shut the lights off entirely. As soon as Red Hat says "Shop's closed, boys, we won't be updating this code anymore" KDE and Qt will happily cut out their X support. Especially since Qt has for years been targeting mobile phones, car displays, and embedded applications as its business model -- markets where Wayland is at its greatest strength.
Yeah, I know RedHat 'maintains' X11; this is part of the politicizing they do. Embrace, extinguish; they just forgot the extend part :)
RedHat and their business focus is a lot of what is wrong with Linux for users today. We don't make money for RedHat so their priorities aren't with us.
I'm hoping someone will still take it over for when that happens. RedHat doesn't own X11. Wayland has its uses, but there are a lot of niche and legacy use cases that will still need X11.
Probably your best hope is OpenBSD's Xenocara project, a fork of Xorg. If Xorg bitrots away and Red Hat won't touch it, some OpenBSD madlads are likely to step up... at least until Wayland gets running well on OpenBSD. :)
It doesn't matter to me if the X.org developers have decided not to do their job, which is maintaining X.org. This isn't proof that X.org is bad; it just shows that giggers who really work for big tech firms shouldn't also be trusted to maintain Free software.
I’ve noticed them slowly trying to ruin X.org for a couple years now, deprecating drivers for no reason whatsoever, etc.
Network transparency doesn't offer any tangible benefits for a lot of apps and desktop environments because they are drawn with bitmaps and textures rather than vectors.
This topic has been done to death for the past decade. VNC and RDP won. X11 was a razor edge case and nothing more.
Yet network transparency was the whole point of (and whole reason for the complex byzantine architecture of) X11. So you have to pay the full expensive complexity and asynchronous distributed api tax, and you still have to reimplement half-assed virtual desktop network transparency at another layer.
X11 just had a different use case. It was not meant for the internet but for many terminals on the local network. Hence the protocol design, which didn't consider latency a potential problem.
ssh -X is still easier and lighter weight than running a VNC server on any machine I need to run a GUI on, though.
Incidentally, at $CURRENT_JOB this happens very often: when WFH, I RDP to a Windows machine, from which I VNC to a Unix Xvnc box, from which I ssh -X to my actual dev box. It is amazing that it works at all, and it is quite usable!