Hacker News
Why don't we have Wayland on Raspberry Pi yet? (2018) (joshondesign.com)
184 points by bpierre on April 23, 2019 | 250 comments



I enjoyed the article but this quote jumped out at me: "What took thirty years to build isn't going to be replaced in a day." This was in regard to replacing X11 with Wayland. It's funny because it would be closer to the truth to say that X11 took 5 years to build, and for the last 25 years everyone has been trying to get off of it, but in the meantime piling more stuff on top of it since that's easier than starting from scratch. 25 years ago there were people saying stuff like, "I wish we had something else, but this is what we're stuck with, and it's just too much work to make a new window system, so let's just add another protocol extension."

So we're already 25 years into trying to replace X11, which is already far longer than the period of time in which it was being designed and actively developed on a foundational level. There's got to be some good lessons learned in all of this about software lifecycles, or getting it right the first time, or knowing when to move on, or OS development by way of mailing-list consensus. I totally get why it's still used after all these years - it performs an essential task - but I bet the original creators would be the ones most shocked that it's still a thing in 2019.


I've been reading through the Vulkan spec, which is similar in a way: OpenGL has been extended since its introduction in 1992 with more and more features to support new graphics capabilities as they evolved.

I've seen Vulkan called the successor to OpenGL, but reading the spec it seems more like the end game for raster graphics card programming. OpenGL 4.0 was released in 2010, and since then changes have been incremental. We more or less have figured out how to do raster graphics (ray tracing may be a different story), so it made sense to invest tens (hundreds?) of millions of dollars to develop the Vulkan spec, and then many millions more to implement it.

What other technologies are there where we are more or less at the end game? I know Qt5 widgets are considered feature complete for desktop apps.


Photoshop pretty much got it right a couple decades or so ago, and they've just been porting it, smearing on new lipstick, and figuring out how to make more money with it ever since.


I would argue that this is true of most of Microsoft Office as well. When did they really add a new feature to PowerPoint that you had to have?

And it's no surprise both Adobe and Microsoft have pushed people towards a subscription model for this software: Nobody in their right mind would pay for upgrades otherwise. Arguably you need a new Office every ten years to ensure you have security updates, because of the amount of foreign content you process with it, but Adobe? Psh.


>When did they really add a new feature to PowerPoint that you had to have

Funny enough, the screen recording functionality added to PowerPoint a few updates ago is as far as I can tell the best simple screen recorder available for Windows 10 and the closest thing to native screen recording outside the game bar. Not sure why that hasn't made it into the snipping tool yet.


The feature set of Microsoft Office, yes. But I think Google Docs took some reasonable steps backwards in features in exchange for a big leap forward in collaboration. (Or a few steps towards but not nearly far enough to where Douglas Engelbart was in 1968.)

https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

>The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.

http://worrydream.com/Engelbart/

>Engelbart's vision, from the beginning, was collaborative. His vision was people working together in a shared intellectual space. His entire system was designed around that intent.

>From that perspective, separate pointers weren't a feature so much as a symptom. It was the only design that could have made any sense. It just fell out. The collaborators both have to point at information on the screen, in the same way that they would both point at information on a chalkboard. Obviously they need their own pointers.

>Likewise, for every aspect of Engelbart's system. The entire system was designed around a clear intent.

>Our screen sharing, on the other hand, is a bolted-on hack that doesn't alter the single-user design of our present computers. Our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn't magically transform them into collaborative environments.

>If you attempt to make sense of Engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart's intent. Engelbart hated our present-day systems.

And it's in the direction of multi-user collaboration that X-Windows falls woefully short. Just to take the first step, it would have to support separate multi-user cursors and multiple keyboards and other input devices, which is antithetical to its single-minded "input focus" pointer-event-driven model. Most X toolkits and applications will break or behave erratically when faced with multiple streams of input events from different users.

https://tronche.com/gui/x/xlib/input/XGrabPointer.html

For the multi-player X11/TCL/Tk version of SimCity, I had to fix bugs in TCL/Tk to support multiple users, add another layer of abstraction to support multi-user tracking, and emulate the multi-user features like separate cursors in "software".

Although the feature wasn't widely used at the time, TCL/Tk supported opening connections to multiple X11 servers at once. But since it was using global variables for tracking pop-up menus and widget tracking state, it never expected two menus to be popped up at once or two people dragging a slider or scrolling a window at once, so it would glitch and crash whenever that happened. All the tracking code (and some of the colormap related code) assumed there was only one X11 server connected.

So I had to rewrite all the menu and dialog tracking code to explicitly and carefully handle the case of multiple users interacting at once, and refactor the window creation and event handling code so everything's name was parameterized by the user's screen id (that's how you fake data structures in TCL and make pointers back and forth between windows, by using clever naming schemes for global variables and strings), and implement separate multi-user cursors in "software" by drawing them over the map.

Multi-Player X11 SimCityNet:

https://www.youtube.com/watch?v=_fVl4dGwUrA

X11 SimCity Pie Menus:

https://www.youtube.com/watch?v=Jvi98wVUmQA

Multi-user menu tracking (added "@$screen" parameterizations):

https://github.com/SimHacker/micropolis/blob/master/micropol...

Opening multiple X11 displays (multiple toplevel "head" windows per screen, each with a unique id, using $win parameterization):

https://github.com/SimHacker/micropolis/blob/master/micropol...


This human hacks!


Don't know when it was added, but I recently found out that you can have multiple synchronized Word windows of the same document.


At least 15 years ago you could drag a marker that hides above the vertical scrollbar to create multiple views of the same document. I didn't know it carried into other windows, so that might be newer.


On different people's screens?


Yes (in Office 365, in your browser). See https://support.office.com/en-us/article/collaborate-on-word...


PowerPoint now has a great feature where it will do speech to text and supply real time subtitles below your presentation. It’s pretty good too. Seems to ignore swear words though (yes we tested that first).


Photoshop has had at least two massive functional changes; layers in 3.0 and non-destructive editing in CS3.

And probably one huge philosophical change, given that it was originally designed for displaying grayscale images.


Have you used Photoshop lately? There are many new, modern features for selection, content aware erasing, scaling, filling, HDR graphics, 3D, text, computational layers, etc. Get a 30-day trial and try it!


Computational layers of lipstick. And rip-offs of stuff that's been around for decades, that Adobe didn't invent (like tabbed windows, which Adobe patented and sued Macromedia over, in spite of all the prior art).

https://en.wikipedia.org/wiki/Tab_(interface)#Patent_dispute

https://www.donhopkins.com/home/archive/emacs/to.jag.txt

https://en.wikipedia.org/wiki/Tab_(interface)#/media/File:Hy...

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

Around 1990, Glenn Reid wrote a delightful original "Font Appreciation" app for NeXT called TouchType, which somehow found its way into Illustrator only recently, decades later. Adobe even CALLED it the "Touch Type Tool", but didn't give him any credit or royalty. The only difference in Adobe's version of TouchType is that there's a space between "Touch" and "Type" (which TouchType made really easy to do), and that it came decades later!

Illustrator tutorial: Using the Touch Type tool | lynda.com: https://www.youtube.com/watch?v=WUkE3XLw_EA

SUMMARY OF BaNG MEETING #4, July 18, 1990: https://ftp.nice.ch/peanuts/GeneralData/Usenet/news/1990/_CS...

TOUCHTYPE Glenn Reid, Independent NeXT Developer

The next talk was given by Glenn Reid, who previously worked at both NeXT and Adobe. He demonstrated the use of his TouchType application, which should prove to be an enormous boon to people with serious typesetting needs.

TouchType is unlike any other text-manipulation program to date. It takes the traditional "draw program" metaphor used by programs like TopDraw and Adobe Illustrator and extends it to encompass selective editing of individual characters of a text object. To TouchType, text objects are not grouped as sequences of characters, but as individually movable letters. For instance, the "a" in "BaNG" can be moved independently of the rest of the word, yet TouchType still remembers that the "a" is associated with the other three letters.

Perhaps the best feature of this program is the ability to do very accurate and precise kerning (the ability to place characters closer together to create a more natural effect). TouchType supports intelligent automatic kerning and very intuitive, manual kerning done with a horizontal slider or by direct character manipulation. It also incorporates useful features such as sliders to change font sizes, character leading, and character widths, and an option which returns characters to a single base line.

TouchType, only six weeks in development, should be available in early August, with a tentative price of $249. BaNG members were given the opportunity to purchase the software for $150.


>porting it

Not to Linux :(


Qt Widgets is an amazing library, but honestly it is both more and less featureful than it needs to be in various cases. The rich text document stuff still holds up OK for basic cases, but I think the text rendering story could be a bit better. Last time I was doing low level text stuff in Qt, performance was not super impressive, and some of the APIs left a bit to be desired.


Well said. The notion that Qt widgets are "finished" was just an aspiration not to spend much more money on it I think; they sort of rowed back on this when it became apparent that Qt Quick isn't always appropriate, but by then it had sort of spread around as a "Qt fact" amongst people who didn't actually use it.

The number of rough edges, missing bits and outright bugs mean that it's certainly not "finished"... just like all software really.


They helped spread the rumor, when plenty of new features are QML-only, especially when targeting non-desktop devices.

To this day if you want a common file dialog that works properly across all Qt deployment targets, you need to use QML, as the Widgets version is not adaptive and will display a tiny desktop common file dialog on an LCD display, for example.


My preferred metaphor is to the RISC revolution. Just like RISC decoupled CPU design from programming language design (in the sense that hardware was often designed to make assembly coding easy), Vulkan has decoupled shader language design from driver writing. OpenGL was designed under the assumption that the general game-dev public would be writing shaders for direct consumption by the driver/hardware combo; Vulkan, on the other hand, seems to be designed to be a) written by a narrow group of game engine developers, and b) generated by compilers from higher-level shader languages.

(NB: I Am Not An Expert and these are my Uneducated Impressions.)


Vulkan was primarily designed to allow issuing batch-based (queue) validated instruction sets from multiple threads. Lessons learned from OpenGL ES 2.0 showed only a subset of techniques is needed, hence the API is smaller. Shaders are precompiled. Smaller, simpler driver.


I think/hope the "endgame" for 3D APIs is that they disappear completely into compilers. Vulkan still has too many compromises and design warts to support different GPU architectures from high- to low-end and is already more complex than GL ever was after 25 years of development (just look at the huge extension list that already exists for Vulkan).

I don't need a "CPU API" to run code on the CPU in my machine, so why do I need to go through an API to run code on the GPU? (Hint: it's mostly about GPU makers protecting their IP.)


The irony is that the Raspberry Pi is basically a GPU chip with some small CPU cores tacked on. So yes, you actually need the GPU API to run code on the CPU.


> since that's easier than starting from scratch

That is not quite accurate, although it isn't far off. More accurately: the design mistakes that make/made X11 terrible were being enforced at the driver level.

Take the fact that for many years X was being run as root on linux. Horrific state of affairs for security. Everyone knows it is bad.

Some bright spark tries to write a new window system that runs as an unprivileged user, and runs smack-bang into the fact that the drivers live in the window system, because the kernel doesn't accept closed-source modules and the graphics vendors only support X.

That eventually got fixed with the Intel/AMD graphics open-sourcing of 2008-2018; at the moment X is becoming a very thin compatibility layer for most people, and a mandatory pain for Nvidia users as far as I know.

There were a lot of issues like that, and still are with Nvidia. The point is that it isn't replacing X that is hard. The issue is that coordinating with Nvidia is hard.


I orbit someone on the nouveau team, so I might be a bit biased with regard to Nvidia

As far as I can tell, Nvidia has two modes of operation with the Linux community, "hostile" and "inept"

Hostile is when they do things like require signed firmware for their GPUs, or try to force their will on things like EGLStreams while everyone else is using GBM.

Inept is their situation with things like the Tegra mobile platforms. They simply use nouveau there instead, I'm told, even though the two GPU lines, of course, share a ton of engineering. For some reason they just decided it's easier to use nouveau on that side, and only that side.


>That eventually got fixed with the Intel/AMD graphics open-sourcing of 2008-2018; at the moment X is becoming a very thin compatibility layer for most people, and a mandatory pain for Nvidia users as far as I know.

It's even better than that now. For me no part of X is running as root. Last I checked, the only reason anyone had X as root (other than driver issues) was if they were running a graphical login manager. Since the login manager has to be up and running before any users are logged in, and likely requires X, it makes sense that it's got to run in a privileged context, and works most easily as root. In my case I just use xinit to start my graphical sessions after I've logged directly into a (getty) terminal.
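
The setup is roughly the usual ~/.bash_profile trick (a minimal sketch; the tty-1 check is just a convention):

  # start X as the logged-in user, but only on the first virtual terminal
  if [ -z "$DISPLAY" ] && [ "$XDG_VTNR" = 1 ]; then
    exec startx
  fi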

I think supposedly GDM can do rootless X, but I haven't tested that.


> Take the fact that for many years X was being run as root on linux.

Every other driver on Linux has a kernel part and a userspace API accessed through /dev devices or syscalls. It was always unclear why X needed to be any different, but I think this was more about implementation than something fundamental to X. Of course "X" is a lot of things to a lot of people, and it depends whether we're talking about the server, the protocol, the client, the extensions, the window manager, the widget library, etc.


> but I bet the original creators would be the ones most shocked that it's still a thing in 2019.

Couldn't we say the same thing about bash or basically most tools in your typical Unix-based OS? Old code is solid code. If they're to be shocked it's for doing something so right that it's persisted all this time.


Except that X11 isn't doing something so right -- it's still terrible, and it always has been, and we knew it was terrible at the time it was standardized.

But it's funny that you would bring up something as terrible as a shell scripting language like bash to compare to how terrible X-Windows is. Have you ever read through a gnu configure file yourself, or do you just close your eyes and type "./configure"? Who in their right mind would ever write a shell script, when there are so many real scripting languages that don't terribly suck universally available, that don't shit themselves when they encounter a file name with a space in it?

These quotes about X-Windows apply to bash as much as they do to X-Windows:

"Using these toolkits is like trying to make a bookshelf out of mashed potatoes." -Jamie Zawinski

"Programming X-Windows is like trying to find the square root of pi using roman numerals." -Unknown

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...


> as terrible as a shell scripting language

That's a strong opinion. I'm not going to argue for lack of time, but suffice to say that 99% of my interactions with my computer, and sometimes with my phone, are with a shell scripting language. Shell scripting is awesome.

> Have you ever read through a gnu configure file yourself,

Yes. Generated scripts make for a boring read.

> or do you just close your eyes and type "./configure"?

I do, from people I chose to trust, whether by running the package build scripts written by the package maintainers of my distribution or from github accounts that I judge as trustworthy.

There is a lot of trust involved in using a computer. I mean, if something nefarious might be in the ./configure script, it's more likely to also be in a precompiled program, since more people touched it.

> Who in their right mind would ever write a shell script,

I do.

> when there are so many real scripting languages that don't terribly suck universally available,

Each language is good for different reasons. Shell languages are meant primarily to be used interactively, as opposed to languages like python or ruby. The fact that you can put your everyday interactions in a file and run that is an added bonus.

> that don't shit themselves when they encounter a file name with a space in it?

I'd rather not have filenames with spaces if it means having a language that allows me to communicate with my machine in a terse manner, allowing for super-easy automation of all sorts of interactions.

I mean, are you really suggesting to mandate quoting of all strings in a shell language? The quotes are optional. That's good! In a shell language, files are basically your variables, so why would you want more syntax around your variables when working interactively?


Oh god there's so many easy reasons why bash is terrible.

* Significant whitespace in surprising ways (a=1 vs a = 1 or spaces following square brackets)

* Word splitting

* No data structures of note, nor any way to create them in any sort of non-hacky way

* No data types, really, for that matter

* Can't really deal with binary data

* Awful error handling

* Weak math

* Weird scoping rules

Honestly as soon as I have to do anything that involves a conditional I abandon bash and use one of the many ubiquitous scripting languages that has great library support for doing all the system stuff you could do from the command line anyway.

Here's a great list of BASH pitfalls: https://mywiki.wooledge.org/BashPitfalls I can't think of any language other than maybe Perl or C++ that comes close to that


They all have reasons.

> * Significant whitespace in surprising ways (a=1 vs a = 1 or spaces following square brackets)

Variables in the shell are nicely coupled with environment variables. As a feature, you can do:

  a=1 b=2 cmd
to concisely assign environment variables for a single command. How would you recommend that be redone? You'd need additional cumbersome syntax if you want whitespace to not be significant, and that sucks for a language meant to be used mostly interactively:

  a = 1, b = 2: cmd
Because shell languages are meant primarily to be used interactively, we want to be very light on syntax. We don't want to have to say `var` or something before our variable definitions. We don't want to have more syntax than we absolutely need for our calls. Nothing like `cmd(a,b)`. cmd can be any string. They're just executable files in some directory. We want to include as many of them as possible, and their arguments can be anything, including `=`. Commands get as much freedom as possible over how they're called to fit as many needs as possible. So, how do you differentiate between calls and variable assignments?

Under those criteria, the current situation of statements being words separated by whitespace and the first words having `=` in them being assignments seems like the ideal solution.
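
A quick sketch of that scoping, for the record:

  a=1 env | grep '^a='   # prints "a=1": the assignment lands in env's environment
  echo "$a"              # prints nothing: a was never set in the shell itself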

> * Word splitting

Makes it easier to build commands without invoking syntax-heavy complex data structures. Here's an example where word splitting is useful:

  sudo strace -f $(printf " -p %s" $(pgrep sshd))
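
If sshd has, say, PIDs 100 and 200 (made-up numbers), word splitting turns the printf output into separate arguments, so that expands to:

  sudo strace -f -p 100 -p 200
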
> * No data structures of note, nor any way to create them in any sort of non-hacky way

Complex data structures lead to more heavyweight syntax, and part of the appeal of shell languages is that everything is compatible with everything else because everything is text. If you add data structures then not everything is text.

> * No data types, really, for that matter

Same point as above. Everything being text leads to increased compatibility. I wouldn't want to have to convert my data to pass it around.

That said, you could say that there are weak-typing semantics, since you can do `$(( $(cmd) + 2 ))`, for example.

> * Can't really deal with binary data

Because everything is text to encourage easily inspectable data exchange and compatibility between programs.

That said, while it's not advisable to do it, binary data is workable if you really need to do that. Pipes don't care. I can pipe music from ssh to ffmpeg to mpv, if I want. One just needs to be careful about doing text-things with it, like trying to pass it as a command argument. $() will remove a terminating newline if present, for example. That makes sense with text, but not with binary data.
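
For instance (hypothetical host and file name, but the plumbing is real):

  ssh user@host 'cat music.flac' | mpv -   # mpv plays the raw stream from stdin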

> * Awful error handling

I don't get this. I think bash has very good error handling. Every command has a status code which is either no error or a specific error. Syntax like `while` and `if` work by looking at this status code. You can use `set -e` and subshells to get exception-like behavior. Warnings and error messages are, by default, excluded from being processed through pipes. What do you find lacking?
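
A small sketch of what I mean:

  if grep -q root /etc/passwd; then    # `if` just tests the command's status code
    echo "found"
  fi
  grep -q nosuchuser /etc/passwd || echo "lookup failed" >&2   # handle failure inline
  set -e   # from here on, any unhandled non-zero status aborts the script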

> * Weak math

Sure. I'll give you that bash doesn't support fractional numbers natively. zsh does support floating point.

> * Weird scoping rules

It's dynamic scoping, and it does have some advantages over the more commonly seen static scoping. You can use regular variables to set up execution environments of sorts. This somewhat relieves the need to pass around complex state between functions. It's kind of a different solution to the same problem that objects in OOP address.
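
For example, a function's `local` variables are visible to everything it calls, which is exactly the "execution environment" trick:

  setup() {
    local greeting="hello"   # local, but dynamically scoped
    emit                     # so callees can see it
  }
  emit() { echo "${greeting-unset}"; }
  setup   # prints "hello"
  emit    # prints "unset": greeting only existed during setup's call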

The only problem is that static scoping became so popular that people now are generally not even aware that dynamic scoping exists or how to use it, so it's now recommended not to use it, to avoid confusing people who don't know that part of the language they're using.

About that, I wish people would just learn more of the languages they use, and not expect every language to work the same, as they're not all meant for the same purposes or designed by the same criteria.


I think bash is fine as a shell and I understand why the syntax is the way it is, I'd just never use it for anything beyond a one liner or a list of commands. All my points were about its weaknesses for scripting, ie, anything that looks and acts like a program. There's basically no reason to use it that way when there are a million better programming languages that are ubiquitous and relatively small.


It still seems better suited when the script is mostly about running executables and passing their inputs and outputs between each other. As an example, I think `pass` was very nicely chosen to be done in bash. I don't think there would've been any benefit in doing it in a non-shell language.

I also think that if a program's job is in good proportion about sourcing shell scripts, to e.g. prepare an environment for them and/or manage their execution, it's also a good idea to write that program in the same shell language. As an example of this, I think Archlinux's `makepkg` was best done in bash.

On why make a program that's about sourcing shell scripts at all, the shell is the one common language all Unix-based OS users know in common, pretty much by definition, so it makes it a good candidate for things like the language of package description files. Besides the fact that software packaging of any language involves the shell, you kind of expect people that call directly on `makepkg` to also want to be able to edit these files, so making them in the language that they're most likely to know is good.


> That's a strong opinion. I'm not going to argue for lack of time, but suffice to say that 99% of my interactions with my computer, and sometimes with my phone, are with a shell scripting language. Shell scripting is awesome.

It really isn't though. (It certainly can be fun, however.) Switching away from shell scripts, and avoiding doing things manually (like ssh), likely increases your success rate at managing *nix by orders of magnitude. All these "configuration management" tools like Puppet, Chef, SaltStack, Ansible etc. are pretty much just to avoid shell scripts and interactive ssh.


> All these "configuration management" tools like Puppet, Chef, SaltStack, Ansible etc. are pretty much just to avoid shell scripts and interactive ssh.

That's only true if your only use of the shell is for configuration, which isn't really intended to be the case. The shell was meant for any use of the computer, not just configuration. For example, I use ssh/scp when I copy some music files from my computer to my phone, or mpv when I want to play some music files. Indeed, I use the shell for nearly everything.


> Each language is good for different reasons. Shell languages are meant primarily to be used interactively, as opposed to languages like python or ruby. The fact that you can put your everyday interactions in a file and run that is an added bonus.

While UNIX was playing with sh, there were already platforms with REPLs, graphical displays and integrated debuggers.

In fact, Jupyter notebooks are an approximation of that experience.

So give me a REPL with function composition and structured data, over pipes and parsing text for the nth time.

Thankfully PowerShell now works across all platforms that matter to me.


> Except that X11 isn't doing something so right -- it's still terrible

Yet your windows render and take input, life goes on, etc. I am pretty happy with it on systems where it runs. Some of the old criticisms like it being a resource hog - might have made sense in the specs of, say, 1993 or earlier, but even restricting the comparison to what most of us have loaded in Javascript at any moment it's pretty lightweight.

> Have you ever read through a gnu configure file yourself ... Who in their right mind would ever write a shell script

I really don't think it's fair to use machine-generated code as an example of why you shouldn't use a particular language. All of those alleged universally available, "real" scripting languages that don't terribly suck would also look pretty bad if you turned them into a target language for GNU autotools.


Shell scripting languages generally make certain common tasks easy that most other scripting languages do not. Namely, process management and especially piping information through those processes.

If that's most of what a program is doing, I would 10000 times rather read a bash script doing it than a Python script that pulls in 800 dependencies just to get halfway there, taking 5 lines for each invocation of a thing that takes 1 line in bash.
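
e.g. the kind of thing I mean, a whole report in one line of plumbing:

  # tally the login shells in use, most common first
  cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn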

That's not to say that bash is perfect, but it is very good at what it does.


Shell scripts also have dependencies: a specific version of the shell and of CLI utilities. A shell script is a good thing when you need to write something quickly, with hacks, but it is not a good foundation for an init system or build system. They easily break if you redefine variables like IFS, add command aliases, or use names with spaces or special characters.
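
A classic sketch of the IFS problem:

  IFS=:                 # something earlier in the script redefined IFS...
  words="a b c"
  for w in $words; do echo "$w"; done   # one iteration: it now splits on ":", not spaces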


> Shell scripts also have dependencies: a specific version of the shell and of CLI utilities.

Just use posix stuff. And move on.
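
For example, option parsing needs nothing beyond POSIX getopts (the flag names here are just for illustration):

  while getopts "vo:" opt; do
    case $opt in
      v) verbose=1 ;;
      o) outfile=$OPTARG ;;
      *) echo "usage: $0 [-v] [-o file]" >&2; exit 2 ;;
    esac
  done
  shift $((OPTIND - 1))   # "$@" is now just the remaining operands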


I'm curious.

How do you, for example, parse command-line arguments in your shell scripts?

Do you regard sed, grep and awk as dependencies when bash scripting?

With regards to Python: the subprocess, os, sys, etc. modules are standard library modules. There is no dependency overhead in using them. Most smaller Python scripts manage very nicely with the standard library.


For the kind of thing you've just described, a Python script would need just one dependency to do it all in roughly the same number of lines:

https://amoffat.github.io/sh/


> Who in their right mind would ever write a shell script

I quite enjoy writing fish scripts. Largely because I can actually remember the syntax for conditionals and loops.


> Except that X11 isn't doing something so right -- it's still terrible, and it always has been, and we knew it was terrible at the time it was standardized.

And, yet, in ALL this time, not a single substitute arose?

We have multiple web browsers, GUI toolkits, IDEs, etc., all with far larger codebases, and yet X11 never got replaced properly. And some really brilliant minds worked on it.

So the question you need to ask is "If X is so bad, WHY hasn't it been replaced?"


OTOH, Linux desktop marketshare hovers around 1-2%. So maybe the replacement for X11 has effectively been buying Windows or a Mac.


There is zero reason to suppose that the reason people don't use linux is X11.


Sure there is, starting with the difficulty in setting up, getting it to output the correct resolution with your monitor, correct refresh rate, etc..


It has worked automatically for me for the last decade using different distros and the *BSDs. I do remember all that trouble though, when I started experimenting with Linux/BSD in the early 2000s.


This hasn't been an issue in almost 20 years.


Just google "can't get linux to display correct resolution"

You'll get plenty of results like this that are more recent than 1999.

https://ubuntuforums.org/showthread.php?t=2012264


There are two things that cause this: not having a driver installed, so it falls back to something which doesn't support the correct resolution, or, in fact, bad cables.

Most user installs will not encounter either problem. New AMD hardware has great support out of the box without installing anything, and many distros support installing closed-source Nvidia drivers, or will work well enough for non-gaming applications with the open source drivers.

Please note that these challenges aren't a result of X; they are specifically the result of particular manufacturers' drivers, some of which have been more challenging in the past. For improvements in the future, look to the manufacturers and support the ones that provide the optimal experience.

Please note that issues where users can't set the correct resolution for their hardware ALSO occur on Windows 10.

Link from Winter 2018 https://troubleshooter.xyz/wiki/fix-cant-change-screen-resol...


Windows is never going to be ready for the desktop at this rate.


How many distros will install e.g. the proprietary (i.e. covering most recent hardware) NVidia drivers out of the box? If I remember correctly, Ubuntu only just now started doing this.


You don't need the proprietary nvidia drivers to drive a display at native resolution.


You do need them if you want to use recent hardware, or get decent performance, or have working power management. From casual user's perspective, if this kind of stuff is not working properly, it's not really "set up".


Yet my open source AMD driver doesn't do everything that the deprecated fglrx was capable of, and even less than the DirectX 11 version of it.


Examples?


Brazos APU.


Good point. Historically AMD/ATI didn't provide official support for hardware for anywhere near as long as Nvidia, and the open source GPU driver didn't provide near the same performance. For example, devices could be available as new retail units one year and be unsupported less than 3 years later.

To contrast that, Nvidia generally provided support for a decade. For example, the latest release only days ago supports hardware as old as 2012; legacy drivers support hardware as old as 2003.

This is why I have bought Nvidia hardware despite other issues; however, it looks like AMD open source support will be better going forward. This doesn't help anyone with old hardware.


Two good substitutes arose! The Microsoft Windows operating system's window manager, and the Apple OSX Window Manager.


That's like asking "If Trump is so bad, WHY hasn't he been impeached?" Just because something's hard to remove, doesn't mean it's not doing something bad.


I am very thankful that the plan to convert our entire engineering centre to some awful X Windows toolset fell through at BT.

Though I still use the training I got in the accelerated one-week crash course in Unix we all did.


TBH, even humans shit themselves when they encounter monikers with spaces in them.

Can you spot the moniker with spaces in it in this sentence easily without having to go back and re-read? Without context, did I mean "moniker" or "moniker with spaces in it"?

Spaces are a terrible idea in monikers.


On one hand, there are some things that are simply too complex to attempt to implement with a bash script.

On the other hand, there are many things that could, or even ideally should, be kept simple, such that a bash script is the best-practice, most simple, proven, reliable solution.


The reason X11 got pushed through so quickly 30 years ago was that some vendors were threatening to standardize on X10. Maybe they should have just gone with X10, because it would have been easier to eventually replace.


It's a real shame that MGR never got any traction (I think license problems and it came along too late). It was really "X done right". https://hack.org/mc/mgr/ https://en.wikipedia.org/wiki/ManaGeR


This is exactly what I think when I hear "worse is better". Some software just needs to be right the first time.


X is good enough for the majority of users, even if the architecture is not perfect.


The fact that X11, as the "standard" windowing environment for Unix operating systems, was ditched by literally all major Unix operating systems created in the last 20 years (Mac OS X, Android, iOS, etc.) tells us that X11 is not good enough.

For example here is an Apple developer explaining why X11 wasn't chosen for Mac OS X: https://apple.stackexchange.com/questions/168980/if-os-x-doe...

(By "Unix operating system" I mean an OS based on a Unix/Linux kernel.)


What I said is that it works for the majority of desktop users; the mentioned issues of tearing or security are non-issues for this group of users. I know Wayland is supposed to be better because it is modern and has a better design, but personally I won't swap something that works for something that still doesn't support all MY use cases (Wine) just because of theoretical benefits (for me; for others there may be real benefits, like mixed-DPI setups).

Conclusion: I am not defending the X11 architecture, but for the majority the features it has are good enough despite the architecture and limitations.


Not to defend X, because it's awful, but Apple never made that choice. They inherited that choice from NeXT. And at the time that the engineers at NeXT made that choice (mid-late 80s) standardization of X11 was by no means assured.


Screen tearing, bad performance, bad touchscreen support (not independent of mouse cursor), and especially important these days: no real HiDPI support (practically impossible to support different scales for different monitors). It's not good enough.


These claims are blown out of proportion and/or simply untrue.

Screen tearing: both Intel and AMD have hardware-backed "TearFree" buffering to prevent tearing. Bad performance: citation required - in my experience, Xorg is way faster. Touchscreen support - in most cases just works out of the box thanks to libinput. No HiDPI support for different scales per monitor - simply wrong - this is trivial with xrandr.
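
e.g. something along these lines (the output names are machine-specific):

  # render the external 1x monitor at 2x next to a HiDPI laptop panel
  xrandr --output eDP-1 --auto \
         --output HDMI-1 --auto --scale 2x2 --right-of eDP-1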


I still find tearing to be problematic

I have a machine with a supported Radeon card (open-source driver), another machine with a supported nvidia card (binary blob driver), and another two machines using different intel onboard graphics chips (open-source driver)

the radeon and the intel (both of the drivers that work with it) have issues putting out a stable, jitterless 60fps without tearing (with TearFree on, and various combinations of with/without compositing/glx/...)


How did you test the fps?

I'm using the intel driver as well, and it's definitely not perfect. But it's pretty close, at least for me - I get hardware-accelerated video decoding with VAAPI, no tearing, and excellent input latency (~3ms).


I've got a couple of videos I use that make the tearing/jitter obvious, e.g. https://www.youtube.com/watch?v=0RvIbVmCOxg (you can get a .mp4 too which eliminates the browser as a possible cause)

sadly I can see the jitter/tearing, vs. on Windows where it's perfect

(with 4k or higher frame rates it's far more obvious)


That's your video driver's fault. Not X's.


If you click and drag a window around in xorg, does the window stick perfectly to the cursor or is there still noticeable lag?


I use a tiling wm (i3), so I can't exactly drag windows around, but I can drag-and-resize them. In my case, there is no lag. However, there is a tiny amount of stuttering since I don't use a compositor, but I can live with that.


> I use a tiling wm (i3), so I can't exactly drag windows around, but I can drag-and-resize them

Sure you can. Just pop the window into floating mode; I think the default is meta-space.


There is no tearing with proper drivers and window managers. This was a problem in the past, but now is largely overstated.


I never even knew what the problem was supposed to be. A window isn't rendered absolutely perfectly when I drag it around? How is that even an issue that anyone cares about?! Do you spend all day moving windows around? Compared to major flaws in Wayland like not being able to use a program at all on a remote desktop, it's trivialities.


Well at least it's the best kind of lock-in.


Wayland was designed to solve a very specific set of use cases and nothing more. The number of times I have read "wayland doesn't do that" or "that's not wayland's job" or "wayland is just the compositor" is telling. Of course a stack that is compositor + windowing toolkits is going to miss countless mission-critical use cases in countless workflows, not to mention that it is an even further regression from composability in the window manager space. And no, dbus is not the answer either: you can't make literally every program depend on dbus just to be able to do simple things like take a screenshot or automatically rearrange windows. Every program having to use a toolkit or implement a message passing system themselves will never happen, because few developers have the expertise needed to do so. This is in part why things like X11 were created in the first place, so that there could be separation of concerns. Wayland stomps all over that and thus doesn't even offer something that could become a de facto standard, so it seems that X11 will continue to live on.


Right, X11 made it possible to have literally hundreds of window managers written to fit all possible tastes.

With Wayland, the ecosystem of window managers will never be as rich, because a window manager has to implement too many things to be usable.


> Right, X11 made it possible to have literally hundreds of window managers written to fit all possible tastes.

I've used it since its inception; however, it has always kind of sucked.

I think it goes to it's basic philosophy: it did not enforce policy.

This let it survive a long time. It was whatever people wanted it to be. But because of this flexibility, it never became great.

It's like the old Lily Tomlin skit - "I always wanted to be somebody, but now I realize I should have been more specific."


What has become great? E.g. has NeXT / macOS graphics system become great? (Yes, I dislike macOS for its lack of customizability and being opinionated. It may be great for someone, but not for me.)


For toolkit consistency I think macOS is one of the best. consistency = 1/customizability, and I agree with you there; it's especially annoying for folks who know their way around a computer. For 3D acceleration, Windows seems to have the most advanced and high-performance graphics in wide use.


consistency * customizability = 1


You can always take a compositor and fork it to implement your window management features…

or, much better, use a compositor with a plugin system, so window managers can become plugins! :)

https://github.com/WayfireWM/wayfire


What are your use-cases not addressed by Wayland?

The sway compositor has been standardizing protocols for screenshots, screen recording/streaming, composable desktop components and so on.


There is no way to tell the current keyboard layout as of now. There is somewhat of a debate on that topic, because the Gnome people want to have a dbus API instead of a Wayland protocol extension for this functionality. The Sway people are pushing for a protocol extension, and for now we have to use a sway-specific API so that a bar on the screen can show the current keyboard layout.


Just wait until everyone else creates their own standard protocols.


Nah, those protocols are also implemented in wlroots, which is the foundation of virtually every single Wayland compositor; otherwise you'd have to write 50k+ lines of code yourself.


Only if you ignore the most widely used Wayland compositors on the desktop, Mutter and Kwin. I doubt all compositors built upon wlroots come even close to the user bases of those two.


I use xdotool to simulate mouse/keyboard input, and keynav to move my cursor with the keyboard.
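
For anyone unfamiliar, a taste of what that looks like (the window title is just an example):

  xdotool search --name "Firefox" windowactivate   # focus a window by title
  xdotool key ctrl+t                               # send a keystroke to the focused window
  xdotool mousemove 640 400 click 1                # warp the pointer and left-click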


I totally feel this guy's pain. When I started at Sun there was a 'direct to framebuffer' window system which we called "Suntools". It was fast and pretty much all in the kernel and pretty awesome, except this new thing "X11" allowed you to have a big beefy machine (sort of a Raspi 3 equivalent today :-)) with its fans and noise in some machine room while the display and input was on your desk. There were even people who made things they called "X terminals", which were instant-on devices that provided the display, mouse, etc. Sun switched over to X11 and the UNIX windows wars began.

When the SparcStation I came out, I backported Suntools to its frame buffer on a lark and it was screaming fast. That was pretty fun.

Oddly enough, I still use X11 a lot. I have probably half a dozen ARM systems around my office/lab doing various things, and it is simpler to run an X11 server locally and just kick off an xterm to these machines than it is to try to do some sort of KVM nonsense. It is also more capable than running a web server on the Pi and trying to interact with it via web "application" pages.
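
i.e. nothing fancier than (hostname hypothetical):

  ssh -X pi@raspberrypi xterm   # remote xterm, displayed on the local X server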

Oddly enough, I recently (like about 6 months ago) became aware of "KVM over IP" which, believe it or not, you hook this piece of hardware to the display port (or HDMI) and the USB port, and it sends the video (and HDMI audio), out over the network to a purpose built application on a remote system. Wow, sounds just like X11 but without the willing participation of the OS vendor :-).

The point I'm trying to make is there absolutely is a need for both. A direct-to-the-metal way of cutting out layers and layers of abstraction so that you can just render to the screen in a timely way, with support for UI features (otherwise we'd just program to Unity or Vulkan and be done). But there also needs to be a standardized way of letting "well behaved" programs export their graphics and I/O across the network to a place more convenient for the user.

Arguing passionately for one use case at the expense of the other doesn't move the ball down the road. Instead it just pits one chunk of the user base against the other half.


Funny you'd mention that: I was just comparing a Raspberry Pi 3 with a 1990 SS2, which totally skunks it 86:1!

https://news.ycombinator.com/item?id=19717416

>How many more times faster is a typical smartphone today (Raspberry Pi 3: 2,451 MIPS, ARM Cortex A73: 71,120 MIPS) than a 1990 SparcStation 2 pizzabox (28.5 MIPS, $15,000-$27,000)?

Remember "2^(year-1984)"?

https://medium.com/@donhopkins/bill-joys-law-2-year-1984-mil...

On Nov 8, 2018, I sent Bill Joy a birthday greeting: "Happy 17,179,869,184 MIPS Birthday, Bill"! (2 to the (year - 1984))
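
(Easy to check from the shell:)

  echo $(( 2 ** (2018 - 1984) ))   # 17179869184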


Except the two chunks of the user base are not 50:50.

I'd say remote X use has dwindled to a trickle. It's more common to use VNC. Or even a browser (ever checked out novnc?)

I stopped using X completely when I discovered tramp for emacs.


Woah - a huge flash to the past when I read this.

I ported Wayland to VideoCore4 (the multimedia engine in the first Pi chip) back in 2011 - it was part of the Meltemi Nokia project that got cancelled the following year - a shame, as it was pretty cool and had half a chance IMO. We worked with the team in Oslo on Qt acceleration over GL ES and used EGL below Wayland (coupled with some magic APIs to get the VideoCore HVS to work well). Ported this to an ARM combo chip that had just the VideoCore GPU in it as well (no HVS) - it worked pretty well.

Prior to this however, I made a VideoCore demo that used a PCI bridge chip (from Broadcom); you could plug it into a Dell laptop running Ubuntu and get accelerated video decode and also X11 window scaling working at 60fps. We nearly sold this into Apple for accelerating their MacBooks but IIRC, getting the MacBook into low power mode whilst the video was playing on the external chip was going to be so much work that they gave up.

And even further back, I remember validating the HVS hardware block, writing the initial scaler driver for it (IIRC, scaler.c...) and making a dispman2 port for the driver. Circa 2006!

Great team - one of the most enjoyable set of people to work with I've ever come across.


If you connected with Eric Anholt and worked on this, you would help millions of Pi students and users. It is an opportunity.


I just checked out Eric's work - he's doing an amazing job! Much more thought put into the subject (security for GLSL being a huge topic on its own). The lack of an MMU in the VC4 architecture is a huge pain point that probably sucks up most engineering cycles when it comes to arbitrary application environments using GPU resources, like X11 or Wayland - when memory runs out, what to do? You can throw engineering resources at it, but engineering talent really needs access to VMCS on the VideoCore side to do any worthwhile work.

When we ported Android to the VC4 architecture the first time (~2010), the low memory killer in Linux was subverted to understand how to kill Android applications based on their use of VideoCore GPU memory, and it worked pretty well, yet it would still close the primary running app once in a while. Run monkey over Android and all hell broke loose - really tough situations to defensively code for. For example, for CX reasons, you had to ignore some GLES errors in the primary client due to low memory; then the system had to kill another application that was using the memory; then it would kill the EGL context for the primary application so it would refresh the entire app lifecycle using an almost suspect code path inside Android. Good times! I imagine Wayland has very similar challenges for normal desktop use.


The open source driver doesn't use VMCS; instead it puts aside a fixed block of memory (typically 256 MiB, which I think is also a limit due to some hardware bugs) for use through the Linux Contiguous Memory Allocator (CMA) that it then draws from.

VMCS only comes into the picture if you use video decode, but I think Dave Stevenson from the foundation hacked the firmware side to support importing Linux allocated memory blocks into VMCS so that you can do zero-copy decode and import into EGL (or more likely HVS, the EGL support for the formats is pretty limited).

(I really liked the design of the HVS - having pretty much scripted planes is a fresh approach over similar hardware blocks that have a fixed number of planes, each with its own idiosyncrasies and limitations)


The HVS was a cool design - the only real issue, if I recall, was its limited line buffers, meaning it was hard to determine what could be composited in real time, so we ended up always rendering to a triple buffer. It also did some amazing scaling in realtime, but this came with some crazy artifacts that were super painful to triage. The occasional "flash of bad pixels" in a live composition screen was really painful. I just remember I wrote the DPI and CPI peripherals that we brought up the HVS with - on a 4MHz FPGA complex, we had YUV camera input scaling up to a WVGA DPI screen running at 60fps on a distributed FPGA platform through the HVS. Fun times.


I think the author is confusing something here

> The reference implementation of the protocol (Weston and it's associated libraries) is written in C. That means you could wrap the C code with Rust, which several people have done already [1] However, I get the impression that the results are not very 'rustic', meaning it's like you are coding C from Rust, instead of writing real Rust code.

> To address the problems of dealing with the existing native Wayland implementations, a couple of the Rust Wayland developers have joined together to build a new Wayland implementation in pure Rust called wlroots [2]

[1] https://github.com/Smithay/wayland-rs [2] https://github.com/swaywm/wlroots

wlroots is written in C, whereas wayland-rs - a Rust implementation of the Wayland protocol (client and server) - is written in Rust.

I'm not familiar with either project, but this just stood out immediately when looking at the Github pages.


There's also wlroots-rs, which provides safe Rust bindings for wlroots: https://github.com/swaywm/wlroots-rs

But yeah, you would still need the C toolchain with this.


Perhaps the author meant Smithay, which, while it still depends on some C, is almost entirely Rust:

https://github.com/Smithay/smithay


I've been using swaywm on my laptop (XPS 13) with XWayland disabled for a few months now and have had very few problems with it. As far as I can tell Firefox works perfectly, except for some very strange behavior if I try to enable hardware acceleration.

I love how easy it is to configure via config files, external monitors work great, and scaling on HiDPI displays has been totally painless. My days of fiddling with xorg config and xrandr are behind me.

I know the creator of sway hangs out here, so if you're reading this, thank you!


Without hardware acceleration doesn't watching video in the browser pretty much destroy your battery life and use most of your cpu time?


So I'm no expert but I believe there are actually two different notions of hardware acceleration at play here:

1) HW-accelerated rendering in Firefox which, despite unfortunately being disabled by default on Linux, can in my experience be enabled (in about:config) without issues. This makes the experience of scrolling much smoother, so I usually do that. I don't think this has any observable effect on battery life for me. However, enabling this in Sway results in some very odd behavior that breaks certain things, so I've had to disable it. I can go into more detail on that if you like.

2) Hardware-accelerated video decoding. In contrast to the first one, this makes a HUGE difference in CPU usage and battery life. However this unfortunately cannot be used in any browser AT ALL on Linux, regardless of setup or configuration. The way I watch youtube videos on my laptop is usually with mpv, which does use hw decoding if it is configured to do so.
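
i.e. something like this (VAAPI, since this is an Intel laptop; the file name is arbitrary):

  mpv --hwdec=vaapi video.mkv   # decode on the GPU instead of the CPU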


I've used chromium-vaapi on Linux with some success. I don't think those patches have been accepted in upstream chromium though.


I may give that a shot, although I think chromium still requires XWayland, right?


> enabling this in Sway results in some very odd behavior that breaks certain things

I've been using Firefox Nightly in Wayfire (also wlroots based) with GL (and even WebRender) for quite a long time now, it works very well, about the only issue left is popover placement is odd occasionally. What issues do you have?


With hw rendering enabled, opening a single Firefox window and browsing works fine. However, when I try to open a new window the browser and desktop immediately become incredibly laggy and eventually unresponsive. I think I was able to successfully open a different tty and kill Firefox, which then fixed the issue, but it repro'd every time. Open one Firefox window, everything's fine. Open two, things slow to a crawl.

I'm surprised you were able to get WebRender working as well, I think I recall Firefox instantly crashing when I tried that. This was a few weeks ago.


If I disable HW acceleration on the desktop it's horrible perf-wise: VSCode and all the Electron apps are slow, there is aliasing all over the place when moving windows, slowness, etc.

( Ubuntu 18.04 / Unity on a Dell XPS 13 from 2016 )


Note that on X11 hardware acceleration isn't enabled by default in Firefox.


That is encouraging. What distro are your swaywm and Firefox running on?


I'm running Arch, which I find to be the most frictionless when you're trying to do sort of "bleeding edge" stuff like this.


It is time for me to give desktop Linux another try (which will be my first serious try in 10 years).


Yes definitely! Things have come a long way in 10 years, in my opinion. If you are looking for something more "set up and forget about it" though I would definitely recommend Ubuntu over Arch. I use Arch because I (usually) enjoy tinkering and customizing and setting things up exactly how I want.

Edit for a little more detail: I used Ubuntu for a long time on my desktop and actually did quite a bit of gaming on it. I mostly stuck to games that have native Linux support, but from what I hear the compatibility layer that Valve has released is actually really good, if gaming is your thing.

All this is to say, things that were once supposed to be impossible on Linux are now very much possible.


If I understand one of the comments here correctly, I would still need XWayland for vscode, which relies on Electron and consequently Chromium; is that not true?


I ran Arch for a few years before leaving Linux 10 years ago.


You'll feel right at home then.


Don't... you will waste a big chunk of time playing Windows games on Linux...


This article seems to be mixing up languages, which others have pointed out but not in full.

wayland-rs does not wrap libwayland, but instead offers a pure Rust implementation of the Wayland protocol.

Sway and wlroots are both written in C, but wlroots-rs is a project which wraps the wlroots C library in a Rust wrapper.

There are currently no mature Wayland compositors written in Rust.

Also, this article mainly focuses on the problems with Wayland on the Raspberry Pi, which stem mainly from the old proprietary drivers and the slow pace at which Raspbian gets up-to-date software. For most users, the experience is much better.


The irony of the Raspberry Pi in particular is that while the ARM side has seen multiple order of magnitude jumps in performance, the GPU and compositor hardware have of course remained at their original, atrocious 2009 level.

The result is that the CPU drawing on a screen-sized framebuffer is fast enough that Raspbian will probably never, ever make the jump to full bells-and-whistles mainline VC4 drawing and atomic DRM composition. It would just end up using a lot more memory and break a bunch of hacks in the proprietary drivers that people have come to rely on for essentially still garbage tier performance and a system that randomly freezes because you ran out of graphics memory (which is forever limited to 256 MiB, memory that you surrender on boot to the horrendous CMA system).

I like the Pi a lot as a PoC platform given the availability and the fully open-source supported driver we have for it. But the HW specifications of all the parts that Wayland cares about make it very clear that the thing was only ever meant to do 1080P when you are piping frames from the video decode engine straight to the compositor and out the pixel valves.


> wayland-rs does not wrap libwayland, but instead offers a pure Rust implementation of the Wayland protocol

Originally it only wrapped libwayland. Now it offers both libwayland and a pure Rust implementation, togglable with the Cargo features client_native and server_native.
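So if you're building against wayland-rs yourself, it's the usual Cargo feature toggle, e.g. (a sketch; the pure Rust implementation is the default):

    cargo build --features client_native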


Wayland, in practice, couples the "compositor" too tightly to the driver stack, the desktop/WM to the "compositor", and THEN the desktop/compositor/WM to applications (e.g. Firefox menus working or not depending on the desktop env in use).

The end result is that it explodes your test matrix. I won't use it while that stays the case, because I don't have time to fight with basic graphics stuff being broken all the time, and it is ridiculous to expect things not to be broken if the test matrix stays like that.

Plus it is starting to get old yet is still missing half of the features, be they existing X features (ex: ssh -X), basic desktop GUI needs (ex: whatever is needed to implement Wine), or modern must-haves (ex: colorimetry). (At least that was the situation a few months ago; hopefully there has been some progress since.)


>X is the graphical interface for essentially all Unix derived desktops (all other competitors [link to Display PostScript wikipedia page] died decades ago).

Since when was an X-Windows extension a "competitor" to X? Display PostScript was simply a proprietary X-Windows extension that fell out of fashion, not a competitor to X-Windows.

NeWS was a competitor to X-Windows that died decades ago, but Display PostScript was never a "competitor" to X-Windows, just like the X Rendering Extension and PEX were never competitors, just extensions.

But at least the article gets credit for calling it X-Windows instead of X11, to annoy X fanatics. ;)

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...


I'm not sure what protocol they used for remoting NeXTSTEP's native WindowServer, but I don't think it was X11. Some people did tunnel Display PostScript over X11, but that wasn't the only option and it was pretty kludgy.


That's right. DPS wasn't used for handling input events or implementing the user interface widgets; those were instead implemented in Objective C running in another client process (usually on the same machine), rather than in PostScript running in the window server. Instead of handling events with PostScript in the DPS window server, it would ping-pong events over the network from the window server to the Objective C client, which would then send PostScript drawing commands back to the window server. (Typically generic low level drawing commands, not a high level application specific protocol.)

So NeXTSTEP suffered from the same problems as X, with an inefficient low level non-extensible network protocol, slicing the client and server apart at the wrong level, because it didn't leverage its Turing-complete extensibility (now called "AJAX"), and just squandered PostScript on drawing (now called "canvas").

So for example, you couldn't use DPS to implement a visual PostScript debugger the way you could with NeWS, in the same way the Chrome JavaScript debugger is implemented in the same language it's debugging (which makes it easier and higher fidelity).

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

>The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989

>Abstract

>The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.


Didn't NeXTSTEP use DPS without X?


NeXTSTEP for the NeXT workstation, started in 1987 and released in 1989, used a DPS server with its own protocol without X (the DPS code was licensed from Adobe, the same code that ran in laser printers like Apple's LaserWriter).

https://en.wikipedia.org/wiki/Display_PostScript

In 1985, two years before DPS was started and four years before NeXTSTEP was released, James Gosling and David Rosenthal at Sun developed their own PostScript interpreter for NeWS, originally called SunDew, which was entirely distinct from DPS, quite different in design, and wasn't licensed from Adobe.

http://www.chilton-computing.org.uk/inf/literature/books/wm/...

https://en.wikipedia.org/wiki/NeWS

Then in 1993, NeXT and Sun developed OpenStep for X-Windows/DPS.

https://en.wikipedia.org/wiki/OpenStep

One of the biggest architectural differences between NeXTSTEP/DPS and NeWS is that NeXTSTEP didn't take advantage of the technique we now call "AJAX," by implementing the user interface toolkit itself in PostScript running in the window server, to increase interactive response and reduce network messages and round trips.

https://news.ycombinator.com/item?id=13783967

NeXTSTEP wasn't trying to solve the remote desktop problem: the toolkit implemented in Objective C code just happened to be using local networking to talk to the DPS server, but in no way was optimized for slow network connections like NeWS was (and AJAX is).

You couldn't run NeXTSTEP applications over a 9600 baud Telebit TrailBlazer modem, but NeWS was great for that. I worked on the UniPress Emacs display driver for NeWS, which was quite usable over a modem, because stuff like text selection feedback, switching and dragging multiple tabbed windows around, and popping up and navigating pie menus was all implemented in PostScript running locally in the window server, without any network traffic!

https://www.donhopkins.com/home/code/emacs.ps.txt

https://www.youtube.com/watch?v=hhmU2B79EDU

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

used PostScript code instead of JavaScript for programming.

used PostScript graphics instead of DHTML and CSS for rendering.

used PostScript data instead of XML and JSON for data representation.


I guess the author is referring to NeXTSTEP and NeWS, which both used DPS.


NeWS definitely didn't use DPS. They were completely different implementations of PostScript, with very different design goals and features.


Can I just say that I think it's great that we've now ceased to have a monoculture in display servers for Unix-based systems? It was the last one, I believe.

I mean, we've had choices for text-editors, for shells, for programming languages, for GUI toolkits, for desktop environments, for window managers, for remote-desktop servers and viewers, for ssl/tls implementations, for web browsers, and even for kernels.

The only thing everyone using Unix-based systems all had in common was that they had to use X for graphics. I think we had choices of implementation for X (XFree86?) before we ended up just using Xorg, but now we even have the choice to not use the X protocol.

Having choices is good. Having diversity is good. I hope we stop trying to see this one as everyone converging to using only one. We can have both, like we do now. I think that's the best.


Why not just use a full screen direct-to-hardware web browser as the window manager, and talk to clients, servers and other screens via HTTP, WebSockets, two-way streaming video and screen sharing via RTP, etc?

The reason X-Windows sucks is that it's not extensible, and Wayland is only incrementally and quantitatively better than X11 (like X12 or Y), not radically and qualitatively better (like NeWS or AJAX).

Wayland has the exact same fundamental problem that X-Windows suffers from. It's not extensible, so it misses the mark, just like X-Windows did, for the same reason. So there's not a good enough reason to switch, because Wayland failed to apply the lessons of Emacs and NeWS and AJAX. It should have been designed from the ground up around an extension language.

Web browsers are extensible in JavaScript and WebAssembly. And they have some nice 2D and 3D graphics libraries too. Problem solved. No need for un-extensible Wayland.


> The reason X-Windows sucks is that it's not extensible

Years of configuring modules in XF86Config and matching things up between the server and the client (which thankfully mostly went away with Xorg) would lead me to the opposite conclusion, and yet also suggests it didn't really matter as much.


Exactly, I mean, there was one thing that all the various slightly incompatible Linux distros agreed on. Finally we have fixed that.


I think it is important to point out that Wayland is a specification[0] and not some blob of software.

[0] https://wayland.freedesktop.org/docs/html/apa.html


So is X11? There used to be a couple of commercial X servers available for Microsoft Windows, for example.


Even before Microsoft Windows: Quarterdeck DESQview/X, for example. And of course X-Windows ran on Lisp Machines.

https://en.wikipedia.org/wiki/DESQview

>DESQview/X

>Quarterdeck eventually also released a product named DESQview/X (DVX), which was an X Window System server running under DOS and DESQview and thus provided a GUI to which X software (mostly Unix) could be ported.

>DESQview/X had three window managers that it launched with, X/Motif, OPEN LOOK, and twm. The default package contained only twm, the others were costly optional extras, as was the ability to interact on TCP/IP networks. Mosaic was ported to DVX.

>DVX itself could serve DOS programs and the 16-bit Windows environment across the network as X programs, which made it useful for those who wished to run DOS and Windows programs from their Unix workstations. The same functionality was once available with NCD Wincenter.


So is everything else in the world of software. You begin with a specification and then you start writing code.


Well, sometimes. Paul Graham and others from the lisp-ness crowd talk about coding without a clear idea of where to go. I'm not in a place where I 'speak' any language clearly enough to write without a plan.

I suppose for most enterprise stuff what you say is true (how else will we commit to coding targets we won't hit for prices we can't afford on timelines that make dog races look slow? - I suppose I'm a little early in life to be jaded, but I am familiar with bureaucratic nightmares that value image over substance).


Ha! In the OSS world that's almost never true. The specification for FUSE is the libfuse source, for instance.


Do note that the article is mainly about Wayland on the raspberry pi. Of course Wayland isn't the default most places yet (Fedora being the big exception, I think), though you can install it on most distros.

I've been running Wayland/Sway on NixOS for a while, and I really like it.

One question though - the article seems to say that wlroots is a Rust project, but it seems to very much be a pure C project? (https://github.com/swaywm/wlroots)


> the article seems to say that wlroots is a Rust project, but it seems to very much be a pure C project?

Yeah. It uses meson/ninja to build. No Rust.


What do you like about it?

For general use it seems as if it'll be something that makes no difference to my day-to-day usage of my computer other than a warm fuzzy feeling that the underlying protocol is "right".


That's true - the two things I really like are no screen tearing ever (be that scrolling Firefox or YouTube videos) and that warm fuzzy feeling.

Edit: I also like Sway quite a bit (over i3/X) - its configuration (outputs, input devices, etc) makes a lot more sense and is a lot easier for me than trying to change stuff in different places and in different ways with X.


After reading http://techtrickery.com/keyloggers.html I am not sure if I ever want to have Xorg installed anymore. Wayland is great but it seems like having a fully featured *nix desktop without XWayland installed is still hard to achieve.


Wayland is insufficient in and of itself to prevent keylogging.

https://github.com/Aishou/wayland-keylogger
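If I remember right, that PoC is an LD_PRELOAD shim over libwayland-client, so it only sees keystrokes for clients you launch under it, along the lines of (library path hypothetical):

    LD_PRELOAD=/path/to/libwayland_keylogger.so gedit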

At present Linux desktops aren't very secure against user installed malicious software. It is however fortunate that most software is installed from curated repos.

It's not clear that just switching to wayland is worth much at this point in time.


That link is a blatant lie. Redirect each wayland client's stderr to a different term and you will see that the process (keyboard/mouse input + graphics output) isolation is still working as intended.


It's not a blatant lie. There is probably no default install in which I would not feel it absolutely necessary to change every credential stored on the system, and reinstall from scratch, if it ever ran a compromised binary.

At best you are hoping that the malicious binary someone tricked you into running didn't also take advantage of an additional vulnerability to compromise everything. Keeping in mind that your adversary has every opportunity to test against the same environment you are running.

The only Linux environment that I'm aware of that takes isolation really seriously is Qubes, and even that isolation could be violated in theory.

I want desktop applications to have features that right now require substantial permissions to effect. The primary defense is and will likely remain not to install malicious software in the first place by installing from curated sources.


It's a blatant lie because the author of that code snippet is trying to trick the reader into thinking that Wayland's isolation somehow has been broken, but that's not true at all.

In the real world, any secure desktop solution is going to require a reliable execution environment ("security is only as good as your weakest link"). If you don't trust the user to properly handle that, then you must ensure they don't do anything stupid or dangerous to themselves by restricting what they can do. For desktop applications this usually means executing them in a sandbox (such as Flatpak). Qubes OS tries to do something similar, but stumbles over the inherently insecure design of the X server, and has to work around it by running separate X server instances for each unreliable X client.


It's definitely possible depending on your use case - I can run Firefox and a terminal without XWayland running, which is all I need. (There was a weird bug where Firefox under Wayland would open an X window first and then discard it, not sure if that's been fixed yet)


Last time (which was when Fedora first shipped it by default) I looked at Wayland, it took me less than an hour to run into several showstopper issues and move right back. The ones I remember include being incompatible with the proprietary Nvidia drivers (and no plan to ever fix that), huge input lag (apparently stemming from the, um, "interesting" idea to make the compositor responsible for all input) and lots of random stability issues. Have these things been fixed? Apart from a general dislike of X11, what are the reasons one should consider a switch nowadays?


> incompatible with the proprietary Nvidia drivers (and no plan to ever fix that)

I've heard that Nvidia is fixing this in KDE Plasma and maybe Gnome too. (Fuck proprietary drivers though, and fuck Nvidia.)

> huge input lag

that's odd. Gnome's compositor is not the fastest, but it generally works okay for many many people.

> the reasons one should consider a switch nowadays

- No screen tearing ever, every frame is perfect

- Real HiDPI support, different scales on different monitors, many compositors support "Apple-style" fractional scaling (render at ceil(scale) and downscale on GPU; worked example after this list)

- Proper touchscreen support, without dragging the mouse pointer along

- Touchpad gesture support (this miiiight have been bolted onto X with XInput2 as well)
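To make the fractional scaling arithmetic concrete (numbers are my own example, not from any particular compositor):

    scale          = 1.5
    render scale   = ceil(1.5) = 2x
    GPU downscale  = 1.5 / 2 = 0.75

So the client renders one crisp integer-scaled buffer and the compositor shrinks it slightly on the GPU, instead of the client having to deal with fractional sizes.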


> Fuck proprietary drivers though, and fuck Nvidia.

I'd normally agree, but it's far easier for me to pick and choose software than hardware, and Nvidia makes the best GPUs by a huge margin (particularly if you care about power efficiency). Not to mention the issue of CUDA, which HPC and ML applications rely on pretty much exclusively and for which there's no support in open source drivers AFAIK.

As for the reasons to switch, I acknowledge your list as objective advantages. Unfortunately, I happen to be among the people who don't care for high DPI (1440p is more than enough for me), touchscreens (leave those to phones) or touchpads (TrackPoint forever), so I guess I'll be sticking to X11 for the foreseeable future. A guarantee against screen tearing is nice, but I've rarely seen it on the setups I run (admittedly, mostly on high end hardware), whereas low input lag, stability and driver support are things I am loath to forgo.


> (Fuck proprietary drivers though, and fuck Nvidia.)

I love nVidia's drivers. They "just work" and I don't need to muck about trying to understand why this version of this video driver doesn't play right with that version of drm or this kms setting.

Don't get me wrong, there are benefits to being a proper, native component of a modular display system. But what's the point if none of them work, only support random subsets of the hardware, and crash left and right?


Good for you. I'm having the worst Linux experience in 20 years, because I was tempted to buy a high-end laptop that happened to have an nVidia graphics card (ThinkPad X1E). Never ever will I touch hardware with nVidia again. Certainly, I might not be the target audience for this - I don't play games and work mostly in a terminal and browser.

I eventually managed to make it work, and I am now able to select between the nVidia and Intel GPUs - but I have to say that I do not see any difference in performance. The overall performance is below what I was used to on previous computers - it might be due to the high resolution (3840x2160).

Connecting external monitors is a nightmare. It produces a lot of heat. And it has made me lose so much time! At this point I do not even care about proprietary versus open source drivers - I just want it to work.

This is the most sluggish Linux system I have had since 1995. There is so much lag that even typing in the browser or the terminal makes me make mistakes.


> none of them work, only support random subsets of the hardware, and crash left and right

Anecdote for anecdote, you just described my experience with NVidia's dreadful hardware / drivers.


> I've heard that Nvidia is fixing this in KDE Plasma and maybe Gnome too

I'm sorry, are you saying that drivers now match specific DEs? That's a sufficiently hideous layering violation to make me automatically dislike Wayland, iff true.


That specific problem is related to NVidia's refusal to support the open API components that every other player is standardizing around.

The reason it's becoming possible to run Wayland compositors on NVidia hardware just now is because the Gnome (and now KDE) teams have just given up and started implementing the NVidia-specific pieces. It's not really a Wayland problem, it's an NVidia problem.

You can run any Wayland compositor on AMD hardware without this issue.


This rant would be better directed at the landscape of cheap, poorly-supported ARM devices like the Raspberry Pi and its clones.

These manufacturers should be developing proper GPU drivers for mainline with full KMS/DRM/mesa support before they even sell their boards to the public claiming Linux support.

Wayland and Xorg work just fine on Intel integrated graphics, Intel has been setting the standard here for over a decade now.


The Pi is running on a set top box chip that has been "graciously" thrown over the wall by Broadcom. Broadcom are not friends of the open source community.

>These manufacturers should be developing proper GPU drivers for mainline with full KMS/DRM/mesa support before they even sell their boards to the public claiming Linux support.

Broadcom or the Pi Foundation? One doesn't care and one doesn't have the resources.


At least the Raspberry Pi has documentation for its GPU.

The rant should be directed at ARM itself for not providing documentation for Mali.


There is a mainline driver for the RPi with full Mesa support. On e.g. Arch Linux ARM, you can just install a wayland compositor and run it.
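e.g., assuming sway is packaged there like it is on regular Arch:

    pacman -S sway
    sway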

It's just the conservatism of distros like Raspbian: 32-bit, old kernel, proprietary blobs, old packages (stable Debian).


Some clones are better than others. This kernel https://github.com/TinkerBoard/debian_kernel supports DRM/KMS quite well.

Don’t know about the Linux desktop, but for my embedded use case, where I build stuff directly on top of DRM, KMS and GLES, it works fine driving 2 displays, one of them 4K.


Because it's just not ready yet. It doesn't cover all the needs it must cover in order to be considered superior to X.

At least that's the only reasonable explanation I can find.


Meh. As far as I'm concerned, Wayland is strictly inferior to X, so long as it fails to support network transparency / remoting. Talk to me when they have a real solution for this, besides VNC or supporting RDP.


Xorg remoting was poorly designed to accommodate DRI/DRM and video card acceleration, and only had adequate performance for displaying primitives on screen. That's the reason for poor Xorg remoting adoption. I don't remember seeing anyone use it this decade - it hands down lost to VNC/NX and RDP.


I use it, so I could care less about your "it hands down lost to vnc/nx and rdp" assessment. And until Wayland has a mechanism that has the flexibility of X remoting, it is a strictly inferior technology - for my purposes.

That doesn't mean that I may not be forced to adopt it due to market forces, but let's not pretend that this isn't a glaring hole in Wayland for many people.


couldn't care less.



not to mention that 'poor adoption' is much more likely driven by the 'isolated single user on an Ubuntu laptop who doesn't know this exists' usecase dominating in sheer numbers over the 'network of Unix workstations operated by a group of knowledgeable users' one, than by 'application X is slightly slow over the network, boo'


Yeah, VNC is just way better. Even running a local Docker container and using X11 forwarding is extremely slow.


It depends on how you define better. VNC may be faster, but it's just because X carries a lot of historical cruft, I believe. Usability-wise, X is more flexible. I can move individual windows between computers in X. I can't do that with VNC.

I wouldn't mind doing the jump to Wayland if it were as flexible as X, but that doesn't seem to be the case. Correct me if I'm wrong.


It's not "historical cruft" that makes it slow. It's the fundamental architecture of X-Windows that makes it slow. Computers these days are network bound, not CPU or memory bound. It's not sluggish because it's executing too much code or running out of memory or too complicated. It's sluggish because it's making the wrong decisions to do the wrong things at the wrong times in the wrong places.


What's the point of running a program locally, to be rendered locally, sending the primitives to an Xorg server to be displayed somewhere else? Is it using Xorg as a network proxy?


I think you got it kinda backwards, but due to confusing naming you wouldn’t be the first.

It’s more like I’m logging into a remote server (using stuff like SSH), starting applications there and getting the GUI up on my local desktop, like any other local window/app, due to X-forwarding (back to my Xorg server).
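Concretely, it's something like this (hostname made up):

    ssh -X me@devbox
    firefox &    # the window opens on my local desktop, managed like any other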

It’s not perfect, but it sure is a lot more “natural” and integrated into your desktop than VNC or RDP.

I hope we can keep something similar with Wayland (using maybe XWayland or other compatibility kludges). I think it’s pretty nice.


RDP has had remote app for a long time now, which is essentially the same thing: run program on remote machine, forward its window to local.


The Andrew Window Manager / Andrew User Interface System / Andrew User Environment supported remote display of windows on a workstation display before X-Windows, circa 1985. Andrew was used by lots of people at CMU where it was developed, and also externally. And a lot of the people who worked on Andrew also worked on X10, X11, NeWS, and even (much later) Java.

https://en.wikipedia.org/wiki/Andrew_Project

>Initially the system was prototyped on Sun Microsystems machines, and then to IBM RT PC series computers running a special IBM Academic Operating System. People involved in the project included James H. Morris, Nathaniel Borenstein, James Gosling, and David S. H. Rosenthal.

>The Andrew Window Manager (WM), a tiled (non-overlapping windows) window system which allowed remote display of windows on a workstation display. It was one of the first network-oriented window managers to run on Unix as a graphical display. As part of the CMU's partnership with IBM, IBM retained the licensing rights to WM. WM was meant to be licensed under reasonable terms, which CMU thought would resemble a relatively cheap UNIX license, while IBM sought a more lucrative licensing scheme. WM was later replaced by X11 from MIT. Its developers, Gosling and Rosenthal, would next develop the NeWS (Network extensible Window System).

Andrew died! Andrew is dead!

https://www.youtube.com/watch?v=nZMuBIJxmnA&feature=youtu.be...

How often mankind has wished for a world as peaceful and secure as the one Andrew provided.


Interesting, I've never heard of that facility before. I guess it wasn't promoted very heavily? Or maybe it's just because RDP comes from the MS world and I try very hard to avoid Windows and anything Windows-centric unless forced to.

Anyway, thanks for sharing that. I'll look into it.

Here's some info on RemoteApp for anybody else who's interested.

https://techcommunity.microsoft.com/t5/Enterprise-Mobility-S...

https://social.technet.microsoft.com/wiki/contents/articles/...


Thanks for the links!

Reading those though, I’m kind of underwhelmed.

You have to have a dedicated Windows Terminal Server installation (which I’m sure has expensive and complicated licensing), on that you have to “publish apps” (which seems like a process in itself), on the client you need to subscribe to feeds.

And after about 100 such individual steps... magic.

With X11 I just forward a socket, launch a normal program normally, and everything just works.

That’s just so much simpler, so much easier to work with and easier to understand. I can do it casually, on demand, when I need it. No preparation needed.

That Windows thing... looks expensive, and like something which takes planning.


> Or maybe it's just because RDP comes from the MS world and I try very hard to avoid Windows and anything Windows-centric unless forced to.

It'll be due to that.

For the most part it works really well, though still not flawless after all these years.


in the usual case you ssh to a big fat box (the server) and run X clients; the X server is running on your ssh client box.


I run proxmox on one system to run a few VMs. I view the vm console displays remotely in the browser using novnc.

I thought it was pretty cool the first time I used it.


Uh? I much preferred NX over VNC because I didn't have to define the resolution when starting the server, plus it felt faster


> it hands down lost to vnc/nx and rdp.

Those solutions don't do what X remote desktop does, though. Namely, those let you share a desktop rather than providing independent remote desktops.


To my knowledge, to supply multiple X remote desktops, you need to SPAWN multiple X desktops, and that's no proper way to scale their number - they are all independent and don't share common resources (incl. incoming ports)


I think how X works with this (and a lot of other stuff) can be very fairly criticized. Despite the implementation drawbacks, though, I use this ability heavily and in practice it gives me no trouble. Plus, I'm unaware of any other solution to this use case.

It doesn't scale well if you're talking about dozens of simultaneous desktops, yes, but I rarely have more than three at a time. It's fine for that.


DRI/DRM makes the assumption that your GPU is plugged in the one and only machine hosting all your apps. This wasn't that crazy in 1998, but today the GPU is under my desk while myriad app hosts are racked in datacenters (where a GPU might be an expensive addon, but probably doesn't exist) and good remoting is so neglected that the industry is using web browsers as display servers to fill the gap.


No, web browsers are used as an application platform, not a display server. Especially with SPAs ("thick" clients).


I use ssh -X all the time.


nonono, you're simply not getting it.

I don't use it for reason X, therefore I can project this reasoning onto those that disagree, and dismiss them by calling them outdated. So, obviously your use case is irrelevant.


I've not seen anyone using VNC or remote Xorg for years. Is it really that important these days?


When something works and is the only solution to particular problems, the bar for it being important enough not to discard is not very high.

I know of no other method to forward individual programs GUIs to other machines. Abandoning X is abandoning a power we have.

You're right that it may not be used as much. Last time I used it was to debug why selenium tests running in Chrome were failing on a server with a virtual display (Xvfb) when the same tests worked on any developer's machine. I forwarded Chrome's GUI from a docker container on a server in another room to my development machine. Its window was neatly tiled next to my other windows in my tiled window manager. You wouldn't be able to tell it wasn't running locally. I don't have to mess around with a desktop inside a desktop. Such neatness in UX is a comfort I'd like to keep.

I'm all for making a new, more efficient display server, but please don't take powers away.


I just use ssh/screen/emacs to work on remote servers. Why would I ever want an xterm?


I bring up remote Emacs frames (from existing sessions, using emacsclient) over ssh forwarded X11 all the time. I prefer this method over tty-only because of the better keyboard support (tty can't pass through all the modifiers or even all the ctrl combinations), image support and clipboard integration. (better colors used to be another reason but less so now that we have 24 bit color support in tty Emacs)
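Roughly like this (hostname made up; assumes an Emacs daemon is already running on the remote side):

    ssh -X devbox emacsclient --create-frame
    # the frame opens on the local display over the forwarded X11 connection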

With the fast networks we have nowadays performance is quite good.

I dread the time (if it ever comes) when Wayland takes over to the point that X11 won't be practical to use. And I use EXWM too, which would make it even more painful.


Have you tried tramp? I routinely use it to edit files on systems without emacs installed. Also you can edit files belonging to root, etc.


Yes, I use tramp as well for all those things. The remote Emacs session is more for everything else I do with Emacs, mail, chat, M-x shell sessions, dev sessions in progress with language servers running, maybe a debugger.

I use tty emacsclient plenty too (including from my phone) and I could certainly manage without remote X11, and perhaps even without EXWM (ouch), but I'm not looking forward to the prospect of losing it, in exchange for... what? Smoother window transitions and scrolling? I don't want my windows to transition, I want jump scroll and things to pop in place instantly. I disable that shit even on Android.

So I guess I will stick with X11 as long as I can and reminisce about better days once it's gone.

(Emacs having its own remote display protocol would be a way to tackle the first part and maybe EXWM could be re-implemented for Wayland somehow too but neither of those things exist today)


> I use tty emacsclient plenty too (including from my phone)

oh my.


You can also expose the Emacs server socket and allow a local native GUI emacsclient to connect to it.


It would be amazing if it worked that way! But I don't think that it does. :(

emacsclient only tells the Emacs session it is talking to to create a new frame, either on an X11 display or on the tty where emacsclient is running. After that emacs does all the work, emacsclient just waits for emacs to tell it that it's done, it takes no active role in actually displaying stuff.

I would love to be wrong on this, please tell me if I am! I would love it if Emacs actually had its own remote display protocol.


Good point -- I haven't tried gnu emacs's X11 driver in ages, so I'll give it another try. Does gnu emacs's X11 display driver finally have a way to keep running after the X connection terminates, and reconnect to a new X server (or multiple X servers at once)?

Under screen, I keep long running emacs sessions with multiple shell buffers running for months and sometimes years, with all the files I'm working on opened up. Each shell buffer might be configured for some branch of some code, with an interactive python or whatever shell running the code connected to the database, with a bunch of useful commands and context in its history. It's a lot of work to recreate all that state.

(Digression: Bash has no way to save and merge and recreate and manage parallel history threads, does it? Or does the last shell that exits just stomp on the one history?).

Back in my Evil Software Hoarder days, I worked on the NeWS display driver for UniPress Emacs 2.20, and later on the NeWS display driver for gnu emacs. Here's a brochure from February 1988 about UniPress Emacs 2.20 and "SoftWire" (NeWS without graphics).

https://www.donhopkins.com/home/ties/scans/WhatIsEmacs.pdf

It supported multiple display drivers (text, X11, NeWS, SunView), as well as multiple window frames on each of those displays (which gnu emacs didn't support at the time), and you could disconnect and reconnect to a long running emacs later. In effect it was a "multi user emacs" since different users could type into multiple displays at the same time (although weird stuff could still happen since the classic Emacs interface wasn't designed for that).

Emacs 2.20 Demo (NeWS, multiple frames, tabbed windows, pie menus, hypermedia authoring):

https://www.youtube.com/watch?v=hhmU2B79EDU

Here are some examples of where the rubber hits the road in NeWS client/server programming of an emacs NeWS display driver. They both download a PostScript file to NeWS that handles most of the user interface, window management, menus, input handling, font measurement, text drawing, etc, and they have a corresponding C driver on the emacs side. There's also a "cps" file that defines the protocol (which is sent in tokenized binary instead of plain text), and generates C headers and code stubs. Together they implement an optimized, high level, application specific "emacs protocol" that the client and server use to communicate:

Emacs 2.20 NeWS display driver (supporting multiple tabbed windows and pie menus in the NeWS Lite Toolkit):

https://www.donhopkins.com/home/code/emacs.ps.txt

https://www.donhopkins.com/home/code/TrmPS.c

Gnu Emacs 18 NeWS display driver (supporting a single tabbed window and pie menus in The NeWS Toolkit 2.0):

https://www.donhopkins.com/home/code/emacs18/src/tnt.ps

https://www.donhopkins.com/home/code/emacs18/src/tnt.c

https://www.donhopkins.com/home/code/emacs18/src/tnt_cps.cps

  % Return the minimum size to keep emacs from core dumping
  %
  /minsize { % - => w h
    /?validate self send
    CharWidth 10 mul Border dup add add
    LineHeight 5 mul Border dup add add
    % XXX: Any smaller and it core dumps!
  } def


You're replying to a comment that gives an example of use of X11 forwarding that's not fulfilled by ssh/screen/emacs.


> Abandoning X is abandoning a power we have.

A power that literally no one uses.

Well, no one that matters anyway. For values of "matter" equivalent to "uses a modern desktop".

Whatever the case, modern development heavily favors coding for the common case, not supporting a flexible framework that can accommodate fringe cases. And in 2019, your use case is fringe.


If you abandon every feature that a handful of users think nobody uses, on any piece of complex software, you will end up removing something that everyone uses.

People with opinions like yours are why GNOME developers removed the GUI functionality for managing RAID arrays from GNOME Disks, with the explanation that people should just use Btrfs or ZFS. This left you with an easy GUI for creating a RAID array that you won't be able to fix without learning CLI tools.

https://web.archive.org/web/20140327002450/http://worldofgno...

At the time (2014), insofar as I'm aware, you couldn't create a ZFS or Btrfs RAID in the installer. Further, reports of Btrfs eating data were still disturbingly common, and ZFS wasn't in anyone's official repos.

Regarding the "modern" desktop

"That might have been so if he had lived a few centuries earlier. At that time the humans still knew pretty well when a thing was proved and when it was not; and if it was proved they really believed it. They still connected thinking with doing and were prepared to alter their way of life as the result of a chain of reasoning. But what with the weekly press and other such weapons we have largely altered that. Your man has been accustomed, ever since he was a boy, to have a dozen incompatible philosophies dancing about together inside his head. He doesn't think of doctrines as primarily "true" of "false", but as "academic" or "practical", "outworn" or "contemporary", "conventional" or "ruthless". Jargon, not argument, is your best ally in keeping him from the Church. Don't waste time trying to make him think that materialism is true! Make him think it is strong, or stark, or courageous--that it is the philosophy of the future. That's the sort of thing he cares about."

C.S. Lewis The screwtape letters.


> A power that literally no one uses.

I just told how I used it.

> Well, no one that matters anyway.

I don't matter? That's kind of rude.

> Whatever the case, modern development heavily favors coding for the common case, not supporting a flexible framework that can accommodate fringe cases. And in 2019, your use case is fringe.

Are you going to say that accessibility features should also be discarded? Being deaf or blind are not the common case.


"Modernity", "network transparency is irrelevant in $CURRENT_YEAR" and "nobody uses those features anymore" are among the most commonly cited reasons why you must abandon X NOW and switch to Wayland. Those and "muh security". (The very real security issues with Xorg could be mitigated or eliminated completely with a more sophisticated X server design.)

Like it or not, you will have to come to grips with the fact that the Linux desktop and graphics-stack discussion is dominated by developers from the age of GNOME, who haven't put much thought in beyond how they personally use their own MacBooks. These are the ones calling the shots, and they've decided that X is obsolete, that network transparency is cruft that should be eliminated, and that Wayland is suited to task. So Wayland will be the supported solution going forward.

> Are you going to say that accessibility features should also be discarded? Being deaf or blind are not the common case.

Given the shit state of accessibility under Linux, I'd say yes, it is fringe to the people building the Linux desktop. If you're disabled, it makes much more sense to get a Mac or Windows machine.


I don't think it's really a fringe use case, I actually think it's much more common than you think.


I use ssh -X all the time.


I no longer use X, but when I did, I had to use ssh -XY


Is X over SSH that important? Not really. Is it handy enough for me to use it on a daily basis? Yes, without a doubt!

Everyone has a different use case though. I could just as well be using RemoteApp or something along those lines.


Remote Xorg is the easiest way to get GUI applications running under WSL on Windows. That's the only recent relevant application I've come across though.


Remote Xorg is important to me. I use it pretty much daily. I use VNC as well, but less often.


Remoting will be up to the compositor implementation; it's not specified in the Wayland protocol. For example, Weston just added a "remoting" plugin last month. https://www.phoronix.com/scan.php?page=news_item&px=Weston-6... This also allows for some really crazy ideas, like a Wayland compositor running in a browser for remote app access. https://github.com/udevbe/greenfield


After this many years, I think the ambition to support remoting at the per-window level just isn't important. In practice I've actually found it outright confusing when I have two windows operating inside two separate environments, but they're both in the same visual container.

VNC/RDP work well enough, and SSH works well enough for when you don't need a GUI.


Both RDP and Citrix do support publishing just the application, without the desktop. It is actually pretty neat to have an application, running somewhere in the server room, mixed in with my local applications.

Going even further, the remote applications can run each on different machine in the Citrix cluster, so it is possible to load balance on per-app basis.


Don't these solutions simply crop the desktop out? You used to be able to see the desktop behind the app if you resized the window rapidly.


I'm able to see flashes of background color, but not the desktop. The Start menu and other desktop widgets are unavailable, except the keyboard widget, which has support coded into the client.


I tend to agree. The original promise of remote X11 (server side compute and thin clients) is provided via web apps these days.


ssh -X works well (enough) when you need a GUI.


That only works if you’re running X11, not on Wayland (in case it was unclear; I was momentarily confused, so I’m posting.)


Meh. As far as I'm concerned, Wayland is strictly superior to X, so long as it fails to support keyloggers.


Wayland is developed by people who don't take security seriously at all. So security arguments are irrelevant here and do not favor wayland in any way.

I'd say wayland is just a new thing that breaks everything in an attempt to break less.


Except it does actually support keyloggers while breaking the concept of global keybindings. It's like the worst of both worlds.


It’s a red herring anyway. All processes owned by a user can influence all other processes owned by that user, either directly or indirectly. It’s basically impossible to prevent. Don’t run code you don’t trust and don’t let things you don’t trust connect to your display server.


Well, you don't have to run an untrusted program under your user account; you can sandbox it. If you run something in a jail and pass it a wayland socket, it will be able to display, but won't be able to modify your files.
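A minimal sketch with bubblewrap, for example (paths from memory, app name hypothetical):

    bwrap --ro-bind / / \
          --tmpfs "$HOME" \
          --dev /dev \
          --bind "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" \
          untrusted-app

Read-only root, throwaway home, but the Wayland socket passed through, so it can draw windows without touching your files.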


Half-offtopic: I find it very interesting that Raspbian sticks to LXDE. I thought that was going to slowly disappear, with LXQt having the developer attention and there being no path forward towards HiDPI and, well, Wayland (because neither GTK2 nor Openbox support these).

LXQt currently still uses Openbox, but you can replace it with KWin rather easily. I don't know what is then still missing to create a proper LXQt Wayland session, but it seems feasible.

I guess, one does not really need HiDPI on a Raspberry Pi, but yeah, Wayland would be nice.


I recently switched back to Arch and decided to try out Wayland. So far, I have only had some problems with Firefox, which could be solved; I'm just being lazy. My one wider complaint is no AutoKey support, which isn't planned on being added. Overall, a solid replacement for X11.


I'm not sure what exactly you mean by autokey, but you can implement various keyboard tricks on various levels…

If you have a compositor that supports plugins, such as Wayfire, you can write a plugin: https://github.com/myfreeweb/numbernine/blob/master/wf-plugi...

You can also do things on the evdev level by listening to keys and emitting new keys from a virtual device. My tool for that: https://github.com/myfreeweb/evscript


The article is essentially focused on the old idea of framebuffers:

> Even more ideally these memory chunks would just be textures in the GPU.

And that idea is very far from modern world of high-DPI monitors.

In a high-DPI UI you cannot operate on textures anymore.

A window surface should be represented not by a bitmap (which is O(N) in pixels to fill on the CPU) but rather by command lists:

   [opFillRect,0,0,100,100]
   [opFillPath,...]
   [opBlitBitmap,...]   
 
The window compositor should pass such command lists to the GPU for rendering.

This way, filling a rectangle on a window's surface becomes an O(1) operation - just send the [opFillRect,0,0,100,100] command to the window (and so to the GPU) for rendering.


I believe you are at the wrong stage, mentally. This is all about the compositor that puts the final image on your screen together. The input to this component is always a list of textures with associated meta information (offset on screen, scaling, alpha mixing, Z position.. whatever your compositor offers). How these textures are created is not the business of the compositor - and the routine case is of course through GPU drawing!

There are two things to mention here that are of particular importance on mobile systems. Lots of chips now have compositor hardware - separate silicon from the GPU that can read textures, scale them, blend them and push the resulting pixels directly to the screen. And second, the word "texture" is used here to mean "any format that GPU, compositor and video decode engine can read or produce" - in stark contrast to the old framebuffer approach, it is essential that things stay in their "native" format for as much of the pipeline as possible and are never CPU read or modified, which would require conversion.
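In other words, the per-surface input to the compositor conceptually looks something like this (field names are mine, not any particular compositor's API):

    /* sketch: what a compositor consumes for each window surface */
    struct texture;               /* some GPU/decoder-native buffer, never CPU-touched */
    struct surface_entry {
        struct texture *tex;
        int x, y;                 /* offset on screen */
        float scale;
        float alpha;              /* blend factor */
        int z;                    /* stacking position */
    };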


Thanks for this great primer! Not quite through with the article, but I've recently been using my Raspberry Pi as a workstation (I mostly use SSH anyway), and I've had issues / confusion with hardware acceleration and X vs. Wayland for a long time.

I never seem to be able to get hardware acceleration to work properly in Linux on my MacBook---whenever I open up a webpage, CPU usage goes through the roof, unlike in Mac OS. Wayland seems to help a bit, but there seem to be lots of bugs that can only be fixed by the window manager.


I had a full Plasma Wayland session running on AArch64 RPi in mid 2016, so this is just wrong.

The only issue is that the Raspbian downstream kernel and Broadcom's proprietary userspace drivers are a pretty big mess and not compatible with anything.

When not using any of them, you get a much better experience with software compatibility.


I don't mind if it exists on the Pi, so long as X is still an option. I use the remote desktop facilities X provides even more with Pis than with any other platform, and to the best of my knowledge, Wayland doesn't have anything like that.


Last time I coded for the Raspberry Pi, gen 2, I used Wayland with Sway on Arch Linux ARM. Fast and more responsive than the standard DE provided by Raspbian. Was running Sway on Raspbian but needed access to the latest version of Go.


I've been using it ever since it became available in Fedora; no more screen tearing out of the box, unlike X. But it does cause me issues on my ThinkPad with a Wacom digitizer. I could probably fix it quite easily if I stopped being lazy.


Did BeOS and Haiku use X? I think not - so how do their graphics systems work?


Direct to hardware. There is a framebuffer driver that IIRC uses VESA primitives, and a few other more generic drivers with bit blitting and OpenGL support.


Has anybody tried porting that system over to Linux?


Everyone tied to that is much more focused on actually getting Haiku OS to work everywhere.


Haiku's app_server is home grown, using the AntiGrainGeometry libs for rendering to the framebuffer.


> It delegated all of that to X windows and user space drivers.

and

> Further slowing down progress is X-Windows. X is the graphical interface for essentially all Unix derived desktops (all other competitors died decades ago).

I'm confused. What's "X windows" and "X-Windows?" Is he talking about the X Window System?

https://en.wikipedia.org/wiki/X_Window_System


I wish I'd seen it when it came out. He sort of stopped at the first hurdle when it came to trying to compile.


Ubununtu, hilarious typo


"I can spell banana but sometimes I forget when to stop"


tl;dr: shitty graphics hardware vendors who don't care about Linux support.



