X: The First Fully Modular Software Disaster (art.net)
206 points by handpickednames on Aug 17, 2017 | 136 comments



Ah, this is entertaining to contrast with https://news.ycombinator.com/item?id=15031814 "Why did software go off the rails". The "UNIX haters handbook" is extremely old, as if you couldn't tell from the references to Reagan and the 50-MIPS workstation (roughly equivalent to a $1 Cortex-M0 in today's money).

And in many ways they're not wrong:

- "X has defeated us by switching the meaning of client and server"

- "most computers running X run just four programs: xterm, xload, xclock, and a window manager" : the intervening 20 years have added a web browser. Almost all software run by the user is in either the xterm or the browser.

- ICCCM is hilariously complicated, although cut-and-paste largely works now. It's just that there are two different cut-and-paste mechanisms in play.

- client/server division of labour is still being fought over on the web

- X authentication is painful to do with the original mechanism, but was eventually fixed by ssh X forwarding

- Xdefaults is a great mystery, largely obsoleted by the GNOME people with their own mysterious pseudo-registries and, god help you, polkit

- X still has trouble with the wide variety of graphics hardware out there, although you can usually get 2 monitors working eventually.

- "NeWS an d NEXTSTEP were political failures because they suffer from the same two problems: oBNoXiOuS capitalization, and Amiga Persecution Attitude(TM)." : basically true, until they dropped the attitude and capitals and became Cocoa.


> - "X has defeated us by switching the meaning of client and server"

If you think in terms of network programming, where the "server" is the program that accepts incoming connections and handles requests from clients, and the "client" is the program that initiates the connection and sends requests, it is not confusing at all.
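Xlib makes the direction concrete: the application is the client, and it connects out to whatever display server $DISPLAY names. A minimal sketch in C, assuming Xlib headers are installed (build with "cc x.c -lX11"):

    /* Minimal X client: connects out to the server named by $DISPLAY.
       The "server" owns the screen and input devices; this "client"
       may well be running on a machine across the network. */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);   /* NULL: use $DISPLAY */
        if (!dpy) {
            fprintf(stderr, "cannot connect to X server\n");
            return 1;
        }
        printf("connected to %s (vendor: %s)\n",
               DisplayString(dpy), ServerVendor(dpy));
        XCloseDisplay(dpy);
        return 0;
    }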

The security part is true, though. X authentication sucks, and using ssh might make it usable (plus, encryption), but that is kind of like saying that email is not a secure communication channel, but PGP solved that.

But there is no alternative that solves the problems with X while keeping or improving upon its strengths (that I know of, I should add):

The "mechanism not policy" part is the reason it is possible to choose from several desktop environments.

Also, X has achieved something that no other current windowing environment can do: Transparent, cross-platform windowing. There are X servers available for all the free Unices, macOS, Windows, and once upon a time, I am told, for OpenVMS, too.


Indeed, X liberated us from expecting workstations to only be clients and servers to be some big iron where I go to get service. What X began, lightbulbs (with embedded servers) are now completing.


That and FTP. FTP Passive mode used to be an optional extension.


This has always struck me as an odd decision.

I am not an authority on designing network protocols by any standard, and using two separate connections for commands and data seems strange enough to begin with (at least to me). I cannot imagine what possible reason the people who designed the protocol had.

To be clear, I am not certain if I would have done a better job.

Either way, I was very happy when I discovered sftp. :)
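For what it's worth, the split is visible right in the control-channel dialogue. A sketch in C of the client's side of passive mode, assuming the "227 Entering Passive Mode" reply has already been read off the control socket (the reply text here is made up):

    /* Sketch: parse an FTP "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)"
       reply. The client then opens a SECOND TCP connection to h1.h2.h3.h4
       on port p1*256+p2 just to move the file; the control connection
       carries only commands. */
    #include <stdio.h>

    int main(void)
    {
        const char *reply = "227 Entering Passive Mode (192,168,1,10,197,143)";
        int h1, h2, h3, h4, p1, p2;

        if (sscanf(reply, "227 Entering Passive Mode (%d,%d,%d,%d,%d,%d)",
                   &h1, &h2, &h3, &h4, &p1, &p2) == 6)
            printf("open data connection to %d.%d.%d.%d:%d\n",
                   h1, h2, h3, h4, p1 * 256 + p2);
        return 0;
    }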


FTP always seemed to me to be a design that was never fully implemented. There's a hint that downloads should run in the background and in parallel while you look around the directories, but FTP clients and servers for a very long time were stubbornly single threaded.

One also has the impression that FTP was supposed to be integrated into your shell and avoid the clunky client entirely, but the design wasn't quite there to support it, so we got the worst of both worlds: a funky network protocol that ran into a lot of trouble when firewalls and NAT appeared, and a primitive placeholder client that only downloads a single file at a time.


Separate control and data channels is quite a .. "telco" design. Also present in the much more recent SIP protocol.

Do remember that FTP is very, very old. It's from 1971.


The Unix Haters Handbook may be old, but it's a glorious combination of humor and legitimate commentary. I read it years ago while on a vacation, and I don't think I've ever laughed as much while reading any other book, ever.

Here's a link to the book in its full glory:

http://simson.net/ref/ugh.pdf


I reread UHH recently, and I don't find it has aged well. The humor is very dated, as is much of the commentary. I was keeping notes reading through it about what was no longer correct or relevant, but it became sort of pointless when the notes were, "This whole chapter is wrong, and so is the next. Who even has heard of csh at this point?" It definitely has its place in the hacker canon, but I'm not sure I could wholeheartedly recommend it to anyone who doesn't remember the computing scene back then.


It also ignited debate over what it was even called. Is it the "X-Window System", or abbreviated, "X-Window" or, like the British and their "maths", is it "X-Windows"? Not to be confused with "Windows®" of course.


>To annoy X fanatics, Don specifically asked that we include the hyphen after the letter "X,", as well as the plural of the word "Windows," in his chapter title.

Of course :-)


Is this even a debate? Wikipedia makes it clear that it is called "The X Window System", or "X" for short.

https://en.wikipedia.org/wiki/X_Window_System



It was not a debate really, it was a popular error that was much remarked upon. It was as much of a shibboleth then as people saying "SystemD" now is.


I think the US is the odd one out here with your one unit of 'math'.

Mathematics -> maths

You don't say, "I am learning mathematic", unless you are a cretin.


> You don't say, "I am learning mathematic", unless you are a cretin.

So why is "maths" plural? It's sure as hell not countable. What is "a math"?

At least "mathematic" makes sense—you can deconstruct the morphology to understand this is "a lesson", i.e. the gerund of "to learn". It's also the natural english adjectival form of the greek word.

"maths" is just weird. You might as well use "magicks".


It has an 's' at the end, but it is still treated as a singular noun in most contexts. You don't say "My favorite subjects are mathematics", you say "My favorite subject is mathematics".


To be fair, you would also not say “My favorite subjects are games”; there’s only one subject, called “games”, even though the word “games” is a plural form of “game”.

I can’t really come up with a situation where you would want to refer to multiple “mathematicses”; it kind of just is something ethereal, not a _thing_ where it makes sense to refer to one or many.


Hypothetical_Bob: "I like applied and pure maths. Mathss are my favourite subjects. I don't know which one to pick for my A levels".


Most people abbreviate "economics" as "econ"; why not "mathematics" as "math"?


This is the wrong place for this debate, but "math" is an abbreviation of "mathematics". Why would you take the end of a word you are abbreviating and add it to your abbreviation? I'll add that "mathematics" is a singular noun.


All of mathematics is connected at the root, so there is only one math.


I've long preferred "math's", which was actually in common use at some point in the 19th century. It's a contraction, like "bo's'n" :-).


"like the British and their "maths""

You should see my mathematic 8)


> "most computers running X run just four programs: xterm, xload, xclock, and a window manager" : the intervening 20 years have added a web browser. Almost all software run by the user is in either the xterm or the browser.

Don't forget random enterprise software installers that insist on a graphical mode even though they could never successfully be operated by someone who isn't comfortable on the command line


> X has had its share of $5,000 toilet seats -- like Sun's Open Look clock tool, which gobbles up 1.4 megabytes of real memory! If you sacrificed all the RAM from 22 Commodore 64s to clock tool, it still wouldn't have enough to tell you the time. Even the vanilla X11R4 "xclock" utility consumed 656K to run. And X's memory usage is increasing.

I just opened up "clocktab.com" on Chrome, the first simple clock I could find by googling "clock," and Chrome's task manager shows it using a full 42 megabytes of memory.


My friend complained when I got my Amiga. He said, "When you put a disk in the drive, it takes up 10K!"

I needed space on my phone the other day so I deleted the flashlight app, which turns an LED on and off, recovering 79 megabytes.


That isn't built in to your OS?


Found a pocket calculator / timer in a drawer the other day. That thing is probably not even measured in kB. And it also runs on four thumbnail-sized solar cells.


Yes, these are hard-wired logic and just a handful of bytes for two to four registers (at least input and accumulator registers, everything in BCD). Only more advanced calculators use floating point.
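If that's right (I haven't decapped one to check), the digit-at-a-time arithmetic is easy to picture. A sketch in C of packed-BCD addition, one decimal digit per nibble, which is why no binary-to-decimal conversion is ever needed for the display:

    /* Sketch of packed-BCD addition, one decimal digit per nibble, as a
       hard-wired calculator might do it: each nibble is already the digit
       to show on the display. */
    #include <stdio.h>
    #include <stdint.h>

    /* Add two packed-BCD bytes (two digits each) with decimal correction. */
    static uint8_t bcd_add(uint8_t a, uint8_t b)
    {
        uint8_t lo = (a & 0x0F) + (b & 0x0F);
        uint8_t carry = 0;
        if (lo > 9) { lo += 6; carry = 1; }  /* decimal-adjust low nibble */
        uint8_t hi = (a >> 4) + (b >> 4) + carry;
        if (hi > 9) hi += 6;                 /* overflow past 99 discarded */
        return (uint8_t)((hi << 4) | (lo & 0x0F));
    }

    int main(void)
    {
        uint8_t sum = bcd_add(0x47, 0x38);       /* 47 + 38 */
        printf("%x%x\n", sum >> 4, sum & 0x0F);  /* prints 85 */
        return 0;
    }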


I've been interested in how 4-function calculators and cheap clocks work for a while.

Most of the implementation details would appear to be buried in files on engineering workstations at Chinese factories.

I found a calculator teardown at http://electronupdate.blogspot.com.au/2016/08/reverse-engine... a while ago. The author says it's very old but doesn't provide a ballpark (eg, 1990, 1995, 1998, 2003, etc). It does look quite hairy (and possibly not particularly compactly designed?).

I would guess there's a ridiculously simple ALU somewhere in there, but I wonder what else is involved in the specific context of a high-production-volume fixed-functionality design.

I hope to find something similar for a clock at some point.


The other day, for testing purposes, I managed to build a minimal Docker container with the smallest 'exit 0' binary I could produce without any serious trickery, just gcc -static -s -Os, and was kind of embarrassed that it took 840 kB.

(I realize there are ways to get this down to probably the sub-1kB level, but I wanted something I could generate reliably on the fly in a shell script, so...)


The problem is glibc: https://github.com/lattera/glibc/blob/master/stdlib/exit.c

And by extension, the rather complex set of POSIX exit() semantics: http://pubs.opengroup.org/onlinepubs/000095399/functions/exi...

Using something like musl should save you some bloat, or simply use "return 0;" instead of exit().
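For the sub-1kB route the grandparent mentioned, one option (a sketch, assuming x86-64 Linux) is to skip libc entirely and make the system call yourself:

    /* Sketch: a no-libc "exit 0" for x86-64 Linux.
       Build with: gcc -nostdlib -static -s exit0.c
       _start is the raw entry point; we invoke the exit_group
       system call (number 231 on x86-64) directly. */
    void _start(void)
    {
        __asm__ volatile (
            "mov $231, %rax\n"   /* SYS_exit_group */
            "xor %rdi, %rdi\n"   /* status = 0     */
            "syscall\n"
        );
    }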


We should have a competition for smallest analog clock in terms of running memory.


Here is one, as Sciter renders it: https://sciter.com/temp/sciter-analog-clock.png - it takes 50 MB of memory for the whole process.

This clock sample ( https://github.com/c-smile/sciter-sdk/blob/master/samples/gr... ) is a port of Mozilla's clock : https://codepen.io/anon/pen/vJpqOY

Chrome needs 6 separate processes to run this sample, with a total memory consumption of 575 MB.


I fired up Windows 2.03 in DOSBox, with its clock app, available on archive.org[1], and it took up 60.1 MB. This is fun! I think the OS would run fine (at least, enough for the clock) with as little as 512K of RAM[2].

Screenshot: https://i.imgur.com/09xjyPs.png

[1]: https://archive.org/details/msdos_win2_03

[2]: https://support.microsoft.com/en-us/help/32905/windows-versi... ("512K of memory or greater" for 2.03).


If you have a screen of size 640*480 and render without AA, then you can render on the CPU. Otherwise, with modern hardware and 200 ppi monitors, you will need GPU rendering with the whole infrastructure.


What is that properties window from? That is not the properties window I am used to seeing.


That's Process Explorer by Mark Russinovich


It would probably depend more on the OS and the underlying stack than anything.

E.g. on DOS, with direct memory access (plus maybe some INT 10h helpers to draw the digits; although Windows clock doesn't draw them) in video mode 13h, you could do this with a .COM program that would be under a kilobyte total in and of itself. If you counted DOS itself, you'd still be under 64k.


I just realized my date+time watch uses less than eight bytes of RAM, but that's not really the point, is it.


The Unix Haters Handbook is pretty interesting reading, but I remember that the chapter on X was one of the weakest. It doesn't help that it starts with the hoary old myth that X named the 'server' and 'client' backwards. More relevant however is that from the vantage point of 2017, it becomes clear that X is one of the most unreasonably successful software architectures of all time.

Think about it: how many pieces of software design have remained in heavy use worldwide for 30 years? The fact that no one bothered to replace it in all that time is proof enough that whatever warts it has can't be that painful. (Now, finally, it seems like it's going to happen with Wayland, but it's not there yet.)

Not only that, but computer display technology has changed massively over X's lifetime. I'm writing this in a composited window manager hardware accelerated via natively-3D hardware outputting to multiple monitors of different resolutions over as many different digital display links. How much of this was even imagined in the 1980s? Yet the X protocol, via its extension mechanisms, has proven adaptable enough to take in massive changes in the technology, and still keeps on ticking.


People hate on X quite a lot. I'm not even too fond of it personally. But it runs on /everything/, it's lighter than most display frameworks, and it's very easy to program for.

And to your point, there isn't a single display framework as old as it is that's still used.

> I'm writing this in a composited window manager hardware accelerated via natively-3D hardware outputting to multiple monitors of different resolutions over as many different digital display links

And I'm writing this from a very underpowered netbook with no working hardware graphics, and everything still works. Try that with Windows 10, Wayland, Cocoa, or whatever other display framework you like.


"how many pieces of software design have remained in heavy use worldwide for 30 years. The fact that no one bothered to replace it in all that time is proof enough that whatever warts it has can't be that painful. "

I'm not convinced the modern software ecosystem optimizes for quality. Many CAD packages used today have roots as old or older. The dominant CAD format (at least in construction) is DWG, which - I can tell you from experience - is about as nice to work with on a programmatic level as trying to skin rotten cod. The data contained in this steaming heap of obfuscation is mostly trivial in complexity. Just because we have tools in use does not mean they could not be better - it's rather that once something is in use, social proof, sunk cost fallacy, and some practical reasons kick in, and development stagnates to dealing with the kinks in the established system.


I was just ranting to some people yesterday about how with HDMI there are still remnants of the original NTSC broadcast standard from nearly 80 years ago. Even though we've (finally!) reached the point where my digital computer is sending a digital image to my digital monitor with no unnecessary analog conversions in between.


One of my "favorite" parts of HDMI is that you can specify the resolution in two different ways depending on if you are connected to a computer monitor or a TV. Choosing the right one is sometimes important, a resolution might not work on your display if you specify it using the other format.


Just a guess, but does this have anything to do with most TVs getting the image margins cut when plugging a computer into the HDMI port (and the magic "Just Scan" setting that fixes it)?


That sounds absolutely crazy, so I believe it. Can you elaborate though?


Look up CEA vs. DMT video modes. Typically this can become a problem when you need to specify the video mode before connecting the display, or if you're running through a KVM.


Dimethyltryptamine video modes?… Sounds about right.


> Think about it: how many pieces of software design have remained in heavy use worldwide for 30 years. The fact that no one bothered to replace it in all that time is proof enough that

I think you've got this backwards. The worse a piece of software is, the harder it is to work out how it works, upgrade it, replace it with compatible software, etc.

More examples: plenty of people knew OpenSSL was awful but have you seen the code? Nobody sane is really going to want to work on it. It took a catastrophe to change that.

An even better example: LAPACK. It's been around since 1992, and until fairly recently was written in FORTRAN 77. Yes, the one where function and variable names can't be longer than 6 letters or they won't fit on punch cards. Take a look:

http://www.netlib.org/lapack/explore-html/dc/dd2/group__doub...

Yes, nobody is going to rewrite most of that. (Actually there is Eigen now thankfully, but it took some time!)


I used to think the same, until I started using LAPACK extensively. It might look confusing at first; it may even be the most confusing API, except for all the others that are more confusing, and unusable from 3rd party platforms.

When I learned it, I realized that it is excellently designed.


> how many pieces of software design have remained in heavy use worldwide for 30 years.

Unix itself.


The Unix you're using today likely does not share a single line of code with the Unix you were using 30 years ago. It was rewritten.


Out of curiosity, I ran blame on some files in the FreeBSD source code (https://github.com/freebsd/freebsd). In core utilities like "cat", "kill" and "mv", there are still quite a few lines dating back all the way to the original commits of the BSD 4.4 Lite source 24 years ago. Example:

https://github.com/freebsd/freebsd/blame/e278a20c2ee54d8fa1a...

And if you look at lines that were changed, quite often they are style changes/fixes.

Unfortunately, the easy-to-trace history doesn't go back further than that, but I think it's reasonable to assume that some of these lines, at least, could be dated back to before 1987.


But BSD was a rewrite too. I don't know what the mix of Unix versions in common use was 30 years ago, but I assume AT&T's original was up there. That's the version I started with around that time period.

I was actually assuming that most people today were running Linux, but I forgot that Macs use pieces of BSD.


As far as I know, BSD never did a complete rewrite touching every single line of code. They rewrote the code that was inherited, or derived from, AT&T Unix. Which was most of it, but not everything.


On the contrary, there is a lot that is not rewritten at all.

For starters: The Unix that I (and quite a lot of other people) use today shares entire manual pages with the Unix of more than 30 years ago.

An example: The manual page for the ul command that was written by (then) Mark Horton is pretty much unchanged today in FreeBSD and NetBSD. For 34 years, it hasn't actually described the command correctly or fully.

* http://jdebp.eu./Proposals/ul-manual-page.html


https://github.com/dspinellis/unix-history-repo is a project that attempts to document Unix from its very first line all the way up to FreeBSD's HEAD (at least whenever it's imported, which might only be once a year). There's even a gource video showing the evolution.


I heard someone assert that millions of lines of code were stolen, millions!


I'd say that X is the foremost example of actual software evolution.

In the sense that (in the words of my pathologist father when I asked him how hard drug discovery and design was),

"Nature is lazy before it's intelligent. At every turn, it would rather reuse a system designed for a completely different purpose than design something new from the ground up. Which is how we end up with single pathways affecting six completely unrelated systems in the body."


> Nature is lazy before it's intelligent. At every turn, it would rather reuse a system designed for a completely different purpose than design something new from the ground up.

So we can blame all the crazy things we do with computers and software on nature? ;-)

But seriously, that is a great insight. Both where evolution is concerned, and software development. (And other branches of engineering, too, I bet.)


I'm probably one of the only people who liked the .Xresources file.

Sure, it's text and you write regexes to specify objects, but I found it easy to customize, and because much of it was automatic there would be a ton of the UI variables in it. You want a different color on the background of a particular text box? No problem. You want to change the font used in the app? No problem. You need to customize the contents of a menu? That might be possible! Every application used the same file, so you only had to figure it out once.

Granted, you would leave 99% of the variables alone, but it was nice to have the option if you needed it.

Interestingly enough, I still have the file on my box, although I doubt it gets used much anymore. Looking at the file, I forgot how you could do a sort of primitive theming by specifying broad wildcards like "* MenuButton.Background" and "* Toggle.Font"[1]. That's probably a bit dangerous, but I can't remember it ever exploding on me spectacularly. Probably because Athena widgets were so damn minimal to begin with.

Oh, man it even still has:

   Netscape4*blinkingEnabled: False
   Netscape4*myshopping.isEnabled: False
[1] The spaces aren't there in the file, they're just necessary to work around the way HN's markdown does italics.
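The application side of this was equally uniform, which is why one file could theme everything. A sketch of the Xlib lookup (the program and resource names here are hypothetical):

    /* Sketch: how an app reads an X resource. XGetDefault consults the
       same resource database (including wildcard entries) for any
       program/option pair, so every app got theming "for free". */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        /* Matches e.g. "myapp.background" or a wildcard like "*background" */
        char *bg = XGetDefault(dpy, "myapp", "background");
        printf("background resource: %s\n", bg ? bg : "(not set)");

        XCloseDisplay(dpy);
        return 0;
    }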


Excerpted from Eric S. Raymond's The Unix Hater’s Handbook, Reconsidered (2008)[http://esr.ibiblio.org/?p=538]: This chapter begins unfortunately, with complaints about X’s performance and memory usage that seem rather quaint when comparing it to the applications of 14 years later. It continues with a fling at the sparseness of X applications circa 1990 which is unintentionally funny when read from within evince on a Linux desktop also hosting the Emacs instance I’m writing in, a toolbar with about a dozen applets on it, and a Web browser.

I judge that the authors’ rejection of mechanism/policy separation as a guiding principle of X was foundationally mistaken. I argued in The Art of Unix Programming that this principle gives X an ability to adapt to new technologies and new thinking about UI that no competitor has ever matched. I still think that’s true.

But not all the feces flung in this chapter is misdirected; Motif really did suck pretty badly, it’s a good thing it’s dead. ICCCM is about as horrible as the authors describe, but that’s hard to notice these days because modern toolkits and window managers do a pretty good job of hiding the ugliness from applications.

Though it’s not explicitly credited, I’m fairly sure most of this chapter was written by Don Hopkins. Don is a wizard hacker and a good man who got caught on the wrong side of history, investing a lot of effort in Sun’s NeWS just before it got steamrollered by X, and this chapter is best read as the same bitter lament for NeWS I heard from him face to face in the late 1980s.

Don may have been right, architecturally speaking. But X did not win by accident; it clobbered NeWS essentially because it was open source while NeWS was not. In the 20 years after 1987 that meant enough people put in enough work that X got un-broken, notably when Keith Packard came back after 2001 and completely rewrote the rendering core. The nasty resources system is pretty much bypassed by modern toolkits. X-extension hell and the device portability problems the authors were so aggrieved by turned out to be a temporary phenomenon while people were still working on understanding the 2D-graphics problem space.

That having been said, Olin Shivers’s rant about xauth is still pretty funny and I’m glad I haven’t had to use it in years.


Uh, as a frontend developer, I am kind of _shocked_ I've never read this before. There's a lot of fascinating parallels to modern web development. zitterbewegung noted the similarity to complaints about Electron apps' memory consumption, for example, but:

> The right graphical client/server model is to have an extensible server. Application programs on remote machines can download their own special extension on demand and share libraries in the server. Downloaded code can draw windows, track input events, provide fast interactive feedback, and minimize network traffic by communicating with the application using a dynamic, high-level protocol.

Certainly sounds a heck of a lot like how web applications work (even though we're currently terrible at sharing libraries, heh).

> X gave programmers a way to display windows and pixels, but it didn't speak to buttons, menus, scroll bars, or any of the other necessary elements of a graphical user interface. Programmers invented their own. Soon the Unix community had six or so different interface standards.

Now _that's_ certainly familiar. Sure, the DOM is a heck of a lot closer to a platform for displaying complex UIs than X was, but it still falls so far short of what developers need that a plethora of frameworks, UI libraries, etc. have appeared and fragmented the community. You could also stretch a bit and say things like Google's work on web components are an attempt at a Motif-like standardization around one questionable standard, but I don't know if I'm quite cynical enough to make that jump.

> Even if you can get an X program to compile, there's no guarantee it'll work with your server. If an application requires an X extension that your server doesn't provide, then it fails. X applications can't extend the server themselves -- the extension has to be compiled and linked into the server.

While a lot of new browser features are polyfillable, a lot of the more advanced ones (e.g. service workers) are not, and users and developers are at the mercy of their browsers, much as users would be with their X servers.

> Myth: X is "Device Independent"

The quirks discussed in this section apply to responsive web apps too. There's actually quite a bit of nuance in making fancy canvas, WebGL, or CSS transforms that look good on retina screens, etc.

I'm sure none of these comparisons truly map 1:1 to X development (having never done it myself), but damn if it doesn't remind me how cyclical software development has been over the past few decades. Not that that's a bad thing, just that some things are very, very hard :)


>Certainly sounds a heck of a lot like how web applications work (even though we're currently terrible at sharing libraries, heh).

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

+ used PostScript code instead of JavaScript for programming.

+ used PostScript graphics instead of DHTML and CSS for rendering.

+ used PostScript data instead of XML and JSON for data representation.

https://en.wikipedia.org/wiki/NeWS


In an alternate universe in which NeWS won out, someone developed a standard viewer for networked documents which provided standard routines for layout and document linking. Eventually, as more people got viewers, many applications began to be distributed directly as client-side interactive PostScript documents, eschewing the NeWS network protocol altogether in favor of PSON, a text-based protocol which had the advantage of working correctly through corporate firewalls. Someone developed a server-side runtime engine called Node.ps, and many people jumped on the bandwagon, claiming that it made sharing code between the client and server much easier. As PostScript development became more complex, preprocessor tools began proliferating, including a strongly-typed version of PostScript, known as TypeScript. Due to PostScript's lack of a good standard library, a company named "NPM" started a package repository, which was soon filled with tiny libraries for each PostScript procedure, eventually leading to the "string-length debacle" when an upset developer unpublished a five-line package...


Arthur van Hoff wrote an object oriented C to PostScript compiler called PdB, which is kind of like TypeScript!

https://compilers.iecc.com/comparch/article/93-01-152


+1. Would read again

Hahaha


Dynamic library loading (at runtime: hey, the client needs the X library) was also a thing back in the xBase databases of times bygone.


So this means we really need Wayland. I hope somebody puts an EOL on X like Flash got. Only then will all those projects move to Wayland. Even Python 2 to 3 took well over a decade, and we still haven't reached EOL.


I still think Wayland is a titanic mistake and that X could have been extended indefinitely instead, without a flag day, without weird breakages, and without any loss of core features.


With a load of backwards compatibility for devices that don't make sense for normal usage, and patches for patches that were written to fix bugs in code that was written to extend X. Better to start from scratch IMO.


I don't see the backward compatibility burden being very high. The core protocol just keeps working.


Microsoft spends millions on backward compatibility, and they charge a fortune for it. This is an open source project without as many resources, and the core protocol of X was never made for GUIs. It was made for displaying a cute clock on the screen when there was no concept of a GUI. They extended it to do something it was never meant to do.

There's a YouTube talk about it: https://www.youtube.com/watch?v=RIctzAQOe44


I dunno... modern X (XCB) doesn't seem so different to Wayland.

Both are asynchronous RPC protocols with bindings generated from a bunch of XML specs. Both use shared memory aggressively. Both share a lot of concepts. X just has more back-compat to deal with.
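A sketch of that asynchronous style in XCB, assuming libxcb is installed (build with "cc q.c -lxcb"): the request goes out immediately, and you only block for the reply when you need it.

    /* Sketch of XCB's asynchronous RPC style: fire off a request,
       pipeline more work, then wait for the reply only when needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xcb/xcb.h>

    int main(void)
    {
        xcb_connection_t *c = xcb_connect(NULL, NULL);
        if (xcb_connection_has_error(c)) return 1;

        /* The request is sent now; we get back a cookie, not an answer. */
        xcb_intern_atom_cookie_t cookie =
            xcb_intern_atom(c, 0, 12, "WM_PROTOCOLS");

        /* ... other requests could be pipelined here ... */

        /* Only here do we pay for the round trip. */
        xcb_intern_atom_reply_t *reply =
            xcb_intern_atom_reply(c, cookie, NULL);
        if (reply) {
            printf("WM_PROTOCOLS atom: %u\n", reply->atom);
            free(reply);
        }
        xcb_disconnect(c);
        return 0;
    }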


People over-emphasize a lot of the problems X has. A lot of the real original problems like high latency were evolved away. And while backwards compatibility can be annoying, that's just what you get for trying to do serious software engineering. So I agree with you that the differences between X and Wayland are greatly exaggerated.

However, there is one significant architectural difference between X and Wayland that is quite difficult to evolve away. In modern X, there are actually three processes involved: the client (app), the X server, and the compositor. In Wayland, there are only two: the client (app) and the compositor.

The X design is more fragile (state managed in 3 places instead of 2) hence ICCCM, and has more latency and overhead. However, it made a lot of sense back when the X server talked to the hardware directly and therefore needed both an integrated driver and special privileges. It'd be silly to implement drivers as part of compositors, so the separation of mechanism and policy was the right way to go back then.

Since then, however, the mechanism has basically entirely moved into the kernel (DRM/KMS) and Mesa (OpenGL). This happened in bits and pieces first with the evolution of the DRI protocol and then the jump to kernel mode setting.

That is the evolutionary development which led to the move to Wayland making sense.

I suppose this 3-to-2-process transition could have been evolved by refactoring the core of the X server source (the whole protocol handling etc.) into a library that compositors then simply link against. But that would have been a truly herculean task with not very many natural intermediate steps. Implementing something like Wayland from scratch on the modern Linux graphics stack is actually much less work -- except perhaps for transitioning all the toolkits, but then again, the X server source lends itself fairly well to writing various "X-on-something-else" servers, since people have been doing that for a while, so there's a natural if slightly awkward solution for backwards compatibility.

So hacking up an initial prototype for Wayland could be done very quickly, but note that actual adoption still took a long time. But the point is that the Wayland path had more presentable intermediate steps (unlike the X server refactor), so that's the path that software evolution took.

(Man, this got a lot longer than originally intended...)


> In modern X, there are actually three processes involved: the client (app), the X server, and the compositor.

Is the compositor not optional?


Even if you don't have a compositor you still have at least a window manager, which has much the same issues (high latency, ICCCM).


It should be, but I fear it is more and more taken for granted...


Well, they were pretty much written by the same people. Not that I have much love for XCB (as a project), as it has led to some recompiling on my end...


I liked X Windows very much. Still do. What other system allows you to use your local window manager to control windows from multiple different machines?


RDP does.


It isn't the same. With RDP, you are sharing a whole desktop. With the X protocol, you can share WINDOWS! You don't need to run the whole window manager/desktop on the remote machine. You can simply launch the X11 application and allow the local client to manage the windows. It's far more efficient and far more responsive (if it is compressed and cached like NX does).

I remember connecting to my desktop from a computer at the university using the NX protocol and running remote apps as if they were local. And I didn't have a superb internet connection.


Quick note, RDP has extensions to forward just the app. It has been able to do that for a while.


My first UNIX was Xenix, and I have used quite a few variants since then, including doing X programming; I know how X works.

It is called RemoteApp on Terminal Services and exists since 2008.

https://technet.microsoft.com/en-us/library/cc753844(WS.10)....


I remember when I first did that with a Unix machine; immediately my perspective on desktop computing changed. I thought it was witchcraft.


One of the amazing things about X was how easy it was to snoop on others. There was one prof at my university who appeared to be working long hours until a grad student came up with the idea of dumping out the prof's frame buffer - and it turned out he was playing Tetris! Default settings were very permissive...


Dumping the frame buffer has nothing to do with X; you can’t blame X for bad permissions on the underlying device node in Unix. X didn’t create the device node, and is therefore not responsible for the permissions on it.


Maybe the GP means the DISPLAY? People couldn't be bothered using xauth and even just had xhost + (allow anyone).

When we got internet connectivity at Sydney University we used to just dump X-Windows from all over the world. One in particular was some displays in Sweden. Forget what was actually on them though.


It worked the other way round too. You could overwrite the screen buffer of anyone on the network. Usually with something offensive. Sun even provided a command to do it. This was hilarious in 1989.


You are right - I did mean display, not frame buffer :-) Exactly as you say, most people used to run with "xhost +" at my university.
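For the record, the whole "exploit" was about this hard. A sketch (the hostname is made up, and this only works against a display that has run xhost +):

    /* Sketch of why "xhost +" was dangerous: anyone on the network
       could open your display and read the root window contents. */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay("profs-workstation:0"); /* their display */
        if (!dpy) return 1;

        Window root = DefaultRootWindow(dpy);
        XWindowAttributes attr;
        XGetWindowAttributes(dpy, root, &attr);

        /* Grab the whole screen as a client-side image, a la xwd. */
        XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height,
                                AllPlanes, ZPixmap);
        if (img) {
            printf("grabbed %dx%d screen\n", img->width, img->height);
            XDestroyImage(img);
        }
        XCloseDisplay(dpy);
        return 0;
    }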


The engineering workstation lab at my alma mater had very permissive settings. When it got late and the students got tired, they would pop up xclock on other people's workstations. There was a lab rule against doing anything devious, and I don't recall it ever being violated. I'm sure those Suns are long gone now, and the relaxed policy with them.


Ages ago, when I read this article, I thought they missed the one major feature that X totally flubbed. IMHO X should have exported some way to do sound along with the graphics. Sound support on Unix was a nightmare for most of the 80s and 90s and even into the 2000s. Everything was different and broken in its own special way, some of it was proprietary, and very few (only Sun's half-assed network audio as the exception) worked over the network.

Most of the rest of the complaints come off as someone with an axe to grind, especially all of the hand-wringing over flipping the client and server model around. It's not like X was the first system to do that, active mode FTP does exactly the same thing. Back in the days before firewalls and NAT this was a legitimate design decision, they just ended up on the wrong side of history.

Besides, from a network architecture standpoint it makes sense. Your local X display doesn't know that you've started an app on a remote box until something tells it. Having it make the connection out would require some local broker to inform it that it needs to make the connection.

I will agree with one point, ICCCM sucks and has been overdue for a complete rework for at least 20 years now. That said, I'm not a fan of Gnome's Ctrl-Shift-C Ctrl-Shift-V workaround either.


"Self abuse kit" is such an appropriate way to describe Motif.


A few years ago I found a book on Motif user interface programming in a used bookstore, and started leafing through it. The entire book had exactly ONE picture. ONE. And it was some architectural diagram describing X, depicting a network of terminals and servers and happy little cloud things. The rest of the book (asymptotically, all of the book) described APIs and widgets and protocols and whatnot, all without the benefit of showing what the user would be interacting with.

That someone could write a book on UI programming and someone else would publish a book on UI programming without any actual depictions of UI is pretty much all I need to know about X and its community (although I know a lot more, and my actual exposure to X goes back to early versions that came on 9-track tapes directly from MIT and supported mostly just frame buffers on DEC Vaxen).

(For years -- and this may still be true -- the software to manage the equipment in Comcast's head end datacenters was controlled by some X-based UI. Now, I ain't gonna say "Them folks surely deserve it, because they done me wrong" but that miserable train-wreck of a system goes a long way towards explaining why it's often hard for Comcast to fix their stuff. I have un-fond memories of taking down whole QAMs because I was fool enough to click some check boxes in a dialog in the wrong order...)

You can write hideous UI in just about anything, but X seems to have a special place in the ecosystem.


It's because Motif is so ugly, they were embarrassed to include a screen snapshot.

Then Tcl/Tk came along and emulated Motif's look and feel, only better, because its default color was bisque.

"Bisque is Beautiful"

http://www.ucolick.org/~de/Tcl/pictures/

"The procedure tk_bisque is provided for backward compatibility: it restores the application's colors to the light brown (“bisque”) color scheme used in Tk 3.6 and earlier versions."

https://www.tcl.tk/man/tcl/TkCmd/palette.htm

http://mars.cs.utu.fi/BioInfer/files/doc/public/Tkinter.Misc...


Are you referring to the Motif programming manual? (There are few Motif programming books, so I am assuming you are.) If so, that was a multi-volume effort with books devoted to particular subjects (i.e. the API would be one of them).


Win32 is a pleasure to program for, when compared with X/Athena/Motif.


I far prefer Motif to Gnome. It's much lighter, I think it's much nicer looking (Gnome's default theme is really really ugly IMO; Motif feels well defined and simple), and it seems to have far fewer dependencies.

I've written software that uses both, and I don't remember thinking that one was significantly easier than the other. I did dislike that they took control of the main loop; Plan 9's method of synchronous event polling is much nicer IMO.


GNOME isn't a toolkit, it's a full desktop environment. The toolkit is GTK+.

And yeah it is heavier than Motif. It does a lot more with it, though. Whether you think those things are valuable or not presumably depends on what software you use and your own sense of aesthetics.

Also when GTK+ was created, Motif was proprietary. There was Lesstif, but if I recall correctly that wasn't all that good, so there was a good incentive to develop a Free Software GUI toolkit.


I haven't used Motif in almost 20 years, and I don't think I've seen more than a handful of Motif widgets in the last 10 years, but I still can't get over how incredibly ugly and hard to use Motif was. People actually paid money for Motif!


I'm running mwm on my desktop machine now and I build emacs against Motif too.


True master. Send a screenshot please, I like Motif.


I don't have a running one on me, but here is an actual screen snapshot from 1990 or so, showing what happens when you resize XCalc again and again, as the layout rounding errors build up in the bounding boxes of the X Toolkit widgets used for the buttons.

And Motif is based on that very same code.

http://www.art.net/studios/hackers/hopkins/Don/unix-haters/x...

At one point I got frustrated and hacked the window manager (probably piewm) so I could pass it a command line parameter telling it the window ID on which to run instead of the root framebuffer, and then I ran it on the XCalc window, so I got window frames around all the buttons and could pop up menus on them, move them back to where they belonged, resize them, iconify them, etc.

Yay ICCCM! How powerful! It was totally worth ICCCM being that complicated and pushing all that complexity into every other toolkit and application, just in order to make that trick possible just once.

http://www.freshports.org/x11-wm/piewm/


Ha, I didn't even read your nick, how unsurprising :)

You know what, I love the http://www.crynwr.com/piewm/ screens a lot. I'm deep into a retro 8-bit mindset these days. Very fitting.

Alas I never had to write code for it, but I like the idea and look.


Some people like to configure it to not have any frames at all, and operate it with pie menus bound to alt/command keys (which, if you mouse ahead through them quickly, never even appear on the screen), rendering the window manager practically invisible.

Here's a version of the original X10 "uwm" window manager from which "piewm" is a distant descendent (pre-ICCCM, for X10, not X11), that I hacked to implement my original version of pie menus, and then integrated with FORTH, so you could program the window manager in FORTH (foreshadowing programming NeWS in PostScript), and even fork off light weight FORTH tasks to bounce the windows around!

http://www.donhopkins.com/home/archive/piemenu/uwm1/

http://www.donhopkins.com/home/archive/piemenu/uwm1/fuwm-mai...

Here's the bouncy window code, including some reverse polish notation 68k assembler code for 256/ and 256*:

http://www.donhopkins.com/home/archive/piemenu/uwm1/hacks.f


Oh, of course, a pre-NeWS PostScript-capable WM... was it the first?

funny http://www.donhopkins.com/home/archive/piemenu/uwm1/call-ema... ;)

PS: I couldn't locate the actual exec_string that interprets Forth.


Emacs qualifies as a window manager in my book! ;)

That code you linked to is from Mitch Bradley's "Forthmacs", which ran on Sun workstations including 68k, i86, and SPARC, and also the Atari ST, Mac, and other systems. He developed it into the "Open Boot ROM" architecture, which was used in Sun workstations and Apple PowerPC Macs as well as the OLPC children's laptop.

https://github.com/ForthHub/ForthFreak/blob/master/Forthmacs

https://en.wikipedia.org/wiki/Open_Firmware

http://wiki.laptop.org/go/Open_Firmware

On SunOS, Forthmacs had a library clink.f with the ability to dynamically relocate and link Unix libraries so that you could call them from Forthmacs, pass arguments on the stack, etc. SunOS didn't actually support shared libraries or dynamic address relocation at that time, so Forthmacs simply ran the Unix linker utility to create a file with the library relocated to the desired address space in the FORTH dictionary, then read that file into memory, defined its symbols in the FORTH dictionary, and let you access its variables and call its functions directly from FORTH!

That's how Mitch originally integrated MicroEmacs with Forth to make Forthmacs, and how I later integrated "uwm" into FORTH: I refactored uwm so instead of having an event loop in the main function, it was a library that could be called by FORTH, which would link the library in and run the main loop itself, calling into the library as needed to initialize and handle specific events (_uwm_init, _uwm_poop).

http://www.donhopkins.com/home/archive/piemenu/uwm1/fuwm-mai...

Here's the glue that links in the uwm library from fuwm.out:

http://www.donhopkins.com/home/archive/piemenu/uwm1/load-fuw...

    .( Loading...) cr
    requires tasking.f
    requires uwm.f
    requires clink.f
    .( Linking...) cr
    "" fuwm.out clink
    .( Linked!) cr


My head is spinning. Amazing piece of history, still.


In the game Big Rigs: Over the Road Racing, there is no threshold to how fast you can go backwards. Keep accelerating backwards and you will drive straight off the map. Continue to accelerate and you will be at world coordinates where the details of your truck's model are far smaller than the distance between two adjacent floating-point numbers (which grows with increasing magnitude). This causes your truck to be rendered with bizarre geometry: the details subtly wiggle and change size slightly at first, then warp all over the place and become completely unrecognizable.

Your xcalc screen shot made me think of that and smile a bit.


Oh, that's what happens with early rigid body simulation: it's float-unstable and can create absurd new coordinates, leading to spikes.


Different strokes for different folks, I guess. Personally I'd rather use twm. I like the flat-looking widgets better.


OTOH every time I use CSS to try to lay something out in a manner that can handle different window sizes, I think nostalgically of the Motif Form widget.


I love Unix, but I have to admit there is some truth to this article.


You can still use X to run a local GUI hooked up to a remote machine, and 15 years ago that was pure voodoo magic.


Even today that's not exactly a common feature. Usually you need to start a full desktop (RDP, VNC, etc.) to do anything similar.

I still use X display forwarding regularly. Often for something like sending a Firefox window back so I can look at a page on a test network without needing a direct route to it.


RDP has supported running single apps remotely for a decade. Windows Server 2008 had this feature.


That still felt like magic every time I did at university. Popping up a local UI on my Windows machine for something running on the computer lab's Linux machines was good stuff.


PEX and GLX were really quite cool, and only recently matched by WebGL. Native 3D apps really could be opened remotely, then download display lists to the local store, and use local 3D rendering hardware.

There just wasn't very much PEX interoperability between different manufacturers, because of all the different extensions. Only a rare vanilla app could be remoted in a heterogeneous environment.

Some vendors, like E&S, had PEX as the native graphics API. You couldn't write a purely local app: anything could be opened remotely, including stereo displays (CrystalEyes) and the use of dialbox or spaceball input devices.


RDP supports networked DirectX, with the GPGPU on the server or client side.

And DirectX offers so much more than downgraded OpenGL ES 3.0 (aka WebGL 2.0).


I am not sure whether it's the whole idea that is considered bad, or the way it has been implemented. In general I think X is very useful.


Why is configuring displays so damn difficult? It was never clear to me why there needed to be so many tools (xrandr, xorg, X, startx, xinit, Xinerama) to just get a basic display working at the max resolution when other operating systems manage to have some sane plug-and-play behavior.

All of this seems to render Linux pretty useless for hot-pluggable displays. I'm sure Ubuntu has some sort of solution (I never use the desktop version); why can't this be integrated at the level of the X server (or hell, the graphics driver) itself? Is X too firmly baked to adjust to the needs of its users? Will Wayland address this?
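For what it's worth, the resolution/multi-monitor part of that is one protocol extension, RandR, which xrandr and the desktop display dialogs are frontends for. A sketch, assuming libXrandr, of enumerating connected outputs the way those tools do (build with "cc q.c -lX11 -lXrandr"):

    /* Sketch: enumerate connected outputs via the RandR extension, the
       single mechanism sitting under xrandr and the GUI display tools. */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        Window root = DefaultRootWindow(dpy);
        XRRScreenResources *res = XRRGetScreenResources(dpy, root);

        for (int i = 0; i < res->noutput; i++) {
            XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            printf("%s: %s\n", out->name,
                   out->connection == RR_Connected ? "connected"
                                                   : "disconnected");
            XRRFreeOutputInfo(out);
        }
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }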


Reading this reminds me of people complaining about Electron for desktop apps.


Dammit... Now I want to install CDE again...


To me, this is always what made Linux fun.


[flagged]


We detached this off-topic flagged subthread from https://news.ycombinator.com/item?id=15038254.


I think you are being downvoted because none of that matters. It sounds like Eric Raymond might be a horrible person (I really don't know) but it doesn't matter for the sake of technical discussion. Do his technical opinions and technical ideas work and further the state of the art?

If Jeffrey Dahmer wrote a sorting algorithm that is 10% faster than the fastest sort algorithm, it bears no relation to his being a murderer. The sorting algorithm stands on its own merits or doesn't.

Just discard these people's moral and political stances while keeping any good technical ideas they have.


Since the comment mentions a personal interaction, I think it's fair for Don to set the record straight, although his comment was more than a bit long. "To clarify, Raymond is not my friend" would have sufficed.


A more realistic example would be the Reiser File System.

https://en.wikipedia.org/wiki/ReiserFS

Note the right-most column:

https://en.wikipedia.org/w/index.php?title=Comparison_of_fil...

I don't value his technical opinions either. But modulo his bullshit about mechanism versus policy (1), I generally agree with most of ESR's points in his review of the X11 chapter, and I think Keith Packard and also Jim Gettys deserve credit and have done a wonderful job of rewriting the rendering core. But what they've done is a layer above, independent of, and bypassing X (Cairo, Pango, etc., all of which ended up in Firefox, canvas, etc.), so it's not a matter of X getting un-broken or cleaning up its act; it's a matter of people packing up their marbles and moving upwards and onwards, to browser and beyond!

(1) Will somebody please explain how XRotateBuffers is "mechanism, not policy"?

https://linux.die.net/man/3/xrotatebuffers


I considered bringing up Hans Reiser and didn't, because it conflates what we're already discussing with another issue: support.

Hans Reiser and his tiny company were the primary point of contact for support on ReiserFS. With him in prison, no support was left for it and no new updates were coming. This doesn't invalidate the academic legitimacy of his work on "Dancing Trees" and other technical topics, and it definitely didn't prevent other file systems from using algorithm advances he came up with.

Now stop mixing stuff that ought not to be mixed. Yes, murderers, racists, and xenophobes are horrible people, but it doesn't make their every last statement wrong.


> Will somebody please explain how XRotateBuffers is "mechanism, not policy"?

XRotateBuffers is implemented using the RotateProperties protocol request, which given a list of property names, will rotate their contents by a given number of places as if the list were circular. This same operation could be done using a sequence of other protocol requests like GetProperty and ChangeProperty. The problem with doing that, though, is that another client attempting to perform the same operation at the same time could have its requests interleaved arbitrarily, leading to corruption or other errors. Each request is performed atomically, so a single request RotateProperties needed to be added to the protocol in order for the rotation operation to be performed safely.

[1] https://www.x.org/releases/X11R7.6/doc/xproto/x11protocol.tx... ; search for "RotateProperties"
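From the Xlib side, the cut buffers are just eight properties (CUT_BUFFER0..7) on the root window, and the rotation is one atomic request. A sketch (note the protocol requires all eight properties to exist before rotating, hence the init loop):

    /* Sketch: the classic cut-buffer dance. XRotateBuffers issues one
       atomic RotateProperties request, so concurrent clients cannot
       interleave requests and corrupt the ring of eight buffers. */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        /* Ensure all 8 cut buffers exist (rotation errors otherwise). */
        for (int i = 0; i < 8; i++)
            XStoreBuffer(dpy, "", 0, i);

        XStoreBytes(dpy, "hello", 5);   /* put "hello" in CUT_BUFFER0   */
        XRotateBuffers(dpy, 1);         /* atomically shift buffers 0-7 */

        int len = 0;
        char *buf = XFetchBuffer(dpy, &len, 1);  /* old buffer 0 is now 1 */
        printf("buffer 1: %.*s\n", len, buf ? buf : "");
        if (buf) XFree(buf);

        XCloseDisplay(dpy);
        return 0;
    }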


This seems kind of like responding to a post about a specific bit of Orson Scott Card's writing advice with a blistering salvo about Card's homophobia and Islamophobia: they're real things deserving of criticism/debate, but they don't actually say much about his writing advice. An asshole with good writing advice is still an asshole, but it's also still good writing advice. ESR is in many respects a fruitcake,† but that doesn't, in and of itself, make his observations about the Unix-Haters' Handbook unworthy of bringing up in a discussion about the Unix-Haters' Handbook.

†Whether this crack is unfair to ESR or unfair to fruitcakes is left as an exercise for the reader.


It's called X Window, man. Singular, not plural.


From the bottom of the page:

To annoy X fanatics, Don specifically asked that we include the hyphen after the letter "X,", as well as the plural of the word "Windows," in his chapter title.


Actually it's called "X Window System" or simply "X".



