"A truly portable X application is required to act like the persistent customer in Monty Python’s “Cheese Shop” sketch, or a grail seeker in “Monty Python and the Holy Grail.” Even the simplest applications must answer many difficult questions"
Read the original for the hilariously accurate dialog that follows…
This is my favorite quote from the UNIX-HATERS Handbook: "If the designers of X Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles—but you’d be able to shift gears with your car stereo. Useful feature, that."
Author here, let me know if you have any questions or comments! Given how much attention this is getting, I should probably finally finish one of the three articles I have sitting around in the backlog.
"Unlike X11, where the graphics primitives were rather low-level and all input event handling involved round-trips to the client, NeWS was advanced enough that simple widgets, such as scroll bars and sliders, could be implemented entirely server-side, only sending high-level state changes to the client, more along the lines of “slider value is now set to 15” than “mouse button 2 released”. Similarly, the client could ask the server to render a widget in a given state, rather than repeatedly transmitting sequences of graphics primitives."
Windows native controls could be remoted? The point of the comment above is that NeWS communicated events between the display server and the application at a higher semantic level than X did, and that this communication was network transparent.
Not remoted, but for native controls such as the scrollbars in the example mentioned, the semantic communication between the kernel and the application is at the event level.
That's not how the Windows GUI works; to some extent it's the other way around, with all controls being implemented in the client's user-space code. The difference is that on Windows there is one standard client toolkit that is part of the OS API. BTW, even window decorations are drawn by the client.
The kernel-side component does even somewhat less than the X server, as all drawing is done either by direct writing to the frame buffer from userspace or through a DRI-like direct rendering mechanism. The kernel-side "display server" then only handles message queues and input and tracks which screen region belongs to which window (and sets up the shared memory accordingly, in theory).
For 16-bit Windows, one might say that the userspace libraries are somehow part of the kernel (there is no user/kernel split in Win16, and one of the libraries is even called KERNEL), but in the Win32 case these libraries are in userspace.
On the other hand, the NeWS principle is that the server implements a Turing-complete virtual machine (based on PostScript) that executes arbitrary code uploaded from clients, which can then define the behavior of widgets, specify new ones, or even create windows whose complete behavior lives server-side.
> all drawing is done either by direct writing to frame buffer from userspace
Before Windows 10, though, GDI was in kernel space, so it was a bit higher level than that, no? Old-style GDI apps would send state-of-the-1980s-style graphics primitives to the server, while newer apps would just write directly to the framebuffer, right?
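For a sense of what "graphics primitives" means there, a classic GDI app paints something like this inside WM_PAINT (a rough, hand-written sketch, not from the article; the class and window names are made up):

```c
/* Minimal sketch of a classic GDI app: all drawing goes through a device
 * context with high-level-ish primitives, never a raw framebuffer. */
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);          /* clipped to the dirty region */
        Rectangle(hdc, 10, 10, 200, 120);         /* classic GDI primitives */
        TextOutA(hdc, 20, 20, "hello, GDI", 10);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
{
    WNDCLASSA wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = "GdiSketch";               /* made-up class name */
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("GdiSketch", "GDI sketch", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                              NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {   /* message queue handled by the system */
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}
```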
If you're not remoting, it's not difficult to work at a high level. The API presented to the application is easy to control. The interesting thing about NeWS was that a high level event could be defined by the application. The protocol needn't be designed to accommodate it a priori. The example mentioned sliders, not scrollbars. The display server wouldn't even know what a slider was. It would have been defined by the application.
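To make that concrete, this is roughly the level an X client works at: the server streams raw button and motion events, and the client itself turns them into "slider value is now 15" (a minimal Xlib sketch I typed up for illustration, not anything from NeWS or the article; under NeWS that translation could live in the server, with only the new value crossing the wire):

```c
/* Minimal Xlib sketch: the client receives low-level pointer events and
 * implements the "slider" logic itself. Build with -lX11. */
#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);

    /* A 200x30 window standing in for a slider track. */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 200, 30, 1,
                                     BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ButtonPressMask | ButtonReleaseMask | ButtonMotionMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);                      /* raw, low-level protocol events */
        switch (ev.type) {
        case ButtonPress:
        case MotionNotify: {
            int x = (ev.type == ButtonPress) ? ev.xbutton.x : ev.xmotion.x;
            int value = x * 100 / 200;             /* slider math lives in the client */
            printf("slider value is now %d\n", value);
            break;
        }
        case ButtonRelease:
            /* "mouse button released" is all the wire protocol says */
            break;
        }
    }
}
```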
The author appears to credit the X Window System or Microsoft Windows with repaint-on-expose. To my knowledge that was first implemented (along with regions) by Bill Atkinson for the Apple Lisa, because he misremembered what he saw on the visit to PARC; overlapped windows in the PARC systems at the time didn't automatically and efficiently repaint only the affected area.
The interactivity of this series is just amazingly helpful in visualizing what the heck is going on. Honestly, for me that makes this one of the few links on HN that I might actually read all the way through. And may even use it as a homeschool lesson for our kids, if it seems written simply enough (which at first glance it is).
For those who might want to understand why Wayland was created. Also tons of fun facts about X11. Really nice talk by Daniel Stone about the X server from 2013:
Even my manager, who worked on the original X/Motif codebase in the 80s when DEC and HP were working on it, doesn't enjoy talking about these details. Pretty good write-up, though.
Neither Vulkan nor WSI is essential for the operation of X11 or Wayland. Vulkan is "just" a new API to talk to GPUs, completely separate from windowing (just as OpenGL is), and the WSI part of Vulkan aims to be a standardized way for windowing systems to expose the low-level graphics device they sit on top of to client applications, so that implementations of APIs like Vulkan (but also OpenGL) can talk to it. WSI is a kind of generalized API for what GLX or WGL are doing. This is neither revolutionary nor new technology, just a unified, agreed-upon, window-system-neutral API for something that's quite old.
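Concretely, on X11 the WSI part boils down to taking an existing Display*/Window and wrapping it in a VkSurfaceKHR that the rest of Vulkan can present into. Something like this (a rough, unchecked sketch with no error handling):

```c
/* Minimal sketch: WSI bridges an existing X11 window to a VkSurfaceKHR.
 * Build against libX11 and the Vulkan loader. No error handling. */
#define VK_USE_PLATFORM_XLIB_KHR
#include <vulkan/vulkan.h>
#include <X11/Xlib.h>

int main(void)
{
    /* An ordinary X11 window, created the usual way. */
    Display *dpy = XOpenDisplay(NULL);
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 640, 480, 0, 0, 0);
    XMapWindow(dpy, win);

    /* A Vulkan instance with the generic surface extension plus the
     * X11-specific WSI extension. */
    const char *exts[] = { VK_KHR_SURFACE_EXTENSION_NAME,
                           VK_KHR_XLIB_SURFACE_EXTENSION_NAME };
    VkInstanceCreateInfo ici = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .enabledExtensionCount = 2,
        .ppEnabledExtensionNames = exts,
    };
    VkInstance instance;
    vkCreateInstance(&ici, NULL, &instance);

    /* The actual WSI step: wrap the window in a VkSurfaceKHR. */
    VkXlibSurfaceCreateInfoKHR sci = {
        .sType = VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR,
        .dpy = dpy,
        .window = win,
    };
    VkSurfaceKHR surface;
    vkCreateXlibSurfaceKHR(instance, &sci, NULL, &surface);

    /* From here on the windowing system is out of the picture: a swapchain
     * gets created against the surface and images are presented to it. */
    return 0;
}
```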
> While I do personally believe Wayland is going to become The Linux Display Server Of The Future®
Best I can understand, Wayland is a protocol spec rather than a body of code like X. Each compositor/WM is individually responsible for implementing said protocol and thus takes over the job of X (input handling et al.).
Meaning that there will no longer be an X server to act as an independent arbiter of behavior. The devs of whatever DE you are logged into will have the final word.
The more I learn of Wayland, the more I expect it to turn into a hairball to match X. Only now without a network option ("too insecure"), and with GPU acceleration.
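To make the "protocol spec" point concrete: a client just connects to whatever compositor happens to be running and asks which globals (interfaces) it implements; everything past that depends on that compositor's implementation. Roughly (a minimal libwayland-client sketch, no error handling):

```c
/* Minimal sketch: connect to the running Wayland compositor and list the
 * globals it advertises. Build with -lwayland-client. */
#include <stdio.h>
#include <wayland-client.h>

static void on_global(void *data, struct wl_registry *registry,
                      uint32_t name, const char *interface, uint32_t version)
{
    printf("global %u: %s (version %u)\n", name, interface, version);
}

static void on_global_remove(void *data, struct wl_registry *registry,
                             uint32_t name)
{
    /* A global went away; nothing to do in this sketch. */
}

static const struct wl_registry_listener registry_listener = {
    .global        = on_global,
    .global_remove = on_global_remove,
};

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);   /* $WAYLAND_DISPLAY */
    struct wl_registry *registry = wl_display_get_registry(display);

    wl_registry_add_listener(registry, &registry_listener, NULL);
    wl_display_roundtrip(display);   /* wait for the initial burst of globals */

    wl_display_disconnect(display);
    return 0;
}
```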
BTW, why are we so bent out of shape about this seats concept? Why oh why are we continually trying to turn a single user piece of hardware into a desktop mainframe?!
Had he asked for money, I probably would not have sent any. But he asked for something that I can provide with ease, so I did. And I enjoyed it.
Is that really what this is about? Is it wrong to do good if it might benefit the do-gooder in some way? Can we all take a collective moment to think critically about that? :)
(anyway, you're wrong(ish?). Posting an issue does not appear to have added my name to https://github.com/magcius/xplain/graphs/contributors . Though I could have sworn that I've seen what you are talking about happen before. \shrug\, that's github's fault.)
An issue tracker is a tracker for issues, not the guestbook of the project. You could've emailed the author, linked to him on social media or on your website, submitted to reddit, etc., but you chose to open an issue on his issue tracker and put that here on HN in a comment. Even if you intended differently, that's how it looks to an outsider.
On your last paragraph: yes, GitHub used to show people who reported issues as contributors to the project they reported to. I recall at one point having reported an embarrassingly silly issue (a very stupid config mistake I made that I don't recall) to Pelican blogware, and it went there among the contributed projects as a badge of dishonour. :) But now that I checked, that section seems to be gone from the profile page on GitHub.
I don't have social media, and GitHub issues are indeed acceptable for giving public feedback of any sort, though I understand not all projects are run this way. I clarified this point:
Personally I'd feel a bit uncomfortable if someone posted that kind of comment to me on Github and HN. I'd be grateful of course, but would feel pretty awkward.
The author seems more than ok with receiving these types of comments and almost seems to be fishing for them... Great writeup, but the number of times he's mentioned that it would be 'ok' to thank him or give him praise seems a bit over the top
> If you have any questions or comments, _or just want to thank me_, feel free to email me....
Excellent description of the (somewhat needlessly) complex X Window System (the Blit and Rio are so beautiful). Still makes me sad that COMPOSITE doesn't build in XQuartz for OS X. I posted on the mailing list, but got a pretty non-helpful response... [0]
This looks like a great article, but on a phone the margins are as wide as the text! And they used that wrapping mode that stretches lines out instead of truncating them, which looks great if you don't want to read the text but is super annoying when you are.
This article series wasn't designed for phones! It uses heavy interactive figures and diagrams and I actually expected it would bring most phones to a halt with all of the drawing...
I don't know much about front-end dev and made something that looked decent on my laptop. PRs are most appreciated if you want to make it presentable on mobile devices.
It seemed to work fine on my iPhone. My only complaint was the font size was a bit too small. Also reader mode didn't seem to work so I wasn't able to use that as an alternative to get a reasonable font size.
How are the interactive windows displayed in the page created? Is it a complete simulation of the X server done in JavaScript, or is it something unrelated to the X server?
Basically, I emulate what an X server would do, semantically, without implementing any of the actual wire protocol. The codebase is, I hope, pretty well commented and designed as something to learn from.
"A truly portable X application is required to act like the persistent customer in Monty Python’s “Cheese Shop” sketch, or a grail seeker in “Monty Python and the Holy Grail.” Even the simplest applications must answer many difficult questions"
Read the original for the hilariously accurate dialog that follows…