I will say, while this is interesting and fun to see, using a lightweight library like SDL adds almost no overhead and extends support to non-*nix OSes.

There is definitely something to be said about bloat, but probably not in this case. You could even keep supporting Linux versions as old as this project promises by using the legacy SDL 1.x API.
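To make that concrete, here is a minimal, hypothetical sketch against the legacy SDL 1.2 API (not code from the article; the window title and the "tiles.bmp" asset name are placeholders):

    /* Minimal SDL 1.2 sketch: open a window and blit one surface.
       Build: gcc demo.c -o demo $(sdl-config --cflags --libs) */
    #include <SDL/SDL.h>

    int main(void) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        SDL_Surface *screen = SDL_SetVideoMode(320, 240, 32, SDL_SWSURFACE);
        if (!screen) { SDL_Quit(); return 1; }
        SDL_WM_SetCaption("minesweeper-ish", NULL);

        /* "tiles.bmp" is a placeholder asset name. */
        SDL_Surface *tiles = SDL_LoadBMP("tiles.bmp");

        SDL_Event ev;
        int running = 1;
        while (running && SDL_WaitEvent(&ev)) {
            if (ev.type == SDL_QUIT)
                running = 0;
            if (tiles)
                SDL_BlitSurface(tiles, NULL, screen, NULL); /* client-side blit */
            SDL_Flip(screen);                               /* push the frame out */
        }

        if (tiles) SDL_FreeSurface(tiles);
        SDL_Quit();
        return 0;
    }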




"adds almost nothing to the overhead" is not always true. In the example code in the article he kept the pixbuf in the local X Server memory for low overhead and fast performance over the network. SDL always wants to send the pixmaps over the network. This is not a big deal for Minesweeper, but can be tough for action games.


You're just outlining a major flaw in the X server protocol, not in SDL. That situation is unique to that system because of its intrinsically network-oriented design, and it's precisely the issue Wayland was designed to address.

In addition, there are ways to code around where SDL keeps the pixbuf if you specifically need higher-performance X11 code on Linux.
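One such workaround (a sketch only, assuming SDL2 built with the X11 video driver; the window title is a placeholder): pull the native Display and Window out of SDL with SDL_GetWindowWMInfo and do the performance-critical blits with raw Xlib calls yourself.

    /* Sketch: grab the underlying X11 handles from an SDL2 window so the
       hot-path drawing can be done with server-side Xlib calls.
       Build: gcc mix.c -o mix $(sdl2-config --cflags --libs) -lX11 */
    #include <SDL2/SDL.h>
    #include <SDL2/SDL_syswm.h>
    #include <stdio.h>

    int main(void) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

        SDL_Window *win = SDL_CreateWindow("hybrid", SDL_WINDOWPOS_UNDEFINED,
                                           SDL_WINDOWPOS_UNDEFINED, 256, 256, 0);
        if (!win) { SDL_Quit(); return 1; }

        SDL_SysWMinfo info;
        SDL_VERSION(&info.version);
        if (SDL_GetWindowWMInfo(win, &info) && info.subsystem == SDL_SYSWM_X11) {
            Display *dpy = info.info.x11.display;  /* native X11 connection */
            Window   xid = info.info.x11.window;   /* native X11 window */
            /* From here you could XCreatePixmap/XCopyArea directly,
               bypassing SDL's own present path for the hot drawing. */
            printf("X11 window id: 0x%lx (display %p)\n", xid, (void *)dpy);
        }

        SDL_Delay(2000);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }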


The article just described how he was able to avoid this flaw in X11 by being mildly careful in how he structured his code: do the bitmap copy only once, then issue copyrect() calls so the server does all of the blitting locally. SDL generally wants to do all of the blitting on the client and then push the whole window over as a giant pixmap every frame. At least that's what it has done when I've tried to use it.
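For contrast, a rough sketch of the SDL2 pattern being described (an illustration, not anyone's actual code): everything is composed into a client-side surface and the whole window is presented each frame, which over remote X11 without shared memory means shipping the full frame's pixels.

    /* Sketch of the typical SDL2 software path: draw into a client-side
       surface, then present the entire window every frame.
       Build: gcc push.c -o push $(sdl2-config --cflags --libs) */
    #include <SDL2/SDL.h>

    int main(void) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

        SDL_Window *win = SDL_CreateWindow("full-frame push", SDL_WINDOWPOS_UNDEFINED,
                                           SDL_WINDOWPOS_UNDEFINED, 256, 256, 0);
        SDL_Surface *screen = win ? SDL_GetWindowSurface(win) : NULL; /* client-side pixels */
        if (!win || !screen) { SDL_Quit(); return 1; }

        int running = 1;
        while (running) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) running = 0;

            /* All drawing happens in client memory... */
            SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 32, 32, 32));

            /* ...and the whole window's contents get pushed out here. */
            SDL_UpdateWindowSurface(win);
            SDL_Delay(16);
        }

        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }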


It's debatable whether that's even a flaw. Network transparency is pretty cool. This Minesweeper probably runs faster over an SSH tunnel to Australia than it would over any pixel-based remote desktop protocol.

You feel it's a flaw because you only ever run applications locally. But more constraints are the side effect of more possibilities: you have to program to the lowest common denominator of all the possible scenarios so that your program works in all of them.

That's how APIs work. They all have this tradeoff.


The only flaw is that there's an inefficient way to do it, and you just have to know that and choose the efficient way instead.


Say, Wayland doesn't and never will support remote displays, will it?


Doesn't and never will. It's all based around transferring pixel data. You can write a VNC-like Wayland proxy, which has been done (it's called waypipe), but it will never be as performant as something designed for minimal network traffic. Waypipe will never be able to blit sprites locally on the server, because Wayland clients don't do that.


Being able to support efficient use over the network is a flaw now? Wayland "solves" that flaw the same way that death cures disease and suffering.


Hasn't it been the case, for quite some time now, that VNC and RDP are more efficient over the network than X11 for modern graphical apps? Client-side font rendering, antialiasing, full-color graphics, alpha blending, etc. have, as far as I know, neutered all of the benefits that X11 originally intended to deliver in terms of "efficient" network use.


It has, but only because X11 apps are programmed in ways that don't work well on slow networks. They are programmed so poorly for networks that VNC works better. You can write a network-efficient one if you want to, and it will work better than VNC. Meanwhile, all Wayland apps work the VNC way, by design.


I guess I don't understand why this comes off like a bad thing. X11 has ossified badly. Web apps directly achieve the goals of network transparency in a cross-platform way. Remote desktop works better in practice with VNC and RDP and those solutions are also cross-platform. Maybe in a world without Windows and macOS, X11's architecture would have been more relevant and would have evolved more. But looking at the state of affairs today, it just looks like a half-baked solution to a problem that doesn't quite exist.


> I guess I don't understand why this comes off like a bad thing.

What do you mean by "ossified badly"? We could make an X12 that fixes the few actually insane points in an incompatible way (e.g. delete the concept of colourmaps, and declare some policy for window management events).

Web apps are no better than X11, which is also cross-platform.

Remote desktop only works better with apps that were developed to work well with remote desktop - which today is all the major frameworks. But should we settle for the status quo or strive for improvement? (If we shouldn't strive for improvement, then why Wayland?)

And other than moving the WM in-process, deleting the drawing commands you need to draw anything, and requiring the cooperation of five different extensions for core features that are built-in operations in X11, how is Wayland's architecture much different from X11's again?


> Web apps are no better than X11

I'm not just talking about a Perl script running on Apache like it's 1999. I'm talking about globally distributed cloud apps. It might be theoretically possible with X11 but it doesn't actually exist. Maybe we could envision a world where x11s://docs.google.com transparently opens an SSH connection, sets up X forwarding back to my local X11 server, and allows me to work with "Google Docs" the X11 client app just like how a browser tab works today. But nobody's written that, and they haven't written any of the myriad pieces of infrastructure to allow it to scale to a global user base either. That's not even getting into the security and interoperability concerns between apps running on different hosts which the browser has (not always in the best way) already addressed.

> how is Wayland's architecture much different from X11's again?

Honestly, I don't know that Wayland is really that much better. Though I think abandoning network transparency was the right move, I can't say I feel the same about any other decision (mostly due to ignorance). It seems to have stabilized enough after 15+ years that the major desktop environments and distributions have (mostly) adopted it, but honestly I've never seen any tangible benefit as an end user. I too have asked "why not X12?" but I don't know that it was any more feasible. The competitors (Windows, macOS/iOS, Android) were all able to deliver more coherent experiences (and in a lot less time) because one entity owns the whole window system stack. Wayland set out to maintain a lot of the same agnosticism X11 had about particular details; while perhaps inevitable given the goals, the end result is nowhere near as simple and cohesive as the competition.


If your opinion of a technology is based on how widely it's deployed, I bet you think the iPhone is the best thing ever and touchscreens are the right way to control cars.


That's not what I said.

X11 did network transparency wrong. Nobody in that ecosystem stepped up to do it right. The web does it better, even with its warts. Citrix is going to find its way into the same graveyard before too long. This isn't a problem Wayland can solve given the way desktop and mobile apps already work.

Delivering a working desktop experience is more important than trying to achieve some kind of purity about what seemed like the right thing to do in the 1980s. Even RISC CPUs have adopted SIMD and task-specific instructions like cryptographic acceleration. VNC and RDP are good enough. Wayland has taken so long to reach maturity, I think, because it tried to do too much more than what was actually necessary.

But most of all, the biggest strength and weakness of open source is that it allows anyone and requires someone to put in the work. If you think X12 is the right solution, then all you need to do is make it happen. For all my gripes about Wayland, it exists in large part because nobody was really trying to keep X alive and working well on modern computers.


It's a flaw for performant, client-specific/standalone graphical use, yes. Literally one that the Linux community has fought with multiple hacks (DRI, AIGLX, etc.) through the years.

It's not a flaw if you want to run a thin-client from a central machine or otherwise offer a networked interfacing system, no.

One of those use cases is far more common today than the other.


SDL uses shared memory in the usual case, I think.
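For what it's worth, shared memory (MIT-SHM) only helps when the client and the X server are on the same machine; over the network the pixels still go through the socket. A small sketch (not SDL's internal code) to check whether the server even supports the extension:

    /* Sketch: ask the X server whether the MIT-SHM extension is available.
       Build: gcc shmcheck.c -o shmcheck -lX11 -lXext */
    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <stdio.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        printf("MIT-SHM available: %s\n", XShmQueryExtension(dpy) ? "yes" : "no");
        XCloseDisplay(dpy);
        return 0;
    }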



