> X11 is old and crufty, but also gets out of the way. Once a few utility functions to open the window, receive events, etc have been implemented, it can be forgotten and we can focus all our attention on the game. That’s very valuable. How many libraries, frameworks and development environments can say the same?

This is my thought as well. You can even avoid some of the grotty details of this article if you use Xlib as your interface instead of going in raw over a socket. Basic Xlib is surprisingly nice to work with, albeit with the caveat that you're managing every single pixel on the screen. For something like a game, where you're not using system widgets, it's all you need.
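To give a sense of scale, the "few utility functions" the article mentions amount to roughly this much Xlib (a minimal sketch with error handling mostly omitted; compile with -lX11):

    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);      /* connect to $DISPLAY */
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 640, 480, 0,
                                         BlackPixel(dpy, screen),
                                         BlackPixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XStoreName(dpy, win, "demo");
        XMapWindow(dpy, win);

        for (;;) {                              /* the entire event loop */
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                /* redraw here, e.g. XPutImage with your own pixel buffer */
            } else if (ev.type == KeyPress) {
                break;                          /* any key quits */
            }
        }
        XCloseDisplay(dpy);
        return 0;
    }

Once that much is wrapped up, you really can forget about it and spend all your time in the Expose handler.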

Where people run into trouble is when they try to add the X Toolkit, which is far more opinionated and intrusive.




xlib is miserable, but most of the misery is the x11 protocol

http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast... exaggerates slightly but is basically in keeping with my experience. win32 gdi is dramatically less painful. it's true that the x toolkit is far worse than xlib

if you do want to write a game where you manage every single pixel, sdl is a pretty good choice. i also wrote my own much smaller alternative called yeso: https://gitlab.com/kragen/bubbleos/blob/master/yeso/README.m...
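the core of the "manage every single pixel" loop in sdl2 looks something like this (just a sketch of the usual streaming-texture pattern, not yeso's api):

    /* sketch: per-pixel framebuffer with sdl2.
       compile: cc demo.c `sdl2-config --cflags --libs` */
    #include <SDL.h>

    static Uint32 fb[320 * 240];                /* your framebuffer */

    int main(void) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("pixels",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 320, 240, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
        SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING,
                                             320, 240);
        for (int running = 1; running;) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) running = 0;

            for (int i = 0; i < 320 * 240; i++) /* draw whatever */
                fb[i] = 0xff000000 | (Uint32)i;

            SDL_UpdateTexture(tex, NULL, fb, 320 * sizeof(Uint32));
            SDL_RenderCopy(ren, tex, NULL, NULL);
            SDL_RenderPresent(ren);
        }
        SDL_Quit();
        return 0;
    }

that's the whole dumb-framebuffer pattern: you own the pixels, the library just gets them onto the screen once per frame.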

tetris with yeso is about five pages of c: https://gitlab.com/kragen/bubbleos/blob/master/yeso/tetris.c

the stripped linux executable of tetris is 31.5k rather than 300k. it does need 10 megs of virtual memory, but that's just because it's linked with glibc

i should do a minesweeper, eh?


>i should do a minesweeper, eh?

Go for it. I just finished a lazy port to C and SDL. Not counting SDL and the spritesheet, it's 42 KB. It's a fun weekend hack.


You might also consider using XCB, which is more of a thin wrapper around the X11 binary protocol, rather than Xlib, which tries (leakily) to abstract it. The famous example (noted in XCB's documentation) is calling XInternAtom in a loop to intern many atoms. Xlib forces you to send a request, wait for the response, send the next request, wait again. XCB lets you send all the requests first, then wait for all the responses.
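Concretely, the pattern looks like this (a sketch; the atom names are just examples):

    /* intern N atoms in one round trip with xcb */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xcb/xcb.h>

    int main(void) {
        const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW",
                                "_NET_WM_NAME" };
        enum { N = sizeof names / sizeof names[0] };
        xcb_intern_atom_cookie_t cookies[N];

        xcb_connection_t *conn = xcb_connect(NULL, NULL);
        if (xcb_connection_has_error(conn)) return 1;

        /* phase 1: queue every InternAtom request; nothing blocks here */
        for (int i = 0; i < N; i++)
            cookies[i] = xcb_intern_atom(conn, 0, strlen(names[i]), names[i]);

        /* phase 2: collect the replies; the requests all went out together,
           so the whole batch costs one round trip, not one per atom */
        for (int i = 0; i < N; i++) {
            xcb_intern_atom_reply_t *r =
                xcb_intern_atom_reply(conn, cookies[i], NULL);
            if (r) { printf("%s = %u\n", names[i], r->atom); free(r); }
        }
        xcb_disconnect(conn);
        return 0;
    }

With Xlib's XInternAtom, the equivalent loop would block waiting for a reply on every iteration.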


Xlib is definitely crufty, but that example isn't all that convincing: you're not going to be interning atoms all the time, ideally only during initialization. After all, the whole point of atoms is that you pass the strings over the protocol only once and then use the numeric IDs in subsequent requests.


Yes, but when you have 100 atoms and a 300 ms round-trip time (New Zealand to anywhere, or a satellite link in any part of the world), that's the difference between the application starting in 0.3 seconds and in 30 seconds: one pipelined round trip versus 100 serialized ones. Add a few more round trips for other stuff: 2 seconds versus 32 seconds. Of course, interning atoms isn't the only thing apps do on startup that is unnecessarily serialized. There could well be another 100 unnecessary round trips.

If you've ever actually tried using that configuration, you'll have noticed that every part of every application suffers from this same problem. Almost all the slowness of remote X11 used to come from stacked-up round-trip delays. It probably still does, though there's another cause now: transferring all the pixel data, because apps treat X as a dumb pixel pipe.

This isn't a niche problem and it doesn't only affect application startup.



