Let's write a video game from scratch like it's 1987 (gaultier.github.io)
321 points by LorenDB 5 months ago | 152 comments



> The result is a ~300 KiB statically linked executable, that requires no libraries, and uses a constant ~1 MiB of resident heap memory (allocated at the start, to hold the assets). That’s roughly a thousand times smaller in size than Microsoft’s. And it only is a few hundred lines of code.

And even with this impressive reduction in resource usage, it's actually huge for 1987! A PC of that age probably had 1 or 2 MB of RAM. The Super NES from 1990 only had 128 KB of RAM. Super Mario World is only 512 KB.

A PlayStation from 1994 had only 2MB of system RAM. And you had games like Metal Gear Solid or Silent Hill.


A PC in 1987 was more likely to have at most 640 KB of RAM; the "AT compatibles" (286 or better) were still expensive. We had an XT clone (by the company that later rebranded as Acer) bought in 1987 with 512 KB of RAM.


Yes. I wrote a version of Minesweeper for the Amstrad CPC, a home computer popular in 1987 (though I wrote it a few years later). I think it was about 5-10 KB in size, not 300. The CPC only had 64 KB of memory anyway, though a 128 KB model was available.


7yo me could not understand how people could possibly make software but I knew I wanted to be part of it. I loved my CPC 6128.


Even the Windows 95 Minesweeper was only a 24 kilobyte program.


As long as you did not count the large libraries it was calling into.


Probably a little later but I had an Amstrad 8086 as a teen. I think it was the first computer I bought with my own money.


16 KB of the 64 KB was reserved for the screen buffer, if I remember correctly.


In 1987, I think you'd be very lucky to have that much RAM. 4MB and higher only started becoming standard as people ran Windows more - so Win 3.1 and beyond, and that was only released in 1992.


4 MB was considered a large amount of memory until the release of Windows 95. There were people who had that much, but it tended to be the domain of the workplace or people who ran higher end applications.

If I recall correctly, home computers tended to ship with between 4 MB and 8 MB of RAM just before the release of Windows 95. There were also plenty of people scrambling to upgrade their old PCs to meet the requirements of the new operating system, which was a minimum of 4 MB RAM.


It was over $100/MB for RAM in 1987. The price was declining until about 1990, then froze at about $40/MB for many years due to cartel-like behavior, then plummeted when competition increased around 1995. I was there when the price of RAM dropped 90% in a single year.


Like others have said, that would only be available on what would be a very costly machine for '87.

I distinctly remember the 386sx-16 I got late 1989 came with 1 megabyte and a 40mb hard drive for just under $4k from Price Club (now Costco), which was an unusually good price for something like that at the time.


By comparison, the original from https://minesweepergame.com/download/windows-31-minesweeper.... is 28 KB. Might be interesting to disassemble it; surely somebody's done that?


A lot of the work being done here by the program code was done in dynamically linked libraries in the original game.


A PC in 1987 didn't run X11 either though.

You needed something way more expensive to run X11 before 1990.


Yes and no.

Since we are talking about software written today, not just software available in 1987, X386 (which came out with X11R5 in 1991) was more than capable of running on a 386-class machine from 1987. Granted, a 386 class machine with 1MB of ram and a hard-disk would have been pushing $10k in 1987 (~$27k in 2024 dollars), so it wasn't a cheap machine.



I wonder how big the binary would be on a 1987 Amiga 500, then.


Also, the PlayStation was notorious in game development for being the first games console with a C SDK; until then it was Assembly only.

When arcade games started to be written in C, development was still done on mainframes and micros, with downlink cables to the development boards.


I have an actual commercial game from 1987 :) https://github.com/TheJare/stardust-48k


Just watched a playthrough at https://youtu.be/i5QV-J3JlAY and I can definitely say that the graphics in this game would have blown 9-year-old me away in 1987! Really impressive.


Seriously great use of monochrome bitmap graphics. These are my favorite old computer games.

I really like frame-locked games, often using a sprite system. The Atari VCS comes to mind, as do many other systems that have sprites and some means to sync with the display.

But the feel of those is crisp, sharp edged, fast, sometimes too perfect if you can maybe guess what I mean.

Bitmap games are different! The drawing, masking, and every other thing happens on the CPU. It feels a bit wild, and the feel varies. Number of enemies, missiles, etc... do often change the speed a little. And that's nice, kind of wild! The game feels alive in a way. Not living but not stale either.

The author packed a lot of shooter into a 256 x (whatever) screen. And the art is really good! Takes a bit of talent to make multiple baddies stand out amidst quite a bit of background detail.

And the sounds! Little bits of this and that packed into the game loops. That's a feel I like too. Often sounds are just a click or beep. But sometimes there is a little more, and this game does that.

Great work. Just my kind of game.


Thank you for your kind words! I drew most of the environment graphics and designed/encoded the maps in pen+paper as you can see. That style of graphics fascinated me since I saw Uridium on the C64 and then the Spectrum port. Julio Martin (a real artist) drew the ships and sprites and frame, the actually artistic stuff.

Fun bit: the explosions and possibly other sounds were just us pushing graphics bytes to the audio port (a 1-bit piece of metal which you had to flip in software to make it vibrate) as we rendered them. A poor man's source of noise.


Lol, I feel that!

On my 2600 game Ooze! I would drop item Y values into the audio registers as a poor man's way of making some cool sounds of enemy advancing.

Same for the player missile. As it travels, some variation of its Y value becomes the sound.

Tossing bytes at ports does ensure the user associates the noise with the right on-screen action.

Anyhow, great art! You did a fine job making things stand out and that is not easy.


https://en.wikipedia.org/wiki/Elite_(video_game)

There's always Elite, a whole space sim game in 1984


The disassembly is great, but I also love the hand-drawn maps. I wish modern games came with thick paper manuals detailing lore, mechanics, development notes and other goodies.


Wow, that's pretty cool. The game looks great!

Was it developed on an actual ZX Spectrum? I think I read that the development environment for the Spectrum port of R-Type was running on a 80286 PC, curious if this was common back in the day.


Yes, developed on a 100% standard Spectrum 48K with the rubber keyboard :) We had a Timex 3" double disc drive, unlike our previous (not commercial) game which we developed using regular cassette tapes.

We used HiSoft's GENS assembler. My brother reverse-engineered the microdrive code in GENS and replaced it with functions to access the Timex drive. That was a huge timesaver for us and other Spanish developers at the time. The source code and the GENS program had to fit in the 48K.

For our next Spectrum game, we used a hardware system to connect an Atari ST and develop on it. It was certainly faster and more comfortable, but the system was buggy as hell and crashed/corrupted the source almost daily.


It looks stunning for the time and system


Actual? Is this not reverse engineered?


We lost the source code in any usable form. Even if we dug up the printed copy, scanning it would be tedious as hell. :) So we ended up reverse engineering our own game, fun times.

But yeah, "actual" in that it's a real commercial game (30k copies or so) developed and released in 1987. If memory serves right, we started in late autumn'86 and it was published around November'87.


This is awesome. I listen to a video game history podcast with the founder of the video game history foundation, https://gamehistory.org/ , and the one thing he constantly brings up is to send him any and every person in the game industry with fun stories, weird bits and bobs of prototypes, and anything in between. If you've got the time I know that dude would love to pick your brain!


I didn't know about that site! I've queued up a few of the podcast episodes already. Thank you for the reference!


I think the "actual" in this case points to "commercial game".


From the README.md: "Also thank to Roberto Carlos Fernandez, who many years ago made me promise I'd give him a printed copy of the sources. That copy is buried in some box in storage somewhere, and we'd rather do this than go search for it."


To adapt the old saying, 3 months of reverse engineering can save you 3 hours in the storage room.


> X11 is old and crufty, but also gets out of the way. Once a few utility functions to open the window, receive events, etc have been implemented, it can be forgotten and we can focus all our attention on the game. That’s very valuable. How many libraries, frameworks and development environments can say the same?

This is my thought as well. You can even avoid some of the grotty details of this article if you use Xlib as your interface instead of going in raw over a socket. Basic Xlib is surprisingly nice to work with, albeit with the caveat that you're managing every single pixel on the screen. For something like a game where you're not using system widgets it is all you need.

Where people run into trouble is when they try to add the X Toolkit, which is far more opinionated and intrusive.
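To give a concrete idea (a minimal sketch, not the article's code; error handling mostly omitted and the window size arbitrary), opening a window and pumping events with plain Xlib really is only a handful of calls:

    /* Minimal Xlib sketch: open a window and pump events.
       Compile with -lX11; error handling mostly omitted. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);          /* connect to the X server */
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 320, 240, 1,
                                         BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask | ButtonPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);                   /* blocks until the next event */
            if (ev.type == Expose) {
                /* redraw here: XPutImage / XCopyArea the game's pixels */
            } else if (ev.type == KeyPress || ev.type == ButtonPress) {
                break;                              /* quit on any key or click */
            }
        }
        XCloseDisplay(dpy);
        return 0;
    }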


xlib is miserable, but most of the misery is the x11 protocol

http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast... exaggerates slightly but is basically in keeping with my experience. win32 gdi is dramatically less painful. it's true that the x toolkit is far worse than xlib

if you do want to write a game where you manage every single pixel, sdl is a pretty good choice. i also wrote my own much smaller alternative called yeso: https://gitlab.com/kragen/bubbleos/blob/master/yeso/README.m...

tetris with yeso is about five pages of c: https://gitlab.com/kragen/bubbleos/blob/master/yeso/tetris.c

the stripped linux executable of tetris is 31.5k rather than 300k. it does need 10 megs of virtual memory though, but that's just because it's linked with glibc

i should do a minesweeper, eh?


>i should do a minesweeper, eh?

Go for it. I just finished a lazy port to C and SDL. Not counting SDL and the spritesheet it's 42Kb. It's a fun weekend hack.


You might also consider using xcb, which is more of a simple wrapper around the X11 binary protocol, rather than Xlib which leakily tries to abstract it. The famous example (noted in XCB's documentation) is calling XInternAtom in a loop to intern many atoms. Xlib forces you to send request, wait for response, send request, wait for response. XCB lets you send all the requests, then wait for all the responses.
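A sketch of that pattern (a hypothetical helper, not code from XCB's docs): send every xcb_intern_atom request first, then collect the replies, so the whole batch costs one round trip instead of one per atom.

    /* Sketch: intern n atoms with XCB in a single round trip.
       xcb_intern_atom() only queues the request; the waiting happens
       in xcb_intern_atom_reply(), after everything has been sent. */
    #include <xcb/xcb.h>
    #include <stdlib.h>
    #include <string.h>

    void intern_all(xcb_connection_t *c, const char **names, xcb_atom_t *out, int n) {
        xcb_intern_atom_cookie_t *cookies = malloc(n * sizeof *cookies);

        for (int i = 0; i < n; i++)        /* send all requests first */
            cookies[i] = xcb_intern_atom(c, 0, strlen(names[i]), names[i]);

        for (int i = 0; i < n; i++) {      /* then wait for all replies */
            xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(c, cookies[i], NULL);
            out[i] = r ? r->atom : XCB_ATOM_NONE;
            free(r);
        }
        free(cookies);
    }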


Xlib is definitely crusty, but that example isn't really that convincing, as you're not going to be interning atoms all the time but ideally only during initialization - after all, the whole point of atoms is that you only pass the strings once over the protocol and then use the numeric IDs in subsequent requests.


Yes, but when you have 100 atoms and a 300ms round trip time (New Zealand to anywhere, or a satellite link in any part of the world) that's the difference between the application starting in 0.3 seconds or 30 seconds. Add a few more round trips for other stuff: 2 seconds or 32 seconds. Of course interning atoms isn't the only thing apps do on startup that is unnecessarily serialized. There could well be another 100 unnecessary round trips.

If you've ever actually tried using that configuration, you might notice that every part of every application suffers from this same problem. Almost all slowness of remote X11 used to be caused by stacking up round trip delays. Probably still is, though there's another cause now, which is transferring all the pixel data because apps treat it as a dumb pixel pipe.

This isn't a niche problem and it doesn't only affect application startup.


One of the first DOS PC programs I made was a Minesweeper clone. It was done as a special request for some friends who had machines that were not up to running Windows, but were addicted to Minesweeper from school computers. It was a little weird trying to implement a game I hadn't seen myself, but they gave me very precise descriptions (I think most of them have Math PhDs now).

I did it in Turbo Pascal with BGI graphics. I remember having problems with recursively uncovering empty tiles in large levels where mines were sparse. Recursive algorithms turned out to be rather tricky in general when the stack segment is 64k.
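The usual fix is an explicit worklist instead of recursion, so only a few bytes of call stack are used no matter how big the board is. A rough C sketch rather than the original Turbo Pascal; the board size, grid layout and helper names here are made up for illustration:

    /* Sketch: reveal empty tiles without recursion, using an explicit
       stack of coordinates so the call stack stays tiny. */
    #include <stdlib.h>

    #define W 30
    #define H 16

    extern int adjacent_mines(int x, int y);   /* assumed helper */
    extern int revealed[H][W];

    void reveal(int sx, int sy) {
        /* generous bound: each cell is pushed at most once per neighbour */
        int (*stack)[2] = malloc(8 * W * H * sizeof *stack);
        int top = 0;
        stack[top][0] = sx; stack[top][1] = sy; top++;

        while (top > 0) {
            top--;
            int x = stack[top][0], y = stack[top][1];
            if (x < 0 || x >= W || y < 0 || y >= H || revealed[y][x])
                continue;
            revealed[y][x] = 1;
            if (adjacent_mines(x, y) != 0)     /* numbered tile: stop flooding */
                continue;
            for (int dy = -1; dy <= 1; dy++)   /* push the 8 neighbours */
                for (int dx = -1; dx <= 1; dx++)
                    if (dx || dy) {
                        stack[top][0] = x + dx; stack[top][1] = y + dy; top++;
                    }
        }
        free(stack);
    }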

I added a starting disk of various diameters which let you pick a starting location without risk of explosion, which I think was appreciated.


Writing a game, or any software, in 1987 would be painstaking compared to the luxury we have today. Back then it was normal to run DOS, and DOS can only do one thing at a time. You open your editor, write code, close your editor, run the compiler, run your program, test your program, exit the program, re-launch the editor, etc. Over time small improvements were made to this flow, like DOS Shell, and even things like DESQview that allowed basic multitasking.

This is probably a better description (from a code point of view) on what you had to do as a programmer to write a game in the late 80s / early 90s:

https://cosmodoc.org/


Erm, there was a reason why Turbo Pascal (and the other Borland stuff) was such a big deal.

And that dates to 1983.


I spend all day writing C++ or Python, and like playing around with Turbo Pascal on a circa-1984 Kaypro 2 as a hobby machine - the projects are certainly smaller, but my edit-compile-run loop on my Kaypro is usually faster in practice (even running on a 4 MHz Z80 with 64KB of RAM compared to an 8-core 3+ GHz workstation with 64GB of RAM) than my 'modern' work. It's genuinely crazy to me how usable a 40 year old machine is for development work.


TP was just so awesome, it was like a superpower compared to others waiting 10-20x as long for each build.

For me the big thing was all the latencies stacked together. Slow hard drives, slow floppies, god help you if you swapped, etc. The era was mostly waiting around for the beige machine to finally do its thing.


Thank goodness for modernity. Now we wait for white, black, or unpainted aluminum machines to finally do their thing. Sometimes, we never even get to see the machines. :(


No modernity for me, thank you! Now every time I add a component to my PC, it's a different color than the rest. It looks like a zebra, I swear.

Back then it was just "new" beige and "yellow-ish old worn" beige.


Zebras come in colors? Ours are all kinda monochrome.


Also, much like some people are Excel wizards, some people were Apple II monitor/mini-assembler wizards or MS-DOS DEBUG wizards, or whatever other thing already lived natively on the machine. If someone has strong knowledge of the machine, a handful of carefully targeted little software augmentations, and well-developed muscle memory for the commands, watching them use a debugger/monitor can be almost like watching someone use a REPL.


The annoying part most people don’t realize is that when your software crashed you often had to reboot the box


One would often design tools specifically for an application. Depending on how nuts you went with that, it could be quite luxurious. Map editors are the obvious example, but if you need to find space for a disassembler you might as well make it part of the application.


It was single tasking, but usually there was no need to close the editor to launch the compiler and test the program. IIRC, QBASIC compiled and ran the program on one of the F keys; even EDIT.COM had a subshell.


Turbo Pascal (and its sibling Turbo C) also had a text-mode IDE that could open multiple files, had mouse support, syntax highlighting etc. and if you ran your program, it would run it and when it was finished you were back in the IDE. You could even set breakpoints and step through your code.


Don't forget about Turbo Basic!

It was a great language and IDE for a young self-taught programmer.


Great "old-school" article! Intrigued to try Odin sometime.

> One interesting thing: in Odin, similarly to Zig, allocators are passed to functions wishing to allocate memory. Contrary to Zig though, Odin has a mechanism to make that less tedious (and more implicit as a result) by essentially passing the allocator as the last function argument which is optional.


One of Zig's core tenets is “no hidden allocations”. You’ll never see the Zig language supporting a default value for allocator parameters to support hiding it at first glance.

Zig does have support for empowering library users to push coded settings into libraries, and you could conceivably use this for writing this type of code, although it’s probably not worthwhile.

Or you can just straight up default your library to using a specific allocator, fuck the calling code.

Anyway. Zig has patterns to do stuff like this, but it’s probably unwieldy in large projects and you’re better off just making it a parameter.

You can also look into the std ArrayList, which provides managed and unmanaged variants for yet another way you might write code that empowers users to set an allocator once, or set it every time.


The problem with "implicit allocation" is always multithreading.

An allocation library cannot serve two masters. Maximizing single thread performance is anathema to multithreaded performance.

I'd go further. If allocators don't matter, why are you using a systems programming language in the first place?


it sounds like implicitly passing the allocator as an extra parameter in every call would solve the problems you identify. if it's passed in a call-preserved register, it doesn't even make that implicit passing slower, because it requires zero instructions to not change the register before calling a subroutine

(when i've done similar things, it's been to pass an arena for inlined pointer-bumping allocation, a strategy widely used by runtimes that want to make allocation fast. normally this requires two registers, though chicken gets by with just one)


Sure, it does, sorta.

However, again, if you don't care about your allocators, why are you using a systems programming language? If you're willing to give up control over allocation, you are way, way better off in a managed memory (garbage collected or reference counted) language.

Systems programming is pain for gain. You give up a lot of convenience in order to have strict control over things--control of latency, control of memory, control of threading, etc. The price for that control is programming with a lot fewer abstractions and far less help from the language/compiler/interpreter to keep you from shooting yourself in the foot.

If you aren't using that "control" then you're just making life painful for yourself for no good reason.

Obviously, people can use whatever language they perfectly well want for whatever reason they want. Given how much I talk about and use C, Rust and Zig, people are always surprised that if they ask if they should use any of the systems languages (C/C++/Rust/Zig/etc.) my first response is always "Oh, hell, no."


passing an allocator as an implicit parameter doesn't give up any control over allocation; with that mechanism you can still pass a different allocator when you want (and i assume odin allows that, though i haven't tried it)

but i disagree pretty comprehensively with your comment. i don't agree that systems programming is pain, i don't agree with your definition of systems programming as programming with tight low-level control, i don't agree that tight low-level control is pain, i don't agree that abstraction is a necessary or useful thing to give up either for systems programming or to get tight low-level control (though it certainly is expedient), i don't agree that garbage collection (including reference counting) is the only way to simplify memory management, and i don't agree that either systems programming or tight low-level control requires accident-prone languages (and i'm especially surprised to see that assertion coming from an apparent rustacean)

that is, i recognize that these tradeoffs are possible. i just disagree that they're necessary


> that is, i recognize that these tradeoffs are possible. i just disagree that they're necessary

That's a nice theoretical position; however, the current programming languages as they exist disagree with you.

I would also point out that one of the problems with "systems programming" is that it encompasses both "can run full-blown Linux" and "slightly more CPU than a potato and has no RAM". Consequently, there are VERY different lenses looking at "systems programming".

> i don't agree that either systems programming or tight low-level control requires accident-prone languages (and i'm especially surprised to see that assertion coming from an apparent rustacean)

Rust is particularly poor when you can't define ownership at compile time. If you have something that you init once and then make read-only, you will be writing unsafe. If memory ownership passes between Rust and something else (say: memory between CPU and graphics card), you will be writing lots of unsafe. RPC via shared memory with a non-Rust process--prepare for pain.

Writing "unsafe Rust" is super difficult--moreso that straight C/C++. If you are writing enough of it, why are you in Rust?

You have to architect your solution around Rust to make the most of it and lots of things (especially stuff at runtime) are off limits. See: Cliff Biffle from Oxide and all the things he needed to do to make their RTOS completely defined at compile time because anything at runtime just gave Rust fits.


> because anything at runtime just gave Rust fits.

This is not the main reason that hubris does things up front. It does that because it makes for a significantly more robust system design. From the docs:

> We have chosen fixed system-level resource allocation rather than dynamic, because doing dynamic properly in a real-time system is hard. Yes, we are aware of work done in capability-based memory accounting, space banks, and the like.

https://hubris.oxide.computer/reference/#_pragmatism


> That's a nice theoretical position; however, the current programming languages as they exist disagree with you.

as i pointed out in another comment, c++ is a good example of having tight low-level control without programming with a lot fewer abstractions, and c++ is hardly a little-known language we can dismiss as irrelevant

so the current programming languages as they exist do not disagree with me


Not to mention we're talking about programming games here and you generally preload your assets and don't mess with them dynamically or your performance tanks. It's not only systems programming.


game engines typically do a lot of dynamic allocation, even aside from loading new levels; tight control over where that allocation happens and what to do when it fails is maybe the most important reason c++ is so popular in the space

c++ is a good example of having tight low-level control without programming with a lot fewer abstractions. indeed, the abundance of abstraction is what makes c++ usually so painful

i think it's reasonable to describe game engine programming as systems programming. i mean some of it, like writing interpreters, persistence frameworks, drivers for particular devices, and netcode, is pretty obviously right in the core of systems programming, but plausibly all of it is in that wheelhouse


> tight control over where that allocation happens and what to do when it fails is maybe the most important reason c++ is so popular in the space

It's also the most important reason why the C++ standard library is so unpopular in the space. All standard C++ containers allocate implicitly because often enough that's OK even in systems programming. What matters is that you can control the allocation more finely when you want.


yeah, agreed. when adding the stl to the standard library was being debated in the standards committee, microsoft forced the stl to use pluggable allocators, so you can easily make your std::vector allocate on your per-frame pointer-bumping heap, but often that's a poor substitute for not allocating at all


I heard some hype lately about Godot so took a look today... I'm super bummed that the wasm is 50MB minimum just to get the engine rendering a blank scene.

Seems like that could be further optimized especially for simple 2D games that don't use many features. I was impressed overall though. I hadn't looked at Godot in a long time.


If you run on web, yes. But on desktop 50mb to get a whole game engine seems pretty awesome.


What is a "whole game engine"? Ideally an exported game only includes the parts that are actually used and those should be much smaller than 50 MB for a simple example or even for most games.


The official “export templates” that you use need to be able to run any game you create with the stock engine, so they are fully featured. You have the option (and it’s recommended) to compile your own export templates, at which point you can specify which parts of the engine your specific game does and does not need (e.g. turn off the 3D, turn off the more advanced UI components, etc…)


It might be, but it’s extremely rare I see an app below 300mb these days.


also you need https and some tweaks to your web server's cors parameters to permit wasm to multithread. godot is amazing tho


I think they would love to get any insight into shrinking this!


50MB is nothing. Consider that a "small" footprint Electron app is at least 120mb.


Sure, but web apps are downloaded in their entirety when they’re opened. A 50mb website would be horrible. Could be expensive to host, too.

And, like electron, the size is almost entirely unnecessary. Even in the browser you can make quake in 13kb:

https://js13kgames.com/entries/q1k3


That isn't Quake


Just because many webapps are bloated doesn't make the size any less ridiculous.


All that means is that electron is bloated


Oh, the author has a cool example of a minimalistic Wayland GUI application:

https://gaultier.github.io/blog/wayland_from_scratch.html


Good job!!

I did a 1-1 replica of the Windows 95 version of Minesweeper.

You can find it at https://github.com/danielricci/minesweeper

I didn’t do anything fancy in terms of reducing the size of the output file, but it was fun to try and replicate that game as best as I could.


Would love to try it but, knowing nothing about Java, I don't know how to run it. Could you add instructions to your github page on how to run it?


I will say, while this is interesting and fun to see, using a trivial library like SDL adds almost nothing to the overhead and expands support to non-*nix OSes.

There is definitely something to be said of bloat but probably not in this case. You could even keep supporting Linux versions as old as this promises by using legacy 1.x SDL.


"adds almost nothing to the overhead" is not always true. In the example code in the article he kept the pixbuf in the local X Server memory for low overhead and fast performance over the network. SDL always wants to send the pixmaps over the network. This is not a big deal for Minesweeper, but can be tough for action games.


You're just outlining a major flaw in the X Server protocol, not SDL. That's a situation unique to that specific system due to its intrinsic network-oriented design, and it's specifically the issue Wayland was designed to address; Wayland doesn't have this problem.

In addition, there are solutions for cache locality of the pixbuf for SDL that you can code around, if you specifically need higher performance X11 code on Linux.


The article just described how he was able to avoid this flaw in X11 by being mildly careful in how he structured his code. Making sure to do the bitmap copy only once and then issuing copyrect() calls to the server to have it do all of the blitting locally. With SDL it generally wants to do all of the blitting on the client and then push the whole window over as a giant pixmap for every frame. At least that's what it has done when I've tried to use it.
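Roughly this pattern, in plain Xlib (a sketch, not the article's actual Odin code; the sprite-sheet XImage and its GC are assumed to exist already, and XCopyArea is the call behind the "copyrect" mentioned above):

    /* Sketch: upload the sprite sheet to a server-side Pixmap once,
       then blit tiles with XCopyArea so only tiny requests cross the
       wire each frame. */
    #include <X11/Xlib.h>

    Pixmap upload_sprites(Display *dpy, Window win, GC gc,
                          XImage *sprites, unsigned w, unsigned h, unsigned depth) {
        Pixmap pm = XCreatePixmap(dpy, win, w, h, depth);
        /* one big transfer: the pixel data now lives in the X server */
        XPutImage(dpy, pm, gc, sprites, 0, 0, 0, 0, w, h);
        return pm;
    }

    void draw_tile(Display *dpy, Pixmap sprites_pm, Window win, GC gc,
                   int src_x, int src_y, int dst_x, int dst_y, unsigned tile) {
        /* tiny request: the server copies pixels locally, no pixel data sent */
        XCopyArea(dpy, sprites_pm, win, gc, src_x, src_y, tile, tile, dst_x, dst_y);
    }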


It's debatably even a flaw. Network transparency is pretty cool. This Minesweeper probably runs faster over an internet SSH tunnel to Australia than any pixel-based remote desktop protocol.

You feel it's a flaw because you only ever run applications locally. But more constraints are a side effect of more possibilities, because you have to program the lowest common denominator of all possible scenarios, so your program works in all of them.

That's how APIs work. They all have this tradeoff.


The only flaw is that there is an inefficient way to do it and you just have to know that and choose the correct way instead.


Say, Wayland doesn't and will never support remote displays will it?


Doesn't and never will. It's all based around transferring pixel data. You can write a VNC-like wayland proxy, which has been done (it's called waypipe) but it will never be as performant as something designed for minimal network traffic. Waypipe will never be able to blit sprites locally on the server, because Wayland clients don't do that.


Being able to support efficient use over the network is a flaw now? Wayland "solves" that flaw the same way that death cures disease and suffering.


Hasn't it been the case, for quite some time now, that VNC and RDP are more efficient over the network than X11 for modern graphical apps? Client-side font rendering, antialiasing, full-color graphics, alpha-blending, etc. have as far as I know neutered all of the benefits that X11 originally intended to deliver in terms of "efficient" network use.


It has, but only because X11 apps are programmed in ways that don't work well on slow networks. They are programmed so poorly for networks that VNC works better. You can write a network-efficient one if you want to, and it will work better than VNC. Meanwhile, all Wayland apps work the VNC way, by design.


I guess I don't understand why this comes off like a bad thing. X11 has ossified badly. Web apps directly achieve the goals of network transparency in a cross-platform way. Remote desktop works better in practice with VNC and RDP and those solutions are also cross-platform. Maybe in a world without Windows and macOS, X11's architecture would have been more relevant and would have evolved more. But looking at the state of affairs today, it just looks like a half-baked solution to a problem that doesn't quite exist.


I guess I don't understand why this comes off like a bad thing. What do you mean badly? We can make X12 that will fix the few actual insane points in an incompatible way (e.g. delete the concept of colourmaps, and declare some policy for window management events). Web apps are no better than X11, which is also cross-platform. Remote desktop only works better with apps that are developed to work better with remote desktop - which is all the major frameworks, but should we settle for the status quo or strive for improvement? (if we shouldn't strive for improvement, then why Wayland?). Other than moving the WM in-process and deleting the drawing commands you need to draw stuff and requiring co-operation of five different extensions for core features that are builtin operations in X11, how is Wayland's architecture much different from X11's again?


> Web apps are no better than X11

I'm not just talking about a Perl script running on Apache like it's 1999. I'm talking about globally distributed cloud apps. It might be theoretically possible with X11 but it doesn't actually exist. Maybe we could envision a world where x11s://docs.google.com transparently opens an SSH connection, sets up X forwarding back to my local X11 server, and allows me to work with "Google Docs" the X11 client app just like how a browser tab works today. But nobody's written that, and they haven't written any of the myriad pieces of infrastructure to allow it to scale to a global user base either. That's not even getting into the security and interoperability concerns between apps running on different hosts which the browser has (not always in the best way) already addressed.

> how is Wayland's architecture much different from X11's again?

Honestly, I don't know that Wayland is really that much better. Though I think abandoning network transparency was the right move, I can't say I feel the same about any other decision (mostly due to ignorance). It seems to have stabilized enough after 15+ years that the major desktop environments and distributions have (mostly) adopted it, but honestly I've never seen any tangible benefit as an end user. I too have asked "why not X12?" but I don't know that it was any more feasible. The competitors (Windows, macOS/iOS, Android) were all able to deliver more coherent experiences (and in a lot less time) because one entity owns the whole window system stack. Wayland set out to maintain a lot of the same agnosticism X11 had about particular details; while perhaps inevitable given the goals, the end result is nowhere near as simple and cohesive as the competition.


If your opinion of a technology is based on how widely it's deployed, I bet you think the iPhone is the best thing ever and touchscreens are the right way to control cars.


That's not what I said.

X11 did network transparency wrong. Nobody in that ecosystem stepped up to do it right. The web does it better, even with its warts. Citrix is going to find its way into the same graveyard before too long. This isn't a problem Wayland can solve given the way desktop and mobile apps already work.

Delivering a working desktop experience is more important than trying to achieve some kind of purity about what seemed like the right thing to do in the 1980s. Even RISC CPUs have adopted SIMD and task-specific instructions like cryptographic acceleration. VNC and RDP are good enough. Wayland has taken so long to reach maturity, I think, because it tried to do too much more than what was actually necessary.

But most of all, the biggest strength and weakness of open source is that it allows anyone and requires someone to put in the work. If you think X12 is the right solution, then all you need to do is make it happen. For all my gripes about Wayland, it exists in large part because nobody was really trying to keep X alive and working well on modern computers.


It's a flaw for performant client-specific/standalone graphical use, yes. Literally one that the Linux community fought with multiple hacks (DRI, AIGLX, etc.) through the years.

It's not a flaw if you want to run a thin-client from a central machine or otherwise offer a networked interfacing system, no.

One of those use cases is far more common today than the other.


sdl uses shared memory in the usual case, i think


Different topic but this article got me lost down a rabbit hole looking for something similar for the TI86. Ah, memories...


My favorite TI graphing calculator story to tell was back in Algebra II class in high school, while studying polynomial expansion, I wrote a program on my TI-85 that would not only solve them, but also showed the work, so I literally only had to copy the exact output of the program and it looked exactly like I had done it by hand. I asked the teacher if using it would be cheating, and she said "If you know the material so well that you can write a program that actually shows the work, then you're going to ace the test anyway, so go ahead and use it, just don't share it with any of your friends."

The joke was on her, of course, because I didn't have any friends. :-(

Later I wrote a basic ray tracer for my TI-89. I even made it do 4x anti-aliasing by rendering the scene 4 times with the camera angle slightly moved and had a program that would rapidly move between the 4 rendered pics so that pixels that were dark for only some of the pictures would appear grey because of the screen's insanely slow response time. A basic "reflective sphere over a checkered plane" in that super low TI-89 resolution still took like 90 minutes and drained half the battery.


I was just listening to this podcast the other day: https://99percentinvisible.org/episode/empire-of-the-sum/

It tells the story of how TI got into the calculator market and the domination it achieved in the US classrooms (+ other interesting tidbits).


Asianometry has a good ~2-month-old video on TI that goes into its history as a chip maker, how it got into calculators and consumer products, and where it stands today.

https://youtu.be/Wu3FnasuE2s?si=cnOV7oPLc_MSYyyn


wonderful


Simon Tatham's Mines Windows exe stands at ~180KiB as seen at https://www.chiark.greenend.org.uk/~sgtatham/puzzles/


Can I say, when you said “video game” I was thinking arcade, DOS, Windows… or even Mac (1984). X Windows!!!… as someone said, SDL-based… and Unix curses-based.

It is legitimate, but by 1987…


Even some games from 1984 or even earlier are amazingly complex, making you wonder how they made them in such short time with limited tools and manpower.


They didn't get distracted by phones, the internet, or multitasking. When you sat in front of your computer, you could only do one thing at a time on it, and you dedicated all your attention to it. You didn't have a company chat or email window popping up notifications about irrelevant stuff, you didn't get the urge to look up random things on Wikipedia, etc. You probably also got to sit in a small room without a lot of distractions too, instead of sitting in a huge open-office area next to the sales group.


And little prior art to draw inspiration from!

The 1980-2000 era was just a raw font of imagination for video games. Since then things have become more iterative and tend to focus on maximizing profits, though thankfully there is a lot of creativity in indie studios/solo devs being empowered by the increasing ease of modern game engines, and even some bigger studios here and there, like FromSoftware.


Passion + skill > Total compensation optimization + Javascript.


The complexity and creativity of early video games from the 1980s and even earlier are truly impressive


Yes, what people managed in pure assembly language was really impressive.

Unsurprisingly though they also spent a lot of time developing a lot of time-saving tools, like macro assemblers and higher level languages like C.

The games on the Amiga and Atari ST were probably the last heyday of this kind of development.


hey, are you the guy that tried to steal freenode?


That was a "rasengan".


It's noteworthy that it's impossible, by design, to write a statically linked Wayland executable. You have to load graphics drivers, because there's no way to just send either pixel data or drawing commands to the display server, like you can in X11. You have to put the pixel data in a GPU buffer using a GPU driver, then send the GPU buffer reference.


Not true. Here's how to send pixel data: https://wayland-book.com/surfaces/shared-memory.html
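In outline, it looks like this (a sketch following that chapter; assumes `shm` and `surface` were already bound via the registry, and skips all error handling):

    /* Sketch: hand the compositor a CPU-rendered buffer via wl_shm.
       Assumes struct wl_shm *shm and struct wl_surface *surface were
       already obtained from the registry; error handling omitted. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    struct wl_buffer *make_buffer(struct wl_shm *shm, int width, int height) {
        int stride = width * 4;                /* XRGB8888: 4 bytes per pixel */
        int size = stride * height;

        int fd = memfd_create("pixels", MFD_CLOEXEC);
        ftruncate(fd, size);
        uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        for (int i = 0; i < width * height; i++)
            pixels[i] = 0xFF202020;            /* draw something: a dark grey fill */

        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *buf = wl_shm_pool_create_buffer(
            pool, 0, width, height, stride, WL_SHM_FORMAT_XRGB8888);
        wl_shm_pool_destroy(pool);             /* the buffer keeps the memory alive */
        close(fd);
        return buf;
    }

    void present(struct wl_surface *surface, struct wl_buffer *buf, int w, int h) {
        wl_surface_attach(surface, buf, 0, 0);
        wl_surface_damage(surface, 0, 0, w, h);
        wl_surface_commit(surface);
    }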


thank you!


Fun fact: the Windows 3.1 minesweeper had a cheat code! Typing:

  x y z z y S-Return
would cause the top left pixel of the screen to change color depending on whether the cursor was over a safe square or a bomb. Since you could plant flags before the timer started, it was possible to get rather unrealistic times.


That's awesome :) I used to cheat by setting the game to a custom-size board with a height of 999 and a width of 999. The board would not end up quite that large; however, clicking a single tile would reveal all other tiles of interest immediately, allowing me to mark them at my leisure.


Microsoft’s official Minesweeper app has ads, pay-to-win, and is hundreds of MBs

WTF? This is a showcase of everything wrong with the company today.


More generally, an example of what is wrong with the experience of using computers/internet today.


You sound like someone who doesn't own any MSFT stock.


That code is a bit too readable for that time period in games, hehe. See Doom:

https://github.com/id-Software/DOOM/blob/master/linuxdoom-1....


This source code looks perfectly legible to me. In fact, for C, it's the equivalent of T-Ball.

Nicely and consistently formatted, relevant comments, sane structure, along with decent fn, and var names.

What is your definition of "good, readable code?"

P.s. I've emailed the mods, but a currently dead child comment from a new account states essentially the same thing (my vouch was insufficient to rez): https://news.ycombinator.com/item?id=40742493


I must admit, despite programming games for a long time commercially, that code is not very clear to me.

If the code is perfectly legible to you, can you explain how R_DrawPlanes draws the ceiling and the floor, step by step, as a practical illustration? How long did it take? It took me maybe 5 minutes to understand how it works.

I think just about every function I read every day is easier to comprehend. And I review a lot of game engine code. I make no claims I’m a fantastic programmer, of course.


Look at the R_MapPlane function. Indentation in if statements is non-existent.


that function contains four if statements and all four of them use indentation, and the same brace indentation style is applied with perfect, machine-like precision throughout the entire file

the last two look like this

    if (fixedcolormap)
        ds_colormap = fixedcolormap;
    else
    {
        index = distance >> LIGHTZSHIFT;
 
        if (index >= MAXLIGHTZ )
            index = MAXLIGHTZ-1;

        ds_colormap = planezlight[index];
    }
that's indentation

maybe you're running some kind of browser extension that screwed up the formatting of the code in your browser. or maybe the problem is that the indentation uses 8-space tabs and your browser is rendering the tabs in some other way. (when i initially pasted the code in here, hn converted them to spaces.) what browser are you running? the code looks fine in linux firefox

the only exception to this perfect syntactic consistency in compound statements is that in two cases (lines 137 and 237) there are unnecessary braces around a single statement, turning it into block that could have been just a simple statement. i don't know if this is an artifact of the code's edit history or what


What? That's a pretty severe nitpick, especially for 30-year-old C code.

They didn't have no gofmt back then. We are spoiled today with the extreme consistency :) the skill of reading code includes reading even when it deviates from one's own preferred formatting (within reason, maliciously formatted code can be challenging, to say the least).

I must respectfully disagree about this being an issue worthy of anyone's attention, especially yours and mine.


>They didn't have no gofmt back then.

Didn't need gofmt as plenty of IDEs and editors had automatic indentation and syntax colouring already implemented.

1993 wasn't the dark ages you know.


I honestly don't see anything wrong with how it's done, can you please clarify? This is pretty much exactly the formatting I used for C always.


It could use more verbose variable names and comments to give the reader more context. So the bus factor is very low.

If this programmer leaves the team and their large codebase like this is left behind, it will likely become a calcified legacy monolith of code that is expensive to maintain.

Today, such code would not pass code review and possibly some automated pre-submit testing (Halstead’s metrics, Microsoft’s maintainability index, etc) in the AAA companies I worked for that cared about code maintainability.

It’s not specifically about formatting or syntax. It’s more about whether a different programmer can look and understand exactly what it does, and what cases it handles, and which it doesn’t, at first glance. And the other programmer can’t be Carmack-experienced or grey beards — it has to be the usual mid-level dude.

The code could even be written in a self-documenting code paradigm with parts extracted into small appropriately named functions. And it doesn’t need to be done to some perfectionist extreme, just a little bit more than what Carmack did. It just can’t be what we jokingly refer to in the industry as academic code — made for one author to understand and for every other reader to be impressed but not work with.

1993 was a different time, more code was academic, most of it was poorly documented. And there were good reasons not to have function call overheads. We even used to do loop unrolling because that saves a jump instruction and a few variable sets (you only increment the program counter as you go). So some of the reasons why this code is the way it is are good. But in readability, we have evolved a lot in the games industry.

So much so that Doom’s code is pretty hard to read for most programmers. I asked around at work, in a large AAA company, and the consensus is that it’s archaic. But you know, it’s still good code, it did what it had to do, I’m not bashing it.


It's pretty easy to post a link to a codebase and say "this is unreadable".

What do you find unreadable about it? What would you do differently now that there have been 30 years of software development between then and now?


In the games industry, we generally try to write code that can be maintained by an average programmer.

I would at least add more comments for context. Some teams have other ideas, like self-documenting paradigms with small functions and descriptive names.

If a typical programmer doesn’t at a glance understand what each line of each function does on screen, what all the variables mean, what changing any line would do, what is the usual data flowing through the function, and what are the limitations of the function, then it either can’t be maintained quickly and cheaply, or maintaining it will introduce unforeseen bugs.

But the key point is to not have a codebase that only a small core team can maintain.

If you want to see examples of the difference between this and modern in C-like code, see Unreal Engine’s source code. It will generally, at least in areas of frequent change, be much easier to read. I would expect a mid-level programmer to understand 90% of UE’s functions in 10 seconds each. And more experienced programmers usually understand most functions they’ve not seen in UE in a couple of seconds.

That’s not the case with Doom code. It took me up to 5 minutes to understand some of them. That means significantly worse readability. And I work in C++, C, C#, and other programming languages at a quite senior level in AAA games. So I don’t think this is a skill issue. It could be, of course.


In that time period the game would have been written in Assembly, Doom was still years away.


Man, that could use a healthy dose of clang-format.


How so? The formatting is quite consistent even if it is not the style you are used to.


I remember that around 1985 it was simpler to write basic games on the TI99/4A in TI Extended BASIC or Logo. The sprite support was an advantage over writing basic games on PCs without sprite support. I remember performance similar to the Atari 2600, without using Assembler.


The graphics chip of the TI99/4A (TMS9918A) was turned into a separate product line for other computer manufacturers, and became hugely influential in the 1980s. Sega used it in the SG-1000 and SC-3000, which led to enhanced clones being used in the Master System, Game Gear, and Genesis. It also became part of the MSX computer standard, which spawned another lineage from Yamaha. The overall design of the graphics hardware of the NES, Game Boy, and TurboGrafx-16 are also strikingly similar despite none of those systems being descended from anything that used the TI chips.


Great insight!

I always saw the TI99/4A as one of the "zillions" of "microcomputers" of the 70s/80s, but from your comment I now learn that it was not so simple regarding Texas Instruments, one of the leaders (if I remember correctly) of the semiconductor industry at that time. Also, the ColecoVision [2] used a variant of that chip. The ColecoVision had the best version of Donkey Kong [3] available in game consoles.

Reading about the TMS9918A[1] now. Thank you very much.

[1] https://en.wikipedia.org/wiki/TMS9918

[2] https://en.wikipedia.org/wiki/ColecoVision

[3] https://archive.org/details/donkey-kong-game-manual-coleco-v...


I did this last year, built a zero player pixellated space simulator using pygame.


I'm often curious about how people organized / conceptualized development in the 60s, 70s and 80s.


In 1987 the game would have been written in Assembly.

This is more like 1992.

Michael Abrash's book was published in 1990.


I really appreciate the effort and attention to detail that went into this.


Honestly at that time I remember getting fed up using a library for the graphics of a simulation I was doing over a weekend.

So I just threw out the graphics library and wrote directly to the screen memory. Lots of games did that


"Let’s write a video game from scratch like it’s 1987"

"We will implement this in the Odin programming language"

checks Odin homepage...

"The project started one evening in late July 2016"


> Since authentication entries can be large, we have to allocate - the stack is only so big. It would be unfortunate to stack overflow because a hostname is a tiny bit too long in this file.

What? How big are your hostnames?


> I don’t exactly remember the rules though so it’s a best approximation.

What?


1 MB of assets? Huh?


Yeah, the entirety of Chip's Challenge for Windows 3.1 was less than half a megabyte.



