Voxel Space: Comanche's terrain rendering in less than 20 lines of code (2020) (github.com/s-macke)
366 points by danbolt on April 1, 2021 | 71 comments



It's really nice when something sells itself on being written in less than X lines of code and it's actually nice, simple code rather than weirdly abbreviated, overly dense stuff.


It is actually X lines of code, not 10*X lines of code condensed into X lines with code golfing techniques. :)


Don't remember the details, but when I was coding on Outcast, the voxel engine there was quite sophisticated. Many optimisations were done to avoid multiplications and divisions, use MIP maps, improve CPU pipelining, etc. So although you can do something in 20 lines of code, there's much more to do to achieve a decent frame rate on old computers... But nice demo anyway (and for proper credits, I didn't write the voxel engine of Outcast; I was sitting in the same room as the dude who optimized it, and he was really good at that... fond memories!)


Great to have an Outcast developer right here on Hacker News! Outcast was a great game with awesome graphics. Unfortunately, Voxel Space died soon after.

Of course, this demo is just a starting point. I could optimize it even further, also in JavaScript or WebAssembly. But that's not the goal of the project. It is more a fan site for the technique.


Hey, of course. I didn't want to say your demo is wrong either. We all end up coding "small versions of existing stuff" sooner or later. And the fun is always there!

(on my side I still participate in the demoscene, coding my n-th 3D engine for fun, so you see, once you get hooked... :-) )


That game was amazing. Still is, really. The particle effects hold up incredibly well even now.




In the Comanche GIF, the viewport tilts when the aircraft turns. Is anything special going on here, or is it simply a rotation of the projection? If it's just a rotation, what extra work does the engine need to do to fill in any gaps from the lines that will no longer be neatly aligned to pixels, like they would be when it's flat?


The lines are always drawn vertically from the point of view of the viewport, so they're always aligned to pixels.

This does limit the rotation allowed before the user starts to notice!


Author here.

That is exactly what is going on here. I just move the horizon line up and down. This works very well as long as the rotation remains small. That is also the reason why Comanche was a helicopter game and not an airplane game.
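For reference, here is a rough sketch of the back-to-front painter's version of the technique, with the horizon shift for pitch. Names like `renderTerrain` and the parameter layout are illustrative, not the repo's actual code:

```javascript
// Minimal Voxel Space sketch: back-to-front painter's algorithm.
// heightMap/colorMap are flat arrays of size mapSize*mapSize
// (mapSize must be a power of two so `& (mapSize - 1)` wraps).
// Returns a screenW*screenH array of color values (sky = -1).
function renderTerrain({ heightMap, colorMap, mapSize,
                         camX, camY, camHeight, horizon,
                         scale, distance, screenW, screenH }) {
  const screen = new Array(screenW * screenH).fill(-1);
  // Back to front: nearer slices simply paint over farther ones.
  for (let z = distance; z > 1; z--) {
    for (let x = 0; x < screenW; x++) {
      // Map a screen column to a map coordinate (simple flat projection).
      const mapX = (camX + (x - screenW / 2) * z / screenW) & (mapSize - 1);
      const mapY = (camY - z) & (mapSize - 1);
      const h = heightMap[mapY * mapSize + mapX];
      // Perspective divide, then shift by `horizon`: moving the horizon
      // value up and down is what fakes the pitch.
      const yTop = Math.floor((camHeight - h) / z * scale + horizon);
      for (let y = Math.max(yTop, 0); y < screenH; y++) {
        screen[y * screenW + x] = colorMap[mapY * mapSize + mapX];
      }
    }
  }
  return screen;
}
```

Each slice paints full columns to the bottom of the screen; the front-to-back variant discussed below avoids that overdraw.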


Yeah, I've written this code twice now after seeing the algorithm described years ago in an article.

I did add my own "skew" parameter that would fake a little bit of horizon roll (skew by shifting up and down the column-renders to the screen along a shallow diagonal ... if you know what I mean). If you kept it small it looked like roll and added a sense of realism to the motion.
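In code, that skew can be as simple as a per-column vertical offset (a sketch; `columnOffset` is just an illustrative name, not anyone's shipped code):

```javascript
// Fake a small roll by shifting each screen column up or down along a
// shallow diagonal. `roll` is the tilt in pixels at the screen edges.
function columnOffset(x, screenW, roll) {
  // Columns left of center shift one way, right of center the other,
  // linearly: the horizon tilts without rotating any pixels.
  return Math.round(roll * (x - screenW / 2) / (screenW / 2));
}
```

You'd add this offset to every y coordinate drawn in column x; keep `roll` small and it reads as a real roll.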


I played Comanche on a Sony VAIO display laptop at Circuit City for hours when I was young (my dad worked there). The speed and detail were indeed mind-blowing back then. Thanks for your excellent writeup and examples of how it worked!


Adding the tilting is a tiny tweak: https://news.ycombinator.com/item?id=21945633


Yeah, thanks for the reminder. Probably I should add this part to the GitHub README, because these lines are too simple to ignore.


The web demo was a lot of fun on my iPhone. I'm always frustrated by the lack of good arcade flying games with somewhat realistic pitch, roll and yaw controls. Everything feels like driving a car rather than flying.


I remember this game. The graphics were amazing, but the enemies were deterministic, i.e. if you restarted a mission, the enemy copter always came from the same angle at exactly x seconds/minutes after the start.


Clearly time travel is involved. "Restart" actually means "go back in time".

:)


I don't think in Comanche they painted back to front, or at least I hope they didn't. It's best to render front to back, tracking the highest (in screen space) pixel you have drawn, and as you move further back, only drawing lines from that height. That "max height" is a zbuffer of sorts for each ray. [edit: I just read further down and they mention this]

Also, you want to render each ray/vertical screen line one at a time, rather than running each single step of the ray for all rays/vertical lines.

Once you are rendering each ray at once, it's easy to do a circular rather than flat projection, which feels a lot more stable when you rotate and admits higher fields of view without distortion.
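The front-to-back occlusion idea, sketched for one screen column (names are illustrative; `samples` is assumed to be the already-projected terrain points along one ray, ordered near to far):

```javascript
// Front-to-back with a per-column "y-buffer": remember the highest
// (smallest y) pixel drawn so far; farther terrain is only visible above it.
function renderColumn(samples, screenH, drawPixel) {
  let yBuffer = screenH; // lowest unfilled row; starts at screen bottom
  for (const { yTop, color } of samples) {
    // Only the part above everything drawn so far is visible.
    for (let y = Math.max(yTop, 0); y < yBuffer; y++) drawPixel(y, color);
    if (yTop < yBuffer) yBuffer = yTop;
  }
}
```

Because the whole column is handled in one pass, `yBuffer` is just a local variable here, which is exactly the "z-buffer of sorts for each ray" from the comment above.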


> I don't think in Comanche they painted back to front, or at least I hope they didn't. It's best to render front to back, tracking the highest (in screen space) pixel you have drawn, and as you move further back, only drawing lines from that height. That "max height" is a zbuffer of sorts for each ray.

Discussed here <https://github.com/s-macke/VoxelSpace#more-performance>


> Also, you want to render each ray/vertical screen line one at a time, rather than running each single step of the ray for all rays/vertical lines.

That may cost more performance. Remember, this algorithm was used for an early-Nineties game.

The advantage of drawing along a line of same-distance (i.e. constant z) is that per pixel, there are no divisions, only additions and one multiplication (which could perhaps be optimized away?). Drawing vertical screen lines, you get the z-division for each step.

(Then again, perhaps the division could be removed with a look-up-table)
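The look-up table version could look roughly like this (a sketch with made-up names, not Comanche's actual code): since rays sample the terrain at the same z distances every frame, the division is paid once per depth step instead of once per pixel.

```javascript
// Precompute scale/z for every depth step: `distance` divisions per frame.
function buildInvZTable(distance, scale) {
  const table = new Float32Array(distance + 1);
  for (let z = 1; z <= distance; z++) table[z] = scale / z;
  return table;
}

// Projection using the table: only a multiply and an add per sample.
function projectHeight(camHeight, terrainHeight, z, horizon, invZ) {
  return ((camHeight - terrainHeight) * invZ[z] + horizon) | 0;
}
```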


I wrote the code for this in late 1993 https://www.youtube.com/watch?v=9KoUyFhddIs&t=102 . You can find the C + assembly source code in https://archive.scene.org/mirrors/hornet/code/effects/land/i...

Yep, we used a lookup table with built-in levels of detail. Since rays always sample the terrain at the same Z distances from the camera, the lookup was trivial. Rendering vertical lines had additional benefits back then, including faster use of paged VGA video modes and, as described, doing circular projection to avoid deformations on the sides of the screen.


Interesting. It looks very nice!

I could imagine mixing "normal" polygons into this approach may not be trivial, though. I couldn't find much information about this circular projection online, but some napkin math tells me that straight lines in world space don't project to straight lines in screen space. Just drawing a triangle is not trivial.


> It's best to render front to back, tracking the highest (in screen space) pixel you have drawn

As mentioned in the README of the linked repo:

Instead of drawing from back to front we can draw from front to back. The advantage is that we don't have to draw lines to the bottom of the screen every time because of occlusion. However, to guarantee occlusion we need an additional y-buffer. For every column, the highest y position is stored. Because we are drawing from front to back, the visible part of the next line can only be larger than the highest line previously drawn.


He mentions the front to back optimization, keeping track of the max height.


He misses the additional optimization of tracking one ray at a time - depth first rather than breadth first. You don’t need to hold a complete z-buffer, just know the last height/distance you drew, and keep scanning out along the line until you find another terrain point high enough to see.


Author here.

But then you have a division for each coordinate, and that was more expensive than a memory lookup. So you might end up with a table as well. You have to test and optimize both approaches to be really sure.


I’m not sure why a division is necessary. When you send a ray out in a given direction through the screen, you can calculate its height in world space at distance z just by multiplication; and you can construct that ray entirely by multiplication and addition of values that are constant for the frame. So you can march a point out in increasing z along a ray pointing towards the bottom of the screen, then when it hits the ground, move it up the screen until it stops hitting the ground, then draw that line and start increasing z again. The perspective correction division just disappears completely.

I don’t disagree that you need to benchmark approaches to see if they will work, but it’s got to be interesting to consider if it’s possible to avoid allocating a buffer to store the height reached per column, right?


NovaLogic’s voxel work was really something else at the time. It was even partially used for their Delta Force games, which is how they could have such huge maps and extremely long terrain draw distances in an era when games had to fog everything a short distance from the camera, or box you into corridors, in order not to have to draw so many polygons.

This was released in 1999/2000 https://youtu.be/kxkSM6_F39g

Sadly, voxel hardware acceleration was never really a thing, and in the early 2000s GPUs started leapfrogging ahead of CPUs so much that you basically had to switch to polygons to get any decent performance.


The helicopter game about the helicopter that never was† :-)

† In mass production.


NovaLogic patent... That was probably also used in F-22 Lightning II. The game could run comfortably on a Pentium MMX at 200 MHz. But I can't understand how transformations other than rotation on the vertical axis can be performed.

Edit: Small analysis on some screenshots... Mountains on the horizon look very polygonal on F-22. I'm not sure it uses the same technology.


Could run on a 200 MHz Pentium... but these days it runs at barely 60 fps on my 10th-gen Core i7 and makes my laptop fans go crazy ;-)


> But I can't understand how transformations other than rotation on the vertical axis can be performed.

Roll is visible in the game clip. I think it's not a true roll but just each column of pixels raised up or down so that the horizon is tilted appropriately. Given the fast movement and the nature of the scene, the user doesn't notice that vertical features in the scene are not rotated. It's neat.


IIRC every novalogic voxel game after comanche mixed polygons and voxels. Delta Force definitely did.


It could be hybrid rendering with voxels for near details then polygons or parallax backgrounds. That would be cool.


Great funny project!

But I can't get a local copy to run. I am not a web developer. Could anybody help me?

I cloned it from Github. But it doesn't seem to load the height maps from disk. What makes sense because websites are not allowed to access files, right?


Correct, modern browsers limit features when you just open a file.

You can launch a simple webserver to publish a directory to localhost port 8000 with `python -m SimpleHTTPServer` (or `python3 -m http.server` on Python 3), and this should work.


Great, it just worked! Thank you!


This is a very compelling result considering the lack of complexity.

I wonder how far you could take this with some additional layers on top of height map and color.


The height map alone is enough for you to approximate a signed distance field, so you can do raymarching - which can give you pretty efficient light transport, volumetric fog, ambient occlusion, reflective water...
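A minimal fixed-step march against a height field might look like this (a sketch; a height map isn't a true SDF, so this is the naive version, and all names are illustrative):

```javascript
// Step a ray forward until it drops below the terrain surface.
// heightAt(x, y) returns the terrain height at a world position.
function marchRay(heightAt, ox, oy, oz, dx, dy, dz, maxSteps, stepSize) {
  let x = ox, y = oy, z = oz;
  for (let i = 0; i < maxSteps; i++) {
    x += dx * stepSize; y += dy * stepSize; z += dz * stepSize;
    if (z <= heightAt(x, y)) return { hit: true, x, y, z }; // entered terrain
  }
  return { hit: false };
}
```

March a ray from each terrain point toward the sun and you get shadows; toward the sky dome and you get a rough ambient occlusion term.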


Cool! That gives me an idea: software raymarching for a 4k intro.


I suspect this could be done in reverse, for a profitable speedup.

So that is to say, by following the painter's algorithm (far to close), we draw a lot of pixels over and over again. If the closer terrain is rendered first (reverse painter's algorithm), fewer pixels have to be filled for the farther terrain. In some cases, none at all.

The basic premise is this: an element of the color/height map which is farther from the viewer will never occlude an element which is closer, regardless of its relative height, and regardless of the perspective projection. The elements of the map are colored, vertical columns. A column farther from the viewer cannot occlude a closer column.

Suppose our task is to render exactly two arbitrary elements into an empty frame buffer. We can confidently render the closer one first, and then render the farther one, painting the "pillar" from top to bottom, until we are about to hit a pixel which has already been colored. All those early breaks out of the column-painting loops can be expected to show a marked speedup.

I'd be surprised if the 1992 game didn't do it that way; they were squeezing every cycle they could get out of 386 and 486 boxes.


Yup. A.k.a. depth testing†. TFA illustrates it under ‘More performance’: https://github.com/s-macke/VoxelSpace#more-performance

Edit: Originally, I had said: “Though in a highly constrained scenario such as this, with the right approach you could get away with a 1-bit depth buffer.” That would be true if you had one depth value per pixel, which is of course so common nowadays that no one even needs to think about it. Except in this case I should have thought about it, because this algorithm requires only one depth (or, alternatively, as TFA and parent comment explain, Y) value per column of pixels.

https://en.wikipedia.org/wiki/Z-buffering


From the article:

>More performance

>There are of course a lot of tricks to achieve higher performance.

>Instead of drawing from back to front we can draw from front to back. The advantage is that we don't have to draw lines to the bottom of the screen every time because of occlusion. However, to guarantee occlusion we need an additional y-buffer. For every column, the highest y position is stored. Because we are drawing from front to back, the visible part of the next line can only be larger than the highest line previously drawn. Level of detail: render more detail in front and less detail far away.


> However, to guarantee occlusion we need an additional y-buffer

Not necessarily. If there is a uniformly colored background, and the pixels being drawn are of a different color, you can test the existing pixel value to determine whether it's a foreground object or background. In the rare case that an object pixel value happens to be the same as the background, you can alter the least significant bit of its least sensitive color channel to make it different.

If there is a complex background, chances are you have a full copy of it, and can refer to it to see if any pixel in the image has changed.

And in any case, a Y buffer is just the size of one scan line. E.g. using a 1024x768 screen as an example (3-byte RGB pixels), you need 1024x2 bytes for the Y buffer. That's .08% of the size of the frame buffer. You could use a scan line in the frame buffer for this storage. If you are not actually using the entire screen (because, say, the game shows the inside of a cockpit with instrumentation) there is no sacrifice at all; after you're done using that temporary memory, you will draw over it anyway.
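The pixel-test idea, sketched on a single column (illustrative names; `SKY` stands in for the reserved background value, and terrain is drawn front to back):

```javascript
// Occlusion test without a y-buffer: a pixel is writable only while it
// still holds the reserved background ("sky") value.
const SKY = 0;

function drawColumnIfFree(column, yTop, color) {
  // Paint downward, stopping at the first already-drawn (non-sky) pixel:
  // anything below it was drawn by nearer terrain.
  for (let y = Math.max(yTop, 0); y < column.length; y++) {
    if (column[y] !== SKY) break;
    column[y] = color;
  }
}
```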


Yeah, that's how the optimization is done in the link I think.


Ah I remember when it came out, it was very impressive. It needed a very hefty 486 if I remember well, so I couldn't play it.

Years later, when I got a more powerful box, I tried playing it again, but the game speed was proportional to cpu speed I think so it was unplayable...

I should give it another go in dosbox!


It was written in assembly and required a boot disk to squeeze every available kilobyte out of the 640 KB a computer was guaranteed to have back then.


No, it was released in 1992, and lots of PCs had more than that at the time. It required a 386 (ideally 33 MHz) with 4 megs of RAM (I found the installation manual on an abandonware site), and was super fast on a 486.


You could try PCem with a period-accurate configuration instead of Dosbox.


I remember buying Comanche again years after it was released. But it was unplayable. The fuel depletion rate was somehow proportional to CPU speed, so on my (relatively) faster computer, the helicopter would run out of gas just a couple minutes into the level.


This is true of a lot of DOS simulator-style games from this era, which often let the game systems run at uncapped rates. They were typically just targeting a specific 386 or 486 CPU, so performance was pretty predictable on the customer's hardware at the time. Wing Commander 3 is another example that is famously bad for this. When faster CPUs inevitably came along, these games basically broke or played differently due to the speed change. Back in the early 90s, many 486 systems had a button on the case marked "turbo" so you could slow down your 486 to play games optimized for earlier chips at the correct speed.

Dosbox offers the ability to adjust CPU "speed" to help with this today, but it's often hard to get the emulation speed right on early DOS games that ran uncapped.

> https://en.wikipedia.org/wiki/Turbo_button


Very interesting. Does anyone know if Magic Carpet by Bullfrog used a similar technique for rendering its terrain?


I don't have a source to back me up here, but IIRC, Magic Carpet used texture-mapped polygons.

Maybe fun trivia: James Schmalz trying to replicate Magic Carpet in pure assembly language and showing it to Tim Sweeney resulted in the embryo of UnrealEd and what later became the Unreal engine.


No, it's a polygon renderer of some sort. But it is quite restricted. Each "world" is a sphere, and checking again in a youtube video, it looks to me like the sphere has a regular tessellation. It's dynamic though: some of the game mechanics raise or level hills. I'd guess they found a way to special case a lot of the rasterization given the spherical restriction. Kinda like how Descent based everything off intersections of a unit cube.


I don't know, but based on a gameplay video it looks more like vector graphics to me. The terrain silhouette seems to have straight lines.

I don't know how they pulled that terrain rendering off with 1994 hardware. The limited rendering distance probably helps.


I seem to remember that Magic Carpet was a massive hardware hog and that it ran marginally well on the best you could buy in 1994. That was something like a Pentium 90.


>I seem to remember that Magic Carpet was a massive hardware hog

I'm not sure about that. A quick search shows that the (on the box) requirements were 33mhz 486, which seems to have been fairly reasonable for the time.


I think the code could make one change to get rid of the additional height array: just flip the nested loops. Walk along the height field for each vertical line on the screen (a 2D line walk through the height field image) and keep track of the projected height at each step. Draw only the difference between the previous projected height and the new projected height if the latter is higher.
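Roughly like this (a sketch with illustrative names; `heightAlongRay` samples the terrain at depth z for this column, and `project` does the perspective transform):

```javascript
// Flipped loop order: walk outward along one ray per screen column,
// keeping only the highest projected point seen so far. No per-column
// height array is needed; the state is a single local variable.
function renderRay(heightAlongRay, project, maxDist, screenH, drawSpan) {
  let maxY = screenH; // lowest unfilled screen row for this column
  for (let z = 1; z <= maxDist; z++) {
    const yTop = project(heightAlongRay(z), z);
    if (yTop < maxY) {
      drawSpan(yTop, maxY, z); // visible slice: rows [yTop, maxY)
      maxY = yTop;
    }
    // Otherwise this sample is hidden behind nearer terrain.
  }
}
```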


I was pondering that solution as well. I'm not entirely sure how the cache would behave, but in '91 memory was relatively fast compared to the CPU, so lookup tables were still useful. (My thinking is that perspective could be calculated per row if rendering per row, but if you use lookup tables for that division, it wouldn't matter much whether you flip to vertical.) Also, if they were running Mode X, then rendering vertically would have been a benefit.

The Mode X memory layout has a write-enable bit for every 4th (or was it 8th?) column, but the addresses were the same, so you often would want to write every 4th pixel if doing something "exact", then change the I/O registers so it'd be every 4th again but offset by one, and repeat for offsets 2 and 3. Unless you were doing flat shading, where this write-enabling could be useful for writing 16 pixels in one 32-bit word write with 4 writes enabled (it could also be useful for half-resolution rendering, by having 2 offsets enabled and then doing every other column).

(I really loved writing software-rendering code back in those days, glad to be getting much of that power back with shaders these days)


Don't forget that you have to divide by the z-coordinate, and this was also very expensive. With the height field you compute 1/z once (or perform a table lookup) and then just multiply.

You have to test and optimize both variants to be really sure.


Frankly, back in those days even a multiply was kinda expensive; I fully expect either direction to use a combined lookup table for both the perspective divide AND the multiplication.


It's even better than that flight sim that was embedded in Excel


I loved this game and played it a lot as a kid.


20 lines of code is a lie though


Indeed, the JavaScript code is much larger. But the description tells you:

"Terrain rendering algorithm in less than 20 lines of code"

So it just means that a clean implementation of the terrain rendering algorithm takes 20 lines.

With initialization, keyboard handling, map selection, collision detection, FPS display and some JavaScript optimizations it needs many more lines of code.


It’s not a lie at all, since it is specifically talking about the rendering algorithm, not the entire program.


I recall it also used 386 protected mode, as you had to boot it from floppy, which was probably a first for a commercial game.

This and F15 were amazing games of the time. I bought the full thrustmaster setup for these.


386 protected mode was not tied to booting from floppy. It was more a matter of executing a certain set of instructions to flip the CPU into 386 mode. Many of us had an extra boot floppy that would not load XMS or EMS, which would conflict with protected mode (as it wanted the same memory as those extenders). I personally used a 4DOS script that would turn things on and off with a set of prompts if needed. Special boot floppies were pretty much the norm for other systems, though, as they were used as a form of copy protection. Many of the later Origin games would flip themselves into protected mode, so that special path was needed. It may have been one of the first, as that was right around the time many games started using protected mode (Ultima 7 was another, if I remember correctly), and memory prices had started to fall a bit: it was 'only 150 dollars per MB'.


Many commercial games in the 80s were booted from floppy. That wasn’t unusual. What sucked about this game was it used DOS but in a weird way, incompatible with every other DOS hack, so you had to set up your boot menus specifically for this game.


Winter games by EPYX booted directly to the game, ignoring DOS.



