Nice writeup! Building raycasting engines is fun and rewarding. You can get a lot of visual impact with very small and easy to understand code. I took a stab at it and ended up adding portals which I thought was really fun (https://github.com/gh123man/Portal-Raycaster).
I've been thinking about re-writing my raycasting engine in Rust but never really found the time - glad to see someone has done it and documented it so well!
It appears my App Store agreements had fallen out of date and needed renewal. That's somewhat embarrassing - but it should be fixed now. Gate Escape should be available in all regions, and thanks for checking it out!
At least as implemented in these retro-style engines, it's a technique with a lot of limitations. Everything is pervasively 2D: rays crawl 2D grids of grid-aligned wall data, interleaved with screen-aligned 2D sprites drawn with a scaling factor. Lots of vertical alignment is assumed.
You wouldn't use the same technique to implement a "realistic" 6DOF flight simulator, for example. Wolfenstein 3D was released in 1992 - and a mere year later Microsoft Flight Simulator 5 came out, with actual 3D meshes and arbitrary 3D rotations having appeared much earlier still.
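To make the "crawling 2D grids" part concrete, here's a minimal sketch of the grid-walk (DDA) step these engines are built around, loosely in the spirit of the lodev tutorial linked below. The map, names, and return values are all illustrative, not anyone's actual engine:

```rust
// An 8x8 map: 1 = wall, 0 = empty. Purely illustrative.
const MAP: [[u8; 8]; 8] = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
];

/// Cast one ray from (px, py) along (dx, dy); return the grid cell hit and
/// the perpendicular distance to the wall (used to scale the drawn column).
fn cast_ray(px: f64, py: f64, dx: f64, dy: f64) -> (usize, usize, f64) {
    let (mut mx, mut my) = (px as isize, py as isize);
    // Distance the ray travels to cross one full cell in x / in y.
    let ddx = if dx == 0.0 { f64::INFINITY } else { (1.0 / dx).abs() };
    let ddy = if dy == 0.0 { f64::INFINITY } else { (1.0 / dy).abs() };
    // Step direction and distance to the first x / y grid line.
    let (sx, mut side_x) = if dx < 0.0 {
        (-1, (px - mx as f64) * ddx)
    } else {
        (1, (mx as f64 + 1.0 - px) * ddx)
    };
    let (sy, mut side_y) = if dy < 0.0 {
        (-1, (py - my as f64) * ddy)
    } else {
        (1, (my as f64 + 1.0 - py) * ddy)
    };
    let mut hit_x_side = false;
    loop {
        // Advance to whichever grid line is closer - this is the whole "crawl".
        if side_x < side_y {
            side_x += ddx;
            mx += sx;
            hit_x_side = true;
        } else {
            side_y += ddy;
            my += sy;
            hit_x_side = false;
        }
        if MAP[my as usize][mx as usize] != 0 {
            break;
        }
    }
    let dist = if hit_x_side { side_x - ddx } else { side_y - ddy };
    (mx as usize, my as usize, dist)
}
```

Note that the inner loop only ever compares and adds - no trig, no per-step square roots - which is exactly why it was feasible on early-90s CPUs.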
A lot of the comments in that thread appear to be flaming the OP for impersonating a legendary demoscene group, or something like that. I've always enjoyed the touristic, almost archaeological appeal of reading old subculture drama, and I wish I stumbled across it more often.
Wow, I didn’t know that WebAssembly had a “square root” instruction (although it sounds like that’s required for the floating point standard, so probably shouldn’t have been such a surprise).
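For what it's worth, f32.sqrt and f64.sqrt are core wasm numeric instructions (IEEE 754 requires correctly rounded square root), and Rust's float sqrt methods typically lower straight to them on the wasm32 target rather than calling out to a libm routine. A trivial sketch of the kind of per-ray math that benefits:

```rust
// f32::sqrt is an intrinsic; on wasm32 it typically compiles down to the
// single f32.sqrt instruction rather than a library call.
fn ray_length(dx: f32, dy: f32) -> f32 {
    (dx * dx + dy * dy).sqrt()
}
```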
Same here. I actually built a similar "3D renderer" and ran into the same perspective "issue" the OP had. Unfortunately I didn't figure out how to fix it back then, so I've finally learned how it's solved almost 30 years later. I should fix my QB4 code :-)
I'm actually in the process of writing a demoscene 64k in Rust+WASM, that will be contained in a single index.html.
Actually not sure if I'll get around to finishing it, but currently a WebGPU-based renderer + scene graph + softsynth fits in about 30k compressed. That's unfortunately a bit big, but we'll see.
If I don't get around to finishing it I'll probably open source the code anyway, just as a fun starting point for others maybe.
A little bit of everything, but the WebGPU stuff requires a full interop layer to translate WebGPU calls on the Rust side into calls on the JS side (a poor man's wasm-bindgen basically).
Additionally, there's just a lot of code being generated for common containers. It would probably be simpler and smaller to create unsafe containers that just shift pointers to boxed structs around or something, but I haven't gotten around to that yet.
You'd still need the same kind of interop layer. The question then becomes if there are more or fewer classes and methods you'd need to wrap on the Rust side.
Not sure how old you are, but if you were around 20-23 years ago you could've stumbled upon these .exe files that were 5-64KB in size (not megabytes!) with cool-looking videos and music. Back then this was a miracle, and it still is to me.
I wonder what the wasm-backend people are doing that the embedded-backend people aren’t — I’d have thought embedded was even more particular about small size, but I just started playing with rust-esp32 and found that “hello world” is 15MB (to be fair it’s only 4MB with debug info stripped)...
I don't think he mentions it, and I doubt it's relevant in your Hello World example, but if you have a lot of generic code, it's useful to extract the non-generic parts into a separate non-generic function. For example:
fn foo<T: AsRef<str>>(s: T) -> String {
    // Thin generic shim: one small copy monomorphized per concrete T.
    bar(s.as_ref())
}

fn bar(s: &str) -> String {
    // A whole bunch of other stuff - compiled exactly once.
    s.to_string()
}
You might want to look into the nonsense the demoscene folks get up to, then. .kkrieger is a pretty serious FPS in 96kB, and 8088MPH gets 1k colours out of an 8088 with CGA graphics. Things like that.
Not due to slow RAM, but due to slow CPUs. The CPU/memory latency gap wasn't such an issue yet at the start of the 90's. A 386 PC was pretty much the first consumer hardware where one could start thinking about doing any per-pixel work at all with the CPU in 320x200 at 1 byte per pixel.
Also depth buffers only ever were viable with dedicated rasterization hardware, and on early consumer "GPUs" they were a luxury because even when depth testing was supported, they reduced rendering performance by half - which basically meant "utterly unplayable". It was almost always better to determine surface visibility through other techniques.
If anyone else is interested in raycasting this page is also worth a read: https://lodev.org/cgtutor/raycasting.html