The end of the article speculates that the sheet system was dropped since the tiles appear scrambled. I'm wondering if the artists still used the sheet system, but a memory optimizer tool re-ordered the tiles to help free up a little more space on the ROM.
> I'm wondering if the artists still used the sheet system, but a memory optimizer tool re-ordered the tiles to help free up a little more space on the ROM.
Seems unlikely. Managing the "physical" layout of sprites on a sheet is a lot of work; there's no reason to do that if it's going to be thrown out by an optimizer in the end.
Besides, notice that, on the optimized SF2 sheet, all of the tiles for one sprite get written out in order; they aren't interleaved left-to-right like the tiles would have been on a sheet.
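A toy sketch of that difference (hypothetical 8-tile-wide sheet and made-up sprite placement, not the actual SF2 format): when the ROM stores the whole sheet row by row, a sprite's tiles end up scattered, whereas writing each sprite out in order gives one contiguous run.

```python
# Hypothetical 8-tile-wide sheet; a 2x2-tile sprite placed at column 3, row 1.
SHEET_WIDTH = 8

def sheet_order(col, row, w, h):
    """Tile numbers a sprite occupies when the ROM stores the whole sheet
    row-major: the sprite's tiles are interleaved with its neighbours'."""
    return [(row + y) * SHEET_WIDTH + (col + x)
            for y in range(h) for x in range(w)]

def packed_order(start, w, h):
    """Tile numbers when each sprite is written out in order:
    one contiguous run, independent of any sheet layout."""
    return list(range(start, start + w * h))

print(sheet_order(3, 1, 2, 2))   # [11, 12, 19, 20] - gap between rows
print(packed_order(11, 2, 2))    # [11, 12, 13, 14] - contiguous
```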
The way Super Street Fighter II has a mix of methodologies seems to count against that. Why only run this optimiser on the new art?
I could see artists still using a grid when sketching and planning, to keep a handle on sprite size / memory usage. But not in the hand-packed sort of way; more just each sprite drawn unscrambled on a grid. You wouldn't go through all the rigmarole of hand-optimising memory layouts if some stage in the build system is going to ignore it and do its own thing instead.

Because if you change the tiles, you need to redo all the tile number tables throughout the program. Much easier to only change it for new graphics added to the game than to rework a bunch of old stuff that works.
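To illustrate the cost being described (made-up frame names and tile numbers, not real SF2 data): each animation frame is effectively a table of tile numbers, so if an optimizer shuffles the tiles, every such table has to be rewritten through an old-to-new mapping.

```python
# Hypothetical: each animation frame is a list of tile numbers in the ROM.
frames = {"ryu_stand_0": [5, 6, 13, 14],
          "ryu_stand_1": [7, 8, 15, 16]}

# If a tool reorders the tiles, every table throughout the program
# must be remapped from old tile numbers to new ones:
remap = {5: 0, 6: 1, 13: 2, 14: 3, 7: 4, 8: 5, 15: 6, 16: 7}
new_frames = {name: [remap[t] for t in tiles]
              for name, tiles in frames.items()}

print(new_frames["ryu_stand_0"])  # [0, 1, 2, 3]
```

For the existing art that already "works", that remapping is pure risk with no payoff, which is one plausible reason to only run such a tool on new graphics.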
to save people some clicking, the word "Warrier" here is in fact misspelled intentionally by the article, which discusses how it was misspelled unintentionally by one of the original graphic designers of the game.
The only gaming console I still own is a SEGA Saturn - and a burned copy of Street Fighter Alpha 2 practically lives in the disc tray. (I use an Action Replay cartridge solution for booting burned discs called ‘Pseudo Saturn Kai Lite’.)
I am fortunate enough to have a live-in music studio, and the Saturn stays hooked up to the secondary monitor in the studio so we can take breaks and do a few fights to decompress. It pretty much never fails to be a hit.
I only briefly had a Saturn, and the Virtua Cop series is still one of my most intense memories. A light gun game at home with graphics that looked like the arcade to my childhood eyes!
I wonder if kids these days will look back on the PS5 similarly. Hard to imagine!
Tricks for packing data are still relevant today, though obviously hardware has changed so the tricks are different. One important concept on today's machines is that memory latency is huge relative to the cost of executing other instructions (e.g. arithmetic, logic). In some senses, such packing has become even more important as time has gone on. There was a point in history when memory operations and ALU operations were about the same speed, so it made sense to keep even small computations in memory and retrieve them later (e.g. a lookup table, possibly pre-computed). Nowadays, it is often faster to re-compute values from scratch when needed, rather than going to memory, even for biggish computations.
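As a toy illustration of the two styles (Python won't show the hardware cost difference; this just shows the shape of the tradeoff, with a classic sine table as the example):

```python
import math

# Old-school style: precompute a table once, then fetch values from
# memory at run time (one lookup per call).
SINE_TABLE = [math.sin(2 * math.pi * i / 256) for i in range(256)]

def sin_lookup(i):
    """Fetch a precomputed value: trades memory (and a memory access)
    for almost no arithmetic."""
    return SINE_TABLE[i & 0xFF]

def sin_compute(i):
    """Recompute from scratch: pure ALU/FPU work, no table in memory.
    On modern hardware this can beat the cache miss a big table causes."""
    return math.sin(2 * math.pi * (i & 0xFF) / 256)
```

Which wins on real hardware depends on table size versus cache size and how hot the surrounding data is; the point is only that the answer has flipped over time.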
In today's world, the GPU largely deals with textures, but the same rule holds — it can make sense to compute values rather than doing texture fetches. Anyway, I'd say your instinct is right — developers are still using lots of tricks to get things where they are. There is still a lot of competitive pressure to provide more, more, more and squeeze the most out of the hardware.
All that said, the 100GB download is often going to be audio/video. Then probably textures. There is certainly standard compression going on for a lot of that. But not the sort of manual "these texels go here, those texels go there" fiddling of data like described in this article. Though UV unwrapping is still an art and a science (:
Data compression is really good these days, and there are file formats and compression strategies that are optimized for gaming. The specific technique being used here is still used in 2D games, but there are spritesheet optimization tools available to do it automatically.
My understanding is that for the biggest games there's an intentional tradeoff to use more storage in exchange for faster load times, although the need for that has apparently lessened over time. If you go back and look at some of the big multi-disc games of the late '90s/early '00s you might be shocked by how much duplication there was in order to reduce the need for disc swapping. (You might also be shocked by how much of that disc space was required solely to support pre-rendered cutscenes.)
Part of the size problem in the last generation was because of duplication of assets to load faster from spinning hard drives. They bundled assets for level loads sequentially because it's quicker to have all the assets inline instead of seeking all over the place.
I imagine when the HDD-era engines are end-of-lifed we'll get a slight shrink, but it'll probably be eclipsed by the growing number of high-resolution assets pretty quickly.
That font seems to be designed for low vision readers. I'm not sure if you can extrapolate that to people without any sort of vision issues. I mean, I can read the essay just fine, and I don't find it particularly hard to read or anything. If anything, given that I'm already used to reading monospaced fonts all day, I think I can probably read it faster than if it was a font that I'm less familiar with.
https://news.ycombinator.com/item?id=29657343
Street Fighter II paper trails – allocating sprite space by hand (December 23, 2021 — 554 points, 76 comments)