My background is that I've written emulators for 24 systems and counting (higan and bsnes), and I want to try my hand at technical writing, in hopes of encouraging more people to get involved with emulation. Hopefully I'll get better over time and with more feedback.
I'll be writing more about video emulation in the near future. I would like to cover techniques for interframe blending (the flickering shadow), composite artifacts (the waterfalls in Sonic the Hedgehog), color bleed (translucency in Kirby's Dream Land 3), screen curvature, scanlines, color subcarriers (and how shifting the subcarrier causes the shakiness you see in composite video), interlacing, phosphor decay, phosphor glow, aperture grilles, non-1:1 RGB pixel layouts (Game Gear LCD), etc.
I'm intending to write about all things emulation there, in fact. I want to cover input latency reduction/mitigation, removing audio aliasing above the Nyquist limit using FIR and IIR low-pass filters, designing thread schedulers and priority queues, mirroring and organizing PDF datasheets for all the ICs used in retro gaming systems, etc.
Basically, I want to cover all the stuff you don't usually find in "how to write an emulator" tutorials: all the finishing touches that take a new emulator to a well-polished one.
It's not ready yet (the site is brand new), but I'll have an RSS feed at byuu.net/feed in a couple of days if anyone's interested in this sort of content.
I grew up coding demo effects on the C64 and the Amiga.
I always loved how vblank-synced, ultra-smooth graphics slid across the screen while the CRT gave the picture a sort of magical "glow".
I sit here by a nice iMac but often feel like that magical glow just isn't there, even though the picture is 1000 times better than on the old television I used.
Is there some kind of effect that can be applied to achieve this glow on a good LCD screen?
Or is it just the nostalgia of childhood memories lying to me?
Phosphor glow is one of my favorite effects. One of these days I'd really love to have a software-mode C++ filter for that.
It can be done through filtering, but it's really hard to make it look good. A person named Psiga made this Photoshop mock-up for Secret of Mana that has always astounded me: https://sites.google.com/site/psigamp3/PhosphorSimTest1.jpg
Unfortunately, no one's been able to figure out exactly how it was made, so it can't be replicated in software. But something like it would really add a lot.
We have pinged them in the past about it, actually ^^;; It's been a mini-obsession of mine since I found it, hahah. They regrettably don't remember how it was made.
Ah bummer. I would have thought they'd have kept the PSD lying around somewhere like I do and it would've shed some light. Maybe someday someone will figure out something that looks right!
I would like that, thank you! A few people have tried over the years, and we even contacted the person who made the original image, but he didn't remember how he did it ^-^;
You could probably use machine learning for fine-tuning the parameters of whatever fast approximation comes out of that though.
(having a plausible first-principles model and then automatically tuning the knobs until the error with a ground truth is minimised is a form of machine learning, right?)
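Concretely, even a dumb optimizer gets you started: pick a parameterized first-principles glow model, render it, and keep whichever parameter jitters reduce the error against the mock-up. A toy sketch of that loop (the glow model, parameter names, and constants here are all made up for illustration):

```cpp
// Sketch: fit glow parameters by random search against a ground-truth image.
// applyGlow() is a toy stand-in for whatever first-principles model you fit.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Params { double intensity, gamma; };
using Image = std::vector<double>;  // grayscale pixels in [0, 1]

Image applyGlow(const Image& src, const Params& p) {
  // Toy model: bleed from immediate neighbors, plus a gamma lift.
  Image out(src.size());
  for (size_t i = 0; i < src.size(); i++) {
    double left  = i > 0 ? src[i - 1] : 0.0;
    double right = i + 1 < src.size() ? src[i + 1] : 0.0;
    double lit = src[i] + p.intensity * 0.5 * (left + right);
    out[i] = std::pow(std::min(1.0, lit), p.gamma);
  }
  return out;
}

double error(const Image& a, const Image& b) {
  double e = 0.0;
  for (size_t i = 0; i < a.size(); i++) e += (a[i] - b[i]) * (a[i] - b[i]);
  return e;
}

Params tune(const Image& src, const Image& target, int iterations) {
  std::mt19937 rng{1234};
  std::normal_distribution<double> jitter{0.0, 0.05};
  Params best{0.5, 2.2};
  double bestError = error(applyGlow(src, best), target);
  for (int i = 0; i < iterations; i++) {
    Params trial{best.intensity + jitter(rng), best.gamma + jitter(rng)};
    double e = error(applyGlow(src, trial), target);
    if (e < bestError) { bestError = e; best = trial; }  // keep improvements
  }
  return best;
}
```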
Pretty sure a cross-correlation of the source and resulting images will reveal most of the transformation's recipe, aside from a few details like the fine grid lines added in. If I have time tomorrow I might take a crack at it.
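Something like this would be the first step: treat both images as flat grayscale arrays and look at which source offsets correlate with the output (the dimensions and shift range below are placeholders):

```cpp
// Sketch: estimate which neighboring source pixels contribute to each output
// pixel by cross-correlating the two images at small horizontal offsets.
#include <cstdio>
#include <vector>

int main() {
  const int width = 256, height = 224;      // placeholder dimensions
  std::vector<double> src(width * height);  // fill with the source pixels...
  std::vector<double> dst(width * height);  // ...and with the mock-up pixels

  for (int dx = -4; dx <= 4; dx++) {        // horizontal shifts up to 4 px
    double sum = 0.0;
    for (int y = 0; y < height; y++)
      for (int x = 4; x < width - 4; x++)
        sum += src[y * width + x + dx] * dst[y * width + x];
    std::printf("offset %+d: correlation %f\n", dx, sum);
  }
}
```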
Look at a CRT drawing in slow motion. You get an ultra-bright flash, then an alarmingly quick phosphor falloff.
You'd need, basically, a high-refresh-rate HDR monitor with an actual sub-ms response time, so you can super-bright blink the character, essentially black-framing it.
The rest of the actual softness of the glow can be emulated conventionally with shaders (there are some super swanky terminals and emulators that have that part already done).
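For the conventional part, the decay itself is cheap in software: keep an accumulation buffer, fade it each frame, and let new light punch through at full brightness. A minimal sketch (the decay constant is a guess you'd tune by eye):

```cpp
#include <algorithm>
#include <vector>

// Run once per emulated frame: new light replaces the phosphor state when it
// is brighter; otherwise the previous frame's light dies off exponentially.
void phosphorStep(const std::vector<float>& frame, std::vector<float>& screen,
                  float decay = 0.65f) {  // fraction surviving each frame
  for (size_t i = 0; i < frame.size(); i++)
    screen[i] = std::max(frame[i], screen[i] * decay);
}
```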
Although this flicker would indeed be necessary for a 100% accurate reproduction, I'm not sure that is really the thing that current CRT shaders are missing for feeling "just right". After all, the linked example image feels very convincing and yet is static.
Agreed, I love this kind of look. I emulated a curved screen, phosphor glow, and scanlines somewhat in my Unreal Engine-based Z80 emulators, although I think the look could still be improved further. This is fairly easy to do with UE4 shaders and geometry for the curved screen; I'd think it's a bit harder to do entirely from scratch in a standalone program.
Thank you for taking the time to write your expertise down. Even if the article doesn't get much attention right now, the value of writing down knowledge is often understated.
For someone who has been so adamant about cycle-accurate emulation I was surprised to see such ad hoc color transformations being used. Has anyone in the emulation community tried to accurately estimate the end-to-end response of these systems? Something along the lines of using a colorimeter with a reference monitor (like a Sony PVM CRT) and homebrew ROM that generates test patterns.
There are certainly limits to idealism. This is currently the best we have for the listed systems, I'm afraid. Hopefully someone reading about this will take an interest in trying some of the things you mentioned and help to improve the situation.
As mentioned, the systems that need this most don't even have frontlit displays, so a colorimeter would not work on them. There's also the question of what the correct contrast setting should be on the analog adjustment wheels these systems had. Ideally you'd want to capture multiple contrast settings, and then try to devise an underlying adjustment algorithm from all the data sets to approximate the full analog range of positions.
I'm not into emulation at all, but I wonder at how low a level the passive LCD panels were controlled by software in the game or system ROM?
The wikipedia page on Walsh functions says:
> They are also used in passive LCD panels as X and Y binary driving waveforms where the autocorrelation between X and Y can be made minimal for pixels that are off.
Not sure how applicable that is on which of the systems you are considering. Sorry for the rabbit hole ;)
I recently started playing around with RetroArch, and found it quite hard to make sense of all the shaders it comes with. :) Looking forward to the coming articles to get a better understanding of the parts that make up the pipeline from raw RGB pixels to something that looks like what I remember playing on my TV as a kid.
One question about the article: do emulators generally have the color correction/gamma adjustments built in? If so, is there typically a flag to turn it off, so it can be done in shaders instead?
I would imagine most emulators do not include colour-correction or gamma adjustments for RGB-native consoles, unless the "native" colours are particularly ugly.
For example, the original NES does not work with RGB, but with analogue NTSC signals, so an emulator has to do something to make the output appear on an RGB screen, and that implicitly involves colour-correction.
The SNES and Genesis do work with RGB, and the output is good enough to look reasonable on modern monitors, so most emulators would probably leave it as-is.
The Game Boy Advance works with RGB, but the built-in screen on earlier models was (as the article describes) low contrast, and games were made garish to compensate. Later models of GBA used better screens, and the GBA Player for the Gamecube output straight RGB to an ordinary television, so later games often used more sensible palettes, or even provided a palette option in the main menu. Thus, GBA emulators probably provide their own colour-correction, and almost certainly make it optional.
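For a sense of what that correction looks like, here's a sketch of the usual shape of it: darken through the LCD's steep gamma, mix some of each channel into the others to wash out the saturation, then convert back with a typical monitor gamma. The gamma values and mixing coefficients below are illustrative, not the article's exact numbers:

```cpp
#include <cmath>
#include <cstdint>

// Sketch of a GBA-style color correction. All constants are illustrative.
void correctGBA(int R5, int G5, int B5,          // 5-bit GBA channel values
                uint8_t& r, uint8_t& g, uint8_t& b) {
  const double lcdGamma = 4.0, outGamma = 2.2;
  // The panel's steep gamma crushes everything dark...
  double lr = std::pow(R5 / 31.0, lcdGamma);
  double lg = std::pow(G5 / 31.0, lcdGamma);
  double lb = std::pow(B5 / 31.0, lcdGamma);
  // ...and channel mixing washes out the saturation, like the real panel did.
  // Each row is normalized by its own sum so white stays white.
  r = uint8_t(std::pow((255 * lr +  50 * lg +   0 * lb) / 305, 1 / outGamma) * 255);
  g = uint8_t(std::pow(( 10 * lr + 230 * lg +  30 * lb) / 270, 1 / outGamma) * 255);
  b = uint8_t(std::pow(( 50 * lr +  10 * lg + 220 * lb) / 280, 1 / outGamma) * 255);
}
```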
> the original NES does not work with RGB, but with analogue NTSC signals
I mean, the video hardware (video DAC? rasterizer?) in a NES is for NTSC/PAL, but that doesn't mean that NES palettes are specified in a YIQ colorspace or anything. The palette data is RGB; you don't have to do anything special to fill a NES-emulator framebuffer beyond what you'd be doing to render a GIF. (The gamma transforms mentioned in the article can be done during blitting with a LUT, but they could also just be done by applying a real gamma-curve transform to the framebuffer texture during screen compositing.)
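The LUT version is about as simple as it sounds; a sketch, assuming plain 8-bit channels:

```cpp
#include <cmath>
#include <cstdint>

uint8_t lut[256];

// Precompute the gamma curve once, outside the blit loop.
void buildGammaLUT(double gamma) {
  for (int i = 0; i < 256; i++)
    lut[i] = uint8_t(std::lround(std::pow(i / 255.0, gamma) * 255.0));
}

// Per pixel during blitting: three table lookups instead of three pow() calls.
inline void blitPixel(uint8_t& r, uint8_t& g, uint8_t& b) {
  r = lut[r]; g = lut[g]; b = lut[b];
}
```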
> Unlike many other game consoles, the NES does not generate RGB or YUV and then encode that to composite. Instead, it generates NTSC video directly in the composite domain, which leads to interesting artifacts.
So far as NES software is concerned, it just tells the NES to use particular palette entries, and the NES hardware is responsible for coming up with an NTSC signal to send to the television. Because NES software doesn't care about specific RGB values, and because converting NTSC-to-RGB is hard, most emulators use an RGB palette for output, but fans have created a lot of alternative palettes over the years[2], and none of them are "correct".
Presumably whatever palette-entry-to-RGB encoding Nintendo's own emulators (e.g. NES VC, NES Online, the NES classic) use, should be considered at least somewhat "canonical", no?
And I don't mean because Nintendo has any special "auteur" control over what NES games "should" look like (they never expressed such control, since they never shipped a NES with a screen!)
Rather, I mean that the palette maps Nintendo uses in their emulators are probably the same maps Nintendo used, in the other direction, to do the original conversion of the RGB palettes of their PC art-assets, into NES palettes to wire into the hardware. I.e., if Mario's red coveralls in Donkey Kong (a Famicom launch title, so likely ported during hardware development) renders as #800000 on Nintendo's emulators, that's probably the same RGB color that was in the PCX file's palette that informed the tuning of the NTSC rasterizer's output for that particular palette entry.
Nintendo's RGB palettes have typically been what many would call "crap". The NES Classic probably got closest. Wii VC was way too dark. None of them were terribly accurate.
I don't think they used PCX along the way at all. At one point, they'd digitize graphics by using LEDs to scan filled-in graph paper, one tile at a time. I think the closest Nintendo ever got to actually having an official RGB palette for the NES in the old days was the RGB PPU palette (which was very, very different from the colors output by the composite PPU).
The NES PPU is fairly well understood, and palettes can actually be calculated based on that (as opposed to some of the user-created palettes done with visual analysis). All of Nintendo's RGB palettes (AFAIK) are known and dumped.
I love the work you've done on your emulators and your writing. I look forward to the new site and more good content!
If I may offer a suggestion for getting more people involved: please give serious consideration to a UI paradigm rework. I would love to contribute to higan, but I find its interface completely impenetrable! In older versions, I was able to sorta figure out how to use icarus to import a ROM after a while, but I tried again a few months back and I couldn't for the life of me get anything to run.
You are a treasure to the emulation community, so keep on doing what you love however you want to do it. I just wish I could figure out how to use higan!
I'm currently in the middle of a higan GUI rewrite, and I'd be most happy to get some feedback! It's really difficult because we can't simplify things in ways that would break edge cases on other systems.
Big thanks to you as well as everyone else working on preserving this particular slice of our cultural heritage. I hope that your work will be appreciated more, esp. by game developers and policymakers, as time goes on.
Good stuff!
I like having both options: using the raw colours (which the displays of the time were unable to show properly), or getting as close as possible to the original experience.
As I understood it, those were not the two options discussed in the article.
Instead it was: use the color values specified in the code, ignoring that screens have changed and that they won't look the same at all, or account for that and make it look right.
The goal is really emulation, making it look like the original. Making games look better than the original is a harder job, that's more remastering the game to take advantage of newer hardware, like what is done on rereleases for new systems.
There are two kinds of emulation, though. There's emulation of experience, and then there's what an AV engineer would call a "monitor": a reproduction that shows you the data the system puts out, as-is, without any "client-side" transforms, even if most end-user equipment does them. Flat response curves and all that. In emulation, monitor-type emulators are often associated with debuggers, or a component of them.
Monitor-type emulators are more helpful in a platform SDK, as you can view the output of them through different end-user equipment to determine how you want to "master" your game's assets. Given that byuu's emulators aim to be cycle-accurate and bug-for-bug compatible with their consoles, they're pretty great for developing homebrew against, so I'd expect at least the option to have a "monitor" mode for video (and audio!) output.
Kind of curious what you'd use the monitor-type for in this case. For example, with the Game Boy one, you'd want to be able to see the jank oversaturated colors that are only there to make the actual screen display slightly better? Or maybe that one is just a bad example?
No, that's a perfect example. You'd want to be able to see those colors, to be sure that those are the colors your program is actually attempting to render. When you see the "right" jank, that means you've mastered everything correctly and it'll look right when you write it to a cart and test it on real hardware. It's a way to eyeball the intermediate layer, to know when things are going wrong there, without having to interpret the problem "through" the filters that apply after it.
byuu, thanks for doing more writing. I have enjoyed following you on Twitter and look forward to reading more in depth work from you. Will definitely sign up for the RSS feed.
Writing emulators seems like a really hard thing to do.
What's in it for you? Is it the technical challenge, or something else people on the outside might not know?
I was always into RPGs as a kid, and when I found out the US missed out on tons of them, I got into reverse engineering the games and fan translating them. I worked on Dragon Quest 5, Der Langrisser, Mother 3, etc.
Around 2004, I learned most of my code didn't run on real hardware, but ran on emulators. I found out why (the main reason: writing to video memory while the screen is drawing), and submitted patches to the SNES emulators of the time to work like real SNES hardware, but they were rejected because too many fan translations already relied on the old mistakes.
No one back then seemed to care about how accurate emulators were, so I set about writing my own SNES emulator with the goal of making it as perfect as possible, and it kind of spiraled out of control from there. Within a few years we were decapping chips to extract coprocessor firmware, I had to emulate the Game Boy for the Super Game Boy cartridge, the GBA CPU for this one Japanese chess game that had an ARM CPU in it, etc.
I guess I just really like the challenge, and never stopped adding more systems over time. The more systems I emulate, the more likely I already have most or all of the chips used in a new system emulated. E.g., ColecoVision took me an hour because I already had a Z80 core, a TMS9918 video core, and an SN76489 audio core. I can definitely see now how MAME turned into what it is today.
These days, the emulation is the easy part, and organizing all of this code (around 600,000 lines and counting), and getting all of these really unique pieces of hardware to all work in the same GUI, has become the challenging part. I have this really elaborate tree-view of the emulated systems to support zany things like the Genesis "Tower of Power", a complex scheduler that tracks timeslices in attoseconds, practically a full professional DSP stack for mixing multiple audio streams, Reed-Solomon decoding for CD systems, etc. I'm always learning new stuff, and there's always more to improve. I worry that I won't be able to wrap up higan in my lifetime.
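(For the curious, the attosecond trick is about comparing chips with completely unrelated clock rates using only integer math. A highly simplified sketch of the idea, not the actual higan code:)

```cpp
#include <cstdint>

// Each emulated chip tracks elapsed time in attoseconds (1e-18 s), so chips
// with unrelated clock rates can be compared without floating point.
struct Thread {
  uint64_t scalar;     // attoseconds that pass per clock tick
  uint64_t clock = 0;  // elapsed time in attoseconds

  explicit Thread(uint64_t frequency)  // clock rate in Hz
  : scalar(1'000'000'000'000'000'000ull / frequency) {}
  // (real code has to handle the rounding in that division more carefully)

  void step(uint64_t cycles) { clock += cycles * scalar; }
};

// Run whichever chip is behind until it catches up to the other.
inline bool behind(const Thread& a, const Thread& b) { return a.clock < b.clock; }
```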
There's not a lot of money in emulation, at most I've been offered $2500 for commercial licenses, but showing my hobby work to my employers landed me my last two jobs in software engineering, the latter of which is in Tokyo, Japan. Emulators literally got me halfway around the world. And I even got to work with the developers on Stephen Hawking's voice machine software at one point.
There's been some downsides, and I had a lot of maturing to do over the years, but on the whole, I wouldn't trade this hobby for anything.
> the GBA CPU for this one Japanese chess game that had an ARM CPU in it
I was curious and wanted to read more about it. Evidently Wikipedia's source is your website, but it seems like you took the page offline, and the Web Archive doesn't have it either.
If I recall correctly, the article in question was just "we have now dumped the firmware from all known SNES co-processor chips".
The first SNES co-processor chip, the DSP-1, was based on a weird NEC DSP architecture, and if I recall correctly the part-number was figured out from markings on the chip die, and digging through NEC spec-sheets looking for things that approximately matched the chip's known capabilities. The instruction set and encoding was puzzled out by hand.
Luckily, DSP-2, DSP-3, and DSP-4 all used the same NEC DSP core, just with different firmware, so the same emulator could be used for all of them.
The ST010 and ST011 used a slightly different NEC DSP core, so they required a little more work, but after handling the DSP-1 they weren't too difficult.
The ST018 was incredibly daunting to begin with, since its firmware was much larger than all the other co-processors', and there were no identifying marks on the CPU die and no product sheets to dig up. As a last-ditch effort, somebody just opened up the firmware in a hex editor and tried to figure out the instruction encoding from first principles... and eventually they said "that looks familiar", and sure enough it turned out to be the most popular CPU architecture on the planet.
There was also the Cx4, where segher had to reverse engineer the entire instruction set because it's a totally custom, undocumented ISA based on the Hitachi HG51BS architecture.
Specifically, I think the giveaway was the 4-bit condition codes at the beginning of ARM instructions. The code for always executing an instruction is 0b1110, and seeing almost all 32-bit words start with the same non-zero nibble is rather distinctive.
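That heuristic is easy to verify yourself: histogram the top nibble (the condition field) of every 32-bit word in a dump and see whether 0xE dominates. A quick sketch:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Count the top nibble (ARM's condition field) of every 32-bit word.
// In ARM code, 0xE ("always execute") should dominate overwhelmingly.
void nibbleHistogram(const std::vector<uint32_t>& words) {
  int counts[16] = {};
  for (uint32_t w : words) counts[w >> 28]++;
  for (int n = 0; n < 16; n++)
    std::printf("%X: %d\n", n, counts[n]);
}
```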
That's not a bad idea. When I was restoring older articles on my new site's CMS Markdown format, I stopped at around 2016, but I could go back to older ones.
Writing an emulator was on my list of “wizardly things beyond my reach” (along with compilers), but I highly recommend burning down that list.
While creating a complete and shareable emulator is a big undertaking, making a simple one for fun is surprisingly not too bad. You start with the CPU emulation, and there are tons of comprehensive tests, so you just fix those until you’re done.
You should expect to do a lot of reading though, and you’ll become very familiar with how the system you’re emulating works :-)
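To give a feel for it, the heart of the thing is just a fetch/decode/execute loop. A toy sketch (the two opcodes are 6502-flavored; a real core has a few hundred cases):

```cpp
#include <cstdint>

struct CPU {
  uint16_t pc = 0;             // program counter
  uint8_t  a  = 0;             // accumulator
  uint8_t  memory[65536] = {};

  void step() {
    uint8_t opcode = memory[pc++];        // fetch
    switch (opcode) {                     // decode
      case 0xA9: a = memory[pc++]; break; // execute: load immediate into A
      case 0xEA: break;                   // execute: no-op
      // ...fix one failing test at a time until every opcode is in here...
    }
  }
};
```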
Also on this list: write your own toy programming language, even if it's just an interpreter that builds ASTs. Nothing broadened my view of computer programming more than that.
I think for many people emulators are a sweet spot between technical challenge and potential usefulness. You can actually improve the state of the art, it's fun and challenging, and people will actually be thankful!
What led you to repeat bits in the 9->24 example instead of rescaling the values, i.e. (n * 255) / 0b111? Repeating bits seems like it could give a slightly uneven curve (though not significantly so), and I would have expected the DAC to roughly do the equivalent of the divide.
Historically as a graphics programmer I always see rescaling, potentially with a response curve like sRGB on the back end.
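For what it's worth, the two come out nearly identical; bit replication is an integer-only approximation of n * 255 / 7, and both hit 0 and 255 exactly at the endpoints. A quick comparison for the 3-bit case:

```cpp
// Compare 3-bit -> 8-bit expansion by bit replication vs. rescaling.
// Replication (n << 5 | n << 2 | n >> 1) approximates n * 255 / 7
// without a divide; the two differ by at most one step.
#include <cstdio>

int main() {
  for (int n = 0; n < 8; n++) {
    int replicated = (n << 5) | (n << 2) | (n >> 1);
    int rescaled   = n * 255 / 7;
    std::printf("%d: replicate=%3d rescale=%3d\n", n, replicated, rescaled);
  }
}
```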