
> The first detail is that most computer monitors run in 24-bit color mode, which provides 8-bits of color detail for the red, green, and blue channels. But most older game systems do not specify colors in that precision.

> For instance, the Sega Genesis encodes 9-bit colors, giving 3-bits per channel.

> The solution to this is that the source bits should repeat to fill in all of the target bits.

  000 -> 000 000 00...
  010 -> 010 010 01...
  011 -> 011 011 01...
  111 -> 111 111 11...
This is an interesting and efficient approach. Interpolating a 3-bit number (0-7) into an 8-bit one (0-255) can be done by dividing the first by 7 (to normalize it) and then multiplying it by 255 (to stretch it to the full range). The order of operations can be swapped to avoid smaller-than-1 floating-point numbers in the first step. So we basically need to multiply each number by 255/7 ≈ 36.4 and round.

  0 -> 0 * 36.4 = 0
  1 -> 1 * 36.4 ≈ 36
  2 -> 2 * 36.4 ≈ 73
  3 -> 3 * 36.4 ≈ 109
  ...
  7 -> 7 * 36.4 ≈ 255
So these multiplications give exactly the same results as repeating the bits in byuu's article, and bit operations are much cheaper. I have some intuition for why it works (we're increasing each "part" of the number by the same factor), but not a mathematical explanation.
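
For completeness, here is a small C check (my own, not from the article) that the rounded multiplication and the bit repetition agree on all eight inputs:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
      for(unsigned r = 0; r < 8; r++) {
        unsigned repeated = (r << 5) | (r << 2) | (r >> 1);  // bit repetition
        unsigned scaled = (unsigned)round(r * 255.0 / 7.0);  // normalize, then stretch
        printf("%u -> %3u %3u %s\n", r, repeated, scaled,
               repeated == scaled ? "ok" : "MISMATCH");
      }
      return 0;
    }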



The calculation `red = r << 5 | r << 2 | r >> 1` is equivalent to `red = floor(r * (1 << 5 | 1 << 2 | 1 >> 1))`, which is `red = floor(r * 36.5)`.


I thought it would be good to get people familiar with bit-twiddling, as you will be doing a whole lot of that when writing retro emulators.

This is one of my favorite sites on the internet: https://graphics.stanford.edu/~seander/bithacks.html


But please keep in mind that today's compilers are (most of the time) very smart. Hacker's Delight is somewhat outdated today. There's no longer any need to write "x << 2" instead of "x * 4".

I recommend Matt Godbolt's Talk(s) about compiler cleverness: https://www.youtube.com/watch?v=bSkpMdDe4g4 (see the 30 minute mark for the multiplication example)
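
To make that concrete, here's a toy pair of functions (mine, not from the talk); mainstream compilers such as GCC and Clang emit identical machine code for both once optimizations are enabled, which you can check on Compiler Explorer (https://godbolt.org/):

    unsigned timesFour(unsigned x) { return x * 4; }   // written as arithmetic
    unsigned shiftTwo(unsigned x)  { return x << 2; }  // written as bit manipulation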


I'd say it's about intent. Depending on the context, "x * 4" is less clear than the bit shift (for example, when packing multiple values into an integer).


The point isn't "always do multiplication" but to do what makes logical sense. Don't pick your operators for performance, pick them for readability. The compiler will handle the performance aspect for you.


Absolutely, couldn't agree more. I just wanted to clarify why the bit-shifting approach was equivalent to the previous comment's multiplication by 36.4.

> This is one of my favorite sites on the internet: https://graphics.stanford.edu/~seander/bithacks.html

Yes - I also enjoyed the book "Hacker's Delight", but haven't got round to reading the second edition yet.


The first time I found this page, I lost the rest of my afternoon reading it through. Highly recommend.


I think I get what you're aiming at in your explanation, but you're not being quite explicit enough about how those shifts map to multiplications, and having to assume a world where 1 >> 1 is 0.5 is... consistent, but counterintuitive.

But I think what you're aiming to say is:

r << 5 | r << 2 | r >> 1 amounts (because r is constrained to three bits, the shifted copies occupy disjoint bit ranges, so OR behaves like addition) to the same as

r << 5 + r << 2 + r >> 1

And r << 5 is r * 32, r << 2 is r * 4, and r >> 1 is floor(r / 2)

So the whole thing is

r * 32 + r * 4 + floor(r * 0.5)

which is the same as

floor(r * 32 + r * 4 + r * 0.5)

or

floor(r * 36.5)
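
And since floor(r * 36.5) equals (r * 73) / 2 in integer arithmetic, the whole expression collapses to a single multiply and shift (function name mine, just for illustration):

    // (r << 5) | (r << 2) | (r >> 1) == (r * 73) >> 1 for 3-bit r,
    // because r * 73 = (r << 6) | (r << 3) | r with no overlapping bits.
    unsigned expand3to8(unsigned r) {  // r in [0, 7]
      return (r * 73) >> 1;            // floor(r * 36.5)
    }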


The described bit operations are only suitable when the input range is a power of two, whereas the math you describe is suitable for almost all systems. That part of the article would do better to explain that you simply want to map your highest value to 0xFF and your lowest to 0x00, mapping linearly, rather than talking about mapping bits. (It also doesn't mention HDR or 10-bit output, both of which could improve visual quality in CRT emulation.)
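
A minimal sketch of that linear mapping (the function name is mine; maxInput is the highest value the source hardware can encode, assumed nonzero):

    #include <math.h>
    #include <stdint.h>

    // Map a component linearly so maxInput lands on 0xFF and 0 on 0x00.
    // Works for any level count, not just powers of two.
    uint8_t normalize(unsigned value, unsigned maxInput) {
      return (uint8_t)round(value * 255.0 / maxInput);
    }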

Also, by implicitly doing the mapping to 8-bit first and the gamma stuff later, some precision is lost, but this shouldn't be a concern until you reach around five input bits per component.


There's definitely room to improve the article. I'm intending it to be more of a Wiki-style site with additions over time.

Indeed, internally my emulator calculates 16 bits per channel, performs gamma, saturation, and luminance adjustment, color emulation, etc., and then reduces that to your target display. I have support for 30-bit monitors, but so far that only works on Xorg. I even considered using floating point internally for colors, but felt that was overkill in the end.


Here is a generic function that will convert any bit depth into any other bit depth:

    #include <stdint.h>

    uint64_t convert(uint64_t color, unsigned sourceDepth, unsigned targetDepth) {
      if(sourceDepth == 0 || targetDepth == 0) return 0;
      // Repeat the source bits until we have at least targetDepth bits.
      while(sourceDepth < targetDepth) {
        color = (color << sourceDepth) | color;
        sourceDepth += sourceDepth;
      }
      // Drop any excess low bits.
      if(targetDepth < sourceDepth) color >>= (sourceDepth - targetDepth);
      return color;
    }
In fact, you probably should use normalization when converting between arbitrary level counts. A floating-point multiply is almost certainly cheaper than a while loop. But hey, why not? Just don't use it for any real-time stuff unless you build a color palette cache.
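
A minimal sketch of such a cache (names and the 3-to-8-bit depths are my own choices for illustration), built once and then indexed per pixel:

    #include <stdint.h>

    static uint8_t palette[1 << 3];  // one entry per possible 3-bit component

    // Build the cache once; each pixel then costs a single table lookup.
    void buildPalette(void) {
      for(unsigned v = 0; v < (1 << 3); v++) {
        palette[v] = (uint8_t)convert(v, 3, 8);  // convert() from the comment above
      }
    }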



