Hacker News

The hardware may seem very limiting, but in fact you can often exploit the limitations to achieve even better effects --- for example, the LCD's slow pixel response time means you can use temporal dithering to go beyond 2 bits of grayscale, and I believe modifying the GPU registers multiple times per frame, while it's drawing, is another relatively common trick.
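The temporal-dithering idea can be sketched in software (a hypothetical simulation, not real GB code): alternate a pixel between two of the four hardware shades on successive frames, and the slow LCD averages them into an intermediate shade. The function name and the error-diffusion scheme below are my own illustration, not from the thread:

```python
# Temporal dithering sketch (hypothetical, not real GB code): a pixel that
# alternates between two of the four 2-bit hardware shades on successive
# frames is perceived as an in-between shade, because the slow LCD blurs
# consecutive frames together.

def dithered_frames(target, n_frames):
    """Approximate a 3-bit shade `target` (0-7) using 2-bit shades (0-3).

    Returns the 2-bit shade to display on each of `n_frames` frames,
    chosen by diffusing the quantization error over time so the average
    converges on target / 7 * 3.
    """
    want = target / 7 * 3              # desired shade on the 0-3 scale
    error = 0.0
    frames = []
    for _ in range(n_frames):
        shade = round(want + error)    # nearest displayable shade
        shade = max(0, min(3, shade))
        error += want - shade          # carry the residual forward
        frames.append(shade)
    return frames

frames = dithered_frames(target=4, n_frames=10)   # 4/7 of full scale
print(frames)
print(sum(frames) / len(frames))   # averages near 4/7 * 3 ≈ 1.71
```

Real demos do essentially this per pixel, swapping between two prepared tile sets every frame.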

You can look at some of the GameBoy demos from the demoscene to get an idea of what others have been able to get from the hardware:

http://www.pouet.net/prodlist.php?platform%5B%5D=Gameboy&pag...




One very common effect is to change the x offset of the background mid-frame to create parallax effects. The introduction to Link's Awakening does that for instance: https://youtu.be/qoj8F0ymwpI?t=23s

It looks like multiple background layers are moving at different speeds, but of course on the GB you only have one background layer, so it's just the game modifying the offset when the raster reaches certain lines.

To help with that, the console lets you configure an interrupt that fires when the GPU reaches a given line, so you don't have to poll the line register or use an external timer.
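Here's a toy software model of that trick (my own illustration, not real GB code): a per-scanline "renderer" reads the scroll register once per line, and a hypothetical LYC-style interrupt handler rewrites it when the raster reaches given lines, giving each horizontal band its own scroll speed:

```python
# Toy model (not real GB code) of mid-frame scroll changes: the renderer
# reads the scroll value SCX once per scanline, and a simulated
# "line == LYC" interrupt rewrites SCX at chosen lines, so each band of
# the single background layer scrolls at its own speed -> parallax.

SCREEN_LINES = 144   # visible scanlines on the GB

def render_frame(frame, splits):
    """splits maps a scanline number to a scroll speed (pixels per frame).

    Returns the effective SCX used on each of the 144 visible lines,
    emulating an interrupt that fires at each split line.
    """
    scx = 0
    out = []
    for line in range(SCREEN_LINES):
        if line in splits:                     # interrupt fires here
            scx = (splits[line] * frame) % 256 # 8-bit scroll register
        out.append(scx)
    return out

# Three horizontal bands scrolling at different speeds.
splits = {0: 1, 48: 2, 96: 4}
scanline_scx = render_frame(frame=10, splits=splits)
print(scanline_scx[0], scanline_scx[50], scanline_scx[100])   # 10 20 40
```

On real hardware the handler would simply write the new offset to the scroll register between scanlines; the model just shows why the bands appear to move independently.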


That shows how the design of the GB dates back to the era of the NES and C64. The tight coupling between the CPU and the CRT beam meant you had similar interrupts on those systems.

And since those systems supported color displays (in a limited fashion), this was used for things like swapping palettes mid-draw.

While it takes a whole different level of effort, it is quite impressive what can be done on limited hardware when there aren't multiple layers of abstraction on top.


Just about every architecture works like this, because ultimately, to drive a screen, you need to send display data down the wire in order.

Modern systems are so capable that raster interrupts wouldn't add anything, but if you update the display without vsync, the tearing you see shows that the screen is still scanned out line by line.


These days I would have thought that an LCD or OLED screen would be able to address each pixel directly.

The reason the CRT required the data "serially" was the electron beam that refreshed the phosphor.


All the (uncompressed) video protocols still work as they did in the good old days of CRTs: you have the notion of horizontal and vertical blanking, and the image is sent top to bottom, left to right (well, progressive video is, but let's not get into interlacing).

Actually, in many protocols you have to keep the blanking periods because people have crammed tons of things into them over the years (audio, subtitles, timecode, remote-control protocols, various metadata...).

Having random access to the pixels would be beneficial if you only wanted to update specific portions of the screen, but then you have to know which parts changed, either by diffing each frame against the previous one or by being tightly coupled with the drawing code and tracking which regions were touched. With very dynamic content like movies or video games, however, the entire screen often changes at once, so you have to support full-frame updates anyway, and the extra complexity of optimizing the simpler cases is probably not worth it.
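The diffing approach mentioned above can be sketched like this (a minimal illustration with made-up tile sizes and names, not any real display interface): split the framebuffer into 8x8 tiles and report only the tiles that differ between frames, which is the set a random-access interface could update selectively:

```python
# Sketch of the "diff each frame against the previous one" approach:
# split the framebuffer into 8x8 tiles and yield only the tiles that
# changed. A random-access display interface could then rewrite just
# those tiles instead of streaming the whole frame.

TILE = 8

def changed_tiles(prev, curr, width, height):
    """prev/curr are flat lists of width*height pixel values.
    Yields (tile_x, tile_y) for every 8x8 tile that differs."""
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            for y in range(ty, min(ty + TILE, height)):
                row = y * width
                end = min(tx + TILE, width)
                if prev[row + tx : row + end] != curr[row + tx : row + end]:
                    yield (tx // TILE, ty // TILE)
                    break   # tile already known dirty, skip its other rows

W, H = 32, 16
prev = [0] * (W * H)
curr = list(prev)
curr[5 * W + 20] = 3          # one pixel changes, inside tile (2, 0)
print(list(changed_tiles(prev, curr, W, H)))   # [(2, 0)]
```

The cost is exactly what the comment points out: you pay for a full-frame comparison (or for instrumenting the drawing code) even when everything changed anyway.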


What features would that add? The only one I can think of is saving energy on static content, but you can mostly achieve that by varying the refresh rate, and most of the energy used by the display goes to the backlight anyway.




