
Speaking of drawing canvases not corresponding to the hardware, I have an incomplete memory and I wonder if any of you can help:

I recall that the original PlayStation, or WebTV, or both, had a virtual canvas with a higher resolution than a typical NTSC television of the time. And I vaguely recall a technique whereby the image shown on the NTSC TV alternated rapidly between two full-screen frames (30 frames per second? I actually think it might have been much slower; I think you could perceive the flickering alternation when motion was frozen, but this was not really an issue when games were being played or TV shows were being watched). Through some trick of human perception, the alternating frames yielded the perception of a higher-resolution display. I recall that Microsoft's early Interactive Television set-top boxes did not use this technique (I was a usability specialist working on it at the time), and the quality of text rendering in particular was noticeably poorer. Anyway, I have been unable to find documentation for how this technique worked, and I find it hard to google. Any links would be appreciated!




As mentioned by the sibling post, I think you're describing interlacing, which was the way broadcast TV worked but was not usually used by video game consoles. However, the PlayStation did support interlaced (i.e. high-resolution) modes.

It was a trade-off. NTSC CRT TVs displayed 60 "fields" per second (50 for PAL), with every other field offset vertically by a tiny amount, creating a single frame 30 times per second (25 for PAL) with double the vertical resolution of an individual field. This did create artifacts, most noticeable with fast-moving objects; in particular, it was one of the effects behind the really stupid idea that creatures called "rods" existed which were invisible to human eyes but visible to cameras [0].
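To make the field/frame relationship concrete, here's a minimal sketch (not tied to any particular console's hardware; the 240-line field height and 640-pixel width are just illustrative numbers) of how two fields "weave" into one full-height frame:

  # Minimal sketch of interlaced "weave" reconstruction: two fields captured
  # a field-period apart are combined into one full-height frame.
  # FIELD_LINES and WIDTH are illustrative, not any console's exact spec.

  FIELD_LINES = 240   # visible lines per field (illustrative)
  WIDTH = 640         # pixels per line (illustrative)

  def weave(even_field, odd_field):
      """Interleave two fields into a single frame with twice the lines.

      even_field[i] lands on frame line 2*i, odd_field[i] on line 2*i + 1.
      Because the two fields were captured at different moments, fast-moving
      objects show the familiar "comb" artifacts after weaving.
      """
      frame = []
      for even_line, odd_line in zip(even_field, odd_field):
          frame.append(even_line)
          frame.append(odd_line)
      return frame

  # Toy usage: fields full of 0s and 1s so the weave pattern is visible.
  even = [[0] * WIDTH for _ in range(FIELD_LINES)]
  odd  = [[1] * WIDTH for _ in range(FIELD_LINES)]
  frame = weave(even, odd)
  assert len(frame) == 2 * FIELD_LINES          # 480 visible lines
  assert frame[0][0] == 0 and frame[1][0] == 1  # alternating source fields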

Video game consoles typically skipped the extra half-scanline at the end of every field that would normally be used to offset the next field and treated each field as a full frame. Lower vertical resolution, but higher FPS and simpler video hardware.
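The half-line is the whole trick. Standard NTSC is 525 lines per frame, i.e. 262.5 lines per field; that fractional line is what shifts alternate fields vertically. A quick back-of-the-envelope sketch (plain arithmetic, no hardware specifics) of what happens when a console emits a whole 262 lines per field instead:

  # Why skipping the half-line kills the interlace offset.
  NTSC_LINES_PER_FRAME = 525
  FIELDS_PER_FRAME = 2

  interlaced_field = NTSC_LINES_PER_FRAME / FIELDS_PER_FRAME  # 262.5 lines
  console_field = 262                                         # whole lines only

  def field_offset(lines_per_field, field_index):
      """Vertical phase (in lines) at which a given field starts.

      A half-line remainder means odd fields start half a line lower
      (the interlace offset); no remainder means every field scans over
      the same positions.
      """
      return (lines_per_field * field_index) % 1

  print(field_offset(interlaced_field, 1))  # 0.5 -> odd fields sit between even lines
  print(field_offset(console_field, 1))     # 0.0 -> fields overlap: the "240p" look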

Given that WebTV would have had a lot of text and not a lot of motion, I would guess that it used an interlaced mode for the extra resolution.

Earlier consoles, like the NES, sometimes had flickering sprites simply because the hardware could only display so many sprites per scanline (8 in the NES's case), so rendering was juggled on a frame-by-frame basis to fit more.
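A hedged sketch of that juggling trick (not actual NES code, just the general idea): rotate which sprites get the limited per-scanline slots each frame, so every sprite is visible at least some of the time at the cost of visible flicker.

  MAX_PER_LINE = 8   # the NES PPU's per-scanline sprite limit

  def visible_sprites(sprites_on_line, frame_number):
      """Return the sprites actually drawn on this scanline this frame.

      `sprites_on_line` is a list of sprite IDs overlapping the scanline.
      Rotating the starting index by the frame number time-multiplexes the
      hardware slots across all the sprites.
      """
      n = len(sprites_on_line)
      if n <= MAX_PER_LINE:
          return list(sprites_on_line)
      start = frame_number % n
      rotated = sprites_on_line[start:] + sprites_on_line[:start]
      return rotated[:MAX_PER_LINE]

  # Toy usage: 10 sprites share a scanline; each frame a different 8 are drawn.
  line = list(range(10))
  for frame in range(3):
      print(frame, visible_sprites(line, frame))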

Also an interesting note: the Atari 2600 (VCS) totally had the ability to do interlacing, thanks to the fine-grained control over sync offered (read: forced) by the rather spartan Television Interface Adapter. It was never used by a commercial game, but homebrewers have created games using it. It is also worth noting that VCS games frequently generated out-of-spec sync signals, which can really screw with capture hardware.

[0] https://en.wikipedia.org/wiki/Rod_(optical_phenomenon)


Are you perchance trying to describe interlaced video? https://en.wikipedia.org/wiki/Interlaced_video



