Laser pixels could conceivably deliver low persistence and a multiplied 'frame rate' by splitting each burst of light into, say, four fibers and looping three of them to provide a physical delay. The eye would then be hit four times per frame, giving a strobe rate of 240 Hz on a 60 Hz display.
Such a display would be made of pure cash, of course. Can't have everything.
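For scale, here's a quick back-of-the-envelope on those delay loops. These are my own assumptions, not anything from a real design: standard glass fiber with n ≈ 1.5, so light travels at roughly 2e8 m/s:

    # Fiber length needed for each delay tap (assumed: 60 Hz source,
    # 4-way split, light at c/1.5 in glass fiber)
    C = 3.0e8                     # speed of light in vacuum, m/s
    N_GLASS = 1.5                 # refractive index of typical fiber
    FRAME_HZ = 60
    TAPS = 4                      # one direct fiber + three delay loops

    v = C / N_GLASS                           # ~2e8 m/s in fiber
    strobe_interval = 1 / (FRAME_HZ * TAPS)   # ~4.17 ms between strobes

    for tap in range(1, TAPS):
        delay = tap * strobe_interval
        print(f"loop {tap}: {delay*1e3:.2f} ms -> {v*delay/1e3:,.0f} km of fiber")
    # loop 1: 4.17 ms -> 833 km
    # loop 2: 8.33 ms -> 1,667 km
    # loop 3: 12.50 ms -> 2,500 km

Hundreds to thousands of kilometers of fiber per delay loop, which is where the "pure cash" comes in.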
Those four flashes of the same frame would still have the strobing problem when the eye is tracking a moving object. Instead of perceiving a solid, moving object, I think we'd see four copies of it, since the eye moves between strobes but the image does not.
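To put a number on the "four copies" effect, with hypothetical figures (a 60 Hz, 4-strobe display and an eye-tracked object moving at 960 pixels/second):

    # Spacing between ghost images when the eye tracks motion but all
    # four strobes repeat the same frame (illustrative numbers only)
    pixels_per_sec = 960          # assumed speed of the tracked object
    frame_hz = 60
    strobes_per_frame = 4

    strobe_interval = 1 / (frame_hz * strobes_per_frame)   # ~4.17 ms
    ghost_spacing = pixels_per_sec * strobe_interval       # px between copies
    print(f"{ghost_spacing:.0f} px between copies")        # 4 px

The eye sweeps on between strobes while the image stays put, so at that speed the copies land about 4 px apart.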
Could these problems also be addressed by using a regular, full-persistence display reflected in a moving mirror that tracks the eye as it saccades?
As an aside, this stuff is really fascinating, and it's great to see the field of human-computer interaction pushing up against formerly unknown biological phenomena.
Given perfect eye tracking, that could work for the vestibulo-ocular reflex case, where the eye counter-rotates to keep the world stable during head rotation, provided that no object is close enough for the parallax from the eye moving through space during the rotation to be noticeable.
It would not fix smooth-pursuit tracking of a moving object while the head is stationary, though, as you could not selectively shift the moving object to match the eye's rotation while also keeping the background stationary.
Mirrors were my first thought as well, though in a slightly different way: you could have the mirror track the scene instead, counteracting head movement during a single frame's persistence and then snapping back between frames to continue the motion (probably with a display that could provide partial-frame persistence). It seems like this would address VOR, which from my reading of the article sounds like a bigger problem than saccades. A quick googling turned up piezo mirrors [1] with specs that look like they'd be in the right ballpark.
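A rough deflection budget for such a mirror, under my own assumed numbers (a fast head turn of ~300 deg/s and 2 ms of within-frame persistence to counteract):

    # Mirror tilt needed to freeze the scene during one frame's persistence
    # (assumed: 300 deg/s head rotation, 2 ms persistence)
    import math

    head_rate_deg_s = 300.0       # assumed peak head rotation speed
    persistence_s = 0.002         # assumed per-frame persistence

    optical_deg = head_rate_deg_s * persistence_s   # scene sweep per frame
    mech_deg = optical_deg / 2    # a mirror deflects the beam by 2x its tilt
    print(f"optical: {optical_deg:.2f} deg, mechanical: {mech_deg:.2f} deg "
          f"({math.radians(mech_deg)*1e3:.1f} mrad)")
    # optical: 0.60 deg, mechanical: 0.30 deg (5.2 mrad)

A few milliradians at sub-millisecond response does look like piezo tip/tilt territory.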
There are some excellent new web-based motion-test animations that demonstrate eye-tracking-based motion blur, which is quite relevant to Mike's paper:
View these links in Chrome or another web browser that supports perfect VSYNC animations. The pages will attempt to detect whether your browser supports VSYNC. IE10+ works well (up to 100 Hz), Chrome works well (up to >144 Hz), and Safari 6+ works too (including iPads). Firefox 22 will judder too much; Firefox 24 pre-beta adds VSYNC support. I'm the person who convinced Mozilla to add native support for 120 fps animations (Bugzilla@Mozilla #856427), now in the FF24 pre-beta.
I wonder if anyone has ever constructed a real-world replica of a Tron-like grid to see whether the visual instability mentioned about 60% of the way down the page is caused by being in a very dark environment with very bright lines running across it, rather than being an artifact of the display.
The effect happens in a real room with a strobe light going, but that's the same phenomenon: low persistence. It also happens in other virtual settings with lower contrast and brighter overall illumination (although it's less pronounced), so it's not just a matter of a dark environment and bright lines. It doesn't happen in the exact same virtual Tron-like room if persistence is set to full, and it's reduced at half persistence. So it does appear to be an artifact of low persistence.
If you're running "ToastyX Strobelight" (an easy LightBoost programming utility) on a 120 Hz LightBoost-compatible monitor, hit Control+Alt+Plus to enable LightBoost strobing and Control+Alt+Minus to disable it. Control+Alt+1 programs the strobe to 1.4 milliseconds, while Control+Alt+0 programs it to 2.4 milliseconds (oscilloscope measurements). The difference between 1.4 ms and 2.4 ms is actually noticeable in the castle at the top of http://www.testufo.com/#test=photo&pps=1440
It's a very interesting exercise in hacking LightBoost 2D into a programmable-persistence computer monitor (100% software hack; no hardware modifications needed; no 3D kit needed).
>I have no way of knowing any of that for sure, though, since I’ve never seen a 1000 Hz head-mounted display myself, and don’t ever expect to.
I'm not so sure. On the first post in the sequence, someone left a comment about DLP, which has extremely fast switching times. It was agreed to be impractical on the grounds that DMD pixels are either fully on or fully off and can only achieve intermediate levels via PWM, which takes considerable time.
I initially didn't think much about it, but it festered in my head for a few hours, and I started researching it a bit. I think it could actually work.
The thing is, our eyes are expecting continuous input, yes, but they don't actually have the bandwidth to pull in anywhere near a thousand 24-bit images every second. The problem is the quantization, not the actual amount of information. Obviously a DMD is not going to fix the spatial quantization, but I believe it can fix the quantization in time.
The DMD I was looking at can be switched by its controller at 2.9 kHz for arbitrary input and 4.2 kHz for pre-loaded input. From what I can tell there's other overhead going on there, and it looks like you should be able to get it up to about 5.5 kHz if you built a custom controller.
What I propose is to run the display at 5500 fps... at 1 bit per channel. Hear me out!
If we set the threshold for detecting small variations in light at 200 Hz (this is what's recommended for PWM dimming of LEDs, for example), that gives us about 28 pulses to work with inside the eye's tolerance. Now we have some options. Maybe the cost of DMDs goes way down in the next few years; in that case we could stack them (doubling the LED intensity for each successive DMD) and get exponentially more levels with each one. As it stands now, that would get expensive very quickly: with only one per eye, the QVGA version of this is already at about $200.
Of course we do want color, so we'd divide our time between the RGB channels: say, 6 pulses for blue, 8 for red, and 14 for green.
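In numbers, using the 5.5 kHz figure and the 200 Hz flicker floor from above:

    # Pulse budget for 1-bit temporal modulation (figures from the
    # comment: ~5.5 kHz mirror rate, 200 Hz flicker threshold)
    import math

    mirror_hz = 5500
    flicker_hz = 200

    pulses = round(mirror_hz / flicker_hz)        # ~28 pulses per window
    split = {"red": 8, "green": 14, "blue": 6}    # pulses per channel (sums to 28)

    for ch, n in split.items():
        # n equal-weight on/off pulses give n+1 distinguishable levels
        print(f"{ch}: {n} pulses -> {n+1} levels (~{math.log2(n+1):.1f} bits)")
    # red: 9 levels (~3.2 bits), green: 15 (~3.9), blue: 7 (~2.8)

So roughly 3-4 bits per channel from equal-weight pulses alone, before any stacking or dithering tricks.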
So by using temporally dithered 1-bit frames, we should be able to get what our eyes perceive as continuous motion (still quantized to pixel boundaries, of course). And thinking at the 5 ms scale instead of the ~200 μs scale, we can layer on even more dithering. You'd be surprised how much you can get away with even using purely spatial dithering at only 6.5 bits per pixel, if you divide those bits up right.
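Here's a minimal sketch of what that temporal dithering could look like for a single pixel: first-order error diffusion (sigma-delta style) turning a target intensity into a 1-bit pulse train whose average tracks the target. The function name and structure are mine, purely illustrative:

    def temporal_dither(target: float, n_pulses: int) -> list[int]:
        """Emit n_pulses on/off states whose mean approximates target (0..1)."""
        bits, err = [], 0.0
        for _ in range(n_pulses):
            err += target
            bit = 1 if err >= 0.5 else 0   # fire the mirror once enough error accumulates
            err -= bit
            bits.append(bit)
        return bits

    train = temporal_dither(0.37, 28)
    print(train, sum(train) / len(train))   # mean = 10/28 ~ 0.357, close to 0.37

The accumulated error stays bounded within half a pulse, so over any 28-pulse window the eye integrates something very close to the target level.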
Edit: I realized I got mixed up between my modes of thinking. The controller doesn't drive the DMD at full speed because it leaves gaps between each channel. When I worked out that 5.5 kHz figure, I was still thinking of at least one DMD per channel, so you get even fewer colors than I thought.
One thing I forgot to mention, though, is that as far as I can tell from these datasheets, the bottleneck is bandwidth, by far. Having studied it for a while, I'm not entirely sure that we don't already have the technology to build a single-DMD, full-color 1 kHz DLP. There is no doubt, however, that you could build a full-color 1 kHz DLP using multiple DMDs. I'm just not sure it would fit in an HMD.
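For a sense of scale on that bandwidth (my arithmetic; QVGA is the part mentioned above, while 1080p is a hypothetical larger DMD):

    # Raw 1-bit frame bandwidth at 5500 fps
    fps = 5500
    for name, (w, h) in {"QVGA": (320, 240), "1080p": (1920, 1080)}.items():
        print(f"{name}: {w*h*fps/1e9:.2f} Gbit/s per DMD")
    # QVGA: 0.42 Gbit/s, 1080p: 11.40 Gbit/s

Sub-gigabit for QVGA is easy; at higher resolutions you're into multi-lane high-speed serial links, which is plausibly where the cost and size go.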
"I was initially skeptical of the importance of low persistence, preferring a push for 120hz, but it is a BIG DEAL."