Millisecond-level feedback is incredibly exciting. It's only at these rates that human perception seems to stop detecting the disparity between the real world and digital interaction.
This strongly reminds me of a video[1] from Microsoft Research a few years ago in which touch-based interaction was demonstrated at 1ms latency. It looks strikingly more realistic than even the 10ms level.
>only at these rates that the limits of human perception seem to no longer detect the disparity
Only if you use the brute-force approach of vomiting pictures in the general direction of the eye.
The human visual system is remarkably limited and clever at the same time. Your eye doesn't really capture the whole picture at once: you have a sizable blind spot in each eye, part of your nose obscures your vision, you can only see sharp shapes in the center of your visual field and fast movement at its edges, and the raw data stream is a shaky, blurry, fragmented mess. It's the brain that filters and glues it all together into a coherent picture.
Example: go to a mirror and look into your own eye, then find another person and look into theirs. You'll be surprised to find that yours looked steady, while theirs darts all over the place.
Now check out your blind spot http://io9.com/5804116/why-every-human-has-a-blind-spot---an...
Saccades produce a lot of blurry artefacts that are simply thrown away. I read somewhere that the brain discards about 2-3 hours' worth of visual data a day.
We already have incredibly fast micromirrors, so I wonder if someone is working on a microprojector that tracks saccades and displays only the part of the picture the eye is currently fixated on. This would allow constructing high-resolution scenes with a lower-resolution projector: a 120-degree FOV requires >500 Mpx, but you can only see ~7 Mpx at a time. A quick back-of-napkin calculation tells me you could bump the perceived resolution of the Oculus by ~40x.
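For what it's worth, here is that back-of-napkin calculation spelled out, using only the figures from the comment (both are rough assumptions, not measurements):

```python
# Naive estimate of the saccade-tracking projector's savings, using the
# comment's own assumed figures: ~500 Mpx to render a 120-degree FOV at
# full foveal sharpness, but only ~7 Mpx actually resolved at any instant.
full_fov_mpx = 500.0  # assumed: pixels to cover the whole FOV sharply
foveal_mpx = 7.0      # assumed: pixels the fovea resolves at one fixation

naive_savings = full_fov_mpx / foveal_mpx
print(f"naive perceived-resolution multiplier: ~{naive_savings:.0f}x")  # ~71x
```

The naive ratio comes out closer to ~70x than ~40x; presumably the lower figure budgets slack for eye-tracker error and latency. Either way, treat these as order-of-magnitude estimates.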
That video is from 2012 and represents 100ms as typical at that time. I wonder if we have made any progress in the three years that have passed. Does anyone know what the touch latency of typical Android device/application would be?
I'd also be curious to know what sort of keyboard/mouse-to-display latency PCs typically have. I've heard some talk about display input latency, but I don't remember hearing anything about, e.g., keyboard latency or OS latency in this context.
As much as I agree that latency reigns supreme for the human mind, I find that video just a little bit disingenuous, since it compares vector drawing with bitmap drawing.
What about drawing in bitmap with low latency and building the vector drawing from buffered user input in the background? Then replace bitmap with vector when it's available. It could be low latency and vector drawing at the same time.
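That two-path idea can be sketched roughly as follows. This is a hedged illustration, not a real graphics API: the class, the sentinel-based queue, and the stand-in "vectorization" (which just thins the samples into a polyline) are all my own inventions to show the shape of the approach.

```python
# Sketch of the hybrid scheme: echo input immediately as raw bitmap
# samples (low latency), vectorize the buffered stroke in a background
# thread, then swap the vector path in once it is ready.
import queue
import threading

class HybridStrokeRenderer:
    def __init__(self):
        self.bitmap_points = []       # plotted instantly, pixel by pixel
        self.vector_path = None       # replaces the bitmap when fitted
        self._pending = queue.Queue() # buffered input for the slow path
        self._worker = threading.Thread(target=self._vectorize, daemon=True)
        self._worker.start()

    def on_input(self, point):
        # Fast path: draw the raw sample right away.
        self.bitmap_points.append(point)
        self._pending.put(point)

    def finish_stroke(self):
        self._pending.put(None)       # sentinel: stroke is complete
        self._worker.join()

    def _vectorize(self):
        pts = []
        while True:
            p = self._pending.get()
            if p is None:
                break
            pts.append(p)
        # Stand-in for real curve fitting: keep every other sample as a
        # simplified "vector" polyline, then retire the bitmap.
        self.vector_path = pts[::2]
        self.bitmap_points = []

r = HybridStrokeRenderer()
for p in [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3)]:
    r.on_input(p)
r.finish_stroke()
print(r.vector_path)  # the simplified path has replaced the raw samples
```

A real implementation would fit Bézier curves rather than drop points, but the structure is the same: the user never waits on the expensive fitting step.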
I'm so glad that we've finally moved past the silly arguments that human reaction times are too slow to need more than 60fps. It's not about reaction times, it's about latency. The future, in this regard, appears bright.
I remember getting so excited watching Johnny Lee demo the movable projected displays on YouTube 7 years ago, and thinking that this could be the future - youtube.com/watch?v=liMcMmaewig
This looks like a really big step toward getting something like this into every house :)
[1] https://youtu.be/vOvQCPLkPt4?t=52s