It sounds like we would benefit from GPU and display manufacturers providing a set of standard low-level control primitives so that people like Carmack who know what they are doing can really play around with the entire pipeline without having to worry about all the things the cards and the displays are doing behind their backs.
For example, a GPU could have a set of standard settings with full buffering and all the other things that "help them win the framerate wars", but a developer should be able to turn all of that off when needed.
It's the same with displays. LCD manufacturers could, I suppose, allow a modern day "CRT Mode 13h" where you just have scan lines mapped to memory buffers and whatever shows up in those buffers gets turned into a pixel as quickly as possible.
Are there technical challenges preventing this from happening, or is it mainly inertia and lack of need from the current market?
3D graphics drivers used to have more debug options and visualization capabilities. However, unscrupulous individuals took advantage of these to cheat in video games so many vendors just ripped them out.
That's why Carmack is able to get modified versions of these drivers directly, but you and I are not.
I don't agree with that approach at all, but it's part of the reality.
>3D graphics drivers used to have more debug options and visualization capabilities. However, unscrupulous individuals took advantage of these to cheat in video games so many vendors just ripped them out.
Why the duck would video card manufacturers care if people cheat in video games?
Heck, why would even GAME manufacturers care if people cheat in video games?
Can somebody clear up if that is the real reason those debug options etc were taken out?
> Why the duck would video card manufacturers care if people cheat in video games?
They care because game makers care. If they want their logo in the pre-game splashes and the like, they need to not be a problem for the people who decide what goes there, or it will end up costing more than it costs the competition. They also certainly don't want someone like EA coming out and explicitly saying they don't recommend their line of graphics systems.
> Heck, why would even GAME manufacturers care if people cheat in video games?
Way back when, they didn't. Well, they did, but only for the first X weeks, after which any secrets and big ending reveals were public knowledge anyway; after that they'd leak cheat codes themselves to increase interest in the game from more casual players.
Now they care because their customer base cares. The perception that someone might get an advantage by using a particular card will put a lot of people off. Even if it isn't really possible (because the game is well enough designed that such hacks won't really give any advantage), the perception amongst the general public that it might be is enough to be concerned about.
> Can somebody clear up if that is the real reason those debug options etc were taken out?
Not unlikely: they were probably pig sick of people breaking things and blaming them for the resulting mess (I tweaked X and your card overheated and my computer crashed and I lost three days of unsaved work, waah, waah, waaaaaahhh).
Or people tweaking the settings, making things far worse in some circumstances, and then assuming the card is crap (and telling everyone) because it does X badly, without considering that their tweaks might have a little to do with that.
Or they were sick of getting so many support queries about the options, or of having to monitor popular forums for people distributing blatantly bad advice about them so they could nip the above problems in the bud. Time is money, and reputation management can be expensive, especially if you are having to do it retroactively.
Or all of the above. Basically people. People are a problem.
I never meant to imply it was the only reason, just that it was one of them. Another is the support considerations; each option you add to the drivers to tweak rendering output is yet another permutation of testing.
In addition, some of the options better fit the old fixed pipeline architecture of the early 2000s; forcing some of the rendering options older drivers used to have isn't really possible in a fully programmable world.
But to expound a bit on the cheating angle, one manufacturer (Asus) even went so far as to specifically market these features as a competitive advantage for customers of their products:
With that said, users that installed the debug versions of the windows DirectX framework (historically) and the various developer tools (such as nVidia insight) have some of the same capabilities that used to be built-in to older drivers.
Developers have combated this by using programs like punkbuster or rolling their own tools (Blizzard wrote "Warden").
Game makers care because if the game experience suffers because of cheaters, griefers and so forth, the customers won't come back. If you want to sell things to players in the game, the players have to want to play.
Anti-cheat is a big deal.
So I would say that undetectable debug features, ones that can be turned on without indicating this to the title (or cheat detection code on the platform) are bad for game economies.
Since card makers want to sell graphics cards, they have the same goal as the title developers. Not surprising they'd turn these off (or at least make them very visible).
> Heck, why would even GAME manufactures care if people cheat in video games?
Singleplayer, maybe. But Multiplayer? Very annoying for everybody involved. Having a multiplayer opponent cheat is like having a singleplayer bug that means you cannot advance in the game.
Game manufacturers care because their customers care, and because the same cheats often are related to copy-protection breaking. If you're going to cheat at videogames, why not do so in games you didn't buy?
Regarding your last point: it's getting better. Modern TVs have "game mode", where latency is minimized. The latest LCD panels use embedded Display Port, eDP. The latest eDP standard introduces a frame buffer on the LCD itself, where you can just write deltas and tell it to "swap". I think this is quite similar to what you're proposing, in concept.
You may already know this, but the "Game Mode" on modern TVs is just turning off the 120Hz/240Hz interpolation that is done to "improve" the picture (or is required for certain 3D systems to work).
The interpolation introduces significant latency that is obvious (and frustrating) during gaming.
I actually run my TV in game mode by default, because the interpolation done in 120/240Hz mode makes everything look like it is slightly unreal and shot on video. Definitely uncanny valley territory.
Is there a detectable (to humans) difference between a 24fps screen and a 48fps screen where the image only changes every other frame? I can see how this would work with film-based projectors, but my understanding of TVs is that pixels are always on and simply change states between frames, so 'changing' to an identical frame should have no effect.
Having the frame change every second frame would mean that the screen would be changing the picture only half the time, while it normally is always changing the color of some pixel.
I found this [1], which seems to be a good introduction to the new features and also why eDP is poised to replace LVDS (which I, as a non hardware guy, had to deal with in a previous embedded device project and hope to never deal with again).
Carmack has also mentioned in the past that we could solve another buffering problem with non-isochronous displays: essentially a display that doesn't run on a fixed Hz cycle, but instead outputs lines to the screen as they come down the pipeline, at whatever framerate the host system can handle.
That's why I think the default should always be "babysitter mode". Unless you give a specific set of commands, the GPU does what it does and the display does its thing.
Regarding people who turn on "expert mode" even when they don't know what they're doing, that doesn't seem to be a good reason for not having it. The Mark Twain quote about censorship being like "telling a grown man he can't have a steak because a baby can't chew it" comes to mind.
Let's say a hardware manufacturer writes a firmware that exposes calls that can degrade or destroy the device if misused, and the OS driver in turn exposes those calls to third party programs.
If a program wrecks the device, who is the end user going to blame -- the program, the driver, the manufacturer, or all three?
Apple attempts to do this sort of "approval" with apps on the App Store, and there have been lots of examples of apps which break the rules getting approved (and then being pulled when they got popular). What makes you think graphics card manufacturers would be able to do that with games?
Believe it or not, we have a similar problem with CD-ROM drives (let alone the drives that came after). You can't disable the cache on most of them, and many that claim to allow it are lying. This slows down accurate ripping by a factor of 40x on some drives.
On the topic of latency I would also love to see a TV that didn't have any of the crummy video processing chips so many have these days. I don't need anything other than big screen, hdmi in, and no freakin' input latency please because I'm trying to play Rock Band and you are really messing things up! (You can admittedly at least disable this in some tvs... often by delving into hidden engineering menus. Sometimes you can't disable them at all!)
> On the topic of latency I would also love to see a TV that didn't have any of the crummy video processing chips so many have these days.
Many modern TV's have a special "game mode" setting that turns off all of this post-processing to reduce display latency. I use it with great success on my Panasonic plasma TV.
"a set of standard low-level control primitives ... Are there technical challenges preventing this from happening, or is it mainly inertia and lack of need from the current market?"
It's easier to iterate faster without having to deal with a standardization committee. See OpenGL history for some context.
Yea that works, until someone realises they can boost performance by changing the semantics of the "I know what I'm doing" call. Then you'll need yet another "no really I know what I'm doing" flag and on and on, or app black/whitelisting and so on.
I don't think that's needed. What's needed is just to let competition work. If one GPU is less laggy in VR than another, people will buy that GPU for VR. No need to work around the driver; demand will make the drivers better.
> Conventional computer interfaces are generally not as latency demanding as virtual reality, but sensitive users can tell the difference in mouse response down to the same 20 milliseconds or so, making it worthwhile to apply these techniques even in applications without a VR focus.
I wish people would start treating text editors that they are developing this way. As hard real time systems. Don't care much about virtual reality, but I'm sick of text (and code) editors with unpredictable latency of basic operations [like typing a character, making find call, etc].
Darius Bacon and I have spent some time trying to figure out a buffer data structure that can scale to large files while guaranteeing predictable latency for basic operations like typing a character. (What's "making find call"?)
I think you mean "soft real time", though. A hard real-time system is one which is unusable if it fails to meet a deadline even once; a jet engine control system, say, where failing to meet a deadline could result in engine parts penetrating the fuselage. A "soft real time" system is one where failing to meet a deadline is a failure, but tolerable if infrequent; say, once every million keystrokes, or every 7 days of video play time.
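To make the data-structure question concrete, here's a minimal gap-buffer sketch in C -- the classic baseline, not anything Darius and I have settled on, and all names here are just illustrative. Typing where the cursor already sits is O(1), but a big cursor jump triggers a memmove proportional to the jump, which is exactly the kind of unpredictable worst case we'd like to engineer away:

    /* Minimal gap-buffer sketch (illustrative only). The text lives in
       buf[0..gap_start) and buf[gap_end..cap); the hole in between is the gap. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char  *buf;
        size_t gap_start, gap_end, cap;
    } GapBuffer;

    static GapBuffer *gb_new(size_t cap) {
        GapBuffer *g = malloc(sizeof *g);
        g->buf = malloc(cap);
        g->gap_start = 0;
        g->gap_end = cap;
        g->cap = cap;
        return g;
    }

    /* Move the gap to 'pos'. Cost is proportional to the size of the cursor
       jump -- this is the operation whose worst case ruins predictability. */
    static void gb_move_gap(GapBuffer *g, size_t pos) {
        if (pos < g->gap_start) {
            size_t n = g->gap_start - pos;
            memmove(g->buf + g->gap_end - n, g->buf + pos, n);
            g->gap_start -= n;
            g->gap_end   -= n;
        } else if (pos > g->gap_start) {
            size_t n = pos - g->gap_start;
            memmove(g->buf + g->gap_start, g->buf + g->gap_end, n);
            g->gap_start += n;
            g->gap_end   += n;
        }
    }

    /* Typing a character: O(1) when the gap is already at the cursor.
       (A real implementation would reallocate when the gap closes.) */
    static void gb_insert(GapBuffer *g, size_t pos, char c) {
        gb_move_gap(g, pos);
        if (g->gap_start < g->gap_end)
            g->buf[g->gap_start++] = c;
    }

    int main(void) {
        GapBuffer *g = gb_new(1 << 20);
        const char *s = "hello";
        for (size_t i = 0; s[i]; ++i)
            gb_insert(g, i, s[i]);              /* each keystroke is cheap here */
        printf("%.*s\n", (int)g->gap_start, g->buf);
        return 0;
    }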
Would you be willing to pay a recurring fee for using an editor? I have toyed with writing an editor at multiple points, but have backed away because I think in the grand scheme of things it's a one-time sale to a pretty small market (the number of people who care enough about text editing that they would pay for a good editor when they see one).
I pay, effectively, a recurring fee for PyCharm. The personal license is €89 and comes with 1 year of upgrades. As they released upgrades after that year expired that looked interesting, I paid another €50 the next year for continued support.
Personally I'm not keen on a recurring payment unless there's a clear benefit. For an editor, that'd be having a team committed to making it better. The JetBrains guys have shown good progress.
Doesn't seem to be too small a market for Sublime or TextMate.
What I would pay for would be an editor in the style of ST2 but with a few IDE features, like being able to jump to method definitions by shift-clicking, etc.
Basically a very lightweight IDE with good text editing features (like Vim-style keyboard shortcuts). Integrate a terminal into it, but do it in a very slick way with the ability to run "recipe" type commands.
For example if I forget the switches for a git command, I should be able to search for roughly what I want and have a result return which will populate a command in the terminal.
This might mean hyper focusing on a few languages however.
Not sure I would like the recurring model for just using the editor, since that would likely mean implementing some sort of always-on cloud integration, which would annoy me and make me worry about losing access to my editor.
What I might pay for on a recurring basis would be access to a repo of high-quality plugins that were well integrated with every version of the editor. Also maybe some functionality to allow me to share the editor session with someone remotely.
I don't want to gimp the user experience or implement cloud features just to get people to pay. At the same time, if I am going to spend 2-3 years building something, I would rather it be something that has a recurring income stream.
This is a good question - I don't think I have a good answer. I lean towards not implementing a check or stopping your use. Maybe a monthly reminder for one or two months saying you have not paid. After that I would just assume that the user has stopped using my editor.
In order to justify a monthly payment I feel you would really have to be pushing out frequent high quality updates that just kept making it better and better.
If development started to get stagnant I would probably "forget" to renew it after my CC expired or something like that. Or at some point I'd just feel "I've given this guy enough money" and cancel even if I kept using the editor.
I totally agree about the lightweight IDE thing. IMO Sublime is close to being great, but some missing features must be preventing people from writing great plugins like those in the Emacs world.
Since I haven't implemented anything I can't make any claims; what I can answer with is the things that I miss in the current landscape:
a) Fast startup times
b) Effective utilization of multiple monitors out of the box
c) Embedded prompt for dynamic languages
d) Minimal lag in the UI
e) Fast regex search over multiple files
f) Multi-file mode - I want to be able to have multiple files mapped to the same buffer and the ability to quickly seek to files
g) Outline mode that is code aware
h) Integrated lint mode
i) < 10 meg download size
All of the above in the same editor. Currently Sublime comes the closest; I have also happily used Emacs in the past, but Sublime feels snappier than Emacs currently.
I guess if Emacs could be as snappy and pretty as Sublime it would scratch all my itches, because modulo some polish it has most of the other features.
> I wish people would start treating text editors that they are developing this way.
IDEs. Most of the things people do in modern IDEs were done in Smalltalk (albeit with uglier interfaces and some manual steps) over a decade back. One of the differences, though, is that these things were mostly highly responsive, because they were bloat-free. (Can't speak for IBM VisualAge. Also, one could rightly complain that they were feature-poor in their stock configurations.) This is in stark contrast to other IDEs and app server frameworks, which would take forever to start up, or forever to restart, or forever to complete an operation. One example: Extract Method was always instantaneous in Smalltalk; it's still lugubrious even in modern IDEs like Xcode 4.5.
The same rules that apply to Web Apps also apply to programmer's tools. Delays are yuck. Responsiveness is yum!
I'm also a musician, and from what I've seen playing around in Audacity, 5 milliseconds is definitely inaudible, even to some of the most talented, golden-eared musicians I know. I suspect that 20 milliseconds is going to start to bug some people, though they won't be able to say why.
Also, awareness of subtle modulations of very small variations in rhythm and how they affect the feel of music seems to be an indicator of intelligence and aesthetic awareness. So often, the difference between blah and great involves things that are subtle and hard to put your finger on.
> One of the differences, though, is that these things were mostly highly responsive, because they were bloat-free.
That's not my recollection of the Smalltalk IDE at all. I especially remember how selecting was crazy slow: press down the button, move the mouse to the end of the line and the selection takes two seconds to catch up with the cursor.
> That's not my recollection of the Smalltalk IDE at all.
Which one and from what time period? My experience was mostly with VisualWorks. Even just with that, there are variations. Going from version 3.* to 5i was actually a step down because of hastily released stuff from a company in turmoil. Also, lots of open source/free implementations started behind the curve in terms of technology. (Still doing bytecode interpretation instead of JIT.)
I get stutters with Sublime Text on my ancient MacBook Pro, but I'm often crushing the io systems (like a big Dropbox resync at the same time as a database compaction), so that may explain some of it.
I think head-mounted-display virtual reality is a huge missed opportunity for next-gen consoles. They could have really done it right in a way that PCs can't yet, with special hardware support for low latency and stereo rendering, and a guaranteed large audience so game studios could justify applying their full budgets to AAA VR titles. It would be a huge differentiator at a time when consoles seem to be converging to a very similar place. It could be as big as the Wii was.
I don't think they were a missed opportunity. In order for them to 'miss' they had to be 'possible', which they haven't been for a long time. It's only lately that various things have come together to get close. I have followed 'head mounted display' technology closely since its inception in the mid-'90s as a means of providing military pilots with situational awareness.
The first problem with head mounted displays was resolution. Both in color space and in pixel space. Early displays were monochrome (either green or red based on LED display drivers) and had a roughly 256 x 256 dot pitch. It was ok for targeting reticules and basic instrumentation display (like an artificial horizon). Early work also wanted transparency (look through) displays because there wasn't a way to display what you were looking at in enough fidelity to do both the visual field and the indicator field.
Later a company called Colorado Micro Displays (CMD) came to market and started offering color, but 320 x 200 was the best resolution they could do. Higher resolution displays cost tens of thousands of dollars and were essentially hand-crafted out of unobtainium. Direct retinal illumination displays attempted to get past strict display resolution issues (a friend of mine worked on one of the x-y positioning devices for doing this; it had 500 angstrom repeatability! And was, as expected, insanely expensive).
It really hasn't been until recently that DLP-type systems with sufficient resolution to eliminate the x-y beam director have come to market, combined with better LED phosphors, bringing the cost of an RGB display with better-than-VGA (640 x 480) resolution into the realm of the practical.
The incessant push to higher resolution phone screens has created the capability to make opaque but high resolution screens which can enable something like the Oculus Rift type displays. That requirement stems from how far away from the eyes you have to put the screen and thus the torque moment that is applied to your head. There was a great system at NASA which used 10" displays but the 'head mounted' part was more like sticking your head into some weird steadycam kind of device.
Finally there is the challenge of both high fidelity and high frequency head orientation technologies. Prior to about 2005 the best you could do was a laser gyro for motion and an accelerometer for inertial reference. Doing that at the necessary frequency (typically 1000 updates per second) didn't become cost effective until about 2009.
Now however, many of the technologies have finally matured to the point where OR can be done in small quantities for perhaps $2K/unit.
A 'game console' struggles if it costs more than $300. So as a 'controller' option, and even as the 'whole console' option, these sorts of glasses are still about 3 - 5 years from hitting a price point that makes them the 'killer' peripheral. And of course because they aren't here yet, they can't really have 'missed' :-)
$2K/unit? That's a gigantic overestimate. High quality head tracking is practically free today thanks to advances in cameras and MEMS sensors; you don't need laser gyros. The Oculus Rift costs $300, and is almost good enough for a home console. The only major improvement needed is a better display, which needn't cost more if sourced in quantities large enough, which Sony or Microsoft could definitely do. $500 for a complete system with display, controller, and console is definitely achievable, and that's the price point the PS3 started at. VR is absolutely feasible today.
As I said further downthread, I think a lot of people got disillusioned with VR because there's a lot of crappy hardware out there. Even the expensive stuff is crap. I tried Canon's augmented reality system at SIGGRAPH last year and the latency and FOV were awful, despite the $120,000 cost. But it doesn't have to be that way, and the Oculus Rift is the proof.
Nintendo saw the possibility of doing something new with the Wii, using then-new technology (MEMS sensors, tiny low-power cameras) to make an old, lame, expensive concept (motion control) work for the mass market, and was rewarded handsomely for it. If someone had that foresight with VR they could be making a killing right now.
I too look forward to the OR system shipping. Go through the latest parts list for the OR device and check on availability 2 years ago. Things are moving along and that is great, they weren't there when Nintendo was using MEMS accelerometers and CMOS camera modules in their remote.
I think the monumental failure of Nintendo's Virtual Boy pretty strongly illustrated that if you can't do this perfectly, you shouldn't do it at all. Within the current constraints (including budget), making a "good enough" product is clearly not feasible. As this article points out, even a few milliseconds too much causes massive problems (worst case: severe nausea – car sickness).
On top of that, it's not even clear that people want this. Full-on Virtual Reality is a bit antisocial and hyper realism isn't the only thing that makes games immersive and/or fun.
> Virtual Boy pretty strongly illustrated that if you can't do this perfectly, you shouldn't do it at all.
Absolutely. But the Virtual Boy was clearly before its time. The hardware just wasn't ready.
> Within the current constraints (including budget), making a "good enough" product is clearly not feasible.
I think that's clearly not true. The response from everyone who's tried the Oculus Rift has been that real VR has finally arrived. And the Oculus Rift is one guy's prototype; the resources of a company like Microsoft or Sony could drastically improve it in several dimensions. Specifically, they could source a higher-res, higher frame rate, lighter, lower-latency OLED display, and connect it to rendering hardware with all the latency-reducing features John Carmack just described. These features aren't necessarily expensive from a hardware POV; it's just that nobody's seen the need to implement them yet.
> Specifically, they could source a higher-res, higher frame rate, lighter, lower-latency OLED display
None of those are actually remotely as important as the fact you're still using your head as a dumb camera joystick, there's no ability to track eye movement and the mismatch between what you're seeing and your inner-ear pressure is going to make a number of people throw up.[1]
These are fundamental problems that have been a part of HMDs for the last twenty years of me playing with them and the Rift addresses absolutely none of them. The Rift itself is not significantly different than its HMD contemporaries, it's actually aiming for the low-end gamer market and so far seems to be mostly pushing 'What's old is new again' without having made significant progress along the way. The most interesting part of the Rift kit is the tracker, not the actual HMD itself.
[1]: In a study conducted by U.S. Army Research Institute for the Behavioral and Social Sciences in a report published May 1995 titled "Technical Report 1027 - Simulator Sickness in Virtual Environments", out of 742 pilot exposures from 11 military flight simulators, "approximately half of the pilots (334) reported post-effects of some kind: 250 (34%) reported that symptoms dissipated in less than 1 hour, 44 (6%) reported that symptoms lasted longer than 4 hours, and 28 (4%) reported that symptoms lasted longer than 6 hours. There were also 4 (1%) reported cases of spontaneously occurring flashbacks."
Read up on 'Simulation Sickness' for more details.
Eye tracking would be awesome but it's not required for good VR. I fail to see how "using your head as a dumb camera joystick" is a problem; that's the whole point of VR.
Motion sickness is a real problem for some people (but not all). With low latency and thoughtful game design I think it can be mitigated. The bigger problem is the social acceptability of blocking your entire field of vision for long periods of time. I won't pretend that VR doesn't have problems, but the payoff is large enough that the problems are worth tackling.
> I fail to see how "using your head as a dumb camera joystick" is a problem; that's the whole point of VR.
Perhaps you should try playing some VR games for a while. I clocked quite a few hours playing the old Virtuality SU2000 games like Dactyl Nightmare and I've idly kept up with HMDs. The chief problem is that without eye tracking, it's incredibly UNFUN to use your head for movements that your eyes could otherwise have done for you. Mind you when the HMDs were much heavier back then it sucked a lot more, but it's still pretty shitty not being able to glance aside. Nope, gotta move your entire head for absolutely everything related to what you're currently seeing, if you move your head for any reason you can't maintain focus on objects naturally, etc.
> Motion sickness is a real problem for some people (but not all).
Probably not something you should underestimate. See Nintendo's 3DS launch and about-face on their stance on pushing 3D once they found a small but significant percentage of their users could not actually see the 3D effect. This led to a policy that the 3D effect could not be used for anything related to actual gameplay mechanics, reducing it entirely to an optional gimmick.
Simulation sickness affects even more people than the 3D issue. It's a real problem if you want to go mainstream.
Just to clarify my position, I'm not against the Rift nor do I have anything against HMDs. I simply see the Rift as a step along the way to whatever device truly popularizes the tech. I don't think we're there yet.
> it's incredibly UNFUN to use your head for movements that your eyes could otherwise have done for you.
That's only a problem if your FOV is too small. With a large enough FOV and a light display there's no reason why eye and head movements shouldn't work exactly as they do in real life. The Oculus Rift has twice the angular field of view of the SU2000 in both dimensions, for 4x the subtended angular area.
Edit: Oh I see, your complaint is about using the head tracking to control a game, e.g. by pointing a gun. Yes, I think that's a bad idea. Head tracking should only control the camera. All game interaction should happen through a controller. A motion controller like the Razer Hydra would probably work well for this.
> if you move your head for any reason you can't maintain focus on objects
Again, only true for crappy hardware. With a good enough display, 120 FPS, and low latency, there's no reason why tracking moving objects shouldn't work just fine.
> I think a lot of people got disillusioned with VR because there's a lot of crappy hardware out there. Even the expensive stuff is crap. I tried Canon's augmented reality system at SIGGRAPH last year and the latency and FOV were awful, despite the $120,000 cost. But it doesn't have to be that way, and the Oculus Rift is the proof.
We seem to be so close to agreement here that I'll just have to vote you up and take you at your word regarding the Rift. It's entirely possible my experiences have all been unfortunate ones. I've not tried the Rift yet, though by all means I will. :)
Well, my word about the Oculus Rift is secondhand, since I haven't received mine yet. And I haven't tried the Sensics device you mentioned below; that hardware looks quite nice, so it would be quite disappointing if that level of device still wasn't good enough for a great VR experience. My opinion may change after using the Rift for a while, but given what people are saying about it I'm still optimistic :)
Modeless, can you please answer this "trick" question?
If a flat piece of cardboard that is 10 inches wide occupies 10 degrees of my horizontal FOV, how many degrees of my horizontal FOV will 20 inches wide flat piece of cardboard occupy? Both cardboards are positioned at the same distance from my eyes, of course.
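(For anyone who wants to check their intuition, here's the arithmetic I assume the trick hinges on -- the viewing distance below is just whatever makes the first board span 10 degrees; everything else is standard angular-size geometry, which is not linear in width.)

    /* Angular size is 2*atan(w / 2d), not proportional to w, so doubling the
       width gives a bit less than double the angle. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double DEG = 3.14159265358979323846 / 180.0;
        double d   = 5.0 / tan(5.0 * DEG);           /* ~57.15 inches            */
        double a10 = 2.0 * atan( 5.0 / d) / DEG;     /* 10.00 degrees, by design */
        double a20 = 2.0 * atan(10.0 / d) / DEG;     /* ~19.85 degrees, not 20   */
        printf("10\" board: %.2f deg, 20\" board: %.2f deg\n", a10, a20);
        return 0;
    }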
> The chief problem is that without eye tracking, it's incredibly UNFUN to use your head for movements that your eyes could otherwise have done for you. Mind you when the HMDs were much heavier back then it sucked a lot more, but it's still pretty shitty not being able to glance aside.
Isn't that more of a problem of field of view? With a wide enough field of view, you could just look at whatever you wanted to. Basically make it like real life, where what you see is the combination of where your head is pointed, which the computer needs to use to update the screen, and where your eyes are looking, which the computer doesn't need to care about. Or are you thinking of using eye tracking for something else, like determining what you're pointing at, like a mouse? Yet another option you have in VR is a separate control for what you're targeting, like how Dactyl Nightmare maps the gun you're holding directly onto the virtual world's gun. Using the head position to AIM rather than just LOOK seems like a bad way of doing things.
Yes, you're absolutely right that a lot of what I'm complaining about would be solved with sufficient FOV and things certainly have improved since the SU2000 system in that regard. But we're not at 'sufficient' at the moment, at least in my experience based upon trying more recent examples, like an obscenely expensive Sensics kit. It's possible that it will be sufficient long before eye-tracking winds up in consumer HMDs.
But yes, the other part of my previous remark regarding 'dumb joystick' was more directly related to using the head for aiming/pointer duties, which also unfortunately has cropped up before and I quickly conflated the two issues together.
>This led to a policy that the 3D effect could not be used for anything related to actual gameplay mechanics, reducing it entirely to an optional gimmick.
Because the working theory on what causes simulator sickness is related to the inner-ear detecting motion. If your eyes are detecting motion, but it does not line up with what your inner-ear pressure is telling your body, the end result is the body emptying its contents under the theory that it's been poisoned. Simply improving the quality of the picture isn't going to solve that.
It seems to me the Oculus Rift can accurately account for all head motion when you are standing still. There is no need for eye tracking: all the Rift has to do is detect all head bobbing & rotation and update the view accordingly. Your eyes can still look around on the LCD screen.
What the Rift _can't_ do is account for long-term linear acceleration (e.g. in a car or walking around), it's true. If you had a large empty room to walk around in with the Rift on, even this could be accounted for, however.
I think VR is fundamentally broken, because you're going to try to react to things in a way that doesn't work. With a head-mounted display, the game can react as you move your head. Great. But then you'll instinctively try to walk towards something and it won't work (or you'll hit the wall). Same with picking things up, etc.
It seems like it's deep in the uncanny valley of experience.
It's not difficult to learn not to start walking when you're in VR. I've seen some people lean left and right when they play an FPS for the first few times, but they get used to it. VR with a simple controller though is just not good enough. You'll need arm/hand tracking and probably a gun controller for shooters. If you can track 6 degrees of freedom of the gun controller and 6 DOF of the head, then you have your first immersive VR right there.
I think it's just a matter of time before the other pieces of the VR puzzle catch up. Omni-directional treadmills already exist and are a basic step towards allowing you to walk in a VR environment. Developments in prosthetics with 'feeling' could help pave the way towards shoes, gloves or other garments that could provide the sensory experience of walking on different surfaces or picking up in-game items.
But at that point you've gone from a games console being something that takes up a small amount of space underneath your TV to something that takes up an entire room. From requiring you to pick up a controller to putting on an entire outfit.
I'm just not sure the average consumer is in any way ready/willing for that.
> I think the first successful VR you'll see will be closer to an arcade machine than a home console.
I was just thinking about this. My development rig has two 27" screens. If I added a third one, it would cover most of my field of view horizontally. What if someone sold a console rig packaged with 3 screens and a mount? (Work with IKEA to produce a compatible cheap desk.) This kind of thing is already done and is awesome for vehicular games. (Racing, tank, aircraft...) A 3-screen rig also could still strike a kid as "awesome," so would draw money from the wallets of parents.
Or, for arcade-type settings, how about a seat with a projected 360 dome, HOTAS controls, and motion simulation?
Except, we adapt. I think Uncanny Valley will slowly shrink - Things will improve, and we'll get used to some of the oddities.
I mean, he mentions it in the article: People were used to 400ms latency and still had fun. Is there a much tighter threshold here? Sure. But I'm all for it, and I'm sure a lot of people are. Not everyone, but things improve and change.
Well, Oculus Rift should support everything from PCs to consoles. But I agree. Stuff like Oculus Rift is the true next-gen computing. The PS4 and next Xbox leave me pretty cold, especially when I have a laptop that can already play all the latest games.
I think the PS4's insane memory bandwidth (8GB of memory on a 170GB/sec GDDR5 bus) makes it a very interesting contender against PCs (dual-channel DDR3 maxes out at 34GB/s).
Given the PS4 is using an off-the-shelf x86 processor, how long do you think it'll be until other OEMs are pitching laptops with these same CPUs that have GDDR5 controllers for main memory?
How long until Intel matches?
How long until basically every CPU supports it and every manufacturer has one or more such systems?
They may be using an off-the-shelf x86 core, but their actual CPU die may have all kinds of customizations on it (like the addition of a GDDR5 memory controller). AIUI GDDR5 memory is much more expensive than DDR3. So it may happen eventually, but I don't think it will be any time soon, and believe me, I really want PCs to be better than consoles in every way.
It just seems to me that whatever they're doing with AMD, AMD is going to be able to farm out to other parties. Given that both MS and Sony are using AMD cores paired with GDDR5 memory controllers, it seems almost certain that there's nothing approaching an exclusivity contract to prevent AMD from now selling that bundle to any maker of gaming PCs or SteamBox builds or whatever else.
And the current/historical cost differential between DDR and GDDR5 certainly seemed to be a considerable problem back when these specs were first rumored. But the following things suggest that's not likely to be an issue preventing it from hitting other gaming PCs:
1. Sony managed to spec 8GB of it into a box they simply can't afford to subsidize greatly at the outset. [1]
2. Sony and MS both need to know the cost of any expensive components will drop massively over the next couple years.
3. PC gamers readily and regularly accept higher prices for their machines than console makers could ever stomach.
So I don't see it becoming common. Truly very few non-gamers would care about the performance delta. But it seems like a thing that will be on offer from multiple vendors, for feasible prices, before long. (Particularly given that AMD could really use any edge against Intel.)
[1] People wrote off a lot of the rumors on that basis alone. If Sony is confident that it can not only afford GDDR5 for system memory, but that it can afford so much, there has to be a massive price cut incoming.
With the proliferation of smartphones and projects like Google Glass and Oculus Rift taking off, I really think we'll start to see custom chipsets to improve the performance of augmented and virtual reality systems (battery life, lower latency, and more processing power).
> Updating the imagery in a head mounted display (HMD) based on a head tracking sensor is a subtly different challenge than most human / computer interactions.
Doesn't the military already have this solved for head mounted displays for attack helicopters and 4++ generation fighter jets? Heck, they have augmented reality displays for that matter.
EDIT: Many of these problems could be solved by putting an entire purpose-built gaming rig in the headset. There are nicely capable chips for mobile devices with lots of power in the GPU. Design such a system from the ground-up for low latency. Accelerometer and head-tracking inputs would be low-level interrupts, for example.
I wonder if this is how the military contractors solved this?
"Doesn't the military already have this solved for head mounted displays for attack helicopters and 4++ generation fighter jets? "
Far from it. There was a recall a few years ago over glitches in a fighter jet HMD. It's still an active area of research, with no fewer than 5 different systems employed across the US military branches.
You could cut latency further if you were able to measure the muscle contractions in your neck area.
Another option is smart eye tracking too. Generally if you are going to move your head in a particular direction, you'll likely start moving your eyes first.
The view update part was the most confusing, but I tried to understand it this way:
Imagine you had a magical game engine that rendered the entire world perfectly accurately for every point in space and direction a viewer could possibly be looking at. All you had to do was say:
RenderGameFrameForEveryPossiblePoint();
... // Who knows how much time
viewerPosition = QuicklyGetViewerPosition();
TellTheGPUToShowWorldAccordingTo(viewerPosition);
Well then, you could postpone calling those last two functions until the absolute last minute. This way you have very little or no movement of the viewer's head between when you read their position and when you show the view for that position.
But naturally, RenderGameFrameForEveryPossiblePoint() is slightly out of bounds of current technology. A lot of what Carmack was discussing, as I understood it, was simulating this effect as closely as possible. The way to do that, it seems, is:
That final bit is just a perspective transformation of a bunch of rendering that was already computed and given to the GPU. But if the viewer moves too quickly, you can easily move somewhere in the world that wasn't actually rendered, or your perspective could shift such that an object that was once occluded is now visible, or vice versa. It seems a lot of the complexity is there.
The last thing he talked about, time warping, seems to be a similar thing only it's scanline by scanline. So in effect you're saying "hey, video card and display, I know you're going to force me to draw a whole frame at once, so I'm going to give you a frame where each scanline gets rendered a little bit into the future according to where the player is moving."
The effect on a monitor would probably look like a forward shear, but on an HMD (if done correctly), it would correct for the natural shear caused by having to "freeze frame" the viewer's perspective for one entire frame instead of just a scanline.
Some of this may be woefully incorrect, but it was how I explained it to myself. Please correct anything that's wrong or overly simplified.
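To make the scanline idea concrete, here's a tiny self-contained sketch of how I picture the warp -- everything here (the constants, the rotation-only assumption, the horizontal-shift-only "warp") is my own simplification for illustration, not Carmack's actual technique:

    /* Per-scanline time warp, heavily simplified: the source frame was rendered
       wider than the display from the head pose at render time; each output
       scanline is then sampled with a horizontal shift matching how far the head
       is predicted to have turned by the time that line is physically scanned out. */
    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    #define W   640
    #define H   480
    #define PAD 64                                   /* extra columns rendered per side */

    static unsigned char src[H][W + 2 * PAD];        /* wider frame, rendered earlier   */
    static unsigned char dst[H][W];                  /* what actually goes to the panel */

    int main(void) {
        double line_time      = 1.0 / (60.0 * H);    /* seconds per scanline at 60 Hz   */
        double yaw_at_render  = 0.0;                 /* head yaw when src was rendered  */
        double yaw_rate       = 2.0;                 /* rad/s from the head tracker     */
        double pixels_per_rad = 600.0;               /* depends on FOV and resolution   */

        for (int y = 0; y < H; ++y) {
            double t   = y * line_time;              /* when this line hits the display */
            double yaw = yaw_at_render + yaw_rate * t;   /* simple linear prediction    */
            int shift  = (int)lround((yaw - yaw_at_render) * pixels_per_rad);
            if (shift >  PAD) shift =  PAD;          /* clamp to what was over-rendered */
            if (shift < -PAD) shift = -PAD;
            memcpy(dst[y], &src[y][PAD + shift], W); /* shear accumulates down the frame */
        }
        printf("shift on the last scanline: %d px\n",
               (int)lround(yaw_rate * (H - 1) * line_time * pixels_per_rad));
        return 0;
    }

On a fixed monitor that accumulating shift is the forward shear I mentioned; on an HMD it would (ideally) cancel the shear introduced by scanning out a single frozen frame.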
That time warping has already been implemented in the Lagless MAME project. It compensates for input and display lag by always rendering a few frames into the future. It's commonly used for games where frame-accurate timing is critical, notably 2D scrolling shmups and fighting games.
Lagless MAME renders into the future assuming that the state of the input controls remains constant over that future time, and saves the emulation state every frame. When a button is pressed or released or whatever, Lagless MAME rewinds to the saved state for that frame and quickly re-emulates from that point forward. So the result is to send your input back in time past the lag, to the moment in the emulation exactly synchronized to when you saw it on the screen. The experience isn't perfect -- your spaceship would jump a few pixels then move smoothly -- but by and large it's far superior to playing with the actual lag.
This technique could be used for lag compensation in almost any environment. The limiting factor is the cost of re-computing several frames of game state on every input action. Of course, as Carmack says, actually eliminating lag is far preferable to masking it with such techniques.
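A toy sketch of that rewind-and-replay loop, in case it helps -- entirely illustrative (the "emulator" here is a single integer of ship position; none of this is Lagless MAME's real code):

    /* Save a snapshot every frame while emulating ahead with predicted (constant)
       input; when the real input for a past frame arrives, reload that frame's
       snapshot and re-run every frame since with the corrected input. */
    #include <stdio.h>

    #define HISTORY 16

    typedef struct { int ship_x; } State;            /* stand-in for full emulator state */

    static State states[HISTORY];                    /* snapshot taken at each frame     */
    static int   inputs[HISTORY];                    /* input applied at each frame      */

    static State step(State s, int input) {          /* one frame of "emulation"         */
        s.ship_x += input;
        return s;
    }

    /* The player's real input for 'past_frame' just arrived: rewind and replay. */
    static State rollback(int past_frame, int real_input, int current_frame) {
        inputs[past_frame % HISTORY] = real_input;
        State s = states[past_frame % HISTORY];
        for (int f = past_frame; f < current_frame; ++f)
            s = step(s, inputs[f % HISTORY]);
        return s;
    }

    int main(void) {
        State s = { 0 };
        for (int f = 0; f < 5; ++f) {                /* frames 0..4 emulated ahead,      */
            states[f % HISTORY] = s;                 /* assuming the stick stays neutral */
            inputs[f % HISTORY] = 0;
            s = step(s, 0);
        }
        s = rollback(2, 1, 5);                       /* "+1" at frame 2 arrived late     */
        printf("ship_x after correction: %d\n", s.ship_x);   /* jumps from 0 to 1        */
        return 0;
    }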
The lag compensation in Guitar Hero and Rock Band games works essentially this way too.
It's amazing how much time Carmack spends working around the "added cleverness" of systems created by people, rather than the inherent difficulties of the problem. These days it also seems true about programming in general.