The author persistently conflates AR and MR throughout the post, evaluating the Quest Pro as an AR device but then concluding it's a terrible "MR" device because it failed their AR tests. They seem to think these terms are synonymous, but they aren't. AR aims to let you view and interact with the real world with augmentations layered on top, while MR aims to bring virtual objects into the real world and let you interact with them. You really don't need such a high-fidelity view of the real world to do that; the virtual objects themselves are rendered with very high fidelity.
I am fairly confused by the low-res cameras used in the QPro. I have to assume Meta was heavily constrained by the need for them to do double duty as tracking cameras and passthrough cameras, so higher-quality cameras just weren't an option. But why only a single color passthrough camera? That really doesn't make much sense, unless you allow that the processor on the Quest Pro is so limited that it probably couldn't process both feeds even if it had them.
I think it's actually great news that there is so much optimization left to do. Seems like they should be able to improve it a lot over time, and in the meantime it's pretty adequate to enable exploration of some unique experiences.
As someone close to this in the industry, I definitely agree.
I also really don't know why he decided to deemphasize the perspective and depth correctness so much. He mentions it here:
> In this case, they were willing to sacrifice image quality to try to make the position of things in the real world agree with where virtual objects appear. To some degree, they have accomplished this goal. But the image quality and level of distortion, particularly of “close things,” which includes the user’s hands, is so bad that it seems like a pyrrhic victory.
I don't think this is even close to capturing how important depth and perspective correct passthrough is.
Reprojecting the passthrough image onto a 3D representation of the world mesh to reconstruct a perspective-correct view is the difference between a novelty that quickly gives people headaches and something that people can actually wear and look through for an extended period of time.
Varjo, as a counterexample, uses incredibly high-resolution cameras for their passthrough. The image quality is excellent, text is readable, contrast is good, etc. However, they make no effort to reproject their passthrough in terms of depth reconstruction. The result is a passthrough image that is very sharp, but is instantly, painfully, nauseatingly uncomfortable when walking around or looking at closeup objects alongside a distant background.
The importance of depth-correct passthrough reprojection (essentially, spacewarp using the depth info of the scene reconstruction mesh) absolutely cannot be overstated; it is make-or-break for general adoption of any MR device. Karl is doing the industry a disservice with this article.
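To make the reprojection step concrete, here's a rough sketch of the idea (a hypothetical function, simplified to a single camera and a per-pixel depth map standing in for the scene reconstruction mesh; this is not Meta's actual pipeline):

```python
import numpy as np

def reproject_to_eye(depth, K_cam, K_eye, T_cam_to_eye):
    """Warp a camera image's pixel grid into the eye's viewpoint.

    depth:         (H, W) per-pixel depth in metres (from the scene mesh)
    K_cam, K_eye:  3x3 intrinsic matrices for the camera and the virtual eye
    T_cam_to_eye:  4x4 rigid transform from camera frame to eye frame
    Returns an (H, W, 2) array of target pixel coordinates in the eye image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Un-project each pixel to a 3D point using its reconstructed depth.
    rays = np.linalg.inv(K_cam) @ pix        # unit-depth rays, 3 x N
    pts_cam = rays * depth.reshape(1, -1)    # scale each ray by its depth

    # Move the 3D points into the eye's coordinate frame.
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_eye = (T_cam_to_eye @ pts_h)[:3]

    # Re-project into the eye's image plane.
    proj = K_eye @ pts_eye
    proj = proj[:2] / proj[2:]
    return proj.T.reshape(H, W, 2)
```

With an identity transform the warp is a no-op; with a camera-to-eye offset, pixels shift by a depth-dependent parallax, which is exactly why a good depth mesh matters and why errors show up most on close objects like hands.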
I’ll admit, I have completely cooked breakfast (eggs and toast, rummaging, flipping and all) with the Quest Pro passthrough, while watching a video on a giant virtual screen floating in my kitchen. The depth accuracy is very impressive.
Yes, I think the success of the reprojection is a much-underappreciated achievement. The author repeatedly questions their claim of "seamless," but it's one of the most stunning aspects: you can, for example, see your real arm outside the headset map exactly onto your virtual arm and hand inside it, as do other objects that bridge from external reality into the virtual view. This "seamless" quality is really impressive when you consider that the whole view is being completely reprojected, not just displayed natively from the camera feeds.
Yeah, I really don't think people understand how critical the reprojection is. A lot of folks, apparently Karl Guttag included, think that the resolution of the passthrough is the most important factor for usefulness. And/or that the resolution of the cameras is even directly related to the quality of the passthrough to begin with! The pixel count is just one of the dozens of quality metrics, and is arguably actually one of the less important ones.
I also can't tell if KGuttag believes that you can simply pass the video feed through to the user and not have an absolute usability disaster. His paragraph about the "pyrrhic victory" sort of implies that not preferring depth-correct scene reprojection was an option. In my experience, that is absolutely non-negotiable in terms of shipping a device that's usable to the majority of consumers.
This is another reason folks don't understand why passthrough is so energy-intensive: you're running an entire additional pass of stereoscopic spacewarp on what is essentially four different camera feeds, against an incredibly high-resolution scene reconstruction/SLAM mesh.
On a regular Quest 2 (not the Pro discussed here) you can use passthrough and actually reach out, grab a glass of water, and drink it. It's super impressive and actually makes passthrough useful for the things that you use passthrough for.
Would it be nice to be able to actually read text and not have distortion when objects get too close? Yes, of course. But it's not necessary.
We are actively using passthrough in VRWorkout (to stay aware of your surroundings while jumping and doing knee-high running) and I have to say that even the low-quality Quest 2 passthrough feels magical for that [1]. If you look at 0:20 you can see that it really shines when the virtual hands interact with real objects.
Exactly. Reading text or having 20/20 vision is so, so, so not the most important part of passthrough! Comfort, accuracy when reaching out and grabbing things, walking/running, avoiding obstacles on the ground, pets/humans, etc. Non-depth-correct passthrough risks all of these things.
Yeah I've been really enjoying the passthough as the homescreen background feature they added, it also works on WebXR pages too now which has been fun for the small experiments/toys I like to make on my downtime
Just so I understand correctly: the fact that the cameras are placed a couple of inches in front of where your actual eyeballs are causes a lot of discomfort? And so you need to warp the camera feed to look like it would from your eyeballs' perspective?
I'm not sure if it's entirely due to the camera position or if there are other factors (e.g. differences between the camera optics and your eyes'), but I can confirm that using a device without reprojection results in quite extreme distortion for objects closer than an arm's length.
There's also the distance between the eyes not matching that of the cameras. And the fact that the rendered 3D scene (the part that isn't from the cameras) is drawn from your eyes' positions, which the camera feed doesn't share.
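A back-of-the-envelope way to see why the forward offset hurts close objects most: an object at distance d from your eyes is only d minus the offset from the cameras, so it appears magnified by roughly d/(d − offset). A tiny sketch (the ~7 cm offset is an assumption for illustration, not a measured figure for any headset):

```python
def apparent_scale(depth_m, cam_offset_m=0.07):
    """How much larger an object looks through forward-mounted cameras.

    depth_m:      distance from the eye to the object, in metres
    cam_offset_m: how far the cameras sit in front of the eyes (assumed ~7 cm)
    """
    return depth_m / (depth_m - cam_offset_m)

for d in (0.3, 1.0, 3.0):
    print(f"{d} m -> {apparent_scale(d):.2f}x")
# 0.3 m -> 1.30x, 1.0 m -> 1.08x, 3.0 m -> 1.02x
```

So a hand at 30 cm looks about 30% too large while the far wall is nearly correct, which is why uncorrected passthrough feels fine at a distance but falls apart up close.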
Personally, I actively reject this distinction between AR & MR. For as long as I've been aware of the concept, the term "Augmented Reality" covered both use cases, including virtual objects embedded into the real world. The distinction never struck me as interesting or useful; virtual objects embedded into the real world being the most useful way to interact with augmentations to the real world.
Moreover, "Mixed Reality" strikes me as marketing-speak. Compare the search trends for AR [1] vs MR [2]. While both terms have been in use for a long time, AR was always more prevalent, with a notable rise in interest around 2009. My first encounter with the term MR was in marketing materials for Microsoft's HoloLens circa 2015. Now compare the search trends for MR [2] with HoloLens [3]. Searches for MR don't begin to trend upward until after the introduction of HoloLens, where Microsoft's marketing materials tried to make a big deal of said distinction and the rest of the world started to run with it. As a supporting argument for the original broader usage of the term AR, I'll cite discussions of a 2007 anime series, Dennō Coil [4], which explored various examples of fantastical AR usage. Probably just a coincidence, but the search trends for Dennō Coil [4] do precede the search trends for AR [1] rather closely.
Naturally, all the usual caveats about word meanings changing over time apply, but this one is a pet peeve of mine.
I feel the same way. I worked at Microsoft on HoloLens, and most of the folks I knew there felt that Mixed Reality was modern marketing spin on AR. It makes sense from a marketing perspective, since HoloLens was a very different beast than something like Google Glass, but I find the distinction needlessly confusing nowadays. To me, AR is just augmenting reality on a spectrum from 2D fixed overlay to depth-aware and beyond. I find naming arbitrary detents on the spectrum unproductive. The general public has enough confusion around VR vs AR, especially when passthrough enters the picture. Throwing MR into the mix furthers the confusion.
I don't begrudge people getting grumpy about terms evolving through gradual misuse over time. It's certainly annoying. What I would say, though, is that if you have specific subsets of functionality that demarcate clearly along various dimensions - how hard they are to do, the type of technology needed, the typical use cases and real-world value they create - then you really need some way to distinguish them if we are going to talk about them.
Alternatively, if you are going to just stick to one hyper general term then you can't really complain that product X is bad at Y because it doesn't do every single thing the general term Y encompasses. Language is fluid for a reason - it adapts to our needs to define and distinguish things. In this case, the Quest Pro does a subset of things quite well, while completely leaving other aspects of AR on the table. Whatever we refer to that subset as, it's not really valid to then say it's generically bad just because we don't have good words to describe what it does do.
Mixed reality as a term dates at least back to 1994 - https://en.wikipedia.org/wiki/Reality%E2%80%93virtuality_con.... Microsoft adopted it because they were doing surface mapping and occlusion culling, something that early AR (circa 2008-2012ish) really couldn't do.
Yes, the term "Mixed Reality" had some academic use going back that far, but I'd interpret that article and its citations quite differently.
These early MR references use the term to mean a spectrum from "Augmented Reality" to "Augmented Virtuality". The AV term never really caught on, and the Wikipedia link for AV [1] actually just redirects to their page for MR [2]. I can't think of many notable examples of AV at the moment, though something like "Mario Kart Live: Home Circuit" [3], as mentioned on the Wikipedia page for MR [2], might fit the bill, in the sense that reality is being used to augment the game. Uses of haptic feedback in VR might also count as AV.
This usage of MR has nothing to do with how Microsoft began using the term. It's almost as if they embraced an academic term for the sake of marketing, extended the meaning, and successfully extinguished the original academic definition.
AR will never be able to mix depth to objects with the real world due to the physics of image planes. MR could do that, so it might be one key difference.
You're mixing up the term "AR" with specific implementations of it.
People are now using "MR" to mean something distinct from AR but GP and myself are arguing that the term AR was broad enough to encompass passthrough, overlay and any future developments.
If someone built a VR headset that used small projectors rather than flat panel displays, we wouldn't need to invent a new term for it. The AR <> VR spectrum was all we needed and MR is just muddle on top of that.
Full lightfield AR displays would have no issue with this. Even some hypothetical giga-sandwich of waveguide planes like a Magic Leap 1 on steroids could do an effective approximation of this.
(Assuming by "mix depth to objects with the real world" you mean "having virtual objects alongside real objects with convincing depth alignment")
> While AR aims to enable display and interaction with the real world but with augmentations, MR aims to bring virtual objects into the real world and allow you to interact with those.
Would you mind giving user-facing examples to illustrate the difference? As in something you consider AR but not MR and vice-versa.
I've googled a bit but I don't understand the distinction between "enable display and interaction with the real world but with [virtual] augmentations" and "bring virtual objects into the real world and allow you to interact with them". As someone with little AR/VR/MR experience these sound like two ways of phrasing the same thing. Thanks!
Consider something like Gravity Sketch [0] as MR. You are designing a virtual object and it is useful to bring it into the real world to (a) collaborate on it with others in a shared workspace and (b) envision it within its real context. For example, you can design a chair and actually place it in a living room while you are working on it to see how it looks.
An AR example would be the classic overlay of text on a sign you are looking at with a version that is translated into your language.
Not working in the field, I'm also having a hard time grasping the difference. Is one MR because it's 3D and the other AR because it's 2D? And is the collaboration part a defining feature?
Where would you place Pokemon Go's "AR view" [0] kind of implementation, for instance? (The first link I found was for a headset, but I'm also actually curious how you'd qualify the phone-only version of it.)
Or what would it change if, in your Gravity Sketch example, each user saw colors or rendering adapted to their vision/preferences, making it a separate experience for each of them?
I think of it this way, for example: AR is a graph/dashboard overlaid on real-world objects, processing and annotating them, without hiding that the result lives on a 2D screen. MR is more like a cube that falls onto the table you see through the screen and reacts if you punch it with your hand (all as seen by the camera); it doesn't care that your experience of the MR world is bound to a 2D screen, because it virtually exists in the space around you.
This blog post is written by Karl Guttag who is someone who is interested in AR, but not very interested in VR. The reason he is coming at it from an AR angle is that is what kind of device he is looking for. If you look at the rest of his blog you can see many articles related to AR.
That’s fine, but it’s okay to call out the author’s bias. They contributed wonderful tests of the limitations of the headset. But the headset is only a failure for the author’s use case (AR).
Having used Hololens vs several passthrough-AR devices (Quest 2 etc) I'd take "poor image quality" over "limited FOV and the need to use it in a darkened room" any day.
Until AR display tech gets significantly better I think AR based on passthrough cameras is the only sensible approach.
You are correct that MR today implies that the Virtual image is locked to the real world with some form of SLAM.
I was a bit sloppy in that regard. I was more worried about what was going on. But when combined with the word "passthrough" to see the real world, I tend to slip back to the term "AR passthrough," which is what this type of thing was called for years. MR, as I remember it, is a more recent term used to distinguish different types of AR. Then things flipped, and XR (= AR/VR/MR) came to mean what AR used to mean.
My guess is that late in the program, the importance of passthrough was elevated, perhaps in response to Apple rumors, but they were stuck with the hardware that was in the pipeline. The passthrough available is a big improvement over the Quest 2 for seeing your gross-level surroundings, but not up to regular use in full Mixed Reality.
There are probably decades of "optimizations" left to do.
I also think there is a lot to be said for a VR headset with even mediocre pass through. I find VR very engaging and I think it can be very useful -- you can have the screen space of many monitors if the headset has good enough resolution (not good enough on a quest), plus there is a ton you can do with modeling and visualization.
But it is a big turn off to have no visual idea of what is going on around you. Just to be able to press a button and be able to see that there is someone standing around you, or you are too close to a couch, is a big win for a VR headset in my opinion. Obviously the visual fidelity will improve in the display and the pass through. But just starting to get some pass through is very nice.
I can't help but view the (myriad) odd choices in the Quest Pro in the light of Carmack's resignation letter. It really does have the camel feeling of a horse designed by committee, instead of anything filled with 'give a damn'.
I hope the folks there can figure out the operational difficulties, and find a way past whatever org chart drama is the source of the dysfunction. The Quest 2 has been a delight, but enough time has passed that the limitations are starting to chafe. I would love to see a true Quest 3 made by people who actually like these things.
It's definitely a compromise device. My gut feeling is that various factors delayed its delivery and they were ultimately left with the choice of delivering it now as it is or potentially losing the whole business market by being overrun by Apple, HTC or others in that arena. Nearly all the interesting tech in the Quest Pro will be outdated within a year - it would be impossible for them to ship it if they waited even another 6 months. So I think they said "ship it," and honestly I think they were right - as people say, ultimately "shipping is everything" [0]. Witness Apple struggling year after year to get their AR/VR product out. It's actually really hard, and if you wait for all your problems to be solved you will never ship anything. The product may be full of compromises, but the process of shipping it will put Meta miles ahead of those that still haven't got anything out the door yet.
I mean, historically, "it isn't very good, but at least it exists" has _not_ been a particularly effective approach to consumer product launches, not if the competition is waiting a bit longer to bring out something good.
The problem is that we don't exactly need AR/VR: we can put screens around for a fraction of the cost, and video games are nice to play sitting down. We can run after each other giggling in the forest if we want some movement fun, and we don't exactly need anything overlaid.
I can understand that they're trying to look far ahead and anticipate a future where they'd make a lot of money by being useful, but there's such low latency and richness of perception in normal, atomic, real-world interactions that I'm not sure we can or want to replace them with something that simply adds a paid intermediary.
It's like adding a blockchain to transact cash, for instance.
> While AR aims to enable display and interaction with the real world but with augmentations, MR aims to bring virtual objects into the real world and allow you to interact with those.
That's literally the same thing in terms of hardware. If you want to bring virtual objects in the real world, and not have weird z-sorting issues all over the place, you need to have an understanding of the real world, which in turn gives you the possibility to do augmentation.
Beside, MR is just a confusingly overloaded term these days. It can refer to:
* Microsoft's Mixed Reality brand of VR headsets (that completely lack any AR/MR features)
* filming people playing VR games in third person and mixing the game footage into it with LIV or similar software
* doing AR with VR headsets and pass-through cameras, instead of see-through optics (e.g. Lynx R1)
> But why only a single color pass through camera?
Pico4 makes the same mistake. I really don't get it. Lenovo Mirage Solo back in 2018 used two front facing cameras at a proper IPD, and while the image is still quite low resolution, having actual 3D with zero distortion made the thing feel like wearing actual glasses, not like looking at a weird camera feed. It's a ginormous quality improvement for very little extra effort. Using pass-through on that thing is still the only time I could literally forget I was wearing a headset.
On mobile, so I can’t properly view/appreciate the detail-shots, but this is exactly the kind of post I’ve been waiting to see written covering the Quest Pro.
I tried the Quest 2 last year mostly out of curiosity for the use cases that Meta themselves would go on to push as the primary focuses of the Quest Pro; VR productivity and virtual monitors. I was left drastically underwhelmed at the time with the resolution and viewing experience from the Quest 2 (among other things), and after hearing that the Pro wouldn’t have much of an upgrade in that department I was really curious what actual productivity focused power users would think of it.
This blog/series of posts looks to be just that, and I’ve added it to my feed reader for the rest of the series!
If you’re interested in virtual monitors, and willing to be a bit ahem bleeding edge, try the NReal Air. They’re no replacement for a high quality 34” curved monitor but come kinda close to a nice 25” 1080p monitor IME.
They suffer the same issues as other VR solutions (focusing on fine text can be hard, it shakes on your head, etc). But unlike the Quest, you can set it to “Aircast” mode, which basically replaces the VR/AR experience and turns the headset into a true “head mounted monitor” - which is different from their special software. I tried it with their virtual monitor software, and it’s unusable. If you move your head, it refreshes the monitor you’re viewing at like 2fps, so it’s super jerky and not smooth.
It’s not good enough to replace a real stationary monitor for me yet, but it’s probably good enough for someone constantly on the go, or for airplane usage (extra privacy). They’re also far more comfortable than a Quest since they don’t have batteries or a processor built in.
I want to add that I think they shake more than the quest because you can’t strap it in tightly like you can with a quest and a hard strap
Self promote: https://primitive.io is a VR enabled collaborative immersive development environment where we have optimized code readability in every respect
Your comment is the first time I heard about these glasses. The $379 price tag seems quite reasonable and I was really thinking about buying a pair. After looking a bit closer at the specs I think I will wait for a 90hz+ version.
But else, they seem great. It's cool that they draw power from the connected device, one battery less to have to worry about.
Maybe to explain why I want more than 60hz - I wouldn't buy them to be productive, but for gaming. I've been gaming at 120hz for quite a few years now, and don't really want to halve my framerate. Even when gaming in VR I'm experiencing between 60 and 90fps.
I guess gaming is the only thing which prevents me from buying these glasses, for work 60hz is enough and most movies are 24fps anyways...
> Your comment is the first time I heard about these glasses.
They had a huge media blitz recently on YouTube. They got a bunch of YouTubers to talk about how great they were paired with the Steam Deck. Apparently they extend the battery life too because the built-in screen uses more energy than the glasses.
I don’t do much gaming where specs matter (mostly co-op of casual games), and I’ve never used them for gaming. I imagine I wouldn’t mind them too much depending on the game. I bet a game like factorio would struggle since there’s a lot of UI elements that are small and it’d be hard to distinguish with the glasses instead of a big high-res monitor. Other games would probably be fine though (again except for frame rate).
Didn't even think about this use case... Sounds fun during a plane flight, holding the Steam Deck in your lap and just resting your head however you feel comfortable. Probably a bit jarring for the other passengers :) But also great to just lay on your couch or even in bed.
> I bet a game like factorio would struggle since there’s a lot of UI elements that are small
Bet, yeah. Probably great for first-person, third-person and racing games. Games which rely heavily on UIs and menus are probably not that fun. I don't know how sharp the screens on these glasses are, but I couldn't imagine playing Factorio on an HTC Vive. Granted, the Vive is old and screens have gotten better, but it's the only reference I have.
> They’re no replacement for a high quality 34” curved monitor but come kinda close to a nice 25” 1080p monitor IME.
So, this is something I think I'm pretty much missing here. Acceptable 25" 1080p monitors are pretty cheap these days; why would a potential user not just get some of those?
VR might or might not be useful for some things (I'm pretty much unconvinced so far) but the "virtual monitor" use case is just baffling to me. Actual monitors exist.
> the "virtual monitor" use case is just baffling to me. Actual monitors exist.
First of all, it works with phones, tablets, game consoles, computers, etc. Anything with a display output, so keep that in mind. The use-case isn’t just big desktop computers or even stationary devices.
1. Travel, you can use this on an airplane, which is nice for frequent travels, or if you want to watch a movie on a bigger screen on a plane. It also works in hospital beds, or anywhere else you may want to enjoy a screen but not bring monitors or TVs.
2. Ergonomics, you can lay down and use the glasses a lot easier than a mounted monitor overhead (and a lot less risky). This is useful both for finding ergonomic positions for general life, and also for people who may be disabled, or otherwise have some bodily issue that forces them to lay in bed (in hospital, elderly, etc).
3. It’s simply a bigger screen. It fills your vision; it’s a lot bigger than a 25” display, even if that’s roughly the equivalent quality. If you want to, e.g., watch a movie or play games, it’s a better experience. I’ve heard YouTubers like it for their Steam Deck.
4. It’s a lot smaller than a monitor. If you live alone in a studio, this could save you space on having a dedicated desk and desk equipment. You can just work at your table, and it packs away to a backpack when you want to eat.
5. You could even walk around wearing it as a HUD - there’s a company that makes live translation software for the headset, where you can read “subtitles” of people talking.
I enjoy using my quest pro on the couch or in bed - it is a great way to watch shows on Netflix or play stationary games like Tetris or Cubism. Forget about 25", it is more like a full projector-size movie on my wall!
>I tried the Quest 2 last year mostly out of curiosity for the use cases that Meta themselves would go on to push as the primary focuses of the Quest Pro; VR productivity and virtual monitors. I was left drastically underwhelmed at the time with the resolution and viewing experience from the Quest 2
That's not really a fair assessment of the Quest 2. The Quest 2 is just a budget (400 Euros) gaming console, and for that it works great.
A 400-dollar gaming console with a theoretical resolution of 1,832 × 1,920 per eye is obviously not gonna perform well if you use it to emulate high-resolution virtual monitors meant to sit further away from your eyes, further decreasing the perceived resolution, so you can read text and perform productivity tasks. Of course it will suck at that, but it was never meant to do that.
That's like complaining your Toyota Aygo sucks at offroading or track racing.
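Some rough arithmetic makes the point: what matters for reading text is angular resolution (pixels per degree), and a headset spreads its pixels over a far wider field of view than a desk monitor does. A sketch, where the ~90° horizontal FOV and 60 cm viewing distance are my own assumptions for illustration:

```python
import math

def pixels_per_degree(pixels, fov_deg):
    """Average angular resolution across a display's field of view."""
    return pixels / fov_deg

def monitor_ppd(res_px, size_in, distance_m, aspect=(16, 9)):
    """Approximate angular resolution of a flat monitor from the viewer's seat."""
    width_in = size_in * aspect[0] / math.hypot(*aspect)  # diagonal -> width
    width_m = width_in * 0.0254
    fov = math.degrees(2 * math.atan(width_m / 2 / distance_m))
    return res_px / fov

# Quest 2: 1832 horizontal pixels over an assumed ~90-degree horizontal FOV
print(pixels_per_degree(1832, 90))   # ~20 pixels per degree
# A 25" 1080p monitor viewed from an assumed 60 cm
print(monitor_ppd(1920, 25, 0.6))    # ~39 pixels per degree
```

By this estimate the headset delivers roughly half the angular resolution of even a modest desk monitor, which is why text-heavy productivity work feels so soft on it.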
Isn't the Quest 2 the newest headset they offer? Why wouldn't it be fair to try that out and see if it worked, and come to the conclusion that it was underwhelming?
No, the Quest 2 has been out since 2020. The Quest Pro is the latest.
It's unfair because the Quest 2 is mainly a console and was never marketed as a productivity AR device. That's like trying to do your job on a PS5 and claiming it's underwhelming, taking your Ferrari off-roading and claiming it's underwhelming, or running ML workloads on a Game Boy and claiming it's underwhelming.
You are of course free to try those scenarios and hold the opinion that the devices are underwhelming, but nobody can take your point seriously.
The Quest Pro is "newly" available now (as of late Oct 22 it appears), but the GP said they had tried last year... I guess now that we're in 2023, that could mean some time in 2022, though. I took their comment to mean that they tried out the 2 in anticipation of the release of the Pro, some time in the more distant past.
> after hearing that the Pro wouldn’t have much of an upgrade in that department
Not sure where you heard that, the Quest Pro absolutely offers a massive upgrade in the viewing experience, specifically for the case of virtual monitors. It's mostly in the lenses but it's an absolutely huge difference.
The primary use case for passthrough is not AR; it was never intended for projecting stuff over your room's interior, and as such it's crap at a thing that it is explicitly not optimized for.
The primary use case for passthrough is bringing stuff into VR, specifically your hands (when used in hand tracking mode), and your keyboard. Your monitor is projected into VR by being hooked to your headset. Any other function's utility has been traded off for these purposes.
That's an extremely limited view of passthrough. Passthrough will be most useful for spatial applications like telepresence, in-situ learning, modeling and creation, and visualization, where remaining present and aware of your surroundings is preferable to the undesirable immersion of VR. The bet here is that useful interactions with virtual objects and interfaces don't benefit from disorienting virtual environments.
For an example of this on Quest Pro, check out Wooorld. It's a fun hangout app and is arguably a much better experience in passthrough.
Actually, I think the opposite is true. The cameras are far off from the eyes, so in any case you wouldn't feel the immersion, and I think making the cameras better would make you more disoriented, rather than less.
I had the same experience with other headsets such as the Varjo XR-3, which feels like you’re looking at a camera feed.
However, the Quest 2 and Pro apply some smart reprojection to correct this, and it feels like everything is almost exactly in the right place.
You do notice the artifacts of this reprojection when you’re holding objects close to your face, when you’re moving objects around, or when you’re looking at thin objects such as wires.
I also recall John Carmack talking about this topic in one of his unscripted talks at Meta Connect, but I’ll have to dig that up.
That's a non-issue. The Lenovo Mirage Solo had stereo cameras at a proper IPD, and the immersion of passthrough on that thing is still by far the best I've experienced in a VR headset. The little offset between the cameras and your eyes is unnoticeable unless you try to touch your nose or do other stuff really close to the headset. And if you really needed to, you could get rid of that offset as well (a 45° mirror with the cameras mounted vertically would capture the same image your eyes would).
The problem with most other VR headsets is that they either have only a single camera for pass-through or use the tracking cameras that are mounted far away from your eyes. Doing the remapping of perspective in software just doesn't give good results.
It's a weird point to spend so much ink on, though. The author explicitly raises the possibility that the marketing fluff could just be marketing fluff, but then seems to go on as if this can't be the explanation.
Seems obvious to me that the "Mixed reality" stuff is just marketing fluff, if you pay any attention to the technical evolution of this product line when interpreting these complaints.
I’m interested in it for a hybrid VR-AR for flight sim cockpits. The Varjo Aero is used for this, allowing you to build a cockpit replica and interact with it via pass thru but still using VR for outside the cockpit.
A new VR arcade opened up in my area and I visited with my son. They were using HTC Vive headsets, which retail for $1,000+. It’s the first time I really used VR, especially in a couch coop mode setup. The following struck me:
(1) There is no killer, must-have game for VR. Beat Saber seems to be the most popular and mainstream game, but it’s not nearly engaging enough to convince the mass market to buy one or more VR headsets. Relatedly, we cycled through about 5 games and none of the graphics were particularly realistic or immersive. I didn’t walk away from the experience thinking wow! And note I’ve had this “wow” experience playing PS4 games on a higher-end TV that supports HDR.
(2) We had a 1 hour session with the goggles and I felt quite sea sick during and after gameplay. The headset was also uncomfortable after 30 minutes or so. Playing games while sea sick is about as much fun as it sounds. It’s pretty awful.
(3) The arcade was expensive. I paid nearly $100 for 1 hour for 2 people in a MCOL area. One of the reasons for the high cost is the arcade required an employee to help us cycle through games and explain gameplay and controls. It dawned on me after the experience that my son and I would have to buy two headsets, not just one, for any couch coop play. Maybe we’d have to buy two copies of games as well? Gaming is really becoming a social experience, which is why Fortnite is such a big hit: it’s essentially free and cross-platform.
VR is dead in the water unless they bring to market must-have games (or other apps and Beat Saber is nowhere close to this), solve the sea sickness problem, and somehow bring down the cost so two headsets cost about as much as a current generation console.
I think it’s also very premature for Meta to go after the business market. It first needs to convince geeks like me to buy and evangelize about VR. If anything, my experience at the VR arcade convinced me NOT to buy VR goggles. Good luck building out a new market if this is a typical reaction to the state of VR today.
> and I felt quite sea sick during and after gameplay
Part of the problem here is that you tried to push through it. Basically the only way for someone who immediately experiences VR sickness to acclimate is to stop after 5 to 15 minutes, give it a while to go away, return to it, and repeat the cycle a couple of times until it stops completely. If you just try to push through it, that actually makes it worse by aggravating the reaction you're having.
Of course, an arcade making the (poor) decision to brush over any issues isn't going to tell you this.
> The headset was also uncomfortable after 30 minutes or so.
I'd bet they were using first-party headstraps, which for whatever Godawful reason are almost universally uncomfortable across every brand. Companies like BoboVR produce a different style of much-more-comfortable 'halo' strap arrangement that rests the weight on your head more like a hard hat instead of clamping it to your face.
> It dawned on me after the experience that my son and I would have to buy two headsets, not just one, for any couch coop play.
There are a couple of games that mix one VR player with one or more non-VR players. Carly and the Reaperman, for example, has one player in VR manipulating the environment, and the other playing a traditional platformer. Another example (and an excellent party game) is Keep Talking and Nobody Explodes, where one player is defusing a bomb in VR but only the players outside VR have the bomb defusing manual.
There’s also Phasmophobia, a ghost-hunting game that supports VR. You can play with people without them needing a VR headset.
It’s also one of those types of games that’s completely different in VR than regular desktop. Without VR, it’s a fairly fun game with a really good learning curve to it. On VR, it’s a completely different game; the atmosphere is amped up by like 1,000 and it becomes the scary game it was meant to be. I wasn’t even able to complete the tutorial on VR.
> Part of the problem here is that you tried to push through it. Basically the only way for someone who immediately experiences VR sickness to acclimate is to stop after 5 to 15 minutes, give it a while to go away, return to it, and repeat the cycle a couple of times until it stops completely.
Considering the context of adoption obstacles, this is worse than VR just making people feel nauseous.
If VR made you feel bad, be sure to both remember how bad it made you feel, and also voluntarily make yourself feel bad again, repeatedly, until (maybe) you won't feel as bad anymore, and then you can have the day-1 experience? Am I tracking this correctly?
The "problem" with VR is that it doesn't yet have the mass market appeal of traditional consoles or PC gaming. For higher-end VR gaming, that's mostly because the hardware is (as you say) quite expensive and compromised in various ways; however, it's an absolute game-changer for some people, e.g. flight and space sim players, for whom none of the issues you've raised are relevant. For lower-end stuff, I imagine it's facing the same old problem of trying to market a new product category to consumers who don't really understand or care.
I can't really speak to what Meta's doing, but as somebody who pre-ordered a Vive at launch and has had some of my most compelling gaming experiences ever in VR, I think you should recognize that you aren't really in the target market for higher-end VR gaming. The absolute size of that market is certainly a concern for the continued development of VR tech, but so far it seems to be doing alright as a relatively niche gaming sector (see: the Valve Index).
> somehow bring down the cost so two headsets cost about as much as a current generation console
Half Life: Alyx is extremely realistic and immersive and is the game I most commonly use to show people VR for the first time. I have a nice PC gaming rig however - I'm not sure what it's like otherwise.
If you want to see the most barbaric side your friends have to offer, put them into Blade & Sorcery, if they don't mind graphic violence.
My gf is the sweetest, non-confrontational human I know. She once cried because she saw a dead frog on the street.
After 5 minutes in Blade & Sorcery she was brutally decapitating already dead enemies while giggling, throwing them off cliffs while watching how they fall down etc.
When she took off the glasses after 10m we both just had to laugh, as she was heavily breathing and sweating from all the fighting. One of the craziest things I've ever seen her do.
The same thing has happened with the two other people I've put into this game, and with myself.
Just wanted to chime in because I agree with you about HL: Alyx. Currently playing it, and it's the best thing I've ever witnessed in VR. Crouching behind a virtual wall because enemies are shooting and throwing grenades at you never felt so terrifyingly good.
HL:A is a masterpiece. VR was made for that game, not the other way around. I spent like 3k buying the Index and building a new PC just to play it and it was absolutely worth it.
I've never felt such a consistently visceral adrenaline response from a game before.
a) 10 million Quest 2 devices have been sold. There is already a market.
b) The business market is already a large adopter of VR devices, e.g. for training, warehouses, and architecture/car demos. Meta is big enough to tackle both markets at once. And the Quest Pro helps validate newer technologies that will trickle down into their cheaper models.
c) Motion sickness is better on newer devices, and from research videos Meta already understands what its causes are. It's now largely an engineering question of how to get those solutions into shipping products.
My employer could send me one. I wouldn't use it. I don't want to be virtually present in meetings. I've left the office and can multitask now (eg do household chores, go for walks). I'm not going back to an office, physical or virtual.
> My employer could send me one. I wouldn't use it.
I would, if it was high quality enough to be comfortable for long term use, and there was decent enough virtual desktop software to have many floating windows of all the software you use.
It's already commonplace for employers to track mouse movement and the like to monitor remote workers, and sometimes to impose video requirements as well (where video is not just another option for working with colleagues but a policy requirement so that managers/owners can surveil), because that's what they're able to track with the tech inside a laptop. Give them more sensors and they won't leave it to workers to decide (except to decide to leave those jobs in search of ones that surveil less). Workers can organize against these conditions, but it's not left to individual consent.
Laptops don't track my physical behaviors besides mouse movement and keystrokes. They're far less aware of actual presence, attention, or physical activity, and are easily maskable.
Of course not 100% of people would use it. That doesn't mean that it isn't a giant profitable market. Video conferencing has the additional problem that you can't be on camera while you're wearing a VR headset, so I think it isn't even a major use case.
This is the main issue with it right now, and part of the problem has been that to get AAA-quality games on it you need to tether, because none of the existing headsets have anything close to the power needed to run a decent game; not to mention they get very uncomfortable, very quickly.
I mentioned elsewhere about Apple's apparent headset that's possibly coming out this year. This _could_ be where Apple can shine. Whilst they've not exactly got a great rep as a gaming powerhouse, if their headset does indeed end up using their M2 architecture it means it's going to be extremely power efficient and very lightweight, solving two of the biggest issues. But it also means it's got graphics capabilities for most modern AAA titles.
Obviously they'd need to work with a few studios to incentivise them to develop for their OS, but given their history of entering and significantly disrupting markets I'm sure they'd have no problem bringing along a few decent studios and getting them to work with their headset.
Combine that with their existing App Store and Arcade+ services and you've got thousands of developers itching to get their games released on the platform.
If they can successfully replicate the popularity of their iOS developer ecosystem it would completely destroy Meta and their quite frankly crap little selection of apps.
---
For what it's worth I found initially I felt pretty awful using it, it does take time to adjust and you're best using it in short bursts for the first few weeks.
Recently I decided to do something stupid and try out the F1 2022 game on it (via cable plugged into a PC), expecting it to make me feel extremely sick. I was pleasantly surprised at how amazingly good it felt. I was able to complete a couple of races without feeling sick at all; it's definitely an acclimatization process that the manufacturers need to make a lot clearer is needed.
Dang, are there really no other HL:A quality games for VR? I picked a Valve Index and have been playing through Alyx an hour or so at a time and each time I think to myself "I never want this game to end." I've been trying to find some games to play after it but haven't really found any.
I've installed the HL2 VR mod and played that a few times, but without teleporting movement I got sick quite easily. I saw another commenter saying you have to acclimate yourself to that, so I suppose I will try that in between sessions of Alyx.
Here's to hoping someone makes something as polished and as beautiful as HL:A
Try "Into the Radius", it's far lower budget but has that same immersive wow factor through smart art choices and clever interactions with physics objects.
Was it the original HTC Vive? Or even the Pro? Because both of those are pretty old by now, with even the Pro being a pretty minor upgrade to the original that came out in 2016. I'm pretty sure the hardware improved a lot, and having used the Quest 2 (just once or twice at the office, not really into VR/gaming in general) it's a massively different experience from the older Vive.
I had to laugh because you typify so much the average commenter about VR ... "I tried it once for an hour and here is my essay writing off the whole field".
> I think it’s also very premature for Meta to go after the business market.
I strongly disagree here. The enterprise market is relatively cost-insensitive, and so it's a great vertical to target. The partnership with Microsoft is strategically very sound; if you can get Teams inside the Quest headset, then you can plausibly replace every lightweight Windows laptop with a Quest Pro (i.e. for road warriors doing spreadsheet/doc/email/collaboration work, not developers).
The question is, how far away are they from replacing monitors with VR displays? I suspect this generation is not the one that wins that battle (though Immersed was making this work on Quest 2, so maybe), but it plausibly could be next generation, in which case this is the correct time to be building the MS partnership.
On gaming -- we are multiple generations away from VR being more than tech demos, IMO. Beat Saber was fun, but pretty much everything else feels boring after 30 mins. I agree with your assessment that VR gaming is quite underwhelming, but I disagree that they need to win over geeks to crack the enterprise market; they just need to get Nadella on stage to provide a compelling use-case to enterprise users, talking in the enterprise language.
> The question is, how far away are they from replacing monitors with VR displays?
The Quest2 is a massive pain to use for an extended period of time, but I think it totally proves it’s possible. You can browse, use teams, anything browser related easily. I haven’t used the QPro but I imagine it’s a nicer experience (except for battery).
Resolution is a concern, and IMO they should have made a high-res but otherwise low-spec version for the Pro. Maybe they just wanted to flex all the possible tech in research (eg the fancy controllers). The road-warrior angle is totally a real demographic and I think this could be a real win in a few generations (I say 3-5 years at the current pace). I suspect it’ll first look like the NReal Air (based on my experience using one), since it supplements an existing computer with a bigger screen. I think another demographic is the opportunity to kill the WFH desk. Any table becomes good enough, or a treadmill… or anything. When Covid started, I had to go buy a desk and monitor since I didn’t use one at home. I know lots of coworkers who lived in small apartments and had to make big compromises to wfh in their space.
Unlimited monitors in a small, light package. Potentially much better ergonomics (unproven), float your monitors above your hammock or comfortable chair or whatever. Potentially much better collaboration, since you can be “in the room” with your team, see their facial expressions, etc.
Almost none of that is something road warriors from the parent comment need or ask for. Do they want to edit spreadsheets in a lower resolution, headache causing, uncomfortable headset without even seeing their hands or the keyboard properly? My guess would be no.
The unlimited monitor argument is only relevant if the resolution, battery, ergonomics, pass-through, etc vastly improve. So at this stage it's still a fantasy.
> Potentially much better collaboration, since you can be “in the room” with your team, see their facial expressions, etc.
Seems like you are talking about some theoretical future AR or MR headset. As of today MQP can not do any of that.
There's a VR place near me called Sandbox VR that gets good reviews on Yelp and Google. My family went and it was... okay. We played together for about 45 minutes and we all came away with the same opinion - it was kind of fun, but none of us really wanted to play it again.
I don't know which headset they used but I do know we each had an entire PC in a backpack strapped to us.
Fairly sure they use the HTC Vive, which seems to be the standard for overpriced VR experiences. It’s mostly fun to play in a group, I think. But the actual games are very underwhelming; they feel like tech demos. And on top of that, the staff never set IPD or anything, so unless you adjust it yourself things can be a blurry mess.
The experience of a game like Alyx on the Index/Quest is about 100x better.
I did something similar twice and it was meh. Trying real games was mind-blowing, on the other hand, and buying the Go and then the Quest was amazing. Different experiences. I’m waiting for the Quest 3, personally.
I’ve been checking in on VR occasionally over the past 30 years (first VR experience was playing Dactyl Nightmare in 1991 and 1992). VR still feels like it has a long, long way to go.
Have you tried Half-Life: Alyx? I’ve yet to see someone say that after they’ve tried the state-of-the-art experiences.
Also, I hear people say that video games are dumb and pointless sometimes, which reminds me that I might not myself realize when something has incredible potential. I think a lot of people might never see the potential in VR, when it’s already here, with millions of people using the Quest.
Alyx has potential but I can’t play it for long because I get sick. For me, that game would work better as a traditional console title.
I think the best VR game of this generation is Beat Saber.
And I 100% agree with you about VR being here. Meta has sold something like 10 million headsets. VR is here and it’s doing well. It’s probably not going to overtake other forms of gaming and that’s okay. It doesn’t have to. Ten million users means it’s a big success.
For me, the real “killer” VR experience is playing Walkabout Mini Golf with a friend remotely on Meta Quest 2’s. I have never felt like a remote person was right _there_ as I have when playing that game. And this is with silly, head-only avatars.
The combination of such realistic feeling gameplay and multiplayer is way more compelling than I thought it would be.
> We had a 1 hour session with the goggles and I felt quite sea sick during and after gameplay. The headset was also uncomfortable after 30 minutes or so. Playing games while sea sick is about as much fun as it sounds. It’s pretty awful.
This reduces significantly with time. Also, on the first try you don't know which settings work for you: correct IPD, straps, etc. When I used my Quest 2 for the first time, I could hardly use it for 10 minutes, and now I can use it for an hour. I think if I used it every day, even that limit would increase.
A VR device is for using at home, slowly familiarizing yourself with it and using it in short bursts until you get used to it and don't feel any bad effects.
I did this and it's amazing for games, and especially simulators like DCS.
> They were using HTC Vive headsets, which retail for $1,000+.
Slight nitpick:
Those would be the HTC Vive Pro headsets. The "HTC Vive" came out in 2016 and is a pretty shitty device by today's standards. Also, it retailed for $600.
> We had a 1 hour session with the goggles and I felt quite sea sick during and after gameplay.
This varies from person to person, but most people only get sick when playing games where the real-world movement doesn't match game-world movement. For example, most people can play Beat Saber with no motion sickness problems, but will feel sick almost instantly if they play a racing game or a game that lets you walk by using a controller input.
I've had VR since late 2016 and this still rings true. I can play PokerStars VR for 4+ hours and my only complaint will be the weight of the headset starting to feel overbearing. On the other hand, 2 minutes of Assetto Corsa and I feel like I'm gonna lose my lunch.
> The arcade was expensive. I paid nearly $100 for 1 hour for 2 people in a MCOL area. One of the reasons for the high cost is the arcade required an employee to help us cycle through games and explain gameplay and controls.
Shitty experiences in VR arcades are exacerbated by the fact that they're usually staffed by people that just needed a job, and not VR enthusiasts that care about sharing their excitement for the platform.
(EDIT: Also, as someone else mentioned, having IPD configured properly is crucial. If the staff aren't setting IPD, then it can create headaches and sickness.)
> solve the sea sickness problem
It's a difficult problem to solve because the core issue is the mismatch between movement in the two worlds, as mentioned above. When your eyes detect movement but the vestibular system in your inner ear says you're not moving, your monkey brain thinks you ate something poisonous and creates the urge to purge.
A company called Otolith Labs has supposedly created a device that potentially solves the problem (https://otolithlabs.com/vertigo/) by making your vestibular system essentially send static, but they're targeting it to sufferers of vertigo rather than VR players.
> and somehow bring down the cost so two headsets cost about as much as a current generation console.
As others have said, the Meta Quest 2 is $400. And it's PC VR compatible, so if you have a gaming PC, you can use it with that.
> I think it’s also very premature for Meta to go after the business market.
We're still years from business use for VR aside from maybe virtual meetings with avatars. I think the biggest limiter is resolution. Text needs to be decently large to be readable in VR. I have a 1440p monitor that only uses what, 45 degrees of my eyes' FOV? When I think of what I would use VR for, I'm thinking essentially unlimited monitors. I could have several pages of documentation visible at any time.
I've been a complete cynic about VR/AR since it started. I just didn't see any point to it outside games (which don't interest me).
But someone in our office had a Quest Pro and I was blown away. It's the first time I've ever felt like I wanted to use or build things in or for VR, and it was the Mixed Reality mode that made it feel that way.
Unlike this review, I found mixed reality very impressive. I suspect it was because I was viewing it primarily as a VR device with just some limited capability for real-world interaction, as opposed to an AR device. I think when viewed through a VR-first lens it's very impressive.
I still don't know what I'd actually want to use it for, but it was the first time I felt like Meta might be actually onto something.
Games are the point, and they're all VR is good for for the foreseeable future. The problem is that Meta isn't content with that; they want it to be some "social" thing so they can make $$$ instead of just $.
As a Quest 2 user, the passthrough view is only usable for finding the controllers again and clearing my surroundings, so I genuinely wondered how much better it got on the Pro with all the “project a secondary display” push.
From these results, it looks like you’ll still be SOL if you forgot which key on your keyboard has the backtick, or try to quickly glance at your phone, or check your room’s air quality or temperature to adjust as needed.
From the pictures it still looks twice as nice as the Quest 2’s, but we’re so far away from something actually workable.
It's good enough to hunt and peck a keyboard but a phone screen is usually too blown out. The Pro sits off from your face more so you'd just see a phone or a keyboard through the gap anyhow.
Having a virtual movie theater has been the killer feature for me. I sit in on my comfy couch and stream movies from my computer into the headset. Awesome. For me it is better than buying a real TV for the purpose of movie watching.
Also VR Chat (also an app) has been awesome for connecting and talking with new people. Environments from orbiting high above the earth in space to a rooftop restaurant in Paris make it fun to spend a few hours with new people.
As others have said, inside other VR applications there is a lot of "neat" "potential" but hampered execution due to software, hardware, or both. I always tell people we've made some gigantic leaps but we're still in the early stages.
As someone with first-hand experience in implementing pass-through AR, I found the review well-made (e.g., they measured photon-to-photon latency) and extremely interesting. Here's a few observations I made:
1. 30 Hz frame rate and 50ms photon-to-photon latency sound so bad that I'm surprised they launched this at all. I would be very interested in trying this out to see how (un)comfortable the experience is. What I would have expected is pass-through feed rate synchronized to display frame rate (i.e., 90 Hz) and photon-to-photon latency no more than perhaps 2 frames (22ms). At 90 Hz and 11ms you can actually play ping-pong while wearing a pass-through headset.
2. I can relate to the exposure, saturation, and noise issues. The existing well-tuned video ISP pipelines aren't suitable for low latency, and building a new one that gives a good picture in indoor lighting from an essentially smartphone sensor, with an exposure time < 11 ms and without adding multi-frame latency, is HARD.
3. The above points to one possible explanation for the 30 Hz capture: maybe they used a cheap sensor that can't produce a usable picture in < 11 ms exposure time (in a dimly lit indoor environment) and thus had to go with 30 Hz capture. It would be interesting to repeat the experiment in a well-lit (outdoor) environment to see if the capture pipe kicks into a higher framerate.
4. I wonder if they do any warping to correct for the camera positions and how well that works. For AR-focused headset the ideal pass-through camera positions would approximate eye positions to minimize need for distortion. Even shifting capture positions a few centimeters forward from the eyes (which you'd need to do if you don't replace eyes with cameras or play tricks with optics) creates a subtle but noticeable effect unless corrected.
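Points 1 and 3 above lean on frame-time arithmetic that's easy to make concrete. A small sketch (the 90 Hz, 30 Hz, and 50 ms figures come from the review; the helper functions and names are mine):

```python
import math

def frame_time_ms(hz: float) -> float:
    """Duration of one frame in milliseconds at a given rate."""
    return 1000.0 / hz

def frames_of_latency(latency_ms: float, display_hz: float) -> float:
    """How many display frames a photon-to-photon latency spans."""
    return latency_ms / frame_time_ms(display_hz)

def stops_gained(slow_hz: float, fast_hz: float) -> float:
    """Extra light, in photographic stops, from the longer exposure a
    slower capture rate allows (ignoring sensor readout overhead)."""
    return math.log2(frame_time_ms(slow_hz) / frame_time_ms(fast_hz))

# One display frame at 90 Hz is ~11.1 ms, so the measured ~50 ms
# photon-to-photon latency spans about 4.5 display frames.
print(frames_of_latency(50, 90))

# Capturing at 30 Hz instead of 90 Hz allows up to 3x the exposure
# time, i.e. roughly 1.6 stops more light in a dim room -- one
# plausible reason to accept the lower capture rate.
print(stops_gained(30, 90))
```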
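The correction in point 4 can be sketched as a depth-based reprojection: unproject each camera pixel using an estimated depth, transform the 3D point into the eye's frame, and project it again. A minimal pinhole-model sketch in numpy (the intrinsics, the 4 cm offset, and all names here are illustrative assumptions, not anything from Meta's actual pipeline):

```python
import numpy as np

def reproject_point(pixel, depth, K_cam, K_eye, cam_to_eye):
    """Map one camera pixel onto the eye's image plane, given its depth.

    pixel:       (u, v) coordinates in the camera image
    depth:       estimated metric depth along the camera's z axis
    K_cam/K_eye: 3x3 pinhole intrinsics for camera and virtual eye
    cam_to_eye:  4x4 rigid transform from camera frame to eye frame
    """
    u, v = pixel
    # Unproject the pixel to a 3D point in the camera frame.
    p_cam = depth * np.linalg.inv(K_cam) @ np.array([u, v, 1.0])
    # Move the point into the eye's frame.
    p_eye = (cam_to_eye @ np.append(p_cam, 1.0))[:3]
    # Project back onto the eye's image plane.
    uvw = K_eye @ p_eye
    return uvw[:2] / uvw[2]

# Example: camera mounted 4 cm in front of the eye along the optical
# axis, so every point is 4 cm farther away in the eye's frame.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
T[2, 3] = 0.04

near = reproject_point((400, 240), 0.3, K, K, T)  # object 30 cm away
far = reproject_point((400, 240), 3.0, K, K, T)   # object 3 m away
# The near point shifts several pixels toward the image center while
# the far one barely moves -- why close objects (like hands) distort
# most, and why the correction needs a per-pixel depth estimate.
print(near, far)
```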
My Quest 2 is 99% used to play 5-on-5 Onward VR. To all you Onward PC users, my apologies. I thought I was done with video games, but it's fantastic. The experience reminds me of showing up to public basketball courts during the summer. You get on a team with some random people. Sometimes you're the best on the court, sometimes you're the worst. But I love the Quest 2 and would replace it if mine got broken, simply to keep playing Onward.
There is a learning curve. If you jump straight into the 5 on 5 matches without some basics down it’s not much fun. Do some single player against bots. Then play coop against bots. Then do multiplayer gun game. Once you can win a round of gun game, IMO, you are ready for assault, uplink and escort.
The main thing is knowing the maps and the radio. A team that uses the radio well and can call out locations will destroy a team of sharp-shooting lone wolves.
There are people who take it seriously and those who are trying to figure things out. It’s all good. People will teach you things. Also, the competitive lobbies are marked as such, so just avoid those.
> I also ran a small experiment where I sent a camera flash off in a dimly lit room and measured the delay with light sensors using one sensor to trigger an oscilloscope’s capture and the second light sensor to capture the display response. It typically took a little over four 90Hz frame times (~40 to 50ms) for the flash to show up on the image.
The tear is an artifact with how the screen capture works. The tear is not actually sent to the display.
People who capture Quest footage using scrcpy are well aware of this downside compared to using the built in recording feature which doesn't have the same tearing issue.
As a side note, has anyone seen a test/review of the simula one (https://simulavr.com/)? I think they actually have the correct ideas about making a business/developer headset and I'm seriously considering pre-ordering (but have had quite bad experiences with pre-ordering in the past, so have been holding off for now).
Though we finished our first Simula One "Review Unit" back in September (this being a stripped down prototype which demonstrates display, optics, and main board functionality), we haven't released a prototype to testers yet due to it being too unpolished. You can see some videos of it in action however here[1].
Here are some things on the horizon for us to finish before we release test units to YouTubers:
- Add an IPD adjustment mechanism[2]. Currently, our prototype's IPD is locked in place until we add this functionality, which makes it really poor for testing since there's lots of variation in IPD.
- Incorporate our front-facing cameras for our AR Mode. On this front, we recently shipped our Camera Boards to PCB fab[3]. Note that our cameras support 2448x2048 per eye resolution, and are placed more directly over the eyes than the Quest Pro (avoiding the issues discussed in the parent post). For a fuller comparison against the Quest Pro see[4].
- Replace our FPGA eval board with our own custom FPGA. (Just started on this; will likely discuss it during our next blog update).
- Integrate our detachable compute pack to the back of the headset (right now our compute pack is just dangling).
So all in all, while much of the core functionality is there, our headset is still too unpolished to send to YouTubers.
Also: I understand your skepticism RE preordering. We try to keep a consistent stream of blog updates to at least let people know what's going on engineering wise.
I agree with most of the points here. I think Meta is missing the point with their uncanny-valley avatars and focus on business applications. Personal connection, like events and fitness training, is key. These applications don’t rely on high definition. MR fitness is going to skyrocket this year and next year with color pass-through. Check out Liteboxer and how they are using chroma-keyed trainers to make it feel like they are in your home. The Quest Pro is better balanced and touches your face less, too.
Right under the price on this page https://www.meta.com/quest/quest-pro/ the first feature is AR / passthrough. In fact, most of the page is dedicated to things other than gaming.
> With breakthrough high resolution mixed reality you can engage effortlessly with the virtual world while maintaining presence in your physical space in hi-def color.
It literally never says AR. It says "mixed reality". In mixed reality, the pass-through (real-world) quality is secondary to the virtual objects that are rendered within it. These virtual objects are rendered at high resolution, hence we have "high resolution" mixed reality.
People seem to just not understand that mixed reality is different to augmented reality?
Well, I'm sure in future versions it will be a lot better. But my experience with my Meta Quest Pro is that it's more for getting a vague idea of your surroundings than for actually interacting with them.
For me, the only time I've seen it pop up is when I step out of my "zone". Rather than just flashing red, I get a vague understanding of where I'm standing and who's nearby. I've never had a problem with it at all, even though I acknowledge the picture isn't particularly great.
> But my experience with my Meta Quest Pro is that it's more for getting a vague idea of your surroundings than just interacting with them.
The point of the article is that this experience, while good, is juxtaposed with Meta's marketing claim that it's a mixed reality device. You're saying the device is good at the thing the author says it was probably built for. Mixed reality implies that you're seamlessly switching between environments, as if there wasn't a distinction.
It’s meant to be good enough to see someone walk up to you or not hit something like a dog underfoot. Maybe find a keyboard key instead of only touch type. It’s not meant to be as clear as your glasses so you can drive or take a vision test.
It's not promising to be able to drive with it. You can have the device on and see other people and the space around you while still interacting with virtual objects. You can safely operate the device in a cluttered space without clearing a 3x3 meter space.
Do you have a citation for Meta saying something like that?
The article implies they’ve been hyping up AR (which matches my tiny knowledge of the product) and that seems useless if the Reality part of AR is blurry.
The article is effectively lying. Try searching their page [0] for augmented reality. It's not there. They refer to "mixed reality" specifically because that is actually different to AR, specifically in the sense that "reality" is not the primary focus of the experience.
I really, really hope that whatever Apple ends up releasing completely wipes the floor with Meta's headsets. They need some real competition to force them to really push the limits.
It's going to be interesting to see what happens if the rumors about Apple's headset being extremely lightweight are true; it wouldn't be surprising given their tiny but efficient SoC designs. This is what will kill Meta if they aren't careful: they're about to get a competitor who can deliver significantly higher performance at lower power consumption and weight.
Not having the battery on the device but in the phone or on a waist strap, and using more expensive but smaller pancake lenses instead of Fresnel lenses, can change the perceived weight a lot.
Steve Mann built an HDR welding helmet [1] some years ago as a proof of concept, but I haven't seen any consumer products going in that direction yet. The closest I have seen is people strapping IR lights to their Quest 2 to get night vision (works because the Quest 2 cameras lack the typical IR filter) [2].
All of these are possible, individually, with current technology (e.g. "being able to see in extremely wide spectrum bands compared to human eyesight" means IR/nightvision cameras), but it's currently cost-prohibitive to combine them all into one device
The product (marketing) decision to focus on AR passthrough does seem really strange. If I were going to use a VR device for work, then maybe, if the resolution were high enough, it could be nice to have much more screen space, but I'd really only care that I'm not bumping into things. I'm not exactly sure why this AR route is being pushed before it's ready; living in the world through cameras seems like a much harder problem, best solved after getting the base VR case to work.
IMO the more interesting business cases would be around simulated environments even as simple as viewing 360 videos for places where workers would be expected to know what's going on when they enter.
Super compelling screenshots, but isn’t the thesis of “Meta Quest Pro (not?) viable for work” best proven out by somebody actually doing a shiny project using one? And if nobody does then maybe it really is for sure snake oil?
For example, the pics comparing the headset to the high-end Canon VR180... that suggests maybe the viz should just be bigger?
Great write-up about the faults, but there’s gotta be a work task that can work? Beyond beat saber?
I've heard that some of the AR stuff got nerfed because of the ability of IR cameras to see through some fabrics and Facebook not wanting to have those headlines in the newspapers. I just heard it as a rumour though, perhaps an insider with a throwaway account can "shed some light".
While I don't have insider info, I highly doubt this. The Quest 2 (which I own) uses monochrome cameras without IR cut filters, and they do passthrough with those. So any problems with privacy would have been found already with the current device.
I did some calculations this morning after reading The Information's (paywalled, sorry) article [1] on the Apple VR device which has 4K per eye resolution. My 27" 4K monitor sits about 28 inches away from my eyes on my desk. This translates into about 46 degree field of view, or about 83 pixels per degree. I can't see pixels at that distance.
If we take 60 pixels per degree as the Apple "Retina" standard, and if we assume that the field of view of the Apple VR device is ideal (120 degrees), then we would need a horizontal resolution of ~7K to match my setup at the "Retina" standard. If you compromise a bit on FOV, you can squeeze it down to 6K, which is within reach of today's display technology [2].
The Apple VR device is rumored to be powered by some custom silicon with foveated rendering optimizations (render only what you are looking at with the highest resolution) and is maxing out an M2 class GPU with unknown # of GPU cores on the device per The Information article. A doubling of GPU performance may get us to the 6K display per eye, optimistically the v2 of the Apple VR device?
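For anyone who wants to redo my back-of-the-envelope math, here's a short script (the 27" 16:9 monitor and 28" viewing distance are my setup from above; the 120° FOV is the rumored ideal):

```python
import math

def monitor_fov_deg(diagonal_in, distance_in, aspect=(16, 9)):
    """Horizontal field of view (degrees) of a flat monitor viewed head-on."""
    w, h = aspect
    width_in = diagonal_in * w / math.hypot(w, h)  # 27" 16:9 -> ~23.5" wide
    return math.degrees(2 * math.atan((width_in / 2) / distance_in))

fov = monitor_fov_deg(27, 28)   # 27" 4K monitor, 28 inches away
ppd = 3840 / fov                # horizontal 4K pixels spread over that FOV
print(f"FOV ~{fov:.0f} deg, ~{ppd:.0f} ppd")  # ~46 deg, ~84 ppd

# Horizontal pixels needed for 60-ppd "Retina" across a 120-degree headset FOV
print(60 * 120)  # 7200, i.e. ~7K per eye
```

This lands on ~84 ppd, i.e. the ~83 figure above within rounding.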
Is it possible they pivoted from a consumer device to a business device midstream and so did not do justice to the functionality? They probably should have waited to iron out the issues before releasing.
How good is the resolution? Can you already use the device as a monitor replacement? (I.e., coding behind a virtual monitor in an exotic location, while you're actually at home at your desk.)
It's just about usable as a monitor replacement, but it would be a big stretch to say it's "better". If you have even a cheap monitor available, you'd nearly always be better off actually using it. The scenarios where it's better are going to be specific use cases like:
- it's helpful to have it be gigantic - bigger than a real monitor could ever be
- you have a real need for more monitors than would be practical to buy or position in your work environment
- you will be highly mobile and moving around so much that it's going to be better to take a headset with you than try to get monitors everywhere
- you really would like to collaborate with someone else who also has a Quest Pro, in which case you can work together without being physically together
- you highly benefit from / enjoy working in alternative environments (eg: being highly focused, free of distractions, relaxed, surrounded by beautiful scenery etc)
I actually find that last point is the biggest reason I do it. Even when I'm surrounded by distractions, I can go into a completely calm virtual environment and get a crap load of work done.
Just adding my two cents here - I bought a quest pro hoping to use it as a monitor replacement while traveling for a couple of months. I found it pretty much unworkable after trying to make it work across multiple different virtual desktop apps.
You can get something up and running - but it feels akin to staring at a 1080p screen with a weird delay. I felt way more productive just looking at my tiny laptop screen.
Your mileage may vary, obviously. I really liked the idea of working in a distraction-free virtual environment, but I feel like the tech isn't there yet unless you're an enthusiast who's willing to put up with all of the strangeness.
Yes, in some settings you experience enough latency to be a problem; it's a good point. I was able to eliminate that, but I have a fairly optimal setup. It's definitely one of the holes in "take monitors everywhere you go": you also end up needing good wifi everywhere you go, shared between laptop and headset.
The last point is something I've been doing with a much lower resolution headset already. Having lecture recordings up really large with no distractions helps me to understand them and staying concentrated on them much more easily.
Resolution is roughly around 1800x1920 over 100° FOV. That means you get around 19 pixel-per-degree. Retina resolution is 60 pixel-per-degree. Acceptable text readability starts at around 30 pixel-per-degree.
So no, for most common uses this is nowhere near good enough. It's like having an old 720p LCD TV on your desk. It works for casual web browsing and movies, but far away from the image quality of a modern 4k monitor.
Nreal Air at around 49 pixel-per-degree is the closest thing to a monitor replacement you can currently find; it even has a plain HDMI input. It does, however, come with a lot of other shortcomings (no 6DOF tracking, monitor image not tracked, low FOV).
The pixel per degree of the Quest Pro isn't uniform. The peak ppd is 22. PPD for headsets typically refers to the peak ppd and not the average. The 49 ppd figure for the nreal air is a peak ppd measurement.
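The figures in this subthread are easy to sanity-check with the same pixels-per-degree arithmetic (resolution and FOV numbers are the rough ones quoted above):

```python
def ppd(h_pixels, h_fov_deg):
    """Average horizontal pixels per degree across the field of view."""
    return h_pixels / h_fov_deg

quest_pro = ppd(1800, 100)  # ~18 ppd average (peak is ~22)
print(f"Quest Pro: ~{quest_pro:.0f} ppd average")
print("text readability ~30 ppd, 'Retina' ~60 ppd")
```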
It is good enough that you can imagine the possibility of using it, but not good enough to actually use it like this (IMO/IME; there are stories here of people doing this).
I don’t think people fully grasp how much power and compute it takes to record and process video (me included). Let me know how long that Canon R5 can record continuously, and how many batteries it’ll go through for that recording time. It wasn’t that long ago that it was taxing for desktop computers to play a 4K video file.
With cameras the problem isn't the power or compute. It's the heat dissipation.
Cameras like the Sigma FP are built around the heat sink and have vents all around to dissipate the heat during long recording sessions. Most typical DSLRs simply have recording limits e.g. Canon R6 = 30 mins.
Pretty big issue for a device that's meant to be on your head for hours at a time.
That is a valid problem, but with MR passthrough there is also a compute issue. The cameras are not in the same location as the user's eyes: they have a wider parallax than the eyes and sit forward by several inches. There is no simple transformation to correct the video to the user's perspective. Instead, a 3D representation of the environment must be created, the camera imagery projected onto it, and the result rendered from the position of the user's eyes, all as fast and at as high a rate as possible. This explodes in compute with increasing resolution. The MQP is doing this with three cameras, color compositing, and without the aid of a depth camera.
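A minimal sketch of that reprojection step, assuming a per-pixel depth estimate and simple pinhole intrinsics (names, numbers, and the single-camera setup are illustrative; this is not Meta's actual pipeline, which also has to fill holes and fuse multiple cameras):

```python
import numpy as np

def reproject(depth, K_cam, K_eye, T_eye_from_cam):
    """Map each camera pixel to the virtual eye's image plane via depth.

    depth:           (H, W) metric depth per camera pixel
    K_cam, K_eye:    3x3 pinhole intrinsics for camera and virtual eye
    T_eye_from_cam:  4x4 rigid transform (camera frame -> eye frame)
    Returns (H, W, 2) pixel coordinates in the eye image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    # Unproject pixels to 3D points in the camera frame
    pts_cam = (np.linalg.inv(K_cam) @ pix[..., None])[..., 0] * depth[..., None]
    pts_h = np.concatenate([pts_cam, np.ones((H, W, 1))], axis=-1)
    # Rigid transform into the eye frame, then project with eye intrinsics
    pts_eye = (T_eye_from_cam @ pts_h[..., None])[..., 0][..., :3]
    proj = (K_eye @ pts_eye[..., None])[..., 0]
    return proj[..., :2] / proj[..., 2:3]
```

And this has to run per eye, per frame, at display rate, which is why the cost blows up with resolution.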
Well, that's not exactly right; they're two sides of the same coin. Twice the pixels means twice the power at almost every level: the pixel level, the image signal processor, the processor, and storage.
In addition to the recording limits on a DSLR, you might need to rest it for a good while if you're filming for a long time with too much light on a hot day. It shuts off when the sensor's too hot.
I’m not an AR/VR bull, but I think Apple’s position is extremely underestimated.
It’s almost like every Apple product has been developed and refined for the future AR/VR moment. Custom silicon. Fast camera HDR. LiDAR. People/scene/room/boundary detection. AirPod Max fit and finish. Spatial audio. 50 other things…
Dedicated Neural Processors on iPhones. Constant focus on improving battery life with awesome performance. Wearables that work best when using the iPhone as the main hub.
Apple is absolutely gearing up to dominate AR by having a sleek set of glasses that use your iPhone for the graphics rendering. And they'll succeed.
It's interesting in that context that they seem to have met with a much bigger challenge than they expected in actually delivering on their AR/VR product. Current indications are that it's going to be quite mind-blowing but, at the same time, hampered by compromises (wires, external battery packs, etc.).
They have a mouse with its charging port at the bottom. Their first Pencil iteration plugs into the bottom of the iPad like a weird antenna for charging. I could totally see them do ugly compromises in that area for a first iteration.
Having used some of Apple's existing AR stuff, I would believe they can ship it and figure out how to display the data on two nearly identical screens.