New report on Apple’s VR headset: 8K in each eye, potential $3k price tag (arstechnica.com)
306 points by CharlesW on Feb 4, 2021 | 455 comments



Bear in mind that the iPad was rumored to cost >$1000 but then launched at $499.

On the other hand, it might make sense to launch this as a pro workstation for the usual kinds of tasks: architects, designers and engineers visualizing and interacting with models directly in space.

If they can make that affordable and usable for pro designers at every level, (which $3500 clearly would be), they will have a hit. A consumer gaming device can come later.


Having done a fair bit of AR/VR work in medical and industrial settings, I've always thought the 'killer app' of VR is not in gaming.

It's this very weekend: The Super Bowl.

Well, not the Super Bowl precisely, but in experiences like sports or other time/space limited events. Sitting ring-side for the next big match. Or at the 50-yard line at the Super Bowl. Or at the finish line of the 100-m dash. All for, I dunno, $15? And the cost for the company is just rigging a google-maps-like camera ball and streaming the data out. Especially as the quarantine grinds onwards, a better form of escapism that AR/VR provides may have a big market for time/space limited events.


Ehh. I've owned a VR headset for a long time now, and I think many people underestimate how fundamentally isolating being in VR is. Humans are social creatures and unless you live by yourself and want to escape into VR, you're a lot more likely to enjoy watching the big game with friends and family so you can shoot the shit, and share the experience.

Don't get me wrong... VR is and can be a lot of fun, but I just don't see it being this next big thing that people seem to hope it will be. On the other hand, I think when someone solves AR, I suspect that may very well produce a seismic change across society with widespread adoption.


Sports are a special case as, arguably, they're mostly boring and their main value is the social context - cheering for players/teams, spectating together with family and friends. In contrast, the market for serious movies or single-player videogames suggests that there is a lot of demand for "alone time" experiences.

I think a big factor in lackluster adoption of VR is... space. VR games require space, which is expensive, and people seem to have less of it in recent years. On top of that, usable space disappears very quickly once you start a family, which cuts out a good chunk of the target audience.


Games do, you're right, but productivity work is much like sitting at a desk in a cubicle -- just with so much more you can do, even now. Sure, today there are a lot of caveats: on the hardware side you need to touch type, and transitioning between controllers and keyboard is suboptimal; on the software side it's still 99.9% game oriented, there's no native multitasking, and even little things like cut/copy/paste are a hassle. Today, your only option is some version of screen casting with all its multiple-window resolution/scaling pain. But once you get it working it is hard not to be amazed working on a 10-foot-tall Visio diagram, and by the time Apple rolls anything out all these problems will have been solved for all practical purposes.


>> Sports are a special case as, arguably, they're mostly boring

This only applies to American football, where you have on average 12 minutes of actual gameplay over a 3.5-hour match.

Soccer or basketball are much more dynamic, and therefore can be entertaining on their own, no social context required.


And rugby, rugby league, Aussie rules etc. are essentially aerobic physical contact sports, not anaerobic impact sports. These sports have relatively few stops, and certainly no time for advertisements outside of the half- or quarter-time breaks. And there are no bulky pads and helmets, and the physicality is just as dynamic as, if not more than, US football.

Cricket is like baseball, where it tends to be on in the background and you pay attention at each delivery/pitch. Cricket can last 5 days though...

As an outsider in the US I was struck by the crazy high percentage of time dedicated to advertisements - even to the extent of intentional breaks for that purpose.


Why the downvotes? This is a valid opinion.

As a former American football player I find that watching the game is not as exciting as playing it. From the outside, and if you don't know about all the micromanagement and stuff going on, American football looks rather "static".

Other sports are much more "obviously dynamic" and are therefore probably better suited to be watched through TV or VR.


I had to do a search for that [0] and surprisingly (for me, as someone who doesn't watch the sport except for snippets in movies) it's actually real. The downvotes may come from people who took that as an unrealistic exaggeration.

[0] https://www.marketwatch.com/story/there-were-just-1206-minut...


I’ve also never watched American football, so forgive me. This sounds incredibly dull. Can someone ELI5? What is going on for the rest of the game? Why do people watch?


A lot of the time in the second half of the game is intentionally wasted by the team with the lead.

A lot of the time throughout the game is wasted intentionally by game procedure, e.g. refs are setting up for the next play.

Of course there are varied reasons individuals have to watch, but the overwhelming majority of its market success can be ascribed to the tribal desire to see your team win.


To anyone else confused: ELI5 means "Explain (it) Like I'm Five (years old)." Now I'll try.

Caveats: I don't watch American football on TV, but I watched it some when I was a teenager. I mostly remember how to play, and while I don't think it'd be safe to do it at my age, I do like to play catch with an American football whenever I get the chance. I wouldn't mind watching a game now, but only live and very close to the field.

The thing is there's a lot of strategy, and each play is a meticulously, often spontaneously, planned action by the Offense (team with the ball) which the Defense does its best to disrupt.

That disruption is almost always at least partially successful, forcing improvisation on both sides. In particular the quarterback (guy who throws the ball) often has to rethink the plan in real time as people chase after him -- he really really doesn't want to be tackled, for his own protection as well as for the game -- and somehow make this work with people who are running around at some distance from him. When it works it can be awe-inspiring and utterly unexpected.

These things -- the strategy, the plan on the field, the disruption, the improvisation, the danger -- are all directly and intimately connected to the physical and mental abilities of the players, which are routinely pushed to their limits, sometimes to the point of causing permanent damage.

This is all the stuff of human drama, and as in most human dramas the decisive physical events don't take up that much of the time.

Another thing I'd add is that compared to a lot of other sports American Football offers an interesting combination of extreme chaos (the tackle, etc) and extreme precision (the long-distance pass to the wide receiver).

Soccer by contrast never has that much precision and is much more about mastering the ever-present chaos, most of it very slow paced; basketball can go long stretches without getting very chaotic at all despite things constantly happening. (I know some people will take umbrage here, but I'm ready to defend these points!)

I can't think of another sport that so closely tracks an idealized version of warfare and group combat. Though I'd love to see other examples.

All that said, familiarity has a lot to do with how much you can enjoy watching a lot of sports. If you grew up surrounded by the culture of Sport X there's probably stuff in it you will like to watch.

I like to watch squash, because I played it enough to understand what's going on. I have a friend who watches hours of snooker online. Even golf draws an audience. Even cricket!


> I can't think of another sport that so closely tracks an idealized version of warfare and group combat.

I've heard the term "it's chess but played with fridges" before and I think that fits.

Besides that: great explanation.


An NFL game consists of a series of designed plays, and the 12 minutes is just the execution time. In between plays, there are formation changes and setup, so fans can guess at what the offense is planning and how the defense is reacting. (That happens in other sports too, but here there's a formal beginning and end to each play.) Then there are also breaks between quarters and after changes of possession, where genuinely nothing is going on.


Also cricket and baseball quite frankly.


> Sports are a special case as, arguably, they're mostly boring and their main value is the social context

I think this is a bit of a fringe opinion. I can certainly see how people appreciate the social aspect of sports events without necessarily appreciating the sports. But I don't think I wake up in the middle of the night to watch F1 alone for the social benefit.


Yup, I've been watching F1 for the last 20 years, mostly alone. There are boring races, but those would not be fun even if I watched them with somebody else.


Just remember. If you accidentally fall asleep, Lewis will somehow lose. I nodded off during Monza this year and then the moment I was asleep all hell broke loose.


Tim Cook said the same thing about VR in 2017. He didn't like that it was isolating, and was much more excited about the potential of AR.

https://www.google.com/amp/s/www.bbc.com/news/amp/technology...


I totally agree; VR is very isolating, while AR is all about enhancing and augmenting one's interface with the real world. There is no contest: the real world will win, simply because the real world is sophisticated to start with, while any place in VR is only what you see, and that's a façade.


VR is isolating because it's the blank canvas precursor to AR. AR is just a datastream for VR all sexed up for the media. What matters are the graphics engines/OSes. VR Chat is not isolating. Is it VR or AR to change your camera scale and 3D-zoom into a reconstructed electron microscope scene and look around? Hugs will increase in value, surely..


VRChat with positional audio is superior to just talking on the phone or through a video conference, I'd say.


I was never arguing that VRChat/social apps are worse than other remote social experiences (phone call / video conference, etc.). My point is that it is very isolating and impossible to share the experience in any meaningful way with folks around you, or just be present at all while doing something in VR.

Like I can play Mario on my TV while sharing that experience with my kids or chatting with my wife about things, while still being around my family in a shared living space. VR just isn't cut out for that. I do use it for sim-racing (which is pretty much a perfect fit for VR headsets) which is anyway a situation where you need to be distraction free and kinda isolated from everyone else so it works well in those types of scenarios.


Counterpoint: The same argument is true of phones, and that has largely not stopped people from being regularly severely distracted by them even in social settings.


> My point is that it is very isolating and impossible to share the experience in any meaningful way with folks around you

There are an awfully large number of people who are currently literally isolated and unable to share any experiences with anyone.


Even sans pandemic, my family all lives several states away (drivable), and my extended family are all even further (flying distance). In-person connections are great, but it’s not like I can just pop over for dinner after work.

I haven’t gotten any of them to join me on the VR train, but as it grows I think that’s a big use case for it. You can hang out together with someone in a VR space in a way that video chat doesn’t capture.


For what it's worth, the VR meetup in my city is hosted on Zoom and not VR.


You could be at multiple Super Bowl parties/other events at once in VR. I know it's not exactly the same as being physically proximate, but it offers some advantages that being there IRL can't offer. You and all your friends in the lower stands at the halfway marker all game. Maybe you want to all "sit" wherever the ball is as it moves up or down the field of play. Maybe you have a specific spot in the stadium you like to sit at, and you don't have to fight with others for that spot. I don't know, it sounds kind of interesting for some things. I'd still personally rather be at a concert in person, I guess.


I'm not arguing against VR for shared social experiences over the internet with remote participants. I actually do think that it can be great for that, especially once technology catches up some more. My point though is that if you already have family or friends in your home, then it is pretty much impossible to have any sort of quality social interactions in VR and you are much better off enjoying things IRL, even if it isn't as immersive as what a VR experience could produce.


>you're a lot more likely to enjoy watching the big game with friends and family

This is more a comment on the state of VR software than the technology itself.

With the Apple product described, it's pretty easy to imagine software that allows you to have a "virtual box seat" where you can get online to watch the game with your family/friends, even if you aren't all in the same place, and use the tech to both watch the game and talk to each other when nothing's going on with plays.

Alternatively, you can use the cameras in the app to merge your immediate surroundings with the game feed, so you see the people beside you in a part of the picture, or in the aforementioned "virtual" box seats. You can watch the game as if you're on the side lines (or on the ball) and see people around you react to plays, too. With eye tracking tech, the headset knows when you're looking at the game vs. your surroundings and can react accordingly.

I think Apple producing a VR headset would be a very good thing for the market. Like or hate Apple as a company, they tend to set the standard for industrial design and UI technology, and their products drive other products' features and uses.


> Humans are social creatures

Yes, humans evolved to live in groups. However, we did not evolve to live in huge cities, surrounded by millions of people and tall concrete boxes. That would scare the soul out of our ancestors. My point is that we can't always judge technology by what's "natural" to us. The brain is extremely flexible, and children who grew up with VR might just want to live in it, instead of the "real" world.


> However, we did not evolve to live in huge cities, surrounded by millions of people and tall concrete boxes.

Glancing around it seems quite apparent that we’ve done exactly that.


We adjusted our culture, but our brains are likely the same as they were before the mega-cities came into existence. My point is that we shouldn't necessarily judge technology by what's "natural" to us, i.e. "VR isn't going to fly because we're social creatures".


"Evolved == went on" to live in big cities vs "evolved == adapted as an evolutionary process" to live in big cities, etc, are two very different things.


Evolution adapts to present conditions, not future conditions, and on a time scale of millennia. It chanced into a feature set that let us change our environment much faster than that. We're still adapted to the lifestyle we had many thousands of years ago.


>Yes, humans evolved to live in groups. However, we did not evolve to live in huge cities, surrounded by millions of people and tall concrete boxes. That would scare the soul out of our ancestors.

Well, it gives us stress, depression, suicides, the loneliness epidemic, and other such day-to-day suffering today too (plus obesity, poor physique, etc.).

Living in a better environment, without the "huge cities, surrounded by millions of people and tall concrete boxes" but keeping the modern advancements in medicine, sanitation, and so on, we would probably live 20+ years more...


Just curious - have you ever lived in a rural area? There's quite a lot of desperation, obesity, addiction, drinking, etc. in rural areas too.


I agree about it not being the next big thing. But, for some people who are extremely introverted, it can be the best thing ever. I think it has a lot of therapeutic use, although underdeveloped.

I find it to be disorienting, relaxing, and exhilarating, like a drug. To me, it gives the feeling of nitrous oxide without the strange nitrous sensation.


I started in, and then got out of, VR due to the complete lack of anyone grasping how isolating and empty it is without other people, and those other people really ought to look non-generic. I'm an "original 3D graphics researcher" from the 80's, and have created 3D production pipelines as one of my career specialties. VR needs an entire "face lift" where it includes people, at a moderately personal appearance level. When girls can look like themselves, and it requires less effort than getting dressed each day, while also presenting their actual facial expressions in real time, they will use VR, if for nothing else, as a gossip (social media) platform. But my main point is: until a person can go into VR and be themselves, look and act like themselves, VR is alien.


I totally agree, and that is probably the main reason I don't use my Rift more - it takes me to another location.

> unless you live by yourself

That is a very important point. I think the social aspect of VR (especially with covid) for people who are not together might be the angle that Apple is looking at.


Presumably, this isolation would be less of a concern with mixed reality. Just place “the game” in a shared middle of the room and blend your friends into the stands.


If you do live alone, do you feel vulnerable using VR? Not being remotely aware of surroundings sounds like it has the potential to be disconcerting.


When I’m playing in VR and no one else is at home, I feel no more vulnerable than if I had my nose buried in a book. I do keep the door locked and the blinds down, though.


It really does depend on the app. I was once playing the game Hotdogs Horseshoes and Handgrenades, which has this western level that is wide open. But I really hated it; it felt really isolated. The echoes only made things worse.

But Onward VR feels really social.


For people who are really sports fans (which might be a little outside the Hacker News audience, I know), not having your friends and family talking during the game might be a pro, not a con.


What about if you could sit next to your friends in VR to share the excitement even if they're physically remote from you?


This is exactly why 3D movies have been a complete failure.


100% this. Why not do full audio/video capture of the best seat in the house for every sporting event? I am very bearish on VR in general, but as a sports fan this is something I would use about 480 hours per season. Apple actually acquired NextVR, which was doing this sort of thing:

https://9to5mac.com/2020/05/14/apple-nextr-ar-headset/


Even the best seat in the house is not the best seat when the action is at the other end of the playfield. I've seen some sporting events where they have multiple cameras set up to always get the best view of the action. They have a man who watches all the camera feeds and switches to whichever one is offering the best view of the action. It's even better than the best seat in the house.


Back in the 90's, during the first attempt at mainstream VR, some company surrounded the World Cup Finals with 16 cameras, and they published an application that was the World Cup Finals as a 3D soccer field with 3D players, and a virtual camera that could be placed and moved anywhere during gameplay. The company was one of the video card manufacturers of the time, promoting their large and bulky VR headset. This was around '93-94?


I don't really understand your argument. Video game revenue is higher than movies and NA sports combined. That said, there's enough room for everything.

>rigging a google-maps-like camera ball

This is also not really how it works at all. Volumetric capture is a lot more complicated than 3D TV, which is considered a dud.


It's hard to drink beer and eat snacks when you are wearing a headset and are blind to the world around you.


I was surprised how good the video passthrough on the Oculus Quest is.


They've been trying to do this, but they always put the 360 camera somewhere really dumb, like next to the goal (this is soccer I've tried it with) and at just normal head height, so you can't actually see the game at all.

I agree if they put the camera somewhere cool, like right above the pitch kind of like god-view, that would be awesome.

Not sure I'd pay $3,500 for it though....


> they always put the 360 camera somewhere really dumb, like next to the goal

Human stereo vision doesn't work at long distances, because of the geometry - the exact distance depends on how good your eyesight is, but somewhere around 18 feet [1] the brain starts using other cues because the difference between what your left and right eyes see is so minuscule.

This all happens unconsciously so you don't notice - but when you estimate the speed of a car 100 meters away, you actually start comparing its motion to the road markings, using your knowledge of about how big a car is, and so on.

So a bird's eye view doesn't really benefit from stereo vision. I suspect they like to put in those weird close-up shots to make the people paying $$$$ for 3D notice the 3D effect and feel they've got their money's worth.

[1] http://scecinfo.usc.edu/geowall/stereohow.html
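To put rough numbers on the geometry: the vergence angle between the two eyes' lines of sight shrinks roughly as 1/distance. A minimal sketch, assuming a typical ~6.5 cm interpupillary distance (a value I'm supplying for illustration, not one from the comment or the link):

```python
import math

# Vergence angle (in arcminutes) when fixating an object at a given
# distance, assuming a typical ~6.5 cm interpupillary distance.
IPD_M = 0.065  # interpupillary distance in metres (assumed value)

def vergence_arcmin(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight at distance_m metres."""
    angle_rad = 2 * math.atan((IPD_M / 2) / distance_m)
    return math.degrees(angle_rad) * 60  # degrees -> arcminutes

for d in (0.5, 2, 6, 20, 100):
    print(f"{d:>5} m: {vergence_arcmin(d):6.1f} arcmin")
```

At ~6 m (about 18 feet) the angle is already down to roughly half a degree, so the left and right images are nearly identical and monocular cues take over.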


I wouldn’t really care about stereo vision. I just want to watch the game from a great position.

Also, I've tried some experiments with VR, looking down over a race track with a virtual race going on, and it's totally awesome - but that was in full 3D, so more exciting I guess.


Yeah, they should just put the cameras on the players' helmets.. (not sure if serious).


They could have tens or hundreds of cameras, and reconstruct what a virtual cameraman moving on the field with the action would record.

I think such a virtual camera is technically almost possible. Getting a smoothly moving camera image that doesn't get distracted by ball-like objects (e.g. a bald referee, as happened in soccer recently) and tunes zoom levels to show just enough action is probably the hardest part.


Intel did this for American football, but it looks like it needed some serious processing, so maybe not possible live yet.

Would probably also need some kind of remote GPU too like Google Stadia if you’re going to get any kind of real-life picture quality.

https://www.intel.co.uk/content/www/uk/en/sports/technology/...


Haha, maybe I would pay 3500 for that...


Yes, no idea why this isn’t already happening


It is. The NBA is partnered with Oculus, and games can be viewed live in 360 video.


Apple purchased NextVR last year. They had previously been screening NBA matches (and theatre/comedy) on the Quest. They were one of my favourite apps..

https://www.theverge.com/2020/5/14/21211254/apple-confirms-n...


The resolution on today's consumer headsets still isn't there to provide a good experience (maybe https://varjo.com/ can work, but it's not for consumers). The Oculus Quest also lasts only 2-3 hours on a full charge. The main reason, at least for me, is that it gets quite uncomfortable after wearing it for long (~30 min) due to weight and sweat.


IMO the resolution is there so long as you have one of the latest headsets and are viewing a VR180 8K feed (my reference is the Quest 2). Live casting of an 8K feed probably is the limiting factor here (not the headset hardware).


It's 8K per eye, and to meet latency expectations right now you'd need to send the whole 180/360 video with enough resolution through its entirety so that there are always enough pixels within the spherical polygon the eye frustums create. Far more than an 8K feed, and in particular I don't know of any cameras capturing at that resolution?

360 video is also quite nauseating for people as it has no positional component (e.g. moving your head forward brings the scene with it). What we really want are streaming light fields.
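A back-of-the-envelope calculation illustrates the gap. The panel width and field of view below are assumed round numbers for illustration, not specs from the article:

```python
# How wide must a 360-degree equirectangular video be to match a
# headset's angular resolution everywhere you look?
PANEL_WIDTH_PX = 7680   # assumed "8K" horizontal pixels per eye
FOV_DEG = 100           # assumed horizontal field of view of the headset

pixels_per_degree = PANEL_WIDTH_PX / FOV_DEG       # ~77 px/deg
equirect_width = pixels_per_degree * 360           # full panorama width
equirect_height = pixels_per_degree * 180          # full panorama height

print(f"{pixels_per_degree:.0f} px/deg")
print(f"360 video needed: {equirect_width:.0f} x {equirect_height:.0f} px")
```

Under those assumptions the panorama comes out over 27,000 pixels wide, far beyond a single 7680-wide "8K" stream, which is why viewport-adaptive or tiled streaming gets proposed for this kind of content.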


VR180 6-8K video at 90hz is such a visible improvement compared with lower quality formats on Quest 2. But there is so little content available at high resolutions and high frame rates. I just hope this will improve (at least slowly) in time.


>The resolution on today's consumer headsets still isn't there to provide good experience (maybe https://varjo.com/ can work but it's not for consumers).

I think the display hardware is good enough, but I've only been able to get that quality by manually copying over 8k video files and playing them on the device itself. Not sure why they can't match this quality when streaming...


>experiences like sports or other time/space limited event

How would streaming work with a custom point/angle of view? Can the participant walk, move their head, etc.? Unless all the possible cameras are streamed AND processed locally, the latency would destroy the experience.


VR is very immersive but multiple telephoto camera angles provide much better views of concerts and sporting events.


You could probably take it a step further. Use lots of cameras all around the stadium from various angles then use machine learning or whatever to recreate the game in a 3d world. Then, let people move and fly around through that world. Get up close to the players, fly anywhere you want.


Add the ability to watch the game with all your friends next to you in VR, sharing the excitement just like you would if you were sitting next to each other in the stadium, and that's a killer app IMO. I think the Sky Sports app on Oculus is like that, but I haven't tested it.


To put it another way, the killer app had better not be games if Apple is going to succeed with a $3K headset. Apple, a proprietary Apple processor, and $3K are all things that tend to make game developers run far, far, away.


The problem with this is that it’s not VR though. It’s 3D from a fixed point.

You can't look around unless the camera is mounted on motors to allow for that, and even then you'd need to be the only viewer; otherwise, whose head does it track?


Why not do multiple cameras and broadcast each of the feeds? Then it's up to the software for each user to do head tracking.


Even better: you're in an IMAX movie, but nobody is chewing nachos.


I wonder if the killer feature for AR/VR is as simple and unsexy as replacing my ridiculous array of massive monitors.


Sounds like it would avoid the risk of motion sickness too, as your movement in VR would reflect movement in real world.


> A consumer gaming device can come later.

They had a partnership with Valve back in 2017 for Vive support on macOS... Just three years later it was canceled. (There were actually more VR games running on Linux at that point than on macOS, thanks to Proton.) I'm not expecting Apple to try to enter the VR gaming space again anytime soon.


There are more games on the iOS App Store than exist on Steam.

I don't think Apple needs Valve or Steam's support to make their own VR gaming platform, just as they didn't need their support to make iOS a larger app store for games than Steam is.


I don’t think one can really compare the games on iOS to those on Steam.


No, but you also couldn’t compare games on prior mobile devices to those which iOS enabled.

I think the more appropriate thing to consider is that Apple has a lot of developers familiar with its platform.


The Oculus Quest has shown that a lot of people are very happy playing 'iOS' quality games on VR.


The Oculus Quest can also be used to play PCVR games.


> There are more games on the iOS app store than exist on steam.

What's the figure when you compute the total engineering-hours into the games, though?


I see what you're getting at, but I think you probably want to look at total engineering hours per game if you want to get there. There were about 45,000 games on Steam at the end of 2020, based on what I've been able to pick up from news reports and Statista; there are over 950,000 games for iOS. Even presuming the average good iOS game takes fewer engineering hours and presuming there's more quick-buck cookie-cutter crap on iOS, there's almost certainly more engineering hours spent there than there is on any other platform.

But, I think you're still on to something. The biggest strike against Apple in this space is the same strike they've always had: they've never seemed to take gaming truly seriously at a corporate level, by which I mostly mean courting (and spending money on) AAA-class developers. Maybe this will change, but they have a very long track record of snatching defeat from the jaws of victory in this area.


I don't agree. 99% of games on iOS are cookie-cutter or trivially small. I'd wager the average engineering hours put into a Steam game is >100x the hours put into an average iOS game.
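For what it's worth, the break-even multiplier is easy to compute from the counts quoted upthread (~45,000 Steam games vs. ~950,000 iOS games); note the 100x figure is this comment's wager, not data:

```python
# Total engineering hours = (number of games) x (average hours per game).
# Using the catalogue sizes quoted upthread, find how many times more hours
# the average Steam game must take for Steam's total to exceed iOS's total.
steam_games = 45_000    # Steam catalogue size quoted upthread (end of 2020)
ios_games = 950_000     # iOS catalogue size quoted upthread

break_even_ratio = ios_games / steam_games
print(f"break-even multiplier: {break_even_ratio:.1f}x")  # ~21x

# Under this comment's wager of a >100x multiplier, Steam comes out ahead
# (totals measured in units of "average iOS game hours"):
steam_total = steam_games * 100
ios_total = ios_games * 1
print(steam_total > ios_total)
```

So the two comments really disagree only on whether the true multiplier is above or below roughly 21x.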


A mobile device strikes me as different here - it's more consumer centric, where VR right now at least is more hobbyist/professional centric. I think they could get away with the higher price.

I'll add that when the iPhone came out it was way, way more than its competition. It was effectively the first $100/month (with 2-year contract) phone bill, compared to other devices where $35/month was common.


Is there any indication that this will be a mobile device?

I haven’t seen any.


It will be standalone. Probably more powerful than Snapdragon XR2 powered devices like Quest 2.


Microsoft's HoloLens 2 is $3.5k, available now for developers.


We've had ours on back order for 18 months and counting. It is technically out there, but hard to get for companies.


Maybe try ordering from insight.com? Or check the hololens subreddit, apparently there are a number of companies which can get one out to customers within a few days.


Right - I don’t think Apple is likely to release a ‘developers’ system.

I think it would be aimed at actual designers and engineers etc.

Presumably they would partner with some key developers like Autodesk to have some apps ready at launch time.


Isn't the HoloLens 2 significantly lower resolution? You can't really have a virtual workspace unless the resolution is incredibly high.


HoloLens 2 is not a virtual reality headset. It's a wearable HUD and UI (since you can control it with hand gestures). It's an industrial inspection and automation tool - and for those use cases the users love it. For offerings in this space built around HoloLens 2 see for example https://fieldtech.trimble.com/en/products/mixed-reality/trim...

The virtual workspace for HoloLens is the physical world. The view area is a bit small. But the idea in HL is not to create a virtual world, but to overlay information onto the real world.


Even better, IMHO based on anecdotal personal experience


Yeah, I've used a HoloLens for a building walkthrough (showing a new fitout), and whilst it was a cool experience, the resolution was really low and the field of view was tiny - just a small square in each eye.

The tracking latency was on par with other headsets, which I consider quite good.

Disclaimer: I work for MSFT, opinions my own, yadda yadda yadda.


Apple haven't quite convinced me they're serious about being in the pro design segment. They seemed to just quietly forget about graphics and video professionals, focusing on iOS and the consumer side instead. Sure, there's a fresh mac pro lineup now, but is it going to be another 6 years until the next gen?


What would they have done differently if they hadn’t forgotten about the design segment?


The switch to "Apple Silicon" is about cost savings so they can further ignore the Mac, because the future of the company is in services revenue.


> If they can make that affordable and usable for pro designers at every level, (which $3500 clearly would be)

I don’t know in what world would that be affordable.

Maybe if companies buy it for their employees which makes this an enterprise product like the Mac Pro or software like Logic Pro.

It would be insane of Apple to invest billions in development to end up with a product that is not targeted for regular consumers.


> enterprise product like the Mac Pro or software like Logic Pro.

How is Logic Pro an enterprise product? It's one of the most affordable pro-grade DAWs.

Also, I've seen a lot of non-enterprise Mac Pros used by, to give just /one/ example, freelance video editors.


It would be affordable if you are using it for your job and actually getting paid.

Also - nobody is saying there won’t be a consumer device.

Only that they don’t have to start there to be successful.


I mean, the HoloLens is the same price.


If Apple’s headset is like a Hololens that would be a turn off honestly. I expect more from them.


> they will have a hit.

This right here is Appleism.

And why wouldn't Occulus, HTC, Magic leap or big boy Microsoft have a hit?


Why is the AirPod product line a near-Fortune 500 company sized hit on its own? Why did no other company do this?

Why is the Apple Watch dominating its segment?

Why have Android tablet sales fallen flat compared to Apple's?

Why did the iPhone destroy the prospects of Windows Phone and Blackberry and force Android to completely revamp its UI prior to release?

Why is the long in the tooth Macintosh line having a sales renaissance?

Apple TV is about the only recent vertical I can think of where Apple hasn't outpaced its competitors in recent years, and even there its UI is appreciably better than the competition, with pricing and an incredibly stupid remote being its downfall.

I think you can answer your own question by answering mine.


You mean why are people, of a specific demographic, buying luxury items from Apple Inc?

People are buying the luxury brand, like people who buy a Mercedes or Ferrari, not the functional qualities of a product. AirPods are good, my Bose headphones are phenomenal. At work I use an iMac which is great, but my own PC is a beast.

Appleism is the cult of brand.


Because they are focusing on games.


> Bear in mind that the iPad was rumored to cost >$1000 but then launched at $499.

The early iPad days have been replaced by the Pro suffix. Or even worse: AirPods Pro Max.

Apple’s product line these days has a lot of fluctuation in its pricing (on purpose), and they seem to have customers at both ends.


Yet apple's latest headphones are quite pricey. And the mac pro.


There's no way a Mac could render at that resolution when we have 600 dollar 2k headsets severely stressing top of the line gpus.


The article mentions rendering only the fovea in high resolution. The key advantage is it does not need to render all 33 Mpx equally well.
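For a rough sense of the savings, here's a sketch with purely illustrative numbers I'm assuming (a small foveal region shaded at full resolution, the periphery at 1/4 linear resolution); actual foveated-rendering pipelines vary:

```python
# Illustrative sketch of foveated-rendering savings. The 5% foveal
# area and 1/4-linear-resolution periphery are assumptions for this
# example, not figures from the article.
total_px = 7680 * 4320            # one "8K" eye
foveal_fraction = 0.05            # assumed: central 5% of image area at full res
foveal_px = total_px * foveal_fraction
peripheral_px = total_px * (1 - foveal_fraction) / 16   # 1/4 res in each axis
shaded_px = foveal_px + peripheral_px

print(shaded_px / total_px)       # ~0.109 -> roughly 9x fewer pixels shaded
```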


and as mentioned in the article it’s a common technique that all vr headsets are already doing


Really? The article says “well-known method”, not “all vr headsets already doing”. From what I can find, DeepFovea and others were only work in progress (summer 2020) at best, in contrast to mainstream tech. Is there more info on that?


I don't think most support foveated rendering, currently.

But there are consumer ones available / becoming available:

* HP Reverb G2 Omnicept Edition

* HTC Vive Pro Eye


Just because they track eyes doesn't mean they're gonna use it for rendering. That's probably for input.


Directly from the HP product page:

"NVIDIA VR Ready Quadro or GeForce Turing based GPU required for foveated rendering. For developers, Unity foveated rendering plug-in and run-time are also required and available from the Unity Store or HP Developer Portal"


Nobody does that and it's not possible due to GPU latency.

Eyeballs rotate very fast, up to 1000°/s. The fovea has just under 1° of high resolution. You'd need to render at 1000 FPS with 1 frame of latency to keep up.

GPUs can't do that. They evolved for decades optimizing for throughput at 60Hz, not latency at 1000Hz.
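As a quick sanity check of that 1000 FPS figure, using only the numbers quoted in the comment above:

```python
# Back-of-envelope check of the frame rate claim, using the figures
# from the comment (peak saccade speed ~1000 deg/s, foveal
# high-acuity region ~1 deg wide).
saccade_speed = 1000.0   # degrees per second, peak eye rotation
fovea_width = 1.0        # degrees of high-resolution vision

# Time for the gaze to sweep across one fovea-width:
dwell = fovea_width / saccade_speed      # = 0.001 s

# To refresh the high-detail region before the gaze has moved a full
# fovea-width, you'd need roughly one frame per dwell interval:
required_fps = 1.0 / dwell
print(required_fps)  # 1000.0
```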


IIRC snappy eye movement makes us essentially blind not only during the movement itself, but also for a brief time after the eye settles.

Foveated rendering (where IR cameras track where the pupils are looking, passing this info to the PC, and the frames are rendered with high detail only in that spot) was already demonstrated to work well and fast enough to not be noticeable.

It is not present in any consumer VR headset, and the software side is also not yet plug&play.


Would this enable me to look off to the edge of my headset and not have it be blurry? It's been rough training myself to turn my head and made me realize how much I rely on that ability for vision in general.


Unfortunately no, that's an effect of lens distortion.


I could see this as a virtual office, as some of the comments speculate.

If I had a set of AR glasses that projected what appeared to be an 8K monitor on top of my dining room table, and integrated with my MacBook Pro for input / output, and that had batteries to last a workday, I’d pay $2K/$3K. Even more if it worked well at brightness levels I’d have in my backyard.

Never mind gaming, mobile high-quality virtual office is good enough.


If the screens in the goggles themselves are 8k, the "monitor" is only going to be a small fraction of that unless you have your face up really close. (Although you could effectively have much higher than 8k when you do stick your face right up next to it.)


Yeah the idea of having a better virtual office really isn't as great as it seems when you try it out. Basically extra latency to not see all of the stuff on your desk, and the entire world skips frames when you do something that maxes out your CPU or GPU. Not to mention the aforementioned image-in-an-image quality problems.


I think this can be solved. This may surprise many, but I use virtual reality for all of my heavy reading, without any issue whatsoever. It basically feels like I am reading on a movie screen.

While my use case is different than most, I personally have a print-related disability (severe convergence insufficiency), which requires the use of assistive technology. Anyways, the app I use is called Retinopsy Look VR, which is available in Viveport. It is intended for people with visual impairments and is super adjustable. I think the adjustability is key. To augment my reading experience, I use a screen reader called Kurzweil 3000, which reads the text aloud to me, with the sentence being read highlighted in yellow and the word being read aloud highlighted in green, simultaneously.

I use a Valve Index headset with prescription lenses that are adapted to the headset (I got them from VR Optician). I also use a laptop with an i9 10th gen processor, a 2080 Super video card, 32 GB RAM, and SSDs as hard drives. The only thing I do not like about my setup is the fact that it is not wireless and also the fact that base stations (“lighthouses”) are required.


I think specialized use cases are an absolute hit on VR devices. Where else can you get consumer-priced hardware on which you can build a completely custom, user-tuned version without needing custom hardware? That being said, I never did manage to find a customized version that was more efficient for the common use case; after all, monitors/books/phone screens are already the customized, optimized consumer hardware for most people.


Agreed. I am actually working on making an app that effectively does all of this natively, that is extremely customizable, for both Oculus and SteamVR systems.

You may want to check out SeeingVRToolkit, which was made by Microsoft to make VR accessible to the visually impaired. Retinopsy Look VR utilizes this.

See: https://github.com/microsoft/SeeingVRtoolkit

With Retinopsy VR, reading dense and long material in VR is extremely easy and immersive. I find it far more enjoyable than reading physical books, even without using a screenreader.

I also have ADHD, and reading in VR is far more immersive (and especially with a screen reader utilizing multimodal highlighting). I can learn a lot better because the text is right in my face, being read aloud to me, with changing colors highlighted to the audio, which I cannot escape and drift off from in VR.

Anyways, I do all my coding in VR using Retinopsy VR. It allows me to really buckle down and focus. I sit on the couch reclined and I have my keyboard and mouse on a very stable lap desk, the Couchmaster Cycon 2. I also have headset strap stabilizers/modifiers from Studioform Creative so I can use my headset for several hours.


This was amazing to read thank you for taking the time to discuss all of these VR tools and how they are used. I have a friend in a similar situation who is getting an Oculus Quest 2 soon and this information will be very useful to them.

Best of luck with the app development, it sounds fantastic!


You are so welcome!

You have to read this article, which was written by somebody who is visually impaired. He explains why the experience of VR is so much better for people with visual impairments, in so many ways, compared to any other assistive technologies. He states why it is so much more helpful: https://www.alphr.com/virtual-reality/1008932/vr-vision-loss...

There is a visually impaired user who uses the Oculus Quest and Oculus Rift; they are the first person in that thread. You may want to investigate their posts: https://forums.oculusvr.com/community/discussion/86303/low-v...

Here are all of the big accessibility groups that I know of, for VR: https://udl.berkeley.edu/accessibility/xr-accessibility

There is also a very large (500+ person) VR accessibility group: https://www.meetup.com/a11yvr/

This is an accessibility design podcast for VR: https://xrforlearning.io/podcast/designing-accessibility-int...

This is the Microsoft SeeingVR website, with the open source code and YouTube video: https://www.microsoft.com/en-us/research/project/seeingvr/

I hope this helps. I have been looking into this recently, and I will need all of these resources for my project.


Those are terrific resources, thank you!


See mirage[0] - this solves the "can't see your desk" problem.

A more recent option seems to be ThinkReality (by Lenovo), which prominently features an 'extra screens' application on its home page[1].

0: https://www.mirage-app.com/

1: https://www.lenovo.com/us/en/thinkrealitya3


I think the software-side answers are there: just pump more high-quality information in faster and it's easily solved. The problem is it just punts it to hardware. Both the Lenovo product and HoloLens have abysmal FOVs and resolution, but you can at least look at the real world great. OTOH normal headsets can display the virtual world in great detail and double the FOV, but not only is it taxing on the hardware to do so, you need to bring in the real world instead. In either scenario you get pretty bad tradeoffs at this point, and I've tried both approaches for quite a while.


Would it need to display and confine your workspace to a "monitor"? Couldn't it show floating windows of some kind in 3D space?


I personally would like my 2d rectangles to be in a limited 2d rectangle area for organisation and not cluttered all over the 3d space.


Why? I mostly have zero interest in being immersed in meetings. Most people aren't IRL most of the time. There's potentially value where people need to be on-site in other contexts such as repairing equipment.


Laptop + magical AR/VR headset means you can always have your 3 monitor workspace, even at the coffee shop. Plus, you get fully private screens.


That sounds utterly unappealing. If I'm in a coffeeshop I have zero reason to want to cut off from the outside world. Otherwise I wouldn't be in a coffeeshop. Most of the reason I'd work in a coffeeshop is for an ambient social vibe. And for many things I do a 13" laptop screen is fine. (And I probably wouldn't feel comfortable being utterly cut off from the environment in an urban space.)


You should look into how the Hololens is able to display floating windows. You're not cut off any more than the physically equivalent monitor would obscure.

A full face headset would probably give off a socially isolating presence, at least today, though.


The obvious use case is on an airplane (or bus or train).


I definitely do not want a person flailing hands in their VR world, while sitting near me on an airplane/bus/whatever.


Hmmm... so even at 8K per lens, the floating display(s) is(are) how much of that 8K?

So we're back to VGA resolution on your "virtual display".

Yeah, no thanks.


The FOV of these devices is something like 100 degrees. There's not a lot of wasted space. Who knows what this theoretical Apple device is like though.


The FOV of AR headsets like Hololens is very far from 100°.


If the FoV is smaller, then that drives home that the pixels per degree are even higher!

This is a closed face VR headset with presumably more traditional screens and lenses and unlike a Hololens or MagicLeap. The device sounds closer to VR headsets like the Vive and Quest, which are mostly around 90-110 degrees.


This is described as a joint AR/VR device. I don’t expect we’d be totally cut off, probably more like an overlay of computery stuff while still being able to see the real world behind it. Sort of like a virtual screen.


Correct, it sounds like they are getting the AR using a 'passthrough' mode stitched together from the multiple cameras and some sensor fusion to give you an eye-level perspective live video feed they can manipulate with all manner of neat effects.

Wearing an HMD is always going to disconnect your attention from where you are and teleport it to someplace not shared by those around you, there is no way around that and it's not a bug either. AR just makes that separation a little fuzzier.


The real challenge is going to be some way to not look like a dork while using it in public. If I had to have faith in anyone to design something that could square that circle, it would be Apple (and maybe Sony). But it's a tall order.

Of course, inputs are weird too. Would it need to come with a pocketable keyboard or something?


It might be better with AR, where you still get the ambiance but with virtual monitors.


So moving your head a lot? You would look like a giant weirdo, right?


Pretty sure the giant headset is a dead giveaway. But just like airpod stems, it'll be a status symbol if Apple deems it so.


People 50 years ago would have said that about people who aimlessly slide their thumb up and down a glass screen on a device the size of a deck of cards.


50 years ago people were busy staring at slices of dead trees while experiencing detailed hallucinations.


(I still say that)


I have no idea how that works with AR overlaying monitors on a physical worldview. So you're writing something overlaid on the view around the coffeeshop? The idea of AR is more to give a HUD that provides information about what you're looking at.


AR is commonly used to insert solid objects into the real world. See Pokemon Go and most HoloLens games. And the HoloLens app where a virtual dog lives in your house.


HoloLens is not able to render opaque objects. They always have some level of translucency, and the darker the colour of the object, the more translucent it is.


Why even have "monitors", just drag your windows around on a hemisphere in front of you that takes up your whole FOV.


When you share your screen in Zoom/Hangouts/whatever, the monitor boundary is nice to isolate stuff. This is of course because Zoom doesn’t let me “add” an app to an existing share after having selected a single app - nothing that can’t be fixed.


Then the next step is apps that are 3D-aware, to let them draw beyond just a flat plane.


> even at the coffee shop

Lol. Just picturing the reactions of the hip baristas in SF who already disdain techbros for spending 5 hours at the coffee shop with their laptops/headphones and their one cup of coffee when said techbros upgrade to VR headsets.


To do AR, it'll need a camera. People hated google glass because of the camera.


People will get over it. Especially if Apple does it.

I remember clearly in 1999 going to dinner with co-workers, all of us pulling out our cell phones for something and one of the co-workers wives call us all geeks for even having a cell phone. I'm sure that same person is now more addicted to her smartphone than her husband.

The same will happen with AR. It will seem "ewww gross" until it doesn't


Do you mean to pass the real world through, or for scanning the world?

If the display is see-through - perhaps with a removable view shield for switching AR/VR - you can have pass-through without a camera. Like magic leap, but without the magic "black pixels" tech.

Alternately, if it has lidar you could display a wireframe / point cloud version of the environment. I don't think people will consider a lidar scan to be "a picture of me," even if it actually captures more detail.

Alternately again, it could have a camera but just never expose that feed to the real world or allow recording. It's their own closed hardware after all.


For scanning it. It's a good point that something like lidar might be more socially acceptable.


>To do AR, it'll need a camera. People hated google glass because of the camera.

I remember the fuss, but it's so weird -- everyone carries a smartphone with camera with them all the time, even into public bathrooms. If someone wanted to record me I think I'd have a better chance of noticing someone looking at me with their face than I would someone doing it while pretending to be scrolling twitter.


The article suggests it will have 12 cameras.


AR doesn’t necessarily need a camera. Might be possible with IR structured light?


To augment something you need to have whatever is there to begin with - so they either need a camera or you need transparent screens. An IR structured light depth camera is still ... well ... a camera.


It obviates the need for a ridiculous number of business trips.


I disagree. Business trips won't ever be a thing of the past. Most people enjoy traveling and see it as a perk rather than hassle.


Not those who travel 3x per week for one hour meetings


I'm confused by people saying that they would use VR for work. My head hurts after one or maybe two hours of using it; what if I worked a whole day in it?... ugh.


At least when I think of what could be done with a virtual work environment, I imagine it being used to go beyond the limitations of a workstation. We’ve been stuck with the same peripherals for 50+ years, the same GUIs for 25 and we’ve been trying to balance sedentary office work with health for as long as we’ve had swivel chairs. VR allows you to be on your feet moving around and literally putting your hands into the computer while engaging with a truly 3D environment, if we can’t revolutionize work with those gimmes then we deserve our RSI and back pain.


While that sounds like it would work in theory, we've been experimenting with 3D interfaces for work for decades and... it just hasn't caught on. I used to have a 3D desktop application back in the day, where your character could walk through a virtual room to open up a browser, apps, etc. It was a gimmick. Nothing beats hitting cmd+space and three letters to start an app (besides having it open already / cmd-tabbing to it)


Sure some things would stay the same but what if I rephrased it and asked: if you had an infinite budget, what would your ideal office look like?

There have already been inklings of the new interfaces we could invent like Tilt Brush. We shouldn’t be thinking along the lines of how we could do easy things differently, we need to think how can we do hard things intuitively?

Edit: another consideration is that if we want to be less sedentary maybe it is worthwhile actually getting up and walking over to a filing cabinet in order to open a file browser. If it were an option it would probably be more effective than setting reminders to get up and move.


The headaches are commonly a result of low resolution, low frame-rate, insufficient lighting, or bad hardware design (too tight or heavy or not adjustable for your head/face/glasses/etc). These are nontrivial problems to solve.


Very much disagree. I have a Vive Cosmos Elite which has 90Hz refresh rate (pretty standard apart from the Index) and 1440x1700 resolution, which is a bit higher than the average. IMO it has the most comfortable halo ring design for the headband.

That said, I can spend about an hour in it. I can be having the time of my life but after an hour, I need to take a break if I want to jump back in.

I'd laugh in your face if you told me you planned on working 40 hour weeks in this thing.


I know plenty of people who claim they can't stare at a monitor for more than 30 minutes or they get headaches yet we don't laugh in the face of people who do stare at monitors all day long.

It sucks if it doesn't work for you. Hopefully they'll find solutions so more people can be comfortable. For those of us who are already comfortable we'll be happy to use what's available now.


Yes. Not to mention position tracking latency, frame latency, jittery/slow controls, mismatched depth cues (parallax vs. lens focus), and on and on...


I know what parallax is, but what is lens focus? Is that an area seeking improvement in AR research?


Close one eye and focus on something really close with the open eye. Notice that things that are far away are blurry. Now still with one eye open, focus on something far away, and notice that things nearby are blurry. VR doesn't replicate this.


This is a very active area of research, and I'm pretty confident it will be a standard feature within the next five years. There are also light field displays, which are super interesting, but I'm pretty sure they are cost prohibitive for consumer devices.


There's also the odd sensation you can get after extended VR use that your limbs aren't really your limbs.


I can get that sensation once in a while when I'm driving for an hour or so in my car. My brain kinda makes the car an extended part of the body, so it's a bit of a strange sensation to suddenly "rediscover" your arms and hands as your actual limbs. The first time I experienced it, I thought it was because I needed a break from driving, but I wasn't even remotely tired or unfocused. Maybe I was too immersed, like what happens in VR?


Same; even with 8K screens, I'm sure my mild eye problems (glasses) would cause enough issues for me to prefer a regular screen.

I mean I'd love to be proven wrong, put one of these things on and everything being full of stars, but at the moment I'm skeptical.

My experience with VR has been limited to an HTC Vive. While a game like Elite Dangerous feels great, I had a lot of trouble reading text on it. Probably the same reason why I can't get along with binoculars, I just can't seem to focus on things with both eyes?


> While a game like Elite Dangerous feels great, I had a lot of trouble reading text on it. Probably the same reason why I can't get along with binoculars, I just can't seem to focus on things with both eyes?

This is basically textbook definition of convergence insufficiency, which is underdiagnosed on a population level. You should absolutely see an ophthalmologist about it, preferably one at an academic medical institution, as they will be less likely to miss it. They can also modify your eyeglass prescription to help with this issue. This does warrant seeing an eye doctor over.

Here is some info on convergence insufficiency: https://my.clevelandclinic.org/health/diseases/17895-converg...

These tips may help you, but probably not for VR: https://www.bouldervt.com/wp-content/uploads/sites/478/2015/...

I personally have severe convergence insufficiency (most middle-aged adults end up with mild convergence insufficiency and have prism/powered lenses) due to a rare immune-mediated neurological condition. This requires me to see a neuro-ophthalmologist. I also have astigmatism and nearsightedness. I only wear glasses to read/drive/VR. I have lenses from VR Optician for my VR headsets.

My favorite way to read is using VR. I recommend using Retinopsy Look VR (link: https://www.viveport.com/5445b338-0944-49a8-80ce-7c0f4ea7709...) (which is only available on Viveport, however I have it set up to boot from Steam), which allows you to adjust the distance from screen (focal viewing length), screen size, screen curvature, and screen tilt. This helps tremendously with convergence insufficiency.

Anyways, I personally use Retinopsy Look VR in combination with a screenreader called Kurzweil 3000 (https://www.kurzweiledu.com/k3000-firefly/overview.html) which reads the text aloud to me. When the text is being read aloud, the sentence being read is highlighted in yellow while the word currently being read aloud is highlighted in green. This is done simultaneously, and it helps me better absorb the material.


>My experience with VR has been limited to an HTC Vive. While a game like Elite Dangerous feels great, I had a lot of trouble reading text on it. Probably the same reason why I can't get along with binoculars, I just can't seem to focus on things with both eyes?

You might have a problem focusing with both your eyes, but the reason you couldn't read the text in Elite was simply that the Vive is a very low resolution headset compared to more modern models. Everyone had to lean in close to read the display screens on that headset.


Wearing an HMD reminds me of a diving mask - it's a little awkward and can get fogged up, etc. - but the overall benefit of the experience of seeing underwater makes up for the inconvenience. My guess is most of the problems are a result of limited early hardware and limited early software, both of which are expected to change over time ;}


I'd imagine it would just be for meetings that benefit from being able to share a space and work together on the same models.


I think the comment said he wanted an AR headset. Which is a bit different than VR.


An AR headset like you describe would be killer: get rid of the laptop form factor, put everything into a phone sized box, and you can sit at a table with just a keyboard and mouse and work.


I'm wondering why Apple doesn't create a device that is much cheaper, has no augmented reality, and is just for consuming TV/movies with A/V quality matching that of the best cinemas. Pay off the device with a monthly subscription that includes Apple TV+. Something like $600-700 would get a lot of people interested.


The cheaper devices already exist. An Oculus Quest is basically that. However I don't think the refresh rate or image quality is really up to scratch.

And if this were just a question of image quality, I wouldn't make a big deal about it, but poor image quality means something resembling motion sickness will set in pretty quickly, even if you're just watching a video.

One time I tried to do some yoga while watching TV with an Oculus Go and I stopped immediately; huge mistake, never trying that again. But based on my knowledge of the tech, it seems plausible that such a thing might work at a $3000 price point.

If the external cameras are low enough latency that I could basically have a TV floating at the perfect viewing distance while walking around and doing chores or exercising I would buy one in a heartbeat. Though I'm not sure that's even possible or if that's just guaranteed motion sickness. I would say if it's possible, it's definitely not going to be possible for under $2000, at least not for 3-5 years.


The problem with image quality is that you either need to use the built-in desktop view, or an app that captures and pipes your desktop view to your VR headset. Both of these approaches don't do any upscaling of your monitor's resolution, so if your monitor itself isn't 4k you have to do one of two things:

- use a vr video viewer to view downloaded 4k content

- use Mirage desktop to create a virtual desktop[0], then start oculus/steamvr, and use that to stream the 4k desktop view. this isn't actually made for regular vr, though, so i don't consider it a great solution

As a side note, you can't view DRM-protected content with any current solution (maybe other than with webvr, but I haven't seen Netflix add a VR button yet), so the only way to actually watch content is via piracy - not something trivial to get into for people without the time to maintain a torrent client+VPN+Plex setup.

0: https://www.mirage-app.com/


The trouble with VR video and the Oculus Go is that 3-DOF causes a high degree of discomfort when you try to move and the world seems to move with you. There has been some interesting work on synthesizing new viewpoints using deep learning, with rather impressive results. That is something that could bring a level of immersion comparable to 3D content to video sources.


Like a TV? I just don’t see wearables for consuming entertainment catching on and having mass market appeal


In my imagination, teen girls would love to be able to use AR to talk to their friends with their friend appearing like a hologram in their room over their friend appearing on their phone ala facetime.


I don't enjoy wearing glasses when I watch TV. I'm lucky that my vision isn't so bad that I need to.


Furthermore, I would be willing to use 16-bit colors for higher resolution.


16 bit colours on a pair of 8K screens is the best part of half a gigabyte per frame...

I assume sir will be wanting 60 or 120 FPS as well?

:-)


Foveated rendering will cut that in half or more.


For some purposes.

There's still ~66 raw megapixels in the two displays. Even if you "cheat" and run rendered output for the edges in 4K or 1080p, something still has to drive each RGB emitter for each of those pixels, even if you're painting blocks of 4 or 16 of them all the same colour...


802.11ay can hit that at 15fps theoretically, and if apple culls peripheral video, it may need way less bandwidth!


VR at 15fps would make you feel sick.


WiFi is a horrid idea; any retransmissions/interference would cause immediate, noticeable latency. The distance for full speed is low, and any object between the antennas, e.g. a human, would have noticeable effects.


7680 × 4320 px × 2 eyes × 2 bytes/pixel ≈ 127 MBytes/frame.
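A quick script to double-check that figure, plus the alternative reading of "16-bit colour" as 16 bits per channel (which is where the "half a gigabyte per frame" estimate upthread comes from):

```python
# Sanity-check of the per-frame size arithmetic above, under both
# readings of "16-bit colour".
width, height = 7680, 4320        # "8K" per eye
eyes = 2

# Reading 1: 16 bits per *pixel* (e.g. RGB565) -> 2 bytes/pixel
bytes_16bpp = width * height * eyes * 2
print(bytes_16bpp / 2**20)        # ~126.6 MiB/frame, matching "~127 MBytes"

# Reading 2: 16 bits per *channel* (RGB) -> 6 bytes/pixel
bytes_48bpp = width * height * eyes * 6
print(bytes_48bpp / 2**20)        # ~380 MiB/frame, "best part of half a gigabyte"

# Either way, uncompressed bitrates at VR frame rates are enormous:
print(bytes_16bpp * 120 * 8 / 1e9)  # ~127 Gbit/s at 120 fps
```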


Ahhh, right. I'd misinterpreted that "16 bit colour" as wanting 16 bits per colour channel, i.e. 6-byte pixels, not 2-byte pixels.


I read it the same way. I guess the phrasing ("I would be willing") should have made it clear, but it's an understandable misunderstanding: it's used both ways:

https://www.photoshopessentials.com/essentials/16-bit/

https://www.computerhope.com/issues/ch001557.htm


Yeah, but even at 100 MByte/frame, we talk enormous bitrates for sensible frame rates.


I wonder if a little bit of dithering would be enough to compensate for that at that high a resolution. The actual area of our eyes that can resolve colour is quite small, so it might work.


This is pretty funny to me. With VR you can go anywhere you can possibly imagine, and you chose an office.


With VR/AR you can’t “go anywhere” (not until the experience is Matrix-like anyway). You can just look at stuff. Like on a screen. Just a bit better than a traditional screen in a few ways, and worse in others. It’s not going to replace travelling to interesting places any time soon.


Well, if the objective is to be productive, I don't think it is a stupid take. I would pick that or a monk's cell.


I never believed in VR as a mainstream thing but AR keeps me curious. There's a Hololens demo on youtube that shows a bunch of virtual monitors of arbitrary size and shape floating around the user. That's potentially awesome. Think the desktop "windows" concepts but with virtual monitors, 3D UI elements where useful. This could have potential. I'd say the technology to make it workable as something you wanna wear for 8 hours a day still isn't there yet, though, and probably won't be until some major breakthrough. It's not just resolution, it's battery, contrast, size... I don't see anything I'd personally want to use with current or next-year tech.


That's basically the premise of the Oculus Quest, and my experience with virtual offices there are extremely mixed. I don't even find any of the apps to be particularly bad, but the inconvenience of using VR as a user shell becomes apparent very quickly. Small gestures that used to be a centimeter of movement are abstracted into larger, easier to read gestures that just tire you out. With that being said, I still found a few "professional" uses for it: I was particularly impressed by how easy it was to load up a Blender project and step inside of it.


You may be interested in https://immersedvr.com/. Some videos suggest that their coders use their VR workspace to code the app itself. Not exactly what you described of course :)


Fwiw I use this a couple hours a day a few times a week for coding. It’s great for focus and completely comfortable/readable as long as the text size isn’t tiny.


iSpatial is also an interesting experience. ImmersedVR is basically a 4+ screen desktop extended workspace while iSpatial gives each window its own screen in multi-level opp center environment. They both have some problems scaling certain content and are works in progress but are the standouts for getting things done in VR. Provided you can touch type well enough.


This is indeed the concept behind Lenovo’s forthcoming glasses: https://www.lenovo.com/us/en/thinkrealitya3


If it’s contact lenses you could have terminal with your eyes closed!


Ditto. I bought an Oculus Quest 2 purely based on the video from ImmersedVR. It's not quite there yet, but I'm enjoying Beat Saber in the meantime.


So long as your dining table is black and has no reflections, should be great...


i would absolutely abhor being mandated or otherwise pressured to use this device for work.

i would love forcing or otherwise pressuring my employees to have this strapped on their face for work 24/7.


Tame your expectations.

I'm pretty sure Apple's gaslighting field has not yet found a way to bend the laws of physics.

Everybody wants a sleek, everyday wearable device, without usability corner cases, but such are physically impossible to make.

Portability, visibility, usability. Choose one.

First, power requirements demand either external power, or extreme power austerity cutting into display, and graphics.

Recon Instruments had first really practical battery powered HUD goggles, and they barely lasted more than an hour in real life use.

The lowest power at which you can provide any reasonably rich graphics is 250 mW, the lowest end of the most energy-efficient SoCs. Possibly Apple can brute-force it with 5 nm custom ASICs, but not by much: 100 mW is the best possible with CMOS cells.

Even if you have a magic chip with hypothetical 0 Watt power usage, you will not improve the situation by much as your display will still eat at least 1W, or usually up to 2.5-4W if you use any waveguides.

Daylight visibility is essentially impossible without displays outputting at least 100000+ nits. There are no tricks around that, that's just physics.

There are very few technologies on the horizon which are physically capable of achieving anything better.

Monolithic devices are one, and the only ones that can offer sub-1 W power use at any decent visual quality.

But they are very expensive to make. All existing makers are manufacturing at lab scale only. Getting manufacturing out of the lab and into fabs is impossible without the process technology getting out of the lab too. That shortens the list of credible contenders to just one company in the world, and guess what: it's not Apple.

Third, even if you agree to a wired up design, where do you get 16k video from? Have you ever seen how thick DP 2.0 cables are?

How do you get latency down?

How do you make even simplest AR interactions not require the wearer to also wear a Quad SLI videogaming PC?

How do you transport 40 gigabits per second of video without having the I/O PHY eat more than the system itself? Only optics can do it under 1W.

Apple can surely put its silicon to good use here, but even a purpose made ASIC video system will be on the edge. It will limit them to the most basic graphics, and pretty much hardcoded, and handcoded use cases to extract maximum power efficiency, and workaround hardware limitations.

It will be a basic HUD, maybe with some GFX, and good video playback options. Essentially an Apple Watch, except you wear it on the head.

I bet they will intentionally limit its functionality to not to let users see performance limitations, and corner cases too often.


I agree that the 8K displays sound ambitious, but they may have some tricks up their sleeves if they are doing foveated rendering using eye tracking. I think they'll primarily focus on VR, but they may do AR via camera pass-through. The graphics in VR mode will be much better than what you're claiming, though; current standalone headsets like the Quest 2 have good enough graphics to create a sense of immersion where you forget that what you're interacting with isn't actually there. That's more than enough for virtual meetings and office spaces. If they can nail the facial animations by tracking eye and mouth movement so that non-verbal communication crosses the gap, they'll have a winner. The current offerings for meeting and social apps in VR are almost there, so Apple has a good shot at pushing it over the line.


I get that it's "believe it when you see it", but 16K over USB-C is supposed to happen.


The #1 use of VR headsets is porn, and porn apps are growing. Porn apps have some advantages over video apps because:

* a VR porn app is 6DOF, VR video is 3DOF

* a VR porn character has more presence: their eyes can follow you, they can turn to face you, they can reach toward your face and hands

* you can interact with a VR character; you can't with a video

In other words, either Apple is going to need to support porn apps on the App Store or they're going to need to allow 3rd party stores.


What's the source on this? I see people keep saying that but from what I've seen, the #1 use of VR is as a gaming console, with a focus on games that embrace the fitness aspect of VR. The Quest 2 has an insane mainstream potential, too, and most people won't bother rooting it.

Speaking of, I'm going back to beat saber. You might have been misled by its name if you think that's a porn app. :) Email's in my profile if anyone is up for some multi.

--

PS. It's pretty obvious to see the people in the comments here who haven't used VR headsets. Eyestrain and headaches are mostly a non issue with recent models, but what they still are is clammy. They get warm. Super uncomfortable for work purposes. Apple's design could solve that I suppose.

As a non-apple-user, I'm excited to see if they can produce something as lightweight as the Quest 2 that is much higher end. I would buy that in a heartbeat, even at a 3k price tag. (Yes, mostly just for beat saber...)


> What's the source on this?

Based on the GP's username it's probably from a self-survey.


In the context of the rest of this comment, and the one you're referring to, I wonder how many people assume 'Beat Saber' is a euphemism.

(https://beatsaber.com/)


Yep! And I can only sing praises about this game. It's some of the most fun I've ever had playing a video game. IMO it's the killer app of VR.


I've never played Beat Saber but one of the first VR games I played was Space Pirate Trainer, basically like a space invaders kinda game but you have two guns. Or a gun and a shield. Great game IMO.


Beatsaber is SOOOOOOO much better, give it a try ASAP!


It also just passed $180,000,000 in sales


They deserve every penny.

It's a once-in-a-decade game.


Oh come on man, join the Echo VR crew; your walls and all those 12 year old kids won't punch themselves.


While I have no specific source for VR, it's been kind of a meme for a long time that porn has been one of the major driving forces in the tech world and for tech adoption [0]

I guess the nature of the content makes it difficult to actually surface that because only a minority of people are open/vocal about their porn consumption habits or in what ways they are consuming it.

Case in point: Most people have no issue letting their online social circle know what games they are playing, like the way Steam surfaces it to friends.

While at the same time there is some demand to hide the more "embarrassing" games they don't want others to know they have in their library [1]

On a personal note: I have to agree with the parent comment that porn is actually the kind of content that makes the advantages of VR really shine on the immersive end, particularly when coupled with 3D audio like that of the Index.

Fitness games are cool too, but they suffer from usually being very fictionalized in setting and art style. So while they take you to a place, you inherently know that place is fake.

With porn VR content its usually a real setting, with real people, and you are suddenly right in the middle of it, which adds a lot to the feeling of being immersed in an actual place, and not just some virtual space made up of virtual polygons.

[0] http://edition.cnn.com/2010/TECH/04/23/porn.technology/index...

[1] https://www.reddit.com/r/Steam/comments/9wvfbw/a_way_to_hide...


One can make the argument that Beat Saber is the #1 use for VR right now. It not only tops Oculus' Top-Selling list by a comfortable 3x margin (assuming # of ratings are a good proxy for sales), but also the Most Popular section, ahead of YouTube, Rec Room and VR Chat. These are social (media) apps that are free, while Beat Saber costs $29 + additional in-game purchases. It's crazy how popular this game is within its niche. It's like a paid, single-player game had more downloads than WhatsApp on the App Store.

It's no wonder that Facebook gobbled it up years ago. Any Quest competitor that comes without Beat Saber is going to be a tough sell, and I assume that it would be strategically missing from any future Apple VR Store...


It's also possible that they are different from you (e.g. they have an IPD that's not in the range the headsets are designed for, or they are more sensitive to the vergence-accommodation conflict) and still experience eye strain/headaches. I do own a headset and I also love Beat Saber and Alyx, but I can't use it for too long.


Safari is Apple's usual answer for this kind of thing, and it looks like Safari may be getting WebXR support as we speak: https://twitter.com/rufus31415/status/1357022695553200130


WebXR will not give you the same experience as a native app. For VR you're always pushing the limits of tech. VR in Safari will never be that.


Well you're going to have to live with it because Apple will never allow porn apps on the App Store. They will certainly not allow alternative app distribution either.

Safari is the only loophole you get on iOS, unless and until an antitrust court forces Apple to change.


For reference, this is a Steve Jobs policy (which is undoubtedly going to stay):

> https://youtu.be/tJeEuxn9mug?t=1131


I mean it’s not just safari. You can find porn via the Twitter or Reddit apps to name a couple. Apple just won’t allow apps that are exclusively designed for porn.


Indeed, but it doesn't have to. It is a complementary tech. You have web apps which have no hope to match native apps with performance and yet people are using them thanks to ease of distribution etc.


This is not the best way to think about it - it’s not like mobile. WebXR offers distinct opportunities from native. It’s not a hierarchy. There are some things you can only do with one or the other.


Why do you say that?


Performance of JavaScript cannot match C++.

Wasm helps. You also need state-of-the-art decoders. And really, full GPU compute access.

Streaming will be a problem for a while to come. You'll want to download in advance, something that the web is not yet set up for.

It’s doable, certainly. But the web will make most things harder. When smartphones came about h264 was already quite far along in adoption (released in 2003). VR video is not at that state.

Maybe Apple has an answer for all this. But I don’t think so. VR video is still under development.

By the time VR video is reasonable to do, though, I expect things to progress on the web quickly, maybe lagging six months to a year.


Numerous sites serve up VR video, including porn sites. That's already established.


This isn't about video, it's about 3D apps. The top 3D porn apps need power similar to the top AAA video games to simulate hair and physics, and to render realistic-looking characters. In the same way AAA video games don't run well in the browser, VR apps in the browser are just as bad.

Also note: even if we're just talking video Safari does not support a webpage going fullscreen which is arguably required to do VR video. But just to be clear again, this isn't about videos, it's about high end real time 3D porn


Not 6dof!


WASM doesn't just help, it and PWA (progressive web apps) will be exactly what you're looking for - for this and numerous other things.

Google/Android is on board, Microsoft is making Outlook a PWA, so they're on board.

All up to Apple to enjoy some pressure and get with the times.


WASM is only for the CPU. It doesn’t solve the GPU.


>The #1 use of VR headsets is porn and porn apps are growing

What exactly do you mean by this? Are you saying that right now, the most popular use of VR headsets is for pornography? Or are you speculating on how VR headsets will be used in the future?


> Are you saying that right now, the most popular use of VR headsets is for pornography?

Yes, that's what I'm saying. It's a "dirty little secret". Most of it is VR video at the moment but apps are growing quickly. New titles appearing on steam all the time and appearing on the top 10 selling titles.


I have all adult content on on my steam account (just checked my filters), and top selling VR section shows me Alyx, Beat Saber, Borderlands and other popular games as usual. Is there an additional setting to see that?


About which apps are you talking specifically?

I‘m a long time VR user and except for some concepts I haven’t seen anything like what you describe, certainly not that popular.


Personally I got a Rift S this week and I can say with no shame that I've tried it and it's quite an impressive experience.

They even provide scripts for your automated toys (I don't own one, but I'm really curious about them) to sync with the videos.


which apps would you say are pushing the industry forward? there's a lot of junk out there. asking for a friend.


Why would it have to be a porn app in the App Store? Apple allows movie players which play any movies of the right data format (mp4, etc). Why not a VR player with data files obtained elsewhere? The game consoles already have VR playing capability for VR-formatted data files (e.g. Wipeout on PS4). As long as a book is either PDF or EPUB it can be trivially read regardless of content. I don't see an issue here.


Imagine spending 3K USD to watch porn. If there is truly a first world problem, this is it :P


You'd think so, but the VR market has grown substantially since 2018. Beat Saber, VRChat are games that you can sink thousands of hours into.


Apple CPU/hardware + Apple display technology + Apple software would easily justify the 3k price. And people would pay it. Not just because it's Apple, but because Apple has the means to vertically integrate the entire tech stack into a high end product that far surpasses what their competitors are capable of. Google Android software and Qualcomm hardware have been trying for a decade and still can't match Apple.

So yea, Apple is poised to dominate VR. The next big ticket, high margin device that every human being on the planet will want. Forget the Apple car. If they can achieve 8K VR, they are steps away from having a device that can put you in the same 'meeting room' with someone else (like your friends, family, co-workers, etc..) and have it be visually indistinguishable from the real thing.


Apple doesn't dominate games, so it's not so clear they would dominate VR. Apple will need to nail the killer app to jump-start adoption and an app store. The majority of Apple's full stack integration stops right before the content.


If you ask me, video games aren’t the future of VR. They’re fun but the most memorable experiences are social. VR FaceTime + shared experiences and spaces will take the cake.


If an online multiplayer, 3D shared space with the ability to exchange information and have fun isn't a game, then what is?


that's just semantics. Who is to say that a social, shared experience/spaces can't be a game. or vice versa


Yea, this will probably push VR to critical mass to get widespread adoption. Their brand loyalty is crazy, they could sell 20m toasters a year if they wanted.


Does Apple manufacture any display panels?


It appears they are now going to manufacture their own designs through TSMC.


Rumors indicate this is only a transitional product on the way to AR. Sort of like a dev kit, but not named that way.


The future of AR is VR. With 8K screens you just redraw reality.


AR seems far more interesting long-term. Real-time information about what you're looking at. VR seems much more entertainment, architecture,etc. focused--games, exploration, etc. Which isn't small but not necessarily transformational.


The idea is you have cameras on the outside and just composite digital objects on top of the video feed.

Partially transparent solutions have issues like never being able to make your digital object completely opaque. Probably our materials science will get there one day, but what's the point when HD cameras + screens let you do the same sooner rather than later.


Like I said, the future of AR is VR. People are already recreating their rooms and spaces in VR. Just extend that to wearing the device all the time. People don't want to see reality. They want to see their own version of it. Cloudy days become sunny days, etc., etc.


Sure, but the quality of those 8k screens are important. Reality has very high dynamic range, refresh rate, and bit depth :)


Retinal projection is the future of VR IMHO.


"just"


8k seems... optimistic. The PPI of such a display would be off the charts. Not to mention the kind of hardware required to run such a device. The NVIDIA RTX 3090 can barely run a 3D 8k game at 30fps, let alone in 3D for VR. What would you even connect such a device with? Displayport and HDMI wouldn't support two 8k streams.


Since they do mention eye-tracking they can heavily exploit foveated rendering which will make the resolution extremely high at the exact point where your eyes are looking and much lower in the outer regions of your vision. This is practically unnoticeable (if done right) and allows for much more interesting performance optimizations. Full 2x8K in 90Hz is impossible otherwise.


Part of the problem with foveation is it's pretty noticeable and mildly annoying for not a huge gain.


Around the early/mid eighties, I got to play in a flight simulator for the new F-111 avionics package Australia was buying (my dad ran the project building the sims).

It had a pair of Silicon Graphics Reality Engine IIs, one projecting a lower res image over the entire half-spherical screen, and the other driving a projector mounted on a gimbal that tracked the flight helmet - to display a high resolution image in the direct field of view of the pilot.

It was _possible_, if you tried, to "trick" the system so you could notice from the pilot's seat what it was doing. But it was _remarkable_ the difference between sitting in the seat with the helmet on, and watching from behind where you could really obviously see the high-res patch of sky moving around. Enemy planes turned from Space Invaders-style pixel art into recognisable Russian fighter planes when the pilot looked at them. The "immersive reality" while flying the sim was amazing.


It greatly depends on the techniques used for rendering. Current rendering engines focus on pixel perfection at every part of the screen, as they don't know where you are looking. More and more games use a hybrid between path/raytraced effects and other shader effects that could benefit from knowing where the viewer is actually looking. Raytracing especially can get huge speedups from sampling fewer rays: https://www.peterstefek.me/focused-render.html

Nvidia has researched temporally stable resolution reduction at the edges (needed, or you'll notice flicker in the blurring), as well as enhancing contrast, which the eye is more sensitive to in peripheral vision than sharp detail.

Put a lot more research into this as well as proper support in the major 3D game engines and we have a winner.
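To make the ray-sampling idea concrete, here's a toy sketch of a gaze-aware ray budget (the falloff curve and all constants are made up for illustration; a real engine would use measured acuity data):

```python
import math

def rays_per_pixel(ecc_deg, full_rays=4, fovea_deg=9.0):
    """Toy foveated ray budget: full sampling inside the fovea,
    exponential falloff to a 1-ray floor outside it."""
    if ecc_deg <= fovea_deg:
        return float(full_rays)
    return max(1.0, full_rays * math.exp(-(ecc_deg - fovea_deg) / 20.0))

# Compare total ray budgets across a 100-degree field, uniform vs foveated.
eccentricities = [d + 0.5 for d in range(100)]
uniform = sum(4 for _ in eccentricities)
foveated = sum(rays_per_pixel(d) for d in eccentricities)
print(f"foveated budget: {foveated / uniform:.0%} of uniform")
```

Even with these made-up constants, the foveated budget comes out well under half of uniform sampling, which is where the raytracing speedups come from.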


> pixel perfection at every part of the screen

Current lenses have quite a pronounced sweetspot in the centre of the vision so high resolution is wasted at the edges.

"fixed foveated rendering" is supported with Oculus and implemented directly in some games to reduce resolution at the edges just without eye-tracking so you can notice it if you move your eyes instead of your head. There is also "dynamic fixed foveated rendering" to ramp up/down for the current rendering load.


Which version(s) have you experienced? (Any that track eyes?)

I've only used Quest 1, with _fixed_ foveated, and while it's noticeable, it's good enough that I could see a generation or two of improvement pushing it beyond noticeability.


Unnoticeable if done right.


Thankfully, if there's any large company on Earth that really cares about perfecting UX, it's Apple.


I think part of the problem is the peripheral area where you wouldn't notice but still see is so thin that you don't save a lot. Maybe Apple has cracked it, though.


Might also be that peripheral vision is more sensitive to movement, and the edges where the lower res and higher res rendering meets could make for distracting discontinuities that look to the reptile brain (or the preprocessing in the retina/optic nerve/visual cortex) like a tiger...


>Apple will liberally use an already-known VR technique that involves using eye-tracking to render objects in the user's periphery at a lower fidelity than what the user is focusing on.

This seems to be describing foveated rendering, which is reducing the image quality in your peripheral vision, because you are less likely to notice it there. It requires tracking the eye so you know what part of the screen it is looking at.

The RTX 3090 is likely rendering the whole 8k screen at a consistent quality level, whereas foveated rendering would mean that only the part of the display that the eye is actually focused on would be rendered at full quality. If Apple could pull off the tracking well enough (accurately, with low latency), they could probably save a lot of GPU power by lowering render quality outside of what you're looking at.

https://en.wikipedia.org/wiki/Foveated_rendering


a) Deep-learning super sampling (DLSS) allows upscaling to 8K at a low cost.

b) VR uses variable rate shading because we have a lower visual acuity in our peripheral vision.

c) VR rendering typically "shares" a significant amount of the work between the two viewports. E.g.: one set of "commands" are rendered simultaneously into two buffers with different view transforms. Textures and meshes are cached once and rendered twice, so the bandwidth requirements aren't actually doubled.

d) Display stream compression (DSC) and similar technologies would work well for VR because the viewport is always in motion with a high refresh rate. One could even imagine sending a H.265 compressed stream wirelessly at a mere gigabit, which is fantastically high bitrate video but well within current WiFi capabilities.

e) There will be future developments as well, we're not stuck with current technology. Keep in mind that current era flagship GPUs are manufactured on silicon processes that are about 3 generations old! By the time this VR kit hits the mainstream market, GPUs could be manufactured on a 3 nm TSMC process and easily put out 90fps in 8K resolution.
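As a sanity check on (d): here's the bit budget a 1 Gbit/s stream would leave per pixel, assuming dual 4K at 90 Hz (my assumed parameters, roughly current-standalone-headset class):

```python
# Bits available per pixel in a 1 Gbit/s stream of dual-4K 90 Hz video.
width, height, eyes, hz = 3840, 2160, 2, 90   # assumed headset parameters
pixels_per_second = width * height * eyes * hz
bits_per_pixel = 1e9 / pixels_per_second      # well under 1 bit/pixel
print(f"{bits_per_pixel:.2f} bits/pixel")
```

Under one bit per pixel is aggressive, but it's within the range modern codecs like H.265 manage for high-resolution video, which is why the gigabit-over-WiFi idea isn't crazy.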


Regarding (d), if you have a Quest / Quest 2, there is no need to merely imagine; this is how WiFi + Virtual Desktop works now for PCVR connectivity, and it's excellent. (Not sure if it tops out at a gigabit, but it's much more than enough for a great experience at the 4K of the Quest 2.)


Especially if Apple keeps pushing their custom GPU's (which are already at 3nm). The current M1 GPU is somewhere between a GTX 1050 and a 1070 - so still a few generations old. With a bit more focus on the GPU part (and use a bit more power) they might be able to pull it off.


DP 2.0 supports two 8K streams @ 120hz with 10 bit color over USB C: https://en.wikipedia.org/wiki/DisplayPort#2.0 Requires DSC though.


The article mentions use of something like a M1 processor, so I'd assume this is something like the Oculus Quest where the rendering is done on the VR headset itself.

It'd also explain the price tag, if you have to buy both a high quality display and a speedy GPU that's been glued together.


I wonder if the bulk of the resolution is to handle this pass through video in a fixed pipeline. Programmable rendering could be at a much lower resolution and scaled up in hardware.


Like mentioned in the article, they would only be rendering what’s directly in the clearest part of the eye at full resolution, so the idea of actually needing to transmit two full 8k streams is not right.

Also, it sounds like this would be more of an all-in-one device, where it’d handle the rendering instead of connecting to a separate computer, so the rendering performance is more likely to be the limitation than any kind of transmission limitations anyways.


If you want the latest AAA game in ultra quality, then of course no GPU can run it at 8k. But I'm pretty sure a RTX 3090 can run older games at 8k.


I suspect an RTX 3090 could run many Oculus Quest 2 games at 8k.


This is an intentional leak.

Apple is trying to see what the reaction is. They've got some negative press recently for pricing things a little too high (say a $1k monitor stand or $700 wheels). They're watching to see what the reaction to the rumor is. If people say 'no one would ever pay that', then they'll dial it back a bit. If the reaction is 'eh, maybe for those specs', then full steam ahead.

Either way, they can just deny the leak as a rumor without any basis.


Seems like a conspiracy theory. Apple has years of experience creating products that people love at all different price points (original iPhone was crazy expensive, for example).

If the product is legitimately better than everything else on the market, it doesn't matter how expensive it is on initial launch.


The original iPhone was more expensive compared to other phones, but it was not priced like an Apple hardware product for professionals: between $500 and $600. $600 is about $780 in today's dollars.


Absurd.

You think massive companies don't price their overpriced things with price-elasticity in mind?!


From the article, it would have "an outward display that could be used to show content to people near you or to check information when the headset is not on your head"

That detail seems a little suspect to me in this whole story


What's the necessary resolution in each eye needed to be "retina"? Once we reach that will it be the end of the "monitor era"? A lot of interesting things can happen once you no longer need monitors. Though sharing with others will still be a practical limitation so I doubt they'll go anywhere for now.

Once you no longer have monitors I think there will have to be a UI and UX shift around how you interact with things. Windows made sense because you had a fixed set of boundaries. If the boundaries no longer exist it probably doesn't make sense to have windows in the traditional sense anymore. You could have an infinite set of context sensitive menus that appear as necessary.


A field of view about 180 degrees wide, 135 degrees high, 1 arc minute in maximum pixel size, and updated 100 times per second… twice, to account for full stereography, and using 48 bits of color.

180 × 135 × 60 × 60 × 100 × 48 = 419,904,000,000,

420 billion bits per second, without compression.
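Checking the arithmetic (note the product as written covers one eye; true stereo would double it):

```python
# "Retina" bandwidth from the parameters above, per eye.
fov_h, fov_v = 180, 135     # degrees
ppd = 60                    # pixels per degree (1 arc-minute pixels)
hz = 100
bits_per_pixel = 48

pixels = (fov_h * ppd) * (fov_v * ppd)    # ~87 megapixels per eye
bps = pixels * hz * bits_per_pixel        # = 419,904,000,000
print(f"{pixels / 1e6:.1f} MP, {bps / 1e9:.0f} Gbit/s per eye")
```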


Keep in mind that the 90FPS benchmark is merely required to achieve "presence," which is the phenomenon of the brain accepting the virtual reality as reality. Higher framerates are always beneficial (as this also reduces latency).

> 420Gbps

I assume foveated compression (a pet idea of mine that I have no doubt someone else also has) is probably the only way to realistically deliver the stream to the HMD (assuming that some rendering pipeline that can deliver the uncompressed stream exists).

You also have to be concerned about micro-saccades, which may increase the effective resolution of the eye.


Right, 420 Gb/s is the worst-case number. If you can track the fovea accurately, at 100 Hz you need to keep at least a 9 degree circle at full resolution -- more likely an 18 degree circle -- and even so, there's likely no mechanical system that's going to keep up with that motion, so you need a display screen at full resolution across the entire surface, even if you're only painting full resolution across a small sector each frame.

Also also note: there's head motion to track and events to respond to. If you're playing a movie, that's pretty much your best case because you only have to pan and tilt a stream of things you already know about. Playing a game is medium bad, because you can predict the range of possible events. Doing AR mediation is worst, because you can't predict the rest of the universe.


When you say "foveated compression", do you mean:

> Apple will liberally use an already-known VR technique that involves using eye-tracking to render objects in the user's periphery at a lower fidelity than what the user is focusing on.


No, foveated compression is not foveated rendering. It is using the same general idea for a different area of the problem: a higher bitrate (on the wire, or across the wireless link) for in-fovea.

Due to latency, you can't render the fovea area only; you need to cover quite a bit more in case the eye moves between submitting the command buffer and the final image being ready.

Compression can happen much later in the process, where the latency is much more predictable or even fixed. This is useful because massive bandwidth is easiest along traces, harder along wires and hardest wirelessly.


Interesting recent discussions on Foveated Rendering https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


In the future there may be an integrated display technology that physically cannot display an arbitrary combination of pixels at high speed, but can only do so for the space of "images which humans are likely to see" which I suspect is an unimaginably smaller possibility space.

I suppose something like a neural representation of a scene that is directly rendered on the underlying hardware, without first being converted to a raw bit stream.


This is an interesting return to the way that CRTs worked. They were technically 0-dimensional displays (a single beam changing in intensity, which was moved by a non-programmable magnet around the tube)

In our modern future display, we would similarly blast data to only where it is needed at an instant in time.


Foveated imaging can dramatically decrease the required bandwidth. We have about 6 million cone cells and about 1 million of them is in the fovea centralis. Rods are more numerous, but they are mostly for low-light vision.
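A crude illustration of the potential savings, treating bandwidth as proportional to photoreceptors served and picking an arbitrary 1/16 per-receptor budget for the periphery (both big simplifications of my own):

```python
# Crude foveated-bandwidth estimate from cone-cell counts.
total_cones = 6_000_000
foveal_cones = 1_000_000
peripheral_scale = 1 / 16   # made-up per-receptor budget outside the fovea

relative = (foveal_cones + (total_cones - foveal_cones) * peripheral_scale) / total_cones
print(f"~{relative:.0%} of full-resolution bandwidth")   # ~22%
```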


What's the resolution in terms of a screen? Like height by width in pixels? Sorry if that's obvious from your comment, coming at this with just basic consumer knowledge about displays.


The eye doesn't care about pixels, the eye cares about the smallest detail that it can distinguish. If you specify a particular distance you can figure out how small the pixels have to be at that range so that you can't see them. It's around one half of an arc-minute, which is to say, 1/120th of a degree.

https://en.wikipedia.org/wiki/Visual_acuity is a really useful overview.

Now, we know that human eyes can only cover a little less than 180 degrees horizontally, and about 135 degrees vertically, so we could multiply each of those by 120 and get:

180 * 120 = 21600 across by 135 * 120 = 16200 down. That's a screen where every pixel is guaranteed not to be distinguishable from the ones around it. If you had this on a hemispherical screen 10cm from your eye, your square pixels would need to be no larger than 0.0145 mm on a diagonal, or if you prefer, have a dot pitch of 0.0145 mm. Current commercially available smartphone screens have dot pitches of about 0.045 mm, about 3 times too big.
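The arithmetic above as a quick script (the 0.5 arc-minute acuity figure comes from the visual acuity overview linked earlier):

```python
import math

PPD = 120                      # pixels per degree (0.5 arc-minute acuity)
h_px = 180 * PPD               # ~180 degree horizontal field
v_px = 135 * PPD               # ~135 degree vertical field

# dot pitch needed on a hemispherical screen 10 cm from the eye:
# one pixel subtends 1/120 of a degree at 100 mm
eye_dist_mm = 100
pitch_mm = eye_dist_mm * math.radians(1 / PPD)

print(h_px, v_px)              # → 21600 16200
print(round(pitch_mm, 4))      # → 0.0145
```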

Does that help?


Probably can cut that down further due to:

* People's vision getting worse with age

* Each OLED pixel consists of 4 subpixels, so may only need to reach that acuity at sub pixel resolution

* Acuity drops and eyestrain increases at the edges of vision so likely only need maybe 75% of our range U/D L/R at the highest resolution, and the outer edges can have much lower resolution

Other hacks for foveated rendering include rendering peripheral vision at lower resolution and monochromatic, and not rendering in the blind spot, since our brains will just fill in those areas. May not be worth the complexity, but there's a lot of hacks that could take advantage of our visual system's quirks. I imagine ray tracing makes these hacks even easier to implement.


From the logic of the GP, that would be:

Height: 135 * 60 (135 degrees at 60 pixels per degree)

Width: 180 * 60

So 87 megapixels, or a 10.8K display with a 4:3 aspect ratio


Depends on the field of view angle you want to achieve, but 8K is unlikely to be enough.

Back in 2015 Michael Abrash said:

In order for it to have retinal resolution across a field of view of 180 degrees it would have to have something on the order of 16K by 16K resolution [per eye]. [1]

--

[1] https://youtu.be/UDu-cnXI8E8?t=4232


> Apple will liberally use an already-known VR technique that involves using eye-tracking to render objects in the user's periphery at a lower fidelity than what the user is focusing on.

It should be enough if they don't have to render the full view, but only what the retina is focusing on, I would think.


Which suggests an interesting solution: a very small high-res display surrounded by much lower-res areas, moved around in such a way that it always sits right where the eye is looking.


I think it would be easier, more reliable, and more energy efficient to wait for microled displays, which seem to be right around the corner.

I get the impression that they might be available by the end of the year or next year, and might be something Apple has access to?


that solves the gpu problem of high-resolution rendering and the bandwidth issue, but is the density of pixels in the display enough to fool the eye?


I feel like the windows are there to create context for the human brain, not because they resemble the shape of the screen. These contexts may look a little different from windows, but I don't think they will be radically different in principle, and they will still maintain a distinct resemblance to the physical world. That's what our brains are moulded to best interface with, after all.


About 3000ppi

Example headset that used a small display with this pixel density.

https://arstechnica.com/gadgets/2019/02/retina-resolution-he...


This is the opposite direction for VR. We are at a prime time to be using algorithms that can read facial gestures and track eye location, and we need much better full-body tracking.

Users are happy with 2K per eye, but they're not happy that nobody but Oculus with the Quest is making an affordable and well-featured headset. It's even perfectly usable entirely wirelessly with Virtual Desktop.

Apple has the ARM tech to make high-performance mobile headsets that can use their high-quality cameras for tracking. Oculus has almost got controller-free hand tracking down; they're limited by the resolution and number of tracking cameras, though.


Wait, you asked all users?

Because 2K per eye seems enough until you realize that it gives you less than 20 ppd (pixels per degree), while a 2560x1440 27'' monitor at 50cm gives you 40 ppd.

To get the same ppd on a 27'' monitor at 50cm it would be 1111x625; for a 24'' at 75cm it's 700x400!

My point is that while the current ppd might be fine for some, it's very low for anything that requires detail. Any small text or gauges or similar are simply not readable in VR while being crystal clear on an FHD monitor, just because the field of view is stretched so wide in VR headsets.

You can say that small text isn't important in VR, but that would be dismissing the fact that small text and higher feature resolution can increase information density dramatically, which is always desired. After all, you aren't browsing this site on a screen with 640x480 resolution, are you?
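A quick way to check these ppd figures (the monitor width is my assumption for a 16:9 27'' panel):

```python
import math

def ppd(h_pixels, screen_width_cm, distance_cm):
    """Approximate pixels per degree across the horizontal field of view."""
    fov_deg = 2 * math.degrees(math.atan(screen_width_cm / 2 / distance_cm))
    return h_pixels / fov_deg

# a 27" 16:9 monitor is roughly 59.8 cm wide (assumption)
print(round(ppd(2560, 59.8, 50)))   # → 41

# typical headset: ~2000 horizontal pixels spread over ~100 degrees of FOV
print(round(2000 / 100))            # → 20
```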


Absolutely; the walled garden of Apple combined with the high price tag will lead nowhere.


I bought an Oculus Quest 2 when it came out and it's pretty fantastic. The whole experience with setup and gaming is excellent, especially for the price.

There are plenty of little rough edges that I feel like Apple would do a good job of smoothing off. Things like not being able to get phone notifications or use bluetooth headphones.


Were you as blown away just by the setup and first tutorial "game" as I was? I bought it on a whim over the holidays when I was upgrading my PC. Haven't used a VR headset since the Oculus devkit 2. Had no idea how far along everything had come; particularly tracking.

By the end of the first day I had my tape measure out moving my couch, coffee table, and clearing out the living room. It's the VR space now.


Probably. The "game" where I danced with the robot blew me away. It really felt like I was dancing with someone else.


The most interesting thing about this is what it says about the work Apple has been putting into graphics hardware. Dual 8k displays that I assume are being driven by Apple graphics hardware. Lots of R&D for future M1s or discrete cards for Mac Pros.


> Apple will liberally use an already-known VR technique that involves using eye-tracking to render objects in the user's periphery at a lower fidelity than what the user is focusing on.

I would be shocked if this is anything but a gimmick, with all sorts of upscaling and rendering tricks.

I very highly doubt Apple is all of a sudden the producer of the best GPU in the world, by a wide margin. Nvidia claims their 3090 (a $1500 card) is capable of 8K 60fps. In benchmarks, this is only possible in certain easy-to-render games.

The idea that apple can double that, and push a higher refresh rate for smooth vr (usually 90fps minimum) is wild.


To a point, of course, they'll use all the tricks they can. But if it genuinely isn't in the ballpark, why use 8k screens instead of 4k?

They at least think they can drive enough real pixels to make the more exotic/expensive option worth choosing.

Edit: and the M1 is, by certain metrics, already the best 'CPU' in the world. There isn't any particular reason to think Apple incapable of doing the same with GPUs if they wanted to throw the silicon and resources at it, considering their access to cutting-edge chip manufacturing.


> upscaling and rendering tricks.

Just a note, Nvidia DLSS 2.0 is so good, that it doesn't really even matter if it's "just a trick". I wouldn't discount upscaling via AI.


Do we know what the effects are likely to be of wearing a VR headset for hours at a time? I don't think our eyes are going to be happy with a fixed focal length for an extended period of time.

At least when I'm working on a laptop/desktop, I'm constantly looking around the room, at my keyboard etc.


Plenty of people are spending many hours in VR these days. It's not affecting their eyesight. It may not be good for young children when their vision is still developing, but for adults there is no issue. Especially once you're old enough to have presbyopia; then you can't focus your eyes anyway so it's moot. And presbyopia happens to everyone, VR or not.


I understand that this is still a big problem in the air force when training fighter pilots with VR headsets. The fixed focal length means extended use still leaves pilots disoriented even after taking the headsets off. I would imagine this will hamper real world adoption and keep VR pretty niche for the time being. Using light-fields is the obvious solution but that's still a pretty nascent tech.

It's not entirely clear what problem VR is trying to solve in a lot of situations where it's claimed to apply (like office use). There are obviously some places where it could be quite useful, but light-field fixed screens might offer a lot of similar benefits for many of these applications, and at this point at least, that tech is a bit further along (and might be more comfortable for users).


I'm not that worried about that, since I already spend most of the day staring at a monitor at a fixed distance. But I would be a lot more worried about the added weight. My posture is not the greatest, and having a bulky headset weighing down the head and levering the backbone could be problematic for long stretches of time.


VR exercises and posture-correction add-ons.


The main thing is that it's often troublesome wearing glasses with them. You can get the lenses changed to your prescription, and you can wear some headsets with glasses. Sometimes light shines in the side of the lens, which is distracting.


Curious of this too, I don't want to go blind!


There is not a fixed focal distance in VR. The screens are an inch in front of your eye, but the objects you look at can be (virtually) at any distance. It’s like normal vision.

The question of the effect on vision of shining screens into your eyes for hours at a time still stands.

Edit: whoops, guess I was wrong about that!


This is inaccurate. Unlike IRL, where objects can appear at varying focal distances, requiring the eye muscles to constantly adjust the focal distance of the eye, the two images in a VR headset (one for each eye) remain at a constant focal distance as determined by the lenses.

The 3D illusion in VR comes from stereoscopy. The objects that appear in VR are actually all at the same focal distance.


So why do you need glasses in VR? Just trying to understand.


Because there are lenses in the headset that focus the two images at a set focal distance (between a couple of meters and infinity focus, depending on the headset).

You need glasses for the same reason one would need glasses to see a flat image several meters away, or to see the stars, as the case may be.
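A thin-lens sketch of why the focal distance is fixed (the focal length and display distance below are made-up illustrative numbers, not any real headset's optics):

```python
# The display sits just inside the lens's focal length, so the virtual
# image lands at one fixed distance no matter what the scene shows.
def virtual_image_distance_mm(display_mm, focal_mm):
    # thin-lens relation 1/v = 1/d - 1/f gives a virtual image when d < f
    return 1 / (1 / display_mm - 1 / focal_mm)

# display 38 mm behind a 40 mm focal-length lens (assumed values)
print(round(virtual_image_distance_mm(38, 40), 1))   # → 760.0 (mm)
```

So the eye always accommodates to roughly the same distance, even though stereopsis makes objects appear near or far.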


Over the years I've found it increasingly difficult to find a way it could be harmful. The latest nugget that comes to mind is that daylight is thousands of times more intense than any indoor light.


I don't believe this to be true - don't you need a light field for this to be possible? i.e. what magic leap was doing?


It is obviously saliently true just via direct experience. Your eyes focus at different distances when looking at 3d objects in VR. You can feel it!

I am not a physicist by any means, and I am not sure what a "light field" exactly is in engineering terms, but as an everyday skeptic I am generally rather wary of anything Magic Leap puts out.


You are confusing optical focus (like a camera lens or the lens in your eye) with stereopsis - the mechanism by which your brain detects depth information through parallax.

The VR headset shows each eye a slightly different image. Objects appear in slightly different locations in each image. When you look at these virtual objects, the relative angle between your eyes changes which creates the illusion of depth. (This is my layman understanding. I am open to correction).

The optical focal distance remains constant, however.


I've followed up and verified that indeed, almost all VR displays have a static focal distance. Thank you for taking the time to correct my and others' misconception on the topic!


Nah, they have fixed focus (I think at infinity, but that might not be true). You use other depth cues in normal VR systems, but not focal depth.


Folks here are discussing the difference between AR and VR. My own take is that it's a difference of implementation. Quality VR with quality external cameras is going to give you a MUCH better experience than AR glasses would, and can of course do many things that AR glasses could not. I do think that AR glasses are much more practical and will have broader applicability. But perhaps not so much after a great VR implementation by Apple.


VR is fail-deadly, AR is fail-safe by design and there's no way of getting around that. When VR fails, it's a black screen covering your head, whereas AR is the world around you.

There are many circumstances where AR is far better than VR, even with perfect VR.


AR is also much, much harder. Refresh rate doesn't just have to fool your brain, it needs to "stick" to reality. Plus, you're adding environment tracking and integration. If you're using a glasses model, the only techniques I've heard of darken the environment and the CG element lightens things--this makes integration difficult.

I imagine you could use some tricks if you did AR with cameras pointing out and displayed them on a screen (this seems to be what the parent is implying), but you go back to "failure" being a black screen. In practice, I haven't seen this be a big deal. The display is black when it is off (like your phone assuming it doesn't put the screen to sleep). If any rendering misses a frame they distort the previous frame in the buffer to match any head movements (in AR likely reintegrate with new footage).
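The "distort the previous frame" trick (often called reprojection or timewarp) can be sketched as a homography for pure head rotation. The projection matrix and rotation angle below are made-up illustrative values:

```python
import numpy as np

def reprojection_homography(K, R_delta):
    """Homography that warps the last rendered frame to compensate for a
    pure head rotation R_delta, under an assumed pinhole projection K."""
    return K @ R_delta @ np.linalg.inv(K)

# illustrative 1080p pinhole projection (assumed values)
K = np.array([[800.0,   0.0, 960.0],
              [  0.0, 800.0, 540.0],
              [  0.0,   0.0,   1.0]])

# 1 degree of yaw since the frame was rendered
a = np.radians(1.0)
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [       0.0, 1.0,       0.0],
              [-np.sin(a), 0.0, np.cos(a)]])

H = reprojection_homography(K, R)
p = H @ np.array([960.0, 540.0, 1.0])   # warp the image-centre pixel
print(p[:2] / p[2])                      # shifted ~14 px horizontally
```

This only corrects rotation; translation needs depth information, which is why reprojected frames can still look subtly wrong up close.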


So you take them off when they fail. And file a bug report.


I’m imagining a world where they’re so ubiquitous that they’re used when walking or driving, etc, because they provide a HUD. This is far down the line.


Perhaps not that far. But in that case there will be more regulated testing involved, and the subsystem that feeds the camera to the display would probably be a hardened subsystem running on a separate processor.


But at the end of the day, the battery can still die. If your car battery dies, the car doesn't seize up and the brakes don't stop working. The power steering goes away and the power braking, so they're harder to press, but you can still get the job done.

Having a curtain come over your eyes is an unacceptable solution.


Again, you can simply take them off. Perhaps scarier to contemplate is a future humanity that just couldn't cope with real, unfiltered pixels.


You’re driving 60 miles per hour. In traffic. You suddenly can’t see. How long can you be blind before a crash happens?

You’re in an in-danger state. That’s why I said “fail deadly”


Interesting point. But is it any more "in danger" than sneezing, where I lose vision for a couple of seconds? It shouldn't take more than a couple of seconds to flip up my goggles (they should be designed with "flip up" capability anyway). Another point is that we will have self-driving cars before we have such high-quality VR goggles.


Also relevant is the issues they've had getting this right with fighter jet helmets.

https://foxtrotalpha.jalopnik.com/this-is-the-f-35s-third-ge...


Dual 8K resolution. I am assuming it won't be 60Hz; it will need to be 90 or 120Hz.

I am not even sure a top-of-the-line PC can do that today. Even dual 4K at a consistent 120fps is pushing the limit.

At VR headset size, 8K per eye means a display that is nearly 3000ppi. My guess is that 8K is the wrong figure and it is actually derived from 3000ppi, which is similar to Varjo's [1] HMD.

Lastly, this is "The Information", which means it is an Apple leak, intentional or not. Over the years they have been doing PR pieces and other intentional leaks for Apple. It will mostly be the same tactic someone else has mentioned: setting consumer pricing expectations.

[1] https://www.vrfocus.com/2019/02/the-varjo-vr-1-is-a-6000-hea...
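A quick sanity check on the 3000ppi guess (the panel width is purely my assumption; VR microdisplays are typically an inch or two across):

```python
# If "8K" horizontal pixels sit on a microdisplay roughly 2.5 inches
# wide, the density lands right around 3000 ppi. Width is an assumption.
h_px = 7680
panel_width_in = 2.5
print(round(h_px / panel_width_in))   # → 3072
```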


They will use eye tracking and foveated rendering to reduce the hardware requirements.


Why will it need to be 90 or 120 Hz?


60 hz in VR feels disorienting. I recently upgraded my video card to an RTX 3090, which allowed me to finally go from 90hz to 120hz using my Index. It was incredible how much the experience improved. I used to have a vive, and when I first got the Index the headset felt pretty similar. But once I improved the refresh rate it was a night and day difference in feel.


Motion Sickness. You can try it with FPS.

For me it is funny because the more realistic or better the graphics, the higher the baseline frame rate required. My brain works fine with Counter-Strike at 60fps, where the graphics are low-res and not any good by modern standards. Once it gets to a very realistic level, my brain half-interprets it as the real thing and expects "real world" motion. At that point 120fps feels so much better, while 60fps gives me a headache.

If we were not limited by computational constraints, I would push for an even higher frame rate rather than higher resolution.


You need a high refresh rate to keep convincing your brain of the illusion. If you dip down to 60hz and below, VR sickness can be a real problem.


idk if this prediction is already all over the place, but it sure sounds like the iPad pricing: rumors said $1k, launched at $499 for the base model, lots of press about how affordable it was.


Isn’t this a clear indication that Apple will have its own stand-alone GPU line? This plus the car news from earlier this week made me think Apple already has its own GPUs to rival or surpass Nvidia.

Edited for clarification.


Apple already has its own GPU line. It is in all iPads, iPhones, and M1 Macs.


Apple already puts GPUs in its chips. If you're suggesting a line of PCIe cards, I don't think that's indicated at all.


Saw a really interesting talk about the human eye with regard to resolution limits.

A single frame for an eye is something on the order of 1-10 megapixels, but the brain's signal processing, combined with the vibrating-eye effect, creates something approaching a gigapixel image using what may be analogous to temporal resolution enhancement:

https://www.osapublishing.org/boe/fulltext.cfm?uri=boe-8-3-1...


Yes... human vision is best thought of as a stream of visual information (i.e. temporally related data). A single frame isn’t really a meaningful measure of human visual capacity.


I was hoping that Apple would be making AR spectacles, and not a VR headset.

I was really excited by the possibilities of Google's Glass and really disappointed when they killed it off.

I want a HUD in my glasses. I want to read text messages and get navigation directions without having to look down at a screen.


I used Google Glass in person for about 10 minutes, and after so much oohing and aahing over the tech, for me it was pretty disappointing to experience. It felt like I was using a cheap Android system from 2012 overlaid onto reality and, uh, it wasn't that great. That being said, I only got to see a few basic apps in action, and maybe I didn't get the full tour of the hardware's abilities.

Personally, I think that VR will win out, even for AR applications (or maybe my semantics are incorrect). I can imagine a VR system that uses external cameras to mimic a person's natural field of view and overlays additional information in the process of displaying it. It seems like a more available route to a fully immersive "augmented reality" than the current glasses based prototypes.


> It felt like I was using a cheap Android system from 2012 overlaid onto reality and, uh, it wasn't that great.

Considering that Google Glass was created in 2013 with hardware smaller than a phone, that's actually exactly what they made! After Google Glass got canned, all of the talented engineers went to different projects. I assume their enterprise version is basically the same 2013 Glass.


Judging any bleeding-edge tech by its first prototype is not particularly useful. The point is not what Google Glass was, but what it could've become with steady improvements.

VR cannot replace AR anytime soon due to some obvious safety issues. At the moment they are, and will remain, separate things IMO.


You're looking for a Hololens 2 like device, maybe HL3 will get there. HL2 is quite good imho, I've browsed HN while drinking coffee on the balcony. It occludes based on the environment (i.e. you can't see objects through walls)

To get this you need outward facing cameras, which people will balk at for privacy.


Honestly, I think the bulk of the benefit of Google Glass can be had with a smartwatch. It's not like Glass was able to do any AR or scene detection.


That’s not true at all, off the top of my head there was:

-WordLens, which could translate and overlay text in different languages

-Star Chart, which overlayed the name of the star/planet/constellation you were looking at

-Don’t remember the name of the app, but it would tell you the color hex code of whatever you focused the lens on

Several others that I can’t think of. I agree that a smart watch overlaps in terms of notifications but the glass absolutely did AR and scene detection that put it on another level.


The display did not augment reality; it was simply a small display in your view. It also did not spatially scan, though it did know your rough location and where north was.

It did things a smartphone could do, but much worse. Its main advantage over a phone was that it was always on and didn't need a hand to hold it, which are key benefits of a smartwatch.


Ok but if you are looking through that small display and you are seeing digital labels placed on real items that you are looking at, how is that not augmented reality?

Like I said, there’s definitely overlap but you’re way underselling some key features. Smartwatch also can’t take a photo of whatever you’re looking at simply by winking.


>how is that not augmented reality

Again, if that's how you want to define augmented reality then we already did it much better on phones.

A smart watch achieves the goal of a passive HUD better than Glass did.

Whats left is a head mounted camera and I don't find that very compelling. The tradeoff between wearing glasses and having a much worse camera vs not having to use your hands to shoot a photo was not good enough in that device.

I mean, you're free to like it. But you have to admit that it's not just me who thought Glass did not have enough compelling features. Hasn't the market spoken for Glass?


I wouldn’t say the market has spoken for glass. It was underpowered, expensive, and ahead of its time (not to mention stigmatized socially). Kinda like saying the market had spoken for the iPad because the Newton wasn’t successful.

I agree that a smartwatch is a better passive HUD (although it is not really “heads up”...), but there was certainly more functionality in the glass than passive HUD + head mounted camera.

I’m curious if you bought one?


>I’m curious if you bought one?

Indeed I did.


Yes, it'd be great if they could somehow solve all the engineering problems with AR headsets and bring something usable to market. It'd also be great if they could make batteries that are 10 times as energy dense and processors that don't produce waste heat.

They can make an amazing VR headset right now, a comparable AR headset is many many years if not decades away.


Google Glass and some other products do this already. Although products like Glass and HoloLens (and Apple's new headset, apparently) are all targeted at enterprise, not consumers, there's nothing stopping individuals from buying them.


I myself heard those would be explicitly VR glasses, though transparent, but without any "mixed reality" type fluff.


With 8K in each eye and passthrough cameras, these will probably feel like glasses.


That would be a step up from the current state of the art: https://varjo.com/products/xr-3/

Focus area: 70 PPD uOLED, 1920 x 1920 px per eye

Peripheral area: 30 PPD LCD, 2880 x 2720 px per eye

Field of view: 115°

Price: €5495 + then some


I can’t wait for a truly usable VR headset. I’ve tried them all, and they’re all pretty terrible in terms of resolution, screen door, etc.

The content, and non-visual tech is all there. Everything is primed. Now we just need a headset that has proper resolution. I’m hoping Apple gets this right, clearly they understand the true issue with VR by focusing on resolution.


The HP Reverb G2 has exceptional resolution that basically eliminates screen door effect.

I think the bigger challenge is not screen resolution but lenses. The G2 has quite a narrow sweet spot, so although the centre of your vision is basically monitor quality and text looks perfect, the quality drops off towards the edges. So it doesn't really work for reading, where you want to scan left to right with eye movement. It does work well for simulations, where moving your head is a more natural way to look around.


I wouldn't say terrible, just like I wouldn't say the gameboy advance had terrible resolution. Sure, it does now, but it'll certainly get better with time and it's hard for me to think that it's the primary issue with VR.

I suppose it's the issue if your expectation of VR is resolution-sensitive. But good VR apps will just treat that as a limitation and factor it in. GBA apps of the time were pixel-art, and resolution was never its "main problem" (batteries were), because you weren't using it to watch movies or something.

IMO a lot of the true VR issues have been fixed by now. Pass-through, standalone wireless, baseless hand tracking, no more headaches, etc… A lot of the work that remains is incremental improvements (such as resolution, battery life, FOV, weight, tracking accuracy). I would personally like to see some very-low-latency wireless earbud support, too. Once again the Quest 2's over-the-ear system is great.

The most severe issues I have with VR right now are very much not software/hardware related, but rather have to do with sweating: Band getting dirty, lenses fogging up. The fact a lot of VR apps are fitness based makes this a serious problem, at least for me.

The real "true issue" of VR currently is probably the lack of apps. There's fairly few killer apps, especially on the much more restrictive Quest line.

But I agree with you there's a huge potential for "retina" VR.


Perhaps it’s because I play games I expect to be more realistic in nature. When I complain about resolution I’m not talking about beat saber.

I never really feel fully submerged in VR. I blame that on a few things:

1. Resolution. This is the big one for me.

2. Headset. The weight, fogging up (like you mentioned), the fact that I can still tell it’s there.

3. Sound (you mentioned part of this). Sound is important for full submerging.


Hm, I guess I'm just not looking for immersion. Like, sure, it'd be super cool to be fully immersed but it's not really my priority. And advance after advance probably won't actually get me there, short of a holodeck.

PS to help with fogging up, if it bothers you as much as it did me, i found that lightly heating the lenses with a blow dryer for like five seconds lets me play without fog for nearly an hour.


Sooo looking forward to being able to live fully in a virtual world. Virtual cash, virtual socially distanced "relationships" and friends, a virtual body so I don't have to work out, and Neuralink attached to my brain (or whatever's left of that) so other people can make decisions for me. What a relief it's all coming together


>Sooo looking forward to being able to live fully in a virtual world. Virtual cash, virtual socially distanced "relationships" and friends, a virtual body so I don't have to work out

A surprising aspect of VR is that this is actually the complete opposite: your real body matters a lot. VR is exhausting. 10 minutes of Thrill of the Fight and I'm physically incapable of continuing. An hour of Rec Room or Echo VR and I have to stop from the sweat buildup. Even hanging out and just talking in VRChat is so much more physical (standing, actively interacting with both arms, potentially full-body tracking) than sitting at a desk during the day job. It's more comparable to being at a conference, where at the end of the day you really feel all the standing around and slow walking, since in total it was more than an average day.


This is super sci-fi, but I'm wondering if we'll get anywhere near incubating a brain (or simulating existing brains) so that it doesn't have an aging body to deteriorate and render it useless (of course, this wouldn't solve other brain-damaging diseases). A virtual environment would be used to simulate the inputs and record the outputs so that you effectively live longer/forever (perhaps you could choose to terminate at any time).

This is basically the plot of San Junipero but with the vast resources going into ML, neuroscience, and anti-aging, it's not hard to imagine this being a possibility within the next 100 years.


It's an interesting premise to look at where this tech will take us. Decades ago social media was seen as a technology to bring people together without borders to share information, in a possible new utopia. Even though elements of that have occurred, we have seen that it has also caused a myriad of mental health and well-being issues by hooking into people's reward systems, where likes become dopamine hits and hours of each day are spent endlessly scrolling through feeds. In a way this tech concerns me, in that it's another enticing route to retreat from the natural world.


Same as television did?


> The headset (which the report says is codenamed N301) will be able to display rich 3D graphics at that resolution, the report says, thanks to an ultrafast M1 chip successor and because Apple will liberally use an already-known VR technique that involves using eye-tracking to render objects in the user's periphery at a lower fidelity than what the user is focusing on.

That would be superb. I could consider paying $3k for the dual-8K headset if I can skip buying a separate gaming PC for running a flight simulator. (Not to mention that a 2x8K-capable rig would cost a LOT!)


I bet this uses DLSS to get the resolution. I see no one talking about it but why push a high bitrate when you can just upscale with AI.


DLSS is a NVIDIA technology. Apple and NVIDIA don't quite like each other.


They could be using "DLSS" to refer to any sort of highly capable resolution improving software, not nVidia's specifically.


So Apple (known maker of Galaxy phones) is going to be using DLSS for their Oculus Quest?


Ooooh, I see. You’re choosing to give people a hard time and nitpick, rather than respond to their intent. Good choice :)


This is like those guys who are like

"No, you can't SLI two of those cards"

"Oh, I guess I should just get Nvidia then"

"Of course you could Crossfire them"

wat

Nice one. Hope you're proud.


All of the Apple rumors from 'reliable' sources recently have pointed to the first round of hardware being specifically a) intentionally expensive and b) a developer kit, not a consumer product. While $3k and a pair of 8k displays seems a bit extreme, it would fit the current narrative.


A lot of comments here are talking about sharing a virtual office or working in VR. No one seems to note that if you cover your face with a VR headset, no one can see your facial expressions. The article says "more than a dozen cameras".

What if this is actually a helmet? Apple has shown off the FaceID dot-matrix projector and capture camera before, IIRC. It would be possible to capture facial expressions, and even depict the user reasonably accurately in the virtual space. I know trying to represent human facial expressions in 3D is likely a doomed attempt, but it is nice to imagine that Apple might be thinking a bit ahead.


Oculus has already put in a ton of research effort into replicating facial expressions in VR. You don't need a full helmet for that, but you _do_ need at least a few cameras pointed towards your face: https://uploadvr.com/frl-multiview-face-tracking/

There's a full lecture on the subject here: https://www.youtube.com/watch?v=gpdX9jkhv2U


That technology is being developed... but I can also tell you from experience playing multiplayer VR games, you can convey a lot of emotion simply using hand motions.


Well, if true, that would be good news. I can't wait for more R&D to be pushed into VR so that it becomes more than a toy. And especially from Apple, who knows how to release hardware that works.


types on broken MacBook keyboard

Jokes aside, I agree - I think they're taking their time with this and it might bring something interesting to the scene.


Jony is gone. We can expect stuff to actually work now rather than just get thinner and shittier.


> And especially Apple who knows how to release hardware that works

All the "serious" HMDs on the market today work just fine - Valve have put some serious engineering into their tracking technology, for example.


If this is happening, it will be for creatives. Apple's first customer base, and its focus for innovation and new products, has always been creative people: designers, video editors, 3D modelers, game artists, architects, etc. A lightweight VR tool, fully integrated with the macOS ecosystem, would present a new way of working. I don't think Apple will release a standard VR product into an already troubled industry without a properly innovative approach.


If anything, what I’m mostly looking forward to here is Mac support for VR. If I were able to use the current Quest 2 with Link on a Mac, that would make me very happy.


I think the visual quality of current high-end VR headsets is limited by their optical systems. 2K/eye has already reached its limit, even at the center of the view (Pimax released a 4K/eye headset, but it has a wider FOV).

When I first read the report, I thought 8K/eye was nonsense, because it's a physical limitation rather than a technical problem. Isn't that true?


Productivity strongly correlates with screen size. An immersive VR/AR experience would replace all large displays, and could allow a complete redesign of human-computer interaction. No more trying to fit windows on a screen.

I think the Apple AirPods Max is an experiment to test the structure. That's why they are so heavy.


No one I know can stand using a VR headset for more than a few hours. The headset gets sweaty and I get dizzy. I don't think this is getting solved soon. And I don't see it as a significant upgrade over 3 monitors with an i3 anyway.


I somehow read it as per eye pricing and instantly went to "wow apple is getting really stingy these days".


That is quite cheap compared to competing professional XR headsets.

Varjo XR-3: 8,369 USD

Canon MREAL S1: 39,000 USD


As a casual gamer who pays $2k for a TV, $3k sounds ok for something better than a tv. Sign me up.


I like the way people fixate on the hardware, not the billions of dollars the software will cost.

Will needing 8K-ready assets, and the exponential extra difficulty that entails, fix Cyberpunk 2077 and design software?


I hope they focused on the hand-tracking problem.

AR is basically unusable without proper tracking, and it felt like the most limiting thing when using the Magic Leap 1 and HoloLens 1/2.


Thank God. With Oculus being hostile towards consumers, this provides a much-needed option. I strongly suspect this will be the first-year pro model, and next year we'll see a consumer model. Assuming it runs on an A12 or A14 chip, existing iOS apps, i.e. Unity projects, will run out of the box. So I could imagine developers being able to get games out within a month or so.

I do wonder if Unity has some kind of top-secret team with early access to the Apple APIs (or at least a phone number to call Apple's internal tech team).


An A12, or even an A14 chip can’t push dual 8k video streams today. Maybe an A14x or M1x with way more GPU cores could pull it off, but even then I think we’re talking about another huge leap over what we’ve seen out of Apple Silicon thus far.

That said, this excites me - I wouldn’t be buying a gen 1 at 3k, but down the road I could see replacing a laptop or two with this kind of thing.


Agreed, I’ll pay $3000 just to keep Facebook from invading my (VR) life.


Don't expect this to be a consumer-focused VR headset. MR is for work, and this will be closer to a HoloLens use case than a general-purpose gaming system.


While the cultural and cognitive effects feel potentially threatening, that augmented reality could become real in my lifetime is exciting.


Priced like an enterprise product rather than a consumer product, similar to Microsoft's HoloLens.

I wonder how well that thing is selling.


The problem with Apple is that although they care about privacy, they don't deeply care about the addictiveness of their products.

30-50 years from now:

* in society, concerns about privacy will become fringe

* concern about addiction to computational worlds and deeply impaired cognition due to this addiction — initially via iPhones and ultimately via VR — will be the true concern of the era, not unlike cigarettes


> they don't deeply care about the addictiveness of their products

Apple products aren't inherently addictive. It's the applications that are.

Also they do provide Screen Time and other features to control your usage of their devices.


Off-topic: just want to say hi to people reading this comment thread in the future to see what we all thought about this product before it launched. Perhaps by then it'll become as ubiquitous as iPhone and you're all giggling at how naive we were for doubting Apple once again. waves


Which Apple computer would even have a GPU to use it?


I heard that they hit a snag a few years ago when they got into a spat with Beijing University, which controls the critical display tech.

I also heard that the iGoggles project has already been closed and relaunched 3 times.


Apple FaceTime is already the best person-to-person video experience... this could be the best person-to-scene experience, i.e. tour ancient Egypt with Apple VR.


I wouldn’t even pay half that.


$3k sounds like a reasonable price point to me.


And they will pay


They will, but the deeper question is why. People are invested in the Apple ecosystem. They have an Apple Watch and it only really works with their iPhone. They have iTunes on their Mac Pro. Apple is incredibly good at brand and interaction.

So how would a VR headset work with everything?

1) the iPhone has the best facial-tracking camera and software (better even than a desktop with a webcam and GPU)

2) like the Oculus, you should only need a phone to connect.

3) only one VR headset currently has an internal camera able to track faces and eyes. Facial-expression, iris-movement, and gaze tracking are hugely missing currently, but trivial with the iPhone.

I would have the iPhone be the base station and do the full-body tracking: "just put it on your desk". I would also put their existing face-tracking and eye-tracking tech inside the VR headset. I'd make it so the headset only needs an iPhone to connect. I'd make it cordless only.


[flagged]


Hey some of us Apple fanboys are cheap asses :)


That sounds like an oxymoron.


Why? You can be an apple fan and just not in the right bracket to afford the latest devices. I know plenty of people who love apple but they are still on iPhone 7 and 2015 macbook since they cannot afford anything newer. That's absolutely fine. It's like I love Omega watches but I can't afford a brand new one - so I have a vintage one instead since that was what I could afford.


My point still stands. I wouldn't call someone who buys a vintage Omega instead of a Casio "cheap ass".


Well you wouldn't, but then maybe you haven't met some watch snobs. To some people if you can't afford a brand new Rolex/omega and have to "settle" for a vintage one you're a cheap ass.

My point is that any group for literally any product ever will have people trying to one up each other.


It is literally an Apple Reality Distortion Field.


How are they going to handle the bandwidth for 8K in each eye? Wired, Wi-Fi, Bluetooth, freaking lasers?
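For scale, here's a back-of-the-envelope estimate, assuming uncompressed 8K UHD panels at 90 Hz and 24 bits per pixel (my assumptions; the report doesn't give refresh rate or bit depth):

```python
# Raw (uncompressed) bandwidth for dual 8K displays.
width, height = 7680, 4320   # one "8K UHD" panel
eyes, hz, bpp = 2, 90, 24    # assumed refresh rate and bit depth

bits_per_second = width * height * eyes * hz * bpp
gbps = bits_per_second / 1e9
print(f"{gbps:.1f} Gbit/s")  # prints "143.3 Gbit/s"
```

That's far beyond any wireless link (and beyond DisplayPort 2.0's ~77 Gbit/s), which is why compression, foveation, or on-device rendering would be essentially mandatory.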


On the device.


freaking lasers attached to the head because every consumer deserves a warm-UX?!


Above all else, Apple must do one very, very important thing to make this product succeed:

Do not let Robert Scoble take a photo of himself in the shower wearing this.



