
In a word: immersion.



Just to be clear, that's not doing something different.

Also, I'm skeptical that immersion matters all that much. People manage to get quite immersed in their phones, after all.


There’s not much to argue here, but thanks for making your guesses heard! I like mine, lol. Only time will tell how much people value “feeling like you’re there” while chatting with the grandparents, watching a movie, or, yes, browsing the internet. Although as this great article points out, the Vision Pro is mostly AR, so it’s more about “feeling like that is here.”


As attested by people who had the Vision Pro demo at WWDC, looking at a laptop screen afterwards feels very limiting and artificial. Your engagement is quite different when you are fully surrounded by an environment and can interact with it fluidly, versus the separation of a small, rigidly bound rectangle.


3D is known for its strong novelty effect. At least 5 times people have declared that it will change everything: the Brewster Stereoscope, the DOD purchasing 100k Viewmasters in the 40s, the 1950s wave of 3D movies, the 90s wave of VR, and the Avatar-era wave of 3D movies and TV. As you may have noticed, none of those things are more than a rounding error in the market. And that's before we get to exciting failures like Magic Leap, which people were similarly excited about.

And I'll again point you to phones. Despite tablets, laptops, and ever-larger TVs, people spend an awful lot of time on those small, rigidly bound rectangles.


I don't need immersion when looking at a website.

The actual potential "killer app" is something like a holodeck: a movie that puts you inside it, so there isn't any screen at all.

The problem is that the only part of their $3500 device that is better for this than an Oculus Quest is the screen resolution. All of that other stuff makes operating menus and AR aspects better, things that are useful for tasks you can already do on a device 1/4 the price.


> All of that other stuff makes operating menus and AR aspects better, things that are useful for tasks you can already do on a device 1/4 the price.

As a Quest Pro owner, that is exactly what a headset needs. Passthrough on the Quest is a blurry, noisy mess that feels like looking through two original iPhone cameras attached to your eyes. The VR part is already great, and you get that “for free” if your screens/cameras can do an impressive AR mode. I can work in VR mode on the Quest, but the AR mode gets way too distracting when you can’t read a phone notification that pops up on the device in your peripheral vision.

The screens aren’t the only upgrade, though: hand tracking on the Quest is currently mediocre, and simple hand gestures often have to be repeated because they aren’t recognized. Eye gaze + small gestures on the Vision Pro are the “pinch to zoom” moment of the VR space.


I'm a Quest 2 owner, so I'm used to even worse passthrough. The thing is, though, that I don't mind, because I don't really have any _use_ for passthrough beyond moving a bit of furniture or finding the controllers. Being able to read off my phone screen while wearing the headset isn't a concern; if I'm taking a break from the headset for a minute or two, I'll just take it off.

Hand tracking and eye gestures are really nice to have, for sure, and they make navigating menus feel like actual magic, but again, the vast, vast majority of what I'm doing when I'm wearing a headset is not navigating menus. This is the point I was making: I don't highly value navigation, because if there IS an application in that style I'm probably just going to use it on a phone or PC. Menus are not immersive and gain nothing from existing in 3D space.

If I want to move around in a virtual space that's larger than the physical space I'm in, I essentially need a controller for smooth movement, unless I'm going to be making constant hand gestures.


I agree with you on websites. But I'm skeptical of this:

> The actual potential "killer app" is something like a holodeck: a movie that it puts you inside of so there isn't any screen at all.

We'll see how this turns out, as Hollywood is doing a lot of VR experimentation. But most modern movies will not translate at all to VR, because they control attention and aesthetic experience very precisely, all based on the constraint of the viewing frame.

I think the experiences compatible with a user looking around as a facehugger display allows will have to be much more like dioramas. They may be very interesting, but they won't be movies.


We've seen this with 3D films, where many subtle aspects of cinematography (e.g. depth of field) don't directly translate over. But I think it'll be infinitely worse for "VR" films. Even setting aside basic questions like what the viewer should be looking at, one of the most basic tools of film-making, cutting from one scene to the next, could be jarring in a way that it isn't for a rectangular porthole.


For sure. One of the big questions for me is "what does the viewer understand?" So much of modern film is very quick. There's low redundancy. So if you miss a look or a motion, you're lost. If you can't be sure where the viewer was looking, you create a lot of uncertainty about what they know. So look-anywhere entertainment will have to have a much lower information rate, much higher redundancy, or allow the viewer to control the pacing. That sounds a lot more like Myst than John Wick.



