Notes on Vision Pro (andymatuschak.org)
577 points by firloop on June 6, 2023 | hide | past | favorite | 688 comments



I’ve used quite a few Apple products every day for the last 10 years, and I’m certainly very impressed with all the technology, reviews, and explanations of how this device works and what the user experience will be. However, this is the first Apple device that really makes me pause. All of the marketing material, the WWDC videos, all of it feels very uncomfortable: seeing all these people isolating themselves in a room with big goggles on. I find it really hard to comprehend that this is the direction technology is taking our lives. Imagine walking into a house where a lot of the people are just sitting around by themselves in corners with goggles on. The whole thing just feels very strange and post-apocalyptic to me.

And these 3D spatial moment recordings: imagine children growing up in a house where, when something nice happens, the parent rushes to put on goggles and stare at them through them, their little virtual eyes displayed on the outside. It’s frankly creepy to me.


You're taking a backwards-looking view of this.

Imagine if ancient man, living in the woods, saw people of today living in cities. "It feels very uncomfortable seeing all of these people isolating themselves inside buildings".

In fact the very opposite is the case. Now, we go into nature to isolate ourselves.

In the future, going into virtual reality will be where you go to interact with others, and you'll take the goggles off to isolate yourself.

There are enormous advantages to living (as much as possible of) your life in VR:

- your home can be arbitrarily large at zero cost and without taking any land away from anyone else

- you can change the decor whenever you want at zero cost

- you can paint walls as ornately as you like, at zero cost, immediately, without even any prep work required

- you spend zero time on travelling

- you can instantly hang out with friends in foreign countries for free without even needing a visa

As long as the technology is good enough (and please remember that qualifier, because people normally respond with an implicit assumption that the technology is not good enough) - as long as the technology is good enough, VR is strictly better than current reality.


> There are enormous advantages to living (as much as possible of) your life in VR:

> - your home can be arbitrarily large at zero cost and without taking any land away from anyone else

Except it isn't.

> - you can change the decor whenever you want at zero cost

Except you can't.

> - you can paint walls as ornately as you like, at zero cost, immediately, without even any prep work required

Except you can't.

> - you spend zero time on travelling

Except you don't.

> - you can instantly hang out with friends in foreign countries for free without even needing a visa

Except you don't.

This is an incredibly dystopian view.

Let's not kid ourselves that VR will not be monetized to the breaking point just like any other platform. Enshittification is inevitable.

In other words, corporations will do what they already do, sell lies, create demand for those lies and in exchange demand more of the irreplaceable things like time, land and resources.


> Except you can't.

Some of these objections sound like saying we're not really listening to music on our phones, I guess because there's not really a band playing, or because of the lossy sound encoding or something.

In fact, we _are_ listening to music, in a way that people seem to enjoy and which has been democratized far beyond previous possibilities for listening to music.


First of all, people still go to concerts and will not stop anytime soon. Even if people enjoy music in a variety of ways, people very much still enjoy live music for being live. So it is a bad counter example.

Secondly, all of the examples above were examples of trying to avoid reality. A pathological form of escapism. Why push so hard for such a shitty future where escapism is the only option? Black Mirror is a warning, not a manual. Are you really so unimaginative that you cannot imagine a better future than the shittiest of dystopias? Also, good enough technology is not an argument. More optimistic sci-fi like Star Trek has the holodeck with a realism level of ~100%. Yet people don't just spend all of their time in it, because reality is amazing in its own way. Why bring about a future where reality is so undesirable?

Also, as I said, there is absolutely no way VR spaces will not get enshittified the same way all current platforms are. In all VR spaces you will absolutely be the product while also being milked for every single micro-transaction you can pay for.


> Secondly, all of the examples above were examples of trying to avoid reality.

1) What makes reality important? 2) What is the definition of reality?

> In all VR spaces you will absolutely be the product while also being milked for every single micro-transaction you can pay.

How does that differ from reality? Are you not paying for the walls that are presumably around you right now?


I will not dignify your questions with an answer because they are a philosophical derailment. And turning the conversation into one about solipsism and epistemological nihilism is a dead end. Related to this, the simulation hypothesis is nothing but repackaged theism. Many have treaded this territory and nothing of use was discovered.

Besides, I was the one who first asked the much more pragmatic question of why some people consider such a dystopian future to be desirable.


You pose these questions, and then when asked to clarify, claim derailment?

If you think reality is important, then it's upon you to define reality and why you think it's important.


the answer is "the law of cause and effect" and "the problems"


More like "we're not really listening to music on our phones for free". These days everything is a subscription. Then games are released where every cosmetic change is paid for, so you can be max level with the best gear but you'll still look like a beginner character because you haven't paid for the cosmetic upgrades to go along with the extra gear you won along the way.

Change the wallpaper in your virtual apartment, huh? You'll absolutely be paying for that. Some people will save money by just coping with the walls being non-stop ads for anti-depressants and pills for erectile dysfunction. Perhaps they'll take their headset off and do everything through command lines. Pity if the headset eventually evolves into implants that pump images and sounds directly into your brain.


Well I don’t know about you, but if I want a free app to change my wallpaper and there isn’t one I’ll just write one. It’s totally a choice to pay.


Talking about wallpapers is derailing the conversation. The point was that just as we see the social networks and games of today getting enshittified, so will every VR experience get enshittified.

As for writing it yourself, care to talk about the price of the developer account, the approval process, the risk of getting banned, the ever-changing API surface, the time you spend on all of this, etc. etc.? You are NOT the owner of anything in the "metaverse". You are just allowed the privilege of playing in the platform owner's sandbox as long as you pay. When the platform owner decides it is profitable to exploit users' ability to set wallpapers, there is nothing you as a user, or as a developer on top of the platform, will be able to do to counter that.

Here, read this (1) and watch this (2) and tell me you feel confident the same will not happen to any "metaverse".

(1) https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys

(2) https://www.youtube.com/watch?v=g16heGLKlTA


Your app does not need to get approved if you are the only one using it - in fact you don’t even need to have a developer account if you’re the only one using it and are ok resigning the app occasionally.

With regards to enshittification: all the examples listed were software companies or service companies. I don’t see any hardware companies on there, and I’d go so far as to say hardware companies that sell with high margins do not undergo enshittification. From the article: “… they abuse their users to make things better for their business customers …” Apple doesn’t have business customers. Their core business strategy is not to sell ads, or data, but rather hardware.


Your first paragraph misses the point. The argument was not about the current rules of any current platform. It was about the potential for abuse on future VR/AR platforms by platform owners. The platform owners are the rule makers and can change the rules at any point. Others platforms like the gaming consoles are proof the rules can be a lot tighter.

Apple is in no way excluded from enshittification.

Apple refusing to support more than one external monitor on non-Pro or non-Max M* laptops is an example in hardware by virtue of being intentional market segmentation.

On the software side, ads are slowly encroaching on previously ad-free spaces in Apple software too. Apple is also a services company, providing everything from an ads platform to app marketplaces to media to payment services to banking services, and leveraging its position in anticompetitive ways. Its platforms are subject to the same pressures of enshittification.

The part you quoted is just one of the steps. The strategy for a platform is to act as an indispensable middleman and abuse everyone both upstream and downstream.


You are not looking far enough ahead.

In the future people will be in their pods, only connected within virtual reality and everything will feel as if it was real.

This allows for much more optimal space and resources management.


Either you left off the "/s" or you're missing their point. Who will be doing the space and resource management?

This is not a future where individual consumers have control. This is a future where corporations give consumers enough of an illusion of control that they don't notice how much they've actually given up.


> Who will be doing the space and resource management?

AI, obviously; they are far better and more meticulous planners than humans. AI will evolve to maximise shareholder value at all costs.


There is no guaranteed path for the future. Humanity makes the future it will live in. So I am just looking ahead in a different direction from you.

I find it hard to not throw a plethora of insults at you for the vision you endorsed. The Matrix was not an instruction manual, it was a warning.

Why do you want that to be the future?


In the future it won't be possible to have a life as we know it now; that will only be possible in virtual realities. The scenarios that are normal and natural for people can only be offered in those pods.

You might not even be able to tell any difference when you are in a pod. In fact, you could be in a pod right this very moment, and you could be the main player, with all scenarios created and catered specifically to you, throwing various challenges at you which might feel stressful, but which you'll be able to surmount. These scenarios give meaning to your life.


As I expected you do not want to engage in actual dialogue and answer the question I asked. You are just posturing that it is inevitable and therefore do not want to answer why you favour this dystopia to the detriment of any other possible future.

Moreover you are also steering this towards the simulation hypothesis which is a dead end for a debate.

I will no longer entertain this charade.


I haven't said I favour that. It does seem inevitable however. How would you expect humans to end up? In 10 years? In 50 years? In 200 years?

People are already out of their natural habitats, and they will be even more so. Is there going to be some sort of magical future moment where people achieve what they wanted to achieve and will be happy?

How could that happen if the environment is going to change completely? What once rewarded humans may not in the future, since there won't be the kinds of events that shaped humans throughout evolution.

People are already addicted to smartphones; people are depressed and have issues with their mental states. Despite the world being more comfortable and safer to navigate than ever before, people are overall not necessarily happier.


Being in a permanent state of happiness doesn't exist for humans and it shouldn't.


It doesn't right now, but it's debatable whether it should or should not.

E.g. if you were able to produce a method or substance that induces heroin-like effects in people without tolerance build-up, should people be allowed to consume it?


I admire your restraint in engaging with GP -- kudos!

FWIW, it looks (to me) like "the world's most comfortable jail for your face".

That you cannot even peer out of this jail, but instead have an ersatz rendered version of you depicted on the outside, is even worse.

Our trajectory of pursuing a frictionless world, at any cost, is going to be our doom.


I find value in keeping HN a civilized online space.


Asimov kind-of wrote about this phenomenon in 1956's "The Naked Sun", the second of his Elijah Bailey series. I won't give plot spoilers, but the main connection is that a group of people on another planet have grown accustomed to virtual holographic "viewing" being the main mode of interacting with other people, while physically being in their presence is deemed unwholesome, embarrassing, or otherwise uncouth, causing people to go to great extents to avoid it.

I don't know that Vision Pro is necessarily taking us on that sort of path; I'll probably hold my judgment until they've proliferated enough that I actually know someone who has one. But your point that the virtual world is becoming where we go to interact with other people while physical spaces are used for individual isolation is a good one, and my contribution here is to tie it to a book that is over 65 years old to show that it's not even a particularly new idea.


"while physically being in their presence is deemed unwholesome, embarrassing, or otherwise uncouth, causing people to go to great extents to avoid it."

Reminds me of the sex scene in Demolition Man.


Also Philip K. Dick in _Do Androids Dream of Electric Sheep_


I would argue that the ancient man living in the woods is absolutely right about cities—a person in a concrete jungle is dangerously isolated from reality. We know the negative effects of too little sunlight, we know the positive effects of exposure to nature. We know the negative environmental effects of the infrastructure that makes the concrete jungle possible, but those effects are invisible to the resident of the jungle.

Also, it's pretty clear that cities are too large for our social apparatus. The ancient man living in a tribe has a deeper personal connection with his fellow villagers than anything we develop in a city, much less in the worldwide city we call the internet.

Given all that, I think it's reasonable to take a backwards-looking approach to this. Evolution happens on the time scale of millennia, and our biology has had only a few hundred years to adjust to the modern city. Rushing headlong along the same path into full, always-connected VR is hazardous.


> Also, it's pretty clear that cities are too large for our social apparatus

Citation needed? Humans have been living in cities for thousands of years.


Taking the Roman Empire as an example: Rome was by far the largest city in the empire at a total of ~1 million people. The others were estimated to have 500 thousand or fewer, with only 25-30% of the population living in a city at all.

Modern cities are enormous by comparison and our urbanization rate is completely flipped. 80% of people in the US live in a city, and we have 50 metro areas with a higher population than Rome had during the empire. Our largest metro area is New York/Newark/Jersey City at 20 million people, 20x that of imperial Rome.

And remember that Rome itself was an anomaly in its day, and the Roman Empire in general was an anomaly in European history.


At an extraordinary cost to human health. Sewage problems spread disease. Malnourishment due to only eating bread. The average height of a Paleolithic man far exceeded the height of a Neolithic man.


This is Berkson's paradox - Neolithic men may have been less healthy, but that could be because the hunter-gatherers died or stopped having children when they ran out of food rather than living off bread.


Cities of the past are nothing like those of today, nor is the lifestyle in them.


> In the future, going into virtual reality will be where you go to interact with others, and you'll take the goggles off to isolate yourself.

This sounds deeply shitty.


With that framing, I agree. However, I don’t think it needs to be. I’m hoping it’s more that virtual reality is where to go to interact professionally and you’ll take off the goggles for your personal time.

If, instead of needing to live in close proximity to work opportunities, often in an expensive and crowded city with a long commute, people could live wherever they want and commute virtually, that would be a huge positive in my book. That is already possible for some professions, such as software development. But good VR/AR will open the door to other professions that still require in person collaboration today and will improve those that are already remote capable. Then, at the end of the day, you take off your headset and live your life wherever/however you choose.


Yeah I agree that work is by far the most optimistic use case here. Still though, I work remotely, and it kinda sucks, for exactly this same reason. It is certainly a huge advantage to be able to live anywhere and not be tied to any specific location to get work done, but there are huge disadvantages too. I haven't personally figured out what I think the right way to square the circle is, but I think it might be more like the trend toward 2-3 hybrid work weeks rather than total isolated remote. At least for most people.


We're already texting far more often than talking, in person or on the phone. Perhaps you're used to that and don't feel strange, but I bet there are people who consider that shitty.


Yes, that is also strange and shitty. (But actually no, I talk in person a lot more than I text. I have to make an effort to see people that way, but I think it's worth the effort.)


We'll make cars so cheap only the rich can afford horses. We'll make air travel so cheap, only the rich can afford travel by ship. We'll make AR/VR so compelling only the rich can afford travel.


> people normally respond with an implicit assumption that the technology is not good enough

Because it's not good enough and there's no real plausible means by which it will be good enough.

I mean this was a core argument in the Matrix, and many agreed that it wasn't worth doing even when the virtual reality was absolutely flawless and a land of limitless bounty.

And there's no tech in progress that has a path to that kind of immersive experience. Even if that kind of thing is possible, which I have doubts about, my great-grandchildren will be long dead by the time it shows up.

You're not going to hang out with friends in foreign countries for free; you're going to be isolated in your house with a piece of plastic on your face.


We're hanging out with friends in foreign countries for free right now on this website. Many people have friends they only communicate with via text, some only with voice, others with video, and others physically.


No we’re not; we are staring at a glowing screen and typing stuff.


So what's the difference between standing next to someone talking to them, and typing to them?



Wow... I have to completely disagree with you, and I feel quite sorry that's the way you feel about your life. You sound similar to the Japanese hikikomori men that I read about.

1. People do not really go into nature to "isolate" themselves. As you said, you can do that in your own house. There's a plethora of reasons nature is good for you.

2. Everything else you're listing as "advantages" are lesser equivalents of what can be done in real life already.

3. Hanging out with friends virtually is nowhere near the same emotionally or physically. This has been scientifically proven. You NEED to be talking to people in person and interacting with them. Otherwise, you will be emotionally stunted.


I hike for days to get to parts of the wilderness that no one else is in. I do not go to nature to hear or see other people. GP is spot on with that.

I'm very normal and sociable and well adjusted. But our relationship with our surroundings and each other is changed due to technology. And will continue to evolve.


I meant to say "just to isolate themselves". Yes that's sometimes part of the reason but typically it's not the only reason, as you can isolate yourself inside your own home quite easily.

It will evolve into a mess of mental illness and likely the end of our race, in my opinion, if that's the way we go. We can already see this happening. Fewer people are reproducing, more people are mentally ill than ever, a lot of people see the human race as a blight on the planet, etc. We will not evolve; we'll just die out or become something so far removed from what we are that it won't be recognizable. It's obvious that digital communication doesn't meet our emotional and physical needs.

Also, did you ever think about the reason why you're "isolating" in nature? It's likely to get away from the modern technical world that's stripping you of something. Sort of ironic in regard to your stance on wholeheartedly accepting every facet of VR into our daily lives.


> Imagine if ancient man, living in the woods, saw people of today living in cities. "It feels very uncomfortable seeing all of these people isolating themselves inside buildings".

Forest man is on to something. Can I live in his family’s hovel and keep my antibiotics? Thanks.


Chances are you won’t ever need those pills provided that you survive beyond age 5.


If this is the future I do not want to live in it. No VR is ever going to match the actual experience of travelling to a place unlike any you've ever been. No emulated persona is ever going to match looking into a loved one's eyes. I can see the benefits of AR/VR for productivity or for niche consumption, but it could be 'strictly better than current reality' only with the narrowest of cherrypicked definitions. No thanks.


While I agree with your sentiment, if VR becomes a 99.999999999% match it’ll change human existence whether we personally like it or not.


Wait until neuralink becomes a real thing.

Do I want it? Of course not.


In the future I'm sure there will be ways to manipulate/drug your nerves and emotions to make you want to live in that world and enjoy it.


WHYYYY?

You are approaching the territory of describing enslavement in your description.


>Imagine if ancient man, living in the woods, saw people of today living in cities. "It feels very uncomfortable seeing all of these people isolating themselves inside buildings".

A city is literally millions of people all wanting to live in the same place to be close to other people. I couldn't wait to move from the countryside to a city in order to be near more people like myself.

Communications technology is great, it helps me connect with even more people with common interests, as right now, and with my family while we are apart. It is in no way shape or form a substitute for being physically with people. There's something primally satisfying and emotionally anchoring about actually being together. That's why online discussions can so easily escalate into verbal knife fights and abuse. It's emotionally disconnected and I suspect the same will be true of VR.


You forgot the /s (I hope).


No, this idea was stupid when Zuckerberg promoted it, and it's still stupid with Cook promoting it. VR goggles are a dead end. Invisible VR or AR has lots of applications, but the technology is years away.


You're proving his point about the dystopia.


I can't tell if you're trolling or not...


You omitted my favorite use case: reducing commuting and business travel.


Your comment is such a great bait :D As much as things could be really cool in VR, the simplest argument here is that as a human you can do much more than just look at things and listen to things...


Weren’t they trying to sell high-priced virtual real estate in the metaverse next to Snoop Dogg? Would decor not be monetized just like skins and items in gaming?


I wonder what impacts this might have on the mental and physical health of people.


As long as the technology is good enough...but it's not, so quit shilling


the mental gymnastics people are using to justify dropping $3500 on this is amazing


I don't know if you've ever seen Serial Experiments Lain. In it, the protagonist slowly descends into an underground of hackers while finding herself isolated in a room with all sorts of mid-90s-aesthetic CRT monitors, servers, tubes, and wires. And that is more what my current office looks like, as opposed to someone hanging out on a couch wearing this headset. If the user interface is to be believed, you can in theory do real work with just your eyes and your voice to dictate. It got me thinking. What is really a dystopia? What if our current keyboard-and-mouse desktop setups, laptops, and displays are the real dystopia?


So I just watched Serial Experiments Lain last weekend, and the wires are a strong aesthetic in the show, but that is not what the show is actually about. The horror at the heart of the show is about depersonalization caused by constantly being Connected to the subconscious of other people. It's a show about the fears of a constantly connected world brought on by the internet, not the user interface of that connection.


The dystopia is corporate control and the lack of freedom to hack our device as we wish.


Notably that show features a gargoyle who seems very proud to be walking the streets with all their cool hardware.

With the Vision Pro, Apple is going to let us become the prettiest gargoyle at the ball.


there's a scene in Lain where a junkie is wandering around, wearing chunky AR goggles, haptic gloves and a backpack full of computer, festooned with cables, gibbering euphorically about how wired in he feels.

so the show portrays AR - at least in that setting - as almost degenerate.

of course, Lain has an unfair advantage in not really needing any kind of interface to live in the Wired.


I think the (standing-)desktop setup with keyboard, mouse, and monitor(s) is already close to the ergonomic optimum for any kind of work that is heavily text based (from Excel sheets to technical and prose writing to coding to email and chat, and forums like here), despite the ergonomic shortcomings that remain. (The one exception being pure reading or markup tasks, where tablet-style devices are great.) Our bodies haven’t really evolved for written communication, but it’s nevertheless still the most effective in many ways, given how our hands and eyes work. I don’t think there is any dystopia in it. Rather, we should embrace it for the many purposes it is effective for, and work to improve it within its paradigm, rather than chasing a fantasy of replacing it with somehow more intuitive or natural means, which generally turn out to only be good for casual or low-text use.


> What if our current keyboard and mouse desktop setups, laptops, and displays are the real dystopia?

I'm not following. Why would that be the dystopia?


same antisocial outcome but shittier subjective experience


Shittier in that it’s real rather than virtual?


Shittier in that my partner's hands and sometimes back are wrecked from a relatively short time using them.


I’m not sure wearing headsets all day and air typing is going to be a lot better ergonomically.


You should look into how the Vision Pro works; it actually doesn't include air typing. You can use anything from a traditional keyboard to a small one in your lap; if you want to use a virtual keyboard, you're not lifting your arms to select letters, and most text entry is voice because STT is good enough now.


> In it the protagonist slowly descends into an underground of hackers while finding herself isolated in a room with all sorts of mid 90s aesthetic crt monitors, servers, tubes and wires.

My Meta account name is “regretbuy” (IIRC), but I love the menu scene in Superhot because it looks so similar to your description.


I'm hoping for more of a Dennou Coil future myself.


Thank you for bringing Serial Experiments Lain back into my recent memory.


Literally laughed out loud at this comment :D


Lain turned out to be an AI iirc


It'll become accepted I'm sure, if it breaks through; could anyone have imagined walking into a living room 30 years ago [0] and seeing everyone hunched over a phone? Or 80 years ago and seeing everyone around a television?

I mean, I get what you mean, but I also think that some things get normalized in subsequent generations. Our 15-year-old and his gf have their phones out at all times, frequently switching between interacting, glancing at random videos, messaging, games, etc. Unthinkable 20 years ago, but for them it’s Tuesday.

That said, VR headsets have been a thing for a while now, at all price ranges; Google Cardboard and clones would make any smartphone into a rudimentary 3d headset, but as far as I know nobody normalized that into their daily life to e.g. watch videos. Head-mounted DVD players were a thing as well for a while I think, but again, never became normalized. Google Glass was the first 'big' AR system, and it was hated for being so obvious and its wearers being so blatant in glancing one way and having a camera aimed at you (which might have been recording) at all times; it might be interesting to see what today's iterations, Snapchat's Spectacles [1], Facebook / Ray Ban's Stories [2], Bose Frames or Amazon Echo Frames will do, I think they'll be much more accepted because they're not as obvious.

[0] I had to edit that one a few times, had to think about when mobile phones became daily use items

[1] https://www.tomsguide.com/us/snapchat-spectacles-2-0,review-...

[2] https://www.tomsguide.com/reviews/ray-ban-stories


> could anyone have imagined walking into a living room 30 years ago and seeing everyone hunched over a phone?

True, but I don't recall smartphones ever being unveiled with promotional videos of people all being hunched over them at dinner.

Smartphones were promoted by enabling us to do things on the go: making and receiving phone calls, getting directions, finding restaurants in the area. This is why this feels different: the experience that is promoted is one that feels already slightly dystopian.


The iPhone was introduced as having three features:

1. An iPod - that was already isolating people

2. A phone

3. An internet communicator


I’m confused. Are we judging a product category based on how it’s promoted, or how it’s used?

I don’t think my opinion of smartphones would be any different if early promotions had shown the downsides, or bigger upsides, or bizarre and unrealistic scenarios. Am I wrong to be anchored on how they’re actually used, which literally nobody foresaw?


I don't think OP is saying smartphones aren't dystopian in practice, just that no one set out to create the dystopia. That part of the Vision demo showed that the designers thought that this additional contribution to our dystopia was a positive thing, and that's concerning.


Precisely. And you can witness this in sibling threads. For some reason, today some people actually openly desire, or pretend to desire, the dystopia.

My suspicion is that this is related to the longtermist movement.


We're judging it based on the vision for how it will be used, as evidenced through its initial promotion.


Do you judge Viagra based on how well it reduces blood pressure?

I agree that demo was dumb and dystopian. But it's silly to latch on to one imagined usage, ignore the others, and have no interest in what the reality will be.


It would be silly to latch onto it, if it were the vision of some random fan, but it is the vision of the company creating the product, so no, it's not silly to consider it critically.


I wonder if anyone will feel brave enough to wear this on a bike...

Until the time they're arrested for using a smartphone while biking, or for relying on the computer not to malfunction just to avoid riding blind.


Or driving. It would be interesting to have directions overlaid on the view of the road. The exit I need to take could be outlined and highlighted, for example.


Until it crashes, and then you crash.

It's not proper AR; it's VR with passthrough, which means it's purely active. If it fails, it goes black.


I can’t imagine putting a CPU as complicated as the M2 in between my eyeballs and the road. It is too big and complicated.

I guess we don’t really know what the R1 does. It is hypothetically possible that there’s a failsafe road-to-eyeball path that just goes through that chip; maybe that could be made as reliable as various other critical computers…


It's not as failsafe as glass.

I guarantee you Mercedes or Lamborghini would _love_ to have a fully enclosed interior with screens instead of windows, but it's a critical safety feature and anything that makes it less likely to work is just not happening.


We’re basically converging on fully enclosed interiors anyway, just without the VR headset (the pillars for the windows are ridiculous nowadays). Maybe the argument could be made that it’d be safer for everyone (especially pedestrians waiting at crosswalks) if drivers had a headset system.

Getting rid of the windows and making the headset totally necessary seems like a riskier proposition though. For the Apple thing on a normal car, the backup option of taking off the headset is always there (assuming sufficient warning can be given before a failure, which is a big assumption, but should be doable).

Drive-by-wire systems exist, so it isn’t as if replacing some normally mechanical steps with electronics is impossible. The electronics just need to be made sufficiently reliable.


Wasn’t there a vampire movie maybe fifteen years ago, where the vampires drove fully enclosed cars via camera during the day? Daybreakers?


There is a separate processing path for the video passthrough that is supposed to be distinct from the OS. If one crashes, the other shouldn’t.

Yes, I know the difference between in theory and in practice.
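For intuition, the general pattern for that kind of isolation is a heartbeat watchdog: the critical path runs independently and merely observes the other side's liveness. A toy sketch in Python (purely illustrative; nothing here reflects Apple's actual R1/M2 design):

```python
import threading, time

heartbeat = {"t": time.monotonic()}  # last sign of life from the "OS" side

def os_side():
    """Simulates the app/OS processor: beats for a while, then 'crashes'."""
    for _ in range(5):
        heartbeat["t"] = time.monotonic()
        time.sleep(0.01)
    # thread exits here: no more heartbeats, as if the OS hung

def passthrough_side(frames, timeout=0.05):
    """Simulates the dedicated passthrough path: it keeps producing frames
    no matter what, and merely flags when the OS side has gone quiet."""
    for _ in range(20):
        stalled = (time.monotonic() - heartbeat["t"]) > timeout
        frames.append("frame (OS stalled)" if stalled else "frame")
        time.sleep(0.01)

frames = []
t1 = threading.Thread(target=os_side)
t2 = threading.Thread(target=passthrough_side, args=(frames,))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(frames))  # 20: passthrough never stopped, even after the 'crash'
```

The point of the pattern is that the critical loop never blocks on the flaky side; it only reads a timestamp. Whether the real hardware split works that cleanly is exactly the theory-vs-practice question.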


I think that's both inevitable and terrifying.


I'm thinking it might have an onboard battery that affords a few minutes after the current battery is depleted to allow hot swaps.

Not a great solution for using these in such situations, because of bugs as you say, but it might be a feature that allows people to feel so daring. Oof.


Skiing was what I was wondering about.


Recon Jet made an HMD that is sorta similar to Google Glass, but designed more like ski goggles / sunglasses. One of its use cases when it came out around ten years ago was skiing, but I never used it for that. I mainly used mine biking with its ANT+ integration.


Specialized AR goggles exist. Not for me but I guess I can understand the appeal to some.

https://ostloong.com/sirius/


> Our 15 year old and his gf have their phones out at all times, frequently switching to interacting, glancing at random videos, messaging, games, etc.

Yes, and this is a bad thing.


Was it better when we all gathered around the radio?


> gathered around

Yes, because by definition this is a shared experience. Phones are isolating because no one knows what you're looking at and it's rather hard to share the small screen even if you want to.


young people definitely gather online. people revel in twitch streams, voice chat on discord, and all sorts of online gaming, video calling, and more. these are still shared experiences, even if they are not in person.

there is a lot of isolating tech (most social media) but it's also quite cultural: yes it's global, but america e.g. is largely idealistic and materialistic and it shows.

i don't think the mode of interaction is the issue


Yeah, the question at the root of this is how much the "in person" thing matters. I think the answer turns out to be "a lot". Back when I was in college a decade and a half ago, I was super optimistic about the internet and wrote essays arguing that the answer would turn out to be "not at all". But after watching this develop for a long time now, I think it has not gone well.


Yep, like the other commenter said, it's right there in your comment: "gather".


Yes people imagined that. There's a movie about the company General Magic where you will see people filmed at the time imagining just that.


There’s been backlash to being on phones all day. iOS even guilt-trips you every week with stats. It even guilt-trips you for being too sedentary. If anyone is going to guilt-trip you for doing too much of something, it's probably Apple.


You should understand the guilt is on the recipient side (you). Here the sender (Apple) may have genuine concern for their users' health, or marketing incentives to express the same concern. Probably a mix of both.


Regardless of whether they are doing it out of genuine concern or to cover their own asses, they are reporting my usage and lack of activity precisely so that I can do something about it. If they wanted me addicted, they would do something else.


That is exactly my understanding too. Though I think "inform" is more appropriate than "guilt trip": my first reading of your comment was "Apple wants to make me feel guilty when using their products".


Or Apple just has liability concerns so it includes these options as a feature.


I’ve never felt guilt from those stats. They might make me evaluate the last week, but IMHO guilt isn’t really a helpful emotion when it comes to past (not active) behavior.


> could anyone have imagined walking into a living room 30 years ago [0] and seeing everyone hunched over a phone? Or 80 years ago and seeing everyone around a television?

True, but is it so different to be hunched over a phone than hunched over a book? You can still just look up and be present. With a headset, well… there isn’t such a parallel.

Maybe the Sony Walkman is a better comparison because finally you could go out in the world buffered from it.


> You can still just look up and be present.

AV is slightly different because it's trying to let people still be present. Whether that's enough, and whether it will work, remains to be seen.


Isn’t AR comparable to a headset the same way VR is to noise-cancelling headphones?


GP said 'AV' (not 'AR') for 'Apple Vision' I assume - which 'tries to allow people to be present' through the 'EyeSight' feature (where it reveals the wearer's eyes, becomes translucent).


Correct. Apple Vision (AV) is also AR-first, which will help people remain present in their environment. Whether it will work that way in practice remains to be seen.


Sure! And still, the Walkman only takes one sense away; AR/VR intervenes in two.


and apple just showcased how their headset DOES allow for intrusion from, and awareness of, the world around it


I was getting an economics graduate degree 20 years ago, and at that time we had economic reports describing the upward arc of mobile phones to be exactly what they became, with all the negative social issues we see today forecast with "we'll need to do something about this" repeated over and over. It was seen as the mechanism the "rest of the world" gets access to computers and the Internet.


Seeing people hunched over watching the news isn't honestly that much of a stretch: https://twitter.com/HistoryInPics/status/411376204231753728?...


This whole thing would have been much better if they’d positioned it as a work and personal entertainment machine, similar to a bunch of monitors with a Mac mini, and replacing iPads or single-viewer TVs.

By trying to show interactions with other people, it just highlights a weird isolation. What it's not pointing out is the advantage for people like me who work in a private home office where I'm already isolated: this just helps me work there better. I don't have to cram my small table with monitors any more, and when I do watch a movie or show there, I can do it on a giant screen.

Even the cameras could have been highlighted much better, either by photographing a complex machine, or an art collaboration across countries.


> This whole thing would have been much better if they’d positioned it as a work and personal entertainment machine

Honestly most of the presentation had this exact vibe for me. That the Vision Pro is directly positioned as a device for deep immersion, like movies, gaming, and engrossing work. The two exceptions I remember were those moments where a person plopped their Vision Pro on to take a 3D moment of their children and the one where a person answered a FaceTime call with the Vision Pro while packing their suitcase. I don't see those moments happening in reality with this sort of device.


> similar to a bunch of monitors with a Mac mini, and replacing iPads or single-viewer TVs.

Except the Vision Pro maps your home, visually and acoustically. I suppose it's acceptable if one already has a Roomba and an Alexa.


From a privacy point of view? Yeah, I’m OK with that if it stays on device, as is usually the case with Apple. The iPads and phones have lidar as well, no? And I’m going to be mapping my home to plan renovations etc. anyway. Much rather have everything on device.


I don't disagree; however, when was the last time you walked into a room where at least one person wasn't solely focused on their phone?

The 3D photo thing is super weird; they could release a standalone camera in the future. BUT, I think this is a v1 product: imagine where it'll be in 5 years' time, or an SE model which isn't wrap-around. I already wear glasses, so having AR glasses is no big leap.


> however, when was the last time you walked into a room where at least one person wasn't solely focused on their phone?

Do we want more of that?


Does it matter what “we” want?

If you want to use your phone less, use your phone less. Complaining about how other people use their phones “too much” is just kids-these-days.


Yes, it does matter what "we" want. This is how society and culture works, by people debating what's good and what isn't good, and deciding to regulate some of the things that really aren't good through law, and "regulate" other things that aren't good through cultural norms.

Within my circles, the "regulation" of not having everyone on their phones constantly in social settings has been slowly but surely taking root for years. And that's a good thing. I hope (and believe) the same will be true of these headsets.


Does that work for free-access heroin too? Military-grade weapons? Driving under the influence?

There is a reason we live in organised societies, with rules and limitations, and it's not so that you can do "whatever you want whenever you want".

At some point we have to decide where we want to steer the ship


> I think this is a v1 product

This is exactly what I tell myself. This is the equivalent of their five-hundred-dollar original iPhone with no App Store or even copy-and-paste functionality. I expect the SE to cost a third of this, if not a fifth.

Edit: five hundred dollar iPhone typo


A fifth? ... You mean like the $100 iPhone they're about to release?!


cognitive dissonance is real. what is the top-of-the-line retail nowadays? like $1500?


I know! ... my guess is the price will stay the same for the Pro version for a long time ... released with a lower spec slightly cheaper version ...

Then they'll keep adding features to justify the price ...

Then after a few years, the price will start creeping higher and higher.

And then, when the implanted-AI version comes out, it'll be $15,000 for Pro Pro AI Max version ...

"Literally Think Different!"

/only semi-joking ... this was basically the iPhone playbook and why wouldn't it be applied to the Vision?


> they could release a stand alone camera in future

I would prefer a lens that could adapt an existing camera to 3D photography / video. Some already exist, like:

- Canon RF 5.2mm F2.8 L Dual Fisheye: https://www.dpreview.com/news/7991481617/canon-shows-first-f...

- Loreo 3D Lens 9005: https://www.oddcameras.com/loreo_3d_lens_9005.htm


Strapping on goggles to record a family moment feels super weird to me too. I wonder if future iPhones will have the sensors required to take 3D photos.


I don't think this is something most "normal" people will do. However, I can see use cases in industry where grabbing a quick photo of some equipment etc might be useful, or in sports where for example cyclists could have a heads up display and use the camera to capture a beautiful view they discover.

As a parent myself, the idea that someone would stop what they are doing to put on goggles to take a photo seems crazy. You would miss the moment. The only reason I use my phone for most photos over a DSLR or mirrorless is because the phone is right in your pocket and instantly available at all times. I'm pretty sure the newer iPhones already have lidar sensors, don't they?


Not enough people have seen the movie Strange Days.

Watching recordings of people doing interesting things could be the killer feature.

Porn is obvious, but even just experiencing the recording of someone walking through some beautiful place will be a treat.


I expect current iPhones can do a good enough job. Lidar + just waving it around + AI make this mainly a software problem.
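To make that concrete (a toy sketch of the general idea, not Apple's actual pipeline): given a per-pixel depth map like the one a lidar-equipped iPhone already produces, plus the camera intrinsics, back-projecting into a 3D point cloud is only a few lines; the hard "software problem" is fusing many such frames into a clean scene.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points.

    depth: (H, W) array of per-pixel depths, as a lidar sensor might give.
    fx, fy, cx, cy: pinhole camera intrinsics (focal lengths, principal point).
    Returns an (H*W, 3) array of [X, Y, Z] points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid coordinates
    z = depth
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Tiny example: a flat wall 2 m away, seen by a hypothetical 4x4 "sensor".
depth = np.full((4, 4), 2.0)
points = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(points.shape)  # (16, 3); every point sits at Z == 2.0
```

Real capture would then register successive clouds against each other (e.g. with ICP) and mesh them; that fusion step is where the "waving it around + AI" part lives.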


Yeah, I'm betting the next iPhone will have a new record mode.


That's a good point, maybe eventually some of the tech will leak out and not actually require wearing the goggles.

Regarding phones, I see your point there too, but at least with phones people can still look up, make eye contact, and easily set the phone down when needing to interact with someone. With these, there is a definite wearing-them / not-wearing-them barrier that many will probably not cross on a whim, so we'll just get used to interacting with virtually projected eyeballs on the outside of them. Very strange to me.


You can just flip the goggles up if you’re talking to someone. That’s often what people with tinted safety goggles, like welders, do.

Having to take AirPods out of your ears is more obtrusive I think.


> Having to take AirPods out of your ears is more obtrusive I think.

I don't own airpods but I don't mind if someone comes to talk to me with airpods on. I think it would be stranger having someone come up to me and talk to me with the goggles on.


People often try to talk to me while I'm walking the dog and I keep having to tell them to repeat themselves after I pause the audio and turn off the noise isolation.


> People often try to talk to me while I'm walking the dog and I keep having to tell them to repeat themselves after I pause the audio and turn off the noise isolation.

Well that is just plain rude. I meant if someone with airpods on comes to talk to me, I am reasonably sure they have transparency on or something like that.


> however, when was the last time you walked into a room where at least one person wasn't solely focused on their phone?

Yeah and that's really bad.

I have a friend who used to do this, even back when phones first started coming out. He couldn't resist just being on his phone, trying to talk to him was a nightmare.


I think it's weird because the example they used of the birthday is trying too hard to be family-friendly and social.

The HoloLens had some great examples of highlighting where to hit a mechanism to make it work. I have no idea if it was any good in reality.

That doesn't work for Apple, at least not for advertising.

I think it would have been neat to have shown a sports person/dancer/musician looking through the eyes of their instructor to correct their posture, etc. Or at least show them reviewing the recording together.


I don't see phone as being equivalent to a VR headset. Phone is just a few sq. inches of distraction whereas the VR stuff is a 360 degree false world full of delusion.


And people get completely sucked into those few sq. inches of screen, tuning off from the real world.

In my life it got to a point with some friends where I introduced the idea of everyone taking their phones out and putting them face down in a stack on the table; if you really need to check your phone for something important, you have to say "I need to check my phone" and take a break from hanging out to do it. It annoys me to no end when, in the middle of a conversation, someone takes their phone out just to check a message, or Instagram, or some trivial notification, and ends up lost in it for a few minutes. Seeing someone you're trying to communicate with staring intensely into their screen completely tunes me out of the conversation, so it's been really great to make them more aware of the behaviour.

At least with VR I know that they aren't present, which is easier to adjust to; with phones it just absolutely annoys me.


This is just good manners; I don't check my phone in the middle of a conversation, and my friends don't do it either.

I also teach my kids not to do that.


I can't expect that every single person I meet or am friends with will adhere to "just good manners"; given the ubiquitous presence of those few sq. inches of screen around us, there will always be instances where good manners aren't followed.


So what’s your point again? Is the Apple Vision making things better or worse?


It’s the world of instant information and social contact that makes phones distracting, not the screen dimensions. Actually I’d say the small screen form factor makes this problem even worse - makes it socially acceptable to be half-present with people, the worst thing. It feels like no big deal to glance at a phone while talking to someone, and feels churlish for the other person to take issue with it.

But if someone didn’t remove their VR headset when getting into a conversation with me IRL… I can’t see that ever becoming commonly accepted. It’s completely different from phones because you wouldn’t even know if and when they are glancing at messages or other content (or even filming you), so there are just way more social norms for the tech to contend with. With a headset, there is a 100% clear distinction between when you’re using it vs when you’re not, and a distinct act of putting it on and taking it off, whereas with phones you can flick a glance at it.

Direct-to-brain AR will be another matter though. Then we will be forced to accept that anyone we are talking to may be secretly engaged with other stuff at the same time.


In your mind it's not that different. Consider the importance of media on shaping your thoughts, beliefs, memories, emotions. Whether delivered through a cinema screen or a tiny phone doesn't matter.

VR/AR has wow factor, but you quickly get used to it, and then your concerns are mostly pragmatic: the headset is huge, the battery life short and so on.

Although AR at least has pass through, which is a big benefit, as it doesn't completely hide your real surroundings.


I'm a fan of VR/AR but found Apple's reveal video to be profoundly sad. There's a sense of isolation running through the whole thing. The immediate connection many people made was to Black Mirror.

The father wearing the headset and recording his children play around him really hit that note. From the children's perspective, instead of their father's face they see a pair of black goggles and rather than their father's eyes they see a digital facsimile.


>The father wearing the headset and recording his children play around him really hit that note.

"Hold still son so I can record this moment, I've only got 10 minutes out of the 2h of battery left on this headset" lol

The mental illnesses kids will have growing up with tech addicted parents are gonna be epic. Therapists are gonna be printing money.


people already film their kids on their phone, take live photos of their dogs, and tiktok/insta themselves in a cool place.

having the ability to share parts of our lives remotely and also remember them is profound. i often look back at photos of the past few years, and remark at memories i've forgotten, places i've been, and i share those things with others as well.

sure no one wants 10 people in a room interacting thru their apple vision pro, but i really don't think it's fundamentally different. and if i could have a 3d video of my deceased family that looks more real than a photo, i'd love that.

look at the parts of the tech that connect us more; there will always be negative things, loners, tech addicts, etc.

and as for the mental illness quip, i honestly don't have an answer because it's so outlandish we can't even begin to quantify it


You must be fun at parties


> The father wearing the headset and recording his children play around him really hit that note

Yeah that was a mistake. It should have been a worker recording how to assemble something to make a training vid. The family recording thing probably turned off a lot of people.


Precisely. Some mechanical training or live assistance is entirely plausible. I experienced a similar thing with HoloLens and other than the limited FOV it was really convincing as a tool.

Apple unironically going for the dystopia angle is a weird flex.


I mostly agree, but on the other hand: when I was a kid in the 80s, my dad spent my birthday parties walking around with a shoulder mounted video camera. It was lame, but not quite dystopian.


Rather than releasing a standalone camera, I’d expect the Vision Pro headset to act as one too. In other words, I’d expect it to be able to take spatial pictures whilst not being worn. Less creepy, and more like taking a picture with your camera / phone.


Or make an iPhone that can take spatial pictures.


This is just plain obvious. Not sure it'll be part of the '23 lineup; WWDC does not showcase new iPhone hardware features, but they have the hardware ready and miniaturised enough for the headset. If it doesn't make its way to the Pro iPhones by '24 at the latest, I'd be very surprised.

The sports clips and experiences are also not captured by people with headsets standing on the sideline; I'm not sure why so many commenters see this as the bleak new future.


We should be moving away from that not further into it though.


> The 3D photo thing is super weird, they could release a stand alone camera in future

We can call it "iPhone"


They showed the iPhone camera being used for FaceTime calls on the TV via Apple TV boxes, so it would make sense for them to add a way to use an iPhone as an additional camera.


>And these 3D spacial moment recordings, imagine children growing up in a house where when something nice happens, the parent rushes to put on goggles and stare at them through them,

Isn't this what used to happen with camcorders and now with smartphones?

Hell, how many people watch live music events and the like in-person through a smartphone camera (assuming recording is okay)?

I understand where you're coming from, but it pays to take a step back and look at things from a wider and more fundamental perspective.


I don't think this is much different from looking at people in the 90s SMSing all the time, and what happened after that with smartphones; it's easy to criticize what you are currently living through. VR/AR is just a normal technical and market evolution.

In the 80s people started using Walkmans to hear their own music instead of listening with the family speakers in their living room.

I think that the social criticism starts much much earlier. Nothing special about Apple.

Another take, adding AI to the equation, is that humanity's ego is probably huge, and all this stuff is harming it; we are not as important and intelligent as we thought. We are amazing creatures for sure, and will fight for our humanity the way other animals fight for their survival.


Agree. This was the first WWDC that really made me feel like Apple's vision of the future is really not mine.

It also made me question my work choices as an iOS user and developer, this time from an ethical perspective.


I don't think this is ever going to happen. Phones sell because they made computers unintrusive. This tech has some way to go until it becomes as simple to wear as sunglasses, or even less. IMHO something like a new Google Glass is more likely to win this space.


> Phones sell because they made computers unintrusive

Respectfully, how does this make any sense? Didn't mobile phones end up becoming the most intrusive technology on Earth? (I'm not sure what you specifically mean by "intrusive", but I'm assuming that you mean "the extent to which the object interrupts your life/tasks/etc.").

Phones sell because they are:

1. Convenient

2. In most societies on Earth, they became a necessary item in order to live (authentication, payments, maps, etc.).

I'll skip the lesser points, such as the fact that they took over the camera for >99% of people for 99% of the time.


> Didn't mobile phones end up becoming the most intrusive

I mean intrusive in blocking your attention, isolating you from the environment, and blocking sensory input. While a phone grabs your attention, it doesn't isolate you. Phones are attention-grabbing but nowhere near as intrusive as goggles.


I would say it’s a comfort/accessibility issue. Smart phones are just easier to use. Not from an efficiency perspective. There’s just less physical resistance.


This is labeled a pro device. Their home use marketing really misses the mark IMO.

The fact that I can mute out the rest of the room while working is a plus not a minus.

I'm skeptical this will be there on a technical level to justify the price but the concept of working in VR is objectively an improvement, especially once you factor in the portability of the system.

For entertainment, it's probably going to be the same as with computers and games: everyone had one for work, and it was expensive and awkward, but work drove adoption and people started playing games on it.

I think this is the main failure of VR so far: it's trying to force people into this alien experience just for entertainment. I have plenty of ways to play games / watch video; I probably wouldn't use it if I had it for free, let alone pay a large sum of money for it.

If I'm using VR day to day for work - then I might occasionally game on it - this is the road to widespread adoption.


People already had computers for work. Nobody is going to have one of these sitting around for years until a Prince of Persia or Doom type game emerges.

Apple needs to build an app store for this and fast.


The video-call virtual avatar literally looks like a reanimated corpse. Honestly, if you were directing a sci-fi movie where you were talking to a brain in a tank "that doesn't know he's dead yet", that's how you'd art-direct it.

Can see it becoming a faux pas to answer a video call as the creepy avatar.

I know Zuck got clowned on for the horizon avatars but at least they were just lame and low quality, not outright creepy/gross.


The 3D spatial capture will be done on the iPhone in one or two generations; I would bet a lot of money on that. That way Apple is almost certain that all Vision Pro owners will upgrade their phones. They just couldn't show it captured that way yet, as it's not yet available on the iPhone.


iPhones have had lidar for years.


Wearing a VR helmet at a family event is a big NO. I'd rather use tech that takes footage from multiple cameras in the room, then builds a sort of 3D recording of a moment that I can immerse myself in later, with full freedom of movement and pause/rewind.

Of course it would need some way to fill in the blind zones obscured from the cameras, but with generative AI and data from before and after the occlusion occurred, such technology seems plausible.


> Wearing a VR helmet at a family event is a big NO.

Imagine how that scene with the dad filming the kids playing would look if the kids had the Vision goggles on, too.


At that point it'd be easier to host family events in Minecraft or at whatever Minecraft imitation Meta will come up with.


My dream is to become a fully wired in cyborg. I thought many hackers felt the same but I guess not.


Probably because being jacked in to what amounts to a $3500 Safari browser and an Apple amusement park feels less like Terminator's HUD and more like the guy in the Matrix that decided to believe he's enjoying eating an imaginary steak in an imaginary restaurant.


Wake the f. up Samurai, we have a city to burn.


I don't know if you are joking, but it is for many. Bar the wired part. Why not wireless?


I agree with you; not sure why they included that use case in the demo. It would have been better if the dad had recorded the 3D video with iPhone depth sensors (possible?) and replayed it later in VR.

I really like my Quest 2. But these devices are for when you are alone. I work alone and would love to use something that expands my workspace and provides a comfortable environment to concentrate in. Quest 2 can't really get there. And occasionally I find people to talk to in VR which is enjoyable. But when the family is home it makes 0 sense. I cannot imagine us all sitting in an empty room with goggles on, watching the same thing.


Apple's goal is surely to get away from the goggle form factor ASAP, and once we're at a pair of slick glasses I don't think you'd even think twice about it.


It feels like another pretty Apple device which is purely for consuming content, and of little use for serious work (or serious play). And $3500 is way too much to pay for that sort of device.

A UI that relies entirely on eye tracking and arm motion tracking, without a single button for 'accept' or 'back' or 'fire' in the case of games seems incredibly limiting. They didn't demonstrate text entry, did they?

I expected to see Apple's headset being demonstrated as a product for professionals, perhaps for architects, product designers, and engineers to visualise their designs for clients. Maybe it could have also found a niche in education.

But no, they seem to have iPad-like use cases in mind, and for that, the price is way too high.


What I find interesting about apple devices is that despite what you said, the engineering is usually top notch and beats most competitors. For example, I use an iPad Pro for hand writing notes. I don't care about iOS or any other fancy feature there might be, I have this device purely for writing notes. But the interesting thing is that I haven't found any other tablet that does specifically the hand writing as good as this one. It turns that specific aspect into a unique selling point for me and it almost seems like the competition has given up in terms of that. With this device I could see a similar thing. Regardless of whether this brings any new features over the competition, the engineering seems to be outstanding. Because of that I could see people buy this for a specific use case that it excels at, all the while not using it for "everything". Somewhat of a ramble but I hope I could bring my point across.


I don't think anyone seriously expects the price to stay at $3500 forever. If the first gen product is successful enough to warrant a gen 2, I suspect by Gen 3 we'll see a more approachable price point.

The biggest hint for this is the name: "Vision 'Pro'". Why call something "Pro" that doesn't build on another product to enhance it with other features? The pro moniker for Apple has always been a sign of increased functionality for power users.

Mark my words; we will see a mass-market consumer focused version of this product within 5 years if gen 1 is a success.


For “serious play” I imagine you could hook up a Bluetooth game controller, just like you can with an iPad (and it sounds like this will run iPad apps).

Likewise for work you could use a normal keyboard. I think they did mention that in the presentation.


This ability was demoed in the third or fourth minute of the intro. (NBA 2K23 with a PS5 controller)


Ah, good catch, thanks!

The AR aspect here is really clever -- you can use normal keyboards and controllers because you can just look at them when you need to.


That’s the fantasy vision, but I don’t see how it would be achievable within the next 15-20 years given the current technological outlook and considering the energy requirements and the necessary optical workings. It will require major technical breakthroughs that aren’t remotely on the horizon.


That's what people said even after the original iPhone was announced. I think there's a famous interview with the RIM boss who basically said "energy requirements make this design impossible" in 2007, and yet, it actually came out and worked a few months later.


I don’t think the current situation is comparable. You’d need more than an order of magnitude improvement in battery tech. Glasses would have to be less than 50g in total (and that’s on the high side; the sunglasses I happen to own weigh just 14g). Judging from its dimensions, the VP battery pack alone will probably weigh half a pound, and it only lasts two hours. In addition, we have no idea how to implement the required optics for 3D vision in the form factor of reading glasses, and we also have no good idea how to implement display panels that support both transparency and opacity, let alone how to achieve that in combination with those optics.

Of course, miraculous advances can occur, but you can’t just expect them to happen.


My wife and I use AirPods in the home when we're doing chores, and I'm already starting to feel uncomfortable about that. I feel annoyed when I'm interrupted from a podcast by a family member, and I'm sure they feel it. Perhaps that will lead them to talk with me less in the future.

The technology is siphoning attention from the home. Humans are highly visual creatures. The impact of this product will probably be greater.


I believe this is one of the reasons consumer VR hasn't taken off and likely won't: even if 5 people are in the same room, it disconnects most people too much to be enjoyable on a regular basis. People would rather intuitively gravitate toward watching a Netflix show or whatever than being bodily isolated and cut off for too long.

I haven't yet read/seen much about Apple's take on it so maybe they closed a gap I am not aware of.

> And these 3D spacial moment recordings, imagine children growing up in a house where when something nice happens, the parent rushes to put on goggles and stare at them through them, their little virtual eyes displayed on the outside, it’s frankly creepy to me.

Well, I have always found that creepy, even with old analog camera from back then (but I always found adults staring at children when something nice happens a bit creepy (or cringe ?) so that's me).


There are 3 things that stand out about Apple's Vision product line.

First there's technology. This is clearly better than anything else in the market. Enough has been said on this.

Then there's usability and market. There's no real trojan horse feature that will get this product mainstream right now. Apple's hope is that the dev community will either build one or users will point them in the right direction with usage stats and telemetry. Maybe they will, or maybe they won't; nobody knows.

Finally there's the future. I see people comparing this to previous apple products and their past experiences with them etc. There appears to be two factions here, one that says "This is the future I want and I love it", and the other saying "This is the future I don't want and hate it". Both groups however are saying one common thing ... "this is the future". And that's pretty much Apple's bet.


I imagine eventually your phone will also be able to do these spatial video recordings, so the functionality on the headset will fade into the background as a fringe use: passively capturing something while you happen to have the headset on (while enjoying the spatial videos you captured with your phone).


I’m in the same camp. There is one thing you and I have to admit though: people already live disconnected lives, staring at their phones and other screens all day long. People dining out together look at their phone while wearing AirPods. The direction was set years ago.


Once upon a time we laughed at the Back To The Future Part II's "Pizza Time" scene (https://www.youtube.com/watch?v=-8oS4VbRaAA). I guess we are in the future now.


Eh, the iPhone already isolated us from our surroundings. This is the next step.


I imagine a future iPhone will be able to record Vision enabled content. I imagine you might be able to just set the Vision Pro on the table while it's recording and you do your thing.


In short, using a Vision Pro is only going to be reasonable if everyone in the room has one and is using it.


Honestly you could replace everything you said there with an iPhone and it'd still apply.


He says as he sits alone in his room on a computer…


This is a bit of a rant, but I bought a Quest Pro last year, and I feel it was a total waste of money. It's actually embarrassing how much of an unpolished piece of garbage it feels like. Sure, it's fun as a PCVR headset, but as a productivity tool or as a multimedia consuming device, or as a social tool, it completely misses the mark.

And I'd say that 80% of its failure can be attributed to bad software. It's buggy, I constantly have to re-calibrate my work/play area, the hand tracking is janky, and even though it supports (incredible) eye tracking, almost nothing takes actual advantage of the hardware.

It's insane that a billion-dollar company like Meta actually felt proud to release such a steaming pile of trash. Do they seriously expect VR enthusiasts to build an entire operating system for them? Much like the reports we're getting about the Vision Pro, the eye tracking in the Quest Pro feels like magic, but nothing uses it—basically it's irrelevant to navigating the operating system and barely any games use foveated rendering. It's infuriating.

If I was Zuck, I'd fire all my product managers. I say good for Apple. I'll probably be selling my Quest Pro and buying the Vision Pro.


I found it notable Apple has this to say about eye tracking under the "Learn more about privacy and security" section:

>Eye input is not shared with Apple, third-party apps, or websites. Only your final selections are transmitted when you tap your fingers together.

Along with this small video: https://www.apple.com/105/media/us/apple-vision-pro/2023/7e2...

Which is great and important for me to see, but I'm unsure how that squares with features such as foveated rendering.


You can get a surprising amount of information about cognitive processes from eye tracking. I worked on the industrial version of HP Reverb G2 which included a PPG sensor and Tobii eye tracking. Was really an amazing product which suffered from having to use third party platforms like Windows Mixed Reality and OpenXR which didn’t have first class support for what was possible.

One notable takeaway is that eye tracking of saccades is loosely correlated with cognitive load. In fact, “cognitive load” was a product that would be licensed to developers to use in their apps so that difficulty levels could be dynamically adjusted to keep users in a sweet spot for learning.

Especially when combined with a PPG sensor which can in theory detect if a man gets aroused (HP never investigated developing a model for that, but some comments from friends who were DoD-aligned contractors expressed alternating concern or interest in it, because they knew from their own career and independent research what applications might be possible with those technologies)

…it could be used to infer things which someone might prefer to keep private (I.e. which categories of things they have an emotional response to).

Apple definitely is aware of this and is heading off potential issues by not only not providing that data to third party app developers but also even to Apple itself. Apple does have a strong pre-existing framework for dealing with sensitive health data, so they could have decided to capture similar data and treat it the same way they deal with ECG data. HP did not have a HIPAA compatible framework so they threw away 99% of the PPG data and distilled it down to just “heart rate” which was the only data source available to app developers from the PPG sensor.
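For the curious, saccade detection from raw gaze samples is commonly done with a simple velocity threshold (the I-VT approach); the "saccade rate vs. cognitive load" work mentioned above builds on that kind of signal. A rough, illustrative sketch — the threshold and sampling rate here are made up, not from HP's or Tobii's products:

```python
# Rough sketch of velocity-threshold saccade detection (I-VT style).
# Numbers are illustrative, not from any shipping eye tracker.

import math

def count_saccades(samples, hz=120.0, threshold_deg_per_s=100.0):
    """samples: list of (x, y) gaze positions in degrees of visual angle.
    Counts fixation-to-saccade transitions, where a saccade is any
    inter-sample velocity above the threshold."""
    in_saccade = False
    saccades = 0
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) * hz  # deg/s
        if velocity > threshold_deg_per_s:
            if not in_saccade:
                saccades += 1
            in_saccade = True
        else:
            in_saccade = False
    return saccades

# Two fixations joined by one rapid jump: expect exactly one saccade.
fixation_a = [(0.0, 0.0)] * 10
jump = [(5.0, 0.0)]           # 5 degrees in one 120 Hz frame = 600 deg/s
fixation_b = [(5.0, 0.0)] * 10
print(count_saccades(fixation_a + jump + fixation_b))  # 1
```

A saccade count per unit time is about the simplest feature you can extract; real cognitive-load models layer a lot more on top (pupil diameter, fixation durations, etc.), which is exactly why the raw stream is so sensitive.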


I imagine there will be some developers that try to have analytics fire when certain events happen, correlated with eye-hovering. Hopefully Apple will catch these tricks before they can go live. The potential value of knowing where someone's eyes are (whether they saw an advertisement, for example) could be very high, so I envision a cat-and-mouse game.


Apple addressed that in the keynote. There’ll be an invisible canvas over the window content, and the app never learns where you look (nor Apple, ie it is not further stored or processed). Only when you click (ie tap your thumb and index together) does the app get an event.

(Downside is that one loses the capability for hovering, but strikes me as the correct trade off.)
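The model described above can be sketched as a tiny simulation (names and structure are illustrative, not Apple's actual API): the system layer consumes every gaze sample, and the app layer only ever receives the target that was under gaze at the moment of the pinch.

```python
# Illustrative sketch of the "gaze stays in the system" event model.
# Class and method names are hypothetical, not Apple's API.

class GazePrivateInputSystem:
    """The system layer sees every gaze sample; the app layer does not."""

    def __init__(self):
        self._current_gaze_target = None  # known only to the system
        self._app_callbacks = []

    def on_gaze_sample(self, target_id):
        # Called continuously by the (simulated) eye tracker.
        # Apps are never notified here, so gaze alone leaks nothing.
        self._current_gaze_target = target_id

    def on_pinch(self):
        # Only an explicit gesture forwards the current target to the app,
        # so the app learns just the "final selection".
        for cb in self._app_callbacks:
            cb(self._current_gaze_target)

    def register_app_callback(self, cb):
        self._app_callbacks.append(cb)


selections = []
system = GazePrivateInputSystem()
system.register_app_callback(selections.append)

# The user looks at several elements but pinches only once.
for element in ["ad_banner", "buy_button", "close_button"]:
    system.on_gaze_sample(element)
system.on_pinch()

print(selections)  # ['close_button'] -- the ad view was never reported
```

In this framing, hover effects have to be rendered by the system compositor rather than the app, which is consistent with losing app-visible hover as a trade-off.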


Sure, that works for certain contexts. But what about a game, where your vision is used to aim a weapon? The app has to know where you are looking in order to know where to shoot. If it's an online game, then the information cannot be isolated on the device since you're shooting at other players.

Now imagine the game has advertisements, or loot boxes, or something else that the game company wants to know if you've looked at. How does Apple prevent them from learning that, if they're able to know what they need to know for the game to function as a regular FPS?


Shooting is typically an action you take, not just a matter of the direction you're pointing in - so this fits the model well: the game gets information about motion to swing the camera, and gets the exact position when you press a button, but doesn't get to know exactly what you're looking at if you're not pressing the button.

The question will be how easy it is to reconstruct that information from the last exact position + movement info...


This doesn't hold true for FPS games. You absolutely need to tell the game where you are aiming so it can draw crosshairs, even for on-rails shooters favored by VR.


I expect fps will still require a controller/kbm rather than relying on the eye tracking/gesture input. For one thing, I don’t know how else you would move and perform actions at the same time.


There aren't usually any on-screen crosshairs in VR games. You just use the crosshairs that are on the weapon, which is tracked through controllers. I expect there to be trackable controllers available for Vision too. It's also possible to use hand tracking, but that's not optimal.


Most probably you're right. But I could imagine that crosshairs aren't needed anymore with highly accurate eye tracking; you would just intuitively know where you're aiming anyway.


You don’t necessarily want the cursor to be as fast as the human eye


Yep, you presumably need to animate a hand/gun/etc. moving on-screen. So maybe it would lag eye movements by a bit.


I’m absolutely happy if the answer to that will be that gaming is not the core focus of the device. Let AR be more than FPS in 3D for once.


Same goes for driving games, and really any game where you're navigating a cursor around a map.


It’s a long game. In both desktop and mobile, a viable and commercially successful alternative emerged even though Apple had the edge in sophistication and innovation.

A lot of people probably don’t remember what Android looked like in 2007. Using the Android pre-release SDK next to an iPhone, you’d come to a similar conclusion as yours: how does Google release a mobile OS so behind the curve? Should they fire all their product managers?

Same story with Windows in 1986. It was difficult to see how the graceless file manager from Microsoft could compete with Apple’s holistic GUI.

Give it four years. I think Meta intends to be the Windows and Android of this third wave of UIs. Validation of the market by Apple is exactly what they’ve been waiting for.


The Quest Pro has actual VR games and proper VR controllers though. I doubt that such games are possible or probable with eye pointing and finger clicking alone. Apple also didn't include any VR games into their presentation.


VR gaming is a flop. Lots of people bought headsets, played with them for a day and now they're all gathering dust. Apple is right to ignore it.


People generally agree that VR games are miles ahead of ordinary games in terms of immersion. The adoption problem seems to be that there is no affordable VR console with a lot of big exclusive games. The Meta Quest comes close to a decent price point, but there aren't many publishers making elaborate games for it. A chicken/egg situation with regards to adoption and game investment.

Basically, a Game Boy or Switch for VR.


"Exclusive games" have little to do with adoption. They are a vendor lock in thing. Games need to be good and immersive, not exclusive.


A game needs to be purpose-built for VR, which makes it almost an exclusive.


Maybe, but focusing on the "exclusive" part -- the one part that is consumer hostile -- seems weird. If there was a standard for VR games, or if a game existed in both Oculus and Vision, it'd be a good thing. Exclusivity is not a good thing.


they mean “exclusive to VR” not exclusive to a particular headset


What does the word "exclusive" mean in this context? What would be a non-exclusive VR game that can be played on a VR headset? What are some examples?

"Exclusive" in games consoles is a loaded word that means "exclusive to this platform and not available anywhere else".

If the intended meaning is what you suggest, then I'd recommend removing the word "exclusive" and just saying "there is no affordable VR console with lots of big games".


An affordable VR console with a lot of big games would be sufficient for VR mass market success, but likely some of those big titles would be automatically exclusive for that console, namely for first party titles. Those would first be made just to get the console off the ground. Sony made "Horizon Call of the Mountain" probably just to sell more headsets, not because they expected to make money with it. The install base is likely too low for that. A third party publisher wouldn't have an incentive to make such a big game. Valve probably also didn't earn anything with Half Life Alyx, they wanted to make their Steam market bigger.

The problem with Valve is that their platform is PCs, and a gaming PC + VR headset is too expensive for most people, who often don't already own a proper PC. The problem with Sony is similar: The PSVR 2 is not a standalone console, but just an add-on to a PS5, and PS5+PSVR2 is too expensive for people who are just interested in VR. It's like trying to sell a Game Boy which requires you to already own an SNES for it to function. It would be a flop.

The Meta Quest 2 is in a better position: it's standalone and rather affordable. But Meta can't support it with big games like Sony or Valve could. Meta doesn't have the expertise, and probably also not the will, to develop expensive AAA games in order to push Quest sales. They are more interested in the metaverse than in becoming a games company.

Nintendo could perhaps make a successful VR console, but it would require massive investments in AAA titles, which would be very risky for them, given their size. And Sony isn't interested in a standalone system.

Microsoft would be more likely. They have huge cash reserves and can afford to lose large amounts of money until they become profitable, as they've shown with the first Xbox. But they haven't shown any ambitions so far to invest billions into an "Xbox VR" console and corresponding AAA games.


>What would be a non-exclusive VR game that can be played on a VR headset?

VRChat, Superhot, SimplePlanes, Flight Sim, Ultrawings, Phantasmagoria, Hitman, nearly all VR-compatible racing games, Rec Room, Euro Truck Simulator, most VR-compatible horror games, etc etc etc.

Did you even try to look it up?


> "Did you even try to look it up?"

I do not need to "look it up", I own and play Superhot on the Oculus for example.

What in your knee-jerk reply would you say contributes to the conversation about whether "VR exclusives" are needed for a given platform to succeed?

I'm going to help you by providing the context for this conversation (not written by me, but in the initial comment I was replying to), which is:

> "The adoption problem seems to be that there is no affordable VR console with a lot of big exclusive games."

There you go. What do you think "big exclusive games" means in this context? Do you think it means "exclusive" as in "Nintendo exclusive" or does it mean "exclusive" as in "it's designed for VR first, regardless of headset brand"? Or something else maybe?


exclusive as in “experience that can only be had in VR”

something that people will look at and think “I need VR so I can play that”. The closest so far is half-life alyx, but that’s really not cutting it.


Isn't that the same as saying "VR videogames"? Assuming we mean good quality, and not a minimum effort port.

I was really impressed by the Oculus "First Contact" (or whatever the one with the robot is called). I wonder by there are so few games that feel like it.

Beat Saber is pretty cool, but what killed it in my family is that you cannot see what the other person is seeing, so it's a bummer when you're not donning the headset. Unlike with a regular videogame, where you can watch the screen even if you're not playing.


They are immersive, 100%. But also nauseating and forehead heating.


Have you tried a Quest? I found the software on it fixed all the nausea I experienced on earlier headsets, and I think the Quest 2 was even better, but I don't recall at this point.


It varies a lot by person. A friend used an Index on my dedicated 4090 (all other monitors on the 2080 Ti), running Google Earth at full frame rate. He puked his guts out 20 minutes later and took about 40 minutes total to recover. I was watching the stats the whole time; he just suddenly went urp right in the middle of Mt. Rainier.

I had just shown him WA state with no ill effects, but I didn't have issues on a pre-release Vive prototype or the release Oculus OG while doing barrel rolls, back flips, and rapid movement.


Wait, were you the one controlling where he was going? Like zooming him around the planet? Cause that’s a sure fire way to make anyone nauseous.


No, I showed him how to navigate while he watched me on the other monitor... then let him fly around and talked him through the controls.


Ok. But the Index is made by Valve and you're using SteamVR. I've always had motion sickness issues with Valve. For whatever reason, Quest's rendering stack has never made me motion sick. Even Oculus Link, with what people consider "crazy" latencies, doesn't, which suggests it's ASW and other late-stage rendering corrections that are better at hiding the things that would cause motion sickness.


I have a 3090 with HP Reverb G2 and very high tolerance for motion sickness. I’ve noticed in Elite Dangerous that lower framerates cause less motion sickness. The stuttering between frames makes my eyes/brain not treat the visual motion as “real”.

So I crank up the details to super high and enjoy the awesome rendering detail and save myself from motion sickness.

Elite Dangerous is the only VR app that gives me any motion sickness at all and is also my favorite app in VR so far.


I think everyone can agree that artificial locomotion can be motion sickness-inducing, and you can quite easily do a lot of it in Google Earth. The faster, the worse.

Incidentally Google Earth has comfort options exactly due to that reason, such as the option to narrow the field of view when moving around. I wonder if that option was enabled?


No, it varies hugely person to person. Plenty of us can do completely disconnected motion in VR and not get any nausea. I could do barrel rolls and "Sliding" movement in VR no problem from day one. For the people who are marginal, software and framerate improvements can help, but those "comfort" settings work by killing immersion. There's also a small portion of people that will never be able to play VR without nausea.


There's no plausible way to let you move around the virtual space without moving in the real world and not give many people motion sickness, since by definition that is exactly what triggers it. The better the illusion of immersion, the worse sickness you'll get when your eyes tell you you're moving but your body and inner ear say you're not.


I had plenty of nausea with the Quest 2.


Same experience here. For stationary position games I found it fine, but anything with any kind of walking handled by the controllers it felt awful.

Interestingly, I tried out F1 2022 in PCVR mode with it and was surprised to find that it didn't make me feel ill. A lot of it comes down to the refresh rate and quality of tracking, as it's the lag that causes a lot of the nausea.


I think that has more to do with mental framing. Your brain is used to being "in a car" and having weird motion. You also usually have the inside of the car being stationary in your field of vision, meaning your brain can look at that and feel fine that you aren't moving.


A reviewer on YouTube said the Apple Vision is a bit heavy because they used a metal frame for some reason. (No comment about foreheads)


You can safely treat “reviewer on YouTube” as “stranger at a bar” in terms of accuracy. They get paid to get clicks, not to deliver accurate or expert advice, and that heavily favors “why don’t they just…?” comments glossing over challenging engineering decisions.


I think it was MKBHD who seems to know his stuff


Yeah. Also, the "why don't they just" (use plastic/carbon) came from me, not him. Apple is long known for using metal in places where it unnecessarily increases weight, because it makes a product feel more premium.


He did mention that competitors use plastic precisely because it is lightweight.


How many industrial design projects has he done, though? He doesn’t have any particular expertise in the area so while it’s fair to ask why they picked the trade-offs they did, it’s pretty far-fetched to think one of the best industrial design teams in the world forgot about a primary success factor, especially given how much other effort they made to hit weight targets. Absent someone who’s actually worked in the field saying it’s unnecessary, I’d give the benefit of the doubt and assume that they used aluminum instead of plastic for a good reason.


> one of the best industrial design teams in the world forgot about a primary success factor,

Is this the same team who came up with a phone that bends in the pocket?


And a wireless mouse that can't be used with the charge cable plugged in...


There are other headsets without metal frame, so we pretty much know this is not necessary. And we also know with certainty that metal absolutely isn't necessary for laptop cases, but Apple still uses metal. The explanation is easy: aesthetics, feel, "haptics".


There are no comparable plastic laptops with similar performance, portability/thinness, battery life, and quietness; metal is rigid, and an excellent heat conductor that contributes to those goals.


There were many such laptops when they were still on Intel.


Yes, and Apple also sold plastic laptops when they were still on Intel. Apple has repeatedly attempted to sell products with a cheaper chassis (iPhone 5C, Plastic Unibody Macbook) yet their customers preferred metal enclosures.


Yes, because they feel more premium, that's their "advantage". A headset which hurts your neck more in the long run could offset this premium feeling.


We don't really know why Apple chose Glass + Aluminum for their headset, but we do know for a fact that:

- 1 Their customers have historically chosen metal and aluminum for their products, and Apple is in the business of building what their consumers prefer.

- 2 Glass and Aluminum conduct heat better than plastic. The 2hr battery suggests thermals are a great consideration where the materials chosen were not just for aesthetics. Perhaps plastic could've worked, but see point #1.


That's not going to get better with working in one. It'll still be forehead-heating, nauseating, distracting.


How much do gamers actually value immersion though? Seems to me the satisfaction of solving puzzles or executing strategy and the sort are the real drivers of satisfaction with games. Immersion is nice, but unless the interaction mode provides something novel to enable better ways to play it’s really just a novelty.

In some cases they use it well, like Beat Saber. But how much is that worth to people?


> there is no affordable VR console with a lot of big exclusive games

What is even more impressive is when a VR version of a big, popular game is the best version of that game.

Affordability is somewhat relative. If something is truly compelling, people will spend a lot of money to get it.


I don't think that price can always be outweighed by other factors. Successful game consoles are always pretty cheap. Only productivity devices can ask for a high price.


I have to agree overall. I enjoyed my Meta Quest 2, but it's sat unused for a while now. There were a few games that really were awesome (Beat Saber, Walkabout Mini Golf, and Breakroom), but the last two really only shine with friends, and coordinating a time we can all play isn't easy. This doesn't even touch on the motion sickness aspect. The Q2 is further hampered by its piss-poor UI/UX; it really is astounding how bad it is. Once you are in a game it gets better, but joining a party or joining an in-progress game is a roll of the dice. When it works it's almost magical, but the failure rate is way too high.

The resolution on the Q2 is also just way too low for certain games. I think an MtG-type game would be super cool on the Q2 (and the games I tried in this vein had /massive/ potential that was thrown away completely due to blurry text). In the end, the Q2 has way too many downsides and not enough upsides to keep me using it. I will be getting a Vision Pro at launch though; the resolution alone would almost sell it for me, but the emphasis on AR and productivity is the clincher.


VR gaming has definitely cultivated a community in the genre of sim-racing and Half Life Alyx modding


The problem is that the Quest 2 is what everyone bought and it's arguably not a compelling enough experience to encourage people to put it on instead of playing traditional PC/console games. Most people serious about PC gaming eventually upgrade to an Index because it's still the best option at a great price if you have the room for a full-room setup - and these people definitely play at least a few times a month.


I’ve played VR games. They can be a lot of fun and very immersive.

But there are clearly still problems to solve. And some of them do involve hardware capability. Basically for the mass market they’re just not there yet.

The apple headset certainly appears to have significantly more power and capabilities in many ways. It will be interesting to see what pops up.

Though obviously a future version with a lower price will be necessary for it to truly go mainstream.


VR gaming is a niche. Racing and flight simulators are very good; other software is typically mediocre. But VR games do not work with AR glasses.


Even with my racing sim rig, I find myself using my curved monitor more than the Quest 2. It's just such a pain to re-calibrate and update the software and tweak my graphics settings every time I want to hop in that I usually just don't bother.


Games are successful and making millions on the platform. If some people churn out of the experience is that equivalent to it being a flop?


I'd attribute part of that to there being so few games available that you've got a captive audience and limited content.

Meta was proudly boasting about how the Quest 3 will have "over 500 titles" - that's not something to boast about; it just highlights how incredibly poor the platform has been from a developer adoption standpoint.


I agree that so far that is where we've been, but imagine a Sim City or Civilization game on the Vision Pro. THAT would be a ton of fun. I'm sure Sid Meier could make something interesting on it. Frankly, I would love it if Apple would throw money at him to make one of his games on it.


>imagine a Sim City or Civilization game on the Vision Pro. THAT would be a ton of fun.

Why? No, seriously, why? People say this as if it is a given, but how would VR improve a game series that is built around strategy as its main selling point? How does VR improve Civ as a game? In fact, there are very few genres and types of games where the immersion VR brings is worth the limitations: weird controls, isolation, headaches, wearing a mask, expense, physical exertion, etc.

I'm so tired of people who don't seem to know anything about either video gaming or VR gaming parroting this empty hype of how VR magically makes all games better.

VR is a peripheral, not an advancement in rendering technology.


The same way a huge TV makes a college football game more fun.

It's a city. I live in a real city. SimCity is this flat set of graphics on a screen. I'd much rather be able to view my city that I created and see what it looks like on a human level.

I get your point, and many things would not be good.

But for me at least, SimCity would be. So would Call of Duty (and I know that because I've played 1st person shooters in VR before on multiple occasions).

But that's just me.


One problem to solve is that we probably don't want people to stand for hours and have to walk through the city or the world. They might not have the space at home to do it. They could sit on an office chair and rotate themselves 360°; if they spend $3k on the visor, they can spend $100 on a chair. But with a screen, you don't even have to do that kind of exercise.


Yes, this is where the lack of controllers is going to be an issue. I guess you could use a PS/Xbox controller, but then you may as well be playing on a PC.

I get why they didn't want to go the controller route, but they'll surely be very aware that it greatly limits what can be done with games.


They never said you can’t use a controller.

Remember, the iPad doesn’t come with a controller, but you can use a Pencil with it if you want.

Same thing will likely happen with the Vision products.


The point is that most games would probably need a special VR controller (not a normal one!), unless they can be played point&click style. But Apple won't include them, so the respective games won't be made in the first place. There aren't even smartphone games which require external controllers, despite the install base of smartphones being orders of magnitude higher than what the sales numbers for the Vision Pro headset will be.

Moreover, making elaborate games like civilization wouldn't be economically feasible even if they could be appropriately controlled by default. The price for the headset is so high that not many people will buy it. Then elaborate games can't sell enough copies in order to be profitable. For example, it seems unlikely that even a success like Half Life Alyx made its development cost back. The hardware install base is too small.


An Xbox controller would probably work fine. I already have multiple connected to things like PCs, Macs, and even an Apple TV in my house.

I think Apple is playing the long game here. Moore's law and manufacturing improvements will cause the price to go down over time.

Remember, the first Macintosh cost $3,000 in 1984. Today most Macs cost less than 2/3 of that price before adjusting for inflation.
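A quick sketch of that comparison in real terms (the ~2.9x CPI multiplier for 1984-to-2023 dollars is my rough assumption, not an official figure):

```python
# Rough real-terms comparison of the 1984 Macintosh price to a Mac today.
# The ~2.9x CPI multiplier is an assumed approximation; the exact figure
# depends on which price index you use.
CPI_MULTIPLIER_1984_TO_2023 = 2.9

mac_1984_nominal = 3_000   # launch price cited above, in 1984 dollars
mac_today_nominal = 2_000  # "less than 2/3 of that price", nominally

mac_1984_in_2023 = mac_1984_nominal * CPI_MULTIPLIER_1984_TO_2023

print(f"1984 Mac in 2023 dollars: ${mac_1984_in_2023:,.0f}")
print(f"Real-terms price ratio:   {mac_today_nominal / mac_1984_in_2023:.0%}")
```

Under that assumption, a Mac today costs well under a quarter of what the original did in real terms.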


So let's wait another 40 years, 2024 to 2064. But Moore's law will not last that long. The price per transistor isn't exponentially decreasing anymore.


A screen would be nowhere near as immersive.

My point was that it would be like walking around Madurodam while playing a game.

And standing would be optional. Apple showed most people sitting down, so I’m presuming you could do that.


You can walk up to the walls of the room you are playing in (or bed, or chairs or flower pots or a ball you left on the floor, ouch) but I guess that if this sticks somebody will build a 360° walking pad specifically for games. Meanwhile they can artificially limit how far a player can walk in a straight line. Mazes, small vehicles, etc


They already have one [0]. Pretty neat once you get used to it.

But you should be able to do VR while sitting down and "walk" using gestures or a controller.

[0] - https://www.kat-vr.com/


> VR gaming is a flop

But isn't it still the most successful application of VR?

I don't see another application of VR / AR that will prevent the device eventually gathering dust once novelty has faded. Certainly nothing in the presentation from Apple.


Half-Life: Alyx is a great game.


> VR gaming is a flop. Lots of people bought headsets, played with them for a day and now they're all gathering dust.

A flop indeed. /s From [0]

"Standalone VR headset Quest 2 from Meta (formerly Oculus under Facebook) has outsold Microsoft's Xbox Series X/S consoles, according to the latest information from the International Data Corporation (IDC)."

> Apple is right to ignore it.

Incorrect.

Apple didn't ignore it because they cannot ignore it. Apple is targeting both gaming and work, just like the Meta Quest Pro has done.

[0] https://www.thevirtualreport.biz/data-and-research/65297/que...


The Kinect sold even more devices, but as a paradigm was still a massive failure. Nobody used them beyond some original demos.


Maybe the new Xboxes are a flop, also (at around 20 million sold by 2023)? The Xbox 360 has sold 85 million. PS5 has sold 35+ million.


It’s a billion dollar business.


VR games? Sure, gaming, but I didn't know VR games specifically made so much money.

Either way, if I was gaming, playing a new single-player flat-screen game (Last of Us, etc.) on a MASSIVE screen on the Vision Pro seems so exciting. Beat Saber and throwing my hands around? Not so much.


Have you tried a VR "Big screen" experience? It's a massive gimmick. It's not special. The quality of what you are looking at is lower than using a nice monitor, and sitting in some empty environment that hosts this big screen is dumb.


It's impossible for me to try anything because it doesn't exist right now. There is no VR experience that has high resolution. If the resolution were better, it would be great. Don't care about the "environment". Just project it into my living room like a home theatre.



Very cool! I had no idea


But Beat Saber would be more exciting, despite how it seems.


I’m sure people can come up with games or interfaces that would work just fine.

But they also showed full support for pairing standard Xbox and PlayStation controllers and playing with those the way you can on any other Apple platform (except the watch, of course).


You need special controllers for VR games, ordinary ones won't do. The standard controllers are just for conventional games.


I got one last year also, but was disappointed by software selection. My children loved the hardware but wanted more software. At their behest, I tried to write my own software for it, but went in circles trying to figure out what SDK to use.

I wish someone on the SDK team there could observe me as I tried to search for this information and failed because it would be super-informative on how to (and how not to) succeed on creating a wider community of apps...by making SDKs easily available.

<All opinions are completely my own personal opinions>


This is on brand for Meta. Look at FB. It didn't come with instructions, but that didn't stop Cambridge Analytica from happening. Look at React. It didn't have instructions and yet it blew up. I bet Meta expected people to pick up the slack and make the best of what was there while they killed time polishing it.

Zuck was probably part of the problem... a room full of POs pushing for stupid shit while building on a rocky foundation.


> It's insane that a billion-dollar company like Meta actually felt proud to release such a steaming pile of trash

Let's be real. You can have all the money in the world, but that doesn't guarantee you access to talent. It doesn't protect you from short-sighted technical decisions that hamstring you later in the product cycle. It doesn't free you from short-sighted managers or employees who just care about making it to the next vesting cycle.

When you're a billion-dollar(haha, not so significant these days) company, you have a tremendous pressure to show results. You have tremendous pressure to show innovation and be first to market. It has no bearing on the quality of the products you release in the short-to-medium term.


John Carmack led them to a very strong technical base. Based on his testimony they squandered it.


It’s kind of funny that Meta stock didn’t drop on the Vision Pro announcement. What does that say about the market’s assessment of Meta’s VR/AR prospects?


I actually expected the opposite TBH.

Not because Meta had a better product, but Apple putting its might into this product category means that "Meta" is vindicated in investing here -- that there exists / will exist a viable market for this technology in coming years.

All the damage done by the poor quality 3D renders of AR/VR worlds from Metaverse -- that made a layperson laugh at the concept of AR/VR -- is probably partly offset by the positive press and marketing budgets that Apple will bring to the table.


The market is hedging its bets.

If Vision Pro succeeds, Meta could be seen as the cheap-ass alternative. $3,500 vs. $500 is like Ferrari vs. Kia. Not everyone can afford the former.

If Vision Pro doesn't succeed, the market will likely move on from VR completely, leaving Apple and Meta in the same place. Money spent, product introduced, tried, didn't work. Move on.


Apple has the stubbornness and funds to drive this for a decade, and they will. Apple Vision Air AR glasses are coming even if they don't sell a single Pro unit for five years.


maybe the odds of meta dropping the whole vr thing have increased? then they can stop lighting piles of money on fire because zuck read snow crash as a teenager? OTOH, maybe apple will show meta how a headset can actually be done and they can mimic their way to product market fit?


I much prefer Meta dumping billions into VR rather than targeting ads and worsening teens' depression.


Tracks with what Carmack said when he left. What a shame!


I'm with you. Had the same exact problems, and I returned mine.

Interestingly, the Quest 2 seemed less buggy when I had one.


I don't understand this mentality; obviously the intent is to improve the experience over time, and creating a VR platform itself is an ambitious task. Are there no happy customers of the Quest?

Meta's strategy is to release a cheaper consumer product and improve it iteratively. Hate to break it to you, but day-one Apple Vision Pro will have software bugs too.


> It's insane that a billion-dollar company like Meta actually felt proud to release such a steaming pile of trash.

Can you name a better product in the price-bracket?


If you have a product that's 5x as good, but 2x as expensive, literally everyone will buy it. The iPhone is the perfect case study.


If you’re comparing the Quest 2 to the Vision Pro, it seems more like 2x as good for 5x the price.


Based on the hands-on impressions I’ve read it sounds significantly more than twice as good.

Five times the price, yeah. Today. This really feels like a product aimed at developers right now not the mass market yet.

Which fits. The iPhone took a few years to really take off. The Apple Watch took a few years to really take off.


Has Apple ever released a Developer Edition product before?


They've had developer units but not a publicly released one as far as I know.

When Apple TV was being developed they were handing them out to devs for free like they had millions they were trying to get rid of. I was a lowly single app creator and they happily sent me one.

For the Intel to M1 migration they did something similar but required them to be returned this time, and limited which devs they sent them out to.


Yes, sort of.

The first Intel Mac and the first ARM Mac were DTKs: Developer Transition Kits.

You’d buy them from Apple at a reduced price, could use them to get your software running, and then had to give them back when the real hardware was released.

There was nothing similar for the watch, iPad, or iPhone though.


Yeah the Intel Macs. I think you had to return them?


11 times the price


...if you are in the US. Outside US iPhones are nowhere nearly as popular. In many markets Android phones have about 90% of the market share


> Outside US iPhones are nowhere nearly as popular. In many markets Android phones have about 90% of the market share

In my opinion, with the possible exception of South Korea, that is purely a matter of Apple's global pricing vs. local purchasing power, and a huge number of those Android users would have an iPhone if they could afford it.


> a huge number of those android users would have an iPhone if they could afford it.

I don't think that's true. Samsung flagship phones cost the same as iphones (at least in the UK) and they sell very well here.


I believe they were referring to the original iPhone compared to the other “smartphones“ of the day.

Not todays market, which has evolved.


Fitness and gaming weren't demoed much yesterday, but they're going to be insane on the Vision Pro, so I guess it was just too obvious to demo. Or is it?

Also Meta could catch up if it was just software. But how big of a deal is the M2+R1?

Anyway the metaverse was already being retracted, else this Vision Pro could have hammered Meta.

Clearly Oculus has to evolve to match Vision Pro. Controllers have to be optional. It has to have better AR. Meta has to be the more "open" alternative to Apple's ecosystem barriers.


> Also Meta could catch up if it was just software.

This was Microsoft's warcry for years: who cares if Apple has better software? We have better hardware. Well guess what, software matters and it matters more than hardware. If you have both, you have literally a generationally-defining product (e.g. the iPhone), but even if you just have the software, you'll still eat your competitor's lunch. Why does everyone have MacBook Pros these days? Prior to Apple Silicon, it certainly wasn't the hardware.

Meta's about to get a rude awakening because "we can fix the software later." As if technical debt doesn't exist and as if you can so easily get engineers that eagerly want to clean up someone else's mess.


>Why does everyone have MacBook Pros these days? Prior to Apple Silicon, it certainly wasn't the hardware.

Oh, I don't know about that. Admittedly there were some awful choices made between 2015 and the emergence of M1 architecture, but Apple's reputation for making well-engineered hardware goes at least as far back as when they started milling Macs out of aluminum. I'm thinking back specifically to the PowerMac G5 in 2006. It was designed well, felt solid, and when you opened it up, it continued to look well made. I recently popped open my 2015 Macbook Pro because I'm finally having hard drive issues, and for as much as I have railed against Apple fanboyism over the decades, it looked so nice inside I wanted to take a picture. (Why?! Who cares!? What would I even do with that picture?)

I got tons of mileage out of various Dell desktops and self-built machines over the last couple of decades. It was better bang for the buck, I didn't have to fuss over proprietary connectors and a locked down operating system. When the company I worked for made a big shift away from (it's okay to chuckle) Coldfusion and MS SQL Server to Ruby on Rails and MongoDB in 2013, though they offered to let me continue running on Windows, I asked for a Macbook because that's what everyone else was using. Might as well learn. Didn't much like OSX but the hardware was better than any Windows laptop I'd used. Later I bumped up to a 2015 MBP, which I only just retired this year.

While I was Windows at home and MacOS at work, last month I snagged a Macbook Pro w/ 64gb of RAM from B&H for $2400 - the first Apple computer I've ever paid for myself - and it's replaced my Dell desktop and that 2015 Macbook that my old job let me keep.

I effectively skipped the bad Macbooks, so my perspective is tinted by that. But even during the bad years, even my friends who continued to operate in the MS domain were buying Macbooks and dual booting them into Windows.

I had access to a Surface tablet through work early on. It was really nice! Just like a Zune. The Surface Pros looked pretty great too, but anecdotally, I never saw them outside the context of visiting a business that was a strictly MS shop.

I still like Windows as an OS better. But Apple's new architecture (which importantly doesn't have me carrying dongles around, or even needing them at home) was compelling enough for me to take the plunge.


I haven't liked the usability of the entire MacBook line though, and it's very upsetting that so many laptop makers have decided to just copy it.

There are lots of little details which are well made, but bad decisions: the screen hinges tend to be a little too floppy, the trackpad is oversized, removing the physical buttons makes it hard to use, moving the power button from a separate area to the "eject" button location on the keyboard (and making it look like a regular key), everything they've done with keyboards (flat, overly spaced keys with no surface indentation and less and less travel). The ridiculously sharp edges of the body chassis cut into your wrists when you rest your hands there (I've seen at least one video of someone just filing a bevel into the machine to fix it).

They look great, but that's all they've ever seemed to me: in usability details they've always felt terrible (even MagSafe, which seems like it should be great, feels great, and yet has seemed pretty lacklustre when I've had to use it with a work machine).


Two thoughts here. The first is that this comment is totally valid and mostly the inverse of everything I think, which really shows that we all value different things, so thank god for a free market.

The two things I agree on for the 2016-2022 laptops are the sharp edges and terrible keyboards. Maybe take a look at the new laptops next time you get a chance, the edges on my MBAir and 16MBPro from 2022 are waaaay more comfortable than my previous two MacBooks and the keyboard feels much nicer too.

Back to point one though: to me the trackpads are well sized, the hinges are smooth and just right, and the power button being where it is is a non-issue. I even quite liked the Touch Bar.


The looks-over-function school of design thought that Apple is famous for.


> This was Microsoft's warcry for years: who cares if Apple has better software? We have better hardware.

What on earth are you talking about? Microsoft is a software company first. They rode the Windows advantage for more than a decade and didn't have anything to do with the hardware, because they aren't a hardware company. Windows ran (and was preinstalled) on everything, from the lowest tier of trash to the highest end machines, and Microsoft didn't care about the quality of the hardware, because they owned the platform that everyone used for computers. Office is another software product that still doesn't have any real competitors and is one of Microsoft's biggest cash cows.

On the flip side, when exactly did Apple have better software than Microsoft? Their office suite is barely usable for anything outside of the most basic use cases. Apple beat Microsoft in mobile, and Microsoft kept its head in the sand and assumed the Windows advantage would live forever and didn't wake up until Ballmer was gone, after Apple had started eating their lunch in the laptop space, but the assumption was that Windows was so good people wouldn't switch, not that Apple had bad hardware and good software, if only you could run it somewhere else.

Microsoft is named for being SOFTware for MICROcomputers, they don't have a blind spot around software. This is some Soviet revisionism or something.

> Meta's about to get a rude awakening because "we can fix the software later." As if technical debt doesn't exist and as if you can so easily get engineers that eagerly want to clean up someone else's mess.

I don't know, Meta is probably the largest employer of PHP developers in the world, I bet they can find people willing to do some other masochistic shit too.


> If you have both, you have literally a generationally-defining product (e.g. the iPhone), but even if you just have the software, you'll still eat your competitor's lunch.

The Vision Pro is really a first-product distraction for the Apple fans. This only further validates the market for XR devices, and Meta's Quest lineup will be the cheaper alternative for those who don't have iPhones or Macs (since Android phones still outnumber iPhones) [0].

I'm only interested in the next iteration when it gets smaller and cheaper and when Apple announces a direct competitor to the cheaper Quest. Probably 'Apple Vision'. Then it will get its 'iPhone' moment, but it is not going to be the Vision Pro.

> Meta's about to get a rude awakening because "we can fix the software later." As if technical debt doesn't exist and as if you can so easily get engineers that eagerly want to clean up someone else's mess.

Meta is fine. They will just copy Apple, make it cheaper, and in the worst case become the Android of XR headsets and glasses.

[0] https://www.statista.com/statistics/272698/global-market-sha...


> Why does everyone have MacBook Pros these days? Prior to Apple Silicon, it certainly wasn't the hardware.

It’s not like it was better built or had a better screen than cheaper alternatives or anything.


> so I guess it was just too obvious to demo. Or is it?

I certainly don't think so? My guess going in was that fitness would be one of the main plugs — especially because high-end fitness devices already cost a fair amount, and that would have made their tail end, "How much would a computer plus a screen plus umm... it's $3,500!" go a lot more smoothly.

I do think that's likely high on the road map, particularly given such a large investment in Fitness+, but I also can see them being very wary about encouraging any high-energy movements with a $3,500 device. I think it's the pricing and dev-kit-ish nature of the v1 more than it being too obvious.

But Vision Pro does seem to imply a future Vision Sport.


Sweating profusely in an electronics device... I've ruined ski goggles just sweating in them and they're meant for that. Also just stumbling with these on or slipping and there you go cracked $3500 screen.


Honestly, how often do you bang your head hard into a wall/floor? Sure, I'm a tall guy, roughly 200cm, and I sometimes bang my head on an overhead door frame. But that is typically in some old cottage or similar, where I would probably not bring an XR headset anyway.


I was talking about being tired from exercise, which happens often while skiing, biking, etc. While doing VR, I've broken a monitor, gone through a wall, and broken several controllers. While just walking around, I forgot my shed had a low door and clipped my head last week, first time in years. That's why I wear an effing helmet. You go blind on your stationary bike and pop off wrong? Done. Foot stays clipped in, faceplant.


>While doing vr, I've broken a monitor and gone through a wall and broken several controllers.

Jesus, set up the room boundaries; it's really not that hard.


They don't always work


Worse, if you run into a wall at high speed the screen probably ends up in your eyes.


I for one will be sticking to controllers and forgoing eye tracking. Hand input is irritating and limited, I hate using pinch gestures to click on things and the article makes a solid point that the lack of haptic feedback just makes it feel like you're flailing around (which you are). Eye tracking is nice to have in social interactions but I don't see the point otherwise.


Could be that they're leaving it (and gaming/etc) for developers. They largely showed core stuff only at this point.

The platform launch was very locked down. For incremental hardware upgrades, they put hardware in the hands of devs by either shipping out early or bringing devs to their offices to work on prototypes in a secure environment.


> I guess it was just too obvious to demo. Or is it?

Gaming can't be good, because there are no controllers. Even if the hand and eye tracking is absolutely S-tier, you're never getting something like button input from swinging your empty hands around, twitching your fingers. Just the occlusion from the back of your hand would be enough for that to be impossible.

Fitness might be more doable, but even then I think you're going to be very limited with hand tracking only. I think there's a reason why Apple skipped over those segments almost entirely.

EDIT: I meant "VR" controllers, as in things that are tracked in 3d space. Of course you can use a game pad or MKB, but that kinda misses the point of being in VR in the first place.


I hate VR controllers. I'd prefer to play VR games without them. I'd prefer they do hand and eye tracking to determine what I'm doing, like look at a zombie and go pew pew pew with my finger guns.

The only VR headset I've tried that can do hand tracking (Oculus Quest) sucks at it and games there didn't pick up on it as an input at all, as far as I know.

However, just because VR controllers won't exist for the Apple Vision at release it doesn't mean they never will. Perhaps Apple will release some later, or it'll be a third party release. Heck, someone might unironically release a Nintendo Power Glove style controller that works incredibly well with Apple Vision.


That’s a unique take. Especially for guns in VR which feel incredibly natural with a controller that is basically a pistol grip with a trigger. For people really into Pavlov they even sell kits to mount the controllers together into a rifle configuration with attachments for a shoulder strap and buttstock. I’ve never heard of a successful shooter game that uses finger guns.


> I’ve never heard of a successful shooter game that uses finger guns

Because none exist.


My family went to a Sandbox VR store and played a zombie game. In it they gave you “guns” that the cameras could track. I put guns in quotes because when you remove the headset you see that you were basically holding a stick with reflectors on it, but in VR it sure felt gunish.


Why do you hate VR controllers?


> Why do you hate VR controllers?

Sensitive hands and wrists. Pain and fatigue using them.

Having to swap out batteries. Recharging AAs is a hassle, but I don't like using disposable ones.

More clutter to keep track of, store and organise. Kids misplace them frequently.

More crap to have to keep clean, sweaty and oily hands leaving residue on them.

Would rather just have a single device, the headset, that I put on and not have to deal with a bunch of other accessories.


Why do you say there are no controllers? I was reading about bluetooth keyboards and interacting with a laptop, so why wouldn't a bluetooth controller be feasible (say, before launch)?


I mean it's not impossible, but Apple would have to design and manufacture one. If they're planning on it it's weird that they wouldn't mention it.

VR controllers aren't simple the way a game pad is. They need a full tracking solution, which means sensors with base stations, cameras with some form of vision based tracking, or the headset needs to be able to track them based on some reference points (infrared lights or something). And that tracking needs to be fast, precise, and correctly positioned relative to the headset.

Using the Index controllers with the Lighthouse base stations is kinda plausible from a technical standpoint, but that would mean Apple would need to allow the headset to work with SteamVR, and that seems very unlikely at this point.


If camera based hand tracking works already, it seems like a Bluetooth controller wouldn't be that hard to add in after the fact. Didn't PSVR just use a single camera and a glowing orb?


They are feasible, and I dare say already working, given it's essentially a modded iOS, which already has support for game controllers.


They talked about and briefly showed using game controllers and keyboards/pointing devices with it.


I saw a traditional gaming controller and a keyboard. Was there some kind of tracked-in-3d-space pointing device that I missed?


At 1:31:50 in the Keynote video they mention “Bluetooth accessories like Magic Trackpad and Magic Keyboard” and show those on a table and a person typing on the keyboard. I don’t believe the cursor of the trackpad is shown (but I’m on my phone and my eyes aren’t as good as they used to be!). I bet it’ll work like on iPadOS.

There are a LOT of WWDC sessions on visionOS; hopefully it’s shown off in one of them. EDIT: Trackpad usage shown here https://developer.apple.com/wwdc23/10076?time=914


>Of course you can use a game pad or MKB, but that kinda misses the point of being in VR in the first place.

Why? When I go to an IMAX cinema I am impressed with the immersion; I don't feel the need to jump around and act out the scenes in front of me.


I thought they showed a PS5 Controller being used?


1) Console controllers are not a suitable alternative to proper VR controllers.

2) What makes you think the PS5 controller was connected to the headset, and not the headset being used as a TV showing the output of a PS5?


Re: 2, they showed it being used to play Apple Arcade games and since they already have support for that it’s unlikely that they have deep philosophical objections.

Re: 1, it’ll be interesting to see how well the camera system works in practice. All of the reviews are quite positive about precise movements so it might be that they’re throwing hardware at this problem but I’d also be surprised if they were not very carefully tracking performance – if the accuracy isn’t there, it’d take an unusually un-Apple like product manager to risk a billion dollar investment rather than enable that class of device.


Using controllers isn't about accuracy, it's about occlusion and haptics.

Using only gestures means no haptic feedback and it gets de-synced as soon as the camera can't see what you're doing.

Take Apple's gestures for example (e.g. tap your thumb and index finger together to click). As soon as your hand is rotated in such a way that your own fingers are hidden from the camera, your actions stop being applied. This will also happen should one hand cross the other, or when resting your arms down while standing, reaching up for something, or even resting on a couch with your knee up.


Oh, I’ll have to see if and how Provenance can run on it.

PSX games with a PS5 controller on a theatre screen? My inner 8-year-old is going to lose his mind.

But that doesn’t solve the issue for the mainstream.


But the latter is ideal. I just want PS5 shown in Vision Pro rather than a sub par VR game.


It supports Bluetooth input. I wonder if the visual tracking system that tracks the hands is so good that it could accurately track controllers. Then third party motion controllers could be relatively cheap and plentiful.

Assuming it could run Virtual Desktop, there’s your Half Life Alyx in ultra high res.


Gaming is pretty good with a keyboard and mouse. No controller necessary.


I agree that keyboard and mouse is superior, but as a grown-up with a family, it's rather infeasible. The Steam Deck has been great; I actually play a game for 30 minutes maybe twice a week. I'm hesitant to even try keyboard and mouse on some games because it will spoil the experience with a controller. When I have the rare luxury of a free evening alone at my desktop, I'm more likely to play VR.


VR gaming is different from regular gaming. You largely want to feel like you're interacting with the environment (in most games).


having a virtual screen that is way bigger than the space you are in is a huge selling point for me. I'm more interested in this than VR stuff honestly.

If it was under $1k USD I would probably pick one up for work. Maybe give it a few more years.


    But it does put an enormous amount of pressure 
    on the eye tracking. As far as I can tell so 
    far, the role of precise 2D control has been 
    shifted to the eyes.
I've got one good eye and one bad eye. The bad eye is legally blind, has an off-center iris and is kind of lazy w.r.t. tracking.

I'm extremely curious to know how Vision Pro deals with this. One certainly hopes there's some kind of "single eye" mode; certainly seems possible with relatively small effort and the % of the population who'd benefit seems fairly significant.

Eye tracking most certainly sounds like the way to go, relative to hand-waving.

The Minority Report movie probably set the industry back by a decade or two. Waving your hands around to control stuff seems logical but is quickly exhausting.


From Apple's visionOS page, re: Accessibility (https://developer.apple.com/visionos/)

"And for people who prefer a different way to navigate content, Pointer Control lets them select their index finger, wrist, or head as an alternative pointer."

This reads to me like they've got you covered (at least in principle).


Hopefully you can just set one eye as dominant. I as well have one good eye, so just using that one for tracking should be fine.


They have a training process at the beginning that learns your eyes. It asks you to ‘point’ to various points in space. I’d imagine it would be able to detect a dominant eye with that, and prefer it over a slower or non-tracking eye.


From a Q&A session, a trackpad will also work.


Covered in a basic sense, but that's an extremely diminished experience which basically removes half the reason to use it


Let’s see how it works before we start making all these assumptions. Apple has the best track record in the industry when it comes to accessibility, hands down.


I agree, but if that's their entire solution, it isn't good


It would have been much less exciting to see Tom Cruise sitting on a couch, hands in his lap, gently flicking his fingers to scroll through crime scene footage. IIRC he talked about how tired his arms got during filming.

EDIT: found it — he didn't talk about it, but it was reported that he had to frequently rest his arms: https://medium.com/@LeapMotion/taking-motion-control-ergonom...


Oh, 100%. The interfaces in Minority Report were spectacular in terms of cinematic movie appeal.

Artistically it was a great choice, and they certainly weren't intending to inspire a bunch of wrong-thinking UI choices in the real world.


I distinctly remember they said they had hired scientists to discuss what the future would look like, and I always thought "yeah, right, so in the future we've got those holo interfaces and ultra-connected computers, but people in the same office still copy files from one computer to the other using some kind of pluggable storage".

But now that I am also in the future, I still use a USB drive to move files at the office. Guess they were onto something after all :].

    The fallacies of distributed computing:

        The network is reliable;
        Latency is zero;
        Bandwidth is infinite;
        The network is secure;
        Topology doesn't change;
        There is one administrator;
        Transport cost is zero;
        The network is homogeneous.
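The first fallacy is why even a simple file copy over the network needs retry logic that a local copy doesn't. A minimal sketch; the function name and retry counts are illustrative, not from any particular tool:

```python
import time

def with_retries(fn, attempts=3, backoff_s=1.0):
    """Call fn(), retrying on OSError with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except OSError:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(backoff_s * 2 ** i)
```

A hypothetical network transfer would be wrapped as `with_retries(lambda: copy_over_network(src, dst))`.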


I rarely use USB media, instead just copying things around using the network. But last week I needed to copy 200GB to my son's laptop, and ended up putting a 256GB SD card in the laptop and copying to it over night (it was started at 9pm anyway).

Never underestimate the bandwidth of a station wagon full of mag tape.
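A quick back-of-envelope for the overnight copy above; the ~9-hour window (9 pm to 6 am) is an assumption:

```python
def effective_mbps(bytes_copied: float, seconds: float) -> float:
    """Effective throughput in megabits per second."""
    return bytes_copied * 8 / seconds / 1e6

# 200 GB copied over roughly 9 hours works out to about 49 Mbit/s --
# a perfectly respectable "network" speed for a sleeping SD card.
rate = effective_mbps(200e9, 9 * 3600)
print(f"{rate:.0f} Mbit/s")
```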


The dystopian future has a bustling physiotherapy and occupational therapy industry.


I can't speak for working in VR, but I think us desk jockeys tend to underestimate what other people are doing for work. The lack of movement is what does us thought workers in.


There are of course many jobs that are much more taxing on the body overall than waving one's hands at a Minority Report style computer interface. (I used to work at a restaurant! I've done actual work, I swear!)

However, waving your hands in the air for 8-10 hours a day feels a bit unnatural relative to "actual" physical work. I don't have a real scientific basis for this statement.

But there are some physics at play IMO. In boxing it's generally accepted that a missed punch consumes about twice as much energy as one that connects -- your body must do the work to accelerate your arm, and then it must do a roughly equal amount of work to decelerate it. I think there are some parallels to waving one's hands in the air for 40-50 hours per week relative to "actual" physical labor.
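The boxing claim can be made concrete with simple kinetic-energy arithmetic. The arm mass and hand speed below are made-up ballpark figures for illustration, not measurements:

```python
def swing_energy_joules(arm_mass_kg: float, hand_speed_ms: float) -> float:
    """Kinetic energy of the moving arm: E = 1/2 * m * v^2."""
    return 0.5 * arm_mass_kg * hand_speed_ms ** 2

# ~98 J to throw the punch; if nothing absorbs the blow, your own
# muscles must also do the work of stopping the arm, roughly doubling it.
accelerate = swing_energy_joules(4.0, 7.0)
missed_total = 2 * accelerate
print(accelerate, missed_total)
```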

More to the point: MR-style interfaces (at least as typically implemented/depicted) don't really offer tangible advantages to using a good touch pad. They trade tiny finger movements on a trackpad for huge sweeping arm movements that accomplish the same things.

I think if MR-style interfaces had somehow been invented before mice and trackpads, we'd be celebrating mice and trackpads as absolute miracles of efficiency.


I actually prefer the keyboard alone for its precise control; I find mice pretty clumsy. But there are things a mouse can do that I can't achieve on a keyboard, and the same goes for VR, so perhaps we'll end up with blended workstations.

On the note of tiring out, I have been strength training for 12ish years now and I still get tired quickly when holding my arms above my head to work on my car. I think because they are a smaller muscle group, they can be saturated easily. I don't have the same issue with repetitive motions, it's just holding the arm in a similar position that does it.

Maybe if AR gestures take into account full motions rather than holding the arm in a similar position for too long, it might not tire so easily.

I have VR already and I will say it's an exhausting experience in general, I can flat screen game for hours, but in VR I want out in less than an hour. I think it's the full focus it forces on you, and the split world spatial reasoning going on.


I’ve been on both sides of the coin in a role where I was on my feet 12 hours a day.

The aches and pains were there, but just in different spots. Yes my core was stronger, but holy hell were my feet exhausted.


You're talking like I didn't need to go to physiotherapy and occupational therapy to deal with my wrist issues and the lack of muscle tone in my glutes.


Same with all the scientists in movies writing on glass boards.


Yeah, Tom Cruise doesn't seem like the type of person to admit he would need to rest his arms.


What gave it away, the iron cross at the beginning of MI:2?


Not because he's allegedly fueled by the souls of aliens?


Considering there is a "calibration" step, I'm going to guess that as long as the "bad" eye behaves relatively predictably, the system should be able to discount the bad input and put appropriate weight on the "good" eye.
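A toy sketch of what that weighting could look like; this is NOT Apple's actual algorithm, just inverse-error weighting of per-eye gaze estimates using errors measured during a hypothetical calibration step:

```python
def fuse_gaze(left, right, left_err, right_err):
    """Inverse-error weighted average of two (x, y) gaze estimates.

    left/right: normalized (x, y) gaze points from each eye.
    left_err/right_err: that eye's calibration error (larger = less trusted).
    """
    wl, wr = 1.0 / left_err, 1.0 / right_err
    total = wl + wr
    return tuple((wl * a + wr * b) / total for a, b in zip(left, right))

# A noisy right eye (10x the calibration error) barely moves the result
# away from the reliable left eye's estimate.
print(fuse_gaze((0.50, 0.50), (0.70, 0.40), left_err=0.01, right_err=0.10))
```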

It will also be interesting to see how the outward display handles "bad" eye input, including missing eyes, lazy eyes, nystagmus, etc.


> The Minority Report movie probably set the industry back by a decade or two. Waving your hands around to control stuff seems logical but is quickly exhausting.

There was an operations center I worked at that had a large touch screen (think CNN wall). It always sat in a corner collecting dust, but whenever some higher-up would visit, it would get pulled out and used just to provide a wow factor. Some unlucky person always ended up having to operate it during these visits. The fatigue was horrible.


Track-record-wise, Apple is one of the best in terms of serving accessibility. So I'd bet greater-than-50% odds that they're thinking about lazy eyes, single eyes, or derivatives thereof.


[flagged]


Well, they have pretty consistently invested in and showcased accessibility, famously ROI-be-damned (to paraphrase Tim Cook). So, it’s not a stretch to expect (but not assume) they will have the perspicacity to do the same here.


I agree not to drink the apple copium, but they do excel in accessibility by a large degree.

Honestly, I think a large part of the industry taking into account accessibility in a more serious ways comes from the competitive advantage apple has created.

Doesn’t mean accessibility hasn’t been done in the past. By no means. But it’s a market segment they have on lockdown


FWIW, I emailed a fairly senior accessibility person at Apple yesterday with a question about the headset and got a response back today, even in the midst of WWDC. And I'm a relative nobody — just a bootstrapped solo founder who works in the accessibility field.


> Eye tracking most certainly sounds like the way to go, relative to hand-waving.

I suppose Apple could change my mind but I've never been a fan of eyes as input. Touch typing or any equivalent where you can look at one thing and do another seems impossible now.


I'd wait for the product to actually launch before making such a judgement


Apple don't need you to defend them here. They presented what they presented, and whether we're interested is based on that. It's a legitimate concern and a valid point of discussion.


The interaction paradigm you describe (where it's literally impossible to look at one window while actively using another) makes so little sense that we can almost certainly rule it out. Certainly no press members who demo'd the unit are describing the interface that way.

That's not "defending Apple", that's just... gosh, I can't think of any word besides common sense.

From all of the presentations and press accounts, it seems clear that eye tracking is something of a direct replacement for the mouse/trackpad cursor -- in fact it seems you can use a trackpad or presumably a mouse as well. Looking at an app or "window" seems equivalent to mousing over it, not equivalent to clicking to activate that window.


> The interaction paradigm you describe (where it's literally impossible to look at one window while actively using another) makes so little sense that we can almost certainly rule it out.

I think the implication is that the window you are looking at is focussed automatically. Which wouldn't work for say, typing up a document to a text editor while reading from a reference doc. That's just the beginning, I use a mouse-to-focus window manager and that happens all the time with all sorts of things.


I don’t know how it will actually work, but the Mac has always been click-to-focus, which would avoid that issue.


That contradicts the many accounts of "look at a text box, start talking, and it types".


I don't disagree with your other points AT ALL, but given how repeatedly it's been proven that common sense is entirely a fiction, if any argument falls back on it you have to be immediately suspicious of it.


I hear that now and then, but looking at the wiki on it, the "criticism" paragraph is only like 3 lines with no real refutation, out of a hefty article.

What's your motivation for saying that? Personally I am suspicious of people who say there isn't common sense.

Here is the definition at the top: Common sense (or simply sense) is sound, practical judgment concerning everyday matters, or a basic ability to perceive, understand, and judge in a manner that is shared by (i.e., "common to") nearly all people.

One can certainly say there is common sense. For instance, no one likes pain; people like pleasure. People care about entertainment, not boredom. People tend to learn, on the whole, as they grow up. I also think people on the whole have a perception of what is wrong and right, even if there is variance between cultures. The golden rule of reciprocity seems to be near universal (like: don't kill, don't steal, and so on).

Lots of things we share as a species.


Your basis for discussion is that you read the wiki article on common sense? Then you cite a dictionary definition? Suffice it to say that is pseudointellectual and inappropriate.

And then you obviate the concept by rendering your own gross ignorance: people don’t like pain? Today you learn about masochism. People need entertainment? Monasticism, asceticism, Buddhism, nihilism, et al. People learn as they age? Quite unfortunately many studies show many people lose a great deal after their teens. And your invocation of the idea that cultures share morality such as a prohibition on murder (this has nothing to do with “reciprocity” by the by) is truly divorced from basic observable reality.


Well, let's start with some basics.

100% of a farming town/community are going to hold "don't grab an unknown fence, as it might be electric" to be common sense. Yet there's going to be a sizable portion of the planet to whom that WON'T be obvious, and they'll try it.

100% of New Yorkers might know not to park your car overnight in a particular area, yet people new to the city will happily do so not knowing they may be putting themselves or their property in danger.

If you have to put bounds on "common sense", or contextualise it at all, then it isn't common, it's just knowledge shared by a group of people of unknown size.

You say it in your final sentence - "near universal". If it's not universal then what's the point? Where is the magical cutoff point of the population where "common knowledge" is no longer common? 95%? 50%? If so, why?

Such speculation is ludicrous because it's going to change drastically from area to area, and change even more depending on what you're talking about. And if such a definition is so fluid, when you're positing that it is in fact universal, then it's pretty clearly false.

"Common sense" and "common knowledge" are just shortcuts we use so we don't have to explain the entire contents of history in one go to explain one other concept. This is very useful. But relying on it existing _ALL_ the time is the work of a dangerous lunatic that has clearly not encountered a broad enough segment of humanity.


Thank you for your thoughtful reply. I think I'm gonna read up a bit more on the topic; it's quite interesting.

I think my personal stance is more on the idea that there is a lot we share as a species (from words, to symbols and human basics) and that could be grouped as common sense. But perhaps my grasp of english fails me here.

Looking at the etymology, it also seems to stem from "community": what is common among a community. I would wager things were very common among his peers when Thomas Paine wrote his book "Common Sense", in a world that to him was largely the same in style and substance.

Perhaps that concept simply can no longer apply in a pluralistic, global world, or in a modern sense like you point out with your examples.


I have pendulum nystagmus since birth. My eyes saccade and shake very rapidly, even when I'm focused on something. Folks who talk with me certainly notice.

I really hope I won't be locked out of using vision pro just because its input modality isn't compatible with my body...


You might be SOL on version 1 or 2 but they'll figure it out eventually.


That was my thinking. They don't always nail down accessibility issues right away, but I think they're fairly highly regarded in terms of getting to things eventually.

(I say "I think" because I have not actually used their accessibility features)

With all of the new paradigms in VisionOS it seems perhaps unavoidable that accessibility features might take some time to catch up. Engineers and designers have to wrap their minds around the new stuff.


Maybe. The eyes are a sort of the canary in the coal mine for lots of neurological issues. Sometimes people don’t notice problems because the brain adapts well, and accommodation may be more challenging than one might think.

I have a loved one having issues due to a brain cancer. Optical seizures were the first warning sign and went unnoticed for some time. It would be amazing if VisionOS could potentially detect some conditions or help folks with epilepsy just as the watch has for some cardiac events.


Their accessibility features are quite fascinating.

Sound Recognition, for example, or AssistiveTouch on the watch. Not needing those, I have no idea whether or not they're actually useful, but it sure is impressive and seems well thought out.


They describe alternate pointer modes in this presentation (18:30) https://developer.apple.com/wwdc23/10034


Index finger, wrist, and head are supported as alternate pointers.


One could hope they have an accessibility mode where the eye targets are larger. Or they offer a hardware controller pointer.


Curious. How do you go with 'normal' VR? Are there any problems playing games?


> The Minority Report movie probably set the industry back by a decade or two. Waving your hands around to control stuff seems logical but is quickly exhausting.

The bad idea had waves even before that: Remember the gorilla arm.

http://www.catb.org/jargon/html/G/gorilla-arm.html


I remember the gorilla arm, but I found that occasionally raising a hand from the keyboard and tapping the screen is not a noticeable effort, and it's faster than aiming at the touchpad or (of course) reaching for the mouse. I discovered this mostly by using a Samsung tablet, both with a DeX interface over Android and with an Ubuntu Unity one (the defunct DeX for Linux).

That's maybe because the screen is small and low. Raising an arm to or above head level is a different matter. If you're cycling with a heart rate monitor, you discover that you add some beats when you raise your hands (say, go from 150 to 155) and gain some back if you lower your chest close to the handlebar. This doesn't take into account increased or reduced drag, because I measured it at home on a trainer.


I've noticed the same thing when I ride in my basement. If I come up off of the hoods, grab a water bottle and take a drink my heart rate bumps up ~5bpm. It'll climb a bit more if I eat something like a granola bar or a gel.

What's interesting is that unlike steady state to a hard sprint, the heart rate rise is almost immediate. There's some delay - 20 to 45 seconds - if I change it up and do a sprint interval before my heart rate reflects that effort.


Strabismus and amblyopia are common enough (4% of the population) that one would hope these types of systems would make sensible provisions to accommodate people with these conditions.

https://www.childrenshospital.org/conditions/strabismus-and-...


Lazy eyes, I think, are probably common among software developers (I've got one, and most devs I know have eye issues). I think that's why Apple offers lenses too. I think (hope) they've got us covered, since we'll be the first to use, develop on, and afford these devices.


I’m with you on this one, but I also have nystagmus in both eyes. I’ve been curious but also somewhat apprehensive about stuff starting to rely on eye tracking and whether these edge cases will get handled.

Probably best for me not to be an early-adopter in this case :)

Edit to add: Can confirm that Apple stuff is amazing with accessibility, so I’m not surprised there are alternative input methods. My main thinking towards the future is around how nystagmus and foveated rendering will interact. That’s a giant and fascinating research topic I imagine.


I tried the HoloLens ages ago, and honestly the only upgrades needed would be the display and how reliably the inputs worked. Microsoft was demoing the HoloLens at some hackathon with a program where you could flip pages in a virtual book. On top of the book looking super low quality / illegible, I could never get the controls to work. The only thing that would make the Vision Pro seem like a worthwhile purchase is if the controls work flawlessly. I already have faith in the displays and picture.


I have a divergent squint in both eyes when tired/exhausted. I have yet to try VR with eye tracking. Furthermore, I think the future will be some kind of brain-reading implant/scanner.


They have accessibility/alternative input features to help with that


The arm waving is some form of exercise, which is a plus for me.


"Gorilla arm" is a term engineers coined about 30 years ago to describe what happens when people try to use these interfaces for an extended period of time. It's the touchscreen equivalent of carpal-tunnel syndrome. According to the New Hacker's Dictionary, "the arm begins to feel sore, cramped and oversized -- the operator looks like a gorilla while using the touchscreen and feels like one afterwards."

https://www.wired.com/2010/10/gorilla-arm-multitouch/


I guess none of the engineers lifted weights. That community has a bunch of terms for this.


Reading through his notes, I couldn’t help but think “I don’t know this guy, but it seems like he has spent years thinking deeply about interfaces. Also, he sounds ex-Apple.” It made me smile when I got towards the end and my suspicions were confirmed.

Andy’s thoughts were infinitely deeper than anything else I’ve read about the device+interface so far. In particular, I liked his observations that so far it’s mostly a 2D plane, with exceptions like the Breathe app and a few demos, which might just be good for demos.

Seeing the heart in 3D reminded me a lot of what we saw 10 years ago with what book publishers were trying to do with the iPad. Cool demos but limited real-world use.

Here was one particularly thoughtful section:

“unlike the physical version, a virtual infospace could contend with much more material than could actually fit in my field of view, because the computational medium affords dynamic filtering, searching, and navigation interactions (see Softspace for one attempt). And you could swap between persistent room-scale infospaces for different projects. I suspect that visionOS’s windowing system is not at all up to this task.“


Reading through it, I was struck by the contrast of how broken the web page was with what the author was saying about UX. The author seems experienced in the field, but the site felt completely broken. I don’t know if it’s a device or browser issue, but reading on my iPad, if I tried to use my right thumb to scroll, the page acted as if I had reached the end of the article, even though it was clear there was more because the last visible line was only half-visible. (And this was after having to manually zoom in because all the text was shoved to one side and tiny.) Scrolling with my left thumb worked about 50% of the time. The rest of the time it worked as poorly as scrolling with my right thumb. I gave up after about 5 paragraphs.


That site is not really suitable for mass consumption; IIRC it used to even have a caveat saying it really only existed to make certain leaf nodes sharable. It's basically a personal zettelkasten, but this update was also sent to his patrons, where the writing just ended up being whatever your email client made of it.


If you visit his top level website, you can see that it uses a columnar layout. I agree that it compromises the experience if a single article is linked like this, especially on smaller screens, but it isn't broken, this is how it is intended. It just doesn't cater to all formfactors.


Did we see the same website? The page I visited appeared to just be straight text. I’m on a iPhone with safari. It felt like a totally normal blog post by someone who doesn’t care for the current js heavy web.


Reader mode solved that.


That's very kind; thank you.


From years of observing first-hand how hard it is to develop software, I’m shocked that Apple continually makes products that fit their aesthetic (eg. The crown and headband evoking the Apple Watch) and have coherent and mostly-baked software on launch like this. Is there a book about how they’re able to operate at scale and deliver such consistent things? Do they just have an insanely good product team? How do they document requirements? It’s fascinating to me, because I think it’s their real competitive advantage - they get to the finish line with product that would make sense to a single, invested, super smart person, instead of a pile of inconsistencies and incoherencies that contradict each other as you move through the application. The only other product I feel is delivered at Apple’s level is CloudFlare, and I think that’s why they’ve grown so fast.


I know it's a joke, but I think it's still a good illustration: https://i.imgur.com/XLuaF0h.png

From everything I've heard, Apple's org structure is a lot more focused and UX-driven from the top down than basically any other large tech company. That means you get a lot less compromising and in-fighting.

The other side is how Apple makes money compared to other companies. Meta makes money mostly by selling you (the user) to advertisers (the real customers). Same with Google (70%ish ad revenue). Microsoft is a hot mess of competing B2B fiefdoms (the vast majority of their revenue comes from businesses, not consumers). Amazon only makes a few products directly for consumers, and mostly makes money from being a marketplace + AWS. That leaves Apple fairly uniquely positioned as a premium consumer hardware company that also sells consumer software services to enhance the experience and... drive more hardware sales.


If you just draw the org structure for Apple differently in that link, it's the same as Amazon.


I don't think so.

Apple builds an insanely strong core from which any and all other teams can (maybe must) build on top of and use to power their team's platform.

You have a core messages team for example - off of which Mac, iPhone, and Apple Watch teams integrate. Do that for each core service or product (photos, safari, notes, music, etc).

The Amazon structure is such that everything flows from the person above you, work is not shared between teams.

One is compositional (Apple). The other is inheritance (Amazon).


Look again. Amazon is a tree, Apple is hub and spoke.


If you look at it closely, any first level node with a child node at Apple can be bypassed by the central boss. The boss is connected to every other node.


You could say the same about Dyson or Oxo or Bodum. Having a single esthetic and coherent product line is a choice.

On the face of it, it sounds like a company would want coherency, but in practice it means leaving a lot of money on the table for every single of your products. Any consistent design decisions means anyone who's not pleased by it has no option in your lineup to go to. That's of course a gamble Apple is willing to take.

For instance, Apple consistently uses glass and metal in their product design; the only exceptions I can think of are the AirPods and their mouse. I personally preferred plastic for weight and comfort, so that put me out of the market for their headphones and I went to Sony instead.

Same for laptops with touchscreens, if you care any bit about them Apple is a no go to you. Same for so many things.

PS: on inconsistencies and incoherencies, Windows is the king of the kitchen-sink approach, and it has consistently stayed ahead of Apple in numbers. Another instance of how being opinionated isn't only a positive from a business perspective.


Your point is valid, but that ... diversity? ... comes at a _very_ steep price. Not only for the manufacturer, also for the consumer! With Apple I _know_ a couple of things about any future purchase, without ever having used, held or felt the particular product. Namely: 1) Build quality will be significantly higher than competitors, 2) It will hurt my feelings to look at the bill after checkout, 3) It will work as advertised (or slightly better).

Yes there also will be quirks I'll be unhappy with, which I _could_ (given enough time and experimentations) find a better solution for. But fundamentally I'll be getting a solid product, that works well enough. It is _that_ promise, which fundamentally allows Apple to do an online store and do a mostly directly-to-customer retail operation.

If you increase diversity, you make this "buy any product and you'll be fine" thing so much more complicated, losing more sales than the niches even contain. I'd guess almost all people just want it to work (TM); at least that's me, and that's also me when deciding on a phone for my mum.


I totally see this point, in particular as an opportunistic Apple customer myself. Apple doesn't need to do marketing regarding some aspects of their products (e.g. build quality) and can focus their messaging on more specific parts.

On the other hand, that doesn't seem to be what most consumers long for. Apple had two runaway successes with the iPod and then the iPhone, but note that the iPhone is not the best-selling phone brand, nor does it dominate the international market. Likewise, Apple is not the #1 computer maker.

So yes, focus and consistency brings tremendous value, but there are many other ways to get that level of value, and many customers will have different priorities.

(I also don't think any brand needs to be #1; being profitable should be enough. But I am more and more interested in how Windows has stayed at the top for so many years. In particular, Apple progressively feels closer to Sony at its peak: superb products, strong ecosystem binding, coupled with a heavy dose of strategy tax on how to benefit from what the other companies are doing.)


Porsche also.


Hypothesis: Apple is a hardware company that also does software, and that ethos allows them to ship more complex products over much longer time scales than their competitors.


I agree. Sometimes it feels like Apple ships software too slowly. For example, we have to wait for iOS 17 to get Whisper-level dictation on the iPhone, while Whisper has been out for almost a year. But maybe shipping at the pace of hardware is a good thing. Maybe we shouldn't deploy our sites on each commit.


If I were to speculate I'd say it's because they don't ship things until they are finished. I'm sure a lot of the polished launches we have seen have been late by 6, 12 or 24 months. If the market isn't ready, the product doesn't ship. If the technology isn't ready (say, a high enough resolution VR display) to create a wow product then the product doesn't ship. I think it takes a great organization to not "accidentally" ship bad things. You need to prevent bonuses, egos, sunk costs, and self-deception from shipping something bad prematurely.


Creative Selection by Ken Kocienda is the closest, modern look inside Apple's product development that I've found. Focus is on the development of the software for the original iPhone and is a great book.


Yes! this book is a first-person account of product development at Apple, showing how both the design and the features of a system tool keep progressing under the selective pressure of iterative review sessions, with creative and specific challenges provided by managers at multiple levels, all the way to the top (Steve Jobs at the time of the book's tales). It's a great book, very well written, the only one I've read that provides insight into (some of) Apple's creative process. When they say that Steve's DNA has profoundly shaped the company, I imagine this is one aspect of what they mean.


It's Steve Jobs' legacy and a function of the authoritarian, highly controlled process he created. A democratic self-directed environment like Google or design-by-committee like Microsoft could never produce such a product line. Even an Intel with billions more in funding cannot recreate TSMC factories.


You might enjoy Ken Kocienda's "Creative Selection", which gives his first-person account of Apple's creative process.


They're absolutely awful at online / web-based services though. Embarrassingly bad at that kind of software.


I wonder how much can be attributed to Radar, their homegrown bug tracking / project management service.


Fascinating... Where can I read more about this Radar tool?


not only that but how they manage to keep it all relatively hush hush until the big reveal.


I echo the awe at the hardware and the disappointment at the paradigm. I wish the keynote had three extra demos:

- Something whimsical to inspire developers. Give me files represented as physical blocks that I can pile, or a task manager that shows processes as plants around me. Close apps by scrunching their screen like paper. A globe with my geotagged pictures.

- A game, any game. Beat Saber was the killer app for me to get my Vive, and Valve already spent tens of millions of dollars to create a triple-A VR game. Neither seem compatible with the Vision Pro input methods. Apple could at least play their strengths, like an AR hide-and-seek, or a horror game with eye tracking.

- Content creation or professional work. The last thing we need is another passive device to watch 3D TikTok. Show someone customizing their "home screen" environment to look like a fantasy potion shop; or a mechanic looking at a 20x-magnified broken part; or an SRE watching their Kubernetes cluster as a floating 3D graph; or a VFX artist in the scene, scrubbing forward and backward to adjust an effect.

It feels like inventing the first smartphone, camera apps and all, but it doesn't make calls and the only input is tapping on icons.


> or a task manager that shows processes as plants around me.

We finally have the technology to bring https://www.cs.unm.edu/~dlchao/flake/doom/chi/chi.html to life


>- Something whimsical to inspire developers. Give me files represented as physical blocks that I can pile, or a task manager that shows processes as plants around me. Close apps by scrunching their screen like paper. A globe with my geotagged pictures.

I respect Apple for not doing this. These are demos that are kind of fun for 20 minutes but you would never want to actually use a device like this: https://youtu.be/z4FGzE4endQ?t=59


> A game, any game.

By far the most disappointing part of the keynote was when they finally do show someone playing a game, and it's a basketball game played with a regular controller on a flat, virtual TV. The TV is even smaller than an average real-world TV.

It's typical of Apple to so completely ignore gaming, but it's disappointing that they're continuing that trend even with VR.


Realistically, that's probably the best we are getting for a while. VR gaming has had a difficult time. The physical space requirements are immense and the gameplay types have been limited.

A lot of value could be had playing flat screen games this way. Imagine sitting on a plane with a controller in your lap, but it feels like you are sitting in a private loungeroom or some zen environment playing a game on a large TV.


I didn't expect them to show a first-person shooter, but there are so many possibilities for casual games that actually utilize VR/AR in some meaningful way.

The fact that what they came up with was "game in a rectangle, but this time floating in the air" makes me think that either there are major limitations to what the AR platform is capable of or that they're intentionally trying to tell people "gaming isn't a priority here".


Because gamers, or even consumers, are not the target demographic for a $3,500 device. They want to brand it as a device for enthusiasts (but Apple everything!) or professionals (work video chat, artists, graphic designers).


you mean the demographic that regularly buys enthusiast-tier $2,000 PCs and $1,500 VR headsets is not a demo that apple wants buying their enthusiast-tier $3,500 device?

apple's anti-gamer slant has always stunk to me of some exec being salty about what went down with nvidia and hating the idea of ppl playing games. their slapfight with epic games is another example of them not giving 2 shits about the gaming market and alienating a fast-growing userbase. hell, they didn't mention live streaming at all, and that seems like a use case this thing is tailor-built to be good at, considering how many cameras and screens it has!

meanwhile they want people to buy into apple arcade...


  > Give me files represented as physical blocks that I can pile
Man, people still want a file manager like fsn on IRIX systems, 30 years later...


I used Eagle Mode[1][2] for a number of years, and it was non-ironically the best file manager I've ever used. I only stopped because it required dozens of very out-of-date packages for its preview features.

But the point of whimsical demos is inspiration, like a concept car, not complete realism.

[1] https://eaglemode.sourceforge.net/ [2] https://www.youtube.com/watch?v=G6yPQKt3mBA


Update: it seems there was a new release some months ago, and the UI feels a bit fresher. Time to give it another try.


This is UNIX! I know this!


VisionOS uses old-school UI like blocks and doesn't do anything new, just reusing design paradigms from iPadOS/macOS. It makes sense why people would think of it as just a computer they strap to their face.

Half of their marketing was about how it's essentially a MacBook that you strap to your face...


It takes time for developers to get the device and begin writing software for it. As time marches on, those things that we all find value in get bought by Apple or Google and become the default apps for things. If someone then decides to try something crazy, cool. If not, someone else later will make a competing device that tries. If it wins, cool. If it doesn’t, someone will make it way later still (like Apple recreated what General Magic was working on way later).


Nobody may need TikTok on their face, but that doesn’t mean many wouldn’t want it.

TikTok on your face likely has better market potential than the dreadful Horizon Worlds.


I think Apple is doing smart things here with their software:

- leverage existing content means they won't have an empty room problem or a big "now what?!" moment for users after the novelty wears off. Without Steam, most existing VR platforms would be completely pointless. So far VR is for games and most of those are published on Steam. That's because most VR hardware vendors suck at software and end up outsourcing that to game studios. Meta included. Despite their ambitions, their goggles are mainly devices for running games not developed by Meta.

- a focus on the living room experience, with a high-end movie theater experience running, again, off existing content. Genius move, because people already spend lots of money on home theaters. Some people even buy $10K plasma screens. This market is very real. Also, involving Disney with their huge back catalog of completely unmonetized 3D movies ... so obvious.

- Extend the highly successful iOS ecosystem to the new experience. They have millions of apps already. And they'll work fine in AR. Why not do that?

The strategy is about content. It's the right strategy. The first generation hardware is of course amazing too and it will bootstrap a new generation of application developers that will be using Apple SDKs and tools to target all this with new content. But to bootstrap the ecosystem they need an audience.

Here too the strategy is genius: SDKs are based on things developers already use. They just co-announced a push into gaming for Mac along with some convenient porting kits for developers. That was just a footnote buried deep in the Mac-related announcements. But of course this means more content coming to AR as well. All that lovely content currently available via Steam.

Top to bottom the strategy is about compelling content. I think it's going to work.

This is not the final answer to AR but the opening salvo in a decades long push to completely own this space. Step 0 is to get millions of these things in the market with enough content to get people hooked and keep them consuming content. The rest will come later. It's appropriately radical, pragmatic, and conservative at the same time.


It is very unfortunate to see repeated claims here implying or directly stating that there is something revolutionary, beyond the power (and matching price), without any acknowledgement of the existing space.

> a focus on the living room experience with a high end movie theater experience running, again, off existing content. Genius move

Cool. Tell NReal and BigScreen that, they and others have already done it on Quest and other platforms. Nothing new here.

> Without Steam, most existing VR platforms would be completely pointless. So far VR is for games and most of those are published on Steam.

Tell that to the many Quest users who have delighted in the wireless experience and superb exercise apps and other new vistas opened up to them in their existing platform, including remote working via services like Immersed. The latter can absolutely be boosted with the new specs the Vision Pro is bringing.

Look, this is a bold move by Apple, and I applaud it; I think it could give a welcome boost to the industry. But please can we stop with the nonsense about them creating something that wasn't already there, or at least elaborate on what it brings and improves with reference to the prior art.

That information - which I don't doubt is there, this is an exciting product - will be genuinely useful.


It's the same song every time Apple releases a new product. Yes, bits and pieces have been present in a variety of already existing products, but there is simply no one single existing product you could compare this one to. And history has simply shown that Apple usually gets a lot of things right and is a main driver in mass adoption. I don't know if that's gonna be the case here, since it's still "a lot" to put on something intrusive like that on your face for most people, but it's just not right to compare it directly to a Quest or something.

Similarly, sure, maybe NReal and BigScreen have done something similar and were earlier in doing so. But they simply don't have the same audience Apple does. I get the frustration -- we shouldn't credit Apple as the sole inventor of all good things. But they are usually really good in stitching a lot of good things together and releasing them in a single product that usually turns out fairly successful.


If the strategy is fully about content (which would make this device about selling services on top of the Vision Pro), why is the Vision Pro specced as it is, and why is the price point so high?

The specification of the device seems to be complete overkill for the kind of strategy you are describing, which frankly does not need extreme hardware power, and makes it cost prohibitive to many of the people who would otherwise buy the content.

All I can think of is that there are lower-specced and lower-priced devices coming, but I am sort of at a loss as to why they didn't start with those, because if this fails, they aren't going to get off the ground.


It's specced the way it is to deliver the best-in-class, best possible content experience without compromises, in initially small volumes. Neither of those things is an accident. Scaling up manufacturing is hard and expensive. Much easier to create a lot of hype for an expensive device that is going to be so popular that they won't be able to satisfy demand. So, they'll make lots of them and sell out in no time. The eBay value of these things is going to be insane.

By the way, this is a similar price point to the Magic Leap and HoloLens devices. This just seems to be what it takes for a state-of-the-art AR/MR experience.

And it's not like there is a shortage of very wealthy people buying things like Teslas, iPad Pros, and fully specced-out iPhones, who carry thousands of dollars' worth of equipment. For those people $3,500 is just the right level of premium and exclusivity. Like it or not, Apple is very good at separating people like that from their cash. There are millions of them and most of them are already loyal customers. There is no need for a lower price point and there absolutely is no need to compromise on quality.

The goal isn't to make you happy but to make you crave something you can't afford yet. So, when the time comes you'll be spending a bit more than is comfortable for you. That's what Apple does. They leave the low margin scraps to others.


It’s a fair point, but 1/ Apple for the most part didn’t start its product journey with the ‘Pro’ version for any of the other products you’ve mentioned; its strategy has generally been to get people to fall in love with the regular version and convince them they want more of it, doubly true when they’re introducing a new category of product. And 2/ it’s strange because I own many of those fully specced-out things you mention, and this leaves me utterly cold. Fwiw, I also own a Kindle.


Yeah, this is the Vision Pro. That leaves room in Apple’s usual nomenclature for Vision Air, Vision Max, Vision Mini, Vision SE, and just plain Vision.

The high-end system spec is to ensure there’s no tethering requirement, thus the computer-class SoC in there.

Will help keep people from turning into Stephenson’s gargoyles. :)


I see a lot of the pricey functionality they packed into this thing as about making it less onerous to put on. People need to actually decide to put it on to do stuff that they could do with their existing devices.

There's one unavoidable obstacle to people wanting to put on the headset: you're going to look really weird and isolated to other people in your environment. The tech to solve this problem doesn't exist, and maybe it's too high a barrier and this product category just isn't going to work. But Apple is betting that if they chip away at all the other reasons to not put on the headset, it'll cross the threshold where wearing it becomes a regular routine for most people who buy one.

If they can bootstrap that into an ecosystem so that there's things you'll want to do with the device that you couldn't do with your other devices, at that point they can start selling a cheaper device that doesn't try to solve the onerousness problem through sheer luxury.


Maybe this is the minimum spec device that allows users to read text without getting nauseous?


Leveraging existing content from the iOS/macOS app store is going to be key I think. The "now what" is a common issue with the Quest. It's a fun little toy but once you realise they're charging AAA prices for what are essentially tech demos in many cases it becomes a bit of a hard sell.

That, combined with Meta boasting "over 500 titles" like it's a good thing, really does put into perspective just how devoid of content the Quest really is.


>a focus on the living room experience with a high end movie theater experience

One of my biggest impressions watching Apple's trailer was that this paints a big target on televisions and home theater hardware in general.

If the experience is executed well, it would be much cheaper (yes, really) and easier and simpler to just get Apple's VR goggle instead of a TV and peripherals that will infuriate you.

Clearly we live in interesting times.


I rarely watch TV all by myself. I can’t imagine buying two, three or four of these devices and then sitting on the couch with three other people in isolation.


If it is about content, why would everything still have to be "apps" in their walled garden? If they want content, let people freely use it to their liking and produce and innovate new things.


> Given how ambitious the hardware package is, the software paradigm is surprisingly conservative.

I actually see this as a bit less surprising. After all, if you change the hardware in a big way, and you change the software in a big way, users will have a harder time adjusting to the new platform. Instead, they're making a big leap on the hardware side, keeping legacy apps and concepts, and then will presumably iterate to more 'native' experiences that were previously impossible/unimaginable/unintuitive.


Text is still a large part of the interfaces we use.

We are all highly trained to read text, it seems basic but it is in practice quite abstract.

Text is still best read on a flat surface.

The great innovation I can see with this new Apple device is eye tracking; they have not invented it, but they might have perfected it enough to be usable.

Eyes could be better than a mouse.


Eye tracking is almost certainly more accurate and faster. To the point where we've made an entire game out of seeing who can move the mouse to what their eyes are already looking at fastest, in the form of shooter games.


Interesting point — if you no longer need to use a joystick to aim your weapon, how will controllers evolve? Will the second joystick be used for some other function, or will it be replaced by a different type of input method?

It would be funny if controllers evolved to be more like the single-joystick models that we had decades ago, with the joystick on the left and rows of buttons on the right. History doesn't repeat itself, but perhaps it'll rhyme?


One joystick has to exist, since there are many video games where the character is barely the focus; think of hack-and-slash games with a top-down camera (the recent Diablo 4), where the majority of the focus is on where the attacks are going.

In top-down bullet hell shooters the player has to aim and shoot in one direction while walking in another (potentially the opposite), so one stick is still needed.


Agreed — I'm thinking we'd have just one joystick, to control movement. The second joystick could evolve into some new interface, or devolve into a row of buttons.


PSVR2 already has eye tracking. And unlike the Apple Vision it also has controllers with a stick. Perhaps there are already such games which let you aim with your eyes (assuming you are shooting at something), while you move around with one stick, and the second stick isn't used at all.


Yeah, if their eye tracking plus foveated rendering works as advertised, it could be a huge step forward. I'm really curious how responsive the gesture controls will be too, it was really cool seeing the finger pinches(?) being used as an input method. I wonder if it's specifically designed just for that thing or if it's all built out to track any arbitrary hand gesture accurately. And I wonder what the language/api for describing hand gestures would even look like.


I don’t doubt them. The PSVR2 has both and it supposedly works very well.

That lacks other features of course and must be tethered to a PlayStation 5.

But eye tracking + foveated rendering seems like it’s going to become table stakes in the next few years.


In the demo, I was hoping to see some app/screen/display anchored to certain walls or being fixed in a 3D spot. That way I can walk to a place to see that "screen".


In the demo there is a scene where a guy displays some iceberg photo on a virtual screen and then moves closer to it, with the screen anchored to some 3D point in the real world.


They kept doing that, like the woman sitting on the sofa watching a film leans down towards her popcorn[1] and the screen doesn't move, or the woman lying in bed watching the ceiling change into a blue sky[2] instead of just a blue sky where she was looking.

[1] https://youtu.be/TX9qSaGXFyg?t=180

[2] Same video, 5min 12s: https://youtu.be/TX9qSaGXFyg?t=312


I think they'll inevitably introduce this down the track. The idea that you can set up a workspace as fixed to a particular place. Sit on the couch, and by default it has media apps and a big screen. Put it on in your office, and you can cover the walls with reference drawings and appropriate apps that are all where you left them.
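As a toy sketch of what "fixed to a particular place" means computationally (my own illustration, not Apple's actual API): the anchored content keeps a constant world-space position, and each frame it is simply re-expressed in the viewer's current frame. Here it is reduced to 2D top-down with yaw only:

```python
import math

def world_to_viewer(point, cam_pos, cam_yaw):
    """Re-express a fixed world-space point in the viewer's local frame."""
    dx = point[0] - cam_pos[0]
    dz = point[1] - cam_pos[1]
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    # Rotate the world-space offset by the inverse of the viewer's yaw
    # (2D for brevity; a real system uses full 6-DoF poses from SLAM).
    return (c * dx - s * dz, s * dx + c * dz)

# A virtual screen anchored 2 m in front of the starting position.
screen = (0.0, 2.0)

# Viewer at the origin, facing forward: the screen sits straight ahead.
print(world_to_viewer(screen, (0.0, 0.0), 0.0))   # (0.0, 2.0)

# Viewer steps 1 m to the right: the screen now appears 1 m to the left.
print(world_to_viewer(screen, (1.0, 0.0), 0.0))   # (-1.0, 2.0)
```

The anchor itself never moves; only the transform does. The hard part in practice is keeping the tracked pose accurate and drift-free, which is exactly where ARKit-style tracking has historically struggled.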


Isn’t that an intractable problem without LiDAR? I suppose the naive solution would be to fix the screens in spots as you suggest as opposed to pinning onto real life locations.

edit: ah, right the vision pro does have lidar. I look forward to seeing this pinning if it's implemented. I'm still skeptical it would work well - I've used ARKit and it's not really that accurate when you're moving around.


The Quest has been doing this for years with the Guardian setup, and now with their option to bring your desk and couch into VR. I think the Pro has more advanced features, but I haven’t tried it. You do have to assist it by drawing out boundaries, but it’s pretty easy to do.


It should be relatively simple with a QR code or similar physical marker in the real world.


With one, yeah, I agree. Without, though, it seems difficult.


why do you guys say this? Amazon's "View in your room" feature does a decent job of placing furniture in AR. We can even walk around/towards it. And that tech is just Apple ARKit, right?


I don’t think it’s that good. “Decent” is accurate


right.. so maybe Apple just thinks it is not good enough yet. They must be stretched on onboard compute capacity already.

And some might argue you just switch apps to find that "screen".

Personally I would put something interesting on the ceiling and look up at it.


Is it possible to track your position that accurately in an arbitrarily large space?


It does have LiDAR, same as the iPhone/iPad AR stuff has had for years now.


But the Apple Vision has LiDAR.


That was repeated a bunch of times during the keynote: "familiar"


I find it interesting that Microsoft had _so much of this_ 9 years ago. I developed with the HoloLens 1 around 2016 and I recall:

* Fingers together gesture clicking.

* Voice activated menu navigation. It was glitchy/I never used it though. In 2023 these sorts of systems are much better though.

* No controllers. It was all gesture based. Opening the start menu required your hand upturned, fingers together then outstretch. Kind of like an "open" gesture.

* "Pointing" based on head looking vector, which was annoying.

* Spatial anchors and being able to remember past spaces and how you used them. There was a whole set of SDK APIs for the spatial stuff built into Win 10

The Vision Pro iterates on some of these. Eye tracking for pointing and more cameras for tracking hand pose for clicking, to reduce the annoyance/strain. IMO all mandatory for long term usage (I frequently got motion sick with the HoloLens and the 3 months of debugging/developing on it were a challenge).

Maybe if Microsoft had switched to more commonplace display technology and continued to iterate on their product instead of letting it languish, they could have had a solid competitor to this, if not been first to market.


Microsoft and other companies ultimately chickened out of taking risk with this. It happens a lot. I worked for Nokia around the time that the rumors about the iphone started swirling in 2006. Exact same thing. At that time, Nokia had multiple touch screen operating systems, a linux based internet tablet running a mozilla based browser with a webcam, Symbian phones running browsers based on khtml (which later evolved into webkit, safari and chrome), lots of nice hardware, lots of investment in hardware, chips, radio technology, etc. A lot of mobile firsts happened long before there were iphones, android phones, ipads, etc. Nokia had a huge technical moat.

Six years later Nokia pulled the plug on the whole thing because it got absolutely steamrolled by Apple, who had none of those things but started building them around the same time Nokia decided to focus on flip phones instead (sorry, this level of corporate insanity still makes me angry). They were worried about the Motorola Razr more than they were about the iphone. They actually killed multiple touch screen platforms because of that, before scrambling to re-build one around the sinking ship that was Symbian. And then they badly botched that. Reason: Nokia management was arrogant, out of touch, clueless, incompetent, complacent and utterly in denial about there even being a problem. Six years later the whole lot got fired and sold off to ... Microsoft. Who of course promptly killed it because Apple had won already and, between them and Google, MS saw no path to success. So, they walked away from the whole thing.

That's the company we're talking about here. So, sure, MS has stuff that they've been sitting on for ages. But what they've lacked for years is any clue as to what to do with all that stuff. Like with the iphone, Apple has been known to prepare something for years. And people just assumed it would amount to nothing, or be disappointing.

IMHO things are not that hopeless for MS and they could pull a few things together. Xbox, HoloLens, and some other bits and pieces could make for a half-decent experience. Apparently they are talking to Magic Leap about licensing some of their tech as well. But that's not going to happen overnight, and a rush job is not going to be good enough here.


This is classic Apple. They're never the first mover; they let others fail, learn from those mistakes, and make a pretty polished product that everyone thinks they created the genre.


Classic is right. That period ended with the iPad reveal almost 15 years ago.


Are we gonna pretend the Apple Watch didn't exist? AirPods, if we want to stretch?

Pebbles and other smartwatches fit the criteria, but never delivered a cohesive package. I do admit that the first 3 or so generations of the Apple Watch were pretty rubbish, but from the 4th onward, with the possibility of all-day battery life, the Watch became a clear winner compared to those from all other manufacturers. Garmins and other fitness-first watches hold the crown when it comes to a more slimmed-down and focused feature set, but as a generic smartwatch the Apple Watch is untouchable.


The corollary to my original point is that Apple's first-gen product isn't great (but still better than the competition), but after a gen or two it's the bomb. OG iPhone? Meh. iPhone 3GS? That's where it became useful. OG iPad? Meh. iPad 2? Yep.


It wouldn’t surprise me if Apple has been working on this for more than 9 years. They know not to release until all parts are mature enough to make a successful release. Sometimes the hardware just isn’t there yet, and you have to wait a few generations for it to be smaller/faster/have more pixels/use less energy. And even today it seems that the hardware is not at a level where they can do a successful consumer release, so they have a Pro version for markets that can work around the limitations of today’s hardware, like the huge size of the goggles or the BOM cost.


Interesting notes.

I'm disappointed even though it's entirely predictable that VisionOS is built on the iOS/iPadOS foundation rather than OSX. I guess we'll see how "walled in" it is but it's hard to see any reason Apple isn't going to be just as constraining and controlling about what happens on this OS as they are on iOS, if not more so. Which ultimately means I'll be very reluctant to ever adopt it in any meaningful way as my primary computing device.


It's a bit odd if you think about it: a desktop OS effectively creates a virtual 3D space for windows to live in (move them in X and Y, or change their depth relative to each other), while the iPad or iPhone treats the surface as just a simple 2D plane with no depth or really even XY axes (you could sort of argue that the app switcher is an X axis).

So they're effectively recreating a lot of the elements of the macOS w/r/t window positioning, resizing, and depth, but building all on top of the iPadOS paradigm. If you consider the input methods are probably closer to a touch interface than a MKB interface then it kinda makes sense.

Personally I have zero interest whatsoever in living inside Apple's "only what we allow you to do" world, and the fact that they seem to be expanding it to what seems like a very powerful, desktop replacement computing device is kinda gross to me. But from an interface standpoint I think it's really interesting and actually makes some sense.


Definitely pros and cons. I remember reading that because of the tight integration with iOS this allowed them to achieve best in class latency for iPad + pencil that couldn't be achieved on any other platform. Having followed Oculus / Quest development that low latency is not optional in this context and every millisecond counts so I can see why they would go this route.

On the other hand, the closed ecosystem is definitely cause for concern. Fingers crossed that WebXR support comes out from behind a feature flag to allow for progressive (spatial) web apps.


> best in class latency for iPad + pencil

Hmm. Marketing horse-puckey.

A 2018 Dell XPS that has a Wacom digitiser and OneNote gives me lower latency and better inking experience than an iPad and pencil 2. I used them side by side for long-form writing for a while and the iPad was left in the dust.


You might be right it's more marketing than substance, but FWIW this is the article I was thinking of praising low latency of iPad + stylus: https://danluu.com/term-latency/


Yeah, this was my main thought. However good the hardware is, it doesn't matter if the software can't fully utilise it. The idea of virtual displays (the most obvious immediate benefit of Vision, imo) for macOS seems like a huge benefit, but for iPadOS it's downgraded to merely pretty cool. I've never felt any particular need to add multiple displays to my iPad, and whilst a big display would be nice, I wouldn't describe it as groundbreaking (especially as others such as XReal are already doing this in a much smaller form factor).


This might have made sense with the iPad but I can't see how anyone is going to write software that 'fully utilizes' the system in a way that Apple doesn't support without billions of dollars of investment.


Also it’s locked to Siri, which is dogshit and falling further behind by the second. Apple’s speech-to-text is atrocious; the OpenAI tools blow it out of the water, as does Google’s.

No matter how fancy the visuals, it’s hard to have a pseudo-hands free voice interface that doesn’t uh, work?


It has to be evident to Apple that Siri needs dramatic changes. They keep building hardware that needs robust voice support.


I would not be so sure, mainly because of ecosystem lock-in. I sincerely doubt a double-digit percentage of Apple employees have touched an Android phone or used OpenAI APIs. The big problem with ecosystems is you don’t know what you’re missing.


> I sincerely doubt a double-digit percentage of Apple employees have touched an Android phone or used OpenAI APIs

I strongly doubt you're right. Apple is full of geeks; just because they tend to be tight-lipped doesn't mean they aren't paying attention to all the same stuff we are.


Double digit? I live walking distance to Apple HQ and there’s not exactly a lot of people in the shops and cafes with Android phones and you practically never see someone pulling out Google pay at a register.

Whereas I’d wager more than half of Google employees use iPhones, and a lot of Google staff work exclusively on iOS and Mac apps. Apple software runs on their hardware, period. It makes them too insular and unable to see their shortcomings.


The question was whether they’ve touched an Android phone, not whether they use them regularly.


Siri has been atrocious for going on a decade now. How much longer does it have to be terrible before we can say the people at Apple don't dogfood?

Hell, it was bad as a voice assistant; now it's gonna get lapped by other companies releasing personalized AI assistants.


I don’t use Android or OpenAI APIs, but I’ve been burnt frequently by Siri. You don’t have to use something better to know that it’s pretty bad.


Yes, but you don’t know how bad it is until you go from daily driving something that works back to garbage.

I switch between iPhone and Pixel and Apple/Google products and each has a lot going for it. But leaving Assistant for Siri is downright jarring.

Why Google made the shitshow that is Bard when they could have just iterated on Assistant I’ll never know. Such a solid product.


Siri is my alarm clock and my weather report, that's about it. Siri needs a lot of help.


You don’t even need to leave the ecosystem: Siri in 2023 is much worse than Siri in 2018 or so.


The regression in autocorrect is also nuts. It seems to aggressively modify grammar to the point of regularly changing the meaning of what I’m writing. It’s like a CS undergrad project where they went too hard on Markov chains.
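For the curious, the kind of frequency-driven prediction being joked about can be sketched in a few lines (a toy bigram model of my own, emphatically not Apple's actual system). It shows why a rare-but-correct word gets steamrolled by the most common successor:

```python
from collections import Counter, defaultdict

# Train a toy bigram model on a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the food the dog sat".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    """Return the most frequent word seen after prev_word, or None."""
    following = bigrams[prev_word]
    return following.most_common(1)[0][0] if following else None

# After "the", the model always pushes the most common successor,
# regardless of the rarer (but correct) word you actually typed.
print(predict("the"))  # cat
```

A model like this has no notion of what you meant, only what is statistically common, which is exactly the "dumbing down" behavior people are describing.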


It’s absolutely horrible: it will go back and modify correct words to fit whatever forsaken chain it decided it wants. I hate it.


I also have a fairly expansive vocabulary and talk about esoteric technical matters on my phone frequently. As far as I can tell Siri was trained on 8th-grade level text and constantly dumbs down my text and distorts the meaning.


Ugh, so true. I'm both a software developer and an avid reader of literature & philosophy, so my day to day word stream is both extremely varied, and frequently allusive with cryptic phrases and idioms. Not to mention the fact I do actually care about correct punctuation. Autocorrect and voice typing have long since given up on me.


You can help "a little bit" by adding some auto-replacements to the keyboard setup, but it doesn't work very well.

It would be really nice if there was a dictionary you could add technical words to.


if i understood the wwdc keynote correctly, they completely redid the speech-to-text, so this should actually be good now (maybe)


Yeah, but they’re so far behind even if they 10x Siri they’ve got nothing on OpenAI.


Then again.. Siri is all on device. I don’t want to send my voice anywhere


The current voice transcription engine at OpenAI is Whisper-1[0], which is open source and runnable locally, if you wanted to keep it all on-device. I run it locally for various things and it works pretty damn well.

[0] https://github.com/openai/whisper


I have used whisper but it seems very picky about which device it will perform well on ("well" meaning quickly)


So is Google’s version and it’s a 100x difference, but agree re OpenAI.


> Siri is all on device.

No it’s not. Try doing literally anything without an active internet connection and she’ll say “I’m having trouble connecting”


I also wonder if it made more sense from a security perspective. In the early iOS days, one of the big problems for desktop apps was unrestricted access to your data. iOS improved this with application sandboxing by default, and it eventually made its way into OS X.

But we all know the difference between implementing something brand new versus retrofitting something into an existing system. So my guess is iOS had more secure defaults for a system like vision.


As far as I know, IntelliJ doesn’t run on iPad, so they’ll need to fix that problem before I consider adopting one.

Though I do remember reading you can connect it to your mac, so who knows. You’ll just need 2 $2000+ devices instead of one.


Be more specific about what you mean by building on "OSX". AppKit is very old and crusty. Why would they build on AppKit? Is that what you want?


Ultimately it's more about the openness of the OS. If they built on iPadOS but made a reasonable experience for freely sideloading apps, I'd be OK with it. Or even if they support WebXR really well, I might (just) find a way to be OK with that. But if they lock this down and position themselves as rent-seekers, taking a cut from every piece of software that runs there regardless of how much they contributed to it, I'm out.


Well, I remember seeing you can bring your MacBook or other devices into the space as a virtual window/screen/monitor (whatever they're calling it), so I already figured I can run VS Code and other apps at the version 1 release. But I know what you mean.


You don't think they are contributing something by creating this platform?


From a developer POV it's about the difference with the other platforms. If Apple provides a platform that bring the dev 50% more profit than the status quo, sure taking a cut of that extra profit could make sense.

If that same dev would have made the same profits on other competing platforms, Apple is just fragmenting the market, and its take should be limited to basically management fees.

I hope this new platform brings newer applications and innovations, but also expect competition to adjust, making it a situation where Apple is just one of the many platforms.


In the iOS vs Android case, iOS does in fact typically bring at least 50% more profit to developers, often far more.


It's more complex than that... the whole limitation on promoting your existing membership services, the rules enforced on pricing, passing the 30% on to users, etc., make it a much muddier picture.

iOS sure brings some value to the table. How much, actually? Who knows...

For comparison, YouTube is also a platform regularly criticized by its participating creators, but you're not seeing high-profile lawsuits or the EU slapping them with fines at every turn.


No it’s not more complicated. iOS brings in twice as much app revenue than Android in total.

What you’ve said doesn’t change this at all.


I'm not sure where you're getting your figure of twice as much, but I assume it's the App Store vs. Google Play breakdown?

That number is the total revenue for the whole store (there's a lot to say about it, since Google Play is also not the only store for Android, in particular in China and Korea), and it matters very little for any individual developer.

Just because total in-app purchases for all games are twice as much on iOS doesn't mean my new SSH shell app will make twice as much on iOS as on another store, for instance.


Most US developers are not producing versions for Chinese app stores. That’s a red herring.

Of course individual products have their own idiosyncrasies, but it is in fact the case that, on average, an app on iOS will make more than the same app on Android.


Not as such, no.

Part of (my) definition of a platform is that when you buy it you are fully paying for the cost and contribution the platform provider made when you buy it. If there's a residual financial (or other) obligation after that then you don't have a platform, you have a partnership. I know there's other definitions of platform but that's the one that's important to me.


I don't see why that should be. Obviously you have a right to do whatever deals you like and not buy this product, but it seems to me that one reason Apple is investing so much in this is because they expect to recoup a huge reward over time, not just from device sales.

I guess you can always buy an android based headset that allows sideloading but presumably it won't have the same level of investment.


You're proving their point. That they expect revenue (huge reward) after the initial purchase is, by definition, rent-seeking.


It would be rent seeking if they didn’t continue to make investments in the platform, but nobody believes that is their plan.


you don't have to guess - I bought a Quest Pro which allows side loading

As far as this goes ...

> it won't have the same level of investment.

Meta is constantly mocked for how much they have invested. You can argue about how much it paid off, but they certainly invested.


Right, and the Quest Pro doesn't compare well to what Apple is marketing here. I couldn't accept a job working with a VR company because I couldn't get past the nausea the Quest Pro induces in me.

I think the point is that Meta's investment looks weak compared to Apple's full-court press. It really is laughable.


> I couldn’t accept a job working with a VR company because I couldn’t get past the nausea the Quest Pro induces

Ouch. I'm sorry to hear that, I feel sorry for folks like you that are sensitive to nausea. There's so much potential in this tech but I can see a whole new class of disadvantaged people coming who aren't able to fully utilise it.


Yeah - I mean obviously I don’t know that Apple has solved this. If they have, it’s a clear winner as far as I’m concerned.


Yeah, it kind of sucks because it seems like I won't be able to use something like Hammerspoon to control windows and other UI elements the way you can on Mac.


> I'm disappointed even though it's entirely predictable that VisionOS is built on the iOS/iPadOS foundation rather than OSX.

They’re the same foundation, iOS is a fork of OS X and was originally announced as such. I’d be shocked if there isn’t a well maintained internal build of iOS which keeps parity with macOS windowing etc.

The difference in restrictions between the platforms is entirely arbitrary. If this device appeals to Mac users or would-be Mac users, I’d be shocked if it doesn’t at least eventually allow a Mac-like environment.


It certainly isn't arbitrary: macOS has swap and doesn't kill apps; iOS doesn't have swap and does kill apps. This makes everything very different, and one makes real-time rendering easier than the other.


I guess I should clarify. It’s arbitrary in that the restrictions are policy-based, not limitations of the underlying technology. There’s no technical reason iOS couldn’t have swap, it’s not available because trade off decisions were made to enable use cases on much more limited devices.


The recent iPads based on M series chips now support swap. I think it was added in iPadOS 16 last year.


It's different-ish. iPadOS is still a more limited/controlled OS and the swap policy is also more limited. You can see it in the open source kernel where it's called "app swap", so it doesn't swap out the OS, but on macOS it can.

"Limited" makes it, well, limited, but it has performance advantages: fewer unexpected page faults mean audio/video processes won't drop frames as often, and there are fewer disk writes, which is good for the lifespan of the hardware it runs on.

Btw iOS actually has a very limited swap policy called "freezer" that only applies to suspended background apps, and only when it's in the performance budget.


Where can I read more about this?


The xnu source code on github. The rest of it… well you just did read about it. There's an older book called Mac OS X Internals that covers some of it.


> What if programs live in places, live in physical objects in your space? For instance, you might place all kinds of computational objects in your kitchen: timers above your stove; knife work reference overlays above your cutting board; a representation of your fridge’s contents; a catalog of recipes organized by season; etc.

I'd prefer an OS that spans multiple devices. Something like how computers behave in "The Expanse", where someone can flick stuff from their mobile device to another system for a better workspace. If I'm typing this on my workstation but want to switch environments and bring something light like a tablet or just my phone, I could send the tab over and it would retain its state. The closest thing to this is a server-client architecture like Logitech Media Server. A paradigm like this would be more useful than immersion.

This is also why I like Vim. Buffer is a separate concept from File, adding an extra flexibility – in terms of layout – to how I edit to complete my current task. I want a similar flexibility with applications, at least lightweight – state wise – ones.

CloudKit and Handoff are close, but very brittle. Apps like Bear, Reeder, Anybox, Things 3, make switching devices seamless. Another good example is handoff between the HomePod and the iPhone.


> Apps like Bear, Reeder, Anybox, Things 3, make switching devices seamless

I think Apple's whole point is that they provide the APIs so that apps can do this themselves. Which, yeah, is a bit disappointing; I'd love to have a 100% fidelity, flick-to-another-device type of Continuity built into the OS, but the APIs are at least a starting point.


Simple protocols would enable this for existing hardware and software
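As a toy illustration of how simple such a protocol could be, here's a hedged sketch that "flicks" a browser tab's state to another process over a socket as JSON. Every name and the wire format here are invented; a real system would need device discovery, authentication, and richer state capture:

```python
# Toy "flick a tab to another device" handoff: state is serialized to
# JSON and pushed over a socket; the receiver restores it.
import json
import socket
import threading

def serve_once(host="127.0.0.1"):
    """Start a one-shot receiver; returns (port, thread, result_dict)."""
    srv = socket.create_server((host, 0))  # port 0 = pick a free port
    port = srv.getsockname()[1]
    result = {}

    def _accept():
        conn, _ = srv.accept()
        with conn:
            data = b""
            while chunk := conn.recv(4096):
                data += chunk
        result.update(json.loads(data))
        srv.close()

    t = threading.Thread(target=_accept)
    t.start()
    return port, t, result

def send_state(port, state, host="127.0.0.1"):
    """'Flick' lightweight app state to the receiving device."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps(state).encode())

port, receiver, received = serve_once()
send_state(port, {"app": "browser",
                  "url": "https://news.ycombinator.com",
                  "scroll": 0.42})
receiver.join()
print(received["url"])  # the receiver now holds the tab's state
```

The hard part isn't the wire protocol, of course; it's getting every app to expose its state in a transferable form, which is roughly what Handoff's APIs ask developers to do.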


Reminds me of some descriptions of Bell Labs' Plan 9.


I think the big usecase of the Vision Pro is just a larger screen and immersion. That's what it looks like Apple highlights in the demos.

Imagine developing with the ultimate 10x setup: several files, documentation, and debuggers open at once, with a zen nature background. Or making music, or painting, or even writing a research paper.

Imagine playing a video game but the environment wraps around you. Imagine watching movies in a virtual theater with a simulated 100-inch TV (one of Apple's demos). Even reading and browsing the web can be improved with an ambient environment and extra space for more stuff.

Is it worth $3500? For end-users probably not, but if it genuinely makes professionals and hobbyists substantially more productive, it will be worth $3500 or even way more. How much money would you spend to write, code, create faster?

Of course, this assumes the VR actually performs, and whether a bigger screen and immersion actually makes people work faster. As of now it seems VR is still a gimmick which impresses people at first, but doesn't provide much outside of niche experiences; which, if this holds for the Vision Pro, makes it very much not worth the price.


> I think the big usecase of the Vision Pro is just a larger screen and immersion

No one has come up with anything better yet. That's one of the big problems with VR that I've seen. You have games on one hand, and a virtual environment showing mostly 2D stuff on the other.

I think that’s one of the reasons Apple is releasing this when they are. I sort of suspect they’ve gone about as far as they can. They don’t have a killer app, but that’s ok.

They’re putting it out there for developers, and we’ll see. Someone will come up with something compelling.

VisiCalc, Lotus 123, WordPerfect, Photoshop, PageMaker, Angry Birds, the web… none of them came from the platform vendor. So what will someone come up with that will make everyone rush to order the Vision 2 when it’s announced for $1500?


> Imagine developing with the ultimate 10x setup: several files, documentation, and debuggers open at once, with a zen nature background. Or making music, or painting, or even writing a research paper.

I used to believe this was the ideal setup, hell I even implemented it with monitors, but since I've integrated LLMs into my development process I find all I need are two side by side windows and now with CopilotX those are both in VSCode. I occasionally venture out for documentation but even that will make its way into the editor at some point. Using Edge and the Bing sidebar I can even save myself the trouble of reading through all of the documentation on a page by just asking Bing to retrieve the info I want from the page I'm on.

Doing all that in VR would be cool but I don't think it's going to increase my productivity nearly as much as LLMs have.

Disclaimer: I'm bullish on VR and I love gaming and socializing in VR


> but since I've integrated LLMs into my development process I find all I need are two side by side windows

This is irrelevant for those of us not doing cookie-cutter-been-done-1000000-times-before stuff. In such cases, all the LLMs in the world are useless, but many pages of docs open at once are useful. Go tell your LLM to write an ARMv6-M assembly MIPS-II emulator. I'll wait. Go on.


Wow, you must feel really important, being you all the time.


I'm sorry that workflow doesn't work for you. Must be hard not getting to use modern tooling. If you care to, maybe try it out for some rubber ducking or inspirational uses, it's not limited to strictly technical things.


There are dozens of us!


If you want that immersion, I think $3500 is totally reasonable. I paid more than that for my two additional monitors.

The problem for me is that I like monitors better. I like the feeling of real space around me. I cannot conceive of wearing goggles on my head all day. I just don't want to.

And I definitely don't want to be separated from other humans by goggles. It's just not attractive to me.


The pixel density appears to be about half of a no-frills monitor at a usual distance. There's plenty for movies and games but just not enough to emulate your 2-4 working windows at a reasonable angular size.
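A rough back-of-the-envelope check of that claim, assuming ~3660 horizontal pixels per eye spread over roughly a 100-degree horizontal field of view (both figures are estimates, not official specs):

```python
# Back-of-the-envelope angular pixel density comparison.
import math

def pixels_per_degree(px_width, physical_width_cm, distance_cm):
    """Average angular density across a flat display's width."""
    half_angle = math.degrees(math.atan((physical_width_cm / 2) / distance_cm))
    return px_width / (2 * half_angle)

# A 27-inch 4K monitor (~59.7 cm wide) viewed at 60 cm:
monitor_ppd = pixels_per_degree(3840, 59.7, 60)

# Headset estimate: horizontal pixels divided by horizontal FOV.
headset_ppd = 3660 / 100

print(f"monitor: {monitor_ppd:.0f} ppd, headset: {headset_ppd:.0f} ppd")
```

Under these assumptions the headset comes out at roughly half the monitor's angular density, consistent with the parent's estimate.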


His comments on anchoring virtual tools in physical space reminds me of the work Microsoft was doing with Hololens and Windows Mixed Reality. If I recall correctly, there were APIs for anchoring things in space.


I recall stumbling upon those too in the Win 10 mixed reality API docs. There were a lot of interesting things in there that never really saw the light of day/any significant use

EDIT: Found the API docs. There's some really interesting classes in there! I remember handling coordinates to be confusing/a pain though haha. https://learn.microsoft.com/en-us/uwp/api/windows.perception...


Regarding the last part, I was wondering if an AR UI would maybe start a trend back to a more 3D/skeuomorphic design (in the sense of NextStep/Windows 95), which then would also have a comeback on the desktop. That's mostly just wishful thinking on my part, though. ;)


PSA: this page only scrolls if your mouse is on the left side.


Good thing my browser settings are to show scrollbars all the time!


Interestingly 'meta' given that Andy talks about the disappearance of interactivity cues that happened about a decade back...


Windows works the same (for me at least), scrolling follows the cursor (but input does not).


Firefox reader mode works well here.


Brave's reader mode wasn't shown as an option, though my BeeLine Reader extension was able to parse the text when I invoked its Clean Mode. I was a little surprised, since the BeeLine extension didn't detect/color the text inline, like it usually does.


Related to his comment about the windows, I’m really curious how window management works in this. They’ve shown you can have multiple windows kinda in the space at the same time. But how do you rearrange them/move them around in space? How do you close them? Is there some way to put 2 apps next to each other in the same window, and keep them together when you move stuff around? I’m mostly curious because interacting with this stuff is still a mess on iPad, and the hand gestures they demoed for Vision Pro are a lot more limited than what we’ve seen with multitouch.


The way it was presented seems to indicate that you can just set them wherever - if they don't move with you, then they'd have a physical position, right?

And for me, that sounds super exciting. Imagine going into a room and you have apps on the wall. You go into your home office, you have apps ready to go (picture a big status dashboard covering a wall, your music app to the left of your desk, stuff like that).

If it actually does work that way, that really is a whole new world.


That's definitely really cool, but there must be a way to detach from that paradigm too for things like flights and hotel rooms, where you want the work/whatever to be wherever you are. And you'd still want to be able to rearrange the things in your home office room.


I can't stop thinking about how cool it will be to finally remove all these monitors and wires from my desk. Operating inside a virtual space has amazing appeal: decluttering my desk and office, scaling screens bigger and smaller, and not being confined to tight windows, bringing in more physical screens, setting them all up, or switching between workspaces with hotkeys. Even virtual workspaces on a single physical screen burn extra mental RAM on remembering which space each tab lives in. This evolution frees up my neck and my posture; minus the bit of weight on my head, I can be sitting or standing.

This development seems to move toward a much more natural way of thinking about and interacting with our computing space. Besides all the great human value it adds, this headset is tangible for me because it's connected to an ecosystem that I'm part of. If I were to get a Quest, for example, now what? I'm connected to Facebook? Great...

The price for Apple Vision is pretty high, but damn, it's an amazing piece of technology that blows most other major headsets out of the water; there are 23 million pixels across the two displays, for God's sake. The amount of virtual real estate we're talking about is like walls of 4K screens wired together throughout your house with a hundred different HDMI/DVI/DisplayPort adapters and GPUs. What a freakin' nightmare that would be, easily costing well over $3.5k.


It hit me this morning that they never showed anyone working with files.

Can’t believe after the failed attempts to kill files on iPhone and iPad they’ve learned nothing and still think we can pretend a professional work tool can exist without a proper file system.

Not saying it should have desktop icons strewn across your living room, just that the "files siloed in apps, transferring via share buttons" iPad model is objectively productivity hell, putting the mental load of menu fumbling where macOS offers instinctive drag and drop.


One of the WWDC developer videos shows the Files app icon around 10 minutes in: https://developer.apple.com/videos/play/wwdc2023/10203/


The Files app is a crutch and a band-aid for not building a file-centric interface to begin with.

If it's anything like iOS/iPadOS, it just doesn't work for getting work done or keeping projects organized, buildable upon, and sane.


You’ve posted a 3-paragraph rant on a baseless assumption for a product which hasn’t been released yet. Let’s all withhold judgement until we have more information about how this truly works.


Great notes and I really liked his idea of large persistent info spaces and sharing those with others.


> "Something in the Apple omertà makes me uncomfortable naming my collaborators as I normally would, even as I discuss the project itself."

I don't understand this. Is it something they made them sign? I saw another article saying they make them promise not to show themselves wearing the headset, allegedly because they might look like huge dorks and cause brand risk.


I'm about 90% sold, but I'd love to hear about someone's experience using it as an Emacs terminal for coding.

As long as it doesn't get sweaty or nauseating after 30 minutes, I would still be content with it as a glorified monitor, not needing new UI paradigms or gesture controls.

Distraction-free huge monitor computing anywhere in the world is a huge advantage over laptops imo.


Side note unrelated to Apple Vision:

Why would anyone want text fully aligned to the left of the browser window?! At least it's distinctive, I don't think I've ever seen that on any other website in the last 15 years.


To me, Apple Vision's so-called spatial computing is exactly what I was always looking for in a portable computing device: a beefy processor, a pixel-less experience, and app and file navigation using natural eye and hand movements. This has the potential of replacing every desktop, laptop and tablet I may own.

Now I just have to wait for a Linux compatible version.

Because there's no way I'm gonna use any Apple software as if it was twenty five years ago, before iTunes insanities and whatever happened since. NFW.


> Now I just have to wait for a Linux compatible version.

That'll probably be about 15-20 years. Apple will have to prove that their interfaces work, and the rest of the industry will have to follow, and hardware will have to be hacked, and new paradigms built, and eventually -- maybe -- it'll be possible to do a non-janky version of this on linux. But the UX is paramount here, and linux is notoriously bad at that.


I think VisionOS apps can spawn multiple windows, so you could just spin up a fancy Remote Desktop app that serves Wayland windows and snaps the mouse to whichever window you're looking at.


“… you won’t be confined to “a tiny black rectangle”. You’ll use all the apps you already use. You don’t have to wait for developers to adapt them. This is not a someday-maybe tech demo of a future paradigm; it’s (mostly) today’s paradigm, transliterated to new display and input technology.“

This is why I think this device will be different than its predecessors. There are practical reasons to buy it … assuming you can use it without getting sick or strained.


Vision Pro opens opportunities for a new web interface of the type that Gary Flake and team were building with Silverlight and Pivot at Live Labs.

Gary has a great TED talk (Mar 2010) on this radical but intuitive spatial sorting interface. Foveation and hand movements are perfect user links.

https://www.youtube.com/watch?v=LT_x9s67yWA

Combined with a Vision Pro—OMG.


Slightly offtopic: What a cool personal website!


VR has been "just about to become mainstream" my entire career. I'm quite old now. Perhaps it'll break through before I retire.

That said, the same thing was true about video phones, to the point that everyone assumed humans do not need to see each other on a phone call. Then we had a pandemic and things changed.


I was hoping to see more paradigm shifting user interface ideas. I think those exist inside Apple Park, but are waiting for their time. It's telling that this was announced at the developers conference. The focus now is on the platform, and I think mind-blowing user experiences will come later. (At least I hope)


Yeah, projecting the eyes is such a stretch; it seems to suggest they think we really will hang out with each other while having the goggles on. Maybe they're right in the end, but it's weird. Same with the cameras displaying the world instead of your eyes seeing it directly. Interesting choice, but also a big bet.


> the amazing power of the iPhone map is that I can fly to Tokyo with no plans and have a great time, no stress.

Figuring out how you're going to get mobile data access in a foreign country without being charged through the nose does feel somewhat stressful...


Even the most scammy carrier isn't going to charge you an amount that is significant compared to the cost of the holiday. If you want to plan ahead to figure out how to get it cheaply, then go ahead, but it certainly isn't worth ruining your holiday over once you are there.


Airalo app allows me to buy an eSIM for Japan in the comfort of my home. Starting at $5 for 1GB and 1 week.


The vision pro showcases one use case which is compelling enough to make me consider buying it (in a couple of years, when it’s more settled): As a replacement for a monitor for work. Seriously, having a huge monitor everywhere is a game changer for me.


I agree this is the compelling use case. The benefit of a big monitor without having to devote space to it, and on the go.

Also, a private monitor where nobody can look over your shoulder, which is a game changer for doing any kind of sensitive work or personal business in a public or shared space.


Anyone surprised they didn't incorporate an EEG sensor and use that in conjunction with eye tracking? They could acquire someone like Neurosky.

I thought at this stage, it would be fairly foolproof to use that at the very least to simulate something like a click, no?


They want to ease us into the dystopian nightmare…


> The hardware seems faintly unbelievable—a computer as powerful as Apple’s current mid-tier laptops (M2)

So I take it that you can basically leave your laptop home and take your Vision visor with you and work from a virtual environment?


Interesting to read that the iOS 7 depth/parallax effect was tried all over the UI. It makes sense to indicate interactivity in a better way, but probably the right call to ditch that.


Sorry for hijacking this thread for a question of personal curiosity, but does anyone know of an affordable way of measuring pupil distance?

I want to measure my pupil distance while browsing social media. The reasoning comes from my reading of "Thinking, Fast and Slow": pupils dilate when we see something we find interesting, something we like, or when we are thinking, and vice versa. I want to put it to the test, and it seems like consumer tech is finally close to making it possible.

Does the Quest support pupil tracking? I did some cursory research but couldn't find any reference to it. Industrial pupil-tracking headsets are way too expensive; my last hope is that someone will jailbreak the Vision Pro...


Hold up a ruler to your face and look in the mirror. Or take a photo so you can measure distances in an image-editing app.


A high res webcam and OpenCV?
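As a rough sketch of that idea, using OpenCV's bundled Haar cascades (pip install opencv-python). Units are pixels only; converting to millimetres would need a reference object of known size in the frame, and measuring dilation would likely need far more resolution than a typical webcam offers:

```python
# Estimate the pixel distance between two detected eyes in a webcam
# frame with OpenCV's stock Haar cascades.
import math

def eye_centers(gray):
    """Return (x, y) centres of detected eye bounding boxes."""
    import cv2  # imported lazily so the pure-math helper works without it
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(x + w / 2, y + h / 2) for (x, y, w, h) in boxes]

def pupil_distance_px(centers):
    """Distance between the leftmost and rightmost detections, or None."""
    if len(centers) < 2:
        return None
    centers = sorted(centers)  # order left-to-right by x
    (x1, y1), (x2, y2) = centers[0], centers[-1]
    return math.hypot(x2 - x1, y2 - y1)

if __name__ == "__main__":
    import cv2
    cap = cv2.VideoCapture(0)   # default webcam
    ok, frame = cap.read()
    cap.release()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(pupil_distance_px(eye_centers(gray)))
```

Haar cascades give eye bounding boxes, not true pupil centres, so this is only a coarse proxy; dedicated gaze libraries or IR trackers do the refinement properly.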


The primary input is eye tracking to me is really exciting, feels like the future.

I wonder what happens in games though. Will this not essentially be aimbot, looking at the target and then shoot?


I dig his vision of huge, persistent infospaces after all the "method of loci" [0] is a very effective and well described mnemonic device at least since antiquity.

The basic observation: It takes very little effort and exposure to remember places (spatial information) in contrast to taking in linear heavy processed information from writing and numbers (e.g. "facts").

Combine these two and you have a powerful way navigating through the information space itself.

If you look at it, we are confined to rectangular boxes, fancy interactive books enhanced by moving pictures, audio and some very limited tactile feedback but nevertheless bound by the superstructure of a static tiny detail in our overall spatial awareness.

Of course we are now so used to the endless iterations of rectangular boxes that we are extraordinarily proficient at extracting a lot of richness/information from them: in scrolling, clicking, typing, imagining ... I'm reminded of the scene in "The Matrix" where Cipher lets Neo glimpse the Matrix code on his screens: "You get used to it, I don't even see the code. All I see is blond, brunette, redhead."[1]

IIRC, a 1950s western was once shown to a rural "test" audience from the Eurasian steppe, some of whom were seeing a moving picture for the very first time. The reactions were mostly enthusiastic during slow "panorama shots", but the audience grew very agitated at "close-ups" or "medium shots" where, e.g., in the heat of the action the horse's legs were "cut off".

We take it as given that we learn the language of movies and find them at times 'realistic' and 'immersive' when in fact contrasted to 'reality' itself qbit by qbit they are basically - with some rounding error - as highly processed as books and thereby rely heavily on our imagination filling the gaps.

While we are on the subject of naïve visions, another area which isn't appreciated enough, imho, is the immense dexterity of our hands. Just as devices are glued to our vision, they are glued to a dexterity of "exact" and "repetitive" moves; there is too little wiggle room for the organic experimenting, refining, and expression one would naturally develop with a musical instrument.

[0]https://en.m.wikipedia.org/wiki/Method_of_loci

[1]https://m.youtube.com/watch?v=MvEXkd3O2ow


> I dig his vision of huge, persistent infospaces

This is immediately where I went as well. Been hoping for something like this since the first few iterations of VR tech. I can imagine that will become one of the 'killer apps' of this platform. Unfortunately, he also said it might take some iterations on their windowing system.


Has anything been said about how the eye-display proximity might affect vision?

Maybe this is a better time than ever to start a new opticians.


It's a strange time to be releasing this product. Not from a technological perspective (clearly the hardware Apple managed to put together is extremely impressive and they seem to believe that it finally got good enough to have real appeal albeit at a rather hefty price), but from a political perspective.

It seems like the last few years have generally had a technological backlash with worries about both invasive spying and surveillance capitalism on the one hand and mental health issues (especially in teens) sprouting on the other.

So it seems strange to react to all that by offering a product that glues a computer straight on your eyeballs. Now of course Meta has already been going down that road, but let's just say that company doesn't have much to lose in the reputation department.

To be fair, Apple seems to have made a few interesting design choices. They seem to have gone out of their way to create an illusion of two-way transparency for the headset, making casual human communication at least theoretically possible (although how much conversation you can have with someone actually wearing these is an open question; if it's like trying to talk to someone on their phone, you'll get at best half their attention).


Does anyone know how these notes are generated? Is it a custom site, or something off the shelf?


He says it's a custom site: https://notes.andymatuschak.org/About_these_notes

That said, you can have something identical to this with Obsidian with a few plug-ins and their hosting service.

https://obsidian.md/

https://github.com/deathau/sliding-panes-obsidian


Thanks, that looks very useful! I wrote a script to export Markdown from Joplin notes (https://gitlab.com/stavros/notes/), but the UI in this site is better.


It’s custom.


If you've got time and energy in the near future, please consider slightly enhancing it according to the average web surfer's expectations. Particular irritants were the broken scrolling (bound to a container), the broken footnote links, and the forbidden content behind links. Additionally, the classic escape hatch, Safari Reader, didn't work.


Sorry. I built it in a weekend several years ago and haven't touched it since. Someday I'll give it some more attention…


Ah, thank you, they look great.


> But it does put an enormous amount of pressure on the eye tracking. As far as I can tell so far, the role of precise 2D control has been shifted to the eyes.

I've been researching eye tracking for my own project for the past year. I have a Tobii eye tracker which is probably the best eye tracking device for consumers currently (or the only one really). It's much more accurate than trying to repurpose a webcam.

The problem with eye tracking in general is what's called the "midas touch" problem. Everything you look at is potentially a target. If you were to simply connect your mouse pointer to your gaze, for example, any sort of hover effect on a web page would be activated simply by glancing at it. [1]

Additionally, our eyes are constantly making small movements called saccades [2]. If you track eye movement perfectly, the target will wobble all over the screen like mad. The ways to alleviate this are by expanding the target visually so that the small movements are contained within a "bubble", or by delaying the targeting slightly so the movements can be smoothed out. But this naturally causes inaccuracy and latency. [3] Even then, you can easily get a headache from the effort of trying to fixate your eyes on a small target (trust me). Though Apple is making an effort to predict eye movements to give the user the impression of lower latency and improve accuracy, it's an imperfect solution. Simply put, gazing as an interface will always suffer from latency and unnatural physical effort. Until computers can read our minds, that isn't going to change.
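The two mitigations described above (spatial tolerance and temporal smoothing), plus a dwell timer against the midas-touch problem, can be sketched roughly like this. The class name and all parameter values are invented for illustration; this is not what Tobii or Apple ship:

```python
# Exponential smoothing damps saccade jitter; a dwell timer means a
# target activates only after sustained fixation, so merely glancing
# at something doesn't trigger it ("midas touch").
import math

class GazeFilter:
    def __init__(self, alpha=0.2, dwell_frames=30, radius=40.0):
        self.alpha = alpha                # smoothing factor; lower = smoother but laggier
        self.dwell_frames = dwell_frames  # fixation frames required to "click"
        self.radius = radius              # px tolerance around the fixation point
        self.smoothed = None
        self.anchor = None
        self.count = 0

    def update(self, raw_xy):
        """Feed one raw gaze sample; returns (smoothed_xy, activated)."""
        x, y = raw_xy
        if self.smoothed is None:
            self.smoothed = (x, y)
        else:
            sx, sy = self.smoothed
            self.smoothed = (sx + self.alpha * (x - sx),
                             sy + self.alpha * (y - sy))
        # Count consecutive frames spent near the current fixation point.
        if self.anchor is not None and math.dist(self.smoothed, self.anchor) < self.radius:
            self.count += 1
        else:
            self.anchor, self.count = self.smoothed, 0
        return self.smoothed, self.count >= self.dwell_frames
```

The trade-off described above falls straight out of the parameters: lowering `alpha` or lengthening the dwell reduces jitter and accidental activation, at the direct cost of latency.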

Apple decided to incorporate desktop and mobile apps into the device, so it seems this was really their only choice, as they need the equivalent of a pointer or finger to activate on-screen elements. They could do this with hand tracking, but then there's the issue of accuracy as well as clicking, tapping, dragging or swiping - plus the effort of holding your arms up for extended periods.

I think it's odd that they decided that voice should not be part of the UI. My preference would be hand tracking a virtual mouse/trackpad (smaller and more familiar movements) plus a simple, "tap" or "swipe" spoken aloud, with the current system for "quiet" operation. But Apple is Apple, and they insist on one way to do things. They have a video of using Safari and it doesn't look particularly efficient/practical to me [4].

But who knows - I haven't tried it yet, maybe Apple's engineers nailed it. I have my doubts.

1. https://uxdesign.cc/the-midas-touch-effect-the-most-unknown-...

2. https://en.m.wikipedia.org/wiki/Saccade

3. https://help.tobii.com/hc/en-us/articles/210245345-How-to-se...

4. https://developer.apple.com/videos/play/wwdc2023/10279/


From reading/listening to reports of people who were able to demo the device, I think Apple may have nailed it, or come close. Everyone I've seen has absolutely raved about how accurate and intuitive the eye tracking + hand gesture feels.


I just want to echo this. Most of the reviews I've watched or read have raved about the eye tracking, regardless of whether the reviewer was a fan of VR in general. It feels really cool that we might finally have a new computing paradigm. There is so much potential here and I can't wait to see where things go.


No one has used it for an extended period of time. You don't get a headache right away, but after a while, you feel it.


The people who designed it, built it, and tested it have used it for an extended period of time. It would be un-Apple-like for them to ship a product with such a well-known potential defect in the user experience.


It can't be as bad as the 90s Sega VR, where executives walked out after testing it and then buried the entire project.


Moving your eyes doesn’t cause headaches. Eye strain certainly can. If the resolution is high enough and the lag issues are taken care of there’s a good chance that it can be used for longer periods of time.


Entirely serious question: is this something that can be trained? That is, after a long enough time of use and strengthening of the appropriate muscles and neural pathways, the headaches and such go away?


I think for some people, it won't be a problem. Much the same way some people have no issues spending hours using VR goggles without nausea. There are many people with accessibility issues who manage it, for example, and geeks who just like the idea of living in the future. In my experience, your eyes just get tired, even after you get used to it.

I think unless Apple has really added some sort of magic, it will be an issue for most people to use Vision Pro for anything but passive activities, or with a physical trackpad.


Surely you are already moving your eyes to look at click/tap targets anyway. Why would there be any additional muscle strain or neural pathways needed? Clearly moving our eyes doesn’t cause problems because we move them all the time as it is.


Maybe that's why some people prefer keyboard shortcuts: I don't move my eyes around the keyboard when typing, or when pressing Ctrl+S or Ctrl+C/Ctrl+V. In Windows I can Alt-Tab repeatedly while the highlight moves, looking around at the available windows and watching the selected one change in my peripheral vision. I can keep typing in a textbox for a bit while looking away, but with a virtual keyboard, or speech-to-text that might be getting it wrong, that's less of an option.

Compare with a smartphone: I can move to tap the 'back' button in Safari without moving my eyes to it. Looking "at" the phone generally is enough to see where my finger is, and when clicking links I can look at the link, start moving my hand toward it, and move my eyes away while my hand is still in motion.

Having to coordinate looking and clicking - while I haven't tried it - I can imagine that feels more load-bearing, more effortful, more constraining, more annoying.


The eye tracking as the interface is what stood out to me as the most radical choice in the device.

I'm doubtful we really know how good it is yet. Without a few hours/days of hard use its just not going to be obvious the degree to which they've overcome all these long standing obstacles via "magic".

It's hard to think voice is anywhere near ready for this application, I still find it highly frustrating after decades of tries.


You have those problems mostly because your Tobii is an external device much farther away - you won't get accuracy comparable to any eye tracker that is just a camera a few cm from your eyes.

With Apple's solution (or any VR headset with eye tracking):

1. You have a very accurate 6DoF head pose (gyroscope, compass, accelerometer + lidar for translation) at 100+ fps minimum, computed very cheaply and with little latency

2. Cameras point at each eye from a distance of a few centimeters, so the eye region of interest occupies the full image resolution. Probably even VGA resolution (or less) is enough for very accurate precision in this case

3. Because the headset covers your face, you don't have any problems with varying lighting conditions or a cluttered background; what's more, they use an IR floodlight for consistent illumination of the pupils

4. Possibly they even use a TrueDepth-style IR dot pattern to get a 3D depth map of your eyes, similar to how the TrueDepth sensor works, with the difference that here they can place the cameras farther apart and get a more accurate disparity map

5. Who knows, maybe they even use lidar instead of TrueDepth for even lower latency

6. Because the sensors are so close, they can build a very accurate reconstruction of your eyes and the distance between them - most people don't have a completely symmetric face or eyes

The Tobii, by contrast, is a sensor roughly 0.5 m away from your face. It has to track your head and estimate its orientation and distance, and the detected eye ROI will be a tiny crop of its image sensor even if the camera has 4K resolution (which I doubt it does). At that distance, to resolve what you're looking at on screen to within 1 cm, you need to detect pupil movement of ~1 degree. If you want even better, jitter-free accuracy like 1 mm, you need to detect pupil movement of 0.1 degree.
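A quick back-of-the-envelope check of those angles, assuming a 0.5 m viewing distance (just simple trigonometry, not anything vendor-specific):

```python
import math

# Eye rotation needed to move the gaze point a given distance on a
# screen ~0.5 m away: angle = atan(offset / distance).
distance_m = 0.5
for offset_m in (0.01, 0.001):  # 1 cm and 1 mm on screen
    angle_deg = math.degrees(math.atan2(offset_m, distance_m))
    print(f"{offset_m * 1000:.0f} mm on screen -> {angle_deg:.2f} deg of eye rotation")
```

This comes out to about 1.15° for 1 cm and 0.11° for 1 mm, matching the figures above; a camera a few centimeters from the eye sees the same pupil displacement across far more of its sensor.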


Interesting. You might be right. I'll have to try it and see.


> If you were to simply connect your mouse pointer to your gaze, for example, any sort of hover effect on a web page would be activated simply by glancing at it.

I’m curious how they deal with apparently not doing this: how do you know which button is going to be activated when you click? The product page says, under privacy, that websites and apps don’t get eye tracking data, only knowing when you click, so I guess that means a website won’t display its active/hover effects on a button you’re looking at. Because if it could, it could just link that to analytics in JavaScript and defeat that privacy protection. So without that, how do I actually know what’s going to happen when I click?


I’m sure the info is in one of the WWDC videos, but I imagine the hover effect could be activated by the OS without notifying the webpage.


Sure, for a well built standards compliant website making appropriate use of HTML elements. There are a lot of websites that do their own thing, use lots of custom Javascript, where standard browser features like tabbing through their elements don't work, and they don't work well with existing accessibility tools. I'm skeptical that an OS or browser level feature is going to be able to make sense of them. That's their fault, but it's the reality of a huge part of the web.


Heads up: PSVR2 has eye tracking, I know it’s used in some games. Might be something to look into.


PSVR2 uses it for foveated rendering where the eye tracking doesn't really have to be that precise. It's done for performance optimization, not input controls.


I'd like to connect this thing to a drone and go flying.


I too was wondering if it could act as DJI's FPV helmet.


Punch Cards --> Text Terminals --> GUIs/WIMP --> ?

? != Vision Pro


How about we wait until someone actually has one to review?


This reminds me of people who gripe about sports rumours in forums about rumoured player signings. If you're not interested in speculation about hardware not yet freely in the hands of users, why not leave it to people who find that discussion interesting?

There are reports now from dozens of people who've had similar 30 minute sessions using some of the features. That's enough to get an idea of some facets, guess at what's not polished, what's not ready at all. But they're all easy to ignore if you want to wait until next year.



While I do enjoy me some MKBHD, I'd like the opinion of someone who's not fully bought into the Apple ecosystem already. This is a good perspective to have, but it's an incomplete perspective.

And I'd like to know how realistic this thing is for "not Apple users". It's a VR headset: can I use it with Windows to play MS Flight Simulator or Star Citizen? Can I use it on Linux to get wall-to-wall Emacs? More real world testing is required.


It’s not a VR headset, it is a computer with its own OS and M2 processor. I don’t think that VR or AR was mentioned a single time in the Apple event. They say this is the first device for spatial computing and that this is equivalent to the first Mac as far as changing OS UI goes.

This will be another part of the Apple system along with Macs, iPhones, iPads, etc. You will never be able to use this as just a monitor equivalent for any other OS.


Right, so: it's just another VR headset, just beefier. Have you not been paying attention to Meta's Quest and the like? Modern VR headsets are computers, they run their own OS, with applications/games loaded onto the device itself, and they can be used as a VR display client for other operating systems.

This is an M2 computer strapped to your head: it can bloody well do VR. Let's see what folks do with this device that Apple never asked them to do, but that they do anyway, because Apple doesn't get to tell them how to use their overpriced hardware (especially if, the moment you buy it, the buyback is $50 if you're lucky).


Quest Pro owner[1]: it's "a total waste of money", "embarrassing how much of an unpolished piece of garbage it feels like", it's "a steaming pile of trash", "infuriating", "buggy", "janky".

MKBHD on the Vision Pro: better image passthrough, better responsiveness, better hand tracking, better eye tracking, better image quality than any other headset he's ever used.

You: "overpriced hardware".

No, better hardware.

[1] https://news.ycombinator.com/item?id=36220459


Something can be both "more expensive because it's better" and "overpriced" at the same time. Those two are not mutually exclusive.


Someone can't say "let's wait until people have them for review before speculating" and then describe them as "overpriced" at the same time, those two are mutually exclusive.


They most certainly aren't, given Apple's track record for adding about 200% markup on every single product they've made in the last 20 years. This product is overpriced until proven otherwise even if it does exactly what they claim and it beats the competition by a country mile. If it's not $1000 over an already healthy profit margin, that'd be an Apple first in a very long time.


200% markup compared to what? The price they paid for the stuff? This product is not overpriced until any competitor makes anything like as good, and sells it significantly cheaper (people who pay for branding get value from branding). Which we won't know until ~2024 when this thing goes on sale.


You should already know the answers to those questions.


Why? It's only been out for a day, you telling me everyone's already had their run at hacking it? That seems like pretending you know what folks will do with this thing.


Every day another submission about these goggles nobody seems to like. I'm just annoyed by all that unwarranted attention. Just because it's Apple.


Anecdotally I like them, and not just because it's Apple, seems like great hardware/software/industrial design etc.

But valid to be annoyed at it, Apple do tend to be quite polarising for people.



