In 1981, I received a computer as a birthday gift at the young age of 13. It was likely the first microcomputer in my small California town. When I took it to my junior high science fair, people were stunned by a kid with a computer.
When you turned on the computer, all you got was a flashing cursor waiting for you to program in BASIC.
The next year, a successful local business person asked me if I thought that in the future most people would have computers. I said yes.
They replied that they didn't think that would ever happen, and they didn't see the need for it in their business, even after seeing VisiCalc.
In less than 5 years, they had computers in their business for Lotus 1-2-3. And not long after, they had computers in their home.
I have seen this pattern repeat many times. I have also seen many branches of computer tech die (Amiga, OS/2, Palm...).
But from my point of view, "figure out what it's for" is the ultimate computer hacker's playground.
And perhaps Vision Pro will be the first headset “worth criticizing” regardless of its success.
This is all true, but there can be a little bit of survivorship bias at play. Look at all the technology conferences over the last 40 years and for the most part you will see a graveyard of tens of thousands of ideas that looked promising but ended up going nowhere - sometimes simply due to being outmaneuvered business-wise.
Bill Gates once said that in some ways Microsoft was lucky to be as successful as they were; they had competitors with better products that, to Microsoft's luck, could never get a foothold in the market.
Maybe Apple Vision is the future but it could also just end up as another path not taken on the trash heap of history. Time will tell.
I am not sure why you are focusing on "survivorship bias". My comment was about being a programmer and the fun of seeing a new technology that is an undefined territory to explore.
But I did directly mention that many of these do not work out, as I referenced: Amiga, OS/2, Palm, VisiCalc, and Lotus 1-2-3.
However, so that you may have some idea of my personal "survivorship bias", here is a super-simplified, multi-dimensional history of my over 43 years of computing, smashed into a linear list with massive things forgotten:
TRS-80 Model III, LDOS, BASIC, VisiCalc,
modem-to-modem
TRS-80 Model 100
CompuServe
MS-DOS compatible, MultiPlan, TurboPascal, dBase
Windows 3.0
DesqView
Delphi Online
OS/2, REXX
Macintosh System 7 --> 1993 to OS X
Dial-up ISP
Solaris, Perl
BBEdit
Windows 95
Palm Pilot
Linux for Servers --> 1998 to Today
Linux Desktop
ColdFusion
Apache
Python --> 1999 to Today
Always on Internet (ISDN, DSL, Cable) --> 2004 to Today
Nokia cellphones
PHP
Zope/Plone
Mac OS X --> BETA to Today
Windows XP
Nginx
iPhone --> 2009 to Today
SublimeText
Windows 10 --> 2018 to Today
VS Code --> 2018 to Today
NodeJS --> 2018 to Today
SvelteKit --> 2021 to Today
KDE Neon --> 2022 to Today (the year of Desktop Linux arrived for me)
The longest thread, from 1993 to today, is what is now known as macOS, but for about the first 15 years of my usage Apple was considered at risk of failure. MS Windows was always a part of my world, but seldom my primary focus; it has only been since 2018 that I have used it on an extended daily basis. Some of the above are clearly dead. Some I still use lightly (Nginx, BBEdit) but are no longer the focal point of my work.
As far as Apple's Vision Pro, I have no clue if it will be successful. For myself personally, it is the first headset I am even interested in playing with.
I have had one or more Macs since 1993; I went years without a Windows system. (Servers have been consistently Linux, with some Unix and Macs early on.)
I probably spent less than a day total with Windows for Workgroups, NT, 98, ME, Vista, 7, 8, 11, etc.
Much of my Windows 3 usage was under OS/2 Warp
The above list is really about my personal major themes. I can already see major omissions, but Windows is not one.
>This is all true, but there can be a little bit of survivorship bias at play. Look at all the technology conferences over the last 40 years and for the most part you will see a graveyard of tens of thousands of ideas that looked promising but ended up going nowhere
Survivor bias is real, but the interesting failures (in comparison to VR) are things like 3D TV.
3D TV has a very weak effect; the 3D is basically dismissed after a while, once you get immersed in the actual content. A bit like watching a (good) silent movie: at first it's jarring, but after an hour you don't notice it anymore.
On the other hand, because VR is so immersive, the effects on the brain are quantifiable; it is fundamentally different from all other forms of media. You can permanently change your brain with VR in ways that are literally impossible with letterbox media, in a way that no technology has ever been able to before ('strategic modification of Bayesian priors' if you're into the science).
That kind of thing is why VR is a guaranteed success. It's not just that movies and entertainment and games will get to a new level; the psychological possibilities for actually modifying the self with VR are going to ensure we all have headsets in a decade.
VR is immersive to a degree that it tricks your brain completely. It can activate powerful biological mechanisms in ways that cannot be done through other digital/scalable means. For example, there's a company in Spain called Virtual Bodyworks, founded by the VR researcher Mel Slater, which can affect your implicit bias with body swaps performed in VR. This is genuine selective neural-net surgery, done in a way that cannot be replicated outside of VR.
The company I work at has a smoking-cessation VR app; it takes 5 minutes for some people to go from decades of unbroken addiction/40 smokes a day/first smoke within 5 minutes of waking, to not even being able to think about smoking. There is no way to do what we do with a phone app or even a 3D TV; it requires VR.
That kind of software is going to be the reason VR takes off; it's capable of changing the mind in ways we can't imagine at the moment.
Not to mention that spatial computing is the 'final form' of human machine interaction, plenty of research on that.
> The company I work at has a smoking-cessation VR app; it takes 5 minutes for some people to go from decades of unbroken addiction/40 smokes a day/first smoke within 5 minutes of waking, to not even being able to think about smoking. There is no way to do what we do with a phone app or even a 3D TV; it requires VR.
Five minutes to quit smoking? There is one app that claims to be clinically proven, and even they only claim that 33% of users actually quit after completing the program.
'Only 33%' is not so bad; that's basically a market-leading rate. 10 weeks of CBT is the established benchmark for 'proven' tobacco cessation, and has a 2-year 33% success rate (IIRC).
Most of the tobacco cessation products top out at 33% for some reason, but combining them gives higher success rates.
What app are you referring to btw? Very interested in this area at the moment.
You may be right about what you're describing, but this sounds incomparably more like bullshit than any chance of being right.
I've played with VR, and it's really not that immersive - no more than any other good media.
In particular, VR is definitely not as immersive as actual reality, so anything that could be done in VR to convince someone of non-fantastical things can also be done (at a higher expense, for sure) with actors and props. So, your claim about 5 minutes to quit smoking has no reasonable chance of being true from where I'm sitting.
>In particular, VR is definitely not as immersive as actual reality, so anything that could be done in VR to convince someone of non-fantastical things can also be done (at a higher expense, for sure) with actors and props. So, your claim about 5 minutes to quit smoking has no reasonable chance of being true from where I'm sitting.
Well yeah, it can be done in real life, it's just much more brutal.
A fourteen-year-old boy was said by his parents to have started smoking at the age of seven, and to be spending every penny of his pocket money on cigarettes. He had at one time regularly smoked 40 cigarettes per day, but was now averaging about half that number because his pocket money had been reduced. He said he wanted to give up smoking because he had a smoker’s cough, was breathless on exertion, and because it was costing so much money. Physical examination and chest X-ray were normal.
Treatment was given in the outpatient department. On the first occasion he was given an injection of apomorphine 1/20 g, and after seven minutes he was told to start smoking. At eleven minutes he became nauseated and vomited copiously. Four days later he came for the second treatment, and said that he still had the craving for cigarettes, but had not in fact smoked since the previous session because he felt nauseated when he tried to light one. He was given an injection of apomorphine 1/20 g, and after seven minutes he lit a cigarette reluctantly, and immediately said he felt ill. He was encouraged to continue smoking, and he collapsed. He was given oxygen and an injection of Coramine. When he recovered he was very hungry and asked for food, which he ate voraciously. Four days later he was given apomorphine 1/40 g, and vomited as soon as he attempted to light a cigarette seven minutes later.
When he next attended he said he no longer had any craving for cigarettes, and he made two interesting comments: "When I see an advert on T.V. for cigarettes, it seems like a dead advert." "Just smoke from my father's cigarette makes me feel ill."
Two months later he left school and started working. He said he had “got a bit down” at work and wanted to “keep in with the others”, so he had accepted a proffered cigarette. He immediately felt faint and hot, and was unable to smoke. It is now a year since his treatment, and his parents confirm that he no longer smokes.
The only reason this experiment seems to have worked is because it was brutal. So not sure what the relevance is. I imagine you could induce serious vertigo through VR in someone, and associate it with smoking in the same way, but that's still brutal and a form of aversion therapy, which brings serious ethical concerns from my point of view (particularly if applied to a child).
I don't know, I think you are overhyping the effects a bit, but anyway — this is very creepy. Even if the effect stops at being able to manipulate people on a short-term basis (e.g. in-"game" transactions, etc.) more effectively than the existing, already morally criminal manipulations.
Also, I tried looking into the 'Strategic modification...' paper, but didn't find it.
I think I'm being pretty mild on the effects :) But maybe that's the enthusiasm of working in the field.
And yes, it's creepy for sure. There is a theory that every generation witnesses a key technological development and is 'lost' to that development, and the generation that grows up with that technology is immune to it. I imagine VR is going to be a technology like that, there will be people 'lost to it' in the same way that people were lost to TV or to smartphones today ( ref: https://www.youtube.com/watch?v=6Olt-ZtV_CE ). Ethically VR is going to be a minefield.
> The company I work at has a smoking-cessation VR app; it takes 5 minutes *for some people* to go from decades of unbroken addiction/40 smokes a day/first smoke within 5 minutes of waking, to not even being able to think about smoking.
Emphasis mine.
This is a meaningless statement.
I argue the only valid statistic is x% remain non-smokers at y years, something comparable to other methods.
And even if your company’s method does turn out to be statistically significant, that doesn’t necessitate everyone owning the device, only the clinic needs to.
I mean, super early days and they're talking their book, but this is definitely a massive use case for VR.
I thought about this a while back, you could definitely do aversion therapy treatments potentially more effectively with a VR setup.
> that doesn’t necessitate everyone owning the device, only the clinic needs to.
If this approach works, then it will be super widely adopted as it costs insane amounts of money to drive behaviour changes right now, and lots of health services would be interested.
>I argue the only valid statistic is x% remain non-smokers at y years, something comparable to other methods.
Absolutely agree, but our problem is time; we treated our first patient around 8 months ago, so we're just limited by that timeline.
So far so good though; follow-up shows classic aversion reactions, and the people who were affected still have a very strong reaction to the taste and smell.
> On the other hand, because VR is so immersive... tricks your brain completely
I find there is a similar acclimatisation effect to 3D TVs. Do a repetitive task in VR and it doesn't take long for the VR to not matter. If you use VR every day, the wow disappears and desktop gaming can be a refreshing change. Only after a decent refractory period might you put on a headset and get some sense of magic again.
> takes 5 minutes ... no way to do what we do with a phone app or even a 3DTV, it requires VR.
Oh, you are a snake-oil salesman. I regret responding.
I think the key is that this is not a VR device but an AR one. The illusion is harder to maintain in AR, but if you manage it, it tricks you far more. The difference between VR and AR is undervalued in this discussion. IMHO, the ability to do credible AR (due to superior computational power and the abundance of sensors) _could_ be the key to the Vision's success.
Except it's not really--in the sense of an out-in-the-world-walking-around HUD. And certainly the keynote emphasized in-home entertainment and (I think?) gaming. It's got elements of AR to it but it seems much more like VR+. (And Apple was almost certainly smart to just coin their own term.)
You do mention a couple of things I hadn't considered or heard of before.
> The company I work at has a smoking-cessation VR app
This sounds very intriguing! Could you possibly provide some references and/or the name of your company? (I don't smoke but I'd love to understand better how this works.)
More generally, how do you stay on top of what's happening in VR tech & applications? I'm asking because I've clearly been missing out on some significant developments.
I'll take that bet. I'm a kind of tech cynic: I have a Nokia 8110 and a ten-year-old ThinkPad, and I listen to MiniDiscs. But in January I'll fly to the States and buy a Vision Pro day 1. The last time I was this excited about a technological innovation was with OS X. My brother was a CS guy and he recommended I buy an iBook when I started university because he thought OS X was going to be a game changer. And in the beginning it was weird to have the only Apple in a lecture hall with 300 Dells. But 10.1, 10.2, 10.3... it felt like being on the cutting edge.
That's what the Vision Pro looks like to me. The PowerBook 12" all over again: future-tech.
>Could you possibly provide some references and/or the name of your company? (I don't smoke but I'd love to understand better how this works.)
It's really simple as hell. There's an effect called the 'Garcia effect', whereby if you feel sick ≈6 hours after eating or drinking a novel taste, you get a lifelong aversion to that taste or smell. It's highly conserved, found in every animal studied with the sole exception of vampire bats (they only have one dietary option).
And this was used in tobacco cessation in the '60s. There's a lovely case study where a 14-year-old boy is given apomorphine three times while smoking; he vomits so copiously he passes out at one point, but literally can't smoke afterwards.
It fell out of favour because injecting people with opiates to induce uncontrollable bouts of vomiting wasn't a popular way to treat smoking.
We just do it with a simple VR spinning room: you take as long a break from smoking as you can comfortably manage, light up, and then do our 'aversion' session for 5 minutes, or as long as you can manage.
From that, we see around 20% of people that are unaffected (the efficacy is dependent on a factor that varies ≈10000:1 across a normal population).
For most people you get some effect, and for around a third you get an instant reaction where even 2-3 five-minute sessions mean they just can't smoke. They can cave to cravings and light up... and then just have to throw the cigarette away in disgust.
>More generally, how do you stay on top of what's happening in VR tech & applications? I'm asking because I've clearly been missing out on some significant developments.
Very difficult, there's so much happening in various silos. Jaan Aru and Mel Slater are probably the two researchers I try to follow as much as possible, but this is a pre-science at the moment, so even finding common terminology is a challenge.
This is fascinating, thanks for elaborating! Do you think a similar therapy would work in the case of alcohol addiction? I.e. drink a glass of some high-proof spirit, put on the VR glasses with the app, profit(?)
Yes, basically. In fact, when we talk to people about this, around 50% of people have had some experience of this in their life (with me it was kiwi fruit after a long car ride; I associated them with the nausea and couldn't eat kiwis for around 20 years).
Of that 50%, around half or so have a history of this with some kind of alcohol. I have one friend who will almost puke if he smells whiskey, because of one terrible night in Norrköping with a bottle of Paddy's.
>Will it also cause aversion to headsets and to VR? :)
Unlikely, the Garcia effect is tightly coupled to taste/smell. Could inadvertently give a lifetime aversion to any food/drink though, especially if it's a new taste.
> Bill Gates once said that in some ways Microsoft was lucky to be as successful as they were; they had competitors with better products that, to Microsoft's luck, could never get a foothold in the market.
Bill Gates famously engineered his own luck by opposing a free market.
There were any number of better operating systems at the time than DOS, especially in the 1.0 timeframe. CP/M was arguably at least on par. Any number of 16-bit minicomputer operating systems. Probably a bit early for Unix realistically.
Compared to Unix, and especially to modern operating systems, DOS is more comparable to firmware than to an OS; it was something ultra-minimal which basically got out of the way of developers. And being so minimal allowed IBM PCs to sell for prices below $10,000 (1980s dollars!). This was the key to the success of Microsoft and the IBM PC.
You might have created a 'better' OS at the time, but no one cared about your fancy slow OS which used all the memory and cost a fortune in hardware.
Perhaps my tendency of expression is overly ironic and indirect at the cost of clarity and obviousness, but I meant that it seemed biased for Gates to describe the reason MS won out over superior tech as 'luck'.
Clearly Microsoft did lots of things right (and wrong) over the years. And one of them was creating DOS in the first place. But IBM could have presumably gone in a different direction for the IBM PC fairly easily. And had they done so, it's reasonable to speculate you might never have heard of Microsoft.
It is biased, as it is coming from the winners. But it does say something that Gates was willing to admit that it wasn't because of some glorious, amazing product that they were successful.
A big part of it was also Gates's ruthless business practices.
My father-in-law was in the construction business during this time, at a company building bridge spans. A supplier gave them 5 Apple IIs as a bonus for buying so much product. The owner exclaimed, "Throw them out, computers are a fad!"
To calculate the bridge span requirements, it took 3 guys 3-5 days to do the calculations, draft the spans, and double-check everything. My father-in-law opened the book on BASIC and saw the math it could do.
He wrote a very basic program to take the inputs and do the math. It took the computer less than an hour to return the results.
Who knows what the future brings, but I'll be optimistic.
You could argue the computer was an extension of those human-done reports, and the iPhone moved those business reports from the desk to mobile, on the go. VR, on the other hand, is an entirely new paradigm. However, VR will find its way into a lot of "pro" industries: VR interior design, VR architecture, VR manufacturing-line design. There are tons of use cases.
I think the transitions from punch cards to CLIs, to GUI/mouse interfaces, then to touch interfaces, are more apt metaphors when considering the impact of VR.
Touch found its place as the interface for small-form devices. I'm pretty sure VR is the interface for gaming (with other niche applications). It will be wildly popular, probably a headset in every house within 20 years; I just suspect it will be doing what we already expect - gaming, visualizing chemical reactions, architecture, etc... things we already do in 2D, made just a bit more convenient in 3D with spatial controllers.
My first internship was at a biology research startup back in ‘96.
My supervisor, the head of R&D, had an SGI w/ 3d goggles and a glove and would spend hours in front of it manipulating virtual molecules. He would talk about his days as a PhD student at Stanford in the early 70s working with punch cards and the progress of computing up until that time just over 20 years later… and thought that maybe in 10 years, when his rig that cost about $100k would be cheap enough for the average consumer, it would be the next step in personal computing.
What about the idea that the headset is just a crutch? It seems the eye- and hand-tracking UIs shown are tied to the goggles only as a demonstration, to get the tools into developers' hands.
My first computer (bought by my parents) in 1986 was an enhanced 128K Apple //e with DuoDisk drives, a green-screen monitor, and a dot-matrix printer. It was about $3K in all, in 1986 dollars.
My second computer, which I got as a graduation gift, was a Mac LC II with 10MB of RAM, a 512x384 12-inch monitor, a LaserWriter LS printer, an Apple //e daughter card, a 5-1/4 inch drive for the card, and a copy of SoftPC. It was $4,000 - in 1992.
People used to spend a lot more on computers back in the day.
Both of my computers were hackers' playgrounds for me.
Your computers were tools for building... anything, and the manufacturers (especially Apple back then) actually educated, promoted and rewarded amazing things. This is a consumer device tightly coupled to the most restrictive ecosystem we've ever seen. Doesn't seem like a hackers' playground.
You know, open-mindedness and being curious about new things is part of the hacker ethos. If you are dismissing something without even experiencing it, based on nothing other than preconceived notions about it, maybe you aren’t really a hacker in the true sense of the word? Just a thought.
I paid $10 a month to distribute my little freeware on AOL in college so normal people could access it. I also submitted it to the Info-Mac archive via FTP.
I would have gladly paid $99 a year for the bragging rights of putting my app on the iOS App Store so anyone could get it.
I paid for a 65C02 compiler back in the mid-'80s using my allowance, and paid for a Mac C compiler in the early '90s.
My parents were solidly middle class - a factory worker and a teacher.
The point isn't paying to publish apps, the point is that you can't hack into it if this is anything like iOS.
If this runs a version of iOS, we can't access the code that is running on it, we can't open a terminal, and we can't use our programming language of choice. We can't use the actual device by itself for building anything hacker-related.
We can make apps for it using an actual PC made for building, but only if it runs macOS, and using their proprietary language. The actual device is far from a hacker's dream or plaything if it's as locked up as the iPhone. Very cool device though; hopefully it brings VR into the mainstream outside of gaming.
You can definitely hack iOS all you want to, use private APIs, do all sorts of things that are against the App Store rules, run it on your own device and publish the code to GitHub to let other hackers compile it and run it from source.
iPhone enthusiasts like Stephen Hackett (his real name) hack into pre-release versions of iOS all the time to see how it runs, from their computers.
Even the 1986 6th-grade hacker that I was would never want to program on the phone. I fail to see the fascination with running a fully fledged IDE on a phone.
When I was programming on the very open Windows CE devices pre-iPhone, I avoided running on the phone as long as possible and used the emulator.
You can use any programming language you want that compiles down to ARM.
Do tell me how I can run my own home screen, directly access the filesystem, write my own device driver or tweak the kernel without voiding my warranty and risking bricking my phone using a jailbreak exploit, assuming one is available for my device.
This "you can build and directly deploy an app to a limited number of devices if you're on the developer program and have a mac, therefore you can 'hack iOS all you want'" meme is borderline dishonest.
Also, the fact that you "fail to see the fascination" is of little significance to those who do want to develop on-device. You don't get to make "you can hack iOS all you want" work by restricting what people are allowed to want.
Piss off, I'll do my tinkering on a platform that isn't continually trying to prevent me from doing it, thank you very much. I'll use my expensive pocket bauble as the limited device it is designed to be. Just don't try and gaslight me into thinking it's in any sense a general purpose computer.
You do you. The rest of us do our tinkering however we choose, based upon our own situations as well. Some people only own the "expensive pocket bauble" (you know, economics are a thing) and they're still out there hacking on it rather than whining about the walled garden. We all know that there are other ecosystems that are more modification-friendly. We don't need to hear your over-qualified internet rant; hence the "nut up" comment. "Void my warranty", sheesh. What point are you trying to make?
There are plenty of kids out there tinkering and hacking in the truest sense on iOS and macOS using one of their relative's dev accounts.
For one thing, it isn't. I can do linux kernel dev on my desktop and laptop without voiding my warranty or risking bricking anything.
And most of the things I'd want to do with an iOS device that I'm prevented from doing wouldn't involve any kernel-level work anyway.
But more importantly that's the most pathetic justification I've ever heard. By that logic, since playing the piano is difficult we might as well sell pianos with all the keys glued together.
The truth of it is that Apple products are not designed for tinkering, and they actively resist attempts to do it. To spend my time plumbing their internals to add value to a product that treats me and my kind as a sort of infection would be a pure waste. I'll do my hacking on Linux and use my iPhone for arguing with people on hackernews; it's fine.
The argument is not that you'd run your IDE on the AR device (although there's people who will want to do that).
The point is that you can hack the device to run your software, or distribute your software so that it's easy for others to use it as well, but not both.
>Even the 1986 6th-grade hacker that I was would never want to program on the phone. I fail to see the fascination with running a fully fledged IDE on a phone.
Problem is, Apple isn't advertising this as a phone or "mobile device". They're advertising it as a spatial computer. And IDEs are probably my second-most-used computer applications after web browsers, so I want to be able to use them with the near-endless monitor space offered by this spatial computer.
It's true that it's a walled garden. But with everything the device has access to, cameras pointed at your pupils, cameras pointed at your surroundings, precise tracking of where you look (which the OS prevents apps from accessing), I'm not sure I want apps to have full permission to the hardware. The more "intimate" devices become, the less you can trust random apps. They still let you run whatever you want on your own device as a dev, which seems like a good compromise. But I'd never trust other people's apps with access to all that.
Disagree. HoloLens and Oculus went with internal SDKs too complicated for most devs, and had official docs recommending Unity and Unreal for development. This seemed stupid, as those environments were not purpose-built for this class of device (mixed reality). I think Apple has shown some serious wisdom here, and I'm excited to explore their frameworks.
The price still has to go down for mass adoption. Apple sells just ~25 million Macs per year, and probably most of them are the cheaper MacBook Airs priced below $1.5K. Even though their current MacBooks are great and people see the value of having a laptop, it's a mature market and Apple still has only about 10% market share.
People's wallets and salaries are not made of rubber - there is a limit to how much they can stretch their budgets.
The tech will be reverse-engineered once the first Vision Pro is out. It took Android smartphones a few years too before they were competitive with Apple. Today, the best Androids are almost as good, if not better.
There will be multiple VisionPro competitors within half a decade and they’ll all compete on price and focus on mass adoption - just as it happens with phones and laptops.
I feel much the same way about having been given my own computer with internet when I was about the same age in the late 90s.
Following developments in VR/AR for a while, I've felt like we've been in a "business computers before spreadsheets", "home computers before the internet", "mp3 players before the iPod", "PDAs before the iPhone" sort of period, wondering if we'd see a sea change platform emerge. I generally have a low tolerance for hype, but I'm excited about the potential here. Can't wait to experiment with developing software for it!
Perhaps it is not the next iPhone but the next Mac: a better general computing device at home or at work. I am a cynic, and I see it as the next iPad.
I feel the computer revolution was foreseen by many. The Vision Pro won't be nearly as explosive, if at all. It's essentially an interface for personal electronic space, and that space is being explored by hackers everywhere with all kinds of devices. So 'figure out what it's for' is being explored by many, and doesn't have to be exclusive to Apple devices.
Like 5 years ago I worked at a digital agency that received a bunch of devices from different manufacturers. AR, VR, spatial audio, directional audio, holographic displays and were given a sort of blank check to make some cool experiences. We never really got past the novelty factor. Obviously there may be some cleverer folk in the world than we had at the time, but I came away thinking these would never really get mainstream acceptance for anything besides gaming. I also came away realizing that mouse and keyboard are really just amazingly efficient and 2d screens are perfectly fine.
I saw the personal computer, the PDA/smartphone, and the tablet, and I never doubted their success as product categories.
But the Vision Pro, or AR/MR, is the first one where I'm joining the doubters' camp. I am sure it will do well in many areas; I just don't see it carrying the same weight as the PC or the smartphone.
Computers were invented to speed up calculations, and yet the primary thing we use them for today is to read the news and send each other messages and memes…
I think it's totally reasonable to expect developers to find the real use.
In one or more of William Gibson's early books, maybe Neuromancer, there was a thing called simstim, where you could vicariously live a celebrity's life as they live-streamed their daily existence while wearing a headset.
Combine that and pornography and I think you’ve nailed it. Pun possibly intended.
That depends on your view of VR entertainment and Las Vegas-style (adult) attractions.
The Vision Pro was showcased with Disney as a content partner. While Disney has historically featured "family friendly" entertainment, there is definitely a market for adult entertainment that the Vision Pro could excel in.
Is it "an exciting Las Vegas adventure in the privacy of your home" or "a lonely person in a dark room trying to escape the sadness of reality"?
Apple won't allow porn-focused applications in their stores.
Apple won't allow alternate stores on Apple-branded mobile devices, or sideloading.
This is a fundamental disrespect of the customers who have purchased the device. Apple values their brand image more than allowing their customers to display what they want on the device they own.
There is a browser on that thing. There will be generic 3d viewers that can access content over the internet. I think you can display what you want one way or the other.
That they don't want porn to be the centerpiece I can understand. That's very problematic or outright illegal in most parts of the world.
I consider this to be hugely beneficial to Apple users. It's beneficial to Apple too - if everyone were mentally and physically sick from pornography even more than today, who would work, earn money, and buy Apple products?
Not necessarily. Developers and product designers assume that users are going to use their product in a particular way. In my - limited - experience in those roles users tend to find entirely different ways to use your product that you never ever would have thought of, and if you had thought of them would have had significant impact on the product itself. The resulting impedance mismatch tends to be overcome with ruthless applications of duct tape, post-it notes, bending and twisting of parts and - unfortunately - the overruling of safety devices and lock-outs.
The first computers were operated by the British and US defence establishments. Could you explain in which way you feel ENIAC was "100% open platform"?
People who owned and had access to ENIAC could load and run any software they wanted on it. They did not have to fight with an app store support agent for months over arbitrary rules only to get their software rejected. That's what open means.
I've never understood this criticism of iOS. It is a fully fledged general purpose operating system with bare metal access and ability to run anything that can be compiled. Sure you can't distribute apps that do whatever you want on the app store. But there is absolutely nothing about iOS that makes it any less of a general purpose computing platform than Windows or Linux.
On iOS you don't have root. This prevents you doing many things with it, which is why jailbreaking exists. The fact that you can write an app and deploy it directly to a restricted number of devices for testing (if you're on the Apple Developer Program and own a Mac) doesn't change that.
Try altering the iOS home screen behaviour, directly accessing the filesystem, writing your own device driver or tweaking the kernel. You certainly don't have "bare metal access" except by voiding your warranty and risking bricking your device by jailbreaking it, if there even is a jailbreak exploit that currently works on your device.
>As far as I’m aware you can’t do that with iOS devices (potentially including the AR ones).
Sure you can. You can fire up Xcode right now and build a self signed app to your phone with whatever the heck you want in it. Distributing that on the App Store is another story, but there's nothing stopping you from executing whatever arbitrary code you'd like on your own phone.
The question isn't whether you can view the existing web content on the Vision Pro, the question is whether all of the Vision Pro SDK, sensors, and 3D functionality is exposed to the browser to enable VR-enabled webapps for the genres which Apple would disallow on their app store.
You don't need an app to watch porn. Just connect it to a stream, either online, or offline through a storage device. An app would be useful only if we're talking about VR, but so far this hasn't worked for porn.
New opportunities for abuse do exist with the Vision Pro. If you allowed free-rein access to the sensors, people could record the inside of your house/work, capture your face, fingerprints, retinal pattern, etc.
And again with Objective-C it's impossible to prevent private API usage unless you have some sort of App Store model which can inspect the binaries and prevent abuse.
> And again with Objective-C it's impossible to prevent private API usage unless you have some sort of App Store model which can inspect the binaries and prevent abuse.
An alternative is to sandbox applications to prevent them from calling anything other than the official API, and to use a less restrictive sandbox for applications signed with a key owned by the vendor.
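To make the idea concrete, here's a rough Swift sketch of what such a loader-side policy could look like. These types (SandboxProfile, CodeSignature) are invented for illustration, not real OS APIs:

    // Hypothetical sketch: pick a sandbox profile from the binary's signature.
    // Third-party apps get a profile where only public frameworks resolve;
    // vendor-signed binaries get the looser profile with private API access.

    enum SandboxProfile {
        case publicAPIOnly // strict: private symbols don't resolve at load time
        case privileged    // loose: private frameworks may be linked
    }

    struct CodeSignature {
        let teamIdentifier: String // who signed the binary
    }

    func sandboxProfile(for signature: CodeSignature,
                        vendorTeamID: String) -> SandboxProfile {
        // Anything not signed with the vendor's key is confined to the public API.
        signature.teamIdentifier == vendorTeamID ? .privileged : .publicAPIOnly
    }

Enforcement would then happen at load/link time rather than at store-review time, which sidesteps the binary-inspection problem entirely.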
> If you allowed free-rein access to the sensors, people could record the inside of your house/work, capture your face, fingerprints, retinal pattern, etc.
Yes, yes they could. That's not and shouldn't be Apple's problem. That's your workplace's problem to regulate how the device is used on-site, the government's problem to regulate how it can be used in public, your household's problem on how it can be used in private, etc.
The device will be inevitably jail-broken anyways, so a walled-garden isn't going to stop bad actors.
Not to mention, most of the things you mentioned can already be accomplished with less expensive and much more subtle devices, like a standard digital camera. And those devices definitely don't try to prevent abuse. (Imagine if your camera refused to take a picture because it thought you didn't have permission!)
> Yes, yes they could. That's not and shouldn't be Apple's problem. That's your workplace's problem to regulate how the device is used on-site
Consider that the vast majority of exploits on Windows are not the fault of Windows but the fault of 3rd-party applications; the blame is still always put on Microsoft.
If Apple gives you free rein, and shit hits the fan. People won’t blame the company for allowing a piece of software to go rogue. People will blame Apple.
At least where I live in Australia there are laws about how biometrics are managed.
Apple can't just capture them and then allow any rogue app to access them. The device would be considered a threat to national security and banned.
And no, there are no other mass-consumer devices which specifically store a 3D representation of your face and a high-resolution scan of your retinal pattern.
Unreal Engine developer here, super thrilled to see Apple spitefully choosing to support only Unity, with no mention of UE. I imagine it's due to the bad blood between Epic and Apple over the App Store lawsuits.
After Nvidia blamed them for their failing GPUs, they literally never shipped anything Nvidia ever again. There's probably an alternate universe where Metal does not exist because Apple and Nvidia became friends and just optimized CUDA for macOS.
I remember Jerry Pournelle on TWiT.tv saying he reviewed the original Mac and called it a bold vision but ultimately a toy: a great idea, but without enough RAM to make it really useful. Steve Jobs saw the review and took action. Jerry said the 512KB Mac was excellent, but by that point it was too late.
He was blacklisted for life from having any early access to hardware for review and was never allowed into an Apple conference again. This remained in place until Jerry's death 33 years later.
That's fascinating. I wonder how formalized and widespread this logic is -- did economists tell leadership, or did they just happen to organically hire spiteful leaders?
Sure? Why would you do anything to help someone who’s trying to tear down your business?
Don’t expect long term love for UE from Amazon or Sony or Microsoft, either. They all know that Epic’s end game is to get regulators to force platforms to allow Epic as a competing app store. Let others take the risk, swoop in and undercut once there are profits to be had.
Nobody is going to want to fund that. It will take a decade or more, but UE is doomed unless Epic divests it or gives up on the predatory app store / regulatory business model.
Wow, you seem to have a lot of love for walled gardens...
Take the risk? What risk? By now both surviving mobile platforms (iOS and Android) are well established, and especially the iOS app store is raking in obscene profits for Apple out of the work of others. I would have been ok with a 30% "app store tax" a few years ago, but nowadays it just feels like milking the cash cow...
I’m just saying that Epic is using a legal strategy to attack markets with vertically integrated hardware and software distribution (phones, consoles). The owners of these platforms are not thrilled, and we should expect UE to be a casualty.
When Unreal had those super-expensive licenses, on the order of $200k+, most indie devs moved to Unity, so the average Unity dev doesn't have lots of resources to spend on a pricey headset, nor to meet the expectation of very high-quality titles. Either too few devs will ship anything or quality will be low.
Politics aside, nobody should ever choose Unreal Engine for standalone VR anyways. It's clear that Epic doesn't prioritize VR, so it's more easily explained as an efficient use of resources.
In concrete terms, Unreal Engine's VR support is awful; you can't get a good looking game running at high framerates on mobile hardware with it. Its forward renderer is severely lacking on mobile VR, and deferred is untenable.
I don't want Unity to be as dominant as it is, but it's the only sane choice for VR. Epic is very clearly not interested in being a leader in this area.
I'm still wondering if they chose Unity only as an initial partner and will allow others to integrate their 3rd-party engines themselves. Otherwise they would alienate not only Unreal Engine game makers but also:
- Godot
- Bevy
- Flutter
- Qt
since all of the above render their own UI widgets - you somehow need to integrate the eye tracking to be able to select them.
Isn't there also the issue of the device overheating? Accidental water splashes? Oil splatter? Camera fogging? The battery dying while you're doing something, leaving you suddenly blind?
I think the Apple demo of people mostly sitting on a couch is not a lack of imagination, but a pragmatic approach of how this device should be used.
You may be right. My hope is the AVP is at least step toward something that could make potentially dangerous physical activities, like cooking, safer (e.g., by showing you the temperature of a surface before you touch it with your hands, as suggested elsewhere in this thread).
It really makes it a pity the Hololens didn't go anywhere. I wonder how much time we'll have to wait for a device that actually aims for that space, and not a portable display like the AVP seems to be.
I feel like even if it is good enough it’s only a matter of time until it lags and you lose a finger. Lulling us into a false sense of security is the real danger.
I don’t know about you, but it’ll be obvious to me in much less than 1ms when the passthrough is lagging. You know your hand moves, so when you don’t see it move it’s incredibly jarring.
You might be special. (I'm not being snarky.) I'm a digital musician. 5 ms lag is just barely perceptible to most of us.
My intuition regarding proprioception is similar to yours, though -- give me laggy input of my own hands and I'm still pretty likely to get things right.
Sound travels around 1.7 meters in 5 ms (at ~340 m/s, 0.005 s × 340 m/s ≈ 1.7 m). Acoustic musicians can play fine while being more than that distance apart, although there is a tendency to slow down unless consciously keeping the tempo up (everyone individually feels that they are rushing a little bit, but that's how they keep the pace).
There is a huge difference between consistent latency and jitter, but even then I doubt that 1ms latency + 1ms jitter would be very noticeable.
Damn, a bunch of worry warts in your other responses. I feel like I could safely manage a knife if I closed my eyes, or if I was cooking at night and my power went out! Like, I wouldn't immediately chop off a finger... I could manage to set it down. Sheesh.
I'm planning to be a Day 1 user of AVP to see what I can build with it, and I look forward to your cooking app!
Even if it's almost fine, do you really want to encourage someone to hold a knife with your app and a big black box in front of their eyes, and be ready for any litigation?
I’m definitely getting a vision pro the moment I can, but I am most definitely not going to wear one when I’m handling anything other than a keyboard on my couch.
I’ve never seen an AR/VR usecase that interested me, but if this thing turns out to be a decent replacement for external monitors I’d probably buy one. I travel a lot for work and the laptop on the shitty hotel desk setup is a major drag.
I'm with you, it seems neat technology wise but the use cases I have seen just seem almost forced. I mean it is neat but I am not really wowed by the potential. In the realm of teaching, particularly with representation of physical things - this thing will be awesome. But beyond that, I don't know.
The idea of a portable large viewing screen is brilliant, but that is not the real main selling point of this. A decent VR headset could do something similar.
In 10 years time, comments like this will either look like a prophet that could see the folly of the future... or be like all those people that dumped on the iPod and iPhone.
I’m not trying to dump on it, I’m just saying that as I see the product today, that’s the only thing I imagine myself wanting to use it for. That said, when the iPhone first came out I was only really interested in it to upgrade my mobile email experience, so we’ll see I guess…
This seems like it'll be much higher resolution than even decent VR headsets. Also, I think the AR mode being very good is important. Not seeing the real world for too long would make me feel claustrophobic.
I somehow doubt that this would work for real work. The desktop/paper metaphor of the monitor is an important part of its usability. Not sure whether "spatial computing" can invent something equally workable for the stuff people usually do on a computer (besides 3D gaming).
It is conceivable that completely different categories of workflows will become available. In the end everything we do on a computer has been invented at some point or another. I guess there are plenty of eager early-adopters and fanboys that will be more than happy to report on their magical productivity boosts.
In any case, assuming that the above caveat is not an issue, if the main real use case is as an external monitor replacement, I would think that suitably optimized hardware would deliver much better value for money. No need to wear a computer on your head when you have to carry one anyway as your laptop (or even your mobile).
I’m also rather skeptical about whether it would do very well for that usecase. But Apple seem to be saying it will work well for that, so I’ll wait and see. Once I’ve had a chance to try one and see how it works/feels I’ll make up my mind about it.
I’m also not really concerned about value for money. As far as business expenses go it wouldn’t exactly be a major spend. I’m personally more concerned about how good the experience is, and value for weight and size. For about half the days in the year I have to live with ~35 lbs of possessions that have to be able to fit in a suitcase. So for me if it could deliver sufficiently improved work experience it would be worth it.
That makes weworking and hotdesking in an office even more useless than it already is. Sitting there with your headset on, disconnected from most of the world.
Some people just need a quiet place to work where there’s coffee and clean bathrooms and no toddlers running around tugging at your leg asking Dada to go up in the air.
My use case is probably not so common, but as a frequent hotel-room worker, the main value I (only occasionally) get from the WeWork-like services is a comfortable desk/chair (with a monitor) and a more stable internet connection than you sometimes find in hotels.
I assume the end goal is something like Cyberpunk 2077, where you always have a HUD. If you go get a cup of coffee while working, you can, and still fidget with things as you do. Or maybe you're about to go to lunch and would otherwise miss a message that could prevent one of the worst outages in your company's history. I'm just trying to see the net good that people will figure out over time if it's done correctly; there's likely to be something people will come to enjoy about it. I would also assume you'd want as many RDP-type applications on this platform as you possibly can have, and if Apple is not pushing companies that make RDP apps and other useful app makers onto their platform, then they're really dropping the ball early.
I feel the same with the Meta devices but I think what the journalist misses is the strategy.
Meta completely failed at the strategy; their store is empty of AAA offerings. This is not bad luck, it is a blatantly incompetent strategy from Meta. It's in the business books.
I assume Apple has a strategy. Could it fail? Obviously, but they are doing what is right.
When the iPhone 1 launched there were many hiccups, but they were sorted out in a short period of time.
I agree that initially this product will not sell many units. Sci-fi thinking also tells me that this could improve many industries with completely new interactions with the world. I see this more as a B2B product than B2C.
Apple's strategy has always been to go 100% in and don't look back. Eventually people assume if Apple is investing so much into this new thing, they must have a plan. So a few adventurous devs and users get onboard. Apple marketing team uses that as the bait to bring more people onboard.
Yes, but you can say the same about Meta: their bet on Oculus is big, yet nobody is onboard. Without being an Apple fan, I can see that their thinking and execution are different.
Too many business/econ types have difficulty envisioning what new tech may be good for. Be it early computers, or the internet, or now this, they seem to be bottom-line people with little imagination or vision.
I look forward to a virtual, nothing-in-my-hands-but-I'm-still-playing-it drumset. The lack of tactile feedback will make it worse than real drums, but it'll still be fun to be able to play them anywhere.
Air drumming feels much nicer with drumsticks. You actually get tactile feedback this way if you use a relaxed grip and let the butt of the stick hit your palm.
I make Aerodrums, and it operates at 125Hz for the mocap. As far as I understand, the Apple Vision Pro tracks hands at 83Hz. They would need to improve that (and provide developers what they need to do a good job determining the position of the drumstick tip) for an air-drumming app that plays to a musical-instrument standard to be possible without accessories.
I've replied elsewhere in the thread plugging our live Kickstarter for Aerodrums 2; check it out, it only has 7 days to go.
Thank you for the plug. We're doing a Kickstarter right now for Aerodrums 2.
It will have support for VR passthrough. We'll add support for Apple Vision Pro and Meta Quest 3 as soon as they make it possible (if anyone has connections to speed that up, I am interested).
I feel like the VisionPro is a demonstration of what it takes technically. All the frameworks are fleshed out. The work is done and in x-number of years it will take off or the various pieces will be absorbed into other products.
Imagine just the eye and hand tracking with no screens. That could be a much smaller device and one can trade the screens for an interactive environment.
If I follow correctly, it seems they removed the $99 developer registration fee? Smart move, to maximize the number of app ideas generated for the launch.
They've had that waived for a while if you just want to develop locally and are OK with apps expiring every 7 days unless you republish with an online check-in.
You still need to pay for more advanced features, and to publish your app.
I actually think the killer feature for this thing is already there: a hands-free, mostly passive consumption device that can seamlessly switch from full attention entertainment (or work) to partial attention consumption.
I think it will work a bit like wireless headphones with audio passthrough. I can sit at my PC and hear audio from whatever I'm doing, but I can also get up and move around and still hear that audio. The audio passthrough, even though it's not perfect, is easily good enough for me to hear my environment and even carry on a conversation without having to take them off, and I can pause whatever music or podcast or video I'm listening to just by pressing a button on the side.
I think the headset will provide a similar experience in that you can push whatever random YouTube video you were watching out of your direct line of sight, and then get up and go make yourself lunch while you're still half paying attention to it. Not only that, but you can interact with it using your eyes + minimal hand gestures.
The really intense, fully-present experiences (like gaming on a current headset) will probably still happen, but I think most people won't be spending much of their time there, similar to how the total amount of time spent in modern, intense, realistic gaming is tiny compared to how much time people spend scrolling feeds and mindlessly consuming content.
You do use your face though and goggles can make drinking from a mug tricky. Hot drinks can also mist up your lenses if you linger near your nose hole.
My experiments using straws in VR just led me to stabbing myself in the gum, lip, nose and cheek in various painful ways. The perfect vessel is a long neck beer bottle. Easy to get into your face-hole and won't bang against your goggles. The downside is how easy it is to knock over.
No, but they will mess with and override proprioception. I've used the Vision Pro and the first time you try to take a drink with it you'll probably miss your mouth. I think the main reason is that you see the cup coming towards your slightly offset "eyes" and your body automatically adjusts to match.
I think if I had closed my eyes (I didn't try that) I would have done better.
Maybe the headset is the next iPhone, but I doubt it; rather, the Apple Smart Glasses that should be bred from the headset are the next iPhone. Lightweight smart glasses that do amazing things, created by Apple and developers, will make everyone want to own a pair.
My bet would be that there is none in our generation, and that we'll spend the rest of our lives with two kinds of computing: one device that fits our hand, and other devices to deal with the rest.
The question becomes: is this inherently better and more convenient than a laptop? If it's not, it will stay a nice accessory, like the watch is a nice accessory.
The watch itself is already a phone substitute for many people and continuing to grow. Isn't wearing a small thing on the wrist better than a brick in your pocket?
Do you have any imagination about how form factors could advance or even be different?
Plenty of people prefer a brick in the pocket to a small thing on the wrist, even before you consider all the technical advantages of the brick — screen, input, battery life, etc. I don’t feel the brick in my pocket, while I feel the small thing on my wrist almost at all times, however comfortable the band is.
The watch prevents me from reading the internet. While still remaining connected to the world through SMS and calls and purpose-apps. The watch allows people to un-nerdify.
Cool, but I want, and millions and millions will want, smart glasses that...
- Can be used to zoom in (binoculars)
- Keep track of and show the score of the real-life fencing, ping pong, tennis, card game, etc. that you're playing (Apple Score)
- Show how a building or place looked x number of years ago (Apple Rewind)
- On meeting someone at a conference, provide their name and the company they're with
- Many, many more innovations
And LOL, my nicknames for some of those ideas are probably silly, but they are unique and, I think, useful to the point of people buying Apple Smart Glasses as much as they buy iPhones.
I think the main difference is that people are currently using smartphones to book appointments, ride public transport, and find their way to public offices. Even someone with dire financials will buy a decent smartphone, as they can't afford not to.
All of your examples are great, but you'd still keep a phone in your pocket while doing these activities.
Another aspect to this: most of those were better done with Google Glass, but the social discussion and Google being Google just killed it. I hope the Apple devices will bring more acceptance, but once it's there I'd see other AR makers take precedence to bring better implementations than what the Vision Pro is pointing at.
Heck, at this point the iPhone isn't even the global market leader. I'd totally see a Samsung-like maker take the AR market and push it further, the same way their smartphones are actually pushing the envelope while iPhones and Pixels keep being the vanilla choice.
Sure, and the glasses and your iPhone will work together. The glasses will sell iPhones and vice versa, in equal numbers.
Now, the iPhone could do the ideas I've thought of above, but it's all about user experience... when playing ping pong I'm not going to hold my phone up and let it keep score for me and display it on my phone. No, I want this done automagically and effortlessly, and wearing glasses is the best UX for it. The same goes for meeting people and seeing their name/info, and for all my other ideas noted above and others not noted. One of my coolest ideas fixes an age-old question we as humans won't ask again.
Meta (Facebook) not having a phone on the market to sell and work alongside smart glasses is a bad thing for them!
I don't think there's a single comment here that's about the developer tools. It's all just people debating the merits of the Vision Pro and XR in general.
It’s pretty boring because it boils down to “I can totally imagine myself doing X with a headset” and “I can’t imagine myself doing X with a headset” — the operative word being “imagine.”
I’d love to read a comparison between visionOS SDK and OpenXR from someone who has actually used the latter. I gather that OpenXR is lower level, but what exactly does that mean for an app developer? How much more work would it be to build a “spatial experience” cross-platform with OpenXR? What would the compromises be for both approaches? Would it be feasible to build an engine that runs on both platforms?
A lot is only now becoming clear with the SDK here today, but as usual there are definitely fine-print gotchas. There are way more restrictions in the MR Shared Space mode, so whatever you bring (Unity/WebXR) exports only to RealityKit for rendering, under tight Apple conditions. In full VR mode without passthrough you have full control, so custom shaders, but no gaze data and a 1.5 m motion range limit. The MR mode itself has two flavors, single-app or multi-app, with some trade-offs, like no hand joints in multi-app. Hand joints aren't part of the core OpenXR standard, so they need translation to be cross-platform anyway. And so on. All that said, it's great that so many experiences can port, and there is a lot more interest in WebXR now; hopefully WebGPU will fall into place too, and it will be interesting to see what we can do with the Internet in 3D.
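For anyone who hasn't opened the SDK yet, here is roughly how those modes surface in SwiftUI. A minimal sketch, with hypothetical view names (ContentView, DemoImmersiveView): the shared-space window and the dedicated space are just different scene types, and the immersion style is what picks between passthrough MR and full VR.

    import SwiftUI

    @main
    struct SpatialDemoApp: App {
        // Which immersion style the dedicated space currently uses.
        @State private var style: ImmersionStyle = .mixed

        var body: some Scene {
            // Ordinary 2D UI; lives in the shared space alongside other apps.
            WindowGroup {
                ContentView()
            }

            // A dedicated space: .mixed keeps passthrough, .full is the VR
            // mode with rendering control but no gaze data for the app.
            ImmersiveSpace(id: "demo") {
                DemoImmersiveView()
            }
            .immersionStyle(selection: $style, in: .mixed, .full)
        }
    }

The space is opened at runtime with the openImmersiveSpace environment action, so an app can start as a plain window and escalate from there.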
In the WWDC Unity-Metal demonstration they definitely seemed to have gaze tracking. Have you looked through the SDK and confirmed that there is no public way to do this?
Yeah, it's very easy to get confused by them chopping all the sessions up for the conference. It was a little frothy, but my understanding now is that gaze is provided in both MR modes, known as bound and unbound volumes, so look-and-click works fine. In a bound volume you don't get head or hand pose or room geometry. In the fully immersive VR mode you don't have gaze data, for understandable privacy reasons, although room geometry could be handy; if it really matters, you build for MR and target the more restrictive OS requirements, like Unity PolySpatial and only Shader Graph.
Even the Unity team had to correct a few edge cases around WWDC as the sessions dropped:
I don't think that tweet thread addresses gaze in particular? Is there a particular tweet I should be looking at, or is it just making the (absolutely correct) point that it's confusing?
---
I'm specifically talking about the "Bring your Unity VR app to a fully immersive space" WWDC talk, which claimed to be built on Metal and ARKit, and had a very, very brief demo in it talking about gaze; screenshot here: https://media.hachyderm.io/media_attachments/files/110/520/9...
Note the demo wasn't actually on-device, and gaze was just following where the screen was pointing, so it's definitely possible I'm reading too much into this.
No, I'm sure they don't give the app access to the passthrough data or gaze details in VR mode, unless quite a few people are wrong on this too. And since in VR you are completely outside the rendering subsystem, I don't even know how you would interface what's in your app with the gaze hover trigger, etc. My guess is the slide is referring to the MR unbound mode ("Full Space", as they call it, to muddy the water). I would definitely be interested to learn I am wrong here.
EDIT:
Checked with our team: in VR mode, gaze has event triggers on any Apple UI object without exposing the details, so that works unless your UI is not easily portable. And that's the same across all MR and VR modes.
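To make that concrete for anyone following along, here is a minimal sketch of what it looks like from the app side (standard SwiftUI; the Circle is just a stand-in for custom UI). The system renders the gaze highlight itself and only ever hands the app the resulting tap:

    import SwiftUI

    struct GazeRow: View {
        var body: some View {
            HStack {
                // Standard controls get the system gaze highlight for free;
                // the app only sees the eventual tap, never gaze coordinates.
                Button("Play") { print("play tapped") }

                // Custom views can opt in; the highlight is still rendered
                // by the system, so gaze data is never exposed to app code.
                Circle()
                    .frame(width: 44, height: 44)
                    .hoverEffect()
                    .onTapGesture { print("circle tapped") }
            }
        }
    }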
I have a Valve Index and I’ve watched movies in a virtual cinema many times, most of those with friends. It’s immersive but becomes a bit of a sweat box after a few hours. The FOV and resolution are lacking and I can’t use it for virtual monitors. I can see this device fixing most of those problems if the specs are true. But I don’t see myself taking it out in public. That’s laughable. And I’d only drop that price if I was sure that the ecosystem is relatively open.
Well, ok, but since the tools and libraries don’t exist yet… Instead of waiting for somebody else to build middleware and hope it will be good, what would it look like today to design and implement an app that works on both visionOS and OpenXR?
That’s a question that I find interesting. Even if I don’t end up building such an app, it would illuminate the differences between Apple’s SDK and the rest of the industry.
Honestly, sometimes I question how much this site is really “hackers”.
I downloaded the SDK and gave it a try. It was pretty seamless to get the sim booted up. However I did have trouble getting any of my hobby apps to build for the native SDK. A lot of libraries will need to add target conditionals to build natively. I finally got one of my most simple apps running, and SwiftUI does translate pretty well into it, but there’s still a bit of visual jank that I guess I could probably learn how to fix. The simulator also does seem kind of slow and unresponsive; I wonder if that’s at all representative of the final OS.
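If anyone else hits the same library build failures: the fix is usually just wrapping platform-specific bits in target conditionals, since Swift 5.9 adds an os(visionOS) condition. A trivial sketch (glassBackgroundEffect is the visionOS-only part here):

    import SwiftUI

    struct RootView: View {
        var body: some View {
            #if os(visionOS)
            // visionOS-only styling; any UIScreen-based layout code from an
            // iOS target has to be branched around the same way.
            Text("Hello from the shared space")
                .padding()
                .glassBackgroundEffect()
            #else
            Text("Hello from iOS")
            #endif
        }
    }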
I think discussions around the potential behavioral impact of this tech category are a must, particularly with the existing forces at play that subsidize the literal engineering of addiction.
Yes, but not when it happens repeatedly, covering the same ground and obscuring discussion of the specific content being posted. It happens with every post about Google and every post about Microsoft, and it is mostly just people covering the same ground again and again.
This just tells us that this version of Apple's Vision product line-up is for the rich tech bros here and is not expected to be used by general consumers, who won't throw $3,500 at an XR headset, just like the HoloLens.
The comments clearly show how out of touch with reality they are. There will be a version of the Vision product line-up that is reduced to the size and naturalness of a pair of glasses, with some AR and reduced XR capabilities. But that is 5-10 years down the line, or it could possibly come quicker than expected.
Who knows, but this first version isn't it. This one is for the devs and techies only.
I personally think AR is mostly for techies/nerds/gamers, and professionals.
Doing CAD work in AR with a $3,500 device is worth it for any business.
Even as a sales tool when selling large products (cars, boats, etc.) without the need for stock. I.e., I can go to my local dealership and look at the new model in person, then play with AR to decide which options I want for the colour, wheels, dashboard, etc.
I'm not joking when I say I want to write software for this thing that can help you get the most out of a psychedelic experience. I think the combination could be powerfully positive (or negative, of course).
Signs are quite potent signals, static objects designed to inform and direct. It's only by long habituation that we ignore their power. I'm not at all surprised that they captured your attention!
Todd Rundgren wrote (or produced) a Mac app in 1990 called "Flowfazer" that was intended for meditative visuals. I remember lots of "is this what computers are supposed to be for" articles at the time.
>I'm not joking when I say I want to write software for this thing that can help you get the most out of a psychedelic experience. I think the combination could be powerfully positive (or negative, of course).
On the negative side, I think the Vision Pro just needs a HYPER DEMON port.
Yes, ideally it would be generative and reactive. There was a throwaway idea at the end of Stephenson's "The Diamond Age" about altering the appearance of a woman based on your reaction, to make her more and more attractive to you... this idea generalizes in some interesting ways. Even better if there existed a small, affordable fMRI machine, or at least a way to measure alpha, delta, theta waves, etc., and maximize those.
At the same time, it's terrifying. I'm happy that they mentioned they're closing off that area from developer access. I don't want the app to know where I'm looking, or how I'm looking.
I simultaneously want Jeff Minter involved and not involved.
The experience would be wild, but he has a habit of releasing a lot of stuff on doomed systems: Atari Jaguar, PlayStation Vita, Nuon, for example. That said, when you have released like 100 games, eventually you will launch on platforms with no future.
But having Tempest come out of the walls... oh man!
I'm not skilled enough to do it myself, but I can't wait for the AR apps to help with physical skills, especially things like art.
I do ceramics on the wheel, and having an app that can show me techniques, highlight areas that are thin or weak, critique my technique in real time, and so on, would be something I would definitely pay for.
Vermillion is the best app in this space I think. I get to paint with oil paints, and have nothing to clean up and no major expenses like buying more paint.
And with a browser in place, I can watch painting tutorials while actually painting, and not worry that I don't own a specific brush or other tool.
It's awesome. It's on Steam for anything that supports SteamVR or PCVR, and on the Quest store for Meta devices.
Cooking too; it would be really neat if you could incorporate other sensors, like a little avatar telling you the pan is too hot or cold. I can definitely see things like a floating recipe while you do messy prep, so you don't have to wash and dry your phone to scroll to the next step.
Piloting planes! A Cessna is usually 50+ years old (since regulations apply to newer planes), the instruments are awfully analog, and a good share of crashes happen from flying into IFR conditions. Get a helmet: you'll still get the analog instruments, plus warnings when they go out of range, a 3D navigation map, a horizon, a circle around far-away planes that might fly towards you (TCAS), and a weather map. All of these correspond to routine causes of accidents today. All that in the $3,500 helmet that you also use for work. Plus you can look at your feet and see the terrain.
If you really can't wait, just go out and buy a Quest for 1/7th the price and start doing it today. It's a rounding error on what you will pay for Vision Pro and you can get an idea early of what it will be like.
(I probably would recommend waiting for Quest 3 in a couple of months though ... it'll be so much more like the Vision Pro ...)
What's wrong with the Quest Pro passthrough? It's been a while since I last tried it, but I remember being very impressed with the latency and fidelity of it. That's not to say it was a great experience; the software was really unstable, but I was very impressed with what it could do.
Honestly, for its primary purpose it's great. That is, you get complete situational awareness: you can see people and talk to them, pick up your coffee and drink from it, all just fine.
But the actual quality is super dependent on the environment. In poor lighting it completely goes to hell, turning into pixelated goo, and the frame rate drops to the point you can physically see it. It also experiences distortions, so around the edges of objects you get major waviness that changes as you move which can be very distracting.
This is why you'll hear completely contradictory reports of it from different people. If you happened to use it in a really well lit environment with nice uniform lighting you probably would have thought it was really good.
Yes... but they "can't wait". If they want it that badly, then such a tiny cost would be easily tolerable to get it earlier. Quest 3 passthrough is awesome by all accounts from the people who have tried it.
> video pass-through on the Quest 3 presented colors more accurately and offered an almost lifelike rendering of the real world. I was even able to use my phone while wearing the headset, something that often feels impossible on a Quest 2
It will certainly be fascinating to see how good it really is.
I watched a video compilation of VR headset mishaps (running into walls, hitting things, etc) and I think that most Vision Pro users will be a lot more reserved in their use. Apple's promotional collateral showed fairly static use cases: couch, standing near a desk, etc.
I'm keen on trying it or getting one, but I have zero interest in using it while cooking or similar.
I saw a video of some sort of motocross meet (I'm not familiar with the sport), and apparently the visors become completely covered with mud, so riders start the race with several clear plastic strips on the visor and tear one layer off as needed. (A comment was made that motocross meets are littered with the debris.) Probably wouldn't need to be so dramatic here.
As an alternative: would a headset-lite version without the AR work better here? Display the needed information on a nearby tablet or large TV while the user wears a headband only.
This is also done in auto racing, particularly endurance racing where you might pick up 4+ hours of bugs and debris on the windshield over a driver stint.
The windshield has a stack of huge stickers on it that get torn away by the pit crew during stops. Apparently you can get stacks up to about 20 layers.
Seems like a reasonable possibility for messy AR applications.
I assume some of the things you'd want to wipe off need cleaning agents that would also remove the anti-fingerprint coating. Which I'm assuming exists.
Do you have to tell it the style? Like if you're trying to do a portrait but it's in Picasso mode vs. Michelangelo mode, does your portrait get a funky face?
Eye tracking and hand gestures are the default way to interact with the device, but it also supports pointing and even normal cursor navigation via a paired trackpad or mouse in the interest of accessibility.
I'm assuming that in the long run the idea is that if you're developing for the platform you own one (or your company does). The labs are a bridge for the first batch of developers, not Apple's long-term plan.
It's more that ARKit is a shared component used by iOS and visionOS. When you develop a fully immersive scene, you have easy access to the necessary ARKit APIs to do things like attach objects to walls, etc.
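In the visionOS flavor of ARKit, that access looks roughly like the sketch below, based on my reading of the first SDK drop, so names may shift in later betas. It has to run inside an open immersive space, and the user has to grant permission:

    import ARKit

    // Run plane detection and log wall anchors, which is what you'd
    // attach content to.
    func watchForWalls() async throws {
        let session = ARKitSession()
        let planes = PlaneDetectionProvider(alignments: [.vertical])

        try await session.run([planes])

        // Detected planes arrive as an async sequence of anchor updates.
        for await update in planes.anchorUpdates {
            if update.anchor.classification == .wall {
                print("wall anchor:", update.anchor.id)
            }
        }
    }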
The top comments miss the big driver for the first couple of years: business applications that are remote and/or collaboration heavy. I think these can easily drive more demand than Apple can fulfill in the first few years. Such applications are very price-insensitive.
Also the military training applications are potentially enormous. The cost reduction in training with simulated environments is huge, there's already a lot of DOD investment in this. I don't know how quickly this will ramp up, though.
Yeah, the number of people comparing this to $1,000 gaming headsets you have to plug into a $2k computer, claiming it's way too expensive for the (current) market, really seem to be missing the point.
I don't think Apple spent any time at all talking about games during the launch. This is a general computing device, not simply an entertainment one. Nor is it a mass-market product like a Quest 2 (yet).
But yeah $3.5k is not a lot to spend for businesses either.
I could see every music producer wanting one as soon as some production software gets VRized. And some novel synth UIs get invented. $3k isn’t much in the Music gear world.
I'm surprised that none of these comments address the completely new interaction paradigm -- eye tracking + hand gesture tracking. Previous devices couldn't track accurately enough to provide high quality user interaction this way. All the ones I know about required holding controllers.
If this is successful it will be the first completely new interaction paradigm since the mouse. Of course so far interactions are mapped down to hover and click because existing software can then run, but the UI tools aren't limited to that at all.
I'd expect that in five years we'll have a completely different sense of interaction with content through Vision (or Vision like products).
(For those who want to quibble -- Touch is somewhat new relative to the mouse, and multi-touch gestures are truly new but very limited. But the design of apps and user interaction hasn't significantly evolved relative to the mouse-bound era.)
Eye tracking and hand gesture tracking are hardly new, even used in conjunction. The issue is, as pointed out, that none do it well (accurately) enough to be of quality use, though progress has certainly been made over a few decades. I've not used the new Apple device, but I'd be surprised if the accuracy is there yet. I'll be happy to be wrong, but I suspect there's still a fair amount of work needed to get to a point your average user is happy with. We already complain about touch devices like touch screens, which are steps above anything current gesture and eye tracking can do (to my knowledge). People still want physical high-precision keys, knobs, buttons, and physical devices like controllers. Tactile feedback is part of it, but accuracy/precision is the larger part.
The PS5's headset also does eye tracking really well. It still sucks compared to using controllers. The tactile feedback of the buttons, along with the very subtle activation motions, is a touch beyond gestures. (In particular, they work together: without the tactile feedback to know you have released a button, it is hard to re-engage it fast enough.)
Ha! I didn't actually do that pun on purpose. Does work, though.
With a controller, though, you don't have to lose contact with the buttons for activation. Even with a keyboard, where you do lose contact, you feel for the activation of the buttons to do so.
More immediately, though, you also have more fingers for rapid-fire activation. Some silly math: if you type upwards of 70 wpm at roughly six keystrokes per word (counting the space), that puts you at about 400 button presses per minute. Not sure how easily that would translate to gestures, as far as speed is concerned.
I'm very excited to see what people will be developing.
The feedback from early press has been incredibly positive from just first party applications, but third party apps are where a lot of things will come up that Apple themselves likely haven't thought of.
I meant once you become somewhat popular, it's got to be terrifying. Not necessarily that it's terrifying before you start. When you first start out, there's a billion things happening and this isn't one of them.
And an established lifestyle + employees as well. If it were to suddenly vanish, from experience, you can basically go bankrupt in a matter of months. Maybe it's because I've been there (not sherlocked, but a competitor undercut us with unsustainable margins; in fact, they went out of business shortly after we did) and rode that roller coaster. I wouldn't do that again, or go anywhere near that situation. Someone who hasn't been there would probably be much more willing to take that risk.
This seems like the same logic as when people say "well, if they tax earnings over $X more, then it kills the motivation to make that much." My dude, you don't make 0.10 * $X; is that really the barrier to you making more?
Same here: you're essentially saying you could come up with something massively popular but won't because Apple might sherlock it? The truth is that even when Apple sherlocks things, there still exists a market for the "Pro"/"Advanced" version:
Reminders: Things, Due, etc
Notes: Drafts, Evernote, Notion, etc
Maps: Google Maps, Waze, etc
Journal (or whatever they are calling it): Day One, etc
and the list goes on. I actually like it when Apple releases a new app in an existing category. It sets the bar and exposes even more people to the idea of such an app, and some of those people go looking for a more advanced version.
> a company called Karelia Software had a $29 search app named Watson with some better features like plug-ins for improved internet search. In 2002, Apple released Sherlock 3 with features similar to Watson, making Karelia’s app redundant and eventually forcing the company to close down.
Yeah, the employees of Karelia really thought the story more worthwhile than their livelihoods.
Still better than not making Watson at all. The point is that if you're "terrified" of being copied or sherlocked, you probably have no shot at it to begin with.
Is that actually something that is in any way relevant?
Stuff Apple sherlocks nowadays is either OS functionality that has been painfully missed forever, where third-party software was a workaround, or a baby version of the third-party software.
I think their baby versions – while plenty for many – may serve more to prove a market and whet the appetite for more than to truly make third-party software obsolete. Apple doesn't do niche nerdy stuff, but their new features will – and that's my hypothesis – often nudge people in the direction of wanting more.
Have you tried it? Not a great experience. Apps don't get access to the camera feed so they can't actually detect your keyboard, you have to do a manual registration process which is janky. But the bigger problem is the quality of the passthrough. Depth perception and latency are especially problematic for this application. Cool demo but not worth using in practice. Maybe it could be better on Vision Pro.
Well, the point was the suggested software already exists, not that it would get the approval of everybody.
I agree that formal training and teaching with a quality provider is likely to be optimal but any tool that adds to the available options for training is worth exploring, especially if somebody already has access to the platform the software is available for - paid instruction is not always an option for many.
We also need to be careful not to gatekeep music by demanding instrument players go a formal route of instruction.
I'm working on AI piano lessons right now (extremely rough landing page at https://trebel.la/ while I work on the core tech). I am drooling at the idea of integrating with this thing one day...
My teacher would literally spend one hour with me on one page of sheet music to ensure I could play the notes evenly with the correct dynamics. He could hear the tiniest hesitation I had, and then come up with specific methods to help me practice that part. That is why I don't think any such software, AI or not, actually knows how to teach piano, and I doubt it ever will.
I tend to agree with you, as I have spent many years in lessons myself and taught for a couple. But a) real lessons are expensive and difficult to access for a lot of people, and b) even for someone who has a teacher, most practice time is self-directed. I think there's a real opportunity to make practice much more efficient, and also provide a better experience than currently exists for those who can't access a human teacher.
Also, multimodal LLMs are improving so quickly that I wouldn't be that surprised if they were able to do what you're describing in a couple of years. But they are definitely not there yet.
You may want to look at Sight Reading Factory. They do a pretty good job of letting you dynamically generate unlimited sheet music matched to your pianistic capability.
I have been taking piano lessons for almost a decade now (very committed), and I can tell you most piano teachers are not going to care about this kind of gimmick. Most teachers don't even want to bother with online lessons over Zoom or whatever: you either go to their home for lessons on a real acoustic piano or you don't have lessons. Only the most eccentric teacher is going to spend $3.5K on this toy and expect their students to do so as well.
Music teachers don't dislike Zoom because they're reluctant to use tech in the abstract, they avoid it because trying to teach a music lesson with a bad mic, a half-second delay, and a low-FOV camera pointed haphazardly is a nightmare.
Source: many family members who are music teachers, spanning multiple generations and a wide range of technical prowess.
This would still have tactile feedback because you would still be playing a piano. Imagine things like scales being overlaid in a different color where your fingers are supposed to go. Maybe if you hit the wrong key it would flash red. There's all kinds of things you could do.
Ah, so you want this for lessons on a real piano? That seems oddly specific. Far cheaper to get one that has midi capabilities and use any of a number of great apps out there.
I grant that if you can afford a full size piano, you probably don't balk at this thing, though.
I think it's very important for people to get out of the HN bubble when discussing this. $3,500 is just way too expensive for mass customer adoption. Meta is struggling to get customers to adopt the Oculus hardware when it's an order of magnitude less expensive. I can't see why _any_ developer would sink tons of money into building custom apps for this platform when there will be fewer than a few hundred thousand users next year.
I'm an AR skeptic but I appreciated Benedict Evans's take on Apple's strategy here:
> Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware — Apple is trying to catalyse an ecosystem while we wait for the right price. So the Vision is a device pulled forward from years into the future, at a price that reflects that. It’s as though Apple had decided to sell the 2007 iPhone in 2002 — what would the price have been?
Yeah this is clearly the right take. One proven strategy for creating a new category is to start with the price-insensitive high end consumers with a luxury product, and lower the price as you gain economy of scale.
Same reason Tesla started with the Roadster instead of trying to build a mass-market sedan first.
To be fair, the original iPhone was like this as well - expensive and unsubsidized, locked to a specific carrier, small production, took many years to develop, was very much a future shock device upon its unveiling, etc. The 3g was the one for the masses.
Because this is a first iteration specifically made for developers to start getting their apps running, so that when the less expensive consumer model is released a couple of years down the line, there's already an app ecosystem ready.
It's a strategic release aimed at devs and early adopters, the consumer level device will come in the future at a complete different price point. That's when mass customer adoption might happen.
Also, Meta is struggling partly because their device is a novelty without much great software. See a pattern?
"so that when the consumer less expensive model will be released a couple of years down the line there's already an app ecosystem ready."
....
You think that developers are going to create a thriving ecosystem of apps when there's no market until several years down the line? Bit of a chicken-and-egg problem, isn't it?
Tens of hundreds of Innovator-category developers will build surprising, delightful apps for the hundreds of thousands of Innovator-category Vision Pro first-year buyers.
Early-Adopter-category developers and buyers will wait to see what the Innovators discover.
A well established dynamic in business is the “flywheel” or “virtuous cycle”. Yes, there is a chicken and egg problem here. That doesn’t in any way make it impossible to get the flywheel going. You just need to invest time, money, and energy to support/subsidize each part of the cycle.
The hardware is probably being sold near cost, thus speeding up that side of the adoption cycle. And courting developers will allow them to build up the app ecosystem.
Did they say thriving? They will be the initial foray. There will be enough of a market for early creators (like Apple TV, which had small numbers) but reduced competition. The early creators will have some return, reputation, and experience to carry through for when the audience grows.
It won't be the opening of floodgates, but I think it will be interesting enough.
As a developer who's worked with the HoloLens: it's not very good and is extremely underwhelming. The tech is cool, but the viewport is so small, and the hand tracking is extremely tiring. It could never work as a consumer product in its current state.
The Vision Pro seems more focused on the end-user experience, so hopefully that will help Apple get their foot in the door with consumers. However, I've never used it, so I can't say for sure. The company I work at will most definitely be purchasing them as soon as we can, though; very excited to be either extremely underwhelmed or extremely impressed.
Microsoft’s ability to execute at anything hardware other than Xbox is very questionable.
I don’t even know if it is ability, rather than unwillingness to invest the billions of dollars and 10+ years it would take to get something like this off the ground.
Meta's attempts are currently limited because the sticky use cases seem to be fitness and a little bit of gaming. When you have a device that is viable for some level of work, it's a somewhat easier sell. Apple will be testing this for work (and hybrid remote work) with staff and will have a strong idea as to how viable this is going forward.
Except people have been crowing about using VR headsets for work for years now too and it still hasn’t taken off. This is one of those areas where I think techies are just totally out of touch with what the market wants.
The comments I've mostly seen are people saying that when the pixel density gets there, they'll jump on it, or at least be very keen to try it. It hasn't taken off because we have yet to reach that point. 2024 could be where it starts.
I think the incentive is there for employers to favour these (fit more staff into physical offices; suits remote work well), and that will push it into the general white-collar world.
> Meta is struggling to get customers to adopt the Oculus hardware when it's an order of magnitude less expensive
I rarely use my Rift S these days - not because the hardware isn't good - but because the software experience is an absolute dog's anus. I wanted to play Beat Saber the other day. Turned on the PC. 25 minutes later, ready to open Steam. Whoops, now the Oculus app says it's out of date. Another 10 minutes of faff, because what it actually meant was "you need to migrate to a Meta account". Then I had to sit through the tutorials again. Then Steam decided Beat Saber needed updating and took 45 minutes to download 800MB on gigabit fibre. Finally, after just about 90 minutes, I could play Beat Saber.
$3500 to not sit through that absolute omnishambles of a torture ever again? Sign me up.
Quest headsets are indeed cheap, but perhaps they sacrifice too much on experience quality. People gotta want something before it can get cheap (not my words).
Every time the Quest 2 comes up on HN, people whine about it basically playing PS3 games or being a glorified mobile arcade system.
I personally love it; RE4 VR and the month I spent VR boxing every night were some of the most fun I've had gaming recently. But there are limits to that sort of thing, especially which games get ported and VR'd. Apple doesn't even need a suite of A+ games to get people to buy it; I don't remember them showing any games in the launch.
I get why Apple pushed it up higher in the market. The first two years will be for developers and very early adopters.
This is like buying an expensive monitor/TV/casual gaming device baked into one.
Hell there’s people who buy $2000 TVs, even though most people buy the $500 ones. This is way more capable than just a new TV.
Wow, that's a really prophetic take on things. I've mostly written off Palmer Luckey these days, but he was bang on the money there.
It's interesting, though, how counter it runs to John Carmack's philosophy, where mass-market accessibility is everything. I wonder if, looking back, people will rethink whether his contribution actually held things back.
It certainly seems like Meta is switching gears a bit with the Quest 3 going to market at $500. It's going to be phenomenal value for what it is at that price. But it's going to leave behind a significant segment of people for whom $299 was a palatable indulgence buy or gift for their children, while $500 is not.
This isn't a mass consumer device. $3,500 is what Apple's charging the folks with great ideas who want to dictate what the future of "spatial computing" looks like. This is for early adopters pretty much exclusively.
The rest of us will get the Apple Vision SE in 2028, when the territory's been mapped out.
It's also possible that Meta is struggling because a $500 VR headset isn't a broadly compelling experience, given all the compromises they had to make to get it down to 500 bucks.
"Smartphones" weren't the most dominant phone category until after the iPhone.
It's really hard to know until units start shipping though.
To be fair, the Oculus experience is pretty bad. Mine only has three eye-width settings, none of which match my face, so my vision is always blurry. And whenever I use hand tracking, it only picks up my “clicks” about 20% of the time. It’s not surprising to me that no one wants to use it.
I mean, it's not exactly a Gaussian blur. It's more like an RGB shift, mixed with the kind of effect you get when you defocus your eyes ever so slightly.
The presumption, from an investing standpoint anyway, is that working on this version will give you a first mover advantage for a V2 in two years that will have slightly upgraded chips, similar display, and cost half as much.
I don't really understand this assessment. Half as much is $1,750. That's really not much compared to what people are willing to pay for Apple devices. Basically the price point of an iPad Pro, and I think the potential of the Vision Pros is much higher than the potential of the iPad. It's just a matter of whether they can realize some commercial use-cases like with the iPad.
The Vision Pro is marketed as a device with the potential to replace or significantly augment a laptop and/or TV for a lot of users (I understand there are presently some asterisks needed there). And that's assuming no other novel, high value uses emerge.
Apple sells something like 7 million MacBooks per quarter. Many of them are well more than $2000. I don't think a $2,000 price point is a crazy barrier if the device is able to live up to what it could be.
I don't see this as a big problem. This is clearly aimed at developers/investors in tech right now, and in this market the $3.5k price tag is actually pretty low, considering this is an actual product and not even a prototype.
The potential, thanks to the higher resolution, is huge. If you've ever tried to do any non-gaming task on an Oculus, you realize how important resolution is for text. Forget games; this is an enabler for a disruption in user interfaces.
My biggest gripe, be it with Meta or Apple, is the vendor lock-in. I'm not going to touch anything made by Meta. But Apple is not far behind.
Oculus may be an order of magnitude cheaper, but I bet it's an order of magnitude shittier too. Maybe this is the price/quality required to actually get people interested in mixed reality. It was a similar thing with the iPhone: most people were happy with $100 dumbphones. Eventually the utility became worth the investment.
You don’t need a new codebase to have a visionOS app, you just need to tick a checkbox on an existing iOS project. Deeper integration will take longer but that checkbox will get you 90% of the way there as long as the project was built using UIKit or SwiftUI.
> Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework.
From what I understand, OpenXR is a common interface that needs to be implemented by the headset manufacturer. If Apple doesn't do this, OpenXR apps won't work.
Before OpenXR every headset had its own interface. Valve and descendants have SteamVR, Oculus has its own runtime, Microsoft and descendants have Windows Mixed Reality.
Games/Apps targeting these platforms used to have to implement each one of these interfaces separately if they wanted to support that family of headsets. OpenXR shifts the responsibility onto the headset manufacturer, from my limited understanding.
edit: I don't understand why this is being downvoted? If there is a technical detail I got wrong, I'd welcome an explanation rather than Reddit-esque downvotes.
Examples remind me of when Encarta was the future. In hindsight, yeah, encyclopedias are digital now (losing the plural in the process, unless we count language versions) and optical data distribution media were ubiquitous for a while, but Encarta wasn't exactly the future of Microsoft.
I remember the hype around ARKit and how it was presented as a general-purpose UX pattern, when in practice I saw it get popular only in specialized areas.
I’m curious how spatial experiences will fare.
Also, I don’t like that the gesture tracking in the dj app video demo seems laggy, especially when moving the fader.
> I remember the hype around ARKit and how it was presented as a general-purpose UX pattern, when in practice I saw it get popular only in specialized areas.
Once Vision dropped it was clear to me that ARkit was always meant to be used with this device, and that most of the iPhone/iPad apps that use it are not the final intended use case. I don't think you can judge Apple's AR success based on the existing hardware.
In retrospect, several things they've released were in preparation or fallout from the xrOS project.
• AR was an odd proposition for the iPhone, and lidar was overkill for its gimmick apps, but it was obviously preparing devs and apps for "spatial computing".
• iPadOS got mouse support, but only with a clunky fat mitten-pointer. This UI makes more sense when it's limited by eye tracking precision.
• Continuity features. Controlling a nearby iPad was a nice-to-have, but for the Vision Pro, which doesn't have its own I/O, seamless integration with nearby devices is a must-have to get work done.
From people who've used the device, the gesture and eye tracking is flawless.
But how those SDK events get translated into UI control movement is obviously going to depend on each developer. And I suspect the variance in quality will be much greater than with iOS, given everything is in 3D.
> But how those SDK events get translated into UI control movement is obviously going to depend on each developer. And I suspect the variance in quality will be much greater than with iOS, given everything is in 3D.
Why is the variance any different? You're provided a tuple representing where the user is looking, similar to the location of their fingers, and you're provided a hook for when the user clicks (with their fingers) in various ways.
This all relies on Apple accurately capturing those inputs at the device and OS level and providing them to developers, which they've apparently done a very good job at. The abstractions on top of this don't feel very different from those on any other platform.
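Right. For what it's worth, here is a sketch of what that abstraction looks like in the shipped SDK (the RealityKit entity is a placeholder): the gaze-plus-pinch arrives as an ordinary tap value, with no gaze ray attached.

    import SwiftUI
    import RealityKit

    struct TapDemo: View {
        var body: some View {
            RealityView { content in
                // A placeholder entity set up to receive input.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
                sphere.components.set(InputTargetComponent())
                sphere.generateCollisionShapes(recursive: false)
                content.add(sphere)
            }
            // The system resolves gaze + pinch into a plain tap; the app
            // never learns where the user was looking before the pinch.
            .gesture(
                SpatialTapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        print("tapped:", value.entity.name)
                    }
            )
        }
    }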
I built a number of iOS apps in the early days of the SDK and I remember it took a while for the community to learn the best techniques to efficiently render complex tables in scroll views.
The same thing will happen here, except that you have the additional issue of learning how to efficiently render 3D assets in response to user events. After all, most developers do not come from a gaming/VR background.
In VR gaming there's already a huge variance in UX. Many developers treat VR as just a mouse look + gamepad. Only the top few games design bespoke interactions where you move hands naturally interacting with the world, instead of pointing at things and pressing A/B buttons.
I suspect the majority of Vision Pro apps will be mostly iOS apps with iOS gestures, maybe with one or two 3D gimmicks.
Apple expects to sell 150k units next year. I'm sure there will be a couple of popular apps, but beyond those couple, what is the financial incentive to develop for it?
Especially if you're first, or close to first, with a killer app with a high attach rate. I read that something like 50% of PC VR headset owners buy Beat Saber, contrast that with attach rates measured with scientific notation like 1e-3 for most mobile apps that aren't from huge megacorps. If you can make the Beat Saber equivalent for Apple's platform it could be lucrative indeed.
Right, my point was, there will be a couple of apps that everyone will get. Mobile phones have anchored app prices near free. Maybe some great app will get to charge half the users $10. That's less than $1M if they sell 150k devices. I hope you don't need a big team to build whatever it is and are sure your app is the successful one.
People who spend $3500 on a headset are industrial/professional users who will make more than $3500 in value for it. That is why Microsoft HoloLens, and a lot of commercial AR products you've never heard of, cost even more.
They accomplished that with the iPhone, which is small, sleek, metallic....a status symbol like jewellery or a fine watch. It's not imposing, it doesn't overshadow your outfit. AVP is the complete opposite of all that.
> You might want to read similar comments that were made about the iPhone
What comments do you want to share from 2007 that will be relevant? The iPhone was released without an App Store and with no intention of supporting 3rd-party native apps until developers demanded it. The App Store was released after millions of units had already shipped. A speculative platform of uncertain consumer interest, shipping a tenth of the units in its first year compared to the iPhone, cannot be compared to a once-in-a-generation phenomenon where the dev platform was pulled out of the vendor by demand.
It will be interesting to see how it turns out, but the use cases Apple has shown so far are kind of laughable: photos, movies, meditation apps, FaceTime, 2D games, etc. The only tangible benefit was "a big screen." There is a better chance of these goggles ending up in a drawer than an iPhone, which was already used compulsively throughout the day.
> and the Apple Watch
It's a popular product but are there many developer success stories on the Watch? AFAIK a handful of fitness/media apps are popular but doesn't it otherwise suck as an app platform?
If anyone is familiar with the Cyberpunk 2077 universe, it seems reminiscent of "Brain Dances." I could see an app that can play and interact with pre-recorded footage and provide the ability to view that experience in a 3D space.
If they sell 150,000 units somehow and it only has 2 apps on the market then you could expect an extremely high percentage of those people to buy your app.
More than a couple of apps. I'd guess there are a handful of viable opportunities in each of many categories - music, productivity, particular game genre, etc. Many developers will port existing iOS/Mac apps, even if just to work in the 2D iPad mode.
The buyers of the first 150k units are overwhelmingly people who value Vision Pro as equipment and people who value Vision Pro as an Explorer’s Adventure. Both will be good customers of artisanal apps.
I heard closer to 600K in sales. But I do think, regardless of the actual number, Apple doesn't plan to sell millions of these. At least not this iteration of the device. Which is something I've seen parroted quite a bit.
It's weird how, on one hand, Apple is pushing the envelope with graphics rendering, but they have been funneling absolutely everyone into Swift for years. They recently released metal-cpp to help developers with existing C++ codebases bring them to Apple platforms by dropping in header files that bridge to Objective-C; it's hacky and unwieldy but kind of works, I guess.
It's especially weird to me considering how nice Metal is to work with, but neither Swift nor some hacky ObjC/C++ approach seems suited to the task of pushing the envelope of graphics rendering to take full advantage of this hardware. Am I off base here?
Honestly, I hope this takes off and is really successful, and kicks off enough investment in competitors that I can hack on cool stuff in another software ecosystem.
Metal is a modern - and in many respects pretty good - rendering API that allows you, for example, to declare rendering pipelines/dispatches fully, without having to call multiple "change_state"s in your CPU code (similar to DX12, Vulkan, etc.). This means the GPU can run "flat out" and a driver can be pretty lean, introducing little overhead (if done well enough).
In more advanced cases, when many API commands need to be encoded depending on user input, and if batching is not enough, you can use things like MTLParallelRenderCommandEncoder and multiple queues to take advantage of multiple threads submitting such commands.
This all means that "pushing the envelope" in terms of rendering is absolutely possible despite the possibly small overhead added by Obj-C, which also brings some advantages, as it often avoids having to call "free" on GPU memory, since MTLBuffers are reference-counted.
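To illustrate the "declare everything up front" point for anyone who hasn't touched Metal: a minimal sketch in Swift, with the shader function names (vertexMain, fragmentMain) assumed to exist in the app's default library.

    import Metal

    // The entire pipeline state is declared and compiled once, up front,
    // instead of mutating GPU state call-by-call inside the draw loop.
    func makePipeline(device: MTLDevice) throws -> MTLRenderPipelineState {
        guard let library = device.makeDefaultLibrary() else {
            fatalError("no default Metal library in this bundle")
        }

        let desc = MTLRenderPipelineDescriptor()
        desc.vertexFunction = library.makeFunction(name: "vertexMain")
        desc.fragmentFunction = library.makeFunction(name: "fragmentMain")
        desc.colorAttachments[0].pixelFormat = .bgra8Unorm

        // Validation happens here, once; draw calls just bind this state.
        return try device.makeRenderPipelineState(descriptor: desc)
    }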
I think Apple forcing Metal is a mistake. They don't have a moat here - Nvidia has one with CUDA. All those CV researchers mostly use CUDA or OpenCL, and they are not going to learn Metal because they probably aren't even in the Apple ecosystem. Apple dropped the ball on OpenCL.
There are plenty of useful algorithms and GPU computing frameworks, but barely anything available with a Metal backend. Libraries like OpenSYCL, OpenCV, Open3D, and PCL don't have any Apple GPU backend.
Apple merely added these libraries to an already existing system to better link Swift to C++/Objective-C++. Objective-C/C++ and C/C++ work well together and always have; it is pretty much required, as nearly all game engines (even Unity/Unreal) are C++ underneath, just like the apps built on them.
Apple did recently add better support for mixing Objective-C++/C++ with Swift, which was more difficult before. [1][2][3] Swift to C++ took more leaps prior to recent updates, though it was still possible. Really, it just handles the Swift to Objective-C++ to C++ link better.
Objective-C++ has been used since the iPhone's inception to link C++ game engines to Objective-C. All you have to do is compile with g++ or similar; typically, files with .m are Objective-C only and files with .mm are Objective-C++. Objective-C/C++ interop with C/C++ actually works really well, and every single game engine on the store (so pretty much every game) uses Objective-C++ to link them at a minimum.
We used lots of C++, Objective-C, and even Objective-C++ (g++ mixing Objective-C and C++, connecting to a C++ engine) in a custom game engine that shipped lots of titles on mobile. It is powerful and fun. People have been mixing assembly, C/C++, and Objective-C/C++ since the 80s. These technologies, so close to the metal and machine code, actually work better together than most "modern" frameworks/systems.
A sample early game engine using this link is Oolong, from 2008 or so, which was used in an early Quake port; many game studios had something like this in their custom engines to get them onto the iPhone/iPad/etc. [4]
Fun fact: Objective-C was created before C++. Objective-C goes all the way back to 1981, but officially 1984. [5] C++ goes all the way back to 1982, but officially 1985. [6]
It's interesting how just switching the positioning of headsets from use cases like Horizon Worlds / Second Life games to practical uses like DJing and watching movies immediately makes the device seem so much more useful, and something I can see myself buying.
I'm hyped, ngl. I'm adding SwiftUI and RealityKit to the list of things to learn this 2023. Better be prepared for 2024 when this thing goes mainstream.
Thanks, all, for sharing your opinions on what's great/bad about Apple or VR in general.
Anyone here with actual experience with the visionOS SDK and its APIs able to give some insight into how it compares to other platforms?
I imagine there's a world of interesting details and differences, considering that the device is designed around a single, very powerful compute specification...
I'm really interested to see how high-level the support for "spatial computing" is in the APIs they are offering.
This whole thread is full of people wildly speculating about applications which, without amazing platform-level support, would each be a small miracle of state-of-the-art computer vision.
Can they offer full 3D image detection/segmentation to give you a full spatial model of the user's environment? If not, a lot of people are going to be sorely disappointed by the amount of computer vision expertise required to get even the basic apps they are envisioning to work. On the other hand, if Apple does offer this type of platform support, it will be quite revolutionary.
Meta has offered a range of spatial computing APIs for a while but they have been slow to take off. The Quest 3 should really push this along though. It will be fascinating to watch the two platforms compete on this front.
So far Meta has published a whole bunch of spatial APIs, including scene detection, spatial anchors, plane detection, etc. [0] They are far ahead of Apple in many of these respects, but what they haven't had is a device that made this attractive to actually develop for in any meaningful way. The Quest Pro lacks a depth sensor and has too small a market presence to attract a lot of developers, while the Quest 2 has ugly, black-and-white, low-resolution passthrough.
So Quest 3 will be the first spatial computing device the world has ever had access to that is available at mass market price (<$500) with depth sensor and high quality pass through. It will finally be worthwhile for developers to build mixed reality apps targeted to regular consumers.
Based on what you mentioned, all of those are also available in ARKit or the Vision API: plane detection (vertical and horizontal), anchors (both local and geolocated), 3D scene reconstruction, 3D object reconstruction, custom planar markers, QR codes and barcodes, 3D human pose skeleton, 3D hand skeleton, face landmark mesh, world tracking (SLAM), and text detection. I haven't checked Meta's APIs, but they don't look to me that far ahead of Apple.
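As one concrete example, the hand-skeleton detection in that list is already exposed through the Vision framework on existing devices (in 2D with per-joint confidence; the headset's ARKit adds 3D joints). A minimal sketch for a single frame, where the CGImage input is assumed:

    import CoreGraphics
    import Vision

    // Detect hand joints in one camera frame.
    func detectHands(in frame: CGImage) throws {
        let request = VNDetectHumanHandPoseRequest()
        request.maximumHandCount = 2

        let handler = VNImageRequestHandler(cgImage: frame, options: [:])
        try handler.perform([request])

        for hand in request.results ?? [] {
            // Normalized 2D joint positions with per-joint confidence.
            let joints = try hand.recognizedPoints(.all)
            print("hand with \(joints.count) tracked joints")
        }
    }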
I feel very vulnerable with a device like that on my head. I've been playing with VR hardware in various iterations since the mid 80's and I haven't seen anything yet that managed to get me to overcome that feeling. As well as nausea due to lag. I wonder how this one will do but at the current price point it just isn't all that interesting, it also reminds me of the way Google Glass was introduced and hyped and what eventually became of it. But Apple tends to have a much better product strategy than Google.
They talked about this briefly on the WVFRM podcast [1]. Apparently the pass-through is significantly better than other headsets they’ve tried to the point that they felt comfortable walking around and overcoming that sense of vulnerability you describe. Still, $3500 is quite a lot.
My instinct is that this entire product is basically a dev kit for whatever Apple wants to release in 10-15 years. Get something high quality in peoples’ hands and hope the use case bucket trickles full.
This first version of the visionOS product line is not the one that will be the 'iPhone' moment.
We are looking at the device that will be ready for these apps in the next 5-10 years, by which time the device will be smaller than this first version.
Probably going to be called 'Apple Vision' in the form of AR glasses.
If someone had actual information of that type from Apple and released it on a public forum, they would be hearing from Apple's legal team in 5... 4... 3...
Otherwise, it just seems like a confirmation of their reading of the tea leaves.
I think both of you need to look a lot more at the number of patents related to Apple's AR glasses and its related vision products, since they give lots of obvious clues about where Apple will take its vision-based products.
All publicly available for everyone to see. [0] No inside information needed at all.
The tea leaves, breadcrumbs, and obvious clues get as close to the truth as possible about the next phase of Apple's AR/XR products, from a source that is as public as possible.
The form factor we have now is basically identical to what headsets looked like in the 80s and 90s [0]. They are barely smaller than they were back then. The problem is not the computers, it's the optical pathway. You can't shrink optics.
I haven't seen anybody suggesting the occlusion issue is or can be solved, along with the huge light leak that's native to normal glasses. I am sure apple is trying out all the angles on glasses AR, but I don't think you'll be able to get anything like the current Vision experience on them without some new physics.
> This first version of the visionOS product line is not the one that will be the 'iPhone' moment.
You mean it’s not a device that’s significantly more expensive and less capable from a feature perspective than the competition at launch (aka the 1st gen iPhone)?
If someone can crack the VR/AR code, it will probably be Apple. Not saying they will though.
It bodes well that Kara Swisher seems to really like the device; she's one who has tried all the headsets and been fairly skeptical. Using hands as the primary control seems like a pretty big breakthrough. The default of mostly passing through the surrounding environment sounds like it also addresses the nausea.
You're right, at this price point it's definitely a gen-1/dev device. Gen 3 is where it'll likely get interesting, assuming some killer applications are created.
The form factor of the Quest Pro helps with that quite a bit, where you're essentially wearing it like a hat, and the sides and bottom of the device don't contact your face or seal out all the light. There are notorious stories of people punching through walls or TVs playing VR games, but being able to see the room in the periphery is both less distracting than you'd think and gives you enough spatial awareness to not really have those issues.
> There are notorious stories of people punching through walls
Bahaha, reminds me of the time when I first got the PSVR and my friend comes over. We're playing an FPS demo that came with it (PS Worlds? I think). It's this scene where you're in the passenger seat of the car, shooting up people on motorcycles that are shooting at you. Anyway, I'm standing off to the side watching the TV when out of nowhere, my friend punches me in the face.
I happened to be standing exactly where the NPC driver was sitting in the VR world and my friend wanted to try punching the character.
I suspect there's going to be a whole new class of disadvantaged / second class citizens in the future who belong to the minority that can't tolerate wearing these type of devices.
Having said that, I haven't met many people who get nausea from wearing sunglasses, so at the theoretical limit there is some level of quality that seems almost universally accepted.
> Apple Vision Pro has an all-new App Store where users can discover amazing apps and content.
As an ex Daydream (R.I.P.) user, the app store screenshots look oddly familiar. Maybe Google can resurrect Daydream (probably under a new name, or just call it Daydream Pro?) and reuse some of their sunk investment as software for an actual VR/AR headset rather than a thingie where you have to insert your phone?
Oh wow, I had assumed that night vision goggles would already cover the night vision idea more than fine, and for less than this costs. Night vision gear is expensive, though. Impressively so.
The drone things, amusingly, are already price competitive. At least once you factor in the cost of the drone.
>- Drone/robot/telepresence HUD. It would be incredible with stereoscopic cameras
That's interesting. I could see this as part of the security detail for world leaders and such: some extra guys with virtual eyes spread all around who can watch people around corners.
Has the headset been confirmed to work in the dark? It would require Apple to put flood lights on the headset to light up the environment so that the cameras can see.
It has both LiDAR and a TrueDepth camera in front. The TrueDepth camera has IR flood lights and IR cameras, but the depth map it provides is only 640x480 at 30fps. Unfortunately, the raw IR camera video stream is not available to 3rd parties, so you could only render this VGA depth map heatmap-style.
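For the curious, here's a minimal sketch of what that heatmap-style rendering could look like, assuming the existing iOS ARKit depth API (ARFrame.sceneDepth on LiDAR devices) carries over to the headset. DepthHeatmapRenderer and the 5 m color ramp are made up for illustration, and writing the colors into an actual texture is omitted:

    import ARKit
    import UIKit

    // Sketch only: assumes depth arrives the way ARKit delivers it on
    // LiDAR iPhones/iPads, as a CVPixelBuffer of Float32 meters.
    final class DepthHeatmapRenderer: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            let config = ARWorldTrackingConfiguration()
            // Request the low-resolution depth map alongside the color frame.
            if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
                config.frameSemantics.insert(.sceneDepth)
            }
            session.delegate = self
            session.run(config)
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            guard let depth = frame.sceneDepth?.depthMap else { return }
            CVPixelBufferLockBaseAddress(depth, .readOnly)
            defer { CVPixelBufferUnlockBaseAddress(depth, .readOnly) }

            let width = CVPixelBufferGetWidth(depth)
            let height = CVPixelBufferGetHeight(depth)
            let rowStride = CVPixelBufferGetBytesPerRow(depth) / MemoryLayout<Float32>.stride
            let base = CVPixelBufferGetBaseAddress(depth)!
                .assumingMemoryBound(to: Float32.self)

            // Map each sample (meters) to a hue: near = red, far = blue.
            for y in 0..<height {
                for x in 0..<width {
                    let meters = base[y * rowStride + x]
                    let t = min(max(meters, 0), 5) / 5   // clamp to 0...5 m
                    _ = UIColor(hue: CGFloat(t) * 0.66,
                                saturation: 1, brightness: 1, alpha: 1)
                }
            }
        }
    }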
Despite people always posing this as a "chicken and egg" problem, in reality there is never going to be a fledgling developer ecosystem without first getting users onto the platform. You can see the same in all of Apple's own past product launches, iPhone included. Unless they can figure out a way to get the Vision Pro into the hands of millions of active users, I doubt they will get big-time developer effort behind it.
If you are an early developer on a platform, you get an outsized share of users, which can be a real advantage. We developed for Apple TV when it was first opened to general developers and were able to rank well in a key category by adapting a previous app. Admittedly, that was a $x00 product rather than $3500, but I think they'll get enough of an audience (developers, the wealthy, enthusiasts, etc.) for developers to get in front of. Maybe not worth it for AAA titles, but being one of the only solid apps in a category, or simple games in a genre, could pay off.
They support all iOS and iPadOS apps - that's enough to be useful initially for the use cases they were presenting. All a developer will probably have to do is tick one box. Same with Unity games. Only VR-style games/apps will need more integration, but since it's all based on ARKit, ARKit apps or Unity apps using ARFoundation won't require much work to adapt for Vision Pro.
I may have missed this, but did they announce anywhere the configuration required to develop applications for this platform?
I'd really like to try out their SDK, but without knowing the minimum requirements, I don't really dare invest in a Mac (knowing that I won't be able to add RAM after purchase, for example).
You can develop for the platform on any Mac, including their lowest end M2 products like the Air or Mini.
The device itself is running the base M2, so there's no reason you can't write code using a machine with the exact same chip. I would recommend at least 16GB of RAM, with 32GB being more future-proof. Rumor is the headset itself has 8GB on board.
Exposure therapy would be huge. Existing exposure therapy apps have been built by people who don't have phobias themselves, and they've shot themselves in the foot by advertising with big, vivid pictures of phobia scenarios, guaranteeing that their target audience avoids them.
Whoever releases the first truly compelling AVP app is going to make a lot of money. They could even charge $50 one-time or $50/yr. If it’s compelling enough the “I just spent $4k” demographic is going to see the price as a mere triviality.
This is different territory compared to the marginally useful 2D apps that millions of developers produce.
To make a believable virtual/3D experience requires enormous investments if you don't want it to look like a video game from 25 years ago. It's quite telling that Apple rushed through the 3rd party demo session at the event, as every example looked terrible. As terrible and useless as 99% of AR apps produced thus far.
I would suspect that only giants like Disney could produce a "killer app" experience that wows people and makes this worth the money and discomfort.
And on top of that, yes, lots of developers will produce what are basically 2D apps where they take an existing app and project it in space, possibly showing multiple sub screens. Slightly more useful if done well, at best. Financially likely the only realistic scenario for most developers but it also cheapens the appeal of this device.
Not to mention that you need to apply way more real world scrutiny to possible scenarios. As an example, somebody suggested a cooking app.
To solve what problem, exactly? Cooking is instructions and timers. Yes, with this device one can project more information, but what information, exactly? Cooking requires free movement: grabbing items, throwing things in the trash. Yet here you are with a thing strapped to your head with low visibility. Possibly wired, since you need to think about battery life during a long cooking session. Maybe you're cooking with another person, as couples sometimes do; then what? Countless real downsides, merely theoretical upsides.
Don't get me wrong: will the tech elite embrace it, and will a few influencers rave about this cooking app on their channels? Sure. But that's not my point. My point is that you need convincing apps with tremendous added value to offset the price and discomfort of this device.
IMO, a full-blown cooking app is just a distraction. It could be the "travel app" of this platform, and is safer for any sane developer to ignore.
A developer doesn't need to make a killer app, just supplementary apps that people will use in this environment. And yes, some people will just make money making sure their 2D iOS app works in this environment, and a subset of that will convert it appropriately to work as a volume with depth.
In this environment, Apple handles the difficult spatial stuff and many developers will just build planes or volumes within that, and get solid mileage from that.
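Here's a minimal sketch of that planes-vs-volumes split, going by the SwiftUI/RealityKit APIs Apple has shown for visionOS (WindowGroup, a volumetric window style, RealityView); the app name and window IDs are hypothetical:

    import SwiftUI
    import RealityKit

    @main
    struct PlanesAndVolumesApp: App {
        var body: some Scene {
            // A flat "plane": ordinary SwiftUI, essentially the iPad app
            // projected into the room.
            WindowGroup(id: "plane") {
                Text("Existing 2D UI carries over here")
            }

            // A bounded "volume" the user can place in their space;
            // Apple's compositor handles the hard spatial work.
            WindowGroup(id: "volume") {
                RealityView { content in
                    let sphere = ModelEntity(
                        mesh: .generateSphere(radius: 0.1),
                        materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
                    content.add(sphere)
                }
            }
            .windowStyle(.volumetric)
        }
    }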
As anyone who's arrived early in an ecosystem knows, you first get used to the basic stuff, try a few free gimmicks, and then you're looking for apps to try and buy. Anyone in the marketplace at that point has far easier opportunities. Leave expensive-to-develop apps like VR courtside experiences to the NBA and whoever else - those will sell the device. Smallfry developers will be fine otherwise.
I want more of those downvotes. VR developers are an especially angry bunch: they believe VR is awesome and we're just too ignorant to understand it. We get it just fine. VR sucks.
I don't know about VR developers being angry. I'm a VR/AR skeptic too, and I'm surprised at the seemingly overwhelming optimism about Vision Pro from commenters. I think maybe it's the result of a combination of people who enjoy Apple products (I'm one) and people who want VR/AR to succeed.
I agree that there are use cases that, when you think them through a bit, just aren't as practical as one thinks, such as the cooking scenario. Walking through scenarios in your head and thinking through the logistical issues of using such a device is a good way to ground yourself amid some of the VR hype, in my opinion.
https://www.economist.com/business/2023/06/06/apples-vision-...