A lot is still becoming clear now that the SDK is here, but as usual there are fine-print gotchas. There are way more restrictions in the MR Shared Space mode, so whatever you export (Unity/WebXR) renders only through RealityKit, under tight Apple conditions. In Full VR mode without passthrough you have full control, so custom shaders, but no gaze data and a 1.5m motion range limit. The MR mode itself has two flavors, single-app or multi-app, with tradeoffs like no hand joints in multi-app. Apple's hand joints aren't the OpenXR standard set anyway, so they need translation to be cross-platform (sketch below). And so on. All that said, it's great that so many experiences can port, and there is a lot more interest in WebXR now; hopefully WebGPU will fall into place too, and it will be interesting to see what we can do with the Internet in 3D.
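To make the hand-joint translation point concrete, here's a minimal sketch of remapping visionOS ARKit hand joints onto OpenXR XR_EXT_hand_tracking indices, assuming the HandAnchor/HandSkeleton API from the beta SDK; the mapping table is deliberately partial and only illustrative, not a real shim.

    import ARKit
    import simd

    // Partial, illustrative map from visionOS HandSkeleton joints to
    // OpenXR XR_EXT_hand_tracking joint indices.
    let openXRJointIndex: [HandSkeleton.JointName: Int] = [
        .wrist: 1,            // XR_HAND_JOINT_WRIST_EXT
        .thumbTip: 5,         // XR_HAND_JOINT_THUMB_TIP_EXT
        .indexFingerTip: 10,  // XR_HAND_JOINT_INDEX_TIP_EXT
    ]

    // Convert one hand anchor into world-space poses keyed by OpenXR joint
    // index, composing the anchor transform with each joint's
    // anchor-relative transform.
    func openXRPoses(from anchor: HandAnchor) -> [Int: simd_float4x4] {
        guard let skeleton = anchor.handSkeleton else { return [:] }
        var poses: [Int: simd_float4x4] = [:]
        for (name, index) in openXRJointIndex {
            let joint = skeleton.joint(name)
            poses[index] = anchor.originFromAnchorTransform * joint.anchorFromJointTransform
        }
        return poses
    }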
In the WWDC Unity-on-Metal demonstration they definitely seemed to have gaze tracking. Have you looked through the SDK and confirmed that there is no public way to do this?
Yah, it's very easy to get confused by them chopping all the sessions up for the conference. It was a little frothy, but my understanding now is that gaze is provided in both MR modes, known as bounded and unbounded volumes, so look-and-click works fine. In a bounded volume you don't get head or hand pose, or room geometry. In the fully immersive VR mode you don't have gaze data, for understandable privacy reasons (although room geometry could be handy); if it really matters, you build for MR instead and accept the more restrictive requirements, like Unity PolySpatial and Shader Graph-only materials.
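For reference, here's a rough sketch of how those three modes show up on the native SwiftUI side, assuming the visionOS scene types from the beta SDK (the view types and ids are made up for illustration):

    import SwiftUI

    struct BoundedView: View { var body: some View { Text("Bounded volume") } }
    struct UnboundedMRView: View { var body: some View { Text("Unbounded MR") } }
    struct FullVRView: View { var body: some View { Text("Fully immersive VR") } }

    @main
    struct ModesApp: App {
        @State private var mrStyle: ImmersionStyle = .mixed
        @State private var vrStyle: ImmersionStyle = .full

        var body: some Scene {
            // Bounded volume in the Shared Space: RealityKit-only rendering,
            // look-and-pinch input, no head/hand pose or room geometry.
            WindowGroup(id: "bounded") { BoundedView() }
                .windowStyle(.volumetric)

            // Unbounded MR (a Full Space with passthrough): adds ARKit data
            // like hand anchors, still RealityKit-only rendering.
            ImmersiveSpace(id: "unboundedMR") { UnboundedMRView() }
                .immersionStyle(selection: $mrStyle, in: .mixed)

            // Fully immersive VR: full rendering control (custom shaders),
            // but no gaze data exposed to the app.
            ImmersiveSpace(id: "fullVR") { FullVRView() }
                .immersionStyle(selection: $vrStyle, in: .full)
        }
    }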
Even the Unity team had to correct a few edge cases around WWDC as the sessions dropped:
I don't think that tweet thread addresses gaze in particular? Is there a particular tweet I should be looking at, or is it just making the (absolutely correct) point that it's confusing?
---
I'm specifically talking about the "Bring your Unity VR app to a fully immersive space" WWDC talk, which claimed to be built on Metal and ARKit, and had a very brief demo in it talking about gaze, screenshot here: https://media.hachyderm.io/media_attachments/files/110/520/9...
Note the demo wasn't actually on-device, and gaze was just following where the screen was pointing, so it's definitely possible I'm reading too much into this.
No, I am sure they don't give the app access to the passthrough data or gaze details in VR mode, unless there are quite a few people wrong on this too. And since in VR you are completely outside the RealityKit rendering subsystem, I don't even know how you would wire what's in your app to the gaze hover triggers, etc. My guess is the slide is referring to the MR unbounded mode (they can call it a Full Space, which muddies the water). I would definitely be interested to find out I am wrong here.
EDIT:
Checked with our team: in VR mode, gaze fires event triggers on any Apple UI object without exposing the details, so that works unless your UI is not easily portable. And that's the same across all the MR and VR modes.
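For what it's worth, that matches what the SwiftUI side looks like: a minimal sketch where a stock control reacts to look-and-pinch, the system draws the hover highlight, and the app only ever receives the tap, never the gaze ray (the view and labels are made up for illustration):

    import SwiftUI

    struct PortedMenuView: View {
        @State private var started = false

        var body: some View {
            // A stock control: the system highlights it when the user looks
            // at it and fires the action on pinch. The app only receives the
            // resulting tap; the gaze position itself is never exposed.
            Button(started ? "Stop" : "Start") {
                started.toggle()
            }
            // System-drawn gaze feedback (explicit here; buttons get one by default).
            .hoverEffect(.highlight)
        }
    }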