Since they do mention eye-tracking, they can heavily exploit foveated rendering, which makes the resolution extremely high at the exact point where your eyes are looking and much lower in the outer regions of your vision. This is practically unnoticeable (if done right) and allows for much more interesting performance optimizations. Full 2x8K at 90Hz is impossible otherwise.
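A minimal sketch of the falloff idea (all constants illustrative, not anything from Apple's actual pipeline): render at full resolution within a few degrees of the gaze point, then taper off, and note that the cost saving is quadratic in the resolution factor:

    def shading_scale(eccentricity_deg):
        # Fraction of full resolution to render at a given angular
        # distance from the gaze point; constants are made up.
        if eccentricity_deg <= 5.0:        # foveal region: full detail
            return 1.0
        t = min((eccentricity_deg - 5.0) / 50.0, 1.0)
        return max(1.0 - t, 0.125)         # floor at 1/8 resolution

    for e in (0, 10, 20, 40, 55):
        # rendering cost scales with the square of the resolution factor
        print(f"{e:2d} deg: {shading_scale(e):.2f}x res, "
              f"{shading_scale(e) ** 2:.2f}x pixel cost")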
Around the early/mid eighties, I got to play in a flight simulator for the new F-111 avionics package Australia was buying (my dad ran the project building the sims).
It had a pair of Silicon Graphics Reality Engine IIs: one projecting a lower-res image over the entire hemispherical screen, and the other driving a projector mounted on a gimbal that tracked the flight helmet - to display a high-resolution image in the pilot's direct field of view.
It was _possible_, if you tried, to "trick" the system so you could notice from the pilot's seat what it was doing. But the difference was _remarkable_ between sitting in the seat with the helmet on, and watching from behind, where you could really obviously see the high-res patch of sky moving around. Enemy planes turned from Space Invaders-style pixel art into recognisable Russian fighter planes when the pilot looked at them. The "immersive reality" while flying the sim was amazing.
It greatly depends on the techniques used for rendering. Current rendering engines focus on pixel perfection at every part of the screen because they don't know where you are looking. More and more games use a hybrid of path-/raytraced effects and other shader effects that could benefit from knowing where the viewer is actually looking.
Raytracing especially can get huge speedups from casting fewer rays: https://www.peterstefek.me/focused-render.html
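As a rough sketch of what that budgeting looks like (sample counts and radii are illustrative, not taken from the linked post):

    import math

    def samples_per_pixel(px, py, gaze_x, gaze_y, fovea_px=150.0):
        # Allocate ray samples by screen-space distance from the gaze
        # point; the periphery gets few rays and relies on denoising.
        d = math.hypot(px - gaze_x, py - gaze_y)
        if d < fovea_px:
            return 64        # full quality where the eye is looking
        if d < 3 * fovea_px:
            return 16        # mid-periphery
        return 2             # far periphery: blur/denoise hides the noise

    # coarse estimate of the ray budget vs. a uniform 64 spp
    total = sum(samples_per_pixel(x, y, 960, 540)
                for x in range(0, 1920, 16) for y in range(0, 1080, 16))
    full = 64 * len(range(0, 1920, 16)) * len(range(0, 1080, 16))
    print(f"~{100 * total / full:.0f}% of the uniform ray budget")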
Nvidia has researched temporally stable resolution reduction at the edges (needed, or you'll notice flickering in the blurring), as well as enhancing contrast, which the eye is more sensitive to in peripheral vision than sharp detail.
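The contrast part is roughly: blur the periphery to save work, then push values back away from the mean so the contrast loss is less visible. A toy version of the idea, not Nvidia's published algorithm:

    import numpy as np

    def contrast_preserving_blur(region, gain=1.5):
        # Toy peripheral filter: 4-neighbour blur, then push values
        # away from the mean to re-amplify the contrast the blur
        # took out (real implementations work on local neighbourhoods).
        blurred = region.copy()
        blurred[1:-1, 1:-1] = (region[:-2, 1:-1] + region[2:, 1:-1] +
                               region[1:-1, :-2] + region[1:-1, 2:]) / 4.0
        mean = blurred.mean()
        return np.clip(mean + gain * (blurred - mean), 0.0, 1.0)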
Put a lot more research into this, as well as proper support in the major 3D game engines, and we have a winner.
Current lenses have quite a pronounced sweet spot in the centre of the vision, so high resolution is wasted at the edges.
"fixed foveated rendering" is supported with Oculus and implemented directly in some games to reduce resolution at the edges just without eye-tracking so you can notice it if you move your eyes instead of your head. There is also "dynamic fixed foveated rendering" to ramp up/down for the current rendering load.
Which version(s) have you experienced? (Any that track eyes?)
I've only used the Quest 1, with _fixed_ foveated rendering, and while it's noticeable, it's good enough that I could see a generation or two of improvement pushing it beyond noticeability.
I think part of the problem is that the peripheral band, where you wouldn't notice the lower resolution but still see something, is so thin that you don't save a lot. Maybe Apple has cracked it, though.
Might also be that peripheral vision is more sensitive to movement, and the edges where the lower-res and higher-res rendering meet could make for distracting discontinuities that look to the reptile brain (or the preprocessing in the retina/optic nerve/visual cortex) like a tiger...