This is an intriguing idea, possibly much more convenient and ergonomic for productivity than a fully fledged VR headset.
But the fact that these are 1080p internal displays suggests it's a "fixed" 1080p screen that moves with your head, not a "virtual" screen that appears to stay in place as you move your head.
Does anyone here have any experience with how usable that is? When I start up my Oculus Quest 2 and it has the fixed startup screen logo before it switches to head-tracking mode, it's rather disorienting, since visual cues are totally mismatched with my sense of balance.
I'm not sure whether I could get used to a big screen "tethered" to my head angle like that, or whether it eventually starts to feel natural.
Also I wonder about the brightness here -- the Quest 2 isn't very bright at all, but it's OK since virtually all surrounding light is blocked out and my eyes adjust. Whereas this device is basically large sunglasses that let in tons of surrounding light -- can the screen brightness compete?
It is an obvious idea. AR/VR is hard. Dumb displays are not. I think we should have had similar products a decade ago, and if I'm not mistaken, these exist in cheaper SD versions from no-name manufacturers.
That said, I want this, and not Meta's thing or whatever Apple is cooking up, because I knew ahead of time that no one was going to get the VR part right initially. They are barking up the wrong tree, because as far as I know, no developer is paying attention to proprioception. From what I can tell, they're focused on the eyes only, and proprioception is essential for VR. AR is different; maybe they're getting that right. I'm just not interested in AR. It's great as futuristic technology in a film, but in practice, it seems silly to me.
But I don't know everything, hardly anything, really. What is the metric called that is the distance to where the display appears to be? It is not focal length, and I really need a word for it. If the display appears to be, say, 10' away, I probably don't want it. It would be awesome if this apparent distance was adjustable. I am near-sighted, hate contacts, and corrective lenses give me migraines. Thus, I prefer to view displays quite close, no more than 2' away, or appearing no more than 2' away when the display is actually much closer to the eye behind a magnifying lens. Tablets have worked well for me for consuming media because I can hold them at the right (close) distance for comfort.
Really hope I get what I want; I have wanted it for about 25 years and been confused why HMDs have mostly been used only for military purposes, at insane prices.
It is about time someone ignored the difficult-to-get-right AR/VR tech and just gave us a high-quality, personal, wearable display.
> no developer is paying attention to proprioception
That is not true at all. Proprioception is top of mind for every serious VR developer. Outside of poor software performance, it's the #1 cause of simulator sickness. Short of hacking the vestibular system, there's nothing we can do about the lack of virtual acceleration cues. So we design within the constraints we have. It's why you see teleport locomotion. It's why developers get into such arguments over "smooth movement" with gamers who think they know better. Lack of proprioception with virtual movement is a huge problem that we would love to solve, but laypeople have an irrational fear of a little extra electricity getting piped through their ears, so I doubt we're going to see much change there.
Fixed-view, wearable displays violate proprioception even more than VR HMDs, because they don't even give you the real acceleration cues you get from actually moving your head.
As for your question about distance, I suspect you're looking for "accommodation". Accommodation is the physical distortion of the eye to change the focal distance of the lens. Combined with vergence (the distance at which our binocular vision rays converge), the two systems work together to give you a sense of how far away something is (among other things, like atmospheric hazing, parallax, and trained memory of similar objects). Most modern VR systems lack the ability to account for accommodation, leading to what is called "vergence-accommodation conflict". It can be a source of eye-strain for some people, but it tends to be adaptable over time.
Note that I said "most" modern VR systems. It's an active area of hardware research, as the solutions--light field displays and varifocal lenses--are large, complex, and therefore expensive. The latest developments in "pancake" optics, where focal distance for the lens is folded in on itself via a clever system of internal reflectance, has a possibility of bringing the size down, but unfortunately not the complexity or cost.
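To put rough numbers on the conflict, here's a quick back-of-envelope sketch (not tied to any particular headset; the ~64mm IPD and the 1.5m fixed focal plane are assumptions in the ballpark of common HMD optics) showing how far vergence diverges from the fixed accommodation distance:

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.064):
    # Angle between the two eyes' lines of sight when fixating a point
    # distance_m away; 0 deg means the eyes are parallel (infinity).
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

focal_plane_m = 1.5  # assumed fixed optical focal distance of the HMD
for virtual_m in (0.3, 0.5, 1.5, 10.0):
    conflict = vergence_angle_deg(virtual_m) - vergence_angle_deg(focal_plane_m)
    print(f"object at {virtual_m:4.1f} m: "
          f"vergence {vergence_angle_deg(virtual_m):5.2f} deg, "
          f"conflict {conflict:+5.2f} deg")
```

Anything rendered much closer than the focal plane racks up several degrees of mismatch, which is why close-up UI tends to be where people feel the strain.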
So again, it's really not fair to say developers are "ignoring" these problems. We are all extremely aware of them. There are just some things you can't do anything about without a major breakthrough.
> Fixed-view, wearable displays violate proprioception even more than VR HMDs, because they don't even give you the real acceleration cues you get from actually moving your head.
This is unfortunate. I have severe motion sickness; I can't even play Minecraft or any FPS, so VR headsets are out of the question.
I have been waiting for display glasses like Lenovo's T1 for years now[1] due to their accessibility benefits. From what you say, it seems like I would get sick using display glasses too until the proprioception issues are resolved. I'm glad it's being seriously researched.
Are you saying that you have tried VR headsets and find that they are bad for your motion sickness, or that you presume that they would be bad because of FPS games?
If it's the latter, there is a small chance that you would not have the same motion sickness problems with a VR HMD as you do with FPS gaming. Most of the headsets on the market today use 6-degree-of-freedom tracking for the head and hands. So if you move your head, the view does update accordingly. Actually, I'm not even aware of any headsets that are still available for purchase that have 3-DOF head tracking, though there are a few like the Vive Flow that have only 3-DOF controllers.
Headsets like the Quest 2 and Valve Index also have extremely high screen refresh rates (120 Hz and 144 Hz, respectively) that have, for most people, practically eliminated the motion sickness problems that can be attributed to the hardware alone. Granted, there are still some dodgy apps out there. I would avoid anything based on 360 video content in your case. Hell, I don't really have any motion sickness problems and I avoid 360 video content, just because I find non-interactive stuff to be a poor experience overall.
I wouldn't go out and buy one right away, but if you run into someone with a Quest 2, I'd encourage you to try it out. Supposedly, Meta is about to release their next iteration of headsets, whatever "Cambria" or "Quest 2 Pro" or "Quest 3" are supposed to be, which should be even more powerful.
The point I was trying to make in my post was that GP didn't know what he was talking about and that devices like the Lenovo T1 would be a lot worse for the kinds of issues he was mentioning than the VR headsets he was complaining about.
I just presumed that VR would suck for me since I couldn't play any FPS, so I never tried a real VR headset.
Thank you for the insights. It makes perfect sense why VR headsets with higher refresh rates and 6-DOF tracking might not make me sick. I'm going to try it when I get a chance.
> there's nothing we can do about the lack of virtual acceleration cues.
I think we're not talking about the same thing. I don't see how it could be a cause of simulator sickness, which I thought had to do with eye strain from focal points not being correct, and the inner ear.
Proprioception is a sense like the ordinary 5 senses, the perception or awareness of the position and movement of the body. It is what makes our body feel like our body. Dissociation, caused by dissociatives, can cause inanimate external objects to feel like they are part of us, which could be considered chemically hacking proprioception. The wooden arm illusion is another way to hack proprioception using touch to reinforce the sense of it. But VR should be able to do this without chemistry or touch, because visual clues reinforce the sense of proprioception.
VR can really be the whizbang killer app, but it isn't. Maybe they show you a virtual arm and you see yourself manipulate it, but it doesn't feel like your arm (I am not talking about the feeling of touch). That is how they are ignoring proprioception. And it doesn't have to be an arm, because in VR, you don't have to be human. It should be possible to blow the wooden arm illusion out of the water with VR, and have ten arms that all feel like they are distinctly your arms. But we don't see that, and no one is working on it. All we get is a stiff arm without natural articulation.
So it is fair to say developers are ignoring proprioception, because they are unaware of their own sense of proprioception (like knowing where your hand is without looking at it). Proprioception should be of primary concern in Virtual Reality. They should toss everything, and start all over again with inducing proprioception as the prime driver of the technology.
> As for your question about distance, I suspect you're looking for "accommodation". Accommodation is the physical distortion of the eye to change the focal distance of the lens. Combined with vergence (the distance at which our binocular vision rays converge), the two systems work together to give you a sense of how far away something is (among other things, like atmospheric hazing, parallax, and trained memory of similar objects). Most modern VR systems lack the ability to account for accommodation, leading to what is called "vergence-accommodation conflict". It can be a source of eye-strain for some people, but it tends to be adaptable over time.
Interesting, but complicated. It should be called "apparent display distance," or something simple and easy to understand. Still, that gives me more information. Appreciated.
A non-trivial number of VR users experience phantom sense of varying strength, including myself. Proprioception is not the only thing being consulted here, seeing someone touch what visually seems like your face is a common low-level manifestation of this. It's possible to be in an avatar and have someone touch the avatar's tail and somehow weirdly you can feel like "your tail" is being touched (albeit you feel it in your tail bone).
Maybe the technology to better integrate more brain variants will eventually exist, but current VR tech already has a niche to cater to and develop from.
> A non-trivial number of VR users experience phantom sense of varying strength, including myself.
You have elegantly spotlighted the problem. Call me when they all experience it, always and every excursion into the virtual. Then we'll know VR developers have finally started paying attention to proprioception.
I don't have enough prior knowledge to tell whether your definition of proprioception is correct or not, but I know that forward kinematics is completely neglected, and that should at least be included in your version of it. Even teleop robots weirdly don't have full limb/joint-angle capture, nor care about forward/inverse kinematics agreement; just inverse kinematics with control handles.
Even this, what you are saying about "virtual arms", has been tried. Inverse-kinematic self-avatars were a big thing within the industry about 5 years ago. And it was... largely not that important. Without a tracking point for the elbow, it's nearly impossible to do IK for limbs. Remember, a double-pendulum is a chaotic system. A small perturbance can lead to dramatic divergence between simulated and actual state in chaotic systems.
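To make the elbow problem concrete, here's a minimal sketch of the two-bone case (all names and numbers are illustrative, not from any shipping SDK): given only shoulder and wrist positions, the elbow is constrained to a circle, and the "pole" hint below is exactly the guess the solver has to make in place of a real elbow tracker.

```python
import numpy as np

def two_bone_ik(shoulder, wrist, upper_len, fore_len, pole):
    # The elbow must sit upper_len from the shoulder and fore_len from
    # the wrist: that's a whole circle around the shoulder-wrist axis.
    vec = wrist - shoulder
    d = np.linalg.norm(vec)
    axis = vec / d
    d = min(d, upper_len + fore_len - 1e-6)            # clamp if out of reach
    a = (upper_len**2 - fore_len**2 + d**2) / (2 * d)  # law of cosines
    r = np.sqrt(max(upper_len**2 - a**2, 0.0))         # elbow-circle radius
    center = shoulder + a * axis
    # Project the pole hint onto the circle's plane to pick ONE of the
    # infinitely many valid elbows -- the guess that so often looks wrong.
    to_pole = pole - center
    to_pole -= axis * np.dot(to_pole, axis)
    to_pole /= np.linalg.norm(to_pole)
    return center + r * to_pole

shoulder = np.array([0.0, 1.4, 0.0])
wrist    = np.array([0.4, 1.2, 0.3])
pole     = np.array([0.2, 1.0, -0.5])   # "elbows roughly down" heuristic
print(two_bone_ik(shoulder, wrist, 0.30, 0.28, pole))
```

Every frame, the real elbow can swing anywhere on that circle while the tracked shoulder and wrist barely move, which is why the simulated arm diverges so visibly from your actual one.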
That said, it's still possible to get body sense in VR, for the head, main torso, arms, and especially hands. It's hard to do. And when things are hard like this, there are economies of scale to consider. For example, the Quest 2 has native hand tracking built in, but not a lot of other VR systems do, and the ones that do often have dramatically different APIs (though OpenXR helps a lot here, which shifts the problem to whether or not the system in question has an OpenXR-compliant driver). Do you go all in on one platform and support its input modalities to the maximum degree? Or do you try to target the common denominator across a broader swath of the market?
Non-VR consumer software developers get to develop their software in an environment of operating systems that have well-established UI and UX metaphors. You make a button in WxWidgets or UWP or SwiftUI or HTML and that button comes with a LOT of functionality that you don't have to program yourself. We don't have that in VR. To make a button in a game engine for a VR game you basically have to implement that entire thing yourself. Oh, the game engine probably gives you a raycaster to be able to find objects, but you have to program up where that raycaster should be firing, which objects you should be interested in, and what interactions hitting the buttons on your controller map to how the button in the view reacts. To a large degree, every VR developer has to reinvent the world when they make something. We're working on the very edge of performance that these systems can eke out.
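As a flavor of what "implement that entire thing yourself" means, here's a toy sketch of the per-frame logic behind one VR button; every name here is invented for illustration, and a real engine version would also handle hover highlights, haptics, and focus:

```python
class Button:
    def __init__(self, label):
        self.label = label
    def on_press(self):   print(f"[{self.label}] down")
    def on_release(self): print(f"[{self.label}] clicked")

def update_pointer(hit, trigger_down, active):
    # hit: whatever Button the engine's raycaster says the controller
    # ray intersects this frame (or None). The press/release semantics
    # a desktop toolkit gives you for free are all hand-rolled here.
    if trigger_down and active is None and hit is not None:
        active = hit                  # press began on this button
        active.on_press()
    elif not trigger_down and active is not None:
        if hit is active:             # release-inside counts as a click
            active.on_release()
        active = None
    return active

# simulate three frames: aim at the button, pull trigger, release
btn, active = Button("Recenter"), None
for hit, trig in [(btn, False), (btn, True), (btn, False)]:
    active = update_pointer(hit, trig, active)
```

And that's the easy part; wiring the ray origin to the correct controller pose and deciding which objects are even eligible targets is still on you.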
We don't have shared UI metaphors, but not for lack of trying. There are several projects underway attempting to do just that. MRTK and StereoKit in particular have put a lot of work into just that. But they aren't for everyone. They can't do everything. It'd be like expecting Call of Duty to be implemented strictly in Win32 controls. And the input modalities are so much more diverse and non-standardized than desktop mouse-and-keyboard that I doubt there will ever be a one-size-fits-all UI toolkit for VR.
I reassert that you are not being fair to the VR industry. Everything you have talked about so far has been tried and either found to not be worth the complexity or not possible to do within the current constraints.
Accidentally stumbling upon proprioception is not what we're after. We are after being fully proprioceptively immersed within seconds of jacking in, and until developers place proprioception front and center in their development of VR, accidentally stumbling upon it is all we can expect.
> It is an obvious idea. AR/VR is hard. Dumb displays are not. I think we should have had similar products a decade ago, and if I'm not mistaken, these exist in cheaper SD versions from noname manufacturers.
These products have existed for decades. I can't find the one I remember seeing ads for during the 2000s, but Sony had one in 2011 called the HMZ-T1, and a much earlier model called the Visortron. I think the widely advertised ones were maybe iTV goggles, but I can't find any history on them. They were typically advertised as equivalent to a large-screen TV viewed from farther away.
If you want a demo of a real product, call around to children's dental offices. Many of them have (or have had) these to help distract kids during exams and procedures, although mounting a TV above the exam chair is also popular (a lot easier with a lightweight LCD or OLED than a CRT or plasma).
Wikipedia (https://en.wikipedia.org/wiki/Glasstron) says the lineage is Visortron (????), Glasstron (1996), HMZ-T1 and HMZ-T2 (2011), HMZ-T3 (????, maybe also 2011).
I bought a pair for experimentation as a viewfinder (about $1000). They proved the potential of that application, but suffered from some fit-adjustability issues and too small of an image. They could have been very good, however.
I had a Sony Glasstron in 1999 that was basically the same idea but TV-resolution instead of 1080p. It was fine for playing games and watching movies. A little bit alienating to anyone else in the room, though.
Yeah, these have been around for almost a decade as cheap VR knockoffs for the unsuspecting, albeit I haven't seen the direct USB-C output option before. Not until you take it home and turn it on do you realize the resolution of 1080p strapped to your eyes is a pixelated mess and the lack of even basic 3DOF makes these things uncomfortable for most people.
> Not until you take it home and turn it on do you realize the resolution of 1080p strapped to your eyes is a pixelated mess
Well, that sucks. Thanks for the clue to never purchase without a personal demonstration. I figured SD would be like that, but really hoped that 1080p would be satisfactory.
But doesn't this depend on pixel density? If the displays approach the pixel density of high-end smartphone displays, I'd probably be satisfied with that. Even though those pixels can be seen (at 10x magnification), at a high enough pixel density, pixels can be ignored when focusing on the video content. I used to sit really close to CRT televisions, which have huge pixels, so there is bound to be some improvement from the way it was in the dark ages.
I've repeatedly tried to get into virtual dev work on my Quest 2 (1832x1920 per eye) but the screen density still isn't there yet. It feels similar to trying to code on a classroom projector screen, if that analogy makes any sense.
Quest Pro is hitting in October (supposedly) with 2160x2160 per eye but I still don’t think that gets density to where it needs to be for doing desktop work.
I think Apple has realized this, which is why every screen resolution leak I’ve read has it pegged at 4K per eye or more. But who knows when that thing will even get announced?
Regarding the Apple headset, I'll only believe it's real when Apple officially announces it — it's been bouncing around the rumour mills as "6-12 months away" for at least half a decade, with whatever specs sounded good that year.
(Also, I remember the initial rumours for the iPad; everyone I saw was saying it would run normal Mac OS X).
Yep, that’s why I personally don’t think the recent news about Apple trademarking “Reality” names has any bearing on the device’s imminent release.
That said, I do think we’re approaching an announcement since it was apparently demo’ed to the board earlier this year. I just want it to happen so I can get excited at a keynote only to go “holy moly, I’m not buying a first-gen Apple peripheral for $3000, Mister Tim Apple”.
The problem is that Apple, if they ever do release such a thing, will not provide an HDMI input to make the product versatile and useful. It'll be gimped by Apple's fear of connectivity, just like all of their iOS devices.
Oh I fully expect to have to jump through several hoops to get a workable office environment on any Apple XR product.
I look at it like the newest MB Pros: if the hardware is way ahead of everyone else, I'll hold my nose and put up with some amount of kludgy workarounds (e.g. Parsec because no Boot Camp, etc.).
I'm so glad Apple finally shitcanned their inexcusable "butterfly" keyboard and embarrassing emoji bar. Jony Ive's departure was the best thing for Apple in the post-Jobs era. The guy is a pompous hack.
Doesn't matter. USB-C is not a video input. And if it were, Apple would still gimp it. I'm sure developers can't access the USB port on iPad Pros, just as they can't access the shitty Lightning port on anything else.
Debatable. You can run monitors and these AR glasses over USB-C, and I'm pretty sure you can get HDMI-to-USB-C adapters if that's what you need.
> Apple would still gimp it. I’m sure developers can’t access the USB port on iPad Pros
This is semi-true. According to this Stack Overflow post, you can get access if you're part of their MFi program, but that's mainly aimed at corporates. You can try working around it using MIDI or a Raspberry Pi if you're doing personal tinkering: https://apple.stackexchange.com/questions/13605/how-to-get-s...
I think you're talking video output, not input. Yes, there are USB-based video digitizers (for ingest), but you have to be running capture software on the host device. Very different from simply having a video input going to the screen.
HP Reverb G2 has a 2160x2160 full-matrix LCD. I would say it comes very close to being usable as a virtual monitor, but the virtual screen still needs to be quite large. Extrapolating from this and older generation headsets, I would guess 4K per eye should be enough for a true office experience.
On top of that, the headset itself must be comfortable to wear for sessions longer than 30 minutes. None of the current models are that comfortable.
I am starting to see resolution doesn't matter. According to the specs, the Valve Index has a very low average pixel density of about 13-15ppi. Criminal. Compare that to a high-end iPhone display's pixel density of 128ppi. So if we could have ocular displays around 100ppi @720p, that would look a lot better than the Valve Index displays at their seemingly impressive 1440x1600.
It's hard to discuss a monitor being 100ppi@720p because those two variables are related via a third variable: size. A 720p screen would be 100ppi at about 14", and indeed that is a common size for crappy laptops. A tiny screen with that PPI strapped to your face would look like shit. For instance, a 2" square screen would be 200px x 200px.
A 20/20 or 1.0 visual acuity equals 1 MOA (1/60 deg).
A 20/20-equivalent 360-degree panorama image has a size of 21600x10800px minimum. For a VR headset with 120x120 degrees H/V FOV (178 deg diagonally), you need a 7200x7200px panel per eye. 2 x 7200px x 7200px x 24bpp x 144Hz = 360Gbps, or 4x 100GE LAG'd, or one PCIe 4.0 x23 link, or one DDR5 channel.
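Spelling that arithmetic out in a few lines (same assumptions: 1 pixel per arcminute, 24bpp, 144Hz, two eyes, no compression):

```python
px_per_deg = 60                # 20/20 acuity ~= 1 pixel per arcminute
fov_deg = 120                  # assumed horizontal and vertical FOV
side = fov_deg * px_per_deg    # 7200 px per side, per eye

bits_per_sec = 2 * side * side * 24 * 144   # 2 eyes, 24bpp, 144Hz
print(f"{side}x{side} per eye -> {bits_per_sec / 1e9:.0f} Gbps raw")
# 7200x7200 per eye -> 358 Gbps raw, i.e. the ~360Gbps figure above
```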
Our retina only needs ~8 megapixels - the same number of pixels as 4K/UHD (3840x2160 is 8,294,400 pixels).
High pixel density is only required at the middle of where our gaze is; the surroundings can be highly compressed because our eyes are not very sensitive there. No pixels are required where our blind spot is, and our colour and light-level sensitivities vary from the fovea outwards, so there are other compression possibilities too.
If the display can "move" with your eyes (high pixel density only for fovea e.g. contact lens) then the display also only needs a limited number of pixels.
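As a toy model of how much that buys you (the 20-degree full-acuity window and the 10ppd periphery are made-up illustrative figures, not measured retinal numbers):

```python
# crude foveation budget: full acuity only in a small central window
fovea_deg, fov_deg = 20, 120
fovea_px = (fovea_deg * 60) ** 2    # 60 ppd inside the fovea window
periph_px = (fov_deg * 10) ** 2 - (fovea_deg * 10) ** 2  # ~10 ppd elsewhere
print(f"{(fovea_px + periph_px) / 1e6:.1f} MP foveated vs "
      f"{(fov_deg * 60) ** 2 / 1e6:.1f} MP uniform")     # ~2.8 vs ~51.8
```

An order of magnitude or more of savings, which is the whole pitch behind eye-tracked foveated rendering.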
Valve Index has a PPI of 598. What I assume you are talking about is PPD, or pixels per degree. An iPhone doesn't have a fixed PPD since the distance to your eyes is not fixed.
An ocular 720p display at 100ppd would be about an inch diagonally at 10cm from your eyeballs. That's basically unusable.
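The PPI-to-PPD conversion is just viewing geometry; a quick sketch (the 460ppi figure is an assumed modern-phone-ish density):

```python
import math

def ppd_from_ppi(ppi, viewing_distance_in):
    # Pixels per degree at the center of a flat panel viewed head-on:
    # one degree of visual angle subtends this many inches...
    inches_per_deg = 2 * viewing_distance_in * math.tan(math.radians(0.5))
    # ...and the panel packs `ppi` pixels into each of them.
    return ppi * inches_per_deg

print(ppd_from_ppi(460, 12))   # ~460ppi phone at 12 inches: ~96 ppd
print(ppd_from_ppi(460, 24))   # same panel at 24 inches:   ~193 ppd
```

Same panel, double the distance, double the PPD, which is exactly why a phone spec sheet can't quote a single PPD number.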
The difference is that VR headsets have 120 or so degrees of viewing angle, whereas this is much narrower. This will look as sharp as an HD TV, probably.
There is the focal distance, which is the distance at which your eyes need to focus. I'm pretty near-sighted, so my focal distance is short.
There is also the focal angle, which is the angle your eyes have to be at to focus on a particular point a given distance away. As we evolved over millions of years, the focal angle came to correspond to the focal distance: the angle gets closer and closer to zero (where your eyes are angled parallel to each other) the further away something is.
But with modern VR systems, they feed different images to each eye, and they give you different focal angles (to make things appear further away or closer), but the focal distance doesn't change -- it's still that screen that is millimeters away from your eyeballs.
So, your visual systems get all kinds of screwed up. So far as I know, this problem is not something that can be corrected with any modern VR system.
AR is different, since you're layering one generated visual image with one screen that both eyeballs are looking at and that gets put on top of the actual real world image that you can see.
> They are barking up the wrong tree, because as far as I know, no developer is paying attention to proprioception.
Not sure what you mean by this. VR headsets know exactly where your hands are and can infer most of the rest of your pose from head height, they rely pretty heavily on that for the immersion.
While position tracking is important, what I was talking about was the user's sense of proprioception. It seems to me that too much attention has been given to tuning the visual response to the user turning their head, looking up and down, or turning around, and little to no attention has been given to the user's sense of proprioception, to reinforcing the feeling that the virtual body parts are an integral part of the user's actual real-world body.
Depends. Are you attacking my credibility with an ad hominem fallacy? Just wondering. Because if so, it is only because my argument has confounded you, and if you can't beat that, maybe you can beat me.
But have you? Because I have, and to me, within the confines of what is economically feasible (e.g. hand tracking but not leg tracking), they seem to spend a lot of effort on proprioception.
I think in a conversation about VR it's ok to ask whether you've actually used it.
> I think in a conversation about VR it's ok to ask whether you've actually used it.
I don't see how it's relevant. Nearly all VR insights came from those who never actually used it, namely because it didn't exist yet.
But, in fact, I have used VR quite extensively. I have owned, borrowed, used, or demoed at least 50 different models over the last 25 years, and my experience over the last decade has only ever reinforced this grievance: that VR hardware developers and content producers are paying little to no attention to the user's proprioception, and nothing I have ever seen, neither in my own experience, nor in in-depth reviews, documentation, or your own statements, has led me to believe otherwise. Though the component technology is getting better all the time, VR development is no different than it was in 1996. It's as though developers are either entirely unaware that proprioception exists, or are aware and think it will take care of itself. And though it does sometimes, accidentally, it is not every time and always, and there is no great reason why it shouldn't be, other than that developers apparently either haven't discovered it, or haven't discovered how.
Quid pro quo: now please give me your VR credentials so I know I'm not wasting my time. After all, once you can no longer speak to someone's argument, you think it is ok to personally attack them. So let's scrutinize your experience and knowledge in kind and see how much of an expert you really are, and maybe see if we can't get you to realize that when your ability to persuade fails, and you have no further ability to make a valid argument and nothing more to offer, you should really simply stop, leave others alone, and not try to bully or humiliate them with personal attacks.
I simply asked if you had used any modern VR sets and you have started accusing me of 'personally attacking you'.
> Nearly all VR insights came from those who never actually used it, namely because it didn't exist yet.
Everything gets invented by people who've never used it. But VR is an explicitly experiential medium. Again, I think in a VR discussion it's perfectly fine to ask if someone has used it.
I have an Oculus Quest 2 (for my sins). When I first used it I was blown away, although I find myself not bothering with it much now; there doesn't seem to be much to draw me back there, which makes me think Meta are heading for trouble. But that's an aside.
I'm confused about your argument and I think perhaps by proprioception you mean something different to what I'm thinking?
Because my experience with the Quest is that the device knows exactly where my hands are, their exact orientation, and even what my fingers are doing. And it puts a lot of effort into making my 'hands' part of the experience by showing them to me in VR. This to me seems to be leaning heavily on my innate sense of proprioception to maintain the VR illusion. True, it doesn't know where the rest of me is. But the hands are the most important part, hence it seems that within the confines of what is economically feasible (e.g. hand tracking but not leg tracking), the developers of the Quest have spent a lot of effort on proprioception.
But you maintain that VR developers are entirely unaware of proprioception. So I think you are talking about proprioception in a deeper sense? Or in a sense that has not occurred to me? I'd be interested to know exactly what you mean.
My point was lost. If your argument is valid, it will avoid any mention of your opponent and their personal circumstances and focus only on what was said. Otherwise, you are constructing a fallacious argument known as the ad hominem fallacy.
Regarding proprioception, you are likely very aware of most of your senses, such as sight, smell, taste, etc. Proprioception is a sense precisely the same as the sense of sight is a sense; it is merely the perception and awareness of the position and movement of the body and its parts. Without looking to see, you know where your nose is, you know where your hands are, you know whether your feet are, say, pointed towards or away from each other or anywhere in between. This is the sense of proprioception. When you become experienced at driving a vehicle, you will have a good sense of where the perimeter of that vehicle is, including the edges of a trailer if you are towing. This is an extension of proprioception: the vehicle more or less becomes a part of your body, and you know its edges without constantly checking to see where they are.
In VR, what you see, hear, and sometimes feel should reinforce the sense of proprioception: whatever you see yourself manipulating should feel as though it is a part of you. There are tricks that developers can use to reinforce this sense, but beyond seeing a moving limb in feedback to your own actual limb movement, I have never seen any development of proprioception.
There is a body illusion known as the wooden hand or wooden arm illusion. Your arm is placed under a table so that you can't see it, and a wooden beam (say, a short 4x4) is placed on the table so that you can see it. The assistant has two feathers, one in each hand, but you can only see one. Simultaneously the assistant strokes the wooden beam and your hidden arm with a feather in the same motion, and within seconds, your sense of proprioception (shockingly) fools you into believing that the wooden beam is your arm.
This is just an example of how sensory reinforcement is able to fool your sense of proprioception into rigorously believing that what you see is you. I have rarely, if ever, believed deeply enough, to the point of suspension of disbelief, that the elements I've manipulated in VR have become an integral part of me, proprioceptively. Developers are ignoring proprioception, and all it takes is to clue them in somehow ("hey! Have you guys heard of proprioception? Look into it!"), and they will do the rest, which will take research, understanding, and then changes to the product and content to constantly reinforce the illusion. When they do, every VR experience will truly be immersive, and not just said to be so in lip service by marketing materials.
Just because one subjectively likes something does not mean that it is objectively any good. No matter how advanced the underlying hardware gets in resolution or surround sound, unless proprioception is placed as the primary concern, VR will continue to kind of suck and never become mainstream. Everyone should have VR hardware; it should be everywhere in education, in commerce, and in industry, and right now it is nowhere but in the rare gamer's inventory.
I have the Avegant Glyph, which is fixed, and it is uncomfortable. It's natural to move your head to change your view, but that doesn't work with these devices. In practice I use a Quest instead, but even Daydream was more comfortable for media consumption.
In a video on YouTube, the demonstrator says that the screen can be fixed in virtual space, so that you can look away from it.
https://youtu.be/6uOoS2EF4CI
It would need more than a gyro - if it's fixed in space the optics are very different and much simpler, and the required screen resolution is completely different.
Yes, I walked around with a 640x480 monocular display tethered to a CompactFlash card in my Toshiba e750 PDA with a USB host controller, around 20 years ago.
Of course I didn't do desktop work with it, but I looked up offline wikipedia entries (in tr3/tomeraider3 format), read books, watched slideshows and played interactive fiction.
I could do that lying on my bed or sitting in public transport. The only problem was that when I was walking, I noticed I was walking in a sine wave. I had never noticed that before, as I had never had something to read attached so close to my eyes. The problem was that my right eye's clear real-world vision merged with the left eye's 640x480 display vision, creating a transparency effect on the virtual window. So in essence, when I was moving, I struggled to read the sentences on a moving background. If the motion were sideways it would be less of a problem, but it was up and down. Obviously you don't have that problem on a bike, but then it could be dangerous. But sitting and working should be OK; it would just take a little getting used to, though I guess it would be a bit less ideal than a fixed-in-space desktop like with the Oculus or HoloLens.
For brightness, I used it in broad daylight, I could still see the virtual screen brightly.
Some time later I did the same with an eMagin 800 HMD, which had an 800x600 screen for the left eye and an 800x600 for the right. I used that for Winamp stereoscopic visualisation. So for these types of applications it's for sure great.
>When I start up my Oculus Quest 2 and it has the fixed startup screen logo before it switches to head-tracking mode, it's rather disorienting, since visual cues are totally mismatched with my sense of balance.
Since you already have a Quest, you might be able to approximate the fixed AR experience using a media player and a 1080p video file.
Moon VR[1] is free and has a passthrough mode to overlay the video on your external surroundings. I haven't personally used it, but I'd be surprised if it doesn't have a head-lock option for making the video track your head (if it doesn't, Skybox VR does, but it's a paid app). In theory this should be fairly close to the Lenovo glasses experience, only much heavier and with a grainy B&W passthrough.
I would be surprised if it's fixed; gyro-based head tracking is very well-established tech now. It's probably 3DOF tracking (so no translation, just rotation). The Oculus-powered Gear VR and the Oculus Go took this approach, which works fine for most people for sit-and-watch theater use cases.
Edit: oh yeah, so I think you're referring to 3DOF. I could imagine these being fully tracked, but thought you were suggesting entirely untracked.
The specs do suggest these are entirely untracked.
Otherwise the 1080p internal OLED, if tracked, would result in a usable effective virtual screen resolution of more like 320x240, unusable for much of anything. Since the field of view would need to be significantly wider than the virtual screen to avoid clipping, and then there's effective resolution loss from resampling during virtual screen placement on top of that.
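A rough sketch of that resolution loss (all numbers illustrative, not the T1's actual specs):

```python
# If a 1920x1080 panel spans the device's whole FOV, a world-fixed
# virtual screen can only ever use the slice of that FOV it occupies.
panel_w, panel_h = 1920, 1080
device_fov_deg = 45     # assumed horizontal FOV of the glasses
screen_fov_deg = 12     # virtual monitor kept small enough to stay
                        # in view while the head moves around it
frac = screen_fov_deg / device_fov_deg
print(f"~{int(panel_w * frac)}x{int(panel_h * frac)} effective")
# ~512x288, before the further loss from resampling on reprojection
```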
If the tech allows making the display see-through on a per pixel basis, they could simulate seeing the full display as a monitor hanging a meter before your eyes. Turn your head, and the virtual monitor would move out of view, with the outside world showing up.
> But the fact that these are 1080p internal displays suggests it's a "fixed" 1080p screen that moves with your head, not a "virtual" screen that appears to stay in place as you move your head.
Check [Nimo](https://www.nimoplanet.com/), a start-up working in the same segment. I've tried their product, and it does have a virtual screen fixed in position; it allows up to six 1080p screens at a time, pinned at any angle.
They're about $1000 and show the potential of this indeed seemingly obvious product. They do suffer from a lack of sufficient fit adjustment, and the image should be larger.
I don't know why almost no one has explored this product category. One company stands above all others in failing to see the potential: DJI. Not only has DJI failed to put HDMI inputs on its FPV goggles, but it has released a string of drones that aren't compatible with them. Yes, most DJI drones are not compatible with THEIR OWN GOGGLES, despite having wireless video transmission that allegedly uses the same protocol.
Trying to move around while you've got a screen like that active in a VR headset is a bad idea, but I love watching Netflix VR like that. Just find a comfortable position and chill.
I feel like we're about 3 generations from "shut up and take my money." A no-brainer version would:
- Allow AR, or at least not block my full field-of-view, so I can use them in more contexts.
- Have competitive resolution with my normal monitor. 1080p, as these are, seems like an MVP, but 4k is where I'd start to really use these.
- Sci-fi: Support some kind of wireless protocol. A cable running down my shirt feels awkward. It's fine if I need a cable for 30fps movies, but for emails/coding/etc., I'm happy with much less refresh.
- Less plastic. More metal (or carbon fiber or whatever).
- A price point of $200 makes these a no-brainer, shut-up-and-take-my-money purchase. At $400, I'd probably buy. At $600, I'd probably /almost/ buy, but wait for used on eBay. $1000 is a no-buy.
> - Have competitive resolution with my normal monitor. 1080p, as these are, seems like an MVP, but 4k is where I'd start to really use these.
When talking goggles, I would take a wider FOV over super-high resolution. 4K smooshed into an image 30mm across would be a waste. That said, I can't find any mention of what FOV these glasses have.
> - Sci-fi: Support some kind of wireless protocol. A cable running down my shirt feels awkward.
I think I would take a high-quality cable over the heavier battery necessary to support wireless at video bandwidths.
> Less plastic. More metal (or carbon fiber or whatever).
I'm intrigued by why you feel plastic is a drawback, especially as metal has, IMO, very few advantages in this case (heavier, lower radio transparency, bends under shock, etc.).
The Nreal Air hits most of your requirements. AR overlay. 4K (although I don't know if that's overall or per eye). $599. Wireless with Android phones. I only found out about them through the article; I'm going down to my local phone shop tomorrow to see if they have any stock and whether I can test them out.
Depends on quality of simulated display, and how many.
It might be the display bandwidth that gets in the way, but if I could get N simulated arbitrary resolution displays that can be moved in AR fashion, I'd pay a lot more than $1000 for portable 8k virtual displays.
The "looking weird" is an argument that I find really interesting, as it is both completely true, but also extremely volatile.
For instance, wearing headphones in public 40 years ago drew strong social opposition, and you would totally look weird with headphones on. This was replayed with Bluetooth earpieces. Those ships have long since sailed.
The same argument was also made about smartwatches. Every time there was a kernel of truth, but as this is only a question of perception, there just needs to be a switch in people's minds and we reach the other side.
AirPods used to be weird. That’s how quickly it changes.
I'm an Apple fanboy and would queue up for a toaster if they released one. I had second thoughts about the AirPods because of the look, but bought them anyway, and the utility was such that I kept wearing them, and then they took off.
Apple can move the zeitgeist. Very few brands have that power.
One could have said the same thing about mobile phones a couple of decades ago when the first primitive PDAs came out but plenty of people are glued to theirs now, even in public.
Not their target market, but I can imagine a world where plugging them into headless servers (like in a rack in a DC) might be simpler than having physical monitors hooked to KVMs (or more realistically the little fanless machine I have on top of my bookcase, which is just enough of a pain to pull down and hook up to a monitor and keyboard...)
Think a QR Code on the server, scanned by the glasses. Confirm and then have a connection to this machine. That’s part of the future I have been waiting for.
The price of these hasn't been announced, but the competitors cost $600-800. That's a bit much for me.
That they're a lot lighter and smaller than full-fledged VR headsets is nice, but they're also more expensive and lower-featured, so that's a tough sell.
One thing I like about them is that they're really just dumb displays, rather than requiring a connection to the Facebook servers or some other dumb stuff that would ensure a limited lifetime and limited user control over what they do.
I want to trade my laptop for a set of glasses like these, plus a portable keyboard/mouse and drive everything from my phone. Imagine being able to code on a plane with a large screen, without having to worry about fitting it into the space in front of you..
I have a friend who does this for productivity reasons. When all you need is a shell to write code in, a phone actually does extremely well, because you can project it to your TV and there are just fewer opportunities for distraction compared to a full desktop PC, where there's the temptation to aimlessly browse the web or play video games.
This looks like it could be a fun companion device for the Steam Deck on e.g. a long train commute or flight. If the display is fixed in the center of your line of sight it may not be great for productivity work as that’s fairly unnatural, but for gaming you’re most often looking at the center of the screen anyway so it might be okay?
I've got an Oculus Quest 1 which I use for fitness. It's perfect for me, because it gets me moving and my heart rate up, and I hate normal exercise/gyms.
The other thing I have used it for is watching movies by syncing my screen from my PC. It's a bit awkward for this, so I've stopped using it (due to the poor battery, it being too large and bulky, and it being annoying to set up, with VR features like boundaries getting in the way).
However, when it did work watching a movie on it was great. To me it seemed like this was far better in terms of the view/screen than a top end TV, so it seems obvious to me that glasses can be the future of watching.
Why have multiple screens? I can have a laptop for portability, and then at home, pop on my glasses and have whatever type of screen setup I want, better than any monitor setup. Want to watch TV or a movie? Pop the glasses on and it's the same as being at the cinema, in your own home.
No need to take up space with a desktop computer + monitor or TV, no need to worry about finding the right place to set everything up, can literally do it from anywhere with a touch of a button. Heck can take laptop to hotel and watch movies from there and have the same experience I would at home.
I'm surprised there is not more focus on something like this. Perhaps this is something Apple are going to do with all their rumoured moves into this space.
There are some apps that offer synced watching with friends in the Quest, they've been working just fine. Having the ability to share a virtual screen in the same physical location would be a game-changer though. That could be a killer feature.
They are mentioned in the article, but I just picked up a pair of Nreal Air glasses to use with my Steam Deck and I am blown away so far -- the price point is about 1/3 of the T1's, but most of the specs are very similar. The support for fixing the screen using the 3DOF sensors is limited to their Android Nebula app for now; if they write a driver that allows it on a computer, that would be helpful, but for now I find they are most comfortable to use in a fixed position (i.e. laying back on the couch). The technology to make compelling head-mounted displays is finally here. I've owned various HMDs for drone viewing, but nothing I'd want to relax and watch a movie or play a game on (the screens are usually too small, and 480p resolution at best). More competition will only improve the space, so kudos to Lenovo!
I have a pair of Nreal Air glasses which I purchased for my Steam Deck. I used them a couple of times, then they went into a drawer.
The issue I had with them is that the screen moves with your head. I know this sounds super obvious, and I knew this would be the case when I purchased them; however, I did not understand how disorienting and frustrating it would be for gaming.
Most gaming HUDs are at the outside edge of the screen, and you look at them fairly frequently. If you're like me, you do this by moving your head slightly along with your eyes. I found it very distracting to move my head every time I wanted to check the edges of the screen, because I would move my head and eyes and expect to be at my target... but not actually have arrived there.
Additionally, because I was required to look only with my eyes, I felt like there was a lot of fatigue, and occasionally a bit of soreness, caused by constantly moving my eyes further than my habits had trained.
I think that the idea of a glasses based screen is sound, but I will wait for a decent pair with AR built in before I buy again. If the screen were fixed into place using AR cameras it would be a really cool experience overall.
Normally, I'm not one of those "shut up and take my money" types, but... I think I really want these.
Unless I'm missing something, it's too bad there isn't a wireless option. Maybe it's technological limitations of standard wireless technology, but I would think that wifi would be at least somewhat adequate. I could at least picture hooking a Pi Zero to it and using that either alone or to VNC into my other devices.
Obviously the lenses make it seem like the virtual image is several feet away, so in terms of focusing it's no different from an actual object several feet away. If anything, it's probably healthier than a computer monitor that sits up closer. Plus, these glasses would actually be better than a VR headset in that the lenses can be designed so that the focal distance matches the virtual screen distance perfectly. (In VR, there's a mismatch for most distances.)
The only thing you might have to worry about is interpupillary distance (IPD) -- the space between the OLED displays needs to be adjustable to match the space between your eyes. E.g. the Oculus Quest 2 has 3 settings for this, while the Quest 1 had a smooth adjustment. Adjustable IPD is essential, so I'd assume this either has it, or they'll sell different versions for different IPDs.
I'd bet they're going to be overall better for your health. You'll be able to have a focal distance farther than a screen, without having to look up or down (causing eye and neck strain), while also choosing the healthiest posture for yourself.
On my camera, the viewfinder (the thing you use to look "through" the lens) is a screen. You can adjust the diopter, which, AFAIK, is the equivalent of "moving the screen back". Essentially, you can have the screen appear as visually close or far as is comfortable.
Odds are, these glasses have that built-in. And it's going to be a lot better for your eyesight than staring at a screen 50cm away from your face.
Can't possibly make the eyes in China any worse. But in all seriousness, if they cause you to be cross eyed like regular VR glasses, then it has the same negative impact on development of eyesight as normal VR/3D has.
Crossing your eyes to look at something is normal. What's weird is that it's all at one focus depth, so your individual eyes don't need to refocus on things that should be at different distances, even though you're still getting the depth signal from crossing your eyes.
"VR gets you cross eyes and impacts development" has never been substantiated, so far only the opposites were reported online. That means there are unknown side effects, but don't seem like there are serious harms.
Light brightness and spectrum are associated with retinal damage. Blue light from LED overexposure can certainly do harm and can also affect your sleep cycle and hormone balance. I would not bake my eyes with these things anytime soon.
All of this is totally untrue for a device like this.
The brightness is orders of magnitude less than daylight, there's zero retinal damage happening. They're not outputting UV so zero spectrum issues as well. There's also no "blue light overexposure", it's balanced.
Yes, cool color temperatures late at night can keep you up. Which is why 'night mode' exists, even the Oculus Quest 2 has it -- it's trivial to solve using software alone. But that's no different from staying up playing exciting video games or watching too much TV because of the cliffhanger at the end of each episode.
But nothing is "baking your eyes", sheesh. And using these for productivity during the daytime will affect your sleep and hormones precisely zero. Being outdoors on a sunny day, or even a cloudy day, is flooding your eyes with orders of magnitude more blue light than these dinky little screens.
You can't possibly say it's untrue, because long-term effects have not been studied for this device, but it is a documented fact that screens aren't good for us. Having one literally an inch from your retina cannot make it more benign.
Yes I can say it's untrue because it's simple physics.
You can literally measure the light spectrum from sunlight and from LED screens. Daytime sunlight is full of blue light, far more than what comes from a screen. It's physically impossible for LEDs to do some sort of damage that doesn't happen from regularly walking around outdoors, because it's 100% known what they emit.
And an inch from your retina makes zero difference when the brightness is much less than daytime and the focal length is far. Because with lenses, as far as your eyes are concerned, the light is coming from far away.
You don't need to study long-term effects when the physics are 100% known here.
Again, blue light close to bedtime is the only thing that's different, and that's trivial to fix.
Whether it's true or not, I know that I personally prefer a much darker screen than most other people. My monitor, for example, is currently at 0% brightness. I find bright monitors uncomfortable to look at.
Citation needed. The first wave of reports I saw suggested this, the second wave suggested the reverse (no damage, no hit to sleep duration or quality). What's the current consensus, if any?
As some others have pointed out, this is merely an old product coming back.
In the late 90's and early 2000's there were several companies that made these, with the Sony Glasstron and Olympus Eye-Trek being the two model lines coming to mind, but I believe there were others. You could do things like plug them into a cassette-based portable TV (some of the high-end Sony models had S-Video out). There was even a special PlayStation-branded model (that was oddly actually made by Olympus) that sold for around $400-$500 equivalent back around 2002. I remember there were these little booths (like little voting booths, where it was darker) outside some store in Japan where you could try the Sony headset hooked up to a PS2. I actually found a photo of it: https://www.olympus-global.com/en/news/2000b/image/nr001012f...
They never sold very well, but there are at least a dozen (maybe more) models of "personal wearable displays" out there from 10-20 years ago. Some actually have head tracking. Most are composite or SVGA, but IIRC Olympus kept making these for quite a while. Sony actually had a product line before Glasstron AND one after that never had much of a brand name, but I remember it was actually quite advanced: 720p, HDMI, and OLED... and that was back around 2010 or so.
If this worked well enough, some "laptops" could become just a slab with a keyboard & touchpad on it. Screens are thin & light already, but perhaps there would be some power savings in that?
With transparent lenses and no tracking, how are these better than the non-transparent video glasses that have been available for two decades and never caught on? The contrast ratio on any transparent display is terrible, since the pixels can't block light, only add light. The darkest black level you can get is however much tinting you apply to the glass.
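To put toy numbers on the contrast point (both figures below are made up for illustration):

```python
# Additive-display contrast: pixels can only add light on top of
# whatever ambient light the tint lets through.
ambient_nits_through = 200   # bright room, after (say) 50% tinting
panel_max_nits = 400         # what the microdisplay can add on top
contrast = (ambient_nits_through + panel_max_nits) / ambient_nits_through
print(f"{contrast:.0f}:1")   # ~3:1, vs 1000+:1 for an opaque LCD
```

Indoors at night the ambient term shrinks and contrast recovers, which is why these demos always seem to happen in dim rooms.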
Looks to me like Lenovo developed some display tech for AR and then couldn't get the tracking working and decided to ship a tech demo.
I get the idea, but why would you extend your laptop screen with this? Considering there's no VR or AR type tracking, as soon as you look at your laptop it would be obstructed by the second screen. Granting you no real benefits.
Edit: Apparently there's an accelerometer which can activate a screen when you look up vs looking down. But seems like that'd be clunky.
The original Virtual IO glasses of the late 90's were a consumer product riding the Virtuality hype that was in vogue at the time. They failed in the marketplace. However, I remember meeting the rep and what he had to say. I was absolutely smitten by the product and 'knew it was the future', even though the video was roughly VGA, so not even SD resolution. It was in 'stereo', unlike Virtuality, which relied on novelty to give the impression of 3D.
The use case apart from immersive gaming was just so you could watch TV whilst doing tasks such as washing dishes or making dinner. From what I remember, there was a means of seeing the world around you, either out the bottom or with transparency in the image; I can't quite remember. There was composite video as an input, which came in a cheaper variant of the product with no head tracking; it really was just for watching video.
This was in the late 90's, when the big new thing was DVD; however, there were two competing formats. Some products, such as the Virtual IO glasses, were waiting in the wings for this market to take off, at a point in time when zero DVDs had been sold.
Games did need to be written for it, and I think the trick was interlacing: half the VGA went to one eye and half to the other. It would be up to the developer to write the rest. At the time, Windows 95 was new, DirectX plus OpenGL as we know it on Windows was in version 0.1.0, and games were DOS.
As you can imagine, this 'watch a soap opera whilst doing the washing up' market, with an imagined vast mass of Americans ready to buy the product, never happened. Price was perceived as the barrier to adoption, hence the Virtual IO product tried to address that by being competitively priced, with the cheapo 'TV watching' version available too.
The product was compelling but the imagined mass market was never there. If you want to do stuff in the kitchen and have the TV on then you just put a TV in the kitchen. You don't think 'oh, I must get that expensive headset to watch TV whilst I am preparing dinner'.
Another factor that is overlooked is that people have traditionally been averse to being seen wearing a VR headset. There is the matter of losing peripheral vision and security; you are not going to wear anything that blinds you on the subway journey home. Second to that, and not appreciated by some, is that a lot of people put a great deal of effort into their appearance (make-up is a thing), and being out and about is about seeing as well as being seen. Times have changed, but even still, Google misjudged this aspect.
Imagine if you had a VR/AR headset that had week-long battery life, infinite resolution, zero-latency tracking, and only cost $50. Would your sister, mum, or dad be wearing them? Or would they just be checking their phone every ten minutes?
They probably would not have the use case of boring household chores, as microwave ready meals, dishwashers, and food delivery services have solved that problem for those who need it.
My dream gadget is see-through glasses/goggles that can embed a fixed screen in the corner of the FoV so that I can half-attentively watch TV or YouTube videos while cooking, running, etc.
I’d give up half my FoV for it. Audiobooks are great but sometimes I want even more stimulation while I’m doing something else.
How much I would love a world where most of the people walking down the street, or having a conversation with you, were also watching something on their glasses, and how much it would mess up the last crumbs of attention span we have.
Eh, that world is already here in the form of smartphones. I must wonder if you realize that yet.
I love the serendipity of meeting people and impromptu conversation, but that doesn't mean my entire day is spent around people. In fact, the examples I gave (cooking, exercising) are specifically things I do alone on my own.
Sure, people would use such a gadget in public, but that's already how people use smartphones + headphones so that ship has sailed. I would even argue my glasses are slightly less antisocial than the status quo since you can look up and see the world.
Streaming video will be a killer app of AR. Such a basic thing though is a huge challenge so far. There isn’t even a good AR display yet let alone everything else.
For me, watching television or movies in AR/VR has very little appeal. Usually, I'm watching with other people and turning to them to see their face or reaction to something on screen is important.
The short-term killer app of AR for consumers is probably a lot more mundane. I think things like heads-up displays on car windshields or motorcycle helmet visors are likely contenders. But that could change if they get rid of the need for goggles. AR contact lenses could be pretty wild, and less anti-social.
There are a lot more opportunities for AR in commercial applications (surgeons, technicians, etc.).
I’m referring to TikTok and Netflix etc where you can stream video content on transparent normal everyday glasses as easily as popping in your AirPods for music now, or a virtual shared display in 3D space so you retain 100% of the social sharing factor.
How do these avoid optical constraints? I thought it was not possible to design lenses in such a small space that would project an image appearing at least 2-3 ft away.
This is an old idea. I've seen Chinese companies show off amazing displays at CES 4-5 years ago. Royale was the name of the company, or the device? It was a mostly stellar experience. My eyesight is pretty bad; I wasn't able to get perfect clarity with the adjustments in one eye (I think the adjustment went to 5 and I'm a 7).
Additionally, the glasses come with multiple nose pads that’ll be helpful for extended use and a prescription frame if you need it.
Would it be feasible to build slim adjustable optics that could allow anyone to set their prescription dynamically and directly within the glasses or would that make the glasses too bulky or hideous looking?
Adjacent to this, I've been looking into what's possible with regards to glasses frames that incorporate bluetooth bone-conduction audio. If I can find some of sufficient quality I might use them the next time I need to get prescription glasses.
I feel like 1080p is not enough to fit enough information in such a tiny screen. Even smartphones are sometimes too small for certain tasks, just a tiny section of my smartphone would be even less suitable for working, watching or gaming.
The NReal is better in a lot of ways, and their support people seem active and open to suggestions. They just need to bring it to the States. Even importing, it's still fairly cheap compared to the T1.
If Apple comes out with AR glasses with cameras, no one will bat an eye. (And if they do, Apple will already have a great story for why it isn't an issue.)
They had a reputation for great tech but not great products. Glass was not compelling (or, at best, not marketed well). No one wanted it -- except gadgets nerds. So it was easy to complain about one thing or another.
If Apple does it, everyone will want one. Apple makes great products and has the best marketing of anything ever. Even if it has a camera going 24x7, people will be able to live with it.
Very useful for shutting out those chatty people who don't realize some of us have stuff to do on deadlines even if we're on our way to some vacation destination
It could actually be useful to block out some surroundings (if it can do this) in say an office - I find people walking around can be incredibly distracting, especially if someone starts pacing.
But seriously, watching something private/sensitive in public is the only use case I see for these, as they're not AR/VR capable and you already need a phone/laptop to use them, so you might as well watch your videos on that instead.