I want this sort of product. It is exactly the use case I have for VR, but what I have heard is that existing VR is no good for office / programming work, as the text is not crisp enough. There are also complaints about the comfort of wearing such a device for a long time; they are reported to be too heavy and too hot. I can't wait for the reviews to come out to see if this product can overcome those shortcomings.
The resolution of 2448x2448 is promising. This is the upper end of the headsets we should expect to see in the next 12 months or so. That said, it takes a lot to make a good headset. I am most excited for a potential headset release from Valve, which has been rumored and featured in various small leaks for the last few years (Valve "Deckard"). The YouTube channel "Sadly its Bradley" has a lot of great content about stuff like this.
Hah, never expected to see a Bradley shoutout on HN. Seconding the recommendation; he’s a diamond in a sea of mostly garbage clickbait that we call the VR YouTube niche.
And yeah, I think we might actually see an HMD this year with optics and a panel that can finally be useful for desktop usage[1]. My since-returned Quest Pro was maddeningly close: the optics are absolutely spectacular, but the display resolution just doesn’t get there.
[1] Not counting the Apple HMD here. If it does get released this year and has the rumored display resolution it’ll leapfrog the entire industry by an even larger margin than the first retina devices. $3k is a helluva price tag though.
Ooh yeah, I’ve not actually seen the rumored specs of the Apple HMD. I would love to see a leapfrog happen here. I assumed 2500x2500 was all we would see for a while.
Simula is pretty good. I tried the software with my HTC Vive (1st Gen!) and it is way more readable than any other VR text I've seen. So maybe ultra-high resolution isn't strictly required.
If you have a compatible VR headset, why not give it a whirl? It's open source on GitHub.
> For optics, our headset will feature a custom 3-lens design which, in tandem with our displays, will provide 36.2 PPD, 100 degree FoV (monocular). For comparison, this PPD is 3.27x better than the Valve Index (11.07 PPD) and 1.76x better than a Quest 2 (20.58 PPD).[1] (Our original target was ~45 PPD, but we decided to trade off more FoV for less PPD). Bottom line: Simula's headset will offer significantly sharper text quality than existing headsets (offering higher PPD than any other portable headset on the market).
> We just received updated lenses from our optics suppliers and ran some QA tests on them. TLDR: We're pretty blown away by how good things look in person. Even without software distortion correction, text and other fine details are extremely crisp. We can't wait to show this quality off in our review units.
> Though it's very hard to convey in these videos, the most important thing we can confirm is that the image quality is absolutely incredible. We've been telling people that Simula One pixel density is more than 3x better than the Valve Index, and nearly 2x better than the Quest 2.
Though, this is their word. They've yet to get to a point where they can ship to third parties for review. They've been very transparent about the development IMO, but it's quite a bit of money, so people are cautious and skeptical.
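For what it's worth, the quoted numbers are at least internally consistent. A quick sanity check of the ratios in Python, using only the PPD figures quoted above (nothing here is derived from panel specs):

    # Sanity check of the pixels-per-degree ratios quoted above.
    simula_ppd = 36.2   # Simula One (quoted)
    index_ppd  = 11.07  # Valve Index (quoted)
    quest2_ppd = 20.58  # Quest 2 (quoted)

    print(f"vs Index:   {simula_ppd / index_ppd:.2f}x")   # -> 3.27x
    print(f"vs Quest 2: {simula_ppd / quest2_ppd:.2f}x")  # -> 1.76x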
A few friends of mine are working with event-based cameras. Such cameras aren't that expensive, and can sidestep a lot of that processing (well, at least the exposure part, with almost infinite dynamic range and high framerates). They seem a good fit for SLAM (Simultaneous Localization and Mapping), which is probably the main use-case here.
I'm aware of event-based cameras and looked at them for Simula's SLAM, but last time I checked with my distributor they were fairly pricey (for a device like this, anyway). I'll ask my contact how much they cost.
The image pipeline here is for AR passthrough, which has different requirements than SLAM (although nothing's stopping you from using the images for SLAM as well).
For AR passthrough, you would need some fairly complex processing to get usable video from an event-based camera. Event-based cameras are definitely exciting for machine learning applications, though.
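To make the "fairly complex processing" point concrete: an event camera doesn't output frames at all, just a stream of per-pixel brightness-change events, so even the naive accumulation below only gives you a map of changes, not actual video. (A hypothetical sketch; the event tuple layout varies by vendor.)

    import numpy as np

    def accumulate_events(events, width, height, t_start, t_end):
        # Naively accumulate polarity events into a single "frame".
        # events: iterable of (t, x, y, polarity) tuples, polarity in {-1, +1}.
        # The result is a per-pixel sum of brightness *changes* over the
        # time window, not an intensity image. Recovering passthrough-quality
        # video needs much heavier machinery (integration/filtering or a
        # learned reconstruction), which is the complexity mentioned above.
        frame = np.zeros((height, width), dtype=np.float32)
        for t, x, y, p in events:
            if t_start <= t < t_end:
                frame[y, x] += p
        return frame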
Very nice writeup. As I was reading it, I realized what a long way we've come since I started in machine vision. We had analog cameras, and doing things like bad pixel correction and per-pixel black offset correction was much harder, if not impossible.
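Nowadays that kind of correction is a few lines on the digital data; a typical bad-pixel fix is just "replace known-defective pixels with the median of their neighborhood". An illustrative sketch (the bad-pixel mask is assumed to be built once per camera from test frames):

    import numpy as np
    from scipy.ndimage import median_filter

    def fix_bad_pixels(frame, bad_mask):
        # Replace known-bad pixels with the median of their 3x3 neighborhood.
        # bad_mask: boolean array, True where the sensor pixel is defective
        # (typically mapped out per camera using dark/flat test frames).
        fixed = frame.copy()
        fixed[bad_mask] = median_filter(frame, size=3)[bad_mask]
        return fixed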
You mean a CCD or film? With CCDs I thought this step was _essential_ for any decent quality - I had to implement a pipeline for processing RAWs from old digital backs, applying the dark frame & flat frame. Granted, processing power was much lower back then, if that is what you mean; it would have taken a while with 22MP files...
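For anyone who hasn't done it: the dark & flat frame step is simple per-pixel arithmetic - the expensive part was only ever the data volume on that era's hardware. A minimal sketch (array names are illustrative; inputs assumed to be float arrays of the same shape):

    import numpy as np

    def calibrate(raw, dark, flat):
        # Classic dark-frame subtraction + flat-field correction.
        # dark: frame captured with no light (per-pixel black offset)
        # flat: frame of a uniformly lit target (per-pixel gain variation)
        gain = flat - dark
        # Scale by the mean gain so the output stays in the input's range,
        # and clamp the denominator to avoid dividing by near-zero pixels.
        return (raw - dark) * (gain.mean() / np.maximum(gain, 1e-6))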
Good question: I meant older CCD cameras with analog (RS-170 or similar) output, digitized by a frame grabber in the PC. That was what we used for industrial machine vision in the 90s and early 2000s.
They are probably referring to cameras with CCD sensors but analog output, akin to older CCTV cameras. So any corrections on the digital raw data had to be implemented in the camera by the manufacturer, as the camera user only had access to analog data.
For those interested in this, I recommend also checking out Immersed VR. It's not standalone (you still need a PC somewhere) but it works great. I can't wait until we have devices with better optics!
I can't say much because NDA, but it's not the most difficult protocol to reverse engineer.
Also, if we decide to make our own RX IP for it, there's a good chance I can open source it. Will have to have a lawyer look over the licensing/NDA terms to make sure we're not messing anything up in that case.
Its (presumed) competitor, MIPI-CSI, is not open source either.
SLVS-EC looks easier to interface with an FPGA at least, presumably the voltage levels are closer to standard, unlike MIPI D-PHY's exotic dual level switching that either requires an external IC, a passive resistor network of borderline compliance, or an FPGA (e.g. Lattice CrossLink) with the correct special inputs.
SLVS-EC has some very minor issues when interfacing (DC offset compliance), which require a hack to detect when the link is up. And yeah, MIPI D-PHY is a royal pain unless you have native support (Xilinx, Lattice, Intel Agilex 5). C-PHY even more so.