Nothing that's fully satisfactory; everything falls short in one way or another. The closest is probably Luxonis DepthAI?
The main problem with off-the-shelf solutions is that they add another set of cameras, and afaik nothing exists that allows custom cameras.
We're gonna need an FPGA anyway due to the large amount of IO (2 cameras for AR, 2 for eye tracking, IMU, whatever other sensors we need, plus potentially mmWave radar if we decide to go that way), so it's tempting to put the processing on the FPGA as well.
Interesting - I guess I assumed the hurdle is both hardware and software. Oculus's hand tracking was a huge lift. Is there any commercially available software stack being worked on that is at least hardware generic? Or is everyone forced to build from scratch?
There are a lot of research papers that I found, but nothing hardware-generic, unfortunately.
Hand tracking especially is a difficult beast; we would like to just use the new Ultraleap module for that, but they don't support Linux yet.
Eye tracking is relatively simple because it's a closed/controlled environment. Just some IR LEDs, an IR camera, and some edge detection and math.
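For a sense of what that boils down to, here's a minimal dark-pupil sketch (assuming OpenCV on the host; the thresholds, sizes, and the findPupil name are all placeholders, and a real pipeline would also detect the LED glints to get a pupil-glint vector before gaze calibration):

```cpp
// Minimal sketch of dark-pupil detection on an IR eye-camera frame.
// Thresholds and minimum areas are placeholders to tune per setup.
#include <opencv2/opencv.hpp>
#include <optional>
#include <vector>

// Returns the pupil center in pixel coordinates, if one is found.
std::optional<cv::Point2f> findPupil(const cv::Mat& irFrame) {
    cv::Mat gray, blurred, dark;
    if (irFrame.channels() > 1)
        cv::cvtColor(irFrame, gray, cv::COLOR_BGR2GRAY);
    else
        gray = irFrame;

    // Smooth sensor noise, then keep only the darkest region: under IR
    // illumination the pupil shows up as a dark blob.
    cv::GaussianBlur(gray, blurred, cv::Size(7, 7), 0);
    cv::threshold(blurred, dark, 40 /* tune per camera */, 255,
                  cv::THRESH_BINARY_INV);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(dark, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Pick the largest plausible contour and fit an ellipse to it.
    double bestArea = 0.0;
    std::optional<cv::Point2f> center;
    for (const auto& c : contours) {
        double area = cv::contourArea(c);
        if (area < 100.0 || c.size() < 5u)  // too small, or too few points for fitEllipse
            continue;
        cv::RotatedRect ellipse = cv::fitEllipse(c);
        if (area > bestArea) {
            bestArea = area;
            center = ellipse.center;
        }
    }
    return center;
}
```

Mapping that pupil (or pupil-glint) offset to an actual gaze direction is then a per-user calibration step, which is where the "math" part comes in.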
SLAM (positional tracking) has a lot of different approaches. There's open-source software, but it generally runs on a normal computer, and that's not particularly efficient (especially with our GPU already loaded). Some research papers use an FPGA, but the code is rarely available, so they only give you a starting point.
You could probably crib the software from DepthAI or similar? We could implement the AI coprocessor they're using and adapt the code. I haven't looked closely enough yet to see whether that's a good use of resources.
I recommend QP if you are going to do FPGA processing using a soft-core or hard-core processor. It's an event-based state machine framework that handles IO really well. A hard-core processor would be more performant and use fewer LUTs, but a soft core gives you more flexibility as far as sourcing FPGAs.
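To show what I mean by event-based, here's a rough sketch of the active-object pattern that QP formalizes (not QP's actual API, just the shape of it): each peripheral driver is a little state machine with its own event queue, so camera, IMU, and radar IO never block each other.

```cpp
// Sketch of the active-object / event-queue pattern. Names and signals here
// are illustrative only; QP provides its own (richer) machinery for this.
#include <cstdint>
#include <queue>

enum class Sig : uint8_t { FrameReady, ImuSample, Timeout };

struct Event {
    Sig sig;
    uint32_t data;  // e.g. DMA buffer index or raw sample value
};

class ActiveObject {
public:
    void post(const Event& e) { queue_.push(e); }  // called from ISRs / other objects
    void runOne() {                                // called by the scheduler loop
        if (queue_.empty()) return;
        Event e = queue_.front();
        queue_.pop();
        dispatch(e);
    }
protected:
    virtual void dispatch(const Event& e) = 0;
private:
    std::queue<Event> queue_;
};

class EyeTracker : public ActiveObject {
    enum class State { Idle, Processing } state_ = State::Idle;
protected:
    void dispatch(const Event& e) override {
        switch (state_) {
        case State::Idle:
            if (e.sig == Sig::FrameReady) {
                // kick off pupil detection on the buffer indexed by e.data
                state_ = State::Processing;
            }
            break;
        case State::Processing:
            if (e.sig == Sig::Timeout) {
                // processing deadline hit; drop the frame and go back to idle
                state_ = State::Idle;
            }
            break;
        }
    }
};
```

On a real soft core you'd replace std::queue with a fixed-size ring buffer fed from interrupts; QP's QActive/QEvt machinery gives you that, plus hierarchical states, time events, and priority-based scheduling.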