Hacker News

So would this be extended to physical cabinets with flipper buttons?



Absolutely, that's the main goal. I built my cabinet over 10 years ago, so testing is assured. ;)


For a physical build, just curious whether you've thought about integrating something like OpenTrack to track the player's head/eye position and adjust the camera accordingly, to make the table look even more real when it's in a physical cabinet?
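(For anyone curious how head-coupled perspective works: the trick is an asymmetric "off-axis" projection frustum computed from the tracked eye position relative to the screen, so the virtual table appears fixed behind the glass. A minimal sketch, with illustrative names not tied to OpenTrack or any particular SDK:)

```python
def off_axis_frustum(head, screen_w, screen_h, near):
    """head = (x, y, z): tracked eye position relative to the screen
    center, in meters, with z > 0 the distance from the screen plane.
    Returns (left, right, bottom, top) at the near plane, suitable for
    a glFrustum-style asymmetric projection."""
    hx, hy, hz = head
    scale = near / hz  # project the physical screen edges onto the near plane
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    return left, right, bottom, top
```

With the head centered, the frustum is symmetric; as the player leans sideways, the frustum skews and the rendered table parallaxes like a real one would.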


Yes, I also have a Tilt Five kit, which is awesome.

We're still doing lots of work on the core parts though, so these things will come later.


Support for something like this already exists for Visual Pinball thanks to BAM. Use a Kinect V2 for best results.

https://www.ravarcade.pl/


People have done this with an old Kinect. It works okay, but most people say it's not good enough to use regularly.


Yeah, I used to work on the Kinect which is why my mind went there - but the v1 certainly wouldn’t be up for this.

Fun fact: the depth sensor's resolution was originally 640x480, but it was nerfed to 320x240 in firmware. Why?

The makers of Rock Band wanted to make a Kinect version. But with a Rock Band mic, bass, two guitars, and a keyboard plus the Kinect, the Xbox 360's USB bus couldn't handle the bandwidth. So the Kinect got nerfed.

The company behind Rock Band either shut down or went bankrupt before the Kinect went on sale. At that point, way too much of the tooling (not to mention pose estimation modeling) around the Kinect had been built with the 320x240 resolution constraint so it wasn’t feasible to “unlock” the full res.


It's so sad that Microsoft fucked over the Kinect at every conceivable opportunity. I used it a lot for 'creative coding' openFrameworks and Processing projects (many in that community did). Thank god they bought the original from PrimeSense and didn't have the ability, or didn't exercise the ability, to break their multi-platform SDK.

When the Kinect 2 SDK was Windows-only, it was a huge turd in the punchbowl and a clear sign that Microsoft was not serious about making it a real tool to do real work with. I did do one project with the Kinect 2 and learned just enough of the SDK to write a shim in C# that ran the camera and piped the data out over the network to a box that was actually doing the rendering.
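(The shim pattern described here is simple to reproduce: run the capture SDK on the one machine that can talk to the camera, and stream length-prefixed frames over TCP to the renderer. A minimal Python sketch of the idea; `read_depth_frame` is a hypothetical stand-in for whatever the camera SDK provides, everything else is stdlib:)

```python
import socket
import struct

def stream_frames(host, port, read_depth_frame):
    """Capture side: send each frame as a 4-byte big-endian length
    prefix followed by the raw frame bytes. read_depth_frame() should
    return one frame as bytes, or None when capture ends."""
    with socket.create_connection((host, port)) as sock:
        while True:
            frame = read_depth_frame()
            if frame is None:
                break
            # Length-prefix each frame so the receiver can re-frame
            # the continuous TCP byte stream into discrete frames.
            sock.sendall(struct.pack("!I", len(frame)) + frame)

def recv_frame(sock):
    """Render side: read one length-prefixed frame off the socket."""
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack("!I", header)
    return sock.recv(length, socket.MSG_WAITALL)
```

The length prefix matters because TCP is a byte stream, not a message stream; without it the renderer can't tell where one frame ends and the next begins.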

The Kinect 2 was also excessively picky about its USB 3.0 port; I remember going through about half a dozen USB 3.0 cards until I found one that worked.



