Real time lightsaber tracking and rendering on the Kinect (kinecthacks.net)
86 points by RK on Nov 22, 2010 | 23 comments



Wouldn't it be better if he were only holding a small 6" stick, the Kinect tracked that, and the software digitally added the extended lightsaber? Maybe this was just his step 1.

It would minimize the destruction caused in your home from swinging around a 4' stick.
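Roughly, all the software would need is to extrapolate along the stick's tracked axis. A minimal sketch in Python/NumPy, assuming the tracker already reports two 2D endpoints (the names here are hypothetical, not from the demo):

    import numpy as np

    def extend_blade(hilt_xy, tip_xy, blade_len_px):
        """Extrapolate a short tracked stick into a full-length virtual blade.

        hilt_xy, tip_xy: 2D pixel coordinates of the stick's tracked endpoints.
        blade_len_px:    desired on-screen blade length in pixels.
        Returns the virtual blade tip, extended along the stick's direction.
        """
        hilt = np.asarray(hilt_xy, dtype=float)
        tip = np.asarray(tip_xy, dtype=float)
        direction = tip - hilt
        norm = np.linalg.norm(direction)
        if norm < 1e-6:  # endpoints coincide; tracking glitch, don't extend
            return tip
        return hilt + direction / norm * blade_len_px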


Maybe you could use the Wiimote in combination with the physical tracking to get more precise telemetry plus precise physical location and AR.


That was my thought a week ago. The Kinect is limited in that it can only see the front of your body. Some movements obscure your body, and it basically just freaks out.


You just described PlayStation Move.


There have already been PlayStation Move demos that do exactly this, with superb accuracy and great response times.

(Disclaimer: I work for SCEE, but this is my own opinion)


The full-length stick works better for duels. Filming cool duels is pretty much what this thing screams to be used for. Of course, to minimize damage, you could probably use something lighter than a solid wooden stick.


Full-length duels have already been done, though. I don't think that is the killer application of this tech.


...if by "real time" you mean delayed by 200ms.


In graphics terms, when someone says 'realtime' they're usually referring to the performance of the algorithm, not whether or not there's a delay. Lots of video games have input delays of 100ms or more.

The framerate is pretty bad, though. I wonder why - I was under the impression the Kinect captures at 30fps.


It looks like he had a CPU monitor on his panel, which also looked pegged. It could just be that he needs to spend some time optimizing, because the Kinect can certainly record faster than that.


Couldn't you do this with any webcam and CV? I don't see how the depth component makes it any easier.


As far as I understand, depth values make the blob detection easier. Normally blob detection has to rely on the brightness differences caused by 3D objects; depth values make it much more accurate.

http://en.wikipedia.org/wiki/Blob_detection
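A minimal sketch of the idea in Python with OpenCV, assuming the Kinect depth frame is already available as a 16-bit array in millimetres (the thresholds and names are illustrative, not the author's code):

    import cv2
    import numpy as np

    def nearest_blob(depth_frame, near_mm=500, far_mm=1200):
        """Segment the nearest object (e.g. a hand-held stick) from a depth frame.

        depth_frame: 2D uint16 array of per-pixel distances in millimetres.
        Thresholding on depth sidesteps the lighting and contrast problems
        that brightness-based blob detection has to fight.
        """
        # Keep only pixels within the expected distance band of the stick.
        mask = cv2.inRange(depth_frame, near_mm, far_mm)
        # Take the largest connected component as the stick blob.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)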

The author's YouTube channel has his previous demos: http://www.youtube.com/user/yankeyan


> He says the next step would be to add a flying droid that shoots lasers, a blindfold, etc.

Great, once they have a blindfold the rendering delay will not even be noticeable anymore. Or the rest of the rendering, really.


The main problem (with real time) here is that you can't move the stick too fast, or the Kinect will lose tracking.
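There's no indication the demo does this, but a common workaround is to coast on a constant-velocity prediction for a few frames when detection drops out. A hypothetical sketch:

    import numpy as np

    class CoastingTracker:
        """Bridge short tracking dropouts with a constant-velocity guess.

        A fast swing can blur or occlude the stick for a frame or two;
        rather than letting the blade vanish, coast along the last
        measured velocity for up to max_coast frames.
        """
        def __init__(self, max_coast=5):
            self.pos = None        # last known tip position
            self.vel = None        # per-frame velocity estimate
            self.coasted = 0
            self.max_coast = max_coast

        def update(self, measurement):
            """measurement: (x, y) tip position, or None if detection failed."""
            if measurement is not None:
                m = np.asarray(measurement, dtype=float)
                if self.pos is not None:
                    self.vel = m - self.pos
                self.pos = m
                self.coasted = 0
            elif (self.pos is not None and self.vel is not None
                    and self.coasted < self.max_coast):
                self.pos = self.pos + self.vel  # predict through the dropout
                self.coasted += 1
            else:
                self.pos = None                 # give up until re-detected
            return self.pos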


Disappointingly, there is still a bit of lag in there, though I'm sure that can be improved in time. It would be interesting to see him swing the stick around faster, to see if the rendering can keep up and still track the stick when it is a blur.

Looks like fun!


I heard that the Kinect only lags by a few frames. Perhaps the main lag is rendering lag?


On Ubuntu. Great to see that hacking the Kinect is already enabling other hacks.


Neat, but not really exciting from a technological standpoint. Then again, if I were 12 this would be pretty sweet...


This is a potential killer application. If they manage to make the tracking fluid and accurate, everybody can finally duel other players in epic lightsaber fights. Anyone who liked Star Wars has probably been waiting for this. Let's see who gets it right first: PS3 Move, Kinect, or even the Wii?


I think I have a different definition of 'real-time'.

Pretty cool though for sure.


So how did he do that? (High level explanation is fine)


He says he's using the OpenCV library. I assume the tracking and rendering functionality is already there, and he just plugged the Kinect video and IR depth data into it. http://en.wikipedia.org/wiki/Opencv
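For the rendering half, something like this would work with plain OpenCV drawing and blending calls. A rough sketch, with made-up endpoints and colors (not the author's actual code):

    import cv2
    import numpy as np

    def draw_saber(frame, hilt, tip, color=(255, 64, 64)):
        """Composite a glowing blade onto a BGR camera frame.

        Draw the blade on a black overlay, blur it for the glow halo,
        then add it to the frame so bright pixels saturate like a
        light source.
        """
        overlay = np.zeros_like(frame)
        cv2.line(overlay, tuple(map(int, hilt)), tuple(map(int, tip)),
                 color, thickness=8, lineType=cv2.LINE_AA)
        glow = cv2.GaussianBlur(overlay, (31, 31), 0)
        core = cv2.GaussianBlur(overlay, (7, 7), 0)
        out = cv2.add(frame, glow)  # soft outer halo
        out = cv2.add(out, core)    # brighter core along the blade
        return out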


Hah! I've gotta get me a Kinect for tinkering purposes...



