Augmented Reality 3D Video on iPad with Kinect (youtube.com)
60 points by iqram on July 2, 2011 | 15 comments



So damn clever... Releasing an SDK for the Kinect[1] was one of the smartest things Microsoft could have done, and I think it has gone a long way toward the Kinect becoming the fastest-selling piece of consumer hardware in history[2] (did anyone know that? I sure didn't, and I barely believed it when I read it).

This reminds me a lot of the head-tracking 3D demo Johnny Lee did with the Wii soon after it was released[3].

Part of me wants to stop using a computer and take up beet farming, because my own work's cool factor falls so far short of what these guys are doing.

Anyone on HN actively toying with a Kinect and want to share some video?

[1] http://research.microsoft.com/en-us/um/redmond/projects/kine...

[2] http://www.vgchartz.com/article/83375/kinect-is-the-fastest-...

[3] http://www.youtube.com/watch?v=Jd3-eiid-Uw


I have no video, but I was able to get a WebGL-powered game world running, controlled through a Kinect using DepthJS. It took just over a day of playing around, and it was pretty cool when it worked.

The source code is available on github at https://github.com/thoughtworks/kinect-spike

Edit: My co-workers were looking at me with concerned looks that day. I would stand up and wave my arms for 30 seconds and then sit back down at random times throughout the day. :)


Very impressive, Derek. How easy was it to work with the Kinect data via the API? Is the information coming out of the sensors pretty straightforward, or does it just come in as a video feed with additional (positional?) metadata?


DepthJS works through three technologies: libfreenect[1], OpenCV[2], and a simple Tornado (push) server. libfreenect grabs all of the data from the Kinect device, and OpenCV acts as the 'interpreter'. In the browser, that translates to simply receiving events such as 'move' (there's a rough sketch of the plumbing after the links below).

[1] http://openkinect.org/wiki/Main_Page

[2] http://opencv.willowgarage.com/wiki/
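
To make that concrete, here's a minimal sketch of the server half of such a pipeline. This is not DepthJS's actual code: the endpoint, event format, and depth threshold are all made up for illustration, and a crude threshold stands in for OpenCV's gesture logic. It uses the freenect Python bindings from the OpenKinect project plus Tornado:

    # Sketch of a DepthJS-style pipeline (illustrative, not the real DepthJS code).
    import json
    import freenect                 # Python bindings from the OpenKinect project
    import numpy as np
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    clients = set()

    class EventSocket(tornado.websocket.WebSocketHandler):
        def open(self):
            clients.add(self)

        def on_close(self):
            clients.discard(self)

    def poll_kinect():
        # sync_get_depth() returns a (depth_array, timestamp) tuple of raw 11-bit depths.
        depth, _ = freenect.sync_get_depth()
        near = depth < 600          # arbitrary raw-depth threshold for "something close"
        if near.any():
            ys, xs = np.nonzero(near)
            event = {"type": "move",  # hypothetical event format, for illustration
                     "x": int(xs.mean()), "y": int(ys.mean())}
            for client in clients:
                client.write_message(json.dumps(event))

    app = tornado.web.Application([(r"/events", EventSocket)])
    app.listen(8888)
    # Poll the Kinect ~30 times per second and push events as they occur.
    tornado.ioloop.PeriodicCallback(poll_kinect, 33).start()
    tornado.ioloop.IOLoop.current().start()

The browser side would then just open a WebSocket to /events and map the incoming 'move' events onto whatever it controls, e.g. the WebGL camera.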


What I find cool is that this sort of application (3D video recording) is actually possible with libfreenect. I bet the most difficult part was defining a storage format and integrating it into an app built with the augmented reality SDK.
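
For instance, a naive recorder (my own sketch, not necessarily how the video's author did it) could just dump paired RGB and depth frames with the freenect Python bindings and worry about compression and playback later:

    # Naive 3D-video recorder sketch: dump paired RGB + depth frames to .npz files.
    # This is an illustrative storage format, not the one used in the video.
    import time
    import freenect
    import numpy as np

    def record(seconds=10, fps=15, prefix="capture"):
        frame_interval = 1.0 / fps
        for i in range(int(seconds * fps)):
            start = time.time()
            rgb, _ = freenect.sync_get_video()    # 640x480x3 uint8 color image
            depth, _ = freenect.sync_get_depth()  # 640x480 raw 11-bit depth map
            # One compressed archive per frame keeps the format trivially seekable.
            np.savez_compressed("%s_%05d.npz" % (prefix, i), rgb=rgb, depth=depth)
            # Sleep off whatever is left of this frame's time slot.
            time.sleep(max(0.0, frame_interval - (time.time() - start)))

    if __name__ == "__main__":
        record()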

[Added in edit:]

> Anyone on HN actively toying with a Kinect and want to share some video?

Something I'm doing with libfreenect: http://nitrogen.posterous.com/gesture-like-light-control-wit...


Looks like you and I have toyed with the same idea (Kinect-controlled lighting). I was shocked by how easy it was to put together and how effective it was. Probably not an easy thing to commercialize, though ("yeah, just embed these sensors in your walls and replace all of your light switches...").
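
For anyone curious just how little code the toy version takes: the core loop can be as dumb as "is something close to the sensor in this corner of the frame?". Here's a sketch under my own assumptions; toggle_lamp() is a hypothetical stand-in for whatever actually drives your lights (X10, a serial relay, an HTTP call to a bridge, etc.), and the zone and threshold are tuned by experiment:

    # Sketch of Kinect-as-light-switch: toggle a lamp when a hand enters a hot zone.
    import time
    import freenect
    import numpy as np

    def toggle_lamp():
        print("(pretend the lamp just toggled)")  # replace with real hardware control

    lamp_on = False
    last_toggle = 0.0

    while True:
        depth, _ = freenect.sync_get_depth()
        # Hot zone: top-right quarter of the 480x640 frame; "close" is an
        # arbitrary raw-depth threshold.
        zone = depth[0:240, 320:640]
        hand_present = (zone < 500).sum() > 2000  # enough near pixels = a hand
        # Debounce so a single wave doesn't flicker the light.
        if hand_present and time.time() - last_toggle > 1.0:
            lamp_on = not lamp_on
            toggle_lamp()
            last_toggle = time.time()
        time.sleep(0.05)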


I put together a very basic Kinect-controlled lighting app for fun using ManagedNite:

http://www.youtube.com/watch?v=CF4mzdYo3VY


Here's the launch video for the String SDK. It looks pretty impressive, much smoother than many other implementations I've seen:

http://www.poweredbystring.com/official-launch

There's a great demo about 1:30 into the launch video.


It's hard to tell how it performs in real life (as opposed to a launch video), but that said, the demos of it in action were impressive.

I wonder if this is the form that the Holodeck-style 3D technology we've all been waiting for since the '70s will finally take.

AR of your living room, mixed with a glasses-free 3D TV[1] and some motion controls. Very exciting stuff.

[1] http://3dradar.techradar.com/3d-tech/52-inch-glasses-free-3d...


Isn't the new Nintendo tablet doing something like this?

Also, why wasn't the piece of paper on the table showing up, only the books?


The piece of paper tells the app where to position the animation, and the app removes it from the video on the iPad. If you look closely, you'll see the app projects some kind of wooden surface where the paper was; this could be pattern recognition (sampling the surrounding texture and filling the paper region in with it) or just a custom job.
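
If it is pattern recognition, that "fill in from the surroundings" trick is essentially image inpainting. A hedged sketch with OpenCV (the rectangle is a hypothetical, hand-picked marker location; the real app may just composite a pre-made wood texture instead):

    # Sketch: erase a known marker region by inpainting from surrounding pixels.
    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")  # one video frame containing the marker

    # Mask of the region to erase (hypothetical, hand-picked rectangle).
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.rectangle(mask, (200, 150), (440, 330), 255, -1)  # -1 = filled

    # INPAINT_TELEA propagates the surrounding texture into the masked region.
    clean = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite("frame_clean.png", clean)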


I believe that in augmented reality programs you need a target, and the piece of paper is that target. It lets the AR program know where to place the video, and also how to rotate the view as he walks around to the side.
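
Concretely, once the marker's four corners are located in the camera image, a single solvePnP call recovers the camera pose relative to the paper. A minimal OpenCV sketch, with the corner detection and camera calibration assumed given (every number below is hypothetical):

    # Sketch: recover camera pose from a known planar marker with OpenCV.
    import cv2
    import numpy as np

    # The marker is a 20 cm square lying flat on the table (the z = 0 plane).
    object_points = np.array([[0.0, 0.0, 0.0],
                              [0.2, 0.0, 0.0],
                              [0.2, 0.2, 0.0],
                              [0.0, 0.2, 0.0]], dtype=np.float64)

    # Hypothetical detected corner positions in the image, in pixels.
    marker_corners_px = np.array([[310.0, 220.0],
                                  [420.0, 225.0],
                                  [415.0, 330.0],
                                  [305.0, 325.0]], dtype=np.float64)

    # Idealized pinhole intrinsics; a real app would use calibrated values.
    camera_matrix = np.array([[525.0,   0.0, 320.0],
                              [  0.0, 525.0, 240.0],
                              [  0.0,   0.0,   1.0]])
    dist_coeffs = np.zeros(4)

    ok, rvec, tvec = cv2.solvePnP(object_points, marker_corners_px,
                                  camera_matrix, dist_coeffs)
    # rvec/tvec place the virtual camera: render the 3D video anchored at the
    # marker, and it stays put as the real camera walks around it.
    print(ok, rvec.ravel(), tvec.ravel())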


Yeah, it looks like the "augmented reality" part was faked. The angle of the video never quite matched up with where he was holding it, either. In any case, it's still a nice proof of concept.


I found the Nintendo E3 Wii U demo.

It gets interesting around the 2:40 mark (the whole thing is pretty impressive)...

http://www.youtube.com/watch?v=DowHrWluHxg


Very, very cool. I want to use this with my own app!



