
Where is the second camera? Gleaning 3-dimensional data from a single 2-dimensional image is hard and requires computationally intensive algorithms. Two cameras make it easier, but it's still hard.

I would love nothing more than for them to publish this and/or open it up to community development.
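For what it's worth, the two-camera case reduces to simple triangulation once you've matched a feature between views. A minimal sketch (my own illustration, nothing Natal-specific; all the numbers are made up):

```python
# Stereo triangulation: depth Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two cameras, and d the disparity
# (pixel shift of the same feature between the two images).

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in metres for one matched feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted 50 px between cameras 6 cm apart, focal length 600 px,
# sits at roughly 600 * 0.06 / 50 = 0.72 m from the rig:
print(depth_from_disparity(600.0, 0.06, 50.0))  # ≈ 0.72
```

The hard part, of course, is the feature matching, not this division.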




3DV, the company whose 3D camera technology MS bought a few months ago, has published some papers on their technology. I haven't read them, so I cannot vouch for their depth (no pun intended).

http://www.3dvsystems.com/technology/tech.html#1


You're correct, but two cameras aren't mandatory for decent depth perception. It would, however, require a new sensor type.

Humans are capable of sensing depth with only one eye, due to the shape of the eye. Cameras work by receiving a 3D image on a 2D sensor; our eyes receive a 3D image on a 3D sensor. Our focal point is akin to a 2D sensor, but our peripheral vision wraps around a huge portion of the eye, which gives us the ability to triangulate.

While I doubt they'll have come up with a 3D sensor, I wanted to point out that depth perception with a single sensor is currently difficult for a computer. It would be entirely possible to design a more complicated camera to compensate for the algorithms.

From the video, I'd guess they're using two similar-resolution cameras. Or they might be cheaping out: a 12MP camera would give you amazing picture resolution, but for 3D recognition in a project like this you could probably get away with something under 1MP as the companion. This is essentially how our eyes work: we have incredible resolution at the focal point but rather poor peripheral vision, and the brain superimposes the images scanned (quite literally, the eye scans the environment using the focal point) and fakes everything.

I'm sure a small, cheap companion sensor would provide 3D extremely well for movement recognition.
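To illustrate (a toy of my own, assuming nothing about Natal's internals): matching a textured patch between two views by sum-of-squared-differences is enough to recover disparity, even at low resolution:

```python
# Toy 1-D block matcher: find the horizontal shift of a patch from the
# "main" view inside the "companion" view by minimizing the sum of
# squared differences (SSD). Real systems use 2-D patches + calibration.

def best_disparity(left, right, patch_start, patch_len, max_shift):
    """Return the shift d that best aligns a patch of `left` with `right`."""
    patch = left[patch_start:patch_start + patch_len]
    best_err, best_d = float("inf"), 0
    for d in range(max_shift + 1):
        start = patch_start - d
        if start < 0:
            break
        window = right[start:start + patch_len]
        err = sum((a - b) ** 2 for a, b in zip(patch, window))
        if err < best_err:
            best_err, best_d = err, d
    return best_d

# An intensity edge at index 10 in the left view sits at index 7 in the
# right view, so a patch spanning the edge should match at disparity 3:
left = [0] * 10 + [9] * 10
right = [0] * 7 + [9] * 13
print(best_disparity(left, right, patch_start=8, patch_len=5, max_shift=6))  # 3
```

Note it only works where there's texture; a uniform patch matches everywhere, which is why cheap sensors plus a textured scene (or projected pattern) can still do well.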


I was at the event yesterday and they showed a bar with two cameras in the middle, which you can see in the video.

The nice thing about the Xbox 360 and PS3 for these kinds of input devices is that they're multicore, so the budgeting is pretty straightforward if they simply consume one core at all times.

I was cringing when they had the live avatar using the motion capture, though. It was gutsy and impressive, but seriously, guys, probabilistic kinematic constraints. People don't usually have their wrist twisted all the way around when they're standing in place, even if it's physically possible.
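A sketch of what that could look like (my own toy, certainly not what Natal actually ships): fuse the raw joint-angle estimate with a Gaussian prior over plausible poses, so a noisy "wrist twisted 180°" reading gets pulled back toward the likely range:

```python
# Illustrative only: precision-weighted (MAP) fusion of a noisy joint-angle
# measurement with a Gaussian prior over plausible poses. Numbers invented.

def constrain_angle(measured_deg, measured_sd, prior_mean_deg, prior_sd):
    """Blend measurement and prior, each weighted by inverse variance."""
    w_meas = 1.0 / measured_sd ** 2
    w_prior = 1.0 / prior_sd ** 2
    return (measured_deg * w_meas + prior_mean_deg * w_prior) / (w_meas + w_prior)

# A wild 180-degree wrist reading with sd 60 deg, against a prior centered
# at 0 deg with sd 30 deg, gets pulled back to about 36 degrees:
print(constrain_angle(180.0, 60.0, 0.0, 30.0))  # ≈ 36.0
```

A real tracker would put the prior over whole-body pose rather than per joint, but even this per-joint version would stop the avatar's wrist from flipping all the way around on one noisy frame.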

I'm very pleased with Microsoft's vision, though; I think it's a great step in the right direction for the industry. But I do wonder whether there's a fundamental difficulty in an input device that seems inherently fuzzier than the Wii controller.


There are some low-priced 3D cameras. Possibly it works more like a laser depth scanner.



