You know what? A year ago, I would have shared the exact same scepticism.
Now, I have an infant daughter who likes nothing more than being carried around, which has exponentially increased the number of activities I do with one hand, standing up.
Being able to have a soundless controller that you don't need to pick up or fiddle around with in a way that disturbs a tiny human sleeping on your other arm sounds like absolute perfection.
I just went to the Leap dev meetup in NYC (I also dev for Leap). My feeling is that it's a great UI instrument for UI-focused applications. What do I mean by that? Well, think of museums, aquariums, or jobs such as architecture, perhaps AutoCAD: jobs/tasks where it would be way more fun and efficient to do the action with your hands.
These are things I've come up against in writing apps for it.
If I can lean my elbow on something it's OK. If I have to keep my hands up for very long, not so much fun.
I've been trying to restrict the broader movements to less-common actions (mode changes, for example) and use simpler actions you can do with your arm on the table for everything else.
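The split above can be sketched roughly like this; the gesture names and actions here are made up for illustration, not anything the Leap SDK actually recognizes:

```python
# Hypothetical sketch: route low-effort (arm-on-table) gestures to frequent
# actions, and tiring whole-arm gestures to rare ones like mode changes.

FREQUENT = {            # small motions, elbow resting on the desk
    "finger_tap": "select",
    "small_swipe": "scroll",
}
RARE = {                # broad arm movements -- fatiguing, so used sparingly
    "big_circle": "switch_mode",
    "two_hand_spread": "reset_view",
}

def dispatch(gesture):
    """Return the action bound to a recognized gesture, or None."""
    return FREQUENT.get(gesture) or RARE.get(gesture)
```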
Standing desks would be a nice alternative too. Avoid the chair and have the thing facing upwards instead of forward. You'd have to work out how to keep your body out of the picture (no pun... sorry, I can't lie). Then you're really avoiding compression of any sort of nerves, except maybe in your feet.
Because you'll like it so much that you won't want to boot to an OS you like less to use some stuff?
I guess that makes sense, it's also why I don't go to the amazing restaurant nearby but go to the shitty one. Only the shitty restaurant has tacos, and sometimes I want tacos.
Also, specifically Ubuntu: it depends on a higher version of libc6 than is available in everything up to and including unstable in the Debian repositories; it's only available in experimental.
It usually relies on the system's libc too, which means you are REALLY limited to bleeding-edge Ubuntu for these normally. OpenNI has been horrible about this.
It really worries me, especially as it's not just differently structured distros like Arch that are out in the cold, but Debian, which Ubuntu is derived from.
Does anyone know why the Leap preorder page specifies a Phenom II or an i3/i5/i7 processor in the fine-print hardware requirements? Is it for a particular chip feature? Or is CPU overhead going to be a big issue?
Does anyone have any real-world experience using the Leap? It looks cool as hell but I'm hearing rumors that it doesn't work well in the presence of halogen / incandescent bulbs or direct sunlight. If I have to carefully control the lighting to be able to use it, that's a dealbreaker for me.
Yes, I have a dev kit and have played around with it a bunch. I'm very impressed with the low latency and high framerate (~110 fps via USB 2, rumored to go higher for USB 3.0). Latency is unquantified but barely perceptible even when moving quite quickly.
The potential performance issue is with dropouts and interference. It's pretty easy to have detected fingers drop in and out, mostly due to fingers occluding each other. With a level of dynamics modeling/filtering built on top of the Leap SDK, it may be possible to minimize the impact of dropouts.
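As a rough sketch of what that filtering layer might look like (this is not the Leap SDK's API; frames here are just hypothetical `(x, y, z)` tuples, with `None` on a dropout): smooth each fingertip and coast along the last estimated velocity for a few frames before declaring the finger lost.

```python
# Minimal sketch, assuming per-frame fingertip positions as (x, y, z)
# tuples or None on a dropout. Smooths measurements and extrapolates
# through short dropouts so brief occlusions don't cause jitter.

class FingerFilter:
    def __init__(self, alpha=0.5, max_coast=5):
        self.alpha = alpha          # smoothing weight for new measurements
        self.max_coast = max_coast  # frames to extrapolate before giving up
        self.pos = None             # last estimated position
        self.vel = (0.0, 0.0, 0.0)  # per-frame velocity estimate
        self.missed = 0             # consecutive dropped frames

    def update(self, measurement):
        if measurement is not None:
            if self.pos is None:
                self.pos = measurement
            else:
                # exponential smoothing, plus a crude velocity estimate
                new = tuple(self.alpha * m + (1 - self.alpha) * p
                            for m, p in zip(measurement, self.pos))
                self.vel = tuple(n - p for n, p in zip(new, self.pos))
                self.pos = new
            self.missed = 0
        elif self.pos is not None and self.missed < self.max_coast:
            # dropout: coast along the last velocity for a few frames
            self.pos = tuple(p + v for p, v in zip(self.pos, self.vel))
            self.missed += 1
        else:
            self.pos = None  # dropout too long; report the finger as lost
        return self.pos
```

A Kalman filter would do the same job more rigorously; this is just the idea in its simplest form.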
Overall, it does seem like a reasonable way to interact with a computer, especially for drawing or manipulating objects, etc.
I unfortunately can't post a video because we're building the Leap into a trial clinical device at the moment and it's out of my hands.
Clinical devices are an awesome idea for this... nice one! Having worked on systems in hospitals before, I couldn't believe the number of times users had to hand-rub etc. each time they went from handling our scanners back to the actual product they were working with.
Congratulations on an awesome way to remove that step.
Had the same issues with the dev kit; hope they implement some sort of multi-device mode in the future that gets rid of these occlusions caused by having only one point of view on the scene (you can only do so much with modeling, but that would be a start). The interaction experience for actual hand gestures is, due to these limitations (sweet-spot size, occlusion, and hand orientation), far from what could be considered intuitive. Single-tip input with chopsticks is the only mode that works immediately as one would expect.
A few times I've had to move stuff out of the way because it would trigger a false hand presence.
Lighting hasn't been an issue for me; I get assorted warning messages ("Bright light!" "Low light!") but it doesn't seem to make a noticeable difference in what I've been doing.
I tried shining a laser pointer on it but it didn't do anything. That was disappointing. :)
My understanding is that it uses near-infrared LEDs (slightly longer wavelength than visible light) to illuminate your hands, which is why light sources that emit a lot at those wavelengths are alleged to cause trouble. It makes sense that a laser pointer wouldn't cause any interference, because a visible-wavelength laser falls outside the passband the cameras can see.
Well, I guess if you're that much of a diehard, just check out this project instead: http://www.duo3d.com/. They're planning on fully open-sourcing everything. If you're annoyed by the apparent dependency on Ubuntu, note that I got things working on Gentoo without much issue: http://www.keyboardmods.com/2013/03/leap-motion-in-gentoo-li...
D: s'what keyboards, palm rests and elbow rests are for
S: yeah, i hear you
S: i played around with it for about 20 minutes and my arm was aching
So yeah, this is me being skeptical.