Leap (leapmotion.com)
175 points by aes on Aug 22, 2012 | 60 comments



Please, for the love of Jef Raskin and Henry Dreyfuss and Don Norman and all that is human factors, no.

The film version of Minority Report was not a model for practical or usable interface design. Millions of years of evolution have built our brains and bodies for interacting with things that provide physical feedback when we touch them. Waving a pencil in the air, "manipulating" an invisible item while looking to a screen for visual feedback: these are not good experiences. Even if you discount the "gorilla arm syndrome" that StavrosK quite rightly points out here, the fatigue of trying to perform fine, accurate motion without physical stimuli for your hands and fingers to respond to is significant.

I'm sorry to be a negative voice in the face of innovation, but this really does feel like a technology in search of a problem. What worries me greatly is that it has a remarkably high "cool factor" that would be excellent in short demos, and could be easily pitched to companies looking for a flashy feature to get a leg up on the competition. We were saddled with some dubious decisions at the dawn of the GUI age, and we're just starting to lose them as we enter the Direct Manipulation age of interfaces. Please don't let this concept of feedback-free hand gestures become a paradigm that we're stuck with in the future.


As an embodied cognition guy, I disagree here pretty strongly. Gestures are a powerful form of communication - consider deaf signers, scuba divers, even time-sensitive operations like the military and sports. Gestures likely have a deeper evolutionary history than spoken language too. They are intuitive. Babies mimic long before they speak or manipulate.


That's an interesting perspective that I hadn't thought of before.

The usual approach to a 3D gestural interface is the kind of thing that's shown in the video for Leap—writing in the air, using mimed actions in space to represent manipulation of objects on a 2D screen, et cetera.

Gestures as an abstraction (sign language, or even everyday hand gestures like flipping the bird and the "A-OK" sign) make a lot more sense. If you move away from the idea of using hand waving as a stand-in for direct manipulation of objects, and look at gestures as a form of communication, it's a whole different ball game.

Thanks for that. I still look at the demo video for Leap with fear and loathing, but using that same hardware for a communicative gesture system like you suggest is exciting indeed. Now I'm going to be distracted all day thinking about ways to incorporate hand signs into a UI.


The big question for me is whether gestures began as a proto-language. Consider mimicry and the use of tools in social learning. That's animal cognition too.


Upvoted both of you for civility :-)


> Gestures are a powerful form of communication - consider deaf signers, scuba divers, even in time sensitive operations like military and sports.

In all of these examples, gestures are being used to interact with another sentient being. We use gestures to talk to people, not to tools.

I think gestures are great, but I don't want to have a conversation with my computer, I want to use it. I want to feel like I'm a craftsman, and not a manager of the work I create on it. (For that same reason, I'm not enthusiastic about voice recognition either.)

That being said, I probably would be excited to use gestures (and voice) for social software that was intrinsically about interacting with other people. Think multiplayer games or video chat.


Yes for communication with humans, but for interacting with a computer? Gestures are imprecise, physically tiring and non-intuitive for the very precise and specific actions we have to do on a PC every day.


Those are based on static symbols and are not used for precise, realtime control of a stateful 2-D system.


Pardon me for a shameless plug here, but I had to jump in, as this is an interesting discussion about something we at Flutter have thought about for a long time. We came up with a simple gesture to play/pause a song (iTunes/Spotify) by asking thousands of people what gesture they would use if they didn't have a mouse, keyboard, or voice. Almost everyone came up with the one we picked. Even for the next set of gestures, we're looking at static human gestures that are micro, intuitive, and don't cause fatigue.

When we released the first app, we got countless emails asking for a hand swipe as the next-song gesture. We had felt long ago that even though it seems natural, it's completely impractical in certain contexts (e.g. a coffee shop), and doing a hand swipe 20 times starts to wear you down. It was important to test it that many times, since the same gesture might be used to go to the next slide, next photo, or next album.

Hence, we ended up picking thumbs-right and thumbs-left as the metaphor: flutterapp.com/next
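
The actual pipeline is more involved, but the general shape is a dispatch from recognizer labels to player commands. A toy Python version (the gesture labels and the AppleScript dispatch are made up for illustration, not our real implementation):

  import subprocess

  # Hypothetical gesture labels -> iTunes AppleScript commands (macOS).
  # Illustrative only; not the actual recognizer output.
  SCRIPTS = {
      "palm_open":   'tell application "iTunes" to playpause',
      "thumb_right": 'tell application "iTunes" to next track',
      "thumb_left":  'tell application "iTunes" to previous track',
  }

  def on_gesture(label):
      """Dispatch a recognized gesture label to the music player."""
      script = SCRIPTS.get(label)
      if script:
          subprocess.call(["osascript", "-e", script])

  on_gesture("thumb_right")  # thumbs-right -> next song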

We'll also hit our 4 millionth gesture this month. That's 4 million times someone has either played or paused a song.

Please try our app and send feedback - would love to get your thoughts on this!

This is one of the most thoughtful discussions I've seen on this subject! Thanks, all, for the stimulating thoughts. I just thought I had something to share...


Fred Brooks agrees. He broadly outlines a design system using gestures in The Design of Design. (I know PHK already exceeded the quota for Fred Brooks references this week; just get over it, I guess.)


If you look at it, it's quite sensitive enough to follow finger motions, not only whole hand motions. So it could even follow your fingers as your hands rest on a desk.

I'm not convinced the lack of tactile feedback is a problem provided there is very good visual feedback. Do you have any studies to this point?

Furthermore, I think basic pointing and pinching are only the beginning of the capabilities this system can provide. More complex hand signals, or even face, body and posture signals could drastically increase the bandwidth of human/computer interaction, even by supplementing a keyboard.


This isn't a study, but any scenario where you might have impaired vision is a case where tactile feedback is necessary.

While I do think tactile feedback is extremely important, I won't discount the ability of humans to adapt. Should a compelling enough system be developed, I imagine anyone who wanted to would learn and adapt to it -- the problem here is getting the vast majority of users to adapt to it.

There's obviously going to be a lot of backlash against a product like this (and equal amounts of excitement). I think the problem here is that after such a well-publicized film, everyone assumes tech like this is going after a Minority Report type paradigm. While they definitely have to (somewhat) pitch it like that, I'm very excited to see the different ways something like this can be used. It definitely seems like a great step forward.


> If you look at it, it's quite sensitive enough to follow finger motions, not only whole hand motions. So it could even follow your fingers as your hands rest on a desk.

I agree, you don't need to use your whole arm. It's like playing tennis on the Wii. You don't have to flail your arms around. You just need enough motion for the accelerometers.


There are many good reasons this is a useful product: demoing 3D models, education platforms, music performance... I love my keyboard when working, but aren't the Wii and Kinect among the most successful gaming platforms? If this has a good API, there's no reason it can't fill a niche need.
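
To make the music-performance case concrete, a toy sketch: map hand height above the sensor to a volume level. The sensor stream is stubbed with sample numbers, since the SDK isn't out yet; the mapping is the point.

  def get_palm_heights():
      """Stand-in for the sensor: palm height above the device, in mm."""
      yield from [80.0, 120.0, 200.0, 310.0, 250.0]

  def height_to_volume(mm, lo=50.0, hi=350.0):
      """Map palm height in [lo, hi] mm to a MIDI-style volume in [0, 127]."""
      clamped = max(lo, min(hi, mm))
      return round((clamped - lo) / (hi - lo) * 127)

  for h in get_palm_heights():
      print("palm at %.1f mm -> volume %d" % (h, height_to_volume(h)))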


I agree that the lack of touch feedback is a terrible experience. But this is nevertheless an amazing piece of technology that opens new doors to a lot of applications (in the sense of "apply", not software). We've seen what the community around the Kinect has been able to build with much less powerful sensors. I'm excited to see what people might imagine and achieve with this one. Moreover, nothing prevents you from coupling this impressive sensor with any other device that provides other kinds of sensory feedback. And why narrow it down to a control device? Why couldn't it be used to gather other kinds of input? Monitoring activity? Assisted vision? Who knows what people have in mind for it?


The way I see it, it's an innovative interface with no drawbacks for us (users and companies) in testing it. If it's not intuitive or useful, people won't use it, and the company either adapts to the users or folds. Simple as that.


I can imagine a world that uses this or similar technology, holograms, and some sort of soft electrical shock to manipulate 3D interfaces.

Don't write it off. This might be Xerox PARC's mouse, waiting for Jobs.


"There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy."

Yes. I saw an obvious use for this in our products and it would solve a known problem we have so I immediately fired off an email to them. The sooner I can get a few of these in my hot little hands the better.

Thank you to whoever submitted this link!


How about for presentations? You're already using your hands and talking naturally, so why not incorporate that into your talk?

I wouldn't use it as a keyboard replacement, but for other tasks that naturally fit into this type of human input.


There's too much focus here on the assumption that this will be used as a full-time computing input device. I don't think anyone is realistically advocating banishment of all keyboard/mice to the netherworld.

Let's be more creative than that. Think about using it as an alternate input in spaces where a physical keyboard/mouse isn't appropriate, and also for 'short-term computing'.

Will this replace input on your workstation? I doubt it. But what about a large map that's installed in a public place? What about some sort of restroom or medical computing device where you'd rather not touch the surface that someone else just touched? You're not going to sit there 12 hours a day. You're going to pull up the map in the hospital and zoom/pan around on it. Why do we need another surface to clean? And in 15 seconds, you're done. No gorilla arm syndrome, no pain, and no real learning curve.
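
The zoom interaction, for instance, could be as simple as mapping the spread between two detected fingertips to a zoom factor. A toy sketch with stubbed fingertip data (real coordinates would come from the device):

  import math

  BASE_SPREAD_MM = 60.0  # fingertip spread that maps to zoom 1.0x

  def zoom_for(spread_mm):
      """Wider two-finger spread -> higher zoom, clamped to a sane range."""
      return max(0.5, min(8.0, spread_mm / BASE_SPREAD_MM))

  # Stubbed fingertip pairs (x, y, z) in mm, standing in for sensor frames.
  pairs = [((0, 0, 0), (60, 0, 0)), ((0, 0, 0), (120, 5, 0)), ((0, 0, 0), (30, 0, 0))]
  for a, b in pairs:
      spread = math.dist(a, b)  # Python 3.8+
      print("spread %.1f mm -> zoom %.2fx" % (spread, zoom_for(spread)))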


Ultimately, no, this will not replace the keyboard and mouse, but these guys seem to think it will. Check out the headline on the about page:

> Say goodbye to your mouse and keyboard.


To be fair, I remember seeing a mouse demoed in the mid eighties and the fellow at the demo explaining that you'd never need a keyboard again. He then proceeded to write a short sentence by using the mouse to click on letters on the screen.

I think that every new HID comes with a marketer promising to get rid of your keyboard.


I'm reading that as more of a marketing/concept line. I'm pretty sure that if you walked into their offices, they'd all still have keyboard/mouse combos.


Whenever I see something like this, I immediately think of gorilla arm syndrome. There's a reason I was saying they will never become widespread when all my friends were screaming about Minority Report-style interfaces (ever since Minority Report).


"Gorilla arm" was a term originally tied to touch screens that the user had to reach out and touch a screen in front of them. If the Leap can detect small motions of your hands as they lay on the desk, then using one would look nothing like a "minority report" interface, and avoid gorilla arm.


You still have to consider the muscle strain involved in the round trip from your eyes to your fingers.

When typing, you don't really wait for the text to show up on screen before going on to the next letter, and chances are you're hitting backspace before you recognize the letters because you already know you screwed up.

With this interface, you'll have no such luck. You'll type slower, have more errors, and also have muscle strain from not having a more immediate response.


You're assuming a use case with hands waving all around.

I, for instance, imagine it as a replacement for the trackpad: I type happily on my keyboard and just make slight gestures for scrolling, switching, closing, minimizing, and pointing, all while keeping my hands close to the home row. It's even a similar form factor to a trackpad!
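
Concretely, something like this: tiny vertical fingertip motions past a dead zone become scroll events. The motion stream is stubbed here; a real version would read fingertip positions from whatever API Leap ships.

  DEADZONE_MM = 2.0   # ignore jitter smaller than this
  SCROLL_GAIN = 3.0   # scroll lines per mm of fingertip travel

  def fingertip_dy_stream():
      """Stand-in for per-frame vertical fingertip deltas, in mm."""
      yield from [0.5, -0.3, 4.0, 6.5, -5.2, 0.1]

  def to_scroll_lines(dy_mm):
      if abs(dy_mm) < DEADZONE_MM:
          return 0
      return round(dy_mm * SCROLL_GAIN)

  for dy in fingertip_dy_stream():
      lines = to_scroll_lines(dy)
      if lines:
          print("scroll %+d lines" % lines)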

I don't know why everyone thinks it will be like signaling on an airport runway or doing any strenuous activity at all.

It's a very delicate device for very delicate input.

And that's all without counting the cases where wide-space hand usage would be extremely useful (3D modelling, directing a virtual orchestra, interfacing with kiosks...)


Why is it more difficult to do those gestures on the trackpad? The trackpad is closer to your keyboard and it is very quick to switch between the two.


No. A trackpad may be close to the keyboard, but the Leap covers your entire working space. The keyboard could actually be completely unplugged and the Leap could detect what you're typing.


The fact that a trackpad is closer is one of its failings. I disable the trackpad on my laptop unless I'm somewhere I can't use a mouse, primarily because the trackpad has a bad habit of getting touched or bumped by my hands as I type. If I could shift my hands forward past the keys and gesture up or down to scroll, I believe the issue would be negated.

I'd love to see a laptop with one of the Leap devices embedded just along the top edge of the keyboard as an alternative to the trackpad.


I agree with you. However, there might be a niche for rare tasks with very intuitive usability. So far, it's a solution looking for a killer app.


I agree; I think the Kinect is very good for what it's used for. I can also see 3D modeling/manipulation benefitting greatly from such an interface.



I'm currently learning Blender. It's an open-source 3D modeling program with one of the most non-intuitive GUIs ever created. It's like the Vim/Emacs of 3D creation. Being able to just grab a 3D object with your hands and knead it into the shape you want would be freakishly amazing.


More than one comment below mentions using this device for 3D modelling. There are certainly scenarios where an artist could use LeapMotion, like sculpting and painting, but the actual modelling part is heavily keyboard-supported.

I imagine you'd need both hands to replace two mouse buttons and a scroll wheel, and to me that seems like a deal-breaker.


> Say goodbye to your mouse and keyboard.

This single line is enough to help me see through their flawed assumptions. The keyboard and mouse aren't going away anytime soon just because these guys have found a way to integrate gestures with computers. I personally hate the Apple-esque marketing promising users they'll 'Own the future'. Gesture technology has been around for a long time, and I don't see it becoming the future by replacing the mouse and keyboard. Think about developers like us... no developer would find it useful, because we need to code efficiently, which isn't and never will be possible with gesture technology.

So, from a developer's perspective, this is something intended to be cool, but it fails to understand the basic underlying purpose of a keyboard and mouse. Maybe this would appeal to ultra hi-fi executives who want to flaunt a new way of driving their PowerPoint slideshows, but not the common man/developer who owns an average computer (something like a C2D).

I was honestly expecting this to have some features like the Kinect, which developers have hacked into a motion-capture system, especially for use in creating animated movies (which is awesome, because a decent standard mo-cap setup will cost you at least $5k). This gadget is unfortunately too basic and solves a very small problem that no one really cares about, IMO.


It seems developers can apply for a free Leap+SDK if Leap likes your project idea and thinks you can deliver.

>How can I get a free developer kit?

>We’re distributing thousands of kits to qualified developers, [...] register to get the SDK and a free Leap device first, and then wow us.

Apply here: https://live.leapmotion.com/developers.html

I like the small size and reasonable price. Might be cool as a 3rd input device, or for specialized terminals.



I don't know why, but reposting seems to happen a lot on Hacker News. It's not the fact that they get reposted that bugs me, but the fact that the same people upvote them. Why would you still upvote something if you know you've seen it before on HN? It eludes me.


Probably because they haven't seen it before. This is the first time I've seen this posted.


I wouldn't really say that it bothers me. I posted the link to the above discussion because I think if someone comes to the comment section they might be interested in it.

But it certainly wouldn't hurt to do a quick HN search before posting a new submission.


  > ... it certainly wouldn't hurt to do a quick
  > HN search before posting a new submission.
Won't happen. I've been saying it literally for years, and I can tell you now, it won't happen.

I implemented the first phase of a semi-automated duplicate detector to help people find previous submissions, and I got hate mail for it. Some people are actively against duplicate detection.


Plus dupe detection is trivial to defeat. Basically, the current set-up works in favor of people gaming it.


It'll be new to some people (it was new to me), and so it'll get upvoted. If everyone had already seen it, it wouldn't get upvoted. Don't see anything wrong with that.


This one only went through because it's missing the www, so it didn't trigger the repost blocker. Maybe that should be taken into account?


It proves that re-posting is effective for content owners, although it's really annoying when reposts don't include anything new. This one seems to have snuck through because it's really interesting. I just pre-ordered one.


>the fact that the same people upvote them

Evidence for that is what?


Yes, but I think the last time it was on HN, you couldn't actually pre-order one.


You could pre-order then and the webpage said delivery in January. Now it says February...


I don't think anybody, including Leap, thinks the keyboard and mouse are going anywhere. Also, this isn't Minority Report. If Leap can deliver on the sensitivity of the input, then small, precise gestures can be made without moving your hands from the keyboard. That makes it useful in cases where switching from the keyboard to a mouse isn't fast enough for my taste.

I can envision opening certain applications with a gesture (saving you from typing the name into Quicksilver or finding and double-clicking the icon). Tasks that you repeat over and over could be assigned to a gesture with great effect, like swiping a finger left and right to change windows.

3D editing could be interesting, where you move an invisible object in three dimensions with your hand. Anybody who's done 3D modeling or game development in Unity can attest that a mouse and keyboard are limited in three dimensions.
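
A toy sketch of that: palm position drives an object's translation, with a little exponential smoothing so sensor jitter doesn't shake the model. The palm samples are made up; real ones would come from the device.

  ALPHA = 0.3  # smoothing factor: higher = snappier but noisier

  def smooth(prev, new, alpha=ALPHA):
      """Blend the previous position toward the new sample."""
      return tuple(p + alpha * (n - p) for p, n in zip(prev, new))

  # Made-up palm positions (x, y, z) in mm, standing in for sensor frames.
  palm_samples = [(0.0, 100.0, 0.0), (5.0, 108.0, -3.0), (9.0, 115.0, -6.0)]

  position = palm_samples[0]
  for sample in palm_samples[1:]:
      position = smooth(position, sample)
      print("object at", tuple(round(c, 1) for c in position))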


Hmm. I feel like I'm in that other news site with all the reposts.


I would imagine the number of people accessing the site from different time zones is the likely reason; an article can disappear from the front page quite rapidly, before someone's had the chance to see it.


First time I've seen this. Obviously the first time a lot of other people have seen it too, hence it making it all the way to front page.

Anyhoot, I can't deny it, this is very interesting.


Can anybody explain how this works, i.e. the technology behind it? The page itself doesn't disclose much more than that it uses some kind of secret algorithms to track hands and fingers, but I'd be interested in what kinds of sensors and processing are used, and how such a small box can track three degrees of freedom so accurately. How can this work so well compared to the crude tracking the Kinect does with its cameras and laser projector?


I think this has great potential for use in conjunction with wearable computing such as Google Glass. I'm not sure how the current interface for Glass works, but I imagine it's based on voice input and possibly some buttons on the unit itself.

Imagine wearing a smaller Leap controller on your wrist - you would be able to use gestures to control Glass and most likely interact much more intuitively with your surroundings as seen through Glass.


I one day hope to be able to close an application on my computer by showing it my middle finger. It would make ragequits much more satisfying.


I wonder what the constraints on the background are? If it's not too fussy, then hang one of these around your neck and hook it up to an Oculus Rift.

Immersive VR + hand-tracking == ????


1. Immersive VR 2. Hand Tracking 3. Profit.

You've solved it! Congrats!


I can imagine so many possibilities for this technology, but please don't assume people will write in the air in the future; it's just not happening :)


So... does it run Android?



