Project Soli – touchless gesture interactions by Google (atap.google.com)
182 points by danr4 on May 31, 2016 | 42 comments



As always Douglas Adams had some keen, if slightly cynical, insight: "The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive - you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same programme." Hitchhiker's Guide to the Galaxy.


At Google I/O a couple weeks ago they said they had miniaturized the Soli chip and they demoed a smartwatch with the chip in the wrist band, as well as a gesture-controlled speaker. They announced the Soli beta dev kit coming "next year". You can watch starting around 21:40 here https://www.youtube.com/watch?v=8LO59eN9om4


I'm sure a lot of consumer implementations come to mind pretty quickly for most, but this could be really huge for accessibility... if you could, say, train gestures tailored to the very specific nuances of a person's available range of motion. Avoiding the limitations tied to physical hardware would be huge.


It could also be huge for ergonomics. Many people develop RSI issues through repeated keyboard and mouse usage. People have gone as far as using their nose as an input device:

http://www.looknohands.me

There are lots of other RSI stories here: https://github.com/melling/ErgonomicNotes/blob/master/README...


Exactly.

An unintended result of the advent of reliable/portable touch devices was the reduction in physical hardware input.

Instead of clamping down a joystick to a desk and moving it around with your face... or looking away from content to type on a keyboard with your nose... you can buy an off-the-shelf product and use it on just about any solid surface. The iPad was huge for people with limited limb control.


It'll just shift to RSI from doing these gestures instead. They still involve fine motor control, which is what's implicated in RSI. Notice how runners don't get RSI in their legs but pianists do.

Background: I have RSI and have battled it for 15 years now. A lot of it is mental and nervous-system related, and it can easily shift around the body (I almost gave myself RSI in my throat and eyes doing voice recognition and eye tracking to avoid typing).


Right, but if there's flexibility to change the specific inputs... maybe you can change motions when/if they're becoming a problem?

Or maybe people should work less?


Anything fine motor, even if the motions are changed, will inevitably cause problems. The duration doesn't have to be that long -- I could feel the fatigue in my eyes after just 30-40 minutes of eye tracking. The irony, of course, is that my eyes are doing fine motor control all the time, but when you do it as part of a control loop, _something_ (neuromuscular?) pushes you into RSI mode and you fatigue quickly. If you don't pause for the fatigue you can do permanent damage -- which is what happened to my hands on the highly non-ergonomic Apple Newton keyboard.

I use fine motor control for pleasure, not just work.


I wonder if this could be used for some kind of modified eye/eyelid tracking?


That could be interesting; eye tracking is often expensive and can require a "just-right" type of setup. Reducing that barrier could be huge for many.

I think outside of a single solution like eye tracking, there's a lot of room for a device like this to be much more adaptable to a broad range of conditions. There's so much variation in how physical symptoms can present, even within a single disease, that it can be really difficult (and expensive) to customize hardware for an individual... and degenerative diseases can require constant readjustment and new hardware.

The hardware barrier seems endlessly frustrating — you have an off-the-shelf device like a joystick/keyboard/mouse that was originally designed for hands, being used by feet and with mouths...

...a good example of hardware reduction making a meaningful impact was the advent of reliable portable touchscreens — people could actually start directly touching the interface without some of the ergonomic constraints of a mouse and keyboard. You can reliably surf the web on an iPad using your nose. Imagine trying to do the same with an off-the-shelf joystick or mouse.


It is truly amazing to see the photograph of the huge and complicated prototype and realize that they managed to shrink it onto a chip in under two years.



I may be old school but for every example given in the video, I would rather have an actual button to press...


The physical size of buttons/knobs/sliders requires devices to be large enough to accommodate those elements. With a chip like Soli, designers can create devices that are dramatically smaller.

As Soli gets smaller (which future versions likely will), you can imagine tiny devices that maintain rich interactivity.


Exactly. I also prefer physical buttons but imagine what could be done in terms of reducing space taken by controls alone. This is in many ways vastly better than touchscreen interfaces since it will allow for many different kinds of interaction in the same space.


However, if you have a button, you have a button. With this you can have a button, or a dial, or a slider, or sliders in the other two directions. And which one you have can switch with the context.
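
Purely as an illustration of that point (none of this is Soli's actual API), here is a tiny Python sketch of how one gesture surface could be bound to different virtual controls depending on which application context is active. The "tap" and "rub" gesture names and the control classes are made up for the example:

    # Illustrative only: one gesture surface, different virtual controls
    # depending on context. "tap" and "rub" stand in for classified gestures.
    class Button:
        def on_gesture(self, kind, amount=0.0):
            if kind == "tap":
                print("button pressed")

    class Dial:
        def __init__(self):
            self.value = 0.0

        def on_gesture(self, kind, amount=0.0):
            if kind == "rub":  # e.g. thumb rubbing along the index finger
                self.value = min(1.0, max(0.0, self.value + amount))
                print("dial at %.2f" % self.value)

    controls = {"home": Button(), "volume": Dial()}
    context = "volume"                        # whichever app/screen is active
    controls[context].on_gesture("rub", 0.1)  # -> dial at 0.10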


I want one of these in my glasses so my desktop computer would put focus on the screen I'm looking at.


This can be done today, and probably more accurately with a webcam.


The video suggests that using radar they can more accurately track hand motions than using a camera.


It depends on what you are tracking. They specifically said that they could more accurately track depth. For eye motion tracking I suspect that depth isn't a major component, and a 2D image is better for tracking eye motion, especially because the pupil has very little change in depth compared to the surrounding area of the eyeball.
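
For what it's worth, a rough "which screen am I looking at" prototype really can be done today with just a webcam. A minimal Python/OpenCV sketch, assuming the bundled Haar eye cascade and a crude darkest-pixel pupil heuristic (both are illustrative assumptions; real gaze trackers use calibration and proper pupil/glint models):

    # Crude gaze-direction sketch: find eyes with a Haar cascade, take the
    # darkest point in each eye region as a rough pupil position, and use
    # its horizontal offset to guess left vs. right.
    import cv2

    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)          # default webcam
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                            minNeighbors=5)
        for (x, y, w, h) in eyes:
            roi = cv2.GaussianBlur(gray[y:y + h, x:x + w], (7, 7), 0)
            _, _, pupil, _ = cv2.minMaxLoc(roi)  # darkest pixel ~ pupil
            offset = pupil[0] / float(w) - 0.5   # -0.5 .. +0.5 within eye box
            print("looking left" if offset < 0 else "looking right")
    cap.release()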


Reminds me of the Leap (https://www.leapmotion.com/)


The Leap Motion uses infrared light. It was hugely disappointing at first. Supposedly it got much better after the Orion SDK release. http://blog.leapmotion.com/orion/

Here are a few other gesture devices, like the Myo, FingerIO, and Intel RealSense:

https://github.com/melling/ErgonomicNotes/blob/master/README...


Thanks for the heads up on Orion. I bought a Leap Motion at its initial release, and like some others here was disappointed in its ability to accurately track my fingers/hands. I just might pull it off the shelf and give it another shot now.


Why was it hugely disappointing? I've had one since the start and it was always very accurate, precise and reliable. It had some issues with not being able to persist in tracking hidden fingers, but it worked really really well on extended ones.


For me it was hugely disappointing because they passed the API off to individual applications rather than providing OS-level drivers for a common API, so you could drop it in place of other interaction devices. That, and it used an immense amount of CPU resources on my computers.


Your setup must be near perfect then. In the original versions there was an unacceptable level of noise and disappearing fingers once you rotated more than 30 degrees from level.

Orion is almost magically better.


This seems like it could be really interesting for home automation and accessibility projects.


How's Google's Biz Dev?

Selling B2B is a whole new ball game for Google (outside of AdWords, which is a special case since it is so valuable to advertisers).

I'd be very wary, as a purchaser of this chip, about how long Google et al. have committed to producing it and about their willingness to move up the value chain (i.e. does Google have a good chance of copying me if I come up with something valuable?).

The Infineon relationship may help in both cases, since they have solid analog products.


Don't miss the video. They did a great job demonstrating the interactions with nice visualizations. You can watch it at 1.5x speed with subtitles. Very cool.


This is an incredibly exciting technology!

My first concern is: how can these gestures be discovered? Will we just get used to the same sorts of gestures in different applications over time and intuit what to do?


Watching the gestures reminds me of a theremin.


Could this be used to create an “Air Keyboard” that allows you to type by pretending to type into the air?



With the size of the chip, it's possible to make a wrist wearable that captures the finger input, so your arms can be positioned anywhere.


Hmm, combine that with AR and you have some interesting potential.

A wrist-worn device per hand that can track individual fingers, thus freeing up the hand to still do grasping and such.


The problem there, as I see it, is twofold.

1. The screens were mounted upright. Notice how keyboards are nearly flat by comparison.

2. Touchscreens thus far can't tell a resting finger from a pressing finger. So while a near-flat screen avoids much of the gorilla-arm issue, longer durations will put strain on the wrists.


The technology looks like it might be able to, though the focus is on finger-on-finger interactions so that there is haptic feedback, which an "air keyboard" doesn't provide.

OTOH, if it is precise enough, you might be able to make a virtual chorded keyboard / keyer [0] with it using finger-on-finger or finger-on-hand gestures.

[0] https://en.wikipedia.org/wiki/Chorded_keyboard
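
To make the keyer idea concrete, here is a tiny Python sketch of what the software side might look like once a sensor (Soli or otherwise) has classified which fingers are pressed against the thumb. The chord encoding and the letter assignments are invented for illustration, not taken from any real chorded layout:

    # Hypothetical chord map: which of (index, middle, ring, pinky, thumb-tap)
    # are active -> character. Encoding and layout are made up for this example.
    CHORDS = {
        (1, 0, 0, 0, 0): "e",
        (0, 1, 0, 0, 0): "t",
        (1, 1, 0, 0, 0): "a",
        (0, 0, 1, 0, 0): "o",
        (1, 0, 1, 0, 0): "n",
        (0, 0, 0, 1, 0): " ",
    }

    def decode(chord):
        """Return the character for a finger chord, or None if unmapped."""
        return CHORDS.get(tuple(chord))

    print(decode((1, 1, 0, 0, 0)))  # index + middle against the thumb -> "a"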


I am really surprised that none of the current VR kits seem to offer a chorded keyboard on their controllers.


This, plus mature augmented reality, minus the device's screen... that could start to get very interesting. Maybe something like a wearable (iWatch or some such) with this, plus an AR interface creating a 'virtual screen' to interact with.

For my full sized device though, buttons have serious advantages that don't go away until the motion sensing is hooked up to a REALLY smart computer.


Has anything happened on this since it was announced a year ago?


Here's this year's demo. Starts at 20:30

https://youtu.be/8LO59eN9om4?t=20m30s


Using a physical button to move slides...



