
Yeah, I wouldn't use facial and eye tracking instead of a traditional input interface. They're meant to enrich interaction with other people. Eye tracking could be used for some optimizations, like dynamic LoD based on where the user is focusing.

I've thought about an input interface that could replace the keyboard for VR and mobile devices. A brain implant would be perfect, but that's not feasible with today's technology. Voice is a good contender, except for privacy issues, and it's not really usable in work environments. Keyboards are our best solution so far, but they don't evolve, and to me they seem like a dead end (plus, they aren't portable and require both hands to use effectively).

I was thinking about a touch-sensitive surface that recognizes drawn glyphs. The idea is that anyone who is literate can start using it without any training: just draw letters instead of typing them. With machine learning and some clever visual feedback, both the user and the machine could adapt to each other to increase input speed (by simplifying glyphs and by defining 'snippets'). The interface could be expanded to both hands to double the speed, but it would be fully functional with just one hand. It could even be used blindly. For mobile devices, such a surface could be placed on the back side - virtual keyboards that take up half the screen are just horrible.
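
To make the recognition side concrete, here's a minimal sketch in the spirit of the $1 unistroke recognizer: resample the drawn stroke to a fixed number of points, normalize it, and match it against per-user glyph templates. Everything here (names, templates, point counts) is illustrative rather than taken from any existing product; the per-user templates are where the "both sides adapt to each other" part would come in.

    # Minimal template-matching glyph recognizer sketch ($1-unistroke style).
    # Templates and thresholds are illustrative only.
    import math

    N_POINTS = 32  # resample every stroke to a fixed number of points

    def resample(stroke, n=N_POINTS):
        """Resample a stroke (list of (x, y)) to n evenly spaced points."""
        length = sum(math.dist(stroke[i - 1], stroke[i]) for i in range(1, len(stroke)))
        if length == 0:
            return [stroke[0]] * n
        interval = length / (n - 1)
        pts = list(stroke)
        out, acc, i = [pts[0]], 0.0, 1
        while i < len(pts):
            d = math.dist(pts[i - 1], pts[i])
            if d > 0 and acc + d >= interval:
                t = (interval - acc) / d
                q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                     pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
                out.append(q)
                pts.insert(i, q)  # the new point becomes the start of the next segment
                acc = 0.0
            else:
                acc += d
            i += 1
        while len(out) < n:
            out.append(pts[-1])
        return out[:n]

    def normalize(points):
        """Translate to the centroid and scale to a unit bounding box."""
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        pts = [(x - cx, y - cy) for x, y in points]
        w = (max(x for x, _ in pts) - min(x for x, _ in pts)) or 1.0
        h = (max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
        s = max(w, h)
        return [(x / s, y / s) for x, y in pts]

    def distance(a, b):
        """Average point-to-point distance between two normalized strokes."""
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def recognize(stroke, templates):
        """Return the label of the template closest to the drawn stroke."""
        candidate = normalize(resample(stroke))
        best = min(templates.items(),
                   key=lambda kv: distance(candidate, normalize(resample(kv[1]))))
        return best[0]

    # Usage: templates would come from the user's own handwriting samples.
    templates = {
        "I": [(0, 0), (0, 1), (0, 2)],  # straight vertical stroke
        "L": [(0, 0), (0, 2), (1, 2)],  # down, then right
    }
    print(recognize([(5, 5), (5, 7), (6, 7)], templates))  # -> "L"
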




> I was thinking about a touch-sensitive surface that recognizes drawn glyphs.

I'd like a touch-sensitive surface... on a keyboard.

Consider an expert keyboard user with a nice keyboard. Adding downward-facing hand tracking, such as a head-mounted Leap Motion, adds value: now you can gesture on the keyboard, and above it in 3D, permitting a very rich input vocabulary - for example, using the entire keyboard as a touch surface.

Except... while fingers are highly sensitive to tactile contact, neither traditional keyboards nor current hand tracking can provide good contact information. Fingertip position tracking isn't quite good enough to infer contact existence and pressure. So while you can use, say, the J-key keycap surface as a trackpad, and even infer touch/untouch events from gross finger motion (better than requiring a keypress, but not by much), you still can't get light-touch events (e.g. "I was clearly pressing harder when I stroked down, but just skimming when I moved back up - I clearly felt the difference, so why didn't the keyboard?").

So I'd love a contact-sensing mesh that could be laid over an existing nice keyboard without compromising key feel. I can get position information elsewhere, and keypresses from the keyboard of course, but there's no existing source of multitouch contact, let alone pressure. Any ideas?
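
To illustrate the limit of gross-motion inference: here's a toy sketch of about all you can do with tracked fingertip height alone - binary touch/untouch with some hysteresis, and no pressure signal at all. The millimeter thresholds and the height input are assumptions for illustration, not any tracker's actual API, and real fingertip tracking noise is on the same order as these thresholds, which is exactly why light-touch events get lost.

    # Toy touch/untouch estimator from tracked fingertip height above the keycaps.
    # Thresholds are made up; real tracking jitter makes borderline cases ambiguous.
    TOUCH_MM = 2.0     # this close to the keycap plane counts as contact
    RELEASE_MM = 5.0   # must rise this far before we call it released (hysteresis)

    class ContactEstimator:
        def __init__(self):
            self.touching = False

        def update(self, height_mm):
            """Feed the fingertip height (mm); returns "touch", "untouch", or None."""
            if not self.touching and height_mm <= TOUCH_MM:
                self.touching = True
                return "touch"
            if self.touching and height_mm >= RELEASE_MM:
                self.touching = False
                return "untouch"
            return None  # no state change; note there is no pressure signal at all

    # Usage with a stream of tracked heights (mm). A contact-sensing overlay on
    # the keycaps would report contact and pressure directly instead of guessing.
    estimator = ContactEstimator()
    for h in [12.0, 6.5, 1.8, 1.5, 3.0, 6.1, 14.0]:
        event = estimator.update(h)
        if event:
            print(event, "at height", h)
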




