Hacker News

This does not scale well once your drawing gets more complicated. A simple example is a square: it can start at any of 4 corners and be drawn in 2 directions, so you already need 8 samples, and it gets worse because some people draw a square with multiple strokes.
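To make the combinatorics concrete, here is a minimal sketch (the corner coordinates and function name are illustrative, not from the $1 codebase) that enumerates the 8 unistroke templates a single square implies, 4 starting corners times 2 drawing directions:

```python
def square_variants(corners):
    """Enumerate one closed path per starting corner, in both directions."""
    variants = []
    n = len(corners)
    for start in range(n):
        # Rotate the corner order so the stroke starts at a different corner,
        # then close the loop by returning to the start point.
        path = corners[start:] + corners[:start] + [corners[start]]
        variants.append(path)        # one drawing direction
        variants.append(path[::-1])  # the opposite direction
    return variants

# A unit square: 4 start corners x 2 directions = 8 distinct templates.
corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
templates = square_variants(corners)
```

Every extra degree of freedom (start point, direction, stroke count) multiplies the template set, which is the scaling problem described above.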

The other algorithms in the family (e.g. $N, $P) are more robust to this, but after experimenting, an RNN or vision model does much better on the consistency side of things.




What I meant is to add both the clockwise and counter-clockwise variants of the same gesture. Rotation is another matter: $1 unistroke can be made either sensitive or insensitive to gesture rotation, depending on what you want. Often you'd want to discern "7" from "L".
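The rotation knob comes from $1's normalization step: the recognizer computes the "indicative angle" (from the stroke's centroid to its first point) and rotates the stroke to cancel it, which gives rotation invariance; skip that step and "7" vs "L" stay distinguishable. A minimal sketch of that step (function names are mine, not the reference implementation's):

```python
import math

def centroid(points):
    return (sum(x for x, _ in points) / len(points),
            sum(y for _, y in points) / len(points))

def indicative_angle(points):
    """Angle from the stroke's centroid to its first point."""
    cx, cy = centroid(points)
    return math.atan2(cy - points[0][1], cx - points[0][0])

def rotate_by(points, theta):
    """Rotate all points around the centroid by theta radians."""
    cx, cy = centroid(points)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(cx + (x - cx) * cos_t - (y - cy) * sin_t,
             cy + (x - cx) * sin_t + (y - cy) * cos_t)
            for x, y in points]

# Rotation-invariant matching: cancel the indicative angle before comparing.
# Rotation-sensitive matching: just skip this normalization.
stroke = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
normalized = rotate_by(stroke, -indicative_angle(stroke))
```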

Uni-stroke is a much more elegant input method than multi-stroke. You can react to the user's gesture as soon as they lift the mouse button (or stylus or finger), without introducing some arbitrary delay. Users can learn gestures and become very fast with them. Multi-stroke, on the other hand, requires coordinating each stroke with the previous ones, and to me that doesn't justify the complexity. I admit I have a preference for software where users adapt and become proficient, while many products with a wider audience need to be more accessible to beginners. Different strokes...
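The latency point can be shown with a small state machine. This is an illustrative sketch, not any real input API: a unistroke recognizer can classify on pen-up, while a multi-stroke one must wait out an inter-stroke timeout (the "arbitrary delay") in case another stroke follows.

```python
STROKE_GAP = 0.3  # assumed inter-stroke timeout in seconds (multi-stroke only)

class GestureInput:
    """Illustrative collector that decides when to run classification."""
    def __init__(self, multistroke=False, classify=print):
        self.multistroke = multistroke
        self.classify = classify
        self.strokes = []
        self.last_up = None

    def pen_up(self, stroke, now):
        self.strokes.append(stroke)
        self.last_up = now
        if not self.multistroke:
            self._fire()  # unistroke: react immediately on pen-up

    def tick(self, now):
        # Multi-stroke: fire only once the inter-stroke gap has elapsed,
        # because another stroke might still be coming.
        if self.multistroke and self.strokes and now - self.last_up >= STROKE_GAP:
            self._fire()

    def _fire(self):
        self.classify(self.strokes)
        self.strokes = []
```

With `multistroke=False` the callback runs inside `pen_up`; with `multistroke=True` the user always pays at least `STROKE_GAP` of latency after the final stroke.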


Right, but for a square you have to add 8 samples, not 2, to cover the 4 starting points and 2 directions, and that still doesn't account for the users who multi-stroke.

> Different strokes...

I see what you did there :] I'm definitely in the reduce-user-burden camp.

https://quickdraw.withgoogle.com/ is a good baseline to start from for a more resilient gesture recognizer.



