Back to the Future of Handwriting Recognition (jackschaedler.github.io)
142 points by jabagawee on June 11, 2018 | 37 comments



Handwriting recognition is a great example of a technology whose development seems to have plateaued before it became "good enough." Stroke-based recognition has been in development for half a century now, but my iPad Pro still makes errors at least a couple of times per line, which is enough to make it pretty much useless unless you're writing only for your own later consumption. The same goes for voice recognition: it's shocking how bad Android and iOS still are at it, even after decades of work on the technology.


>I think it’s worth asking why anyone in their right mind should care about mid-century handwriting recognition algorithms in 2016.

Lots of people care, especially in Asia (Chinese and Japanese). It's just that the problem is incredibly hard.

We put 5 very smart people on it for a year, and it was totally impossible to meet people's expectations, especially people like doctors taking notes fast (and ugly).

We thought the market was instead in creating mindmaps or something similar, since people could write slower and more carefully.

But people write a double u and expect the computer to see an "m". With deep learning it's possible, but extremely flimsy.


This is a cool exploration of technology, and I don't want to take away from that.

> The program was efficient enough to run in real-time on an IBM System/360 computer, and robust enough to properly identify 90 percent of the symbols drawn by first-time users.

I just want to point out that 90% accuracy is, from a user's point of view, awful handwriting recognition performance. It means you will be correcting on average about 10 words per paragraph! Even 99% accuracy is not nearly good enough to give people a sense that the computer is good at handwriting recognition.

I also want to point out the difficulty and danger in interpreting strokes when doing handwriting recognition.

In the last demo box, try writing a capital Y without lifting the pen. You'll have to go "up and down" one or both upper branches. Because of this, the recognizer will call it a K, A, or N even though it is obviously a Y when you're done.

This demo is constrained to only using one stroke per letter, but systems that permit multiple strokes still get into trouble when the strokes don't match what they are expecting--for example if you draw an X using 4 individual strokes outward from a central point.
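
To make that concrete, here is a minimal sketch (mine, not the article's or any shipping recognizer's) of one common mitigation: canonicalize each stroke's direction before template matching, so retracing a segment "up" versus "down" looks the same to the matcher:

    def normalize_stroke_direction(points):
        # points: [(x, y), ...] in the order they were drawn.
        # Flip the stroke if its endpoints are out of order
        # lexicographically, so an upstroke and a downstroke over the
        # same path yield identical point sequences. This buys direction
        # invariance at the cost of discarding direction cues.
        if points[-1] < points[0]:
            return list(reversed(points))
        return points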

This also happens with words. In Microsoft's handwriting recognition in Office in the early 2000s, writing the letters of a word out of order completely borked the recognition. For example writing "xample" and then going back and adding an "e" at the beginning would not produce a recognized word of "example."

My point with all of this is that there is a reason you probably don't do all your computing with natural handwriting. It's a surprisingly difficult problem. Users do not expect it to matter how they form letters and words on the page. And they have very low tolerance for correcting computer mistakes.


> This demo is constrained to only using one stroke per letter, but systems that permit multiple strokes still get into trouble when the strokes don't match what they are expecting--for example if you draw an X using 4 individual strokes outward from a central point.

Arguably, an X drawn this way should NOT be recognized as an X--that's not how an X is spelled.

If the task is communicating with the computer, then recognition of the gesture is a valid approach. Just as there are conventions regarding the spelling of words, there are conventions involved in the formation of letters. Why not use them? It would even seem incorrect to leave these out.


The human convention of written language is to interpret the symbols after they have been completed, not during the act of writing them.

A computer that interprets the behavior of writing, rather than the final symbols, is going to violate user expectations at some point.

Why? Because people do not always write as linearly as you might expect, especially when writing fast. They might drop or mis-write letters or words, then go back and fix them. Or quickly jot down just enough letters to remind themselves of what they heard, then go back and fill the rest in. A routine that interprets actions in order is going to have a hard time with actions that the user completes out of order.


"human convention of written language" is a bit much. Stroke order is almost as important as what the actual strokes are in the definition of a Chinese character, for example. Of course unless you literally watch someone write you observe the characters after they're written, but the most predictive latent mental representation of a character does include an order component. I know this because I made the mistake of memorizing many characters almost like bitmaps and have had to go back and learn how to reliably write/read hand written characters.


I don't know what to say other than that the entire purpose of written language is to carry information between people who aren't in a position to directly observe each other writing. (If they were, they could just talk and would not need to write.)


There exist counterexamples in the broader world. Historically in the Sinosphere it was commonish that two people might share a command of written Classical Chinese but not really be able to speak to each other. For a modern example, consider the paper below: https://eric.ed.gov/?id=ED515291


Even when learning English, stroke order can help considerably. Mostly it helps in just learning to write legibly, but that really is just a fancy way of saying it increases the accuracy of recognition. :)

I completely concede that it is possible to get the same results with other stroke orderings. However, there is a reason that, when teaching children how to write, we often get fairly prescriptive about stroke ordering as well.


In calligraphy there is the notion of 'ductus', or how and in which order the strokes are written. It has a significant effect on the end result, and for each script, there is arguably only one "correct" ductus. A similar concept can be applied to normal handwriting.


I highly doubt people are using non-standard stroke orders unless they are very young or it isn't their first language. However, this scheme probably won't work for cursive, which is how people actually write.


Oddly, I know more folks who don't write cursive. Even more interestingly, I would think most cursive is stricter about stroke order.


Going back and fixing up is almost never as legible as getting it right the first time. Even for human readers.

If you truly want to write fast, you go with a shorthand system. I don't know many folks who have tried reading others' shorthand. It probably isn't as tough as you'd imagine, but most of those systems are more demanding about stroke order, not less, if only because the speed is gained by being very prescriptive.


>The human convention of written language is to interpret the symbols after they have been completed, not during the act of writing them.

Not exactly. E.g. Japanese handwriting and the order of strokes, etc. (also in traditional calligraphy/penmanship).


I understand the idea of looking at completed characters and inferring the original order of strokes.

But are you saying that Japanese writing is only readable if you observe the writer during the act of writing? Because that's what some stroke recognition engines do.

Here's a tangible example. Imagine I write "h l o", pause, then go back and place an "e" in the first space, and an "l" in the second space, then hold it up to you. You're going to see "hello," right?

But an algorithm that tries to interpret the act of writing itself might see "hlo el", because that's the order in which I wrote the characters.
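
A toy sketch of the difference (the recognizer output format here is hypothetical): sort recognized glyphs by where they sit on the page rather than by when they were drawn:

    def read_spatially(glyphs):
        # glyphs: (label, x_position) pairs in the temporal order drawn.
        return ''.join(g[0] for g in sorted(glyphs, key=lambda g: g[1]))

    # "h l o" written first, then "e" and "l" filled into the gaps:
    drawn = [('h', 0), ('l', 20), ('o', 40), ('e', 10), ('l', 30)]
    print(read_spatially(drawn))  # 'hello', not 'hloel'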


>But are you saying that Japanese writing is only readable if you observe the writer during the act of writing? Because that's what some stroke recognition engines do.

Not exactly, but the situation, as I understand it, is somewhat related: Japanese writing is better (and thus more readable) if the writer observes (respects) a specific stroke ordering.

So one doesn't have to observe writers while they are writing in order to read what they wrote. But the ordering of strokes can have an impact on readability, even when one sees the written words only after they've been completed.


I've seen people write letters in all manner of unexpected ways. If the resultant marks on the paper look enough like the intended letter, then it's readable by a human, and if it's readable by a human, it should be readable by a machine.

Not that I think "meet me halfway" type approaches (like the Graffiti system) aren't worth using, but in this case we're talking about recognizing writing (the artifact), not writing (the verb).


Interesting discussion, thank you.

I am reminded of the Graffiti handwriting notation used by Palm OS. That was single stroke, and devices came with a card depicting all the characters.

I was never able to become fluent.

https://en.m.wikipedia.org/wiki/Graffiti_(Palm_OS)


Your comment illustrates why Palm Pilots found market success with handwriting recognition (as far as I know, the only product that ever did so).

The trick was that they treated "Graffiti" as a new alphabet that users had to learn. Thus, when the recognition engine failed, many users blamed themselves (i.e. their Graffiti fluency) rather than the product.

In contrast, when products that promised to recognize natural handwriting had a recognition failure, the users tended to blame the products.

It's a good lesson for product development--user satisfaction will depend in part on user expectations.


Wow, what a nostalgia trip! The Graffiti handwriting system was brilliant. I only owned a PalmOS device for a short time (I was very late to the party and they were already old hat) but I picked it up very quickly and still remember how to write most of the "letters".


I still think that Graffiti was very fast and efficient. It had a real effect on my handwriting, and I still catch myself simplifying letters the Graffiti way when I try to write fast :)


My mom and I were so fluent in Graffiti when I was in high school that the post-it notes she left for me when she was out would be written in it.


Yeah, when I was learning Japanese it was a useful idea that stroke order actually matters and that there should be exactly one way to write a letter. But no -- everything I can recognize, the computer should recognize as well. No matter how fucked up it is, if I can guess it, the program should guess it. That's what being good at handwriting recognition means. Everything else will be perceived as subpar by the end user.


For many of the examples you gave, I think that could be solved through an autocomplete-style correction. Sure, it's not perfect, but it seems good enough for smartphone users: "xample" is not a word, so it's probably a typo, so it's probably "example"...

You could also keep multiple interpretations of a word pending (a text search for any of them would take you there) and eventually ask the user to disambiguate if they want to. I assume this would be an acceptable solution for non-dictionary words too...
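
A bare-bones sketch of the first idea, using Python's difflib as a stand-in for a real language model (the word list is obviously illustrative):

    import difflib

    WORDS = ['example', 'sample', 'examine', 'ample']

    def autocorrect(token, cutoff=0.6):
        # Return the closest dictionary word, or the token unchanged
        # if nothing is similar enough -- the "xample" -> "example" case.
        match = difflib.get_close_matches(token, WORDS, n=1, cutoff=cutoff)
        return match[0] if match else token

    print(autocorrect('xample'))  # 'example'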


> I just want to point out that 90% accuracy is, from a user's point of view, awful handwriting recognition performance. It means you will be correcting on average about 10 words per paragraph!

Wait, what? Doesn't that imply that a paragraph needs to have 100 words in it, in order for 10 of them to be recognized wrong at 90% success rate? That seems super-long, anyway.

My stats are really rusty, perhaps that's just one of those unintuitive cases that confuse people like me.


It's a (somewhat dated, probably) copyediting rule of thumb that a written paragraph has about 100-200 words in it. This would be in a writing style you might see in a novel or an essay. For online writing, perhaps more like 50-100. Even that might be long for the style of writing where each sentence is its own paragraph, supposedly for impact or whatever. Not sure you can really call it "paragraphs" when each one is only a sentence.

For reference, the above paragraph is 78 words long.
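
If it helps, the arithmetic behind the claim is just an expected value: at per-word accuracy p over n words, you expect about n * (1 - p) corrections (treating words as independent, which is a simplification):

    # Expected corrections for a paragraph of n words at per-word accuracy p.
    for n, p in [(100, 0.90), (78, 0.90), (100, 0.99)]:
        print(f"{n} words at {p:.0%}: ~{n * (1 - p):.1f} corrections")
    # 100 words at 90%: ~10.0 corrections
    # 78 words at 90%: ~7.8 corrections
    # 100 words at 99%: ~1.0 corrections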


I don't really disagree, but I think you overstate it to an extent. For most people, even 99% accuracy probably overstates what their phone's input system actually achieves. There is a reason people have the clever "written on phone" footers.

That is to say, people have a higher tolerance for things that are within expected norms of their environment. Ideally, we want no corrections. But, having to do them constantly for a time will quickly desensitize people to this. (And yes, this is currently just an assertion of mine, I don't have data backing it. Just some anecdotes.)


I always saw the "Sent from my iPhone" footer as nothing more than advertising, and the ensuing "Sent from my x" as a small act of rebellion or tongue-in-cheek reference.

I hadn't considered that it was intended to act as a warning that the content might be more error-prone.


I've seen a few that were direct statements of more typos because of the device used. Probably did start and largely remain advertising, though.


This is kind of interesting. I had a thought about how to approach the handwriting recognition problem a few years back, and surprisingly I thought of this curvature-based approach also. I never implemented it (too lazy to try...) but it's cool to see how well something like that might work.
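
For anyone curious about the general idea, a bare-bones curvature-style feature (my own sketch, not the article's implementation) is the accumulated turning angle along a sampled stroke:

    import math

    def total_turning(points):
        # Sum of signed turning angles along a polyline stroke.
        # A straight line gives ~0, a "C" roughly pi, a closed loop
        # ~2*pi -- a coarse, scale-invariant signature to match on.
        total = 0.0
        for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
            a1 = math.atan2(y1 - y0, x1 - x0)
            a2 = math.atan2(y2 - y1, x2 - x1)
            total += (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        return total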


The linked demo is by far the most impressive thing I've seen all week. I wish a certain Microsoft chart editor was as easy and unfinicky to use as this demo from 1966 (52 years ago), and that's still one of the better editors out there.


Comparing this with the Graffiti system on my old (2000-ish) Palm Pilot, this is somewhat more reliable even on a first attempt than that was after I'd made a concerted effort to learn it. Very cool!

Edit: I think where the Afterword says "inputting text with a stylus is likely slower than touch typing", they're forgetting that we still don't have a really acceptable way of inputting text on mobile devices. Swype and its ilk are close, but still hamfisted at times.


I missed it the first time, but the article has linked source code (github.com/jackschaedler/handwriting-recognition) for all the D3.js demos that is worth a read.


All this constant talk of AI and singularities and whatnot.

Reality check: Our machines do not yet accurately manage simple reading tasks.


I did something like this in Visual Basic and submitted it to PC PLUS in the UK, back in the early '90s.

It was (yay!) published as recognit.bas (VB) and I'd be really happy if someone still has a copy.

It recognized just numbers but the basis of operation was similar to the linked article.


I wonder if it would be possible to use Hinton's idea of local features (where a 3 is recognized as an E in a 180-degree rotation map and a W in a 90-degree rotation map) to make the recognition partially rotation-invariant...
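
As I read the idea, it might look something like this toy sketch (the classifier and the label remapping are both hypothetical and purely illustrative): classify rotated copies of the stroke and translate any hit back to the upright label:

    import math

    # Illustrative only: in a 180-degree map a '3' reads as 'E',
    # in a 90-degree map as 'W', so hits are translated back.
    LABEL_BACK = {90: {'W': '3'}, 180: {'E': '3'}}

    def rotate(points, degrees):
        t = math.radians(degrees)
        c, s = math.cos(t), math.sin(t)
        return [(c * x - s * y, s * x + c * y) for x, y in points]

    def recognize_any_rotation(points, recognize):
        # recognize: any single-orientation stroke classifier
        # returning a label or None (hypothetical).
        for deg in (0, 90, 180):
            label = recognize(rotate(points, deg))
            if label is not None:
                return LABEL_BACK.get(deg, {}).get(label, label)
        return None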


So much time spent on manual feature engineering, which could be picked up implicitly by RNNs.



