Sure, but machines are actually pretty good at spoken and written translation these days, so I would expect that once the image recognition is solved they could handle ASL readily as well.



Oh absolutely, I agree. I just wanted to make certain it was clear that it isn’t as simple as classifying images as words.


Since ASL doesn't have a massive corpus of scrapeable training data (at least I would think), it's probably on the harder side when it comes to creating machine translators.


There must be a huge corpus of TV broadcasts, filmed live events, etc., where a person speaking is translated into sign language in real time.


Those are usually subtitled.


In theory, with a mastery of speech recognition, one would assume you could apply that to any speech that has an accompanying ASL interpreter, deriving ASL elements from the context given by the spoken words.

In practice that's all difficult, if possible at all -- but just sayin': we have decently good speech recognition at this point, so perhaps we aren't far from self-training ASL against something similar.
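To make that concrete, here's a rough sketch of how the data-harvesting half might work, assuming you already have a word-timestamped ASR transcript for a broadcast with an on-screen interpreter. Everything here (the JSON layout, the frame rate, the function names) is a hypothetical illustration, not any real API:

    # Sketch: pair ASR word timings with interpreter video frames to
    # harvest weakly labeled (clip, text) training examples.
    import json
    from dataclasses import dataclass

    FPS = 30  # assumed frame rate of the broadcast video

    @dataclass
    class Example:
        text: str          # spoken phrase from the transcript
        start_frame: int   # first frame of the aligned interpreter clip
        end_frame: int     # last frame of the aligned interpreter clip

    def harvest(transcript_path: str, window: float = 2.0) -> list[Example]:
        """Group timestamped words into short phrases and map each
        phrase's time span onto frame indices in the interpreter video."""
        words = json.load(open(transcript_path))  # [{"word", "start", "end"}, ...]
        examples, phrase, t0 = [], [], None
        for w in words:
            if t0 is None:
                t0 = w["start"]
            phrase.append(w)
            if w["end"] - t0 >= window:  # close the phrase every ~2 s
                examples.append(Example(
                    text=" ".join(x["word"] for x in phrase),
                    start_frame=int(t0 * FPS),
                    end_frame=int(w["end"] * FPS),
                ))
                phrase, t0 = [], None
        return examples  # trailing partial phrase dropped for brevity

One catch: interpreters lag a few seconds behind the speaker and often paraphrase rather than sign word-for-word, so pairs harvested this way would be noisy weak labels at best.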


Usually, but I'm referring to the broadcasts that have a live sign language interpreter on-screen.


If Apple is filming its ASL employees, it’s about to get a lot more training data.


I suspect that buoys (holding a sign in place with the non-dominant hand as a reference point) and spatial placement (assigning referents to locations in signing space) make signed languages harder for machine translation than spoken ones.



