
It'd be nice, but I don't think that model would be workable here.

Somewhat oversimplified, but the two Japanese syllabaries hiragana and katakana are around 50 distinct characters each, so a core 3x4 grid of "keys", each responding to a tap (and maybe a swipe up/down/left/right), gives you roughly the full set of each. Generally, the key you tap designates the consonant; the swipe direction (or lack of one) gives you the vowel. There are 5 vowels and roughly 10 consonants. There are a couple of other symbols added on as modifiers for voicing, etc. Again, oversimplified, but that's roughly it.

(Side note, and I'm guessing here, but I suspect this model probably evolved from T9 texting)
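To make the tap-and-swipe scheme concrete, here's a minimal sketch (Python, just two consonant rows as a hypothetical illustration; the direction-to-vowel assignment follows the common flick layout, tap = a, left = i, up = u, right = e, down = o):

    # Minimal sketch of the flick-input model described above.
    # Each "key" is one consonant row; the gesture picks the vowel.
    FLICK = {
        # key: characters for (tap, left, up, right, down) = vowels a, i, u, e, o
        "ka": ("か", "き", "く", "け", "こ"),
        "na": ("な", "に", "ぬ", "ね", "の"),
    }

    GESTURES = ["tap", "left", "up", "right", "down"]

    def kana(key, gesture="tap"):
        return FLICK[key][GESTURES.index(gesture)]

    print(kana("ka"))          # か (ka)
    print(kana("ka", "up"))    # く (ku)
    print(kana("na", "down"))  # の (no)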

From a brief inspection of https://en.wikipedia.org/wiki/Ge%CA%BDez_script, the Ge'ez syllabary/abugida (used e.g. for Amharic) needs 6-8 vowels across at least 26 consonants, and then some more combinations for labialization/velarization, and then some more for application in specific other languages.

Following the Japanese model, that'd be a pretty big grid :) Phonetic input seems a more workable model to me at least.
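For a rough sense of the scale problem (all counts are the approximate figures above, before voicing marks, labialized forms, or language-specific extensions):

    # Rough scale comparison under the flick model: one key per consonant,
    # one gesture per vowel (a tap plus swipe directions).
    def flick_grid(consonants, vowels):
        keys = consonants          # one key per consonant row
        gestures = vowels          # tap + (vowels - 1) swipes needed per key
        return keys, gestures, consonants * vowels

    print(flick_grid(10, 5))   # kana:  (10, 5, 50)  -> fits a 3x4 board
    print(flick_grid(26, 7))   # Ge'ez: (26, 7, 182) -> 26+ keys, 7+ gestures each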



