My impression from reading about steno is that a lot of the complexity in steno systems comes from disambiguating similar-sounding words. Have you found this to be the case?
If so, I'm thinking predictive techniques could help.
While predictive techniques can help, you really want something that is accurate 100% of the time so you don't have to keep an eye on your output. I'd rather memorize two different chords and get 100% correctness on the "I"/"eye" problem than get 99% correctness from a predictive engine. Having to keep track of what you're typing to catch the engine's mistakes makes predictive techniques not so good for speed.
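The determinism being described boils down to a plain dictionary lookup: each chord maps to exactly one translation, so the same stroke always produces the same word. A minimal sketch in Python, with entirely hypothetical chords (not taken from any real steno dictionary):

```python
# Hypothetical chord-to-word mapping; real steno dictionaries (e.g.
# Plover's) use their own stroke notation and are much larger.
STENO_DICT = {
    "EU": "I",       # hypothetical chord for the pronoun "I"
    "AO*EU": "eye",  # hypothetical chord for the noun "eye"
}

def translate(chord: str) -> str:
    # A miss comes out as the raw steno, not a guess -- the system
    # never silently substitutes a "likely" word.
    return STENO_DICT.get(chord, chord)

# The same chord yields the same word every time, so there's no need
# to watch the output for a predictive engine's wrong guesses.
print(translate("EU"))     # -> I
print(translate("AO*EU"))  # -> eye
```

The point of the two-chord memorization cost is visible here: disambiguation happens once, at dictionary-design time, instead of at typing time on every occurrence.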
Yeah, one of the things I love about steno (as opposed to predictive systems like voice recognition or autocorrect) is its 100% determinism. That tiny pause of hesitation to wait and see whether a word has come out properly is completely disruptive of flow for me. In the most recent video, I did basically the whole thing keeping my eyes on the text I was transcribing from, rather than watching my output. You can see me correcting a few errors, but that's because my fingers told me I'd made one, which prompted me to look over and figure out what I'd screwed up. Otherwise I could trust that whenever I hit a stroke, it translated as the exact same thing every time, so I never had to hover over my text watching for errors. It makes the whole process way more pleasurable.