I had my second surgery just a few months ago. I can type again, but each time I had to use my left hand for months. Initially, it feels like your brain doesn't function properly anymore (not to mention the psychological effort it takes to stay focused on work when you feel your hands are falling apart). Typing speed is directly related to how fast you can move your hands to keep up with your thoughts. I tried a Kinesis and even a split vertical keyboard (Keyboardio), but neither avoided the pain and numbness that came with typing. The other problem with thumb-cluster keyboards is that your IDE productivity goes to zero; I was faster with just my left hand on a regular keyboard than with both hands on one of those. I think this would be fixable with a good amount of time spent remapping shortcuts and so on. Now that my hand works again, I think I should start spending time getting used to my Keyboardio and at least try to buy myself some time.
The "voice coding" space is maybe not a mess, but far from great or even acceptable. However, there seem to be more recent efforts to make better tools. I would definitely check https://serenade.ai/ out.
The main problem, I think, is that "voice coding" focuses too much on literal editor typing, which these tools can't do well: combined with code syntax, it becomes too complex. Instead, they should focus on higher-level actions (which, by the way, Serenade does) along with a different approach to typing. Vim is a good example of where editing should be; IntelliJ's refactorings are where voice coding should start. With all the AI buzz, it's unbelievable how bad voice recognition still is. I'm not talking about "Siri, set an alarm", but about separating context from tone, not having to say things two or three times, having good response latency, and so on.
Lastly, I wish there were simple voice assistance for code navigation: go to definition, find usages, etc. These commands are much simpler to "parse" than code structure. Unfortunately, as far as I've seen, no tool even tackles this.
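To give a sense of how little "parsing" such commands actually need, here's a rough sketch in Python. Everything in it is hypothetical (the ide object, its methods, the phrase patterns); it's not any real editor or plugin API, just an illustration of mapping utterances to high-level actions instead of keystrokes:

    import re

    # Map spoken phrases to high-level editor actions instead of literal keystrokes.
    # The "ide" object is a hypothetical adapter; a real plugin would call the
    # editor's own command API (e.g. an IntelliJ action or an LSP request).
    COMMANDS = [
        (re.compile(r"^go to definition(?: of (?P<name>\w+))?$"), "goto_definition"),
        (re.compile(r"^find usages(?: of (?P<name>\w+))?$"),      "find_usages"),
        (re.compile(r"^rename (?P<old>\w+) to (?P<new>\w+)$"),    "rename"),
    ]

    def dispatch(transcript, ide):
        """Turn one recognized utterance into one editor action, or report no match."""
        text = transcript.strip().lower()
        for pattern, action in COMMANDS:
            match = pattern.match(text)
            if match:
                args = {k: v for k, v in match.groupdict().items() if v}
                return getattr(ide, action)(**args)
        return None  # fall back to dictation or error feedback

    # e.g. dispatch("rename counter to retries", ide) -> ide.rename(old="counter", new="retries")

Even a handful of patterns like these would cover a large chunk of everyday navigation, without the recognizer ever having to understand code syntax.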
I’m one of the creators of Serenade—thanks for mentioning us! We totally agree about the need for higher-level layers of abstraction, and we’re working on some of the code navigation functionality you mentioned right now. If you have any other ideas or feedback, we’d love to chat more, I’m matt@serenade.ai.
Among all the demos above, Serenade seems to be the only product that does the right thing. Why the hell should I have to say "colon" or "quote" out loud when the tool can insert them automatically?
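For example, something along these lines (a toy sketch, not how Serenade actually works; both helper functions are made up) could supply the punctuation from a statement template so the speaker never has to say it:

    # The tool, not the speaker, supplies syntax like colons and quotes.
    def add_function(spoken_name):
        """Turn dictated words into a Python function stub, inserting punctuation."""
        identifier = "_".join(spoken_name.lower().split())
        return f"def {identifier}():\n    pass\n"

    def add_string_assignment(var_words, value_words):
        """e.g. 'set greeting to hello world' -> greeting = "hello world" (quotes added)."""
        name = "_".join(var_words.lower().split())
        return f'{name} = "{value_words}"\n'

    print(add_function("parse config"))                      # def parse_config(): ...
    print(add_string_assignment("greeting", "hello world"))  # greeting = "hello world"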
The "voice coding" space is maybe not a mess, but far from great or even acceptable. However, there seem to be more recent efforts to make better tools. I would definitely check https://serenade.ai/ out.
The main problem, I think is that "voice coding" is too much focused on editor typing which they can't do right as, when combined with code syntax, it becomes too complex. Instead, they should focus on higher level actions (which btw, Serenade does) along with a different approach to typing. I think Vim is a good example of where editing should be. IntelliJ refactoring is where voice coding should start. With all the AI buzz, it's unbelievable how bad voice recognition is. I'm not talking about "Siri set an alarm", but instead separating context from tone, not having to say things 2-3 times having good response latency, etc.
Lastly, I wish there was simple voice assistance for code navigation - like go to definition, find usages, etc. This is much simpler to "parse" than code structure. Unfortunately, this is not even tackled by any tool as far as I've seen.