I understand and won't argue with your preference for the core Talon app. However, all of my wav2letter code, models, tools, training methodology, and general advice are open source (e.g. I am very active on the github/facebookresearch/wav2letter issue tracker helping others), and wav2letter as used in Talon is built from the public repository and dynamically linked, so I don't think the speech engine is the place to argue against Talon's source policy.
Sorry, I was only using speech engine accuracy as an example. But the freedom of open source applies to any part of the software: Dragon's spectacular failures for me are only partly due to its engine. Also, is the command portion of Talon's wav2letter backend open source? Nonetheless, thank you for releasing some of your work. It is all helpful.
Yes, the decoder in the open-source talonvoice/wav2letter/decoder will decode commands alongside speech if you hand it an NFA blob describing the command graph. It's up to you to generate that NFA, but it's probably identical to the graph you're creating with FSTs, and the C structures are described in the source/header.
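As a rough illustration of the kind of structure you'd be handing the decoder, here is a minimal C sketch of an NFA describing a two-command graph ("press enter" / "press tab"). Every struct, field name, and token id here is an assumption for illustration only; the authoritative layout is whatever the talonvoice/wav2letter decoder source/header actually defines.

```c
/* Hypothetical sketch only: real struct layouts and field names are defined
 * in the talonvoice/wav2letter decoder source/header and will differ. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct nfa_edge {
    int32_t token;    /* word/token id consumed on this transition (assumed) */
    int32_t target;   /* index of the destination state (assumed) */
} nfa_edge;

typedef struct nfa_state {
    int32_t   num_edges;
    nfa_edge *edges;
    uint8_t   is_final; /* nonzero if a command may end in this state (assumed) */
} nfa_state;

typedef struct nfa_graph {
    int32_t    num_states;
    nfa_state *states;
    int32_t    start;   /* index of the start state (assumed) */
} nfa_graph;

/* Build a tiny graph for two commands: "press enter" and "press tab".
 * Token ids are placeholders standing in for lexicon entries. */
static nfa_graph *build_example_graph(int32_t tok_press, int32_t tok_enter,
                                      int32_t tok_tab) {
    nfa_graph *g = calloc(1, sizeof(*g));
    g->num_states = 3;
    g->states = calloc((size_t)g->num_states, sizeof(nfa_state));
    g->start = 0;

    /* state 0 --"press"--> state 1 */
    g->states[0].num_edges = 1;
    g->states[0].edges = calloc(1, sizeof(nfa_edge));
    g->states[0].edges[0] = (nfa_edge){ .token = tok_press, .target = 1 };

    /* state 1 --"enter"--> state 2, state 1 --"tab"--> state 2 */
    g->states[1].num_edges = 2;
    g->states[1].edges = calloc(2, sizeof(nfa_edge));
    g->states[1].edges[0] = (nfa_edge){ .token = tok_enter, .target = 2 };
    g->states[1].edges[1] = (nfa_edge){ .token = tok_tab,   .target = 2 };

    /* state 2 is accepting: both commands terminate here */
    g->states[2].is_final = 1;
    return g;
}

int main(void) {
    /* Placeholder token ids; in practice these would map to lexicon entries. */
    nfa_graph *g = build_example_graph(10, 11, 12);
    printf("states=%d start=%d\n", g->num_states, g->start);
    return 0;
}
```

If you're already generating an FST for your command grammar, a structure like this is essentially the same graph flattened into plain C arrays, which is why serializing it into whatever blob format the decoder expects should be straightforward.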