What you said used to be the standard. In fact, it used to be even more convenient: you just used your Linux distribution's package manager. I can hardly imagine a piece of software that you cannot get working as easily as cloning the git repository and following the instructions (instructions that are typically easy to follow and are supposed to work), or by using a programming language's package manager, or your Linux distribution's or BSD's package manager.
At any rate, what I am trying to say is that if poorly documented software (i.e. documentation that was usually never tested) is common, then we are definitely doing something wrong. You should be able to follow the installation instructions and have them work, i.e. just read INSTALL or README and follow along, like in the good old times!
You said it yourself: "I don't know why this is a hard bar to clear, but bravo." It should not be a hard bar; it should be expected, and it should be done. It should not be a magical or surprising thing.
Ah, yeah, very much agreed. I don't mind stuff not being packaged in official distro repos (although it would of course be very nice), but a build/install process being more complex than `configure && make && make install` borders on "bug" territory in my mind. In fairness, however, speech-to-text seems to mostly suffer from problems after the build/install step proper; it's common enough that getting to the point of running the program is okay, but it's useless without the language/speech models that make it actually understand speech, and those are... arcane, I think is the best word. Or huge files. A major differentiator for this one was that getting and using the model was "wget this single 40M file and place it here". And then at runtime, the program correctly grabbed the right microphone and automatically output text to stdout and was mostly accurate at doing so:) Others have done any one of those steps, but never all.
I agree. I was going to suggest exactly those commands. OK, you usually have to execute "autogen.sh" or run "autoconf" when you build from a fresh clone, and I am usually fine with perhaps even "mkdir build && cd build && cmake ..".
> A major differentiator for this one was that getting and using the model was "wget this single 40M file and place it here". And then at runtime, the program correctly grabbed the right microphone and automatically output text to stdout and was mostly accurate at doing so:)
Damn, that is crazy as well. If I were the one whose software depended on models or assets that are relatively large, I would either put them into a separate git repository and pull them in as a git submodule, or host them (probably with mirrors) and have a Makefile target that downloads them using "wget" or "curl". That, or execute a script from the Makefile, or anything of the like.
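As a rough sketch of the wget/curl idea (the model filename and mirror URLs below are made up for illustration), a small POSIX shell helper that a Makefile target could call might look like this:

```shell
# Hypothetical sketch: fetch a model file if it is missing, trying mirrors
# in order. The filename and URLs are invented for this example.

fetch_model() {
    model=$1
    shift

    # Skip the download entirely if the model is already in place.
    if [ -f "$model" ]; then
        echo "$model already present"
        return 0
    fi

    # Try each mirror in turn; -f fails on HTTP errors, -L follows redirects.
    for url in "$@"; do
        if curl -fL -o "$model.part" "$url"; then
            # Download to a .part file first so a partial download
            # never masquerades as a complete model.
            mv "$model.part" "$model"
            echo "downloaded $model from $url"
            return 0
        fi
    done

    echo "error: could not download $model from any mirror" >&2
    return 1
}

# Example usage (mirrors are placeholders):
# fetch_model model-small-40M.bin \
#     https://example.org/models/model-small-40M.bin \
#     https://mirror.example.net/models/model-small-40M.bin
```

A Makefile target could then just invoke this script, and because the helper checks for the file first, repeated builds skip the download entirely.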
For programs like this, grabbing the right microphone and outputting text to stdout should be pretty common, too[1]. If the program does not do this, then it should be considered buggy, and I would think that the developers did not even test their own stuff!
[1] I still have related issues with Audacity to this day! I cannot complain that much, but yeah.
What I like so much about using Arch and the AUR is that there is a community that will do these fiddly processes for you. Of course, not everything is in the AUR, but the barrier to entry is much lower than with typical Linux distributions' package repositories.