Whisper is open source: download it along with the models and run it in a Docker container as a server.
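For anyone wanting to try the self-hosted route, a minimal sketch might look like the following, assuming the community `onerahmet/openai-whisper-asr-webservice` Docker image (the specific image, port, and endpoint are assumptions, not part of the original comment; other Whisper server images work similarly):

```shell
# Sketch: self-host Whisper as an HTTP transcription server.
# Assumes the community openai-whisper-asr-webservice image;
# model choice and port are illustrative.
docker run -d --name whisper \
  -p 9000:9000 \
  -e ASR_MODEL=base \
  onerahmet/openai-whisper-asr-webservice:latest

# Then send an audio file for transcription:
curl -F "audio_file=@recording.mp3" \
  "http://localhost:9000/asr?output=json"
```

On CPU-only hardware the larger models are slow, which is exactly the trade-off the comment below is pointing at.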
Honestly, try to see it as a favor that it's using OpenAI's endpoint, since some of us don't think it's feasible to keep a GPU-loaded server running 24/7 just for occasional transcriptions.