Because otherwise, people would be able to use non-Tesla GPUs for cloud compute workloads, drastically reducing the cost of cloud GPU compute, and it would also enable the use of non-Tesla GPUs in local GPGPU clusters - further reducing workstation GPU sales through more efficient use of existing hardware.
GPUs are a duopoly due to intellectual property laws and high costs of entry (the only companies I know of that are willing to compete are Chinese, and only as a result of sanctions), so for NVidia this just allows for more profit.
Interestingly, Intel is probably the most open with its GPUs, although it wasn't always that way; perhaps they realised they couldn't compete on performance alone.
AMD do have great open source drivers, but their code merges lag further behind upstream than Intel's. Also, at least a while ago, their open documentation was quite lacking for newer generations of GPUs.
Yeah, but now the comparison for many companies (e.g. R&D dept. is dabbling a bit in machine learning) becomes "buy one big box with 4x RTX 3090 for ~$10k and spin up VMs on that as needed", versus the cloud bill. Previously the cost of owning physical hardware with that capability would be a lot higher.
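The comparison above can be sketched as simple break-even arithmetic. The ~$10k box price comes from the comment; the cloud price per GPU-hour below is a hypothetical placeholder, not a quoted rate from any vendor:

```python
# Rough break-even sketch: how many GPU-hours of cloud usage cost as much
# as the ~$10k up-front price of a local 4x RTX 3090 box?

BOX_COST = 10_000          # ~$10k for the 4x RTX 3090 box (from the comment)
CLOUD_PER_GPU_HOUR = 1.50  # hypothetical per-GPU-hour cloud rate (assumption)

# Cloud GPU-hours that add up to the box's purchase price
break_even_gpu_hours = BOX_COST / CLOUD_PER_GPU_HOUR
print(round(break_even_gpu_hours))  # ~6667 GPU-hours at the assumed rate
```

At the assumed rate, steady use of even a single GPU passes the box's sticker price in well under a year, which is why the local option looks attractive for sustained workloads (ignoring power, admin, and depreciation, which cut both ways).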
This has the potential to challenge the cloud case for sporadic GPU use, since cloud vendors are not permitted to buy RTX cards. But it would require the tooling to become simple to use and reliable.
Certainly, and AWS, GCP and Azure are priced well beyond raw hardware cost even for CPU instances - there are hosts that are 2-3x cheaper for most uses with equivalent hardware resources.