"It depends". The answer to this has multiple parts. Though my preference is to use my OS package manager (so I'm biased towards that), I'll try to explain my reasoning as best as I can.
And I'm using "OS" in a liberal sense, to also mean "different Linux distros" and "BSDs".
---
First, you also have to think about what happens after installing: upgrades, uninstalls, and errors during any of those processes. With these scripts, if something goes wrong you're mostly on your own (though yes, you can go to the support channels for the specific software to fix the specific problem in your specific situation).
At least your Deno link is versioned, so it would be less painful to debug if something goes wrong, but that's only relatively speaking.
A package manager deals mostly with static archives and keeps track of every file it controls. It is trivial to know whether a particular file belongs to a specific package or not. Upgrades can be centrally managed. And since it knows which programs it has installed, it can print a list of the names and versions of everything, which you can compare against a list of CVEs or the like.
Custom installation scripts are, well, custom, and each has its own way of doing these things.
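To make that concrete, here is a Debian/dpkg-flavored sketch of those queries; rpm (`rpm -qf`, `rpm -qa`) and pacman (`pacman -Qo`, `pacman -Q`) have equivalents, and the exact flags are worth checking against your own system:

```shell
# Debian/dpkg-flavored sketch; guarded so it degrades gracefully
# on systems without dpkg.
if command -v dpkg-query >/dev/null 2>&1; then
    # Which package owns this binary?
    dpkg -S "$(command -v ls)" || echo "not owned by any package"
    # Name and version of everything installed,
    # ready to diff against a CVE list.
    dpkg-query -W -f='${Package} ${Version}\n' | head -n 5
    pm=dpkg
else
    echo "dpkg not available; use your package manager's equivalent queries"
    pm=none
fi
```

A curl-pipe-sh script gives you none of this: there is no single place to ask "what did you install, and where?".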
---
Second, those instructions tell you to run a random script without even verifying its integrity first. There is no versioning for that script, no audit trail, no signature to guarantee that the script is the same one the author uploaded at some point in the past and has not been tampered with since.
A package manager (usually) deals with static archives whose signatures are verified before anything is done with them. An install operation is a simple, boring "extract archive", maybe with a small post-install script (which was part of the signed archive).
And the public keys used by package managers are (usually) already on your local machine, so for most operations you are validating signatures with an already-known public key.
(Note: the presence of a signature does not imply the software is trusted. It only means that whoever signed the thing had access to the private key.)
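If you must use one of those scripts, the least bad approach is to download it, read it, and pin its checksum so later runs refuse to execute a changed script. A minimal sketch (the generated file stands in for a downloaded install script; a real one would come from `curl -o`):

```shell
# Sketch of checksum pinning: download first, audit, record a hash,
# and refuse to run if the script ever differs from what you audited.
set -eu

tmp=$(mktemp)
# Stand-in for the downloaded install script.
printf 'echo "pretend install step"\n' > "$tmp"

# On first audit you would record this hash somewhere you control.
expected=$(sha256sum "$tmp" | awk '{print $1}')

# On later runs, recompute and compare before executing.
actual=$(sha256sum "$tmp" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    sh "$tmp"
else
    echo "checksum mismatch; refusing to run" >&2
    exit 1
fi
rm -f "$tmp"
```

This is weaker than a signature (you are trusting wherever you stored the hash), but it at least turns "run whatever the server sends today" into "run the exact bytes I audited".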
---
Third, the maintainer of your OS package repository usually knows your OS better than the upstream developer, and can apply patches to the packaged software: bug fixes, or even disabling/removing telemetry that upstream enables by default. (This can sometimes cause problems, though I have never experienced them myself, or at least nothing comes to mind.)
And with a package manager you know at least one other person (the package maintainer) has used this program on this specific OS. Some versions may get packaged without actual testing (just "git pull && run the packaging scripts", or the equivalent), but in those cases there's the escape hatch of installing a previous, working version of the package.
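That escape hatch, sketched apt-style (other managers differ: pacman keeps old packages in `/var/cache/pacman/pkg`, and Nix can roll back whole generations); `coreutils` is just an arbitrary example package:

```shell
# Debian/apt-flavored sketch of pinning an older, known-good version.
if command -v apt-cache >/dev/null 2>&1; then
    # See which versions the configured repositories offer.
    versions=$(apt-cache policy coreutils 2>/dev/null || true)
    # Then install one explicitly (not executed here):
    #   sudo apt-get install coreutils=<known-good-version>
else
    versions="apt not available; use your package manager's equivalent"
fi
[ -n "$versions" ] || versions="(no version info found)"
echo "$versions"
```

With a custom install script there is usually no equivalent: you get whatever version the script's URL serves today.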
---
With that said: if you were to ask me whether there is a situation where I would use those scripts, then _maybe_ (big maybe) it would be (1) inside a container, where I don't care about any garbage left behind because I can nuke the whole thing at once; or (2) in an isolated environment where I don't even care if the script needs root privileges (this second case implies I would not do it in Docker).