I actually came into the comments to make a joke about that very thing :-)
Even if you read the file first, you have no guarantee that the file bash gets from curl, should you copy/paste the commands in the README, is the same thing. Yeah, it's way outside what anyone would consider "reasonable" to be that paranoid, but the mere mention of certain three-letter agencies triggers any informed person's built-in "paranoid mode", and rightly so.
(This is in answer to the below question from another user. I’m too lazy to do two threaded responses on mobile. Maybe if mobile wasn’t such a shit platform with tiny ass screens and software keyboards that didn’t duck up everything you typed-ahh help there it goes again I swear to god Steve when I die I’m coming down there to kick your ass you son of a…)
If you are slightly more paranoid, keep in mind that the file you downloaded for manual inspection may not be the file you download automatically for bash-piping. This article from 2016 was making the rounds on HN about three years ago: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
Of course, a properly, professionally paranoid person would download every component manually and store the received artifacts locally (for caching, reproducibility, the TOFU principle, and general BCP needs), then build from those only. In fact, in high-trust environments it's common for CI systems to not be able to reach the internet at all.
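As a rough sketch of that local-cache discipline (filenames and paths here are hypothetical, not from any real project): fetch each artifact once while you still trust the source, record its digest in a checksums file, and have every later build re-verify against the cache instead of hitting the network.

```shell
#!/bin/sh
# Sketch: trusted first fetch -> pinned digest -> offline re-verification.
# "tool-1.0.tar.gz" stands in for a real downloaded artifact.
set -eu

mkdir -p artifacts
printf 'example artifact contents\n' > artifacts/tool-1.0.tar.gz  # stand-in for the real download

# On the first (trusted) fetch, record the SHA-256 digest once.
( cd artifacts && sha256sum tool-1.0.tar.gz > SHA256SUMS )

# Every subsequent build verifies the cached bytes before using them;
# a tampered or re-downloaded file makes this exit non-zero.
( cd artifacts && sha256sum -c SHA256SUMS )   # prints: tool-1.0.tar.gz: OK
```

Commit the SHA256SUMS file (or distribute it out-of-band); the artifacts themselves can live in whatever cache or mirror your CI can reach without internet access.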
Well, this is exactly why it's open source. You can build everything yourself, check the scripts, etc. At the end of the day, all these tools are intended to help people find issues as early as possible.
The script downloads a binary blob and copies it into your bin folder. No hash check ... If somebody can replace the binary blob, there's no security check before I execute it.
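The missing check is a few lines of shell. A minimal sketch, assuming you have a digest pinned out-of-band (the function name, filenames, and the demo "download" below are all hypothetical): refuse to execute anything whose SHA-256 doesn't match the pinned value.

```shell
#!/bin/sh
# Sketch: only execute a downloaded script if its SHA-256 matches a pinned digest.
set -eu

run_if_hash_matches() {
  file="$1"; expected="$2"
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$actual" = "$expected" ]; then
    sh "$file"                                  # run only the verified bytes
  else
    echo "hash mismatch: refusing to execute $file" >&2
    return 1
  fi
}

# Demo with a locally created stand-in for the downloaded script.
printf 'echo ok\n' > install.sh
pinned=$(sha256sum install.sh | cut -d' ' -f1)  # normally published out-of-band, not computed here
run_if_hash_matches install.sh "$pinned"        # prints: ok
```

The pinned digest only helps if it travels over a different channel than the blob itself (release notes, a signed tag, the README over TLS from a different host); a digest served next to the binary can be swapped along with it.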