I don't think you need to know someone's reputation score to use their software.
There are plenty of pieces of open source software running on operating systems that were contributed by people who are effectively anonymous.
Also, a contributor's overall character doesn't seem like a great measure of how malicious their contributions might be, as shown by plenty of examples of people assumed to be good eventually doing bad things.
I don’t understand why people believe that open source inherently makes software secure and trustable. Yes, you have access to look through the code, but I usually don’t have the expertise to understand what I’m looking at. I wouldn’t know how to look for well hidden exploits or malicious intent. I’m still reliant on others to find these issues.
At the moment, I do rely on reputation before I trust open source software. But in the case of an app store, I can trust the reputation of the store. The app store has to work to uphold its reputation, which motivates it to maintain a good track record of identifying problem apps. I agree this is far from perfect, but I think it's much safer than relying on open source.
I love the idea of open source and I hope that it will never be replaced by app stores, but I don’t feel that software is inherently more trustworthy if it’s open source.
I like to distinguish "trustworthy" from "trustable". Trustworthy software is worthy of trust: it is not malicious or unacceptably buggy. Trustable software is software which can, in theory, be verified to be trustworthy. OSS is trustable, but not necessarily trustworthy. Closed-source software might be trustworthy, but it's not trustable (since trustworthiness can't be verified).
I believe it's not necessary to fully verify a piece of software before it can be trusted. We humans are all black boxes; no one can read our minds, but we can trust each other through our reputations. I treat software the same way: as long as it comes from a reputable developer, I'll give it the benefit of the doubt until proven otherwise.
Verified trustworthiness is too high a standard to hold software to. Take Log4j, for example: an open source logging library used by many enterprise Java apps worldwide had a huge vulnerability sitting in its code base for over 7 years. Even with its widespread use and open source code, the exploit was not reported in a timely fashion.
Thus I’m left with reputation as the only practical means of determining trust; imperfect as it may be.
Exactly, and in fact the Reddit post talks about this situation: the code that sends sensitive information is right there on GitHub, but nobody saw it before OP did. Worse, a developer could maintain two codebases, a "clean" version on GitHub and a "dirty" version that is almost identical except for the part that secretly sends your password, and use the dirty one to build the iOS app. How would you ever know?
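For what it's worth, the two-codebase trick is exactly what reproducible builds try to defeat: if anyone can rebuild the published source deterministically and compare checksums against the shipped binary, a "dirty" build is detectable. This only works when the build is bit-for-bit reproducible and you can actually obtain the shipped artifact, which is rarely the case on locked-down app stores. A minimal sketch of the comparison (the byte strings stand in for hypothetical build artifacts; they're not real binaries):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

# A build produced from the public GitHub source...
clean_build = b"binary built from the public repo"

# ...versus the binary actually shipped, carrying a hidden extra payload.
shipped_build = b"binary built from the public repo" + b" + secret exfiltration code"

# Any tampering, however small, changes the digest.
print(digest(clean_build) == digest(shipped_build))  # prints False
```

The catch is the premise: without deterministic builds and access to the shipped binary, there is nothing to compare, and you're back to trusting reputation.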