The adopted metric for this seems a little weird. I checked in my own bailiwick (C++), and noticed that of the top 200, lmms is ranked 103, MuseScore 122, and Ardour 192.
There's nothing necessarily wrong with this ranking (even if I do feel personally slighted as the lead dev of Ardour), but ...
These are all end-user applications. None of them is a dependency of any other open source project. None of them are libraries, or infrastructure, or plumbing. The criticality score has no rating for "uniqueness" or "irreplaceability".
Missing from the list are such notables as:
* boost (the actual boost, not a fork)
* all of the *mm wrappers for the GNOME stack
* sigc++ (C++ standard compliant signals & slots)
Meanwhile, #1 on the list is cmssw, a library of software components for particle physics. I'm not saying that particle physics isn't important. It's just hard to argue that a widely used component for particle physics software is "the most critical OSS C++ project".
The idea of determining what OSS projects are most "critical" seems like a good one. It's just hard to believe that Pike's metrics, as used here, really accomplish that.
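For reference, Pike's proposal combines a handful of repository signals into a single number. The sketch below shows the general shape of that formula as I understand it; the signal names, weights, and thresholds here are illustrative stand-ins, not the exact values the project uses:

```python
import math

def criticality(signals, weights, thresholds):
    """Sketch of a Pike-style criticality score.

    Each raw signal S_i is squashed by log(1 + S_i) / log(1 + max(S_i, T_i)),
    which maps it into [0, 1] so that no single large value (say, a huge
    commit count) can dominate. The squashed signals are then combined
    as a weighted average, giving a final score in [0, 1].
    """
    total_weight = sum(weights.values())
    score = 0.0
    for name, s in signals.items():
        t = thresholds[name]
        score += weights[name] * math.log(1 + s) / math.log(1 + max(s, t))
    return score / total_weight

# Made-up example numbers for a hypothetical project:
signals = {"contributor_count": 40, "commit_frequency": 12, "dependents_count": 0}
weights = {"contributor_count": 2, "commit_frequency": 1, "dependents_count": 2}
thresholds = {"contributor_count": 5000, "commit_frequency": 1000, "dependents_count": 500000}
print(round(criticality(signals, weights, thresholds), 3))
```

Note that with a dependents-style signal of zero, that term contributes nothing, which is exactly why end-user applications with no reverse dependencies can still rank highly on the other activity signals.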
A linear ranking of all open-source software projects, to determine which projects will receive future funding, to improve the security of digital infrastructure for civil society.
Announced on a Friday, not a wonderful day for news coverage.
How are the following input criteria determined for projects which do not use Github as their primary workflow, e.g. those which use mailing lists?
initiative which is interesting in its own right. Not being able to trust where the software we rely on originates, or that it hasn't been tampered with along the way, is a weakness we've been turning a blind eye to for too long.
It's interesting that they refer to the Core Infrastructure Initiative. Not in the sense that they participated in funding it, but because that project does not seem to be very alive. I browsed the website recently, and the general impression is that it has not been updated in years. So after the first initiative faded away, they create the next one?
Not even arguably - explicitly. From the CII website:
> The CII has been replaced by the Open Source Security Foundation (OpenSSF). Please go to the OpenSSF site for current activities in securing open source software. In particular, the CII Best Practices badge work continues as part of the OpenSSF Best Practices Working Group, while the CII research conducted on open source software security by Harvard continues as part of the OpenSSF Securing Critical Projects Working Group.
> This CII website is being retained to preserve historical information and to help with transition to the OpenSSF.
Keeping the website as a historical reference is good practice. But if not every page on the site carries a banner explaining this background, it can be very misleading: visitors don't necessarily read the front page.
Given the benefit Google has received from open source software, tossing $1M at this and patting themselves on the back for it seems more like a slap in the face to the open source community than anything.