I was thinking actual PageRank would be interesting, ranking libraries by who uses what, transitively.
It might be nice to also weight in app usage by users and downloads (since apps aren't used by other libraries), but sticking to code-used-by-code alone could be interesting:
So, "one lib/app, one vote" (weighted by how much it is used by other libs/apps) measures coders' evaluation, not app-users'.
And skip the search part (which PageRank normally feeds into), keeping just the ranking, e.g. a top 10.
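The "one lib, one vote, weighted by usage" idea above is just PageRank power iteration over a dependency graph. A minimal sketch, with an entirely hypothetical dependency set (an edge A → B means "A depends on B", so B collects rank from everything that uses it):

```python
def pagerank(deps, damping=0.85, iters=50):
    """deps: {lib: [libs it depends on]}; returns {lib: rank}."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for lib, targets in deps.items():
            if targets:
                # Each lib splits its vote among its dependencies.
                share = damping * rank[lib] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node (no dependencies): redistribute evenly.
                for node in nodes:
                    new[node] += damping * rank[lib] / n
        rank = new
    return rank

# Hypothetical data: two apps lean on "jquery", one also on "lodash".
deps = {
    "app1": ["jquery", "lodash"],
    "app2": ["jquery"],
    "lodash": [],
    "jquery": [],
}
ranks = pagerank(deps)
top = sorted(ranks, key=ranks.get, reverse=True)
print(top[0])  # the most-depended-on lib ranks first
```

Transitivity comes for free: a lib used by other highly-ranked libs inherits rank from them, which is exactly the "coders' evaluation" being proposed.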
> The end result is that developers contribute to open source in a vacuum; they develop, hoping — but never knowing — whether their library is being used at-large.
Is popularity the main reason behind releasing and maintaining open source software?
Unlike celebrity culture, popularity in the open source world translates to actual impact on the web. As an author of a popular library, your code plays a direct part in how other developers structure their codebase, and -- depending on the library -- the end user experience.
And, yeah, impact/change/popularity (whatever you want to call it) is certainly a main reason behind releasing and maintaining open source software. Perhaps other dominant reasons include giving users differently opinionated alternatives that better suit their workflow, advancing the technical know-how of a field, and simply experimenting for expressiveness' sake.
It's not the main reason, but it's natural to be curious if someone is using code you spent so much time on, and there's nothing wrong or anti-open source about that.
I've been using the http://www.ruby-toolbox.com popularity score as a similar kind of proxy for usefulness, reliability, and desirability for Ruby gems.
It attempts to sum up how widely used and actively maintained a particular gem is. Plus, the site categorizes gems, so when you're looking for a solution to a problem you don't have particular expertise in, you can see at a glance not only which gems may help you out, but also which ones you can trust to be maintained for a while.
The score simply being the number of top sites is pretty difficult to interpret in isolation. For instance, knowing that React has a libscore of 203 doesn't mean much unless I also know that this is out of one million sites polled, plus the score of some other library whose popularity I already have a vague mental picture of (e.g. compared to jQuery's 600k+).
Other than that, very interesting information to sift through.
Exactly. That's why we removed any mention of a "score" from the copywriting before launching. We realized how vague a score could be. We focus on site counts now, since they're raw/unfiltered data.
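One way to make the raw site counts self-explanatory is to normalize them against the crawl size. A quick sketch, using the one-million figure and the counts mentioned in this thread (the numbers are illustrative, not official libscore data):

```python
CRAWL_SIZE = 1_000_000  # sites polled, per the thread

def usage_pct(site_count, total=CRAWL_SIZE):
    """Convert a raw site count into a percentage of the crawl."""
    return 100 * site_count / total

# e.g. React's count of 203 vs. jQuery's roughly 600k
print(f"React:  {usage_pct(203):.4f}%")    # a tiny fraction of sites
print(f"jQuery: {usage_pct(600_000):.1f}%")  # the majority of sites
```

Presented this way, "203" immediately reads as "about 0.02% of polled sites" rather than as an opaque score.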
This is awesome! Thank you for your work, and thanks to Stripe and DigitalOcean for the sponsorship.
I've just noticed that the search is not very good, apart from the requirement of having to write exactly the correct name. Even writing "Angular" (which is the name shown in the global data) doesn't work.
A fuzzy, case-insensitive, autocompleting search would be much more useful.
The problem was that fuzzy search would have been technically overwhelming to implement due to the size of our data sets (1 million sites * avg. # of leaked global variables). Also, it would have resulted in a lot of confusing matches because of how arbitrary JavaScript variable names are.
Keeping it to one-for-one case sensitive lookups was the only way to clearly express searching behavior and return accurate data every time. The downside is that we force people to read our homepage how-to to learn how to use it :)
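The trade-off described above comes down to lookup cost: a one-for-one, case-sensitive search against leaked globals can be a single hash-map hit, while fuzzy matching scans the whole keyspace. A hypothetical sketch (the index contents are made up for illustration):

```python
# Hypothetical index: leaked global variable name -> site count.
# JavaScript identifiers are case-sensitive, so "React" and "react"
# are genuinely different keys, not typos of each other.
index = {"React": 203, "jQuery": 600_000, "angular": 74_000}

def lookup(name):
    # One-for-one, case-sensitive: a single O(1) dict hit per query.
    return index.get(name)

print(lookup("React"))  # 203
print(lookup("react"))  # None: case matters, as the site's how-to warns
```

Fuzzy search would instead have to compare the query against every key (or maintain a trigram/edit-distance index), and since arbitrary globals like `_gaq` or `t` leak into the data, near-matches would often be noise rather than the library the user meant.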
Not very functional at the moment. Especially if you don't use jQuery - the top 200 are practically useless IMO.
That being said, being able to search for a library I'm already using and see which top libraries are used in conjunction with it would probably help.
This sounds pretty similar to http://nerdydata.com which lets you search the source code of webpages. Does this only look at the window variable on webpages?
It's not working at the moment. All of my searches ended with no results, and it took me a while to notice that the request to the web service is returning a 502. Better error messages would be helpful.
SSL will be on later tonight, we didn't have the time to make the change before launching today and worried that there might be a bit of down time.
I imagine we can programmatically pick up on CSS namespaces, though. You should create an issue over here: https://github.com/julianshapiro/libscore. It's a good idea.
What's interesting about the work we've done on libscore is that it shows the end result -- whether a lib was actually, ultimately used on a site. npm can tell you download stats, but that's where its data ends.
I noticed that you originally had the source code in the repository and later removed it. The commit history still contains all the code.
Question: why did you decide to go private? You may have your reasons; care to share? Otherwise it's worrying: helping the open source community, yet not being open source.