
People ask this question a lot in academia. The system all but incentivizes creating dupliware and/or abandonware that is critical to a domain, field, or niche, so there is tons of it (there is also a lot of very good, stable software, but I'm ignoring that for now). This makes it very difficult for new parties to find open source software they can rely on, for funders to determine what software to support, for institutions to track their contributions, etc.

I am working on a project that tackles this specific question (and many others) across the open source and open science ecosystems, starting with open source research software but ultimately intending to cover the whole space. Among other things, we want to take continuous measurements of the health of open source projects, the use of open source projects, the perception of open source projects, the "impact" of open source projects, and the needs of open source projects. We are combining data collection with stigmergic markers and, eventually, webs of trust.

It is incubated by NumFOCUS and includes collaborators from across academia and industry.

Bringing it here for your thoughts.

The project is called "The Map of Open Source Science" (MOSS) and is built on the "Simply Omniscient Layer" (SOL). It is essentially an open, permissionless graph database of the digital knowledge and research ecosystems, plus a corresponding visualization (eventually people will be able to build their own visualizations on top of SOL).

Very recent presentation at PyData Vermont: https://www.youtube.com/watch?v=7c51njj9JPs

Recentish update: https://www.opensource.science/updates/the-map-of-open-sourc...

Landing page for the program: https://opensource.science

From our site:

"MOSS is a comprehensive, composable, interactive map of the digital knowledge and research ecosystems. We identify connections between open source research software projects, research papers, organizations, patents, datasets, funding pathways, AI models and applications, and the people who drive it all.

The MOSS proof of concept so far demonstrates nine use-cases:

Identify relevant tools for your research

Showcase the impact and connections of the people who make and maintain open source research tools

Showcase the impact and connections of the organizations that build, support, and fund development of open source research tools

Showcase the impact and connections of open source research tools

Identify gaps in open source research tooling

Navigate repetition of open source research tool features

Identify abandoned open source research tools, prevent abandonment, and reinvigorate stalled projects

Streamline the grant submission and review process

Navigate security flaw identification - who to contact, what downstream tools are affected, what alternative tools exist"
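To make the last use-case concrete: the kind of typed graph MOSS describes can be sketched in a few lines. This is purely illustrative - the entity kinds, relation names, and node IDs below are my own assumptions, not the actual SOL schema:

```python
# Minimal sketch of a typed (property) graph over the entity kinds MOSS
# names: software, papers, people, organizations, etc.
# All names and relations here are hypothetical, not the SOL schema.
from collections import defaultdict

class Graph:
    def __init__(self):
        self.nodes = {}                  # node_id -> (kind, attrs)
        self.edges = defaultdict(list)   # src -> [(relation, dst)]

    def add_node(self, node_id, kind, **attrs):
        self.nodes[node_id] = (kind, attrs)

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node_id, relation=None):
        return [dst for rel, dst in self.edges[node_id]
                if relation is None or rel == relation]

g = Graph()
g.add_node("numpy", "software")
g.add_node("paper:harris2020", "paper")
g.add_node("alice", "person")            # hypothetical maintainer
g.add_edge("paper:harris2020", "cites", "numpy")
g.add_edge("alice", "maintains", "numpy")

# Security use-case: given a flaw in a tool, who do we contact,
# and what cites/depends on it downstream?
maintainers = [src for src, edges in g.edges.items()
               for rel, dst in edges
               if rel == "maintains" and dst == "numpy"]
citing = [src for src, edges in g.edges.items()
          for rel, dst in edges
          if rel == "cites" and dst == "numpy"]
print(maintainers)  # ['alice']
print(citing)       # ['paper:harris2020']
```

In practice this lives in a real graph store rather than in-memory dicts, but the queries - reverse edge traversals by relation type - are the same shape.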





