Currently (at least for the-eye) it's about IPFS's barrier to entry, and I expect LibGen's case to be similar. Most people don't know about it, and even those who do would probably just try to find the book they're looking for elsewhere rather than learn how IPFS works.



No need to conflate the frontend (the end-user interface that 'most people' use when trying to 'find the book they're looking for') with the mirroring/archiving backend (the distributed/p2p technology used to 'make sure LibGen never goes down').

The frontend would still be a user-friendly HTTP web application (or a collection of them) that pulls (portions of) the archive from the distributed/resilient backend and serves individual files to clients.
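
For illustration, such a frontend route could be as thin as the sketch below, which streams a requested file from a local IPFS gateway to the browser. The gateway address, the route name, and the book-id-to-CID lookup table are all made-up placeholders for the example, not anything that exists today.

    # Minimal sketch of a frontend pulling a file from the p2p backend.
    # Assumes a local IPFS gateway on the default port and a hypothetical
    # book-id -> CID index; both are placeholders, not real values.
    from flask import Flask, Response, abort
    import requests

    app = Flask(__name__)
    IPFS_GATEWAY = "http://127.0.0.1:8080/ipfs"            # default local gateway
    BOOK_INDEX = {"example-id": "QmExampleCidGoesHere"}     # hypothetical metadata index

    @app.route("/book/<book_id>")
    def serve_book(book_id):
        cid = BOOK_INDEX.get(book_id)
        if cid is None:
            abort(404)
        upstream = requests.get(f"{IPFS_GATEWAY}/{cid}", stream=True)
        if upstream.status_code != 200:
            abort(502)
        # Stream the file through to the client without buffering it all in memory.
        return Response(upstream.iter_content(chunk_size=64 * 1024),
                        content_type=upstream.headers.get("Content-Type",
                                                          "application/octet-stream"))

The point being: 'most people' only ever see the /book/... URL, never the p2p layer underneath.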

The backend can be relatively obscure, geeky, post-BitTorrent p2p software like IPFS or Dat, as long as those willing to donate bandwidth/storage can run it on their systems. That is a vastly different audience from 'most people'.

The real question is which software's features best fit the backend use case: efficiently hosting a very large, growing/evolving, IP-infringing dataset. Dat [1] can (1) update data and efficiently synchronize the changes, and (2) efficiently serve random-access reads from large datasets. Those are two quite compelling advances over BitTorrent for this use case.

[1] https://docs.datproject.org/docs/faq#how-is-dat-different-th...
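
To make the first point concrete, here is a conceptual sketch only (not Dat's actual protocol) of why content-addressed chunking makes "synchronize changes" cheap: a mirror compares chunk hashes and fetches only what differs, instead of re-downloading a replaced torrent from scratch.

    # Conceptual sketch (not Dat's actual protocol): a mirror diffs chunk
    # hashes against its local copy and transfers only the changed chunks.
    import hashlib

    CHUNK_SIZE = 64 * 1024

    def chunk_hashes(path):
        """SHA-256 hash of each fixed-size chunk of a file."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    def chunks_to_fetch(old_hashes, new_hashes):
        """Indices of chunks that are new or changed; only these need transferring."""
        return [i for i, h in enumerate(new_hashes)
                if i >= len(old_hashes) or old_hashes[i] != h]

Roughly the same addressing is what enables the second point (random access): a peer can request just the chunks covering the byte range it needs from a large file.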


I am not fully aware of how IPFS operates, but wouldn't it at least solve the backend mirroring? Frontend servers would then "only" need to access IPFS for continuous syncing of metadata (for search) and for fetching user-requested files (on request).


True, I don't find it ideal either, but wouldn't having such a massive library available over it increase the interest in lowering the barrier to entry?



