
Why not publish the site over IPFS? That would make P2P hosting much simpler.



In my experience, IPFS doesn't actually work. I'd love to be proven wrong, but the reason nobody uses IPFS even when it seems like a great fit is because it's not really usable.


This is my experience as well. In theory, IPFS is exactly the right thing for LibGen, but in practice I consider it unusable.


FWIW: StavrosK has actually been putting some serious effort into making IPFS accessible.

See here for example: https://news.ycombinator.com/item?id=16521385


It's not really an accessibility issue so much as a performance one. If it was a reasonable alternative to something like the rsync daemon I'd use it all the time.

Unfortunately the performance issues and overhead are just too much.


IPFS isn't really an alternative to rsync, but the rest of your point stands.


Not in the general case, no. But one of the major uses for rsync is synchronizing mirrors, and IPFS could very much be a replacement for rsync in that particular niche.

For example, mirroring Project Gutenberg.
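Roughly what I have in mind, as a rough, untested sketch: it assumes a local ipfs daemon, and the CIDs and the Python wrapper around the CLI are just placeholders, not a real archive.

    # Sketch: keep a mirror in sync by re-pinning a published root CID
    # instead of running rsync. Assumes a local ipfs daemon is running and
    # that the archive maintainer announces a new root CID per release.
    import subprocess

    def mirror(new_root_cid: str, old_root_cid: str) -> None:
        # "ipfs pin add" fetches every block under the root and protects it
        # from garbage collection, so this node becomes a full mirror.
        subprocess.run(["ipfs", "pin", "add", "--progress", new_root_cid], check=True)
        # Unpin the previous snapshot so superseded blocks can be GC'd later.
        subprocess.run(["ipfs", "pin", "rm", old_root_cid], check=True)

    # Placeholder CIDs, not a real archive:
    mirror("QmNewRootCidExample", "QmOldRootCidExample")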


Better rsync support is actually in the works (though probably still a ~quarter out from landing end to end in go-ipfs). The two main components are improving ipfs "mount" support, and adding metadata support to unixfs so we only update modified files. See details here: https://github.com/ipfs/team-mgmt/blob/master/OKR/PACKAGE-MA... -- mount currently has IPNS read support but write support needs an owner to get it over the finish line, and unixfs v1.5 (with metadata) should be landing in js-ipfs later this week!


Ah, yes, you mean pinning a specific set, you're right. Unfortunately I've found that the way the daemon does pinning currently doesn't lend itself to that use case (a single unavailable file will stall the pin for hours).


Thank you, I really hope IPFS improves.


Sorry to hear about your bad experience, StavrosK.

I think this perspective really depends on how you're trying to use IPFS. For example, the ease of use of running a local IPFS node has improved a ton with IPFS Desktop & Companion, and tools like ipfs-cohost (https://github.com/ipfs-shipyard/ipfs-cohost) also improve usability and tooling for shared community hosting. End-consumer usability has seen significant progress in the past year (and IPFS support is now even coming out-of-the-box in browsers like Brave and Opera!)

I definitely hear that running a beefy IPFS node for local hosting/pinning still needs work, but pinning services like Infura, Temporal, and Pinata have helped abstract some of those challenges away from individual applications like this. From a developer perspective, there are a lot of performance improvements for adding, data transfer, and content resolution coming down the line very soon (https://github.com/ipfs/go-ipfs/issues/6776), and there's also been a lot of work improving the IPFS gateways and docs to support the dev community better. I definitely think there is still lots of room for improvement - but also lots of progress to recognize in making IPFS usable for exactly these sorts of applications. Check out qri.io - they're doing collaborative dataset hosting like this and it's pretty slick!


You are correct, the end user experience has improved tremendously, I tried the desktop bundle the other day and it was indeed very easy to get started with.

> pinning services like Infura, Temporal, and Pinata have helped abstract some of those challenges

I wonder if you omitted Eternum on purpose :P

(For context, I created and run Eternum, and that experience is mostly where my opinion of IPFS comes from.)


Gotcha! Thank you! Running a pinning service definitely still has rough edges =/ but I know the Infura team recently open sourced some of the tooling they built to make it a bit easier: https://blog.infura.io/new-tools-for-running-ipfs-nodes-196d.... Might help others who are self hosting a large chunk of data on a persistent node too...

If you ever want to talk about how we can make pinning services on IPFS easier to run, I'd love to chat! I know cluster has been researching how to improve "IPFS for enterprise" usage and would really appreciate the user feedback!


Ah, thanks for that link, that would have come in handy a few weeks ago when I migrated the node to a new server.

I would love to chat. My #1 request is to make pinning asynchronous, and generally improve pinning performance. I think that's most of my frustration, followed by slow DHT resolves, followed by large resource usage by the node.


Pinning services are nice, but the idea of a pinning service is a bit antithetical to the basic philosophy of p2p. If the only way to make something available is, e.g., putting it on Pinata, I might as well put it on S3.

The basic problem is that the DHT is currently not working, and IPFS uses the DHT in a much more demanding way than, say, BitTorrent or Dat.

I know that there are some fixes in the works, but the next releases really need to solve the DHT problem, otherwise no amount of usability improvements is going to matter...


How would that work with adding new books and metadata? IPFS archives are immutable, right? I think something like Dat might be better, because the people with the secret keys could update the archive and everyone else would automatically seed the updated version.


You can just have it pin an IPNS CID, or you can publish a new hash for people to pin. There are ways.
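For example, something like this (a rough, untested sketch; it assumes a local ipfs daemon, and the IPNS name and CIDs are placeholders standing in for a key the archive maintainers would control):

    # Maintainer side: point the node's IPNS name at the latest archive root.
    import subprocess

    def publish(new_root_cid: str) -> None:
        subprocess.run(["ipfs", "name", "publish", f"/ipfs/{new_root_cid}"], check=True)

    # Mirror side: resolve the name and pin whatever it currently points to.
    def repin(ipns_name: str) -> None:
        resolved = subprocess.run(
            ["ipfs", "name", "resolve", ipns_name],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        subprocess.run(["ipfs", "pin", "add", resolved], check=True)

    publish("QmNewRootCidExample")       # placeholder CID
    repin("/ipns/ExamplePeerID")         # placeholder IPNS name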

That said, maybe Dat would be better, especially if it works well.


Currently (at least for the-eye) it's about IPFS's barrier to entry. I expect LibGen's case to be similar. Most people don't know about it, and if even those who knew about it had to learn how IPFS works, they would probably just try to find the book they're looking for elsewhere.


No need to conflate the frontend (the end-user interface that 'most people' use when trying to 'find the book they're looking for') with the mirroring/archiving backend (the distributed/p2p technology used to 'make sure LibGen never goes down').

The frontend would still be a user-friendly HTTP web-application (or collection of several) that pulls (portions of) the archive from the distributed/resilient backend to serve individual files to clients.

The backend can be a relatively obscure, geeky, post-BitTorrent p2p software like IPFS or Dat, as long as those willing to donate bandwidth/storage can run it on their systems. This is a vastly different audience from 'most people'.
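To make the split concrete, here is a minimal sketch of the frontend side. It assumes the web app sits next to a local IPFS node exposing the default gateway on 127.0.0.1:8080, and the CID lookup from the frontend's own search index is hypothetical.

    from urllib.request import urlopen

    GATEWAY = "http://127.0.0.1:8080/ipfs/"

    def fetch_book(cid: str) -> bytes:
        # The frontend maps a search result to a CID and pulls the file from
        # the p2p backend; end users only ever see a normal HTTP site.
        with urlopen(GATEWAY + cid) as resp:
            return resp.read()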

The real question is which software's features best fit the backend use-case (efficiently hosting a very large and growing/evolving, IP-infringing dataset). Dat [1] has features to (1) update data and efficiently synchronize changes, and to (2) efficiently provide random access to data in larger datasets. Two quite compelling advancements over BitTorrent for this use-case.

[1] https://docs.datproject.org/docs/faq#how-is-dat-different-th...


I am not fully aware of how IPFS operates, but wouldn't it at least solve the back-end mirroring? Front-end servers would then "only" need to access IPFS for continuous syncing of metadata (for search) and for fetching user-requested files on demand.


True, I too find it not ideal, but surely having such a massive library available over it would increase the interest in lowering the barrier to entry?


How about Tahoe-LAFS? I haven't used it, but it should be stable by now.

There's also ZeroNet, though IDK if it can handle the traffic.



