I moved my blog from IPFS to a server (neimanslab.org)
330 points by neiman 8 months ago | 166 comments



Well done to the author for writing this up.

Having tried fringe technologies over the years, spun up a server and run them for a few months, struggled and seen all the rough edges and loose threads, I often come to the point of feeling - this technology is good, but it's not ready yet. Time to let more determined people carry the torch.

The upside is:

- you tried, and so contributed to the ecosystem

- you told people what needs improving

Just quitting and not writing about your experience seems a waste for everyone, so good to know why web hosting on IPFS is still rough.


Totally biased founder here, but I work on https://github.com/n0-computer/iroh, a thing that started off as an IPFS implementation in Rust, but we broke out & ended up doing our own thing. We're not at the point where iroh implements "the full IPFS experience" (some parts border on impossible to do while keeping a decentralized promise), but we're getting closer to the "p2p website hosting" use case each week.


Super intriguing. Thanks for sharing!

It reminds me a bit of an early Go project called Upspin [1]. And also a bit of Solid [2]. Did you take any inspiration from them?

What excites me about your project is that you're addressing the elephant in the room when it comes to data sovereignty (~nobody wants to self-host a personal database but their personal devices aren't publicly accessible) in an elegant way.

By storing the data on my personal device and (presumably?) paying for a managed relay (and maybe an encrypted backup), I can keep my data in my physical possession, but I won't have to host anything on my own. Is that the idea?

[1] https://upspin.io/ [2] https://solidproject.org/


Ah <3 Upspin! It's been a minute. I've personally read through & love Upspin. I always found Solid a little too tied to RDF & the semantic web push. The Solid project is/was a super valiant effort, but these days I feel like the semantic web push peaked with HTML & schema.org.

> By storing the data on my personal device and (presumably?) paying for a managed relay (and maybe an encrypted backup), I can keep my data in my physical possession, but I won't have to host anything on my own. Is that the idea?

We're hoping to give that exact setup to app developers (maybe that's you :). We still have work to do on encryption at rest to keep the hosted server "dumb", and more plumbing into existing app development ecosystems like flutter, expo, tauri, etc. but yes, that's the hope. Give developers tools to ship apps that renegotiate the "user social contract".


I feel like this type of project is a natural fit as the transport layer for CRDT-based applications. Something like: each user/device has an append-only log of CRDT events, then applications merge events from multiple logs to create a collaborative experience. (I have no idea if iroh supports append-only logs, but it seems like a common thing for projects in this space to support.) What do you think?


yep! Iroh documents [1] give you a very nice primitive that is technically a CRDT, but in practice most people use it as a key-value store. We really wanted a mutable solution that would support real deletions (instead of tombstones), and starting with append-only logs locks you out of that choice.

With Iroh + CRDTs you have three choices:

1. Use iroh's connection & gossip layers in conjunction with a mature CRDT library like Automerge or Y.js.
2. Build a more sophisticated CRDT on top of iroh documents.
3. Worry a little less about whether your data structures form a semilattice & build on a last-writer-wins key-value store (basically: just use documents).

We've seen uses for all three. Hope that helps!

[1] https://iroh.computer/docs/layers/documents
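
To make option 3 concrete, here's a toy sketch of a last-writer-wins map whose merge is a semilattice join. Purely illustrative - it doesn't touch iroh's actual API, and every name in it is made up:

```python
# Toy last-writer-wins (LWW) key-value sketch in the spirit of option 3.
# Illustrative only - this is not iroh's API; every name here is made up.
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Entry:
    timestamp: int   # e.g. a logical/hybrid clock value
    author: str      # tie-breaker so concurrent writes merge deterministically
    value: bytes

class LwwMap:
    def __init__(self):
        self.entries: dict[str, Entry] = {}

    def put(self, key: str, entry: Entry) -> None:
        # max() over (timestamp, author, value) is idempotent, commutative and
        # associative, so replica state forms a join-semilattice.
        current = self.entries.get(key)
        self.entries[key] = entry if current is None else max(current, entry)

    def merge(self, other: "LwwMap") -> None:
        # Merging another replica is just a per-key join.
        for key, entry in other.entries.items():
            self.put(key, entry)

# Two replicas converge to the same state regardless of merge order:
a, b = LwwMap(), LwwMap()
a.put("profile/name", Entry(1, "alice", b"Alice"))
b.put("profile/name", Entry(2, "bob", b"Bob"))
a.merge(b); b.merge(a)
assert a.entries == b.entries  # both keep the timestamp-2 write
```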


Is it named after the Avatar character?


I can neither confirm nor deny, but oh boy does uncle iroh seem cool


Yes, and that makes me happy every time I see it.


> it’s quite an inconvenience to run your own IPFS node. But even if you do run your own node, the fact you access a website doesn’t mean you pin it. Not at all.

This has always been my major UX gripe with IPFS: `ipfs add` on the command line does little but generate a hash, and you need to actually pin things in order to "seed" them, so to speak. So "adding a file to IPFS", in the sense of "adding a file to the network", requires the user to know that (1) the "add" in `ipfs add` does not add the file to the network, and (2) you must manually pin everything you want to replicate. I remember as recently as 2021 having to manually pin each file in a directory since pinning the directory does not recursively pin files. Doing this by hand for small folders is okay, but large folders? Not so much.
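
Roughly, the mental model I had to build looks something like the toy below. This is not go-ipfs internals (and a reply below notes that `ipfs add` does pin by default these days), just a sketch of the add-vs-pin distinction as described here:

```python
# Toy model of the add-vs-pin distinction described above (not go-ipfs internals):
# "add" derives a content ID and stores the bytes locally, while "pin" is what
# protects them from garbage collection so the node keeps providing them.
import hashlib

local_blockstore: dict[str, bytes] = {}  # blocks this node currently holds
pinned: set[str] = set()                 # content IDs protected from GC

def add(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()  # real CIDs are multihash/multicodec based
    local_blockstore[cid] = data
    return cid

def pin(cid: str) -> None:
    pinned.add(cid)

def garbage_collect() -> None:
    for cid in list(local_blockstore):
        if cid not in pinned:
            del local_blockstore[cid]    # unpinned content can be dropped later

cid = add(b"hello ipfs")
garbage_collect()
assert cid not in local_blockstore       # added but never pinned, so it is gone
```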

More importantly, the BitTorrent duplication problems that IPFS has solved are also solved in BitTorrent v2, and BitTorrent v2 IMO solves these problems in a much better way (you can create "hybrid torrents", which allow a great deal of backwards compatibility with existing torrent software).

This isn't a UX issue, but another thing that makes it hard for me to recommend IPFS to friends is the increasing association with "Web3" and cryptocurrency. I don't have any strong opinions on "Web3", but to many people, it's an instant deal-breaker.


‘ipfs add’ pins the file by default, not sure if that’s recent behaviour though.


It's not recent, been like that since at least 2015, if not earlier.


IPFS provides nice stable links to media, and there are HTTP->IPFS gateways if needed. That seems useful for embedding content on multiple apps/sites. Yeah it happens to fit NFTs particularly well, then again we all know what BitTorrent is known for. And yes I agree IPFS has some UI problems.

Would BitTorrent also be suitable for hosting embeddable content? I haven't seen that yet. A magnet URL is mainly a file hash and doesn't seem to encode a particular peer server, kinda like IPFS. But every time I've torrented Ubuntu, it's taken half a minute just to find the peers.


> IPFS provides nice stable links to media

Anyone who has tried to torrent an old movie or lesser-known television show knows this is simply not true.


I mean it's not like HTTP where all URLs are tied to a particular webserver and can even be changed on that server. If someone different starts seeding, you'll get the same data again at the same URL, with built-in checksumming.


That is something I wanted to know: does IPFS guarantee that the same two files get the same IPFS URLs / hash links?

Otherwise, someone sharing the same data again won't actually be discoverable as the same data, because it will be in a different IPFS folder.


Yes, a file's hash is only based on its contents. The way I understand it, a file doesn't really live in a directory, it's more like a directory (which is a kind of file itself) references files. So the same file can be in two directories, yet it'll have the same URL/hash. And if you "add" files to a directory, you're really uploading a separate copy of the dir that'll have a different hash.

I checked myself on this, but someone else might want to check me cause I'm not an expert.


This is generally true, though it's possible to encode the same data into a slightly differently shaped DAG to optimise for e.g. video streaming performance afaiu (balanced vs imbalanced). UnixFS vs raw bytes may also be different, but I'm not 100% sure.


From the fs's point of view, these are different file contents. But yeah, there's nothing stopping you from pinning something different that looks the same to a person.


Once decoded they would be the same file contents - imagine one DAG where the depth is log(n) and it's a perfectly balanced tree, and another where the depth of the left-hand branch is 1, the right-hand branch contains another subtree whose left-hand branch has depth 1, and so on.

The leaves are the same in both cases, so the file contents are the same, though the latter is quicker to stream (though not to verify) and the CIDs will be different


Yes, IPFS hashes individual "blocks" (pieces) of files. If two files have the same content, they will share block hashes.


Basically it depends on specific settings that can be changed in the client - how the individual block pieces are encoded, and therefore what the resulting hash ends up being. So no, there's no inherent guarantee, but you may get lucky with some copies of the same file.
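
A simplified way to see the effect (this is not the real UnixFS/CID algorithm - real CIDs also encode the hash function and codec via multiformats - just an illustration of why the top-level hash depends on chunking parameters as well as the bytes):

```python
# Simplified illustration: the root hash depends on the chunking parameters as
# well as the file's bytes. Not the real UnixFS/CID algorithm.
import hashlib

def block_hashes(data: bytes, chunk_size: int) -> list:
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def root_hash(data: bytes, chunk_size: int) -> str:
    # Hash of the concatenated block hashes stands in for a DAG root.
    return hashlib.sha256("".join(block_hashes(data, chunk_size)).encode()).hexdigest()

data = b"the same file contents " * 1000

# Same bytes, same chunking: identical root, so the link is stable.
assert root_hash(data, 256) == root_hash(data, 256)

# Same bytes, different chunk size (or DAG layout, or hash function): a
# different root, even though every byte of the file is identical.
assert root_hash(data, 256) != root_hash(data, 512)
```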


Caveat: The other comments mention the file's contents being the only dependency on the hash, but the algo used to hash would also need to be the same. If the hash algo changes in two cases, the same content would have a different hash in those two cases.


In this case, would pinning the file make it accessible from either hash? I'd expect it to, but idk, I've only ever seen sha256 hashes on IPFS.


Kinda. Shooting from the hip based on fuzzy gatherings from IPFS usage here, but as I understand it: The leaf-level data blocks will be shared between the Merkle trees, but at least the tip (the object a given hash actually refers to) and maybe some of the other structural information will be different.


> Would BitTorrent also be suitable for hosting embeddable content?

Same as IPFS: gateways can exist. It's not specific to Bittorrent, or IPFS.

> A magnet URL is mainly a file hash and doesn't seem to encode a particular peer server, kinda like IPFS.

Magnet links can include an HTTP server that also hosts the content


I'm sure a BitTorrent gateway can exist, but I'm wondering why it doesn't seem to "be a thing." I've never seen one used, nor do I see an obvious public one when searching. Whereas IPFS gateways are so mainstream that even Cloudflare runs a public one.


It's because of the kind of content that is shared. BitTorrent serves a lot of content you are not allowed to redistribute, so having an open gateway immediately puts you at risk of aiding the distribution of content. But it does work, someone even made something native to browsers so browsers themselves can share content: https://webtorrent.io/. There are even fuse "gateways" to make it native to your computer and pretend the files exist locally: https://github.com/search?q=bittorrent+fuse&type=repositorie...

IPFS doesn't seem to be used for that kind of content much, it seems to be targeted more towards web-native content (html pages, images, that kind of stuff). It's probably safer for Cloudflare to run this.


> I remember as recently as 2021 having to manually pin each file in a directory since pinning the directory does not recursively pin files. Doing this by hand for small folders is okay, but large folders? Not so much.

Can't you just use a glob?


Filecoin, which is based on IPFS, creates a market for unused storage. I think that idea is great, but for adoption it needs to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the Dropbox-like app that you might be willing to try is nowhere to be found. So maybe it is an enterprise solution? That isn't spelled out either. So I am not surprised that this has little traction, and the article further confirms the impression.


> to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the Dropbox-like app that you might be willing to try is nowhere to be found

I agree with this fully. But as said elsewhere, it's kind of far away from that, and also slightly misdirected.

Imagine asking someone to get started with web development by sending them to https://www.ietf.org/rfc/rfc793.txt (the TCP specification). Filecoin is just the protocol, and won't ever solve that particular problem, as it's not focused on solving that particular problem, it's up to client implementations to solve.

But the ecosystem is for sure missing an easy to use / end-user application like Dropbox for storing files in a decentralized and secure way.


That flagship app you are looking for seems to be https://nft.storage/ (by Protocol Labs).


This is, in my opinion, the first and only "solution" to a real problem built using the blockchain.

Distributed file storage, if done correctly, can be a transformative technology. And it can be even more revolutionary implemented at the OS level.


Fileverse is an app built on ipfs and it is very user friendly: https://fileverse.io/


Here's your $.10/day for that 1GB with bandwidth... but running the filecoin stack will cost you a $50/mo server.

That fucker's a PIG on cpu and ram.


IPFS is as well.

Clearly much more going on but take a machine that can serve 10k req/s with [insert 100 things here] without flinching and watch it maybe, just maybe, do 10 with IPFS.

I'm not kidding.


this is what storj.io does.


I'm surprised by the beginning of the post talking about pioneering in 2019. Maybe it is the case for ENS (I never cared for it), but regarding IPFS, my website was available over IPFS three years before that, in 2016 [1]. Granted, I was never IPFS-only. I also started publishing a series of articles about censorship-resistant internet (Tor, I2P, IPFS, and ZeroNet) in 2600 Magazine – The Hacker Quarterly – back in 2017 [2].

Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is actually costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).

So I stopped maintaining it a few years ago. That decision was also because of the growing involvement of some of these projects with blockchain tech that I never wanted to be a part of. This is also why I cancelled my article series in 2600 before publishing those on IPFS and ZeroNet.

[1] See for example this archive of my HN profile page from 2016 with the link to it: https://web.archive.org/web/20161122210110/https://news.ycom...

[2] https://pablo.rauzy.name/outreach.html#2600


I never fully understood the use of ipfs/iroh for websites, but I really like the idea for data caching in data science packages.

It makes more sense to me that someone would be much more willing to serve large databases and neural network weights that they actually use every day, rather than 'that one guy's website they went to that one time'.

I'm very surprised it's not as popular, if not more popular, to just have @iroh-memoize decorators everywhere in people's database ETL code.

That's a better use case (since the user has a vested interest in keeping the data up) than helping people host websites.


IMO the case for something like IPFS gets worse and worse the larger proportion of clients are on battery. This makes it a really poor choice for the modern, public Web, where a whole lot of traffic comes from mobile devices.

Serving things that are mostly or nigh-exclusively used by machines connected to the power grid (and, ideally, great and consistent Internet connections) is a much better use case.


This is half the reason why P2P died in the late 2000s. Mobile clients need to leech off a server to function.

The other reason why it died is privacy. Participating in a P2P network reveals your IP address, which can be used to get a subscriber address via DMCA subpoenas, which is how the RIAA, MPAA, and later Prenda Law attacked the shit out of Gnutella and BitTorrent. Centralized systems don't usually expose their users to legal risk like P2P does.

I have to wonder: how does IPFS protect people from learning what websites I've been on, or have pinned, without compromising the security of the network?


Spotify killed music sharing, not the RIAA.

There's still plenty of video and book pirating happening. Until the streaming industry gets its shit together and coalesces into a single provider, or allows peering, that's going to continue.

The legal and privacy risks of P2P are both mitigated very simply with a VPN.


They also need to just sell a 'license for a "personal private viewing copy"' of a work and provide non-DRM files that users can self-archive and maintain.

No, DRM is not necessary; it's already been proven that someone, among the 8 billion monkeys (with some really smart ones) hammering away, _will_ figure out a way of liberating the data from the shackles. The whole premise is fundamentally broken in that the viewers are not trusted to see the data in the clear. It just adds cost, friction, and failure points.

Convenience (EASE OF USE!!!), a fair price, and content that doesn't go away are how alternative distribution methods die. Just like how bootleg booze largely doesn't exist outside of prohibition, since the market functions.


>> Just like how bootleg booze largely doesn't exist outside of prohibition, since the market functions.

Tell me that you hang out with law-abiding citizens without telling me...

Moonshine, home brew... people are out there sticking it to the man as much as they can.

If you have made homemade cider, or beer, or yogurt, pickles, canned anything, you know that it's a labor, but the product is better and far cheaper than what you can buy.

Convenience, quality, ease of use... People will pay a massive premium for these things. This (to the dismay of HN) is the Apple model. You can bleed the customer if they love you, if you have a good product.

This was a problem in early film, and the paramount decree was a thing: https://www.promarket.org/2022/12/12/the-paramount-decrees-a...

One would think that this should apply to streaming services, but sadly no, they get treated like network television did (does).

And I know that one of you will glom on to the paramount decree as an argument for the iPhone App Store shenanigans of late. Sadly, they aren't remotely close to each other: Apple isn't restricting your timing, or telling you what your price should be.


They had a single provider. They purposefully moved away from that model to make more money, and it's working.


Maybe, but it's also encouraging people to move back to pirating.

And as I understood it, the streaming wars are more about not wanting one service to dominate the whole industry (or, if one does, making sure it's my streaming service) rather than a coordinated plan to extract more money from subscribers.


If everyone's on VPNs, nobody can connect to each other. I'm only aware of a couple of VPN services that offer port forwarding.


If enough people need port forwarding, VPNs will support port forwarding


How did it "die" in the late oughties, when ISPs were boasting about releasing routers with built-in NAS and torrent support in 2011, and projects like Popcorn Time only got popular in 2014?


A clueless gen-z user maybe, who was born with smartphones. For some media, ED2K and Nicotine+ (the Soulseek network) are the only way to fetch that content. Classical series/movies/comic books/special vinyl editions ripped into FLAC... those won't be at full FLAC/CBZ quality (high-DPI PNGs) on YT/Spotify or whatever website or app for tablets.


I misphrased my first sentence. P2P's legal uses died with mobile. For example, Microsoft spent a huge amount of time and money removing the P2P functionality in Skype, because nobody was going to tolerate the data cost and battery drain of running a superpeer in your pocket.

Yes, people still use P2P for piracy. This is actually part of the problem, and why it was so easy for mobile to kill P2P. While P2P itself is legal, associations with piracy meant nobody was willing to use P2P, which meant nobody was going to invest the time or money into making mobile P2P work[0].

Tangent time: Have you ever wondered why we distribute online video through YouTube instead of BitTorrent? Remember, you could just put magnet links in RSS, there was even an RSS/BitTorrent combo client called Democracy[1] which was basically YouTube before YouTube. But a lot of corporate suits didn't want to touch BitTorrent in any capacity, even for things they intended to distribute for free, because of the piracy stink. YouTube was also ridden with piracy, but they cleaned up their act and brand with a bunch of automated moderation tools. So when online video became corporate, all the monetization and ads went to centralized platforms and not decentralized ones.

And as all of this should have tipped you off by now, I'm not a "clueless gen-z user born with smartphones". Even if I was, Zoomers figure out this shit anyway, despite Apple and Google's attempts to starve their brains of oxygen by denying them access to real computers.

[0] Yes, I know about AirDrop. AirDrop is just a taste of what we could have had if real engineering hours had gone into mobile P2P, instead of moving the few early adopters back onto centralized services.

[1] This would later be renamed to Miro and then abandoned as their YouTube API integration broke. Yes, they were also NewPipe before NewPipe.


Peertube can use P2P seeding just fine. Yes, I remember Miro. On 'piracy', that makes no sense. A corporation would just use P2P without even mentioning it, as WoW did for online downloads. Even Steam could do it to save bandwidth.


And Activision-Blizzard eventually removed the peer-to-peer functionality from their update downloader.

I'm not sure why you are focusing so much on the mobile aspect though; most PC use cases still aren't done on a smartphone.

And what is possible changes over the years, even smartphones are massively faster than a decade ago.

I don't recommend that anyone distributes anything through YouTube (or any other platform) any more, and guess what, since a few years ago we now *actually* have a working YouTube (and more recently, Twitch) alternative !

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

And guess what, PeerTube is also P2P (based on WebTorrent) !

Just ignore these walled gardens, their enshittification is well under way anyway, and as long as you can show that alternatives are possible, you'll get enough users to flee them to greener pastures (see Xitter => Mastodon and Reddit => Lemmy as recent examples - though federated rather than p2p ones - I'm not a "decentralization maximalist"...).

P.S.: I've actually used Democracy Player / DTV / Miro for a while, but it was created (slightly) after YouTube, not before... anyway alternative YouTube players like NewPipe are missing the point - there's also the whole side of having to upload your own video through YouTube's shitty interface and random whims of their ContentID. And the whole ContentID extortion business goes away once the extortionists actually have to do the hard work of sending a full blown DMCA takedown, and then possibly have to fight in court against a fair use defense, and this whole thing becomes even more unlikely to work if your server is in a country that basically ignores those (consider how VLC violates DMCA because it's distributing libdvdcss, but they are pretty much untouchable because based in France - or also how MPEG doesn't bother going after the license violators that played a DVD with VLC without acquiring the license).

EDIT : heh, ninjaed about Blizzard and PeerTube...


P2P is still pretty much alive, at least for BitTorrent and ED2K.


Desci Labs (1) do something like this - papers and all the associated data (raw data, code, analyses, peer review comments) are published and linked together - along with all the citations to other papers - in a big DAG.

I believe their data is stored in a p2p network - it might interest you!

1. https://desci.com/


> Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is actually costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).

Do you have any writing (blog posts, HN comments, etc.) where you explore this thought more? I'm in the thick of building p2p software, very interested in what you came to know during that time.


The main thing is that "true" (in the sense of absolute) decentralization does not actually work. It doesn't work technically, and it doesn't work socially/politically. We always need some form of collective control over what's going on. We need moderation tools, we need to be able to make errors and fix them, etc. Centralized systems tend to be authoritarian, but the pursuit of absolute decentralization always ends up being very individualistic (and a kind of misplaced elitism). There are middle grounds: federated systems for example, like Mastodon or emails, actually work.

That is not to say that all p2p software is bad, especially since we call a lot of things p2p that are not entirely p2p. For example, BitTorrent is p2p software, but its actual usage by humans relies on multiple more-or-less centralized points: trackers and torrent search engines.


I think you and I would have a very interesting conversation.

I agree with a lot of your points, but not your conclusions.

> We always need some form of collective control over what's going on. We need moderation tools

I agree with this, and also think it is possible in peer-to-peer systems. Ideally the collective is self-governed. Particularly, when it comes to moderation, the closer the moderation controls are to being under the control of the user consuming the content, the more just the system.

> , we need to be able to make errors and fix them, etc.

Yes, 100%. Equity > Cryptography. It's far more important that your equity be rock solid than it is for your cryptography to be rock solid. If someone steals your property in a cryptographically sound way, equity should always trump cryptography. It's far more important that you have the title to your vehicle than it is that you have the keys to your vehicle.

I feel like many p2p systems have gotten this one backwards.

> There are middle grounds: federated systems for example, like Mastodon or emails, actually work.

I consider these centralized systems, just N copies of the same problem. I don't feel like the power imbalance between the users and the system administrators is addressed in a sufficient way on the fediverse as it is currently implemented.

These systems have a classist, hierarchical, system where system administrators belong to a privileged class while users are second class citizens on the web.

---

I feel one of the issues in the current centralized architectures is equity. When you go about moving through the world, you generate a large volume of valuable property (your data). But, today, you give away nearly all equity in that data. Centralized providers accumulate equity in their user's data, and that equity is how they pay the bills.

I do believe that, in a very meaningful way, equity and privacy are nearly synonymous when it comes to corporations respecting the privacy of their users. Just because Netflix delivers a video to your smartphone doesn't mean you can turn around and sell that video to your friend. You have been granted access to the video, but it is not your property. The inverse needs to be true too. Just because you share your viewing habits with Netflix doesn't mean they can sell that data to Warner Brothers, that's not their property (I mean, today it is, but it shouldn't be). If users had equity in their data, the data broker market as it exists today would be piracy.

P2P systems have failed to create a world where humans, their content, and their devices are meaningfully addressable for the web in a way that expresses equity as a first class citizen.

A decentralized world is possible, it's just not possible on the internet. The internet, as it exists, is insufficient for expressing the concepts of the modern web in a way that is possible without centralized servers.

We don't need web3, we need internet2.

> That is not to say that all p2p software is bad, especially since we call a lot of things p2p that are not entirely p2p. For example, BitTorrent is p2p software, but its actual usage by humans relies on multiple more-or-less centralized points: trackers and torrent search engines.

libp2p and scuttlebutt are pretty cool too. Both with their problems, but those problems seems solvable. Both seem more like internet2 than web3.

p2p needs a new overlay network on top of the internet, just like the internet started as an overlay network on top of the telephony system.


True P2P networks don't scale, because every node has to store an (accurate if partial) representation of the whole network. Growing the network is easy, but growing every single node is virtually impossible (unless you control them all...). Any attempt to tackle that tends to increase centralization, e.g. in the form of routing authorities.

And if you try to tackle centralization directly (like banning routers or something), you will often create an anti-centralization regulator, which is, you guessed it, another form of centralization.

So your decentralized P2P network is either small and works well, medium and doesn't work so well, or large and not actually decentralized.

The best P2P networks know their limits and don't try to scale infinitely. For human-oriented networks, Dunbar's Number (N=~150) is a decent rule of thumb; any P2P network larger than that almost certainly has some form of centralization (like trusted bootstrapping/coordination server addresses that are hard-coded in every client install, etc.)


> True P2P networks don't scale, because every node has to store an (accurate if partial) representation of the whole network

Former LimeWire dev here... which P2P networks use a fully meshed topology? LimeWire and other Gnutella clients just have a random mesh with a fixed number of (ultra)peers. If the network gets too large, then your constrained broadcast queries hit their hop count before reaching the edge of the network, but that seems fine.

Last I checked, Freenet used a variation on a random mesh.

Kademlia's routing tables take O(log(N)) space and traffic per-peer to maintain (so O(N log(N)) for global total network space and traffic). Same for Chord (though, twice as much traffic due to not using a symmetric distance metric like XOR).

There are plenty of "True" (non-centralized) P2P networks that aren't fully meshed.
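
For anyone unfamiliar with why Kademlia's per-node state stays small: distance is the XOR of node IDs, and contacts are grouped into one bucket per bit of shared prefix, each capped at k entries. A toy sketch (not any real implementation):

```python
# Toy sketch of the Kademlia idea above (not any real implementation): distance
# is the XOR of node IDs, and contacts are grouped into one bucket per bit of
# shared prefix, each capped at k entries - so per-node state is O(k * log N).
import hashlib

ID_BITS = 160  # Kademlia traditionally uses 160-bit IDs
K = 20         # bucket size from the paper

def node_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def bucket_index(own_id: int, other_id: int) -> int:
    # Position of the highest differing bit decides which bucket a contact goes in.
    return xor_distance(own_id, other_id).bit_length() - 1

me = node_id("peer-A")
buckets: dict[int, list[int]] = {}
for i in range(1000):                      # learn about 1000 other peers
    other = node_id(f"peer-{i}")
    bucket = buckets.setdefault(bucket_index(me, other), [])
    if len(bucket) < K:
        bucket.append(other)               # full buckets simply drop extra contacts

assert all(0 <= idx < ID_BITS for idx in buckets)
print(len(buckets), "buckets in use,", sum(map(len, buckets.values())), "contacts kept")
```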


Creator of Freenet here. Freenet[1] relies on peers self-organizing into a small world network[2].

Small world networks have the advantage of being able to find data in log N time where N is the network size, they're also completely decentralized, self-healing, and distribute load evenly across peers. The principle is similar to DHTs like Kademlia but more flexible and intuitive IMO, while having similar scaling characteristics.

It's surprisingly common for people to confuse small world networks with "scale free networks", but scale free networks rely on a subset of highly connected peers which do a disproportionate amount of the work - which isn't truly decentralized.

The new Freenet design incorporates adaptive learning into the routing algorithm. When a peer is deciding where to route a message, it predicts the response probability and time for each neighboring peer based on past performance and chooses the best. With conventional "greedy routing", peers choose the neighbor with a location closest to the data being retrieved. The new approach is adaptive to actual network performance.

[1] Both the original Freenet from 1999 and the new sequel we're currently building - see https://freenet.org/ for more. We hope to launch the network in the next few weeks.

[2] https://en.wikipedia.org/wiki/Small-world_network
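
For the curious, here's a toy sketch of the "conventional greedy routing" mentioned above - peers at locations on a [0,1) ring forwarding to whichever neighbour is closest to the key's location. It's not Freenet's code, just an illustration of the idea the adaptive router improves on:

```python
# Toy sketch of the "greedy routing" described above: peers sit at locations on a
# [0,1) ring and each hop forwards to the neighbour closest to the key's location.
# Not Freenet's code - its new router swaps this distance heuristic for learned
# estimates of each neighbour's response time and probability.
import random

random.seed(0)
N, DEGREE = 1000, 8
locations = sorted(random.random() for _ in range(N))

def ring_distance(a: float, b: float) -> float:
    d = abs(a - b)
    return min(d, 1 - d)

# Each peer gets a few near neighbours plus a few random long-range links;
# the long links are what make the network "small world".
neighbours = {
    i: [(i + k) % N for k in range(1, DEGREE // 2 + 1)] + random.sample(range(N), DEGREE // 2)
    for i in range(N)
}

def route(start: int, key_location: float, max_hops: int = 50) -> int:
    current, hops = start, 0
    while hops < max_hops:
        best = min(neighbours[current], key=lambda n: ring_distance(locations[n], key_location))
        if ring_distance(locations[best], key_location) >= ring_distance(locations[current], key_location):
            return hops  # no neighbour is closer: greedy routing stops here
        current, hops = best, hops + 1
    return hops

hops = [route(random.randrange(N), random.random()) for _ in range(100)]
print("average hops to reach a key:", sum(hops) / len(hops))
```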


Thanks for the great work, Ian!

As far as second-generation Freenet goes, I heard I2P started out as a proposed refactoring and generalization of Freenet's encrypted transport layer. Are there any plans on using I2P to carry Freenet traffic?


Thank you :)

I2P was created by someone who was previously involved with Freenet, but its design is a lot closer to Tor than to Freenet. Both I2P and Tor are anonymizing proxies, they allow services to be hidden, but they're still centralized.

While they are quite different, there is enough overlap that running Freenet over I2P (or Tor) would be wildly inefficient and slow, so I wouldn't recommend it. Freenet is designed to run over UDP directly.

The new Freenet is designed to allow the creation of completely decentralized services. Briefly, it's a global key-value store in which keys are webassembly code that specify what values are permitted under that key, and the conditions under which those values can be modified. This key-value store is observable, so anyone can subscribe to a key and be notified immediately if the value changes.
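
As a rough mental model of that (this is not Freenet's actual interface, just an illustration of a contract-validated key-value store with subscriptions):

```python
# Conceptual sketch of the idea described above - NOT Freenet's actual API.
# A key is identified by its contract, the contract decides which updates are
# valid, and subscribers are notified whenever the value under that key changes.
import hashlib
from typing import Callable, Optional

class Contract:
    def __init__(self, validate: Callable[[bytes, Optional[bytes]], bool]):
        self.validate = validate  # (new_value, old_value) -> allowed?
        # In Freenet the contract is WebAssembly and the key derives from it;
        # hashing the function's bytecode is just a stand-in for that here.
        self.key = hashlib.sha256(validate.__code__.co_code).hexdigest()

class Store:
    def __init__(self):
        self.values: dict[str, bytes] = {}
        self.subscribers: dict[str, list] = {}

    def subscribe(self, contract: Contract, callback: Callable[[bytes], None]) -> None:
        self.subscribers.setdefault(contract.key, []).append(callback)

    def put(self, contract: Contract, new_value: bytes) -> bool:
        old = self.values.get(contract.key)
        if not contract.validate(new_value, old):
            return False  # the contract rejects this update
        self.values[contract.key] = new_value
        for cb in self.subscribers.get(contract.key, []):
            cb(new_value)  # push the change to subscribers immediately
        return True

# Example contract: an append-only value; every update must extend the old one.
log = Contract(lambda new, old: old is None or new.startswith(old))
store = Store()
store.subscribe(log, lambda v: print("update:", v))
assert store.put(log, b"hello")
assert not store.put(log, b"rewrite history")  # rejected by the contract
```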

This is just scratching the surface, for anyone interested in a much more comprehensive explanation of the new Freenet please see this talk I gave a few months ago: [1] You can also find a FAQ here: [2]

[1] https://www.youtube.com/watch?v=yBtyNIqZios

[2] https://freenet.org/faq


I think I've been following the dev chat long enough to answer that the new Freenet is a new, separate network from the original (now called Hyphanet, I think) that handles transport by itself, and end-to-end encryption is not in scope of the project but can be built on top


> the new Freenet is a new, separate network from the original

This is correct - while old and new Freenet both rely on a small-world network, they are very different and not compatible. Borrowing from our FAQ[1], the main differences are:

Functionality: The previous version of Freenet (now called Hyphanet) was analogous to a decentralized hard drive, while the current version is analogous to a full decentralized computer.

Real-time Interaction: The current version allows users to subscribe to data and be notified immediately if it changes. This is essential for systems like instant messaging or group chat.

Programming Language: Unlike the previous version, which was developed in Java, the current Freenet is implemented in Rust. This allows for better efficiency and integration into a wide variety of platforms (Windows, macOS, Android, etc).

Transparency: The current version is a drop-in replacement for the world wide web and is just as easy to use.

Anonymity: While the previous version was designed with a focus on anonymity, the current version does not offer built-in anonymity but allows for a choice of anonymizing systems to be layered on top.

[1] https://freenet.org/faq#faq-2


Doesn't Java have the widest variety of hardware running it, thanks to its Virtual Machine?

I can even remember my Motorola Razr being arguably (almost) a smartphone because, while a far cry from Symbian, it could already run Java applications! (Notably, IIRC, Opera Mini?)

P.S.: Also, I tried Freenet around that time too! I'm a bit confused about this being a "new" project... why not name it "Freenet 2" then? Why did Freenet "1" have to change its name?


> Doesn't Java have the widest variety of hardware running it, thanks to its Virtual Machine?

Java has the advantage that you can run it on a wide variety of hardware platforms without recompilation, but it has largely failed to attain broad usage/support for desktop apps and so it's a bad choice for something like Freenet in 2024.

A systems programming language like Rust gives us a lot more control over things like memory allocation, allowing apps to be a lot more efficient. This is important with Freenet because we need it to run in the background without slowing down the user's computer.

Rust can also be compiled to run on all major platforms, Windows, Mac, Linux, Android, iOS, etc.

> P.S.: Also, I tried Freenet around that time too! I'm a bit confused about this being a "new" project... why not name it "Freenet 2" then? Why did Freenet "1" have to change its name?

Using the name for the new software was a difficult decision and not without risk.

The "Freenet" name was never intended to belong to a specific codebase. From the start we viewed the original Java implementation as a prototype, which is one reason we never actually released version 1.0 (even 7 years after the project started we were still on version 0.7). At the time I had no idea that it would be over 20 years before I had a design I thought would be suitable, but here we are.

This new Freenet is the original concept but designed, not as a prototype, but as software that can gain broad adoption. In that sense it is the fulfilment of my original vision.

I did consider calling it Freenet 2, but these days not that many people have heard of the original, so on balance I believe it would have been confusing for the (hopefully) much bigger userbase we hope to reach.


I've always taken an interest in p2p, but never heard of Freenet, so thanks for being here!

Question: how good is the latency once connections are already established, say for a real-time video call over Freenet - or is this not possible? Is there any server in the middle that all packets need to route through, especially for peers behind firewalls?


> how good is the latency once connections are already established, say for a real-time video call over Freenet - or is this not possible?

We're aiming for the delay between a contract's state being modified and all subscribers being notified to be no more than 1 second, which should be acceptable for applications like IM.

If you were doing something like a video call you'd negotiate it over Freenet and then establish a direct connection for the video/audio for minimal latency.

> Is there any server in the middle that all packets need to route through, especially for peers behind firewalls?

Freenet uses UDP hole-punching to establish direct connections between peers even if both are behind firewalls. A new peer uses a public (non-firewalled) peer to join the network initially but once joined all further communications can be between firewalled peers.
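
For anyone who hasn't seen hole punching before, the core step looks roughly like the sketch below - both sides, having learned each other's public endpoint from a non-firewalled peer, fire UDP packets at each other from the same socket so each NAT opens a mapping. Illustrative only; real implementations add retries, keepalives, and handling for NAT types this doesn't cover:

```python
# Bare-bones sketch of the step described above - illustrative only. Both peers
# have already learned each other's public (ip, port) from a non-firewalled peer;
# sending from the same local socket makes each NAT open a mapping toward the other.
import socket
import time

def punch(local_port: int, remote_addr: tuple) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))  # same port the rendezvous peer observed
    sock.settimeout(1.0)
    for _ in range(10):  # keep punching until one side hears the other
        sock.sendto(b"punch", remote_addr)
        try:
            data, addr = sock.recvfrom(1500)
            if addr[0] == remote_addr[0]:
                print("hole punched, direct path to", addr)
                return sock
        except socket.timeout:
            pass
        time.sleep(0.5)
    raise RuntimeError("hole punching failed (e.g. symmetric NAT on both sides)")

# Each peer calls this with its own port and the other's public endpoint, e.g.
# punch(40001, ("203.0.113.7", 40002)) - the addresses here are placeholders.
```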


Clever. I care about congestion control issues mainly. Got that handled? Tried ecn?


The low-level transport is one of the final components we're working on prior to launch - we previously hoped to use something off-the-shelf but on close examination nothing fit our requirements.

We handle congestion by specifying a global maximum upload rate for the peer, which will be a fraction of the peer's total available bandwidth - the goal being to avoid congestion. In the future we'll likely use an isotonic regression to determine the relationship between upstream bandwidth usage and packet loss, so that we can adaptively choose an appropriate maximum rate.

This is a more detailed explanation of the transport protocol, it's quite a bit simpler than ECN but we'll refine it over time: [1]

At a higher level a peer's resource usage will be proportional to the number of connected peers, and so the peer will adaptively add or remove connections to stay under resource usage limits (which includes bandwidth).

[1] https://github.com/freenet/freenet-core/blob/186770521_port_...
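
To sketch the isotonic-regression idea in general terms (this is not Freenet's code, just the technique): fit a non-decreasing loss-vs-rate curve from observed samples with pool-adjacent-violators, then pick the highest rate whose predicted loss stays under a target.

```python
# Sketch of the adaptive-rate idea mentioned above: fit an isotonic (non-decreasing)
# loss-vs-rate curve from observed samples with pool-adjacent-violators, then pick
# the highest rate whose predicted loss stays under a target. Not Freenet's code.
def isotonic_fit(ys):
    # Pool Adjacent Violators: merge neighbouring blocks until the fitted values
    # are non-decreasing; each block is stored as (sum, count).
    blocks = []
    for y in ys:
        blocks.append((y, 1))
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# (upload rate in KB/s, observed packet-loss fraction), sorted by rate.
samples = [(100, 0.00), (200, 0.01), (300, 0.00), (400, 0.03), (500, 0.08), (600, 0.07)]
rates = [r for r, _ in samples]
loss_curve = isotonic_fit([l for _, l in samples])

TARGET_LOSS = 0.02
max_rate = max((r for r, l in zip(rates, loss_curve) if l <= TARGET_LOSS), default=rates[0])
print("chosen max upload rate:", max_rate, "KB/s")
```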


Freenet is very cool. You did good work. Absolute giga chad.


Thank you, first time I've been called a Chad - I'll take it ;)


Sure, but ultrapeers/supernodes/routers/etc are forms of centralization. Locally, the network is centralized around these supernodes, and they can be annoying/impossible to bypass. The "inner network" or backbone of supernodes, if it exists, can also represent a central authority. Nothing necessarily wrong with any of this, but it can stretch the meaning of P2P if it really means P2central-authority2P.

Functionally there is almost no difference between me sending you an (anonymous, encrypted) message over Facebook versus over some sophisticated, large, hierarchical "P2P" network. We still have to trust the local authorities, so to speak.


How are Gnutella ultrapeers centralized? Nodes self-promote themselves to ultrapeers if they meet uptime and resource availability requirements.

It's a mechanism to allow low-spec'd peers to participate without getting crushed with traffic, and also an optimization to reduce churn in the routing tables. The ultrapeers are just long-uptime regular peers.


Your software cannot be more decentralized than your hardware.

For example, true p2p can only happen if you meet with someone and use a cable, Bluetooth or local wifi. Anything over the internet needs to pass through routers and *poof* decentralization's gone and you now need to trust servers to varying degrees


"varying" is a pretty wide range here. If you mean "trust" as in trust to maintain connectivity, yes, but beyond that there are established ways to create secure channels over untrusted networks. Could you provide specifics about what you mean if anything beyond basic connectivity?


Sure: what is the process to start downloading a torrent? what is the process to message someone on Jami? what is the process to call someone on (old) Skype or Jitsi? Answer this and you realize you can only get as decentralized as your hardware infrastructure


Well, for torrents it starts by contacting one of the various trackers that know other peers. It's not centralized, but there are only a couple of trackers out there.

There's no trust between any of the peers, but each torrent piece has an associated hash, meaning peers cannot give you invalid data without being caught (unless a hash collision occurs).

Peers can be discovered with DHT magic, but ultimately, can only be dialed if the ISP allows peers to receive connections.


>Anything over the internet needs to pass through routers and poof decentralization's gone

That's not true. Yes, the strict p2p connection is gone, but decentralization is what the name says: a network of connections without a single center. The internet and its routing systems are decentralized. Of course every decentralized system can also be stressed to a point of failure, and not every node is automatically trustworthy.


I'll add to what others have said better.

Decentralized, globally accessible resources still take some kind of coordination to discover those resources, and a shit-ton of redundant nodes and data and routing. There is always some coordination or consensus.

At least, that's my take on it. Doesn't Tor have official records for exit and bridge nodes?


I may be missing something, but name resolution has been touted as one of the more legitimate and sensible uses for blockchain for a very long time. Could you clarify what your issues with it in IPFS context are?


It isn't. Unless you want a long incomprehensible string.

Someone is always going to want a short, unique, and memorable name. And when two people share the same name (McDonald, Nissan, etc) there needs to be a way to disambiguate them.

If people die and are unable to release a desirable name, that just makes the whole system less desirable.

I know one of the canonical hard problems in Computer Science is "naming things" and this is a prime example!


And if you want a long incomprehensible string, we already have that: .onion sites work without a blockchain, too.


Namecoin has existed for a long time. It acts just like a traditional domain registrar. The first person to register a name gets it, and they have to pay a maintenance fee to keep it. Don't pay the maintenance fee and then someone else can register the name.


Yes, but statistically nobody uses it due to those problems. Squatters quickly snapped up the most popular DNS names but since nobody uses it there’s no financial benefit from paying Danegeld, and that’s a vicious cycle of irrelevance.

This is the core flaw of most of these systems: people hype them up selling the idea that it'll become huge later, but without some link to real value or authority there's no reason to favor one implementation over another, or over doing nothing at all.

This comes up a lot in the case of IPFS because the core problem is that most people won’t host anything on the internet. It’s expensive and legally risky, which forces you to vet who you host and then you have a less revolutionary pitch which has to be based on cost, performance, etc. and that’s a tough market.


It might not be popular in general, but surely IPFS crowd specifically would be a lot more receptive to such a thing? IPFS itself is similarly niche.


Perhaps, but that doesn’t mean they’re suckers. If you’re going to have to deal with bootstrapping an alternate root anyway you probably aren’t motivated to pay off a speculator for the privilege of making their holdings more valuable.


ENS (which is what the GP refers to) has human-readable names. But it doesn't have support for A/AAAA records today (does anyone know why? A-record support was in the original spec). Aside from that, the only reason you wouldn't be able to type "mycoolsite.eth" into your browser's URL bar and have it resolve to an IP is because no browser has an ENS resolver built in. Yet.


Brave does, right?


I didn't know that. That's good if it does.


> but name resolution has been touted as one of the more legitimate and sensible uses for blockchain for a very long time.

Blockchain enthusiasts have a history of talking out of their ass and being susceptible to the lies of others.


Downvotes, nice. Whatever helps you sleep at night.


Well, I do not actually believe that blockchains can do name resolution correctly. First and foremost, the essential thing to understand about blockchains is that the only thing guaranteed by writing a piece of information on a blockchain is that this information is written on that blockchain. And that's it. If the writing itself is not performative, in that its mere existence performs what it describes, then nothing has been done. It works for crypto-assets because what makes a transaction happen is that it is written on the blockchain where that crypto-asset lives and where people look to see what happened with that crypto-asset.

But for any other usage, it cannot work; blockchains are useless. Someone or something somewhere has to make sure either that what's written on the blockchain corresponds to the real world, or to make the real world correspond to what's written on the blockchain. Either way you need to have a kind of central authority, or at least trusted third parties. And that means you don't actually need a blockchain in the first place. We have better, more secure, more efficient, less costly alternatives to using a blockchain.

Back to name resolutions.

Virtually no one is going to actually host the blockchain where all names are stored locally. That would be way too big and could only get bigger and bigger, as a blockchain stores transactions (i.e., diffs) rather than current state. So in practice people and their devices would ask resolvers, just like they currently do with DNS. These resolvers would need to keep a database of the state of all names up to date because querying a blockchain is way too inefficient; running such a resolver would be a lot more costly than running a DNS server, so there would be fewer of them. Here we just lost the decentralization which was the point of the system. But that's just a technical problem. There is more: what if someone gets a name and we as a society (i.e., justice, whatever) decides that they should not be in control of it? Either we are able to enforce this decision and it means the system is not actually decentralized (so, we don't need a blockchain), or we can't, and that's a problem. What if a private key is lost, the associated names are gone forever? What if your private key is leaked by mistake and someone hostile takes control of your name?

Using a blockchain for name resolution doesn't actually work, not for a human society.


> Either way you need to have a kind of central authority, or at least trusted third parties.

You lost me here. Couldn't the local user('s process) reference the same blockchain instead of another trusted party?


The problem with blockchains is that you need the entire history, or at least a good chunk of it, to walk it for data. The latest end of the chain doesn't contain everything; it simply contains the most recent transactions.

This can be hundreds of gigabytes, if not more, at scale.

This is where the central authority comes into play, in the name of storage and performance efficiency.

Even crypto wallet apps use third-party central servers to query your wallet totals, because you aren't fitting the download of the blockchain on your phone.


I don't think the blockchain walk has to be done locally. Much like someone looking up DNS records doesn't need to participate in the DB management to get their records, there can be intermediaries which provide the response and still rely on the blockchain as a source of truth?

The value of the blockchain (in the context of name resolution) would (should) be limited to enabling trustless-ness of the response. I can cryptographically authenticate the origin of the response. If you don't trust that source, zk proofs would enable the user to validate the response is part of the latest version of the blockchain's state without looking at all of the history.

I think the cost of carrying the whole history is a red herring.


> there can be intermediaries which provide the response and still rely on the blockchain as a source of truth

But then you have to trust the intermediaries. You can verify their claim, but doing so is so costly it's what made you turn to intermediaries in the first place.

> I can cryptographically authenticate the origin of the response.

A blockchain is not needed for that, certificates can do that.

> zk proofs (…) the latest version of the blockchain's state (…) cost of carrying the whole history

Knowing enough information about the latest version of a blockchain's state to validate responses would require either that you trust the third party which provides the hash of the last block to you, or that you follow, block after block, what's added to the ledger, verifying each block's integrity and all. I'm not saying that's not doable, but it either requires some boot-up time or being online all the time; i.e., it more or less amounts to running a node, which is what we seem to agree is not something most people / end devices will do.


You should be able to cryptographically prove a) the block height of the current block and b) the state of any account, in constant space, using zero-knowledge proofs.

You don't need to trust a third party and do not need to be online all the time for that.


> Either way you need to have a kind of central authority, or at least trusted third parties.

Not everyone needs to run a node, and not everyone could, but it is totally feasible for an individual to run their own if they decide they can't trust anyone else for whatever reason. Especially if you were running a node specifically for the purpose of name resolution you could discard the vast, vast majority of data on the Ethereum blockchain (for example).

> what if someone gets a name and we as a society (i.e., justice, whatever) decides that they should not be in control of it? [...] and that's a problem.

No, that is a feature of a decentralized system. Individual node operators would be able to decide whether or not to serve particular records, but censorship resistance is one of the core features of blockchains in the first place.

> What if a private key is lost, the associated names are gone forever?

The association wouldn't be gone, it would just be unchangeable until it eventually expires. This is a known tradeoff if you are choosing ENS over traditional domain name registration.

> What if your private key is leaked by mistake and someone hostile takes control of your name?

As opposed to today where someone hostile, like for instance the Afghani government (The Taliban), can seize your domain for any reason or no reason at all?

---

I think we just have a fundamental disagreement about what types of features and use cases a name resolution system should have. That's completely fine, you're entitled to your own beliefs. You can use the system that most closely resembles your beliefs, and I'll use the one that most closely resembles mine. Fortunately for us, different name resolution systems can peacefully coexist due to the nature of name mappings. At least for now, none that I know of collide in namespace.


> No, that is a feature of a decentralized system. Individual node operators would be able to decide whether or not to serve particular records, but censorship resistance is one of the core features of blockchains in the first place.

Exactly, take a look at Sci Hub or The Pirate Bay continuously needing to change domain names due to seizures, for example. I'd want them to be able to actually own their domain names, either via blockchain or private key (e.g. Tor).

In fact Sci Hub tried HNS for some time but seems to have dropped out of it.


>There is more: what if someone gets a name and we as a society (i.e., justice, whatever) decides that they should not be in control of it?[...]or we can't, and that's a problem

That's a feature.


Do you think lawlessness is a feature?


Depending on the laws, it can be.

Given typical abusive IP laws around the world, I'd say that it is.


> I also started publishing a series of articles about censorship-resistant internet (Tor, I2P, IPFS, and ZeroNet) in 2600 Magazine – The Hacker Quarterly – back in 2017

I very much enjoyed your articles on Tor and I2P :) I2P was entirely new to me, so I found that particularly interesting. I did idly wonder when the next article was coming, so I’m glad I didn’t just miss it in some issue. Totally understand where you’re coming from.


Thanks! It's always great to have feedback on paper-published content :).


> Maybe it is the case for ENS

Oh yeah, I was referring strictly to IPFS+ENS websites. I have been working with it for several years, so my mind goes to this use-case automatically.


When evaluating use-cases where blockchain technology is leveraged to disintermediate, I came to the same conclusions as you. Technically novel? Yes, sure. But, for what?


For incentive alignment, consensus, trustlessness, etc.


I see where the author is coming from, but I find something else strange: considering that the blog is in practice a collection of static files, I don't see the benefit of paying for a server at all. Host it on GitHub; if GitHub gets killed off for whatever reason, switch to something else and move on. Seems like an unnecessary overhead to me.


I get told that a lot! xD

My original aim was to write an IPFS blogging engine for my personal use, so I needed some dynamic loading from IPFS there.

Now I've switched to Jekyll, and it would indeed be easier to host the blog on GitHub, but I'm kind of playing a quixotic game of trying to minimize the presence of Google/Microsoft/Amazon and other big tech in my life.


Free tier of indie https://neocities.org supports static sites like Jekyll.


https://codeberg.page .. similar idea to Github Pages.


Same. IPFS seems far more useful for hosting static content that might be embedded in multiple websites.


> the more readers it has, the faster it is to use it since some readers help to spread the content (Scalable).

In other words: Once a few big websites are established, no small website will ever be able to gain traction again because the big websites are simply easier to reach and thus more attractive to use. And just like an unpopular old torrent, eventually you run out of seeders and your site is lost forever.

One can argue about the value of low-traffic websites, but I've got to wonder: who in their right mind thinks "Yeah, I want to make a website and then have others decide if it's allowed to live"? Then again, maybe that kind of "survival of the fittest" is appealing to some folks.

As far as I am concerned, it sounds like a stupid idea. (Which the author goes into in more detail, so it's a good write-up.)


This is a false dilemma. Why would you not "seed" (pin) your own site, rather than be at others' mercy? You pin it, and when others also do so, readers get faster and more redundant service.


For "unpopular" sites having a single origin somewhat removes the advantages of IPFS, it's not decentralized, not censorship resilient, and still costs the publisher for ongoing infrastructure to host it. Yet still had the disadvantages and complexity of IPFS vs a static http server.

So if you're not going to be publishing something that will always have multiple copies floating around, why use IPFS?


1. to give a chance to avoid being slashdotted. 2. to allow anybody who finds it valuable to archive it, or parts of it?

The complexity of IPFS is another thing, which should be solved. However popular or unpopular your site might be, you must host it somewhere somehow if you wish to be sure it sticks around. It is as simple as that.


It helps to use more specific terms than "decentralized" and "censorship resilient"; there are a lot of attack vectors for both. IPFS certainly does address some of those attack vectors, but not all. For example, if the "centralized" thing you're worried about is DNS and certificate authorities, then you can avoid those authorities entirely with IPFS. Replication is another aspect of decentralization, and IPFS doesn't completely fail at it; it's just more expensive (you can guarantee two-node replication, but you will have to run two nodes yourself). And there are other aspects not addressed by IPFS at all, like its reliance on IP address routing infrastructure.


If you need to pin your content anyway, it's actually faster and less expensive to just host a normal website. And if you want to get it to readers faster, there are plenty of cheap or free CDNs available, but that's generally not even an issue for the kind of websites we're talking about here when they're served normally, over the web.


Yes, that is the state of affairs now. I can use CloudFront for my site, but I cannot use it to pin my IPFS site (should I have one), as far as I know.

You are fighting a strawman. If you don't take care of your site, but expect others to take care of it (pin it), then it is not your site. You must ensure it has at least one pinned version. Others might or might not pin it; that depends on its popularity, or on the accessibility of the stack, which is lacking right now according to the article.
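
For anyone wondering what "ensuring at least one pinned version" looks like in practice, here's a minimal sketch assuming a local Kubo node with its RPC API on the default port and the kubo-rpc-client package; the './public' directory is a placeholder for a built static site, and this is illustrative rather than canonical.

    // Add a site's files to a local Kubo node and pin the resulting root CID,
    // so this node always keeps a copy (assumes the Kubo RPC API is listening
    // on the default 127.0.0.1:5001).
    import { create, globSource } from 'kubo-rpc-client'
    import type { CID } from 'multiformats/cid'

    const ipfs = create({ url: 'http://127.0.0.1:5001/api/v0' })

    let rootCid: CID | undefined
    // './public' is a placeholder for the built static site directory.
    for await (const entry of ipfs.addAll(globSource('./public', '**/*'), { wrapWithDirectory: true })) {
      rootCid = entry.cid // the final entry is the wrapping directory
    }

    if (rootCid !== undefined) {
      await ipfs.pin.add(rootCid)
      console.log('pinned site root:', rootCid.toString())
    }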


It is also worth noting that most IPFS peers will cache content for some period of time even if it's not explicitly pinned. So if your site hits the top of Hacker News (and everyone were using a native IPFS browser), you would suddenly have thousands of peers with the content available. In theory your one node can serve an unlimited number of users, since once you serve one copy to the first user, that user can replicate it to others. (The real world is of course more complicated, but the idea works decently well in practice.)


It's not up to others alone; you get a say too, because you can seed your own content, and that can be fast. In the worst case of no interest, it's approximately the same as hosting your own website in today's world. This doesn't excuse the shortfall of the "old torrent" pattern though, as you say.


I think the main difference between IPFS and BitTorrent in terms of usage patterns is that IPFS is being used to host content that could easily be served by a regular HTTP server, whereas BitTorrent hosts data that is highly desired and would be impossible or very expensive to host over HTTP.

And so naturally relays pop up, and the relays end up being more convenient than actually using the underlying protocol.


The key difference between a regular HTTP server and IPFS is that you can always try a different gateway/relay to get the very same content, and you can be sure it's the same. You can't really do that with HTTP today, as content is usually tied to locations in one way or another.
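
To illustrate that location independence, here's a rough sketch that asks several public gateways for the same content-addressed path; the gateway hosts are just examples, the CID is a placeholder, and a fully trust-minimizing client would also re-hash the returned blocks against the CID instead of taking the gateway's word for it.

    // Try several public gateways for the same content-addressed path
    // until one responds. The CID below is a placeholder.
    const gateways = ['https://ipfs.io', 'https://dweb.link', 'https://cloudflare-ipfs.com']
    const path = '/ipfs/<cid>/index.html' // replace <cid> with a real CID

    async function fetchFromAnyGateway(path: string): Promise<string> {
      for (const gw of gateways) {
        try {
          const res = await fetch(gw + path)
          if (res.ok) return await res.text()
        } catch {
          // this gateway is down or slow; try the next one
        }
      }
      throw new Error('no gateway could serve ' + path)
    }

    // Because the path is content-addressed, every gateway that answers must
    // return byte-identical data for it.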


I'm exploring a similar project: a "decentralized" website (hosted on GitHub) which saves users' data in the blockchain itself and then provides that same data to other users through public APIs and/or actual blockchain clients.

I wonder if there is actual use or need for such a thing.


Is there anything that allows one to mount an IPFS dir as a read/write FUSE drive yet? Once they have that, I'm all in, even if it's slow...


We have a FUSE mount in Peergos[0][1] (built on IPFS). It lets you mount any folder read or write (within your access).

[0] https://github.com/peergos/peergos [1] https://peergos.org


https://github.com/djdv/go-filesystem-utils/pull/40 lets you interact with IPFS as an NFS mount


I built a blog on IPFS; it's basically reliant on several centralized services to actually work in browsers (DNS, GitHub, Fleek, etc.). I wrote about how I built it here; the experience was underwhelming: https://labcabin.eth.limo/writing/how-site-built.html


What about a cross between IPFS/Nostr and RSS? RSS (or Atom) already provides a widely adopted syndication structure. All that's missing are signatures, so that a middleman can re-broadcast the same content. Maybe with signatures that's really reinventing SSB[1]. But if we think of the network in a more social sense, where you're connecting to trusted peers (think: IRL friends), maybe the signatures aren't even that important. All that's left then is to separate the feed's identifier from its location (today the URL serves as both) so you can fetch a feed from a non-authoritative peer.

[1]: https://en.wikipedia.org/wiki/Secure_Scuttlebutt
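
To make the signature part concrete, here's a back-of-the-envelope sketch using Node's built-in crypto with Ed25519 keys; the feed body is a placeholder and key distribution (how readers learn the publisher's key) is deliberately left out, so this is an illustration of the idea, not any existing protocol.

    // Sign a feed so any peer can re-broadcast it and readers can verify it
    // came from the original publisher.
    import { generateKeyPairSync, sign, verify } from 'node:crypto'

    // Publisher side: one long-lived key pair per feed.
    const { publicKey, privateKey } = generateKeyPairSync('ed25519')

    const feedXml = '<rss version="2.0"><channel>...</channel></rss>' // placeholder feed body
    const signature = sign(null, Buffer.from(feedXml), privateKey)

    // Reader side: the feed + signature may have arrived from any peer, but
    // only the holder of the private key could have produced the signature.
    const authentic = verify(null, Buffer.from(feedXml), publicKey, signature)
    console.log(authentic ? 'feed verified' : 'reject: bad signature')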


I've been hosting my website on Peergos (built on ipfs) for years now (founder here). Peergos solves a lot of the issues with mutable data (also privacy, access control). You can see how fast updates show up from an independent server here: https://peergos.org/posts/p2p-web-hosting

My personal website served from a peergos gateway (anyone can run one) is https://ianopolous.peergos.me/

If you want to read more check out our book: https://book.peergos.org


Any thoughts on the services that make hosting a blog on IPFS easier?

Services like https://dappling.network, https://spheron.network, https://fleek.co, etc?

I've seen some DeFi protocols use IPFS to add some resiliency to their frontends. If their centralized frontend with vercel or whatever is down, they can direct users to their IPFS/ENS entrypoint.


Did they ever address the issue with IPFS where running a node necessarily required you to distribute an unknowable set of pieces of material you may not want to be hosting (like CSAM, for example)?


That has never been an issue. You only seed what you choose to. It's basically the same model as BitTorrent but with about 15 fewer years of R&D behind it and much less organic user adoption.


As a user I'd stay away from sharing on IPFS for legal reasons. Just like with torrenting, by actively distributing content I take on legal responsibility (at least in Europe, where I'm located). That risk is tolerable for a legal torrent because the file doesn't change over time. For a changing website, I'd constantly have to monitor the site for changes or trust the site 100%, which is not happening as soon as the site is somewhat controversial…


Yeah this is exactly my experience with IPFS. Nobody actually uses IPFS directly, and even those few that do never actually pin anything because it's an extra step.

(Also, I heard it's computationally costly, but I'm not sure if that's true; I can't imagine why it would be the case, actually.)

As a result it's actually more centralised than the web: there are like three pinning services that everyone uses, at which point I don't get the extra hoops.


I host all the static files of my Netlify and Vercel servers on IPFS.

It is simple enough and free even on hosted solutions, and it keeps my Netlify and Vercel free during spikes in traffic.

But the availability issue is perplexing, just like OP encountered.

Some people just randomly won't be able to resolve some assets on your site, sometimes! The gateways go up and down, and their cache of your file comes and goes. Browsers don't natively resolve ipfs:// URIs. It's very weird.


“and it keeps my Netlify and Vercel free during spikes in traffic” — how exactly would this help with potentially breaching outbound transfer limits?


If static assets aren't being requested from their server, then they don't contribute to your bandwidth meter.


You still have to transfer that data through the network pipe to the end user, no? How the server itself accesses the files to do so seems irrelevant to me.


I think if the assets are referenced in the HTML from an IPFS gateway, they don't add to your outbound traffic. Also, once browsers natively support ipfs:// static content, the game changes and the IPFS party really gets started.
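
Concretely, the trick is just to point the asset URLs in your pages at a gateway so the bytes never touch your origin. A hypothetical build-step helper might look like the following (the gateway host and the CID are placeholders, not recommendations):

    // Hypothetical helper for a static site build: rewrite asset references so
    // they are fetched from a public IPFS gateway instead of the site's origin.
    const GATEWAY = 'https://ipfs.io/ipfs/' // any public gateway works

    function gatewayUrl(cid: string, filePath = ''): string {
      return GATEWAY + cid + (filePath ? '/' + filePath : '')
    }

    // Usage in a template:
    //   <img src="${gatewayUrl('<assets-dir-cid>', 'img/logo.png')}">
    // The browser downloads the image from the gateway, so the transfer never
    // shows up on the Netlify/Vercel bandwidth meter.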


> I pinned content from my own server and played forever with settings and definitions, but couldn’t get the content to be available everywhere.

> The blog you’re reading now is built with Jekyll and is hosted on my own 10$ server.

> don’t get me wrong, I’m still an IPFS fanboy.

...how could you still be a fanboy? When IPFS cannot fulfill even the most basic function of globally serving static content, why does it deserve anyone's interest? It's not even new or cutting edge at this point. After 8 years of development, how can the most basic functionality still not work even for an expert?


It worked fine before for many years when it was slightly less popular.

They're having growing pains due to scalability problems and some libraries, like Helia (the JS implementation of IPFS), being new. I guess I'm also quite stubborn in wanting to do it my way, without the aid of any services, and in wanting the content I pin to be available everywhere, including in Helia in the browser.


The official IPFS client in Go has always been very, very hungry for resources. At some point it would crash because it needed too many file descriptors if it ran for too long. Even for a simple static site with infrequent updates, it needed some maintenance. But even if one was willing to put the effort in, it was not actually rewarded, because if the server where you pin your website is offline, the truth is that your site is offline too, so what's the point?


It's working fabulously for hosting phishing websites fronted by popular IPFS gateway providers like Cloudflare, so at least there's that...


[wrong thread]


What on earth are you talking about? IPFS has nothing to do with "coins", it's a distributed data management system. You can use IPFS for finance in the same way you can use a database for finance. You can also use IPFS for literally anything else that falls in the "data that you want other people to be able to access" category.


But it seems associated with blockchain and something named filecoin?

There goes all credibility.


Yeah, we've kind of been using the blockchain since the end of the 15th century. Something using public "double-entry" bookkeeping to ensure data integrity across a network (digital or otherwise) doesn't magically make that data a currency just because some Japanese guy's pet project became too popular for its own good =)


Oh. Are you sure there aren't stone tablets with blockchains?


oh shit, I replied to the wrong comment.


On the need to run a node: I have a little project to wrap static site content with an IPFS node in JS.

E.g., there is already Helia.

https://github.com/ipfs/helia

I'm just waiting for running a node in a browser tab to become insignificant, resource-wise.
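
For reference, spinning up a Helia node in a page and reading a file by CID looks roughly like this, as I understand the current Helia API; the CID string is a placeholder and the helia, @helia/unixfs and multiformats packages are assumed to be bundled.

    // Rough sketch of an in-browser Helia node fetching a file by CID.
    import { createHelia } from 'helia'
    import { unixfs } from '@helia/unixfs'
    import { CID } from 'multiformats/cid'

    const helia = await createHelia()   // starts a libp2p node in the tab
    const fs = unixfs(helia)

    const cid = CID.parse('<cid-of-the-page-or-asset>') // replace with a real CID
    const decoder = new TextDecoder()
    let html = ''
    for await (const chunk of fs.cat(cid)) {
      html += decoder.decode(chunk, { stream: true })
    }
    console.log(html)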


Perhaps use the DAT p2p network as an alternative? https://kovah.medium.com/publishing-a-static-website-to-the-...



Oh yes, I even had a free IPNS pinning service (mostly for community members) built with a friend.

https://dwebservices.xyz/


I did a similar thing on a project of mine, where I used to use IPFS as the only long-term storage layer. I still use IPFS, but now more as "long-term unreliable storage". Note that I use IPFS without Filecoin or any external pinning services, and instead chose to pin the content from my own server on a regular basis.

The current status is that I plan to bring back more IPFS usage for my project in the future, but I will wait for the ecosystem to mature a bit more first with regard to libraries.


I'm not sure quite how relevant it is to Neiman's work, but this is a pretty interesting blog post on decentralized web apps, and the tradeoffs with using various versions of IPFS in the browser: https://blog.ipfs.tech/dapps-ipfs/


Aside: I visited this curious as to what the Nieman Journalism Lab (https://www.niemanlab.org/) was doing with IPFS, if anything. Not the nicest near-collision naming move.


For a BitTorrent based take on IPFS: https://GitHub.com/anacrolix/btlink


This seems significantly more viable than IPFS, if you can leverage all the existing available torrents out there.



"This is a huge difference from BitTorrent where the only way to get content is to run your own software, and when you download something you also share it, by default."

As far as I understand, this isn't a solved technical problem but mostly a cultural quirk, probably just down to how the early torrent clients were configured.

There is, for instance, a major Chinese torrent client (whose name escapes me) that doesn't seed by default, so the whole thing could easily have not worked. If IPFS clients don't seed by default, then that kinda sounds like either a design mistake or a "culture problem".

I've always wondered if there was a way to check whether a client is reseeding (e.g. request and download a bit from a different IP) and then blacklist them if they don't provide the data (or throttle them or something).


They probably don't seed by default for a good reason. While torrents aren't inherently political, unlike Signal and others, their culture sits close to "legal issues". As a seeder I respect that, and that's why I'm seeding to high ratios. For every seeder of a specific piece of content there are 10x (20x? 50x?) more people who cannot share it back.

But if you want to fence them off, you can use private trackers with ul/dl ratio accounting.


Yup, IPFS literally doesn't work; why else did they need to do an ICO and introduce centralized indexers...


Did you try Filecoin instead?



