
> 1. IPFS attaches ALL network interfaces (internal and external) to your identity.

I don't know what you mean by attaching a network interface to your identity, or even what you mean by identity. IPFS identifies a node by its PeerID, which is essentially an encoding of a public key; that's it.
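
For what it's worth, you can see this on any running node. The ID is derived from the node's keypair and nothing else; the addresses are advertised separately. A trimmed sketch of the output from a stock Kubo/go-ipfs daemon (not verbatim):

    $ ipfs id
    {
      "ID": "12D3KooW...",        <- the PeerID, an encoding of the public key
      "PublicKey": "CAESIB...",
      "Addresses": [ ... ],       <- multiaddrs, advertised separately
      "AgentVersion": "kubo/..."
    }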

> 2. Tor support is still "experimental", done by 3rd parties. https://flyingzumwalt.gitbooks.io/decentralized-web-primer/c...

Tor has nothing to do with IPFS. There have been some people who have worked on integrations to use them together.

> 3. Due to 1 and 2, any hosted content is EASILY trackable to a user's computer, even behind NATs. A machine cryptokey also helps cement that (but can be changed). This allows easy DDoS'ing of any and all endpoints hosting content you don't like.

Yes, it is easily trackable. There is nothing in the design or goals of IPFS about being anonymous; in some ways the entire point is to be found and to announce that you have content available. I don't know why people equate decentralized with anonymous. Not sure how you think it's cemented, either: just toss the key and make a new one. And how does this make DDoS'ing endpoints any easier than for anything else hosted on the internet, let alone DDoS'ing all of them? In fact, being distributed and content-addressed can mitigate a DDoS, whether it's malicious or just a hug of death.

> 4. It is trivial to ask the DHT for who has a certain content key, and get all (or the top 50?) computers hosting that content. (This matters with regards to "sensitive" content.)

I hope so; that's the way IPFS works: ask who has what you're looking for, and retrieve it. I have no idea what you mean by "sensitive" content. If it's sensitive, you can encrypt it.
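
To be concrete, that lookup is a single command against a local node (a sketch; <cid> is a placeholder for whatever you're curious about, and older go-ipfs versions spell it `ipfs dht findprovs`). It prints the PeerIDs of providers it finds, stopping at around 20 by default if memory serves:

    # ask the DHT who is announcing a given CID
    $ ipfs routing findprovs <cid>
    12D3KooW...
    QmYyQ...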

> 5. Running a node still means high CPU and RAM usage and network chattiness - so using a VPS to keep IPFS off your local network is still tenuous.

There are configurations to mitigate this, but a blanket statement that it's high CPU, RAM, and network usage is just FUD.
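
For example, two stock Kubo knobs that cut resource use and chatter considerably (a sketch; option names have shifted a bit across versions, so check `ipfs config --help` for yours):

    # built-in profile that lowers connection-manager limits and
    # turns down background churn
    $ ipfs config profile apply lowpower

    # participate in the DHT as a client only, rather than serving it
    $ ipfs config Routing.Type dhtclient

Restart the daemon afterwards for the changes to take effect.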




(Disclosure: I work for the Filecoin Foundation/Filecoin Foundation for the Decentralized Web).

I do actually agree that the privacy and anonymity aspects of IPFS are not well-conveyed. I think people get hooked on the "censorship-resistant" nature of decentralized systems, without understanding that even if you have multiple sources, for instance in a content-addressable network like IPFS, aggressive censorship systems have other strategies to dissuade dissemination or punish readers. You always have to be thinking a few steps ahead. Services like Tor and, I hope, the IPFS network both try to convey what threat models they are useful for, and which they are not, but it's really hard to stop overenthusiastic re-statements that give them super-powers they do not, in fact, possess.

That said, there's a bunch of careful thinking right now going on about how IPFS's privacy story could be improved: https://blog.ipfs.tech/ipfs-ping-2022-recap/ has a couple of sessions on this, and is a great summary of some other recent developments in the space.

One of those improvements bears on the point about nodes using a lot of CPU, RAM, etc. (I actually find this to be more of a challenge when running the full IPFS Go node locally on my desktop rather than on a VPS; it requires some tweaking.)

The strategy right now is to encourage more implementations of IPFS to cover more use-cases; the original go-ipfs had to do everything, including maintaining some legacy decisions. Nowadays, there's a lot of effort on alternative IPFS implementations that can be slimmer, or optimised for particular scenarios, e.g. on an embedded device, serving a high-load web gateway, or providing millions of files. Protocol Labs recently renamed their canonical go-ipfs to kubo (https://github.com/ipfs/kubo) to make it more of a peer with other implementations.

Of course, I love all these new generation implementations EQUALLY, but if you pushed me, I've enjoyed playing around with https://github.com/n0-computer/iroh , a modular Rust implementation building off the increasingly robust rust-libp2p etc. libraries. There's some more to pick from here: https://docs.ipfs.tech/basics/ipfs-implementations/


First off, thank you for the comment!

I was an early adopter (0.3!) of go-ipfs. I was experimenting with shifting massive public scholarly data archives to it, and then referring to blocks to remix content in other archives. Naturally, I was thinking this could be applied to Internet2 for even faster transport of bulk data. (I was also playing around with private IPFS clouds for sensitive data.)

At the time, on a 1Gbps connection to the internet, I was consistently getting 115 MB/s -- 92% of line speed, given that 1Gbps tops out around 125 MB/s!

However, when I poked further at the protocols, there were definite things I wasn't happy with. Naturally, without providing metrics to point at (and refute... sigh, "FUD"), I did notice that go-ipfs was a HOG. We all know it. It's getting better, for sure, but yeah. I found that out when I got TOS'd off a VPS provider once I started using 80% of the CPU and 90% of the RAM. Then again, it was a $5/mo dealer :D

Outside of an academic setting, my other concern with self-hosting was that when I joined the network, it was putting every network adapter into the DHT under my machine ID. I get why: so that local nodes could talk to each other to reshare highly requested content. But I'm definitely a "least surprise" kind of person, and having my internal IPv4 and IPv6 addresses put online was definitely a big surprise.
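
(For anyone with the same surprise today: current Kubo has a switch for exactly this. A sketch, assuming a reasonably recent version:)

    # "server" profile: disables local-network discovery and stops
    # announcing private address ranges
    $ ipfs config profile apply server

    # or curate the list by hand
    $ ipfs config --json Addresses.NoAnnounce \
        '["/ip4/10.0.0.0/ipcidr/8", "/ip4/192.168.0.0/ipcidr/16"]'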

I also remember the old discussions of how to handle the /tor/(onionsite) network connection, and how the effort appeared to get tabled/scrapped over the question of how to do it without violating anonymity beyond the onion name. I remember back in the day on IRC helping 2 people who mostly figured that out, or at least got it to emit a sacrificial IP.

Again, I still disagree with not putting something like Tor or I2P in the limelight with an "Offer these files via Tor/I2P" option. Doing this would give all the peer commands that probe who's offering and who's downloading a way to work without outing your whole network. It would also have the nice side effect of increasing the size of Tor/I2P, and thus strengthening those networks.

And quite frankly, given that Filecoin was meant to pay others to host your content, it's the cryptocurrency I have the fewest issues with. It makes sense, and seems grounded in the reality of finite storage/bandwidth. And IPFS is completely usable on your own machines without paying a cent in Filecoin. Seems like a win-win, honestly.


No problem! I hear you about Tor/I2P integration -- I've been following that bug/issue since it was first raised, and speaking personally, I've been an advocate of building a privacy-protective "stack" out of existing tooling for a long time (see https://www.eff.org/deeplinks/2013/08/tahoe-and-tor-building... , which was intended as a discussion of composing existing tools together, though in retrospect it reads as much more Tahoe-LAFS-specific than I meant it to be).

It does seem harder to pull off than it first appears, though, which I think is why numerous people have bounced off it on both the IPFS and Tor sides.

I have a general theory that this is true of a lot of interoperability initiatives: by their nature, they sit on the "nice to have" periphery of an existing project. Plus, they often require dev unicorns -- people who are able to understand the architectures and cultures of two different development spaces.

One thing I've been talking to a few people about informally, as part of my work at FFDW, is working out an institution or funding initiative that would be a wrapper around these kinds of interoperability ventures more generally. Dan Lynch's INTEROP was a vital part of the early Internet's success, and I think we miss something occupying that space in our new decentralized world. We have so many amazing tools, but so little time to make them work well together.


I really wish y'all had a grip on, and were measuring via TCP_INFO, the bufferbloat you are creating. Hundreds of flows in slow start are really tough on the network, even with fq_codel or cake, and ledbat...
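
(If you want to eyeball this on your own box: on Linux, `ss` surfaces the kernel's per-socket tcp_info, so something like the following shows cwnd/rtt/retransmits for every flow on Kubo's default swarm port, 4001. A sketch; adjust the port to your config.)

    # per-flow TCP stats for connections on the default swarm port
    $ ss -ti '( sport = :4001 or dport = :4001 )'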


>> 5. Running a node still means high CPU and RAM usage and network chattiness - so using a VPS to keep IPFS off your local network is still tenuous.

> There are configurations to mitigate this, but a blanket statement that it's high CPU, RAM, and network usage is just FUD.

it’s not FUD. this is the out-of-the-box experience for many users, and has been basically since its inception. i know enough about IPFS to import, retrieve, and pin content, convert between different CID formats, use the IPNS and dnslink stuff and host a gateway, configure the set of resources that gateway will serve, and so on. what i still don’t know how to do after being an IPFS user since 2015 is how to make it not noticeably bog down the typical low-end/SBC home servers lots of people in this space use for all their other p2p stuff.
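
for concreteness, the day-to-day workflows i mean are just the stock CLI (a sketch; the file name is a placeholder and <cid> is whatever `ipfs add` printed):

    $ ipfs add ./archive.tar                  # import; prints the CID
    $ ipfs pin add <cid>                      # keep it around
    $ ipfs cid format -v 1 -b base32 <cid>    # convert CIDv0 -> CIDv1
    $ ipfs name publish /ipfs/<cid>           # IPNS publish
    $ ipfs get <cid>                          # retrieve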

it’s not FUD: perf/resource use is a real problem that is legitimately hampering adoption. try running just basic IPFS workflows on the latest gen of raspberry pi before labeling perf complaints as FUD. if you’re close to the dev team and somehow don’t understand that this isn’t FUD then set up a conference call with me and i’ll demo perf of a typical home setup for you.


It is when you don’t include any context. You at least included that your standard of performance is a Raspberry Pi.



