IPFS, Again (macwright.org)
545 points by unicornporn on June 12, 2019 | hide | past | favorite | 220 comments



Who is going to defend the free internet against Azure, AWS and Google Cloud? They are the very opposite of a free and open internet where everyone can "run a website" on her own machine.

It pains me to see a great idea like the Interplanetary File System still not working. I had similar experiences with IPFS and yes, we do need a project like this, only without the broken incentive structure attached to it. Why a "Filecoin"? We already have a tested and working native internet currency: bitcoin.

I've long believed that a browser itself should evolve to both view visited websites and also host said websites for an x number of minutes/hours/days. A bit like WebTorrent aims to work. This way a website that gets visitors gets decentralised hosts as well :-)


Filecoin isn't a currency. It's a token that proves "I replicated x amount of data, y reliably, for z amount of time", and (protocol-wise) can be exchanged for file storage services only. The market decides how much file storage is actually worth, by arbitrage.

The same principle goes for Namecoin, incidentally. The token embodies the value of the resource. I actually think this kind of thing is a much more stable base for a currency than Bitcoin's "it's valuable because we say so" system.


It’s worth noting that Filecoin and IPFS are independent, but mutually beneficial projects and networks. Using IPFS and Filecoin together is completely optional - as you can see by projects like Pinata, OpenBazaar, Qri, and DTube already using IPFS in the wild today.

By design, the Filecoin Network stores data on Filecoin nodes — not IPFS nodes. While we want to make these two networks even more interoperable (https://github.com/filecoin-project/specs/issues/143) — so that, for example, you could choose to supplement IPFS nodes with Filecoin nodes — we want to leave the choice up to users. There is a big opportunity to have a distributed marketplace for ensuring IPFS persistence, and multiple solutions optimizing for different use cases will likely coexist.


We don't need tokens.

We need P2P stuff that works.

Like https://github.com/webtorrent/webtorrent and https://github.com/amark/gun

They're both run in production, at scale (millions of users), and do NOT require any tokens.


>, and do NOT require any tokens.

You're looking at it from a pure technical perspective of pushing bytes around in a decentralized way.

What the folks pushing "tokens" are trying to solve is the game theory of financial incentives to store & serve those decentralized bytes. In contrast, things like Bittorrent/Beaker/webtorrent/etc depend on others' "altruism" to host and serve files.

Altruism doesn't scale, which is why nobody wants to seed my 100 gigabytes of personal vacation photos. Sure, they'll be happy to seed & disseminate the latest cracked copy of Adobe Photoshop or a bluray rip of the latest Marvel Avengers movie. But my personal files are uninteresting to the current decentralized web.

(But I'm not claiming Filecoin actually solves the incentive puzzle. I'm merely pointing out that the "problem" Filecoin tries to solve is at a higher abstraction level (the economics) than webtorrent (the protocol).)


> altruism doesn't scale

Incentives don't exist on a one dimensional scale from pure greed to pure altruism.

People have lots of contextually dependent reasons for storing or sharing information. Those reasons are dynamic and super diverse.

In your case, why would you need lots of people to seed your 100 GB of personal vacation photos? You can seed them just fine! But don't use bittorrent, use the right tool for the job...

As a matter of fact, for the past 4 years I've been using btsync/resilio/syncthing to backup my personal media across different devices, as well as 70GB of audio projects. It's been working great.

That seems like a relatively small scale. And I'll bet some of those photos would be interesting enough to family and friends that they'd seed them.

From my naive perspective, the information itself is much more raw of a "currency" than any abstract economy built on top of it. Its "value" is determined by its relevance to one or many people, and p2p sharing, I believe, has the ability to accommodate this across many scales.

Why did public trackers fail? It wasn't because people didn't want to seed the content, it's because seeding the content became legally dangerous (and the trackers got shut down). To me, that's not a failure of "altruism", it's a failure caused by greed and an inability of industry to adapt.

And also by the reality that the tools are prototype level. Bittorrent and bitcoin are two incredible examples of applied cryptography and network science. They've shown us what's possible, but they're only the tip of the iceberg.

For many applications, lack of commodification is a feature, not a bug. What's holding us back is that the tools aren't built yet, but that's a work in progress!


>, why would you need lots of people to seed your 100 GB of personal vacation photos?

I think you're missing some context for the motivation of Filecoin. One of the elevator pitches is that it can disrupt cloud storage like Amazon S3.[1][2]

People do put personal files (e.g. via a cloud backup service) on cloud storage like AWS S3. Instead of paying a centralized Amazon, Filecoin claims it has a way for money to go to the decentralized owners of harddrives.

Therefore, if "backing up my personal vacation photos" to Amazon S3 is a use case, that means the decentralized peers hosting my uninteresting personal files is also supposed to be possible.

>But don't use bittorrent, use the right tool for the job...

Yes, exactly. That's what my reply to gp was explaining: his suggestion of webtorrent/bittorrent/gun is a protocol that solves a different problem than the one Filecoin tries to solve.

[1] https://www.google.com/search?q=filecoin+amazon+aws+s3

[2] deep link of Juan Benet of Filecoin mentioning the centralized cloud services: https://youtu.be/6h2WNxEV8q4?t=482


> the motivation of Filecoin

was to sell tokens. Take an existing idea, build an incomplete implementation with a token attached, talk about features that don't exist as if they do - ICO success!


I agree, the problem is that Bitcoin and Filecoin (per author's IPFS scaling issues) do NOT scale though.

You must solve the technical scaling problem first, then sure, heck, add tokens if you fancy.

WebTorrent/GUN/etc. do scale. Add economics to that.

Preferably, add something that is time-scarce so people do not have to lose money (they don't pay FB or Google! If they have to pay Filecoin, they'll still choose free FB), something like BAT or Pirate Booty ( https://hackernoon.com/hollywood-crypto-behavioral-economics... ).


You can't always just "layer economics" on top. Sometimes the problem is economical.

Consider what it would take to replace DNS with a distributed system. In a global namespace, names have value. You can't just operate on first-come, first-served - there needs to be a system that ensures that names go to whoever wants them most. In other words, the challenge to be solved there isn't the technical one of having a DHT - that's a mostly solved problem - it's how the hell to make the names cost money, and who the money goes to. (And if you don't think the names should cost money, how do you propose allocation should work?)
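To make that concrete, one textbook allocation mechanism is a sealed-bid second-price (Vickrey) auction: the highest bidder wins the name but pays the second-highest bid, which blunts the incentive to overbid. A rough sketch of just the allocation step, in Go (hypothetical; it deliberately ignores the harder question of who collects the money):

    package main

    import "fmt"

    // One possible allocation mechanism for scarce names: a sealed-bid
    // second-price (Vickrey) auction. The highest bidder wins but pays the
    // second-highest bid. This sketch only covers the allocation step.
    type bid struct {
        bidder string
        amount uint64
    }

    func vickrey(bids []bid) (winner string, price uint64) {
        var first, second bid
        for _, b := range bids {
            switch {
            case b.amount > first.amount:
                second, first = first, b
            case b.amount > second.amount:
                second = b
            }
        }
        return first.bidder, second.amount
    }

    func main() {
        w, p := vickrey([]bid{{"alice", 50}, {"bob", 120}, {"carol", 90}})
        fmt.Printf("%s wins the name and pays %d\n", w, p)
    }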


It isn't valuable because "we" say so (whoever "we" is). One reason it is valuable is that millions of dollars' worth of energy resources are put into it to ensure that it is decentralized.


That's a ridiculous misunderstanding of value. Just because you waste valuable resources to produce something doesn't mean the thing you produce is now valuable. If I feed my dog millions of dollars worth of beluga caviar, it doesn't mean his dogshit is now worth millions.

To the degree that bitcoin is valuable today, it's because speculators think somebody else will pay them more for it in the future.


Did you read the last part of my statement "to ensure that it is decentralized"?

True decentralization at this level is why it is valuable


The fact that you have to waste ever-escalating amounts of electricity to make bitcoins doesn't ensure that it's decentralized.

If anything, it means that economies of scale will favor larger organizations. In fact, that's what we see, with a few large Chinese mining syndicates controlling a majority of the hashrate: https://www.buybitcoinworldwide.com/mining/china/

Compare bitcoin to a real decentralized system like bittorrent, or hell, email.


> We already have a tested and working native internet currency: bitcoin.

Bitcoin manages a low-six-figure number of transactions per day, using as much energy as the entire country of New Zealand to do so. That’s far from internet scale (or even “single Raspberry Pi” scale) and, because it was designed around gold-bug economics, the protocol says it can never improve, because the whole point was to scale the overhead as a function of network size.

Fundamentally, the reason why none of these have worked comes down to economics: if you’re storing things, you want provable access times and reliability but the costs of doing so in a decentralized system are much higher than in a centralized model because you need more copies to balance out the lack of trust. That also hurts on the other side of this: if you’re considering hosting, you need to charge more to cover your risk of getting involved in legal proceedings and there’s a fairly high threshold where that risk just isn’t worth it.


>Why a "Filecoin"? We already have a tested and working native internet currency: bitcoin.

Maybe the "proof" section of the FAQ will explain the difference: https://filecoin.io/faqs#what-s-the-difference-between-all-t...

If you've already read that FAQ and still disagree, are you saying you can embed the incentives to store terabytes of others' data and also the proof of that storage into existing Bitcoin blocks?

(I'm not saying IPFS's "proof-of-storage" scheme will work. I'm just saying Bitcoin's proof-of-(cpu)-work doesn't seem to easily map to verification that the peers' harddrives actually have the data they claim to have stored.)
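To illustrate why the two don't map onto each other: proof-of-storage schemes are usually some form of challenge/response over the stored bytes, which proof-of-CPU-work gives you no handle on. A toy sketch of the idea (not Filecoin's actual proof-of-replication construction; in this toy the verifier still needs the data itself, which real schemes avoid with Merkle proofs and similar tricks):

    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    // Toy proof-of-storage: the verifier sends a random nonce and the prover
    // must return H(nonce || data). Only someone actually holding the bytes
    // can answer a fresh challenge; CPU work alone is no substitute.
    func challenge() []byte {
        nonce := make([]byte, 32)
        rand.Read(nonce) // crypto-grade randomness; error ignored in this sketch
        return nonce
    }

    func respond(nonce, data []byte) [32]byte {
        return sha256.Sum256(append(nonce, data...))
    }

    // In this toy the verifier re-computes the answer, so it also needs the
    // data; real schemes avoid that, which is where the actual complexity lives.
    func verify(nonce, data []byte, resp [32]byte) bool {
        return respond(nonce, data) == resp
    }

    func main() {
        data := []byte("100 GB of vacation photos, in spirit")
        n := challenge()
        r := respond(n, data) // prover side
        fmt.Println("proof accepted:", verify(n, data, r))
    }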


In fact, Bitcoin's blockchain is not the right data structure for something like this.


> Why a "Filecoin"? We already have a tested and working native internet currency: bitcoin.

Because it’s a lot easier to raise $300 million on a bit of hype selling your own coin.

(Also Bitcoin micropayments aren’t quite there yet, but I’m not sure any other truly decentralized cryptocurrency really is either)


The problem with any alternative token transfer systems is that they either introduce additional points of centralization (hence attack vectors) or end up being significantly inferior to bitcoin in terms of scalability, reliability, testing and user base.


Also, why Filecoin at all, when Siacoin exists and works already? Filecoin is vaporware.


I feel Nano is doing an outstanding job. It's what bitcoin was supposed to be.


The funny thing about every comment which positively mentions Nano is the writer invariably has zero idea what tradeoffs are made in the consensus protocol to make it faster. There are no free lunches in distributed systems which must contend with byzantine failures.


You've succinctly captured exactly how I feel when I hear people mention nano, so thank you for that.


Pray enlighten us.


There is an alternative stack that works pretty much as you describe, its browser being Beaker: https://beakerbrowser.com/docs/tour/

There is an option in there to seed visited content as well as to host your own stuff.


Yeah, I was confused why the author didn't just use Dat; it's so easy.


He also tried dat.

https://macwright.org/2017/07/20/decentralize-your-website.h...

Hasn't written about it again, though.


I'd like to read a comparison between the two hosting experiences; I'm not sure that eg. the linking-to-other-content part would be easier in dat-world, unless they run a pinning/re-hosting/gateway service tied to their current domain (which is doable, either standalone, or via Hashbase).


> I've long believed that a browser itself should evolve to both view visited websites and also host said websites for an x number of minutes/hours/days. A bit like WebTorrent aims to work.

As far as I can remember that's one of the features of https://beakerbrowser.com/


> Who is going to defend the free internet against Azure, AWS and Google Cloud?

This is exactly what the http://metacurrency.org project aims to bring to life: http://holo.host/learn/ and http://ceptr.org

The distributed pattern they have created combines Git local source chains with cryptographic signing and BitTorrent’s DHT to create easily evolvable apps using http://holochain.org. Peer validation enforces rules defined in the logic of each app. Apps run locally instead of on a corporate server. If the current tech ecosystem is Platform Capitalism, then this pattern is Protocol Cooperativism.

I haven’t come across anything like this.


Why don't we go back to simple store-and-forward mechanisms like netnews, just with enhanced content formatting? Store and forward static websites.



Yeah I think we should, and storing and forwarding is exactly what Holochain does. I think there are different use cases for different tasks. I really like, and use, Scuttlebutt and DAT - which also store and forward data.

Scuttlebutt has been awesome for me to see that a social network can be completely hosted by its users.

I think Holochain and Ceptr are here to help us evolve how we work together and do our accounting.

What I love about Holochain's pattern is that it essentially provides a fully distributed, immutable and peer-to-peer crypto accounting framework for barter transactions/sharing work and resources. It does not focus on creating and managing artificially scarce coins (as is the case in the current financial system, and also in Blockchain). It is trying to show us that we are using Industrial Age tools (interest-bearing-debt national currencies), and that instead we could focus on creating peer-to-peer Information Age currencies [1].

Holochain's pattern enables communities to make their own judgements on what value something holds and how they want to account for it. It allows us to build that into a flow language - a currency. This means instead of using a fiat 'medium of exchange' backed by US imperialism and its military might/control [2] - we create, redeem and destroy currencies based on asset-backed mutual-credit principles that mimic the way Mother Nature works. I think I read somewhere that today over 90% of money in the money system is speculative, and interest-bearing monopoly debt-money has no roots in the real world.

An example of a mutual credit currency could be a currency for seats on a train. When I buy a ticket, my ticket is tied to a certain seat on the train. If I went to Mars, my ticket would be worthless. The ticket is only valuable because the ticket is tied to the train system - to one of the seats available. This makes it asset-backed. Another example would be farmers being able to pre-sell their produce locally, without going through expensive middlemen. They can create currencies that represent their produce (the asset).
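For what it's worth, the bookkeeping behind mutual credit is tiny: currency is created at the moment of a trade, the buyer goes negative up to a credit limit, the seller goes positive, and all balances always net to zero. A hypothetical sketch of that accounting idea (not Holochain's API):

    package main

    import "fmt"

    // Minimal mutual-credit ledger: credit is created at the moment of a
    // trade (buyer goes negative, seller goes positive) and all balances
    // always net to zero. Hypothetical sketch of the idea only.
    type ledger map[string]int64

    func (l ledger) trade(buyer, seller string, amount, creditLimit int64) error {
        if l[buyer]-amount < -creditLimit {
            return fmt.Errorf("%s would exceed their credit limit", buyer)
        }
        l[buyer] -= amount
        l[seller] += amount
        return nil
    }

    func main() {
        l := ledger{}
        _ = l.trade("rider", "railway", 30, 100)  // buying a seat-backed unit
        _ = l.trade("railway", "farmer", 20, 100) // railway spends its credit locally
        fmt.Println(l, "sum:", l["rider"]+l["railway"]+l["farmer"]) // sum is always 0
    }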

The MetaCurrency Project is bringing us into a world where we have more agency when interacting with the world, using evolvable peer-to-peer protocols that can be forked and re-forked, again and again, to better meet the needs of its users.

They are giving us the tools to see the world with new eyes, and to allow multiple perspectives - instead of one rigid, centrally enforced truth.

Moving towards using mutual credit systems, together with reputation currencies (imagine your ratings on Uber, eBay etc. were hosted by your peer-to-peer community), allows you to build trust together in a way that is unmediated and unhindered by a corporate app and UI. These reputation currencies already exist, yet are contained and held hostage by our current paradigm. Other examples of reputation currencies I can think of are ‘FairTrade’ or ‘Organic’ labeling.

Holochain is BitTorrent + Git + Cryptographic Signatures + Peer Validation + Gossip [3] - making it agent-centric instead of data-centric. It is no longer about the database in the middle, but about the relationships between peers and their balances/interactions, and the agreements we make together on what actions we are allowed to take in our different apps and networks. Together we can see and improve the app logic, and I can bridge between my apps, as I now have complete control over my own data [4].

HoloREA is a project that is using Holochain to build a modular, cooperative and transparent economic supply chain framework. It uses the Resource Event Agent accounting model to enable users to steward and follow flows of economic actions. These relationships are today tightly controlled in corporations. HoloREA aims to create a cooperative web of interactive flows of people, actions and resources.

Essentially, the vision is to reclaim The Commons and give us better tools to steward the world together more peacefully and compassionately.

If you’re interested in this paradigm, a few keywords I've found useful: Protocol / Open Cooperativism (currently we have Platform / Surveillance Capitalism), mutual credit, myth of barter, buddhist economics, social darwinism.

I also like the work of Elinor Ostrom, Bernard Lietaer, David Graeber, Douglas Rushkoff (Life Inc.), Charles Eisenstein, E.F. Schumacher, Silvia Federici, Mariana Mazzucato and Anand Giridharadas.

For technical material: http://ceptr.org

Footnotes/links: [1] https://prezi.com/xmzld_-wayho/new-economy-new-wealth/

[2] “Nixon floated the dollar in order to pay for the cost of a war in which, during the period of 1970–1972 alone, he ordered more than four million tons of explosives and incendiaries dropped on cities and villages across Indochina—causing one senator to dub him “the greatest bomber of all time.”7 The debt crisis was a direct result of the need to pay for the bombs, or, to be more precise, the vast military infrastructure required to deliver them. This was what was causing such an enormous strain on the U.S. gold reserves. Many hold that by floating the dollar, Nixon converted the U.S. currency into pure “fiat money”—mere pieces of paper, intrinsically worthless, that were treated as money only because the United States government insisted that they should be. In that case, one could well argue that U.S. military power was now the only thing backing up the currency. In a certain sense this is true, but the notion of “fiat money” assumes that money really “was” gold in the first place. Really we are dealing with another variation of credit money.” — David Graeber: ‘Debt: The First 5000 Years’

[3] https://holo.host/faq/what-is-holochain/

[4] https://medium.com/holochain/holochain-reinventing-applicati...


How is reducing the fungibility of money not a step backwards?


Using this pattern increases fungibility. Could I ask you to share what makes you think it decreases fungibility?


> When I buy a ticket, my ticket is tied to a certain seat on the train. If I went to Mars, my ticket would be worthless.

What if I don't want a train ticket? What if I want to sell my widget-coin for some brussels-sprouts-coin? Now I can't go to any farmer, because he's only got carrot-coin or broccoli-coin; I have to find the brussels sprouts farmer, and not only that, but the brussels sprouts farmer who wants to buy widgets.

And if you say "nonsense, you can exchange any coin for any other coin", how is that different from what we have now?


It’s different because the US dollar is the worlds’ reserve currency, and it is violently backed by US imperialism.

The MetaCurrency Project’s Holochain is the next-generation P2P, no-middle-man accounting system which open-sources and decentralizes the rules around money creation and usage. It allows us to evolve the rules together and build mutual-credit, interoperable, open and unencloseable ‘carrier’ currencies. Currencies are backed by real-world commodities, leading to a rich network of flows within a community.

Your comparison shows me that you might not yet have had the privilege of critically exploring the underlying patterns and stories that have made Industrial Age money the tool it is today. For me ‘The Future of Money’ by Bernard Lietaer, and David Graeber’s ‘Debt: The First 5000 Years’ are very paradigm shifting books and helped me to challenge the underlying stories of economics that I believed. These stories stopped me from seeing a more beautiful future.


Decentralizing how we link, address, and move content is the basis of IPFS and Filecoin both. Both are solving different parts of moving our digital infrastructure to infrastructure that we control, that we govern, and that we are not held hostage by.

While we still have a long road to go w/ IPFS usability, it’s worth pointing to what works today -- there are millions of end users benefiting from IPFS, 100Ks of libp2p nodes, we see PBs/mo of traffic in the infrastructure that we run, and millions of daily requests to our gateway. Look to fully decentralized applications and systems like OpenBazaar, Textile, and DTube, which use IPFS and are paving the way. New technology -- especially new technology that seeks to build new platforms from the ground up -- is hard and expensive to build, and to polish. We’re working on it, and though nowhere close to where we want to be yet, there’s a lot of utility already provided.

Re Filecoin incentive structure -- this comes from recognizing that computers are not all the same -- there is huge utility to be had in dedicated infrastructure running 24/7, well maintained, high performance spread out around the world. P2P systems of the past have failed to match the reliability, uptime, and performance levels of centralized systems. Cryptocurrencies enable the creation of Open Services (like Bitcoin) that run public infrastructure w/ high uptime and reliability guarantees. With Filecoin, we aim to bring that to file storage and distribution. More here: https://www.youtube.com/watch?v=6h2WNxEV8q4


Sadly, a lot of the mechanisms behind bitcoin make it difficult to use in a high-volume transaction setting (millions/sec, similar to credit cards), which would be required for using its blockchain for proof-of-work purposes.

That’s why alternatives exist in order to solve these “problems” that the BTC community either choose not to fix or simply won’t. It’ll be interesting to see what crypto people would intend to use for something like IPFS.


IPFS is from the crypto people. This is what blockchain advocates put forward as the solution to mass storage for their blockchain dreams.

IPFS is basically BitTorrent with magnet links - hashes to address content, if you want to be sure your obscure content will stay around then you have to seed it, etc. There's some fancier stuff on top, but if you know that's how it works, all else follows.
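The "hashes to address content" part really is that simple. A minimal sketch of content addressing (the general idea only -- IPFS layers multihash/CID encoding and chunking on top of it):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // Content addressing in a nutshell: the address is the hash of the bytes.
    // Whoever serves the bytes, the fetcher can re-hash them and check they
    // match the address, so the host doesn't need to be trusted -- but
    // somebody still has to be hosting them, i.e. seeding.
    func address(content []byte) string {
        sum := sha256.Sum256(content)
        return hex.EncodeToString(sum[:])
    }

    func verified(addr string, fetched []byte) bool {
        return address(fetched) == addr
    }

    func main() {
        page := []byte("<html>my tiny static site</html>")
        addr := address(page)
        fmt.Println("address:", addr)
        fmt.Println("fetched bytes match:", verified(addr, page))
    }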

You can tell how "blockchain" IPFS is from the way that advocates seem chronically unable to tell "could" from "does" - stuff that doesn't exist and/or doesn't work is routinely talked about as if it's here and working in the present. This leads to disappointed posts like the one linked.


IPFS is useful to blockchain projects because of the permanence of content hashes. That doesn’t mean it’s not also useful to many others. QRI is using it as the publishing medium and backing store for all kinds of data sets, for example. No blockchain, just shared storage across interested parties.

You make a really good point about communicating readiness - the author of the post is expecting more from pre-alpha software than it is ready to provide.


IPFS has had public releases since 2015 with rhetoric implying general utility, not to mention $300M in funding. If it’s still “pre-alpha software” that’s not a mere communications problem.


Er, I meant pre-beta not pre-alpha. It's still in development. Sounds like the core API is pretty much settled and not expected to change (much).

Building a platform is not like building an app. And a platform that changes significant parts of internet infrastructure is different from just building any platform. I worked on Firefox for 13 years... the internet is a harsh and fickle thing.

That said, people are using IPFS with millions of users today.

I very much agree about the rhetoric. The homepage is polished, and speaks to a bunch of features that are possible to varying extents in varying environments, but not easy to demonstrate. Bandwidth savings is a good example of this.


> That said, people are using IPFS with millions of users today.

This seems like the thing they should lead with: what are those users doing and why would be a great way to explain the value.


The middle ground might be to use a local hosting company that has its own servers and racks. Pay a bit more, but boycott the big clouds for personal websites. You could probably host a few dozen static blogs on a physical machine for say $100/m. If 100 people do that it's affordable.


> You could probably host a few dozen static blogs on a physical machine for say $100/m.

100 bucks!? I host a dozen unique domains with static content (and a few other services) on a 1/1 VPS for 3 euros/month. At Hetzner, not at a hyperscaler.


Yeah, I was thinking of a whole server, not a VM.


Even a real-metal physical box can be had for far less than $100. Even excluding the ranges that are out of stock Kimsufi has a number of options that are more than capable of running "dozens of static blogs" with ease for <$20/month (the €8/month Atom based machine would do the job, as would the currently-out-of-stock €4/month ones).

There are other providers with comparable offers, I mention Kimsufi because for some time I've had one of their low spec units running a couple of simple tasks.

You are not going to be getting brand-new unused high-spec kit with that sort of deal, of course, but you pays your money, you takes your choice!


Uberspace [1] does something like this in Germany. You get to pay what you want, too.

[1]: https://uberspace.de/en/

Edit: I'm not affiliated with them other than being a customer.


> If 100 people do that it's affordable.

If one of those people doesn't mind running as sys-admin and help-desk for the rest...

Though your $100/m is rather higher than it needs to be for that level of service (so you won't need the same number of people at that cost per person), someone is still taking on a part-time job doing admin for others.


The real threat to free internet is not Azure or AWS but things like AMP. You can still host your website even on a server in your basement.

IPFS is a great idea but it is very hard to implement something like this. It was tried before (hello Wuala) but no success. I am not sure if this idea can be implemented at all.


Opera 11 could act as a server back in 2008, but nobody used it then, so :/


There happens to be a great solution for this known as gaia hubs: https://docs.blockstack.org/ which you can learn about in these docs in conjunction with the immutable identities associated with them.

I have also recently spoken with someone, with more context, and convinced them why they should use Gaia over IPFS: https://forum.blockstack.org/t/cannot-find-ipfs-driver/6147

and they brought up some other usability issues not mentioned in this critique, particularly around file pinning to keep files alive and, more generally, the waste behind the idea of data duplication when you don't separate the auth of the user-owned data from the duplication of the data itself in the DHT network, for which Blockstack has "Atlas".


> Why a "Filecoin"? We already have a tested and working native internet currency: bitcoin.

Filecoin's promise is to be tightly coupled with file storage, to set up contracts that are automatically enforced by the network. Bitcoin does not do this.

I don't know much about it, but maybe Ethereum could be used to do that, though


Or you could just use siacoin, which already does this for storage


Just today I was looking into IPFS vs DAT, does anybody have any insights about the similarities/differences other than the ones listed here [1]?

From far away, DAT looks smaller and better documented (perhaps less ambitious, too?) Apparently the best IPFS overview is the 2015 paper [2] which looks pretty daunting and does not seem to cover any practical considerations.

1: https://docs.datproject.org/docs/faq#how-is-dat-different-th...

2: https://github.com/ipfs/papers/blob/master/ipfs-cap2pfs/ipfs...


I consider dat:// to be the better protocol, in part because of what you mentioned. Other advantages are the lack of duplicated data on disk (IPFS makes a copy of all data it shares) as well as having a versioned history of all changes. That way app owners can't publish malicious versions while preventing people from using the non-malicious ones.

Essentially, dat:// behaves like BitTorrent but the torrent data can change.
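Roughly: a dat:// address is a public key rather than a content hash, and updates are entries appended to a log signed by that key, so readers can verify new versions against the address they already hold. A very loose sketch of that idea (not the actual Hypercore wire format):

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    // Mutable-but-verifiable publishing: the feed's address is a public key,
    // and every new version is appended to a log and signed. Readers accept
    // an update only if the signature checks out against the address.
    type entry struct {
        seq  uint64
        data []byte
        sig  []byte
    }

    func appendEntry(priv ed25519.PrivateKey, feed []entry, data []byte) []entry {
        seq := uint64(len(feed))
        msg := append([]byte(fmt.Sprintf("%d:", seq)), data...)
        return append(feed, entry{seq: seq, data: data, sig: ed25519.Sign(priv, msg)})
    }

    func verifyEntry(pub ed25519.PublicKey, e entry) bool {
        msg := append([]byte(fmt.Sprintf("%d:", e.seq)), e.data...)
        return ed25519.Verify(pub, msg, e.sig)
    }

    func main() {
        pub, priv, _ := ed25519.GenerateKey(rand.Reader)
        var feed []entry
        feed = appendEntry(priv, feed, []byte("site v1"))
        feed = appendEntry(priv, feed, []byte("site v2")) // same address, new content
        fmt.Println("latest entry valid:", verifyEntry(pub, feed[len(feed)-1]))
    }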

The only downside for both protocols I can think of is that the integration story outside the browser and CLI tools is very poor (there is no FFI/C lib I can bind my Rust app to).


> (there is no FFI/C lib I can bind my Rust app to)

Language choices in both have set off my "this is doomed to obscurity" alarm bells for precisely that reason. You don't write the reference implementation for a new Internet protocol—especially the core library for it, and especially a very complex one—in a language that can't easily be included in most other languages. So, probably C.

Dat in particular seems great but ain't no way I'm relying on a large JS project for anything I don't absolutely have to, on my own time, especially if it deals with my files.


With regards to IPFS (I don't have much experience with Dat), I have a hard time understanding how choosing Go, as well as JavaScript, to implement their protocol makes it doomed to obscurity. A lot of distributed/decentralized applications and platforms are being written in Go. This is the first time I've heard this argument.

Not to mention you can call into Go funcs over shared libs from C.

Rust is definitely a great language, but not using it is a completely sane design choice. C has been a source of a number of security issues due to memory management, which type-safe languages solve.

Not to mention, there are various parts of the IPFS/LibP2P stack that are being written in Rust by other teams.


There is a BitTorrent extension for updatable torrents:

https://www.bittorrent.org/beps/bep_0046.html


Yeah but there isn't any client that supports it. Or more importantly, easy-to-use libraries that allow me to use it in my projects.


Dat-rs exists but seems to be on pause.


Datrs dev here. I implemented all of the Hypercore feed protocol last year. The next step was to add the networking layer, which was blocked on async IO in Rust. So that's what I've been working on this year (the Runtime project).

Datrs development should be unblocked again soon, starting by moving the fs layer over to async IO. And then tackling the network layer.


Awesome to hear that! Thank you so much for your work!


DAT is run by influential hobbyists of independent means, with occasional funding from non-profit organisations.

IPFS is essentially run by a very well funded (300musd) private company.

For this reason alone I think DAT is the more likely to succeed. It seems hard to reconcile the longevity of a truly distributed protocol with the need of a private company to retain control.


> IPFS is essentially run by a very well funded (300musd) private company.

That is news to me - I'd thought they were a scrappy startup. The issues mentioned in the post, like glitches in the docs, are excusable for a project that relies on volunteers who prefer writing code to polishing docs, but if you have $300M in funding then wtf? Just, you know, hire good project management and docs people.


My understanding was that most of the team focus has shifted over to the filecoin project in the last year or two, and they aren't dedicating as many resources to ipfs. That said, I agree that this is pretty deplorable.


Several of the key members have shifted focus but there's still a core IPFS team. But we're still spread quite thin.

We're still paying down years of documentation debt but there has been quite a bit of progress:

* Expanded https://docs.ipfs.io/ with a concepts section.
* Tutorials: https://proto.school/#/
* A ton of work on libp2p specs (https://github.com/libp2p/specs/commits/master), along with a full-time documentation writer.


My bet is that both are likely to fail. NIH syndrome is strong with both cases.

For efficient file transfer between peers, the BitTorrent protocol has multiple independent implementations that are working right now. They should have built on top of that. Instead, both DAT and IPFS try to implement their own protocols with dubious additional features. IPFS even relies on traditional DNS. What are they thinking?


> NIH syndrome is strong with both cases.

Seriously. IPFS decided the standard URL format wasn't good enough so invented something worse, which was pretty funny/sad. I never saw reasons for it that made any sense.

See:

https://github.com/ipfs/ipfs/issues/227

[EDIT] here's the original, deeply "LOLWUT?" justification for it, quoted in this comment on another issue. Other justifications were given but this was the motivation. Oh man. Wow.

https://github.com/ipfs/go-ipfs/issues/1678#issuecomment-139...

[EDIT] farther down, same author as the quoted text in the above comment: "wish unix:// wasn't taken by unix sockets." Oh FFS, guys. Hahaha.


IPFS also needlessly rolled their own TLS replacement:

https://github.com/ipfs/specs/issues/29

> I would add that if TLS can be used without all the X.509 nonsense, and with our own choice of pubkeys, including using different signing public keys (not the DHE keys) in the same conn, then we can consider breaking our "not TLS pls" stance.

TLS fit their requirements all along, they just… decided to reinvent it instead of reading about how to use it.


Their scheme as described in #227 would make sense, if they were designing IPFS as a service for Plan 9.


IPFS doesn't rely on DNS. Most Dat deployments do, however.


This strikes me as a little disingenuous. Neither IPFS nor DAT is inherently dependent on DNS. If you're referring to the HTTP gateways that lots of people use with both IPFS and DAT in order to host static sites, arguably that's because support for UDP and other useful p2p tools in the browser is experimental! But people still find both IPFS and DAT useful for hosting static files. (libdweb is really exciting by the way: https://github.com/mozilla/libdweb)

From what I can tell, most "serious deployments" of DAT and IPFS are made by people directly using the loose underlying collections of libraries that implement each of them. These people often end up putting together application specific transport and discovery layers that work for their specific application.


Except that, from the article, the only way to really use IPFS for pretty names is to use DNSlink and not IPNS. Because IPNS is unusably slow.

So IPFS might not theoretically rely on DNS, but it seems that it does practically rely on DNS if you actually want to use it.


If you want to use it with DNS names, yes.

I have a hunch that IPNS is just broken in its implementation (manifesting as "unusably slow") but I haven't yet had the spare time to investigate this theory..


afaik several of the Dat folks worked on Bittorrent implementations beforehand so it's possible they were thinking...something?


There are a bunch of differences but the important ones (imho) are:

Dat doesn't use immutable addressing (a Dat address stays the same when the content changes), while IPFS does.

Dat at the lowest layers is stream-oriented, allowing stream-oriented services and applications that are near-real-time. IPFS is static blob/object oriented.

IPFS has a better developed "discovery" network at present (if you use Dat today you are typically in your own island whereas with IPFS you're part of "the" IPFS network). This is being worked on however.


Some decent answers in this stack overflow thread: https://stackoverflow.com/questions/44859200/what-are-the-di... and a good response from IPFS creator jbenet here: https://github.com/ipfs/faq/issues/119#issuecomment-21827839...

I think it's possible to view Dat and IPFS as two different layers of a stack that can interoperate and each solve useful problems at their layer. For example, Dat has more UX focus and high-level abstractions making generic app development smooth and easy (an area IPFS is weaker - though https://medium.com/textileio has been working to make this much better), while IPFS has the benefit of global name-spacing and content-addressing primitives that enable deduplication across identical datasets and validate the content is what you asked for (used by tools like Qri to do dedup within a data commons: https://qri.io/faq/). I've seen demos of projects using both together, each for their unique strengths - but there's still a ways to go to make interop easy.

If you were looking for nice IPFS overviews, I'd recommend:
- https://hackernoon.com/understanding-ipfs-in-depth-1-5-a-beg...
- https://medium.com/textileio/whats-really-happening-when-you...
- https://docs.ipfs.io/introduction/overview/ (see the concept guides for easier-to-parse explainers on CIDs, pinning, etc.)


Don't forget Swarm. https://swarm.ethereum.org/

So Swarm vs DAT vs IPFS.


If it has a blockchain in it, it immediately crosses the "too heavy" line for me.


Swarm would probably cross that line even without any of the blockchain stuff. It's more akin to Freenet than Bittorrent and would probably function better as a decentralized backend for most people. Expecting an average person to run a Swarm node is probably unrealistic in anything resembling its current form but allowing resilient decentralized sites which are accessible from any one of a number of gateways should be doable when/if they actually get around to implementing insured data storage that is the key component to the entire idea.


You're right about that, but I do wonder if this will be perfect for making distributed darknet marketplaces and torrent trackers a la what.cd.


No. It doesn't depend on blockchain except for the ENS resolution.


There's also Arweave, which is targeted on hosting web sites in a decentralized way.

- https://www.arweave.org/
- https://github.com/ArweaveTeam/arweave


I've got a comment comparing IPFS to Dat up in the Dat thread that's up now: https://news.ycombinator.com/item?id=20162881

tl;dr: I feel that IPFS is the smaller, more tightly scoped project that fits better into the existing web ecosystem. Dat has its own browser and versioning and other stuff bundled in, while IPFS works with normal browsers (https://news.ycombinator.com/item?id=20162972) in a way aligned with the web's graceful degradation principle.


I work for a cryptocurrency company as a software engineer and I definitely agree that a lot of projects backed by crypto funds tend to over promise and under deliver.

It's clear that those who control the funds aren't always the best at judging tech talent. Big tech corporations have had years to settle down and build a reputation to attract top talent. Crypto companies tend to attract greed over talent, and it shows in practically all projects.

It's improving for some projects but not others.


I've seen this argument before and tend to disagree. There is certainly greed, but I think the larger problem is that crypto doesn't have users.

I don't think it's particularly novel in 2019 to say that a successful app is built off of the feedback of its users. Any app, regardless of intentions or engineering prowess, is going to struggle if it doesn't have a sizeable user base providing feedback (and devs who listen to that feedback).


I think you're touching on an important point. There are already other vessels for the niche that cryptocurrency fills—at least as far as websites and web services go—that are way easier to both implement and use. With that said, I'm still rooting for cryptocurrency as a general competitor to state-backed currency, that in turn gives people all over the world more freedom to conduct their business without state or corporate intervention. That is most of all about freedom, and the only bogeyman they've yet to conjure against it is "it's used for criminal stuff". Like the dollar isn't also used for crime...


The distinction between "gives people more freedom to conduct their business without state intervention" and "enables crime" seems very elusive.

I can see an argument that enabling crime under oppressive regimes is moral. Or even enabling particular crimes that you feel shouldn't be crimes.

I have trouble seeing the legitimate use case for cryptocurrency in first world countries. If the main use case is crime, money laundering and speculation, it will either be squashed or remain niche.


I tend to agree with your assessment, but you are dancing around an important point.

Most people in the world don't live in a first world democracy. Today, many of these countries are stable and providing currency for their citizens. But ask a Venezuelan if they can see a reason why we might want an alternative to a state backed currency and I feel like the answer is self-evident.

Vitalik saw this use case and talked about how disheartened he was by the ICO boom because it was use cases like this that he believed in, so it isn't at all an afterthought.


> Like the dollar isn't also used for crime...

Crypto people say this like it's an argument in their favour - whereas it actually means: even for its one and only genuine consumer use case, crypto is utterly trounced by conventional currency.


I work at an election services company... you wouldn't believe how sales-driven the space is and how riddled it is with blockchain-based solutions. In the end, most would either not add enough value or require too much client buy-in to work with the variety of districts/cities/counties/states we deal with.


Are there any ICO-backed projects that didn't overpromise and underdeliver (or not deliver at all)?



IPFS was started in 2014, long before Protocol Labs started working on Filecoin.


IPFS is a victim of the great plague of modern software development: over-engineering. They've spent far too much time coding and far too little time thinking of ways to avoid coding by simplifying their algorithms and protocols and most importantly by carefully defining and scoping their problem.

The massive amount of cryptocurrency funny money they raised makes this problem worse, not better. Lots of money leads to too many cooks in the kitchen. Large organizations require immense discipline to avoid scope creep and runaway complexity growth. They need someone to say "no!" 95% of the time and relentlessly exterminate features and not allow things to be released unless they are ready and actually solve a problem.

Most large organizations don't have this, which is why most "enterprise" software is a mass of twine and chewing gum. In the case of enterprise over-engineered bloatware you usually have some corporate requirement forcing its use. No corporate requirement forces IPFS or any of Protocol Labs' other sprawling mega-projects to be used, so nobody is going to use them.


We've been playing a lot recently with the concept of IPFS websites.

We even created a plugin (https://github.com/almonit/almonit-plugin - unreleased officially yet!) for websites that combine IPFS with ENS (a decentralized DNS). Our repository contains a list of decentralized websites using this method. We found about 20 so far.

Technically, IPFS works well for us, but we made sure to have a server seeding our website at all times (where we suffer problems similar to those the author of the post describes).

Even then, we still had to make sure that the website is available in all main gateways, since most people don't run their own IPFS daemon. The strange part is that sometimes content is available in one gateway, but not in others. Or sometimes it's available in one gateway, but we can't get it on our local IPFS node.

I understand that IPFS is not a blockchain, so I can't expect all the nodes to have the same content. But I do expect the main gateways to communicate more directly with each other.

Conceptually, IPFS websites are a bit like sending your website to someone via unreliable, slow mail; i.e., it's not that attractive. You can make it somewhat more dynamic using what they call IPNS (it allows you to update your content). But the result is so slow that even the most devoted monk would lose his patience eventually.

A workaround is using a decentralized name system, like ENS.

This works very well, but the results are still static websites. No comments or anything really interesting happening.

Those websites are censorship-resistant and very robust; you don't have to worry about DDoS attacks. But then again, how many people worry about such things?

That said, I still like the concept of IPFS. We are exploring a few options to add dynamic behavior to that now, where the dream is to mimic existing services in a decentralized way.

Surely, they won't work as well, but the pro would be that it will be controlled by the users, and that they will be able to survive financially with no ads.


Almonit looks cool. The IPFS in Web Browsers working group has also been collaborating with Nick Johnson et al from ENS to create an “EthDNS” server to resolve ENS names to IPFS content in a decentralized way (along the lines of https://github.com/mcdee/coredns). Should be a working demo in time for IPFS Camp in ~2 weeks!


Sounds interesting. Is there more info about it, or will I have to wait for IPFS Camp to satisfy my curiosity?


Lidel has written a lot about this in issues (nice index here: https://github.com/ipfs/in-web-browsers/issues/147), but I think the live (recorded) demo will have to wait. =]


This is excellent feedback! Thanks!

> We even created a plugin (https://github.com/almonit/almonit-plugin - unreleased officially yet!) for websites that combine IPFS with ENS (a decentralized DNS).

This is great! We’ve been wanting something like this for a while -- and there is a bunch of utility in bringing ENS and IPFS together.

A lot of the problems you mention are known, and being worked on. Let me describe a bit more in each.

> Technically, IPFS works well for us, but we made sure to have a server seeding our website at all times (where we suffer problems similar to those the author of the post describes).

The IPFS content model requires somebody with interest in the data to keep serving it. So for now, yes you need to keep some ipfs nodes with the content around. For example, we serve all our content (all our websites are distributed w/ ipfs) using ipfs-cluster, which connects to the gateways. We’re working on ipfs-cluster and filecoin as the ways to solve this. Ipfs-cluster for when you want to run your own infra (or a community gets together), and filecoin for when you want to hire someone else to serve it for you.

> Even then, we still had to make sure that the website is available in all main gateways, since most people don't run their own IPFS daemon. The strange part is that sometimes content is available in one gateway, but not in others. Or sometimes it's available in one gateway, but we can't get it on our local IPFS node.

This is a big problem that we’re working on right now. Many of our recent releases aim to fix / reduce this problem. The nature of this comes from content-routing scaling. We’ve detected this getting worse as the network grew orders of magnitude, and we’re working on fixing this right now.

> I understand that IPFS is not a blockchain, so I can't expect all the nodes to have the same content. But I do expect the main gateways to communicate more directly with each other.

Yes, definitely. We agree-- on it.

> Conceptually, IPFS websites are a bit like sending your website to someone via unreliable, slow mail; i.e., it's not that attractive. You can make it somewhat more dynamic using what they call IPNS (it allows you to update your content). But the result is so slow that even the most devoted monk would lose his patience eventually.

IPNS key names are just not working well -- and getting worse with scaling. We’re working on fixing it, but lower prio than other more important problems. For now, we’ve been directing people to use DNS -- check out https://docs.ipfs.io/guides/concepts/dnslink/ -- these should be fast for you. But yeah, figure you know about this and may just be avoiding DNS in favor of more decentralized tools (ENS, IPNS key names).
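For anyone reading along who hasn't seen DNSLink: it's just a TXT record at _dnslink.<domain> whose value looks like dnslink=/ipfs/<hash> (or /ipns/...), so resolving it is a one-step lookup. A simplified sketch of what a resolver does, roughly what `ipfs dns <name>` does for you:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // Resolve a DNSLink name to its content path by reading the TXT record
    // at _dnslink.<domain>. Error handling trimmed down for the sketch.
    func resolveDNSLink(domain string) (string, error) {
        records, err := net.LookupTXT("_dnslink." + domain)
        if err != nil {
            return "", err
        }
        for _, txt := range records {
            if strings.HasPrefix(txt, "dnslink=") {
                return strings.TrimPrefix(txt, "dnslink="), nil
            }
        }
        return "", fmt.Errorf("no dnslink record for %s", domain)
    }

    func main() {
        path, err := resolveDNSLink("dev.peerpad.net")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("content path:", path) // e.g. /ipfs/Qm...
    }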

Also check out https://dev.peerpad.net/ -- this is an experimental tool w/ dynamic content (give it 10-20s -- unfort takes a while to “get online” -- this is the dev pad). Once peers are connected you get a pad that’s distributable across ipfs nodes.

> A workaround is using a decentralized name system, like ENS.
> This works very well, but the results are still static websites. No comments or anything really interesting happening.

+1 for ENS. And, you could use ENS to point to something like the content hash of peerpad above (see the latest content via `ipfs dns dev.peerpad.net` or `dig TXT _dnslink.dev.peerpad.net` == /ipfs/QmWbsqqqG9YpNYDt5afp6HY8TrKMtCtdGUtUfgkS9fRYeH) -- and this would be a _static_ html5 bundle that gives you a _dynamic_ local app. The hard part is backing up your content beyond your own browser -- pinning it to an ipfs-cluster or other tools, etc.

Anyway, the picture is clearly not there and usable yet, but the implications are big: you can get fully distributable p2p apps w/ fully dynamic content. (you could even encrypt the app bundles themselves, but this needs an extension to decrypt browser side) -- this feels like this: http://www.accademia.org/wp-content/uploads/2014/01/slaves-b... -- the form is emerging, but much work to be done.

> Those websites are censorship-resistant and very robust; you don't have to worry about DDoS attacks. But then again, how many people worry about such things?

A lot.

* https://en.wikipedia.org/wiki/Block_of_Wikipedia_in_Turkey and https://ipfs.io/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34is...

* https://en.wikipedia.org/wiki/Censorship_of_Wikipedia#China

* https://firstmonday.org/ojs/index.php/fm/article/view/9402/7...

> That said, I still like the concept of IPFS. We are exploring a few options to add dynamic behavior to that now, where the dream is to mimic existing services in a decentralized way.
> Surely, they won't work as well, but the pro would be that it will be controlled by the users, and that they will be able to survive financially with no ads.

It takes a long while for all of this kind of tech to come together. Keep at it-- you make significant progress YoY, and it adds up. What we can do now in 2019 is vastly superior to what we could do in 2015. Big innovation leaps take significant time to develop -- but the good news is _a lot_ changes decade over decade:

* https://en.wikipedia.org/wiki/History_of_the_Internet

* https://en.wikipedia.org/wiki/History_of_hypertext

* https://en.wikipedia.org/wiki/History_of_personal_computers

* https://en.wikipedia.org/wiki/History_of_operating_systems

* https://en.wikipedia.org/wiki/Cryptocurrency#History

* https://en.wikipedia.org/wiki/History_of_self-driving_cars


Wow, thanks for the fantastic reply:-)

Peerpad looks awesome, and is exactly in the direction of stuff we're interested in. Ironically, a friend sent me its link literally 5 minutes before I saw your comment. Great minds think alike.

We've been playing a bit with adding a p2p network in the background of a static IPFS website (using WebRTC). That way, if a website has enough visitors, they could mimic for each other many functions that a server normally provides. I did similar things in previous projects I was involved in. But those were native applications, whereas here there are extra challenges caused by the (relative) limitations of web technology.


Unfortunately, a lot of the problems in this blog post can only be solved by squaring Zooko's Triangle[1]. Most refutations of Zooko's Triangle depend on some form of blockchain.

[1]: https://en.wikipedia.org/wiki/Zooko%27s_triangle


It seems like only one problem from that blog post can be solved by that. The rest of the usability and performance issues are solved by acknowledging a need followed by a focused development effort.


Just to give an example of a blockchain approach to this: Ethereum Name Service [1].

1. https://ens.domains/


I can't support Ethereum as it uses proof-of-work, and while it is a different implementation than Bitcoin, proof-of-work is inherently energy-intensive / environmentally problematic.

If blockchain technology were to become popular and widely used by ordinary people, its already worrying environmental impact would skyrocket, assuming the technology is capable of scaling at all. We need to bring the energy consumption of blockchain technology down to a manageable level, and that means abandoning proof-of-work as far as I can tell.


Can someone give a proper counter-argument to this?

I keep telling my friends hyped by bitcoin that I do not believe in its future as a currency for real daily exchanges because of this energy consumption problem.

Is there any serious track to address this issue?

Unlike most technologies, I do not see the traditional efficiency gains when the technology gets more mature. It seems inherent to proof-of-work and cannot be improved in this paradigm

Assuming a solution is found, is it possible to "update" Bitcoin?


The Bitcoin wiki has a number of counter-arguments https://en.bitcoin.it/wiki/Myths#Bitcoin_mining_is_a_waste_o...

The future for ethereum is proof-of-stake. This project is called Casper and will be in Ethereum 2.0. For Bitcoin most transactions will be off-loaded to layer-2 networks but PoW will remain on the main chain.


The bitcoin viewpoint is that being backed by energy consumption is a Good Thing. It's harder to manipulate than proof-of-stake or other alternative consensus protocols.

At a larger societal level, we should definitely be worried about tackling emissions for climate change. For this, very few people are advocating getting rid of energy consumption in our societies. Yes, in the short term, people recommend being mindful of the emissions impact of your lifestyle (e.g. a flight has a large carbon footprint) but in the longer scheme of humanity's development we'll always have greater and increasing energy needs.

With that in mind, a lot of focus has been on using non-emissions producing energy (solar, wind, nuclear, etc.) to mitigate climate change. Seen in this context, bitcoin is agnostic to the source of the energy, and its Proof-Of-Work protocol is fine.


There was Namecoin way before that, with .bit domains, and it failed to solve anything.


I never tried ENS but namecoin works flawlessly as intended. It is actually more reliable than classic DNS.

It may have failed at adoption, but the problem of decentralizing DNS has been solved.


> It may have failed at adoption, but the problem ... has been solved.

I am unconvinced that a solution nobody uses counts as a solution to a problem anyone has.


People have not been hurt enough by centralized DNS and CA certification yet. As soon as that happens the masses will adopt the decentralized version in droves.

It might just never happen. Until then the "worse is better" rule applies.

I have used ncdns in conjunction with dnsmasq in the past and I am happy that there is a solution out there that works just in case the other fails.


According to Wikipedia, Namecoin uses proof-of-work just like Bitcoin, and is therefore very energy-intensive / an environmental disaster. Therefore I could not support Namecoin even if it works perfectly as you say it does. We're going to need a more environmentally-friendly solution to decentralizing DNS.


As long as Bitcoin exists you can do merged mining with Namecoin. This means there is literally zero additional cost and impact on the environment for mining.

If the very unlikely case occurs and bitcoin stops being a thing, there is always the possibility to change the PoW algorithm. Until then, Namecoin is a solution that has worked ultra-reliably for years and, most importantly, unlike other solutions, it works today.


If Namecoin were to become popular, wouldn't that encourage more mining of Bitcoin even if you were doing merged mining? It seems like if Namecoin took off then people would just say the reverse, "Don't worry about the environmental damage of Bitcoin, people were already mining Namecoin so there's no additional harm."


Good question.

Do you assume that if Namecoin were to become super popular, the price of its coin would become very high? I don't see why. Can you explain this assumption?

Namecoins are not Bitcoins; they're not supposed to be used as a currency, so the logic that the popularity of Namecoin would create a huge rise in its price is not straightforward.


It will take a long time, if ever, for NMC to become as popular as BTC. Until then, let other people figure out how to implement a proper proof of stake, and switch to it when the time comes.

In the meantime, merged mining is the best compromise between rock-solid chain security and environmental impact.


It may have failed at adoption, but I'm really hoping Handshake succeeds. They have a better go-to-market strategy.

https://handshake.org/


I believe that Handshake also uses proof-of-work, based on their paper at https://handshake.org/files/handshake.txt, and therefore I can't support this effort either, because AFAIK all proof-of-work systems use unjustifiable amounts of energy by design (e.g. because anyone attempting to edit a past transaction would have to consume impossible amounts of energy to replicate the entire history).


Blockstack addresses Zooko's triangle in their original whitepaper: https://blockstack.org/tokenpaper.pdf

and their concept of Gaia hubs associates immutable identities with user-owned storage, without the need for IPFS pinning or other data duplication: docs.blockstack.org

You can see the most recent conversation about it here: https://forum.blockstack.org/t/cannot-find-ipfs-driver/6147 in which I convinced someone looking at IPFS to use Gaia instead.

Full disclosure: I am an engineer at Blockstack.


Anyone remember the P2P craze of the early 2000s? Or was it the late 90s? What's left over from that? Freenet?

What I'm always wondering about: Someone needs to foot the server/bandwidth bills. Who? If you run a Pi on your home network and serve requests with your 5MB/s upload: That's fine. More power to you. We need more of that. (and in that case, you are paying your ISP)

But the transformation envisioned and the bandwidth and compute required doesn't come for free. Someone's gotta pay. In things like dollars. How can this work?


Exactly.

P2P systems historically have dealt with the problem altruistically, or with limited tit-for-tat, which both work in many cases, but have so far failed to work for large scale long-term resilient systems.

This is where Bitcoin managed to do something remarkable: achieve high uptimes typical of the best centralized systems, through a very clever, but still open and permissionless, economic incentive structure. Markets have been shown to be extremely useful for creating robust open services. More on these ideas here: https://www.youtube.com/watch?v=IfLIoOr4p0A -- we think this kind of thing is going to lead to extensive, global, public utilities run w/ internet-native money.

But it will take a while-- this stuff is extremely difficult to build right now-- it feels similar in nature to very early Web, or pre-unix systems. Lots of hand-rolled primitives, many with the capacity to cause serious failure (not very old cryptography, and complex security questions). Perhaps better programming languages will help us build these systems dramatically faster/easier. For now though, you can see the entire blockchain space wrestling with these problems.


The P2P craze gave birth to BitTorrent. And p2p bandwidth actually does come for free for modern broadband ISPs with fiber-to-the-building style infrastructure. In fact, p2p allows ISPs to save on expensive bandwidth by pushing more traffic over local, inexpensive routes that sit unused. Bandwidth costs are already insignificant anyway, something like $1 per month per consumer tops, so they shouldn't even be a concern.


It's not the cost that's preventing what you mentioned. It's the ISPs, which don't allow incoming traffic and have symmetric NATs all over their network. IPv6 is here and we should have gotten rid of NAT long ago, but there's no incentive for corporations or ISPs, so we are stuck here.


That's true as well. ISPs have no interest in allowing end users to host anything meaningful.


Well, it's against the terms of service (though if you have no other ISP options, it's more like terms of slavery).


The main barrier to P2P isn't cost or bandwidth or algorithms. The main barrier is NAT. As long as IPv4 with P2P-unfriendly symmetric NAT is the dominant way of accessing the network, P2P will remain hard and niche.

One ugly hack to get around one problem (IPv4 address scarcity) has single-handedly transformed the structure of the Internet from a mesh to a top-down monopoly-driven medium. NAT is like literally Satan.

It wouldn't be quite as evil if it weren't so often symmetric, but for some odd reason symmetric is what many vendors implement. I can't for the life of me understand why symmetric NAT exists when the same scalability can be achieved with port restricted cone NAT that falls back to symmetric-like behavior if port preservation is not possible due to resource exhaustion. That would yield working P2P >90% of the time instead of <5% of the time.


Things aren't that bad. Hole punching can still work most of the time for broadband, and you don't really need all that many nodes with routable IP addresses. Scale makes this problem even less of an issue. Mobile networks are somewhat problematic, but they cannot be heavy nodes anyway and have to be lightweight clients that piggyback on normal nodes.


We actually find NAT to be a pretty big problem still. Even using NAT-PMP, UPnP, and hole punching, we still see a roughly 70% (sometimes much higher) undialable rate, especially for people running ipfs in China (though high failure rates are also observed in other countries).

We're pushing hard on getting libp2p relays up and running to help get through this. The idea is that we can use a relay to make the initial connection, then ask the remote peer to try dialing back (assuming that both peers involved aren't undialable).


> NAT is like literally Satan.

NAT is merely a tool. Using it to restrict user activity is bad. Using it on your home network to preserve your privacy (ie how many devices you have and what each is doing) is good. There are also other use cases (both good and bad) that I'm omitting here.


I get what you're saying, but I still disagree.

NAT doesn't help privacy either. There are a million plus one ways to fingerprint or track a web browser without knowing anything about the end user's IP. A modern browser presents a ton of surface area. It's also quite easy to track by externally visible IP or IP prefix (e.g. /24 or /64 in IPv6) if you make certain nearly-always-valid assumptions about the timing and sequential nature of user behavior.

I think a lot of people just don't grasp how easy tracking actually is. Think of how obscure and sophisticated hardware-level attacks have become: Spectre, RAMBleed, etc. Now imagine the surface area presented by a browser. It can and does get that clever.

The only way to really prevent tracking is to use a sandboxed browser with strong anti-tracking features and redirect your traffic... or use a P2P decentralized system!

If we didn't have NAT, we'd probably have much more privacy-respecting P2P alternatives to the centralized web and we'd also have easy to use P2P systems for anonymizing requests by bouncing them off other peers. In other words without NAT our privacy tech would be better.

I'll be more accurate than saying NAT is Satan. NAT is a massive piece of technical debt. It's a "simple" hack that breaks something fundamental about the Internet, namely namespace unity. That in turn makes a ton of other stuff exponentially more difficult.


> NAT is a massive piece of technical debt.

Well at the ISP level, yes, absolutely!

Regarding privacy, your response seems very focused on web browsing. I agree that NAT plays no role there - it's at a lower level. I was referring only to the potential for masking devices - with NAT, there's no way to tell how many (or how few) devices are producing a given stream of traffic. To the extent possible, I'd rather external observers (particularly my ISP) not be able to tell what's going on inside my network or how it's configured.

In case this isn't making sense, imagine a scenario in which your ISP equates simultaneously active IP addresses to number of active devices and then adopts a fee structure based on this. Or perhaps just tries to profile the types of devices that are active on your network in order to sell that data to third parties. For example, identifying how many Alexas or smart locks or whatevers that you have, correlating that with how many devices are streaming or browsing, correlating this with customer demographics, and so on.

NAT is a useful tool for engaging in namespace shenanigans - consider NAT-based load balancing, for example. It can also be used for ill, and I agree that the current state of affairs is unfortunate.


There are cases where NAT can be used beneficially but those are probably less than 10% of NAT usage. Most NAT usage is to get around IPv4 address shortages. Symmetric NAT is just pointless.

I also still think you're not fully enlightened as to just how easy tracking can be. It's not just web browsers. Every single distinguishable characteristic of a client forms one bit in a hash that can be used to track it, and thus distinguishing precision is 2^N where N is the number of bits of information that can be gathered.

Your IPv4 /24 already provides 24 almost always unique bits, so that's a good start for any tracker. Now start correlating /24's over time using clustering algorithms. Now start TCP fingerprinting, keeping track of pinned certs, measuring anything and everything that can be measured about a client. Pretty soon you're up to something like 32 bits which is one in four billion.

I use web browsers as an example because they're just embarrassingly easy to fingerprint.


> Symmetric NAT is just pointless.

Fair enough, I don't know enough about usage of different NAT types to debate such things (and never intended to).

Regarding tracking, I'm still not sure that we're talking about the same thing here. Are you saying that fingerprinting could be used to accurately extract per-device data from the aggregated stream? That is, if an entire network is hosted behind a single external address via NAT, are you suggesting that the carrier could reconstruct the separate fingerprints from the aggregate data stream that they have access to?

If so, that would be news to me. Obviously they can make some educated guesses (a single device probably isn't originating simultaneous Netflix and YouTube video streams), but I'm assuming that aggregated (TLS encrypted) data streams are going to be fairly difficult for an external observer to tease apart.


And this can be fixed, but only by Congress: they need to make it a law that ISPs replace most of the NATs with a better solution.


BitTorrent, Tor, Bitcoin, and the other cryptocurrencies are the results of "the P2P craze of the early 2000s". They resulted in Wikileaks, Snowden, and Trump, not to mention the current blockchain feeding frenzy. Freenet is, by comparison, somewhat less important.

I'm puzzled by your comment because in the second paragraph you provide the answer to the question your third paragraph is about. If people are paying their ISPs for bandwidth in things like dollars, what's the problem? It's true that bandwidth is more expensive when you buy it retail from ISPs rather than wholesale in data centers, but only by a factor of two or three, so this is only an issue for the most bandwidth-intensive applications, such as Netflix streaming.

Bandwidth and computation are extremely cheap now, so I don't think this is really an issue. As a point of comparison, when I started browsing the web in a graphical web browser in 1993, I was running it on a ≈100-MIPS RS/6000 with a couple of dozen other people. There were nearly a thousand web servers (in the world, not on the RS/6000), but each one could only serve about one or two hits per second, mostly on machines similar to that one. Shortly after that, my university, with a few thousand students, upgraded its shared connection to 45 megabits per second, as part of a contract to manage a supercomputer center in another state.

My cellphone runs about 10,000 MIPS, and would have no trouble handling 5000 hits per second; this is more than the entire WWW at the time and well into 1994. Most homes in rich countries have more individual bandwidth than I was sharing with hundreds of other active users at the time.

We don't have a shortage of bandwidth or computation. We have a shortage of community, a shortage of innovation, a shortage of cooperation, a shortage of imagination, and a shortage of freedom. But bandwidth and computation are abundant in a way that was unimaginable in the days when we built the WWW.


Even Freenet is kind of a ghost town...


IPFS is providing some value already for blockchain dapps to deploy UIs, albeit currently with a dependency on centralized ipfs gateways.

Eg. https://augur.net/ipfs-redirect.html


I don't think this is ultimately an IPFS problem. The way I've always understood IPFS, the protocol is decentralized blob storage. You would use it in place of a CDN, to share files, or as a building block for something higher-level. This is in contrast to browsing, where there are established semantics for addressing absolute and relative URIs and clustering relevant content under hierarchical addresses in the URI path.

The author insisted on doing this on bare IPFS but I think this is (vaguely) analogous to building a website based on IP addresses and port numbers, not URIs. To the point: the semantics just aren't there.

I could imagine an IPFS based web site being built with a local URI resolution map as part of the bundle of objects that is the web page. The sub pages would refer to symbolic URIs like before but the browser would also download a site map that links URIs to actual hashes of the latest revision, and resolve references like "foo/bar.html" or "/root/foo.html" based on the map. Or a proxy could do this transparently, translating URI requests to hashes and fetching data directly from IPFS, then serving it back to the browser as if it was downloaded from "/root/foo.html" instead of "ipfs://50ad443758222efea0286f3a94db2c25".

The top-level entry to the web page would basically be this URI resolution map which, as content-addressable, would effectively refer to a single revision of the web page. This could be implemented as a separate URI scheme, like ipfs-uri://bb9f6cbcc28829b57dd25102f67b9d37/main/news.html where ipfs://bb9f6cbcc28829b57dd25102f67b9d37 would point to the URI map and the URI handler would resolve the relative URIs such as /main/news.html based on the offered mappings.
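To make that concrete, a hypothetical URI resolution map (everything here, including the hashes, is made up for illustration) could be as simple as a JSON object shipped as the top-level entry of the site:

    {
      "/main/news.html": "ipfs://QmHashOfNewsPage",
      "/root/foo.html":  "ipfs://QmHashOfFooPage",
      "foo/bar.html":    "ipfs://QmHashOfBarPage"
    }

The browser or proxy would look up each reference in this map and fetch the corresponding content hash, so the pages themselves never need to embed each other's hashes.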

But all this does require extra lifting from the browser or a proxy. I don't think IPFS as designed is feasible to replace something like HTTP which was explicitly meant to work with URIs.


IPFS already supports path-based lookup, starting from a hash which identifies a directory tree. For example, this is a valid IPFS URL which renders as a directory listing:

https://gateway.ipfs.io/ipfs/QmYXYP1J346Pic6UHV5AkUy1UgabtJV...

Normally that link would retrieve the content via a public IPFS gateway, but if you install one of the IPFS browser extensions you can have URLs like that automatically redirected to a local IPFS client.

As long as you stick with relative paths, it's not difficult to host a static website on IPFS. You can even configure a CNAME to point to a public gateway with special DNSLink TXT records so that visitors don't need to know the hash of the root directory. The end result looks just like a normal HTTP URL.[1][2]

[1] https://docs.ipfs.io/guides/concepts/dnslink/

[2] https://developers.cloudflare.com/distributed-web/ipfs-gatew...
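For reference, the DNSLink setup described in those docs boils down to two DNS records; the names and hash below are placeholders:

    ; DNS zone sketch -- point the hostname at a public gateway, publish the root hash
    www.example.com.          IN  CNAME  cloudflare-ipfs.com.
    _dnslink.www.example.com. IN  TXT    "dnslink=/ipfs/QmYourSiteRootHash"

Updating the site then means re-running ipfs add -r and editing that one TXT record to point at the new root hash.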


Yes, IPFS addresses some of the use-cases of HTTP while solving other problems where HTTP falls short. It isn't a complete drop-in replacement, but does set up a foundation for an internet that will last longer and (at some point) perform better across varying network conditions.

> but I think this is (vaguely) analogous to building a website based on IP addresses and port numbers, not URIs.

IPNS is a naming solution designed to address this:

https://docs.ipfs.io/guides/concepts/ipns/

It's not super fast right now, but there's work happening to make it much faster.
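As a rough sketch of that flow with the go-ipfs CLI (the hashes are placeholders), IPNS gives you one stable name that you re-point at each new build:

    # add the site and note the root CID it prints
    ipfs add -r ./public
    # publish that CID under this node's IPNS key
    ipfs name publish /ipfs/QmYourSiteRootHash
    # anyone can then resolve the stable name to the latest CID
    ipfs name resolve /ipns/QmYourPeerID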

ENS, the Ethereum name system, is also an emerging way of doing this.

> basically be this URI resolution map

IPLD is a data model that works with IPFS to address the use-case you're describing, where you have a permanent reference to a mutable set of data: https://ipld.io/


All right, so there is already work ongoing to solve these problems, and that work is built on top of IPFS instead of extending the base protocol. The author, who insisted on using plain IPFS, was thus running into expected difficulties, since IPFS on its own really isn't the direct answer to that particular use case.


Hey, but it's great if you want to piggy-back a botnet on it, with no way to kill c&c: https://www.anomali.com/blog/the-interplanetary-storm-new-ma...


Yeah, we're watching this. They aren't so much piggybacking as they are just using the open source code for libp2p and our pubsub (which is fully peer to peer, not reliant on any central server that could shut it down). However, the way they are doing it makes it really easy to find infected nodes. Running a scraper, we see around 3k peers involved.


Just out of curiosity, how would you delete something off IPFS? Is the only solution to have the address hash point to a program, or app, if at all possible? Or a space that does not update the hash, even if the "thing" is updated? I'm new to this interplanetary world... :D


From IPFS: You don't. As long as a connected node is hosting the file, it will be available to other peers. For a file that only a single host is "pinning" (= purposefully hosting the file), it will fade out organically once it leaves the caches of all the nodes that accessed it once.

From IPNS: You point the IPNS entry to something nonsensical and then by default it will be gone once the TTL of the old IPNS entry is reached (kind of like DNS).


If you’re the one hosting a file on the network, it’s as easy to delete it as cleaning your cache / stopping hosting it. Folks might have the “address” for your file, but they won’t be able to access it if you aren’t hosting it. Now if someone /else/ has the content already, you can’t take it away from them and delete it off their server - so same model as the web already has (just without central players in control of these decisions, and the useful ability to still find content even if the original host changes)
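Concretely, with the go-ipfs CLI the "stop hosting it" part looks something like this (hash is a placeholder):

    # drop your pin, then garbage-collect unpinned blocks from the local repo
    ipfs pin rm QmYourContentHash
    ipfs repo gc

After that your node no longer serves the content, though copies cached or pinned elsewhere are unaffected.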


Not sure, but you probably store encrypted data and delete the decryption key if you want to discard it. Unused blobs will probably fade away with time. Not sure if it works this way.


A good alternative to IPFS that works today is magnet links and torrents.

There are "youtube competitors" that offload the high bandwith requirements via webtorrents running in javascript.

You cannot host your full webpage with it, but you can reduce bandwidth costs drastically.

A good alternative to DNS is namecoin. It already works flawlessly and I honestly wonder why it has not been adopted more widely.


I think if you're proposing something as a "good alternative", then being able to understand why it got near-zero adoption, and failed in the real world, is probably important.

I would say that Namecoin appears to solve a problem that nobody actually has in practice.

I know advocates are pushing Ethereum Name Service, which does the same thing - but again, what are the lessons learned from nobody bothering with Namecoin?

(Namecoin is interesting, as it was literally the first altcoin. It hadn't occurred to anyone to fork the Bitcoin code and make their own coin until then.)


I remember seeing work on using magnet links for decentralised sites about a decade ago. I think the "kio-magnet" project in KDE was along these lines, but ironically the blog posts detailing it seem to 404 now ;)


I built some open source tools to accomplish everything that this article tries to achieve. It's totally possible and borderline-easy to use IPFS to create uncensorable websites:

Check it out. Make up your own opinion:

http://troutsblog.com


Here's your periodic reminder that IPFS is a distribution mechanism, and not a storage mechanism. It doesn't persist anything. It's like a more granular BitTorrent, and not really a filesystem at all.

This can be very useful in certain situations and projects! But it means that it is not a replacement for a centrally-hosted website... and the IPFS site does a very poor job of conveying this limitation.

(I can't take Filecoin seriously as a 'solution' here either - as far as I can tell, it suffers from the exact same problem as every 'blockchain-backed storage system' I've seen before... it's unable to reliably verify that some peer is actually storing data, without a local copy to verify against.)


> it's unable to reliably verify that some peer is actually storing data, without a local copy to verify against.

Filecoin can actually do this. I'm planning on doing a blog post about how this works soon (in all that copious free time), but a good summary is here: https://github.com/filecoin-project/specs/issues/155


For years I've been reading the HN comment sections on IPFS links. In the past IPFS developers would show up to defend it, but for this link, at least so far, I haven't seen any.

Is this an admission that IPFS is unworkable, and they don't have any prospects of making it usable in the next few years?


Hello! o/ We've responded to a number of points here.

You can check out our roadmap, linked at the end of this comment.

> IPFS is unworkable... making it usable in the next few years

Absolutely not.

The OP brings up a lot of great, useful feedback for us, and we'll respond to it.

But the OP is also simply wrong in saying it's "not usable". There are millions of end users benefiting from IPFS, 100Ks of libp2p nodes, we see PBs/mo of traffic in the infrastructure that we run, and millions of daily requests to our gateway. Look to fully decentralized applications and systems like OpenBazaar, Textile, Dtube, and others.

Beyond that, we're well aware of the many shortcomings, and working on them. We're unfortunately spread thin across a lot of projects (IPFS, libp2p, filecoin, ipfs-cluster, etc), but each is seeing significant growth and improvement.

You can see the long-term IPFS roadmap here https://github.com/ipfs/roadmap


Hey there! I'm usually one of the people who likes to engage on HN. I'm actually taking a couple days off to visit family, someone just texted this thread to me. I'm gonna go through and respond to some stuff now.


For all the time you spent on this, you could have just posted an onion site in about five minutes and spent the rest building a little web-of-trust indexing site to fill the demand from the fewer than 1,000 people who want to post personal sites on a decentralized web.


Unicornporn, thanks for the writeup; it saved me a lot of time (I thought IPFS was ready to go for this sort of use case).

P.S. I think there is a typo where ‘ipfs pin add’ is used twice; shouldn't one of those be ‘ipfs add’?


I didn't write the blog post. Tom MacWright did. :)


IPFS looks good on paper. In practice my experience wasn't very good. ZeroNet, on the other hand, feels like the future for DWeb and that's the direction I'm personally headed. What about you?


What do you use ZeroNet for? Does it give you reasonable latency performance?


I use it for zero-cost, uncensorable hosting. It's fairly trivial to get set-up if you're running Linux. Here're instructions for Manjaro if you fancy Arch-based distros: https://habd.as/post/surfing-uncensorable-web/


It appears that ZeroNet uses Namecoin, and therefore is reliant on energy-intensive / unsustainable Bitcoin technology, so I cannot support it.

It blows my mind that almost every decentralized system mentioned in the comments here relies on unsustainable proof-of-work systems. Blockchain technology is possible without proof-of-work, and decentralization is possible without blockchain. We can do better, folks.


I agree that IPNS is pretty unusable (do names still expire after 1 day?).

Last I saw, the IPFS devs seemed to be pretty excited by a pub/sub mechanism they were building into the system; potentially to replace the workings of IPNS.

Is that a stable or useful alternative for indexing changing content like a blog? How decentralised is the pub/sub (i.e. do new subscribers need to contact the publisher, or are messages persisted for a time in the network)?


The way IPNS uses the DHT faces some deep challenges if you really need the “latest” IPNS update (if you don’t there are various ways to make IPNS perform significantly better, see https://discuss.ipfs.io/t/ipns-resolution-takes-a-very-long-... for guidance).

There is also some work to make IPNS work over PubSub independently from the DHT. That work is being tracked at https://github.com/ipfs/go-ipfs/issues/6447, and should significantly improve IPNS performance as well as add in features like allowing users other than the author to keep IPNS records alive (so no 1 day expiration issues).

As for the decentralized nature of pubsub it is essentially an opt-in system. Random people on the network are not holding or forwarding your messages as they would in the DHT. However, anyone who has subscribed to a topic will propagate messages for you to other subscribers. This means subscribers do not need to directly contact the original IPNS record publisher to get a record, but instead can get it from anyone who has it and is advertising that they do so.
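For anyone who wants to poke at the underlying pubsub mechanism, go-ipfs of this era exposes it behind an experimental flag (as far as I recall -- check your version's docs); the topic name and message here are just examples:

    # start the daemon with pubsub enabled
    ipfs daemon --enable-pubsub-experiment
    # in one shell, subscribe to a topic
    ipfs pubsub sub my-topic
    # in another, publish to it; subscribers propagate it to each other
    ipfs pubsub pub my-topic "hello"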


I think there's a proposal (https://github.com/libp2p/go-libp2p-pubsub/issues/175) and a PR in progress (https://github.com/libp2p/go-libp2p-pubsub/pull/171) for IPNS over pubsub - still a WIP, but it aims to address these performance challenges.


>Ready to publish it to the web? Not so fast. Clicking a link brings us back to my issue in 2017 [link]: the way that the IPFS gateway works will break your links.

(following the link to 2017 post)

>And even if I used the base tag to re-route that link so that it points http://localhost:8080/ipfs/QmR96iiRBEroYAe955my8cdHkK4kAu6SA... ... It wouldn’t work: the correct URL for that Recently post is, instead, http://localhost:8080/ipfs/QmTbJ6RSLZDmVYy8dgdoeQLCtKya7UrNT.... So IPFS content links are fully content-addressed. I suppose to make my site fully IPFS, I’d have to build each individual page and then construct a home page that linked to the generated hashes for those pages. That leaves an open question: how could two pages link to each other? Adding a link from one page to the other would change its hash, so wouldn’t it be impossible for pages to reference each other? This might be a lack-of-coffee problem on my part.

They've misunderstood that. Both of those links would be valid. You would have all the pages be under http://localhost:8080/ipfs/QmR96iiRBEroYAe955my8cdHkK4kAu6SA... and refer to each other by relative links.

The author's point of relative links or a base tag being necessary for the site to work on the ipfs gateway URLs is valid though.

(back to new article)

>So, links don’t work. I posted an issue detailing this issue, and while I got an encouraging response that there’s a real solution planned, there’s no real solution. People use specific plugins just for IPFS, like this one for GatsbyJS, to get it to work. I ended up writing make-relative, a script that rewrites my built site to use relative links. This is where the story about IPFS being useful here and now for web developers breaks down a little. I’ve done enough HTML-mangling and path-resolution in my decade in industry that writing this script was straightforward. But the knowledge required to do it is not all that common, and I think this is where a majority of web developers would call it quits, because IPFS’s ‘website hosting’ story would look broken.

You can either use relative links, a base tag, or keep absolute links and only support your domain with dnslink (and eventually when the web gateways support the hash in the subdomain field, or extensions support the ipfs:// protocol, then your site will also work through that). I'm hoping for the hash-as-subdomain support which will make things simple.
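To illustrate the first two options (paths and hash are placeholders): either write every link relative to the current page, or set a base tag that matches wherever the site's root directory hash is mounted and keep the links relative to that base.

    <!-- option 1: relative links survive whatever prefix the gateway adds -->
    <a href="../posts/hello.html">Hello</a>

    <!-- option 2: base tag pinned to the root directory hash; links stay
         relative to it (no leading slash) -->
    <base href="/ipfs/QmYourSiteRootHash/">
    <a href="posts/hello.html">Hello</a>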

>IPNS

Yeah, it's awkward. This is for multiple reasons (Zooko's triangle and hosting of the names) that can only be addressed with a centralized service (see DNSLink) or a blockchain. Thankfully there's the DNSLink option too. I wish the IPFS docs wouldn't push IPNS as much, because it's not ready / it's not what most people want.

>That was an incorrect assumption. IPFS-based websites do update their DNS records every time that they update their website, so that they can avoid using IPNS, because IPNS is just too slow. This was a tough discovery, because it works against everything I know about DNS – a system that isn’t particularly designed to be fast or scriptable.

IPFS isn't for frequently changing websites. You're not going to host a forum or other dynamic site on IPFS. (Or if you do, it's going to work by being static content hosted on IPFS that contains some javascript that talks to some regular external HTTP(S) servers or maybe even a WebRTC swarm for all of the dynamic content.)

>Unfortunately, once I started getting this set up as a ‘pinning device’ [server], the fun stopped. I tried running ipfs pin add with the hash generated earlier from ipfs add -r, but it just ‘hung’ - outputting nothing at all. After a while I realized that, like ipfs pin add, IPFS doesn’t communicate very well when it’s having a problem. So I figured out how to turn logging information all the way up, and then… I was never able to get past a ‘got error on dial’ failure, despite trying all potential configurations of the IPFS daemon, enabling logging, upgrading to the newest version, and so on. There are about 63 similar issues in the tracker, 21 of which are marked as bugs.

I think the "got error on dial" errors are just it complaining that it can't connect to some random people in the p2p swarm. Like people who may have recently turned off their computers, etc. You can't expect to be able to connect to everyone in the p2p swarm. IPFS is just trying to connect to tons of people in the swarm until it finds people with the blocks that you're trying to pin, and it's reporting errors that it can't connect to some of the people in the swarm. The real problem is that it should be giving some kind of logs of how it's trying to find some content, it's successfully connected to a bunch of people, but none of them it's talked to yet have it. Also of course it should eventually make a connection back to your computer and successfully get those blocks. I'm not sure why that's not happening and that's disappointing it's not working. I really hope IPFS improves in things like this because the concept of IPFS seems great.


Recently I was playing with IPFS: https://genesis.re/kleros-metaevidence-metahash/

Using another part of the crypto ecosystem (Infura).

It's still not straightforward.

BTW... A few weeks ago I saw on HN an automated tool to publish a static website on IPFS. It was pretty sleek...


Can you link the tool?


Maybe this one:

https://github.com/agentofuser/ipfs-deploy

Makes it super easy to push static sites on IPFS.


https://www.shiftnrg.org/ might be interesting here too - IPFS-based project targeting web hosting specifically (includes a custom DNS + working on serving dynamic content, CMS, etc).

BTW as a proof of concept their website is already hosted on their own IPFS cluster.


Their website is hosted on CloudFlare:

https://redbot.org/?descend=True&uri=https://www.shiftnrg.or...

They’re using DNSLink so basically what we’re seeing is that using DNS and a major CDN makes your site reliable, which does not seem like a particularly novel advance.


Non-native speaker here, is the usage of "than" in "Your browser than uses DNS, a decentralized naming system, ..." actually correct? I happen to see this a lot and I am not sure if this is a spelling mistake when people want to write "then" or if it is actually intended.


This was a simple typo; I accidentally used a sound-alike. It's fixed in the post, thanks for noting it!


It is a very common spelling mistake.


I wonder why? It seems to be common with native speakers.

I'm not native in English but, for me, it would seem to be nearly impossible to mix these two words even in a moment of carelessness. They mark greatly differing meanings and thus they practically live in different slots in my brain. Even a blind typo won't explain it because 'a' and 'e' are not too adjacent on a qwerty.


Anecdotally, from what I've seen in various languages, native speakers do seem to make more mistakes that are tied to spelling/pronunciation (than/then and their/they're in English, aller/allé in French), whereas non-native speakers make more grammar mistakes (for example I tend to put articles in the wrong places or forget them in both English and French, but the spelling mistakes are absolutely jarring).


I think because in some English dialects "than" and "then" sound very similar and many people (myself included) write by basically transcribing their internal monologue.


Interesting, this would definitely explain these errors. Similar to there's/theirs and they're/there, which are examples of the same thing. Writing out words based on such a phonetic memory would explain all that.

But the next question to wonder is whether this is English specific or person specific?

Is it that native English speakers form their language primarily through phonetic patterns and translate back to spelling from there, or is it that some people simply crunch spoken language and other people crunch written language in their heads?

I definitely start from letters and words, and I'm often lost at how to pronounce something because, in English, you can't deduce pronunciation from the written form. In many cases you just have to "know". But I'm not a native speaker, so I can't tell if this is just because of how my brain works or because I learned English later in life.

I can't backreference to my native language, Finnish, either. Finnish is spoken exactly as it's written. If you know how to say a word you know how to write it, and vice versa. So the phonetic and written forms do not differ.

If anyone native to English could verify that what they keep in their head is "written English", that'd be a valuable counter-point to suggest it just depends on the person.


As a native English speaker, words in the language appear to be stored against sound rather than anything else. When trying to write clear English to be read, I am reading it to myself as I write to ensure that it reads clearly, and so it's more an audio process, allowing for confusion of words that sound similar.

I would guess that those who have learned the language have a more logical, grammar based structure and write more deliberately.


"Could of" instead of "could have" being a prime example among native speakers. Although it's shocking how many people genuinely believe the former to be correct.


> I would guess that those who have learned the language have a more logical, grammar based structure and write more deliberately.

We have our own native languages with their own idiosyncrasies and we don't make mistakes like "could of" or "you're/your" in those languages. There's just no excuse for those simple mistakes...

Greek has 5 very common and 1-2 rare spellings for the sound 'i' (as in 'kit'): ι, υ, η, οι, ει / υι, ηι

Imagine if people who can't tell you're/your apart and write 'could of' had to face this reality... They are lazy, ignorant people, and the other native speakers need to stop enabling them and making excuses.

<edit>

I'm not actually sure if 'ηι' exists; Google is showering me with irrelevant results. In any case, there are also:

two ways to spell 'e' (as in bed): ε, αι

two ways to spell o: ο,ω

two ways to spell the av/ev sound: αβ/αυ , εβ/ευ

two ways to spell the af/ef sound: αφ/αυ , εφ/ευ

and more. Maybe the difference is that all this crap makes you actually pay some attention to the language if you don't want to embarrass yourself.


As a native German speaker who is basically at the same level in English now, I have also started making these weird mistakes native English speakers make. I never had this issue in German; not sure what it is about English. I think it's just the random spelling vs. pronunciation in English: your brain just can't keep up matching your vocal thoughts to the right spelling when typing quickly.


No.

> For journalistic integrity, this post hasn’t been edited or reviewed by anyone. Hence the typos.


IPFS is a decentralized file store. It isn't a decentralized website host, although it can be used as one. That is just a nice consequence, not its reason for being. IPFS is a building block. If you want a better experience, build a tool that takes advantage of IPFS.


I think this comparison is pretty good: the author's problem with absolute URIs is essentially the same as using `file://` URIs and accessing the site from different directories (although `file://` doesn't have the chicken-and-egg problem of pages trying to link to their own hash; IMHO that would be worse than relative links anyway, even if it were possible).

Still, I think that the author's criticism is valid, since the IPFS project does seem to encourage the view that it's suitable for Web hosting, rather than just "a nice consequence". The author's struggle with IPNS, and recommendation that it be de-emphasised in the docs, is in line with my own experience.


The IPFS documentation only provides a "basic example" of hosting a website with IPFS. In general, the IPFS website focuses on its uses as a file system.


I tried to port libp2p to C++. Each time, I got discouraged by so many subprojects linked together. I really don't like the way they organize the code.


There is a C++ implementation of libp2p being developed at https://github.com/soramitsu/kagome with help from the go-libp2p original implementers (and funding from web3.foundation). They've implemented a lot of the major functionality already. I'm sure they would appreciate contributors if you want to get involved. (Full disclosure: I work on the libp2p project)


Anyone tried urbit?


This might sound a bit weird, but when I look at the employee base of Protocol Labs (the makers of IPFS), the most impressive employees are the business people. Multiple Harvard Business School graduates, tons of Stanford degrees. For the technical people, some are impressive, but nowhere near the stature of the business people. No distinguished ex-FAANG engineers, no principal engineers from notable companies, a few top STEM PhDs (not CS), but lots of obscure developers. This isn't to dig on them, but with the kind of money that company has, and the level of technical complexity they are trying to tackle... why don't they have better engineers?

EDIT: Not looking to argue about whether leetcode filters for good programmers.

EDIT 2: Self-taught developers at random companies can be amazing, but for a company "evolving the web" with 300 million in the bank, they have hired almost no nationally recognized experts, and their broader developer base is not made up of people with 20 years of experience, but rather a bunch of developers who have been coding for 2-4 years.


Something 20+ years of development experience has taught me is that developers vary a lot. People who went to Ivy League schools can be amazing and they can be awful, and the same is true for every other dev who went to a 'lesser' college or didn't get a degree at all. Certificates don't tell you much. Output is what matters. You should judge Protocol Labs on what they build rather than where their employees went to school.


I do, and I expected much more, unfortunately. The daemon still has problems with rampant memory usage, chews through two CPUs during normal operation, the pinning API is abysmal, DHT resolution is so problematic that you routinely need to connect two nodes directly together so they can discover each other's files, etc.

I don't want to say they're bad at what they're doing, as I don't know how hard it is, but at least a better pinning implementation seems easy to do and relatively useful. Currently, to pin something, the command will block for hours/days or even forever if the content is large and rare. If my torrent client worked that way, it just wouldn't get used.
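For reference, the manual workaround looks something like this (address and peer ID are placeholders):

    # on node A, list its addresses and peer ID
    ipfs id
    # on node B, dial node A directly so the two can exchange content
    ipfs swarm connect /ip4/203.0.113.7/tcp/4001/ipfs/QmPeerIdOfNodeA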


I agree. Personally I've had similar thoughts, but instead of viewing it as a signal of their lack of talent, I viewed it, sadly, as a signal of their lack of attention to IPFS.

From a largely outsiders perspective, it has felt like FileCoin was the thing they wanted to do. Either that, or they just don't have enough bandwidth to take on all of these projects.

Regardless, it doesn't feel to me as if IPFS has had the attention it deserves from ProtocolLabs. I love the idea of FileCoin, but I'd at least like ProtocolLabs to get one thing finished before they chase the next shiny project. Hard to imagine the "next internet" being created with such floaty attention spans.


Hey there! I’m one of the first developers to work on IPFS, and was the maintainer of the project for around three years. I’ve since been pulled into working on Filecoin, which has definitely taken away from the time I can spend pushing IPFS forward. That said, the IPFS project has a really solid team and OSS community still pushing it forward, and there is an entire team working on just libp2p (the network stack used by IPFS and other projects like Livepeer, Ethereum 2.0, OpenBazaar, and more).

There’s been a ton of progress made recently, and the network has grown significantly. We’ve gone from nearly 30k peers last year (this is hard to estimate accurately, but best guess) to around 400k peers on the network today — roughly 13x growth. This massive influx in usage has taken up a lot of our attention: things catching fire that we didn’t expect to catch fire, and other bugs popping up that need to be fixed now. All of this takes attention away from things like docs and website publishing, but we recognize those are really important areas we need to make time for and step up on.

For example, there’s an overhaul of the ipfs-cluster docs: https://github.com/ipfs/ipfs-cluster-website/issues/89#issue...

We also put a lot of effort into our new docs website: https://docs.ipfs.io/

And we’ve been pushing a lot of specs for everything we build:

https://github.com/libp2p/specs

https://github.com/ipfs/specs

https://github.com/filecoin-project/specs

https://github.com/ipld/specs


I feel it's the other way around: the goal has always been IPFS. However, I thought it would quickly become apparent that, with no incentive structure for doing the actual hosting at any significant scale, it is a dead-in-the-water project. The idea is great, but it comes with multiple pain points that exist at the same time, and honestly, jumping into the ICO craze at the point they did was a smart business decision, because IMO something like Filecoin is absolutely necessary.

I mean, for success you need to achieve ample storage capacity, strong financial incentives that strengthen the network, and scaling. The major issues here are that, following the ICO, they need to release something that matches what investors wanted to some degree (more focus on Filecoin), and that a lot of the scaling/tooling needs to come from outside Protocol Labs. They are relying heavily on Ethereum scaling successfully in a way that is friendly to the Filecoin protocol, and how to do that properly is an issue that has been dragging on for at least the last two years.


This is exactly the impression I've formed as well. I'm sure they have super smart people (and I've talked to many of them), it just doesn't seem they're interested enough in improving IPFS.

The "canary in the coal mine" for me is that pinning API ticket, which has been open for years and amounts to "please put an async UI over pinning". If something that simple and useful doesn't get fixed quickly, I'm not optimistic about the future of the project.

It's too bad, because I think it's an extremely useful idea.


Doing this right is not simple. It requires an entire task management system internally, which (as you know) I've sketched up here: https://github.com/ipfs/go-ipfs/issues/3114

Sure, we could hack this together quickly, but we're trying hard to avoid adding technical debt at this point. Adding every feature requested would put us in a bad spot.

Prioritizing this over the countless other things people continually ask for is hard, but we hear you. We welcome pull requests, and since you needed this for the service you were building, helping us out here would be fantastic.


I hear you, it's just that the current way the daemon does pinning is unacceptable. For example, I can't tell my customers how much of their file I've pinned, or, in some cases, if I've pinned it at all.

Maybe I'm in a small minority of people who are interested in this feature, but how do people pin things without it? Do they just `screen ipfs pin <hash>` and leave that terminal there forever? Wouldn't it be in the project's best interest to make pinning work well?

Unfortunately, I don't know Go at all, or I would at least take a stab at this...


So, at a high level you want more than just backgrounding the pin task; you also need to be able to tell what the progress is, periodically. This probably means we need to do something with the api endpoint output (I was initially thinking 'just throw it away and mark completion').

> in some cases, if I've pinned it at all.

What cases would those be? If the file is pinned it should always show up in an `ipfs pin ls $CID`.

In the short term, I assume your service is a server side app that is making requests to the ipfs node, right? You could set some background task in that application that just waits on the `ipfs pin add --progress` call and keeps track of the progress for that pin, that way, when a customer queries it, and its in progress, you can return that information. Agree this should be built into ipfs at this point, but that seems like a reasonable workaround for now.
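A crude version of that workaround, done at the shell level rather than inside the app (hash is a placeholder):

    # kick off the pin in the background and capture the --progress output
    nohup ipfs pin add --progress QmYourContentHash > pin.log 2>&1 &
    # poll the last line of the log for progress, and confirm completion with
    ipfs pin ls QmYourContentHash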


> you also need to be able to tell what the progress is

That would be nice, but not really required.

> I assume your service is a server side app that is making requests to the ipfs node, right?

Yes, exactly.

> You could set some background task in that application that just waits on the `ipfs pin add --progress` call

I did that; the problem is that you can't have too many pinning calls waiting, due to resource constraints. I can't just have 100k requests open and waiting: they time out, I need to restart the server, things happen. At that point, every single task needs to fire up again and make an API call to the server.

I did the same thing with batching, but then long-running pins (files not in the network) would block and new files that were on the network wouldn't work. This happens with IPFS Cluster too, I just don't have to write my own code to worry about it.

Believe me, I've spent lots of time on working around this, and the workaround can only be so good with the current system.


The problem is they have both: a lack of interest in IPFS, and $300 million of weight to keep control over IPFS development. So doing all the necessary work to give up control and encourage quality independent implementations is out of the question.


I don’t even look at qualifications any more when hiring senior engineers (junior is a different matter but I don’t make that the deciding factor) simply because some of my best engineers have turned out to be ones without a degree.

At the end of the day, a good hire is always going to be a roll of the dice. The interview process should stack the odds in your favour, but there are never any guarantees. Which is why good management is so important: good management can bring excellence out of average engineers and identify the dead weight before they become toxic to the team.


Sure, output matters, I agree. I'm just saying that only about one programmer working in their engineering org has an impressive track record. And of course, self-taught developers at random companies can be amazing, but for a company "evolving the web" with 300 million in the bank, they have hired almost no nationally recognized experts, and their broader developer base is not made up of people with 20 years of experience, but rather a bunch of developers who have been coding for 2-3 years.


There are a number of companies that have started that haven’t had “distinguished engineers from ex-FAANG companies”. And honestly, the quality of an engineer good at Leetcode is way different than the self taught Eastern European engineer who knows obscure protocol stacks up and down.

It’s experience and capability that matters, not being tied to distinguished companies or overpriced fancy degrees.


Now scroll down to the Engineering section on the DFINITY team page (a team in the same space with similarly deep pockets):

https://dfinity.org/team


That looks really interesting, but apparently they are terrible at picking names. Their website talks about ICP (Internet Computer Protocol). Searching for this leads to the identically abbreviated (and apparently already well established) Internet Cache Protocol (https://en.wikipedia.org/wiki/Internet_Cache_Protocol).


Exactly, a lot of the other replies to my comments didn't quite get what I was getting at. The DFINITY section is more what I would expect to see, and it's puzzling why IPFS is staffed the way it is.


Hi! I'm a contractor who works on go-ipfs and have been for a year. I have no PhD, but I have been programming for a couple of decades across all kinds of products.

Rather than wade into the self-taught vs pedigree debate, I want to point out that based on my experience working on the project, I think technical challenges are actually not the biggest challenges.

Much of the challenge has to do with process -- managing a huge issue tracker, tracking a giant project that lives in multiple repos with many interlocking components, benchmarking a system that is quite hard to benchmark, developing an effective release process, developing systematized ways to collect community feedback, etc.

IPFS needs solid product focus and team direction. From what I’ve seen, the folks at Protocol get this. That’s why they have started focusing on achievable priorities like hosting package managers. The technical problems are complicated but not insurmountable -- not beyond the scope of what many talented senior programmers can solve. I think IPFS would be improved by strong team leadership and process, not more whiz kids, and I see Protocol taking steps to ensure this happens.


Your argument is that IPFS would be a better product had they hired those distinguished ex-FAANG engineers instead of obscure developers? The article outlines mostly UX and product decision problems. It doesn't seem to me that their engineering teams are holding them back.


UX and product decisions for software tooling are usually made by engineering teams. Of course I'd also like to take offense at the ex-FAANG stigma, but I think it's not unfair to assume that having engineers with experience working on high-quality, high-polish products would increase the chance of producing a high-quality, high-polish product.


They might well have great engineers; this view seems somewhat myopic. I've known principal engineers at big tech firms who weren't that great, and self-taught people with no degree at all who massively, massively outperformed the PhDs on the same team.

Your criteria are off.


This old trope again?

I’m a hiring manager at a pretty significant, albeit non-FAANG, IT firm, and I’ve seen good and bad applicants from Google et al, just like anywhere else. I’ve since come to the conclusion that many skilled engineers simply can’t be bothered to go through the hiring process that FAANG companies put you through just for a little prestige.

Plus, in my experience, the difference between skilled engineers matters less than the difference made by hiring for team fit. You can stick a dozen of the world's most prestigious engineers in a room and still not produce anything better than if you put in a dozen good engineers who are united by a common vision and good team fit.

But don’t just take my word for it. Do a search on HN for comments from other managers - even ex-FAANG managers have, on occasion, echoed the same sentiment.


That's a good observation that explains the state of things with IPFS. What they need is a few very senior engineers from, say, Spanner, who know how to do this stuff. It will be ludicrously expensive to hire them away from Google (a million a year per person just in compensation), and it will cost them a good share of the company (for the risk), but at least they will start making progress. And no, nobody can be self-taught to match the experience you get on Spanner-like projects, just like nobody can teach themselves to build Boeing-like airplanes.


Hi, I made this submission. If you want a take on the competition (Dat:// and Beaker Browser) I made another submission here:

https://news.ycombinator.com/item?id=20162199

Direct link:

https://www.kickscondor.com/on-dat/


Indeed IPFS performs very poorly in almost every way; it is extremely resource intensive, and generally unable to find objects which are not extremely well known.


A smile comes to my face whenever I see IPFS on HN...

The author could have gone a step further and used ENS. IPFS+ENS is becoming a common duo. Web browsers can't do ipfs://, but they can do https://insertipfsgateway/ipfs/Qm... With ENS, .eth domains can resolve to ipfs://, and MetaMask picks those up and turns them into gateway links.

There's a list of .eth domains & other ENS+IPFS info here:

https://gateway.ipfs.io/ipfs/QmUJKsA1FQRo4rGKJ9WAXNnv6o6HXy4NoJiWH97gPZUddV/ens+ipfs/list-of_ENSIPFS-websites.html



