> The "permanent web" has now just become "the most popular pinned web" like torrents did. :(
That'd be great! Torrents are great for keeping content alive, and if a torrent isn't alive (has no seeds), someone who still has the content can just start their client, add it again, and everything is still referenced the same way. I remember a while back on HN, the oldest torrent was discussed, but unfortunately it didn't have any seeds. The original author of the content saw this and started seeding it again, allowing people to download it without having to pass around new links/IDs. I think it was a Matrix fan film called "The Fanimatrix", but I can't find the comment anymore. Torrents are great for making sure content stays available for a long time, something the current web sucks at.
Imagine the same for websites. Rather than Yahoo deciding for the world that Geocities shouldn't be online anymore, people could decide to help host it, and no links would change. That's a future I can get behind! It would make recovering websites super easy, compared to our current approach (hoping that the Internet Archive has previously archived the site).
> I really think we need to expand our use cases to include mutable and immutable content.
> People simply are not going to switch to the dweb if they have to deal with this complexity.
Agree with both points! And I think I've specifically told you this before here on HN: IPFS does support dynamic content out of the box with pubsub and raw streams to other peers. However, the basic building blocks IPFS provides aren't really meant to replace the currently easy experience of web development.
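As a rough sketch of what that looks like with js-ipfs (the ipfs-core package) — the exact API has shifted between versions, pubsub may need to be enabled in the node's config depending on the version, and the topic name below is made up:

    import { create } from 'ipfs-core'

    async function main() {
      const node = await create()                      // start an in-process IPFS node
      const topic = 'my-dynamic-site-updates'          // hypothetical topic name

      // Subscribers receive every message published to the topic by connected peers.
      await node.pubsub.subscribe(topic, (msg) => {
        console.log(new TextDecoder().decode(msg.data))
      })

      // Publishers push updates (e.g. "here is the new root hash") to everyone listening.
      await node.pubsub.publish(topic, new TextEncoder().encode('new root: Qm...'))
    }

    main()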
If anything, torrents/IPFS lend themselves naturally to the way human culture has always been disseminated: if something is popular and lasting, it gets passed on for generations, while trivial stuff that literally nobody cares about is naturally lost. Since we can't expect the huge amounts of data created every day to be archived forever, this is a very natural way to keep only the most interesting parts.
I guess it's eventually distributed, but still centralized in terms of using IPFS and having to go through gateways.
A truly distributed means of hosting is everyone hosting their own websites from their home connections. It's simpler and no central authorities are needed. Home connections are plenty fast these days. And the best part is that you get tons of storage space and you don't have to 'upload' anywhere.
You can also make real websites instead of 'single page application' javascript monsters.
Sure, if you live somewhere where you can get 1 gigabit fiber, maybe. But I know Comcast/Xfinity throttles the crap out of your upload. Even if you get their 1 gigabit plan, your upload is limited to 35 Mbps [0].
To put that into perspective: if you have a 1 megabyte image that ends up getting linked on Reddit and 100,000 people try to download it in an hour, you'd need over 200 Mbps of upload. Very, very few home users have that kind of connection.
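If you want to check that figure, the back-of-the-envelope math (using the same example numbers as above):

    // Rough upload-bandwidth estimate for serving a file from home.
    const fileSizeMB = 1              // 1 MB image (the example above)
    const requestsPerHour = 100_000   // hypothetical Reddit spike

    const totalMB = fileSizeMB * requestsPerHour   // 100,000 MB ≈ 100 GB in one hour
    const MBps = totalMB / 3600                    // ≈ 27.8 MB/s sustained
    const Mbps = MBps * 8                          // ≈ 222 Mbps of upload

    console.log(`~${Math.round(Mbps)} Mbps upload needed`)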
This isn't how IPFS works FWIW. Your upstream connection is only needed to feed the small number of fetches that aren't cached somewhere else in the network. Much the same as a traditional CDN in that respect.
> But I know Comcast/Xfinity throttles the crap out of your upload. Even if you get their 1 gigabit plan, your upload is limited to 35 Mbps [0].
Odd that you chose their second-highest tier to fit your narrative instead of just jumping to their highest tier, which is actually symmetrical 2 Gbps up and down fiber.
Are you talking about the "gigabit pro" plan[1]? That's $300/mo with a 2-year minimum subscription. That clearly is not a residential plan. You might as well just pay for professional hosting at that point.
That's exactly what it's marketed as. There are also plenty of symmetrical fiber services with hundreds of Mbps of upload from Frontier, Verizon, AT&T, and others.
Just because it's marketed as a residential line doesn't mean it's common at all. Besides, that line, or even the gigabit line, isn't available in a very large part of the market.
> There are also plenty of symmetrical fiber services with hundreds of Mbps of upload from Frontier, Verizon, AT&T, and others.
Most people don't have a lot of options.
The options in my neighborhood are either 150 Mbps down / 5 Mbps up with Comcast, or 35 Mbps symmetrical with Frontier.
If you have more than two options, and one of them is gigabit, you're extremely lucky. A considerable chunk of America only has one option, and the vast majority doesn't have gigabit down, let alone gigabit up.
> Just because it's marketed as a residential line doesn't mean it's common at all
Just because something is expensive, and therefore unpopular with the average consumer, does not mean the product itself does not exist. If we're talking about residential ISP speeds, then all residential ISPs and their respective tiers should be included.
Otherwise you're just picking and choosing data points to fit a narrative, which is deceptive.
In this case you're still beholden to your ISP and whoever controls them. True decentralisation would be something like mesh networks, making use of only local device capabilities.
If your ISP is not a dumb pipe then everything else is already lost.
Mesh networks only work if you pay big money for rental space on building roofs and existing towers. That money buys height, and height buys line of sight, which is why cell networks work. You might get some non-profit going in areas especially conducive to it (i.e., high-density cities where residents have access to their roofs, or coastal areas with large mountains).
> truly distributed means of hosting is everyone hosting their own websites
That means "self-hosted". Distributed hosting means you can't host it in one place. I agree though, that one might consider it as a form of decentralized hosting if everybody hosts their own stuff, which you also totally can now that we have cheap computing power like raspis.
Yeah, that's unfortunate. Luckily there are also many people on HN who live in more developed countries. ;-)
And on a more serious note: even in your situation a self-hosted solution might give you more freedom. Just don't host it at home; instead, talk to universities, hacker/maker clubs and companies near you and host it behind their internet connection. Usually you only need one member of such an organisation to support you, and then you can simply do it.
There are some open source router OS projects. I bet they could be convinced to consider such a suggestion with some pull requests, emails and/or a few sponsoring $$$.
IPFS is "truly decentralized" in the sense I think you mean. The publisher runs the origin node. It's the http gateway that isn't. Except it kind of is in that you can use any IPFS http gateway. Ideally clients would use native IPFS, not http, but turnkey native client support in browsers is not here today.
> Is that hash at least decoupled from the content?
It sure isn't! IPFS is a content-addressed network. If you want to update your site without changing its address, you'll need a layer of indirection, such as IPNS, which lets a node on the network redirect ipns://<my key> to an updatable content hash. This feature has been immature for a while now.
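Roughly what that indirection looks like with js-ipfs (ipfs-core) — treat the exact calls as illustrative, since the API has changed across versions:

    import { create } from 'ipfs-core'

    async function updateSite() {
      const node = await create()

      // Add the new version of the site; this CID changes every time the content changes.
      const { cid } = await node.add('<html>my site, version 2</html>')

      // Point this node's IPNS name (derived from its key) at the new CID.
      // The /ipns/<name> address stays stable even though the underlying hash moved.
      const { name, value } = await node.name.publish(cid)
      console.log(`ipns://${name} -> ${value}`)
    }

    updateSite()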
In my experience, Dat is the more mature option, though it seems to lack an official http gateway[0]. Combined with Beaker, though, it makes web authoring an absolute breeze.
I really like Dat. Too bad hypercore doesn’t really have any way for MULTIPLE users to write to the same file, or even a way to recover from a single user making conflicting edits at the same time. If I were to add a pluggable consensus module to Dat, where would it go?
I'm currently toying with an app for play-by-post D&D campaigns on Dat. One user creates a Dat archive to hold the game data, then each player has their own archive for their character. The game archive holds references to each player's public key, and the application takes each character feed and merges them to present a seamless discussion.
This feels like contributing to the same document, and yet each user is in total control of their data (almost total control, because they cannot decide to erase it if other people are already seeding it). I also love the scaling possibilities it implies: when each user hosts their own data, the number of users doesn't matter (provided they all interact in reasonably small groups, of course).
EDIT: oh, by the way, I can create archives on the fly because I'm using the Beaker browser's API. I don't know if that's possible with Dat by itself.
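For what it's worth, creating an archive on the fly looks roughly like this with the DatArchive API Beaker exposes to pages (the title, description and path below are made up for illustration, not my actual code):

    // Runs inside Beaker, where DatArchive is a global provided by the browser.
    declare const DatArchive: any

    async function createCharacterArchive() {
      const archive = await DatArchive.create({
        title: 'My character',
        description: 'Feed for one player in the campaign',
      })

      // Each player's posts live in their own archive, which only they can write to.
      await archive.writeFile('/feed.json', JSON.stringify([]))

      console.log('share this URL with the game archive:', archive.url)
    }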
So how do you build consensus if everyone is responsible for hosting their own part?
Suppose I wanted a group activity. Who enforces any rules? What if someone deletes something?
How does a personal swarm recover from the same user making two conflicting edits? Is it just totally borked from then on?
And about consensus... how, in Dat, can I have the others reject an operation if it violates some rule? Like if I wanted to double-spend a token, for instance. A single, non-divisible token, mind you.
Note: I'll only speak for my toy app, not pretending to be an expert of Dat, obviously.
I don't try to build consensus (if by that we mean reconciling possibly conflicting entries, like Stellar does, for example), I just merge feeds. For now, it's as simple as it gets: my feeds (one in each character's archive) are JSON arrays of objects, each containing a timestamp and a message, and I just concat them and sort by timestamp. Of course, this won't fly for long; I'll soon have to split the data sources into chunks and process them as streams to avoid loading everything into memory at once, but for now it's good enough to explore what I can do with the protocol.
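In code, the merge step is basically just this (field names are illustrative, not a spec):

    // Each character archive exposes its feed as a JSON array of timestamped posts.
    interface Post {
      timestamp: number   // e.g. Date.now() when the message was written
      message: string
      author: string      // which character archive it came from
    }

    // Concatenate every player's feed and order it into one conversation.
    function mergeFeeds(feeds: Post[][]): Post[] {
      return feeds.flat().sort((a, b) => a.timestamp - b.timestamp)
    }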
I have the feeling your questions could be summed up as: how do you implement authority if there is no central control of the data? (My apologies if I got that wrong and am putting words in your mouth.) The answer is: you don't.
With a standard app on a server, if you don't enforce data integrity, one user can potentially break the application for all users. With an app like the one I'm building, a user only shares their data with a small group of friends and can only affect them. If they corrupt their data, the app is only broken for them. "Congratulations, you broke your toy. Now what?" (Edit: in my app's case, removing that person from the group would be enough to fix the data.)
Of course, this seriously reduces the scope of what kind of app you can build (forget banking apps, or anything where anonymous people interact with one another at a public scale). I'm perfectly fine with that: I can build my usual small tool services without pondering whether it's worth maintaining, renting a domain name and renting a server. I'm not trying to build Uber or Bitcoin.
Please note that I'm not saying a consensus protocol and security can't be implemented with Dat, I'm just saying it doesn't matter for what I'm currently building (which is my only experience with Dat so far).
Not asking how to implement authority, but rather how to implement consensus. In other words, if A pays B, how does B know that the transaction really committed? If the data is there now, I need it to be there later, too. If everyone is just writing to their own Dat, how do I know they won't just "forget" that they paid me?
If you want to implement payment, that's clearly the wrong tool for the job :) (although, it could easily leverage cryptocurrency networks)
Personally, I see the dweb as the internet of the early days, when we were all writing blogs and publishing tools just because we found it cool. If Dat makes it difficult for big players to launch commercial products on it, I'd call that a feature (though it's too early to say if that's the case). They already have the web for that.
>That is significantly more work than posting a torrent to a tracker, with significantly less discoverability.
Is it?
Install a torrent client / IPFS client, create a torrent / add your files to IPFS, then tell other people the magnet link / IPFS link. Optionally, other people can open your magnet link / IPFS link with a web gateway site instead of installing software. (I'm not personally aware of torrent web gateways, but I assume some exist. The fact that they're popular for IPFS seems like a point for IPFS.)
>Is that hash at least decoupled from the content?
Content in both torrent magnet links and ipfs links is content-addressable. They're both based on a hash of the content. Distributing updated versions (of a torrent, or an ipfs link) is a task for a different layer.
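A quick way to see what "based on a hash of the content" means in practice, sketched with js-ipfs (ipfs-core; the exact API varies by version): change one byte and you get a different address.

    import { create } from 'ipfs-core'

    async function demo() {
      const node = await create()

      const v1 = await node.add('hello dweb')
      const v2 = await node.add('hello dweb!')   // one extra byte

      console.log(v1.cid.toString())   // one hash...
      console.log(v2.cid.toString())   // ...and a completely different one
    }

    demo()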
I appreciate demos that show off the minimal effort path. It's a good way to take measure of a stack. If there's a quick golden path, you can tell the stack developers are thinking about how to make their users (app developers) more productive.
HN mods generally do a good job of banning/retitling tabloid click-bait titles like "X reasons to do Y", but this is definitely another form of title clickbait that needs to be added to that list.
Instead of using IPFS, which currently needs gateways, you can get a fully distributed site using https://beakerbrowser.com/ It's based on https://datproject.org/ which is also used to archive and distribute scientific data.
> If you want to keep this file available, even though people might not constantly open it, you need to pin it to an IPFS gateway. We can help you with this, just send us the hash of your webpage, and we make sure it stays online. And since you are sending us an email anyway, maybe include some feedback/ideas for the further improvement of Dweb.page
Email? For every update of the website? Sounds to me like the opposite of convenient.
> The best way to solve this is to create a single page application (SPA) and put all the code into one HTML file. This way your webpage loads much faster on the distributed web and you don’t have any issues with links.
>The best way to solve this is to create a single page application
Another way (arguably more convenient than redesigning the site) is to make a flattened copy of the site's files using a tool such as https://www.httrack.com/ so that all the component files (CSS, JS, etc.) end up in the same IPFS directory and are hence accessed under the same hash.
The email is just a first solution, to test it and also to get some feedback. It's an open source project that just started: https://github.com/PACTCare/Dweb.page
I like to see where progress is at on the decentralized web, but so far much of it is written in expert-talk. Even a glossary on one side of a (mostly empty) page would be way better than expecting non-experts to grok it fully.
Experimenting with new tech. Helping activists by building (and supporting) technology that protects them. Sharing information critical of authoritarian regimes. Leaking documents or photos that provide evidence of official corruption.
I really think we need to expand our use cases to include mutable and immutable content.
People simply are not going to switch to the dweb if they have to deal with this complexity.
I did a talk on this, and how we can fix it, at the Internet Archive's Decentralized Web Summit a few months ago, alongside a bunch of others in the dWeb space like Dat, SSB, etc. (https://www.decentralizedweb.net/videos/talk-better-algorith...).