Show HN: Hearth – A Dropbox-like, IPFS-powered personal website publisher (eternum.io)
315 points by stavros on March 5, 2018 | 104 comments



Why not use Beaker[0] and achieve the same thing, but on Linux and Windows too?

It has a great community too

[0](http://beakerbrowser.com/)


It's personal preference; some people will prefer Beaker. We personally liked the "drag some files to a folder and that's it" UX.

By the way, I'll mention this here because it isn't evident on the Hearth page right now: You can get your own username on Eternum for free, and it will redirect to the latest hash that Hearth publishes (i.e. the latest version of your website) automatically.

You can do this even if you don't use Hearth (we have an API for it), here's an example with my username:

https://www.eternum.io/user/stavros/


> we personally liked the "drag some files to a folder and that's it" UX.

Beaker does this...

In terms of getting a free username on an HTTP website, there's https://hashbase.io/

Disclaimer: I haven't used Hearth, so I can't compare, but Beaker doesn't seem like it could be much easier.


Am I missing something? Beaker requires you to use their browser? Not really apples to apples.


Beaker is a browser that supports http:// and dat:// - you can:

- use Beaker to publish/"host" dat:// websites (as you can use Hearth to publish ipfs websites)

- use Beaker or any dat:// client to browse dat:// websites

- use any browser to browse dat:// websites via http through hashbase.io (similar to eternum.io)
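
If it helps, here's roughly what that looks like with the dat CLI (a sketch based on the npm "dat" package; exact commands and flags can vary by version):

    npm install -g dat              # install the dat CLI
    cd my-site && dat share         # seed this folder and print its dat:// link
    dat clone dat://<key> my-copy   # fetch a site from other peers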


On the CLI side, you can also browse/explore dat content with https://github.com/millette/dat-shell


OK, but IPFS sites are just HTTP/S via a gateway. So any browser works. Indeed, there are IPFS gateways on Tor onion services.


The last point mentioned has that too: hashbase.io is an HTTPS gateway for dat://


Oh I see, thanks. I'm not very familiar with Dat/Beaker, but I'll try it out some more, as I missed that mechanism.


Quick demo if you don't want to install: https://www.youtube.com/watch?v=wwLaEyKGc90


That website gave me a good chuckle. Was that your actual website from back when, or did you just "take inspiration" from the old days?


Haha, no, I just made that yesterday as a quick test of the redirection functionality (it uses Bootstrap), although I did also find my actual old website from when I was 16:

https://anonymoussoftware.stavros.io/

Ahh, iframes and tables, so cutting edge :)


> we personally liked the "drag some files to a folder and that's it" UX.

How much extra work would it have been to add a "drag some files to a folder and run publish_website" interface, for Linux and Windows users? Hardly any, given that the functionality is already there.

Not only that, it would make the system more easily scriptable. For example, it is probably easier on macOS to get a Makefile to run an executable than to drag files to a folder.


> For example, it is probably easier on macOS to get a Makefile to run an executable than to drag files to a folder.

Easier for whom?


The person writing the makefile. Make can easily run executables. Getting it to simulate GUI movements is, I expect, possible, but harder to do.


Fun fact: Beaker used to support IPFS[0] and the IPFS creator used to work with the Dat project[1].

[0]https://github.com/beakerbrowser/beaker/issues/480#issuecomm...

[1]https://github.com/ipfs/faq/issues/119#issuecomment-21827839...


I spent several years at Cargo Collective (https://cargocollective.com), a publishing platform for designers, musicians, architects, etc., pre-Squarespace, Wix, and all that. I'm now days away from releasing the public beta of Enoki, an experimental publishing tool for the peer-to-peer web, currently running inside Beaker Browser and built on Dat.

Beta registration was opened last week, although it is still trying to fly a bit below the radar :)

https://enoki.site


Or BlockStack[0] even

[0](https://blockstack.org)


Or add realtime updates to IPFS, Beaker, or Blockstack using a P2P/decentralized Firebase: https://github.com/amark/gun

IPFS + Firebase = Lovechild ^


I remember reading about GUN a few years ago [1], and there was a lot of (apparently) valid criticism. Do you know how much of that is still valid today?

[1]: https://news.ycombinator.com/item?id=9076558


If you intend to use GUN for banking or any globally consistent (CP) system, then yes.

However, for everything else, the criticisms no longer apply.

GUN is a CRDT-based AP (highly available, partition tolerant), strongly eventually consistent system, and therefore should not be used for banking-like systems.

I hate the jargon, so sorry to reply with it, but it allows for a quick/concise summary of GUN's tradeoffs. Some more resources below:

- Cartoon explainer of academic stuff: https://gun.eco/distributed/matters.html

- CAP Theorem tradeoffs: https://gun.eco/docs/CAP-Theorem

- How to implement the CRDT and how it solves for Split-Brain failures: https://gun.eco/docs/Conflict-Resolution-with-Guns

There are a bunch of other resources, but I'm more than happy to reply/answer any specific questions if you have them. Cheers!
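
If a concrete snippet helps, here's a minimal sketch of the JS API (assuming gun.js is loaded; the relay peer URL is a made-up placeholder):

    // sync a graph node in realtime across peers
    var gun = Gun(['https://my-relay.example.com/gun']); // hypothetical relay peer
    gun.get('profile').put({name: 'alice'});             // write/merge data into the graph
    gun.get('profile').on(function (data) {
      console.log(data.name); // fires now and on every future update
    });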


That's exciting, thanks a lot! I was just considering using a graph database for a future side project.

I also read Neo4j was an inspiration. Is there any "human-made" comparison between the two? All I could find was "A vs. B" comparison tables.

Thanks again!


@edjroot I don't know if this is any different from what you already saw, but the biggest differences are:

- Neo4j is Master-Slave, GUN is Master-Master (or Multi-Master, or Masterless). Basically, GUN is P2P/decentralized, Neo4j is centralized.

- GUN has realtime updates/sync built in, Neo4j does not.

- GUN has offline-first features, Neo4j does not.

Other considerations:

- Neo4j has its own query language; GUN has an FRP (Functional Reactive Programming) based JS API.

- Neo4j is over a decade old; GUN has only been around since late 2014.

- Neo4j is more monolithic, GUN is more microservice-y.


Wow, this is great. Thanks for sharing this.


Ok, but to access a site published on Beaker you must use the Beaker browser, right?


If it's published on Hashbase, you can access it with any HTTP client (though this of course somewhat defeats the point of it being p2p).

Otherwise, you will need a dat:// client. Beaker is the main web browser with dat:// support; other clients so far have more specialised uses, e.g. https://github.com/codeforscience/sciencefair


> this of course somewhat defeats the point of it being p2p

Beaker/Hashbase dev here. Not quite, imo. The key difference is that a service like Hashbase has no special authorship privileges (unlike Google Docs or Dropbox), and is instead an agnostic peer. We think that's an important distinction.


Would it be possible to incorporate dat:// support into existing browsers as a plugin?


Yes. Look for "Support for Decentralization Protocols":

https://blog.mozilla.org/addons/2018/01/26/extensions-firefo...


Cool. I wonder when we'll see a rise in websites that reference dat://

Is there a first HN reference to a valid dat:// website already?


Note that it's still going to require extensions


> though this of course somewhat defeats the point of it being p2p

Not really. We need ways of bridging the gap.


I guess what I meant was that using Hashbase exclusively over HTTP and never switching to dat:// clients would somewhat defeat the point, but yeah, it's a good bridge.


Yeah. This is where appropriating existing patterns like paywalls becomes interesting. Instead of requiring registration and a fee, simply view in a p2p-enabled environment and help offset bandwidth costs. Of course there is more to it than that, but money not spent on infrastructure is money in the pockets of content creators and publishers.


Hashbase seems to function as one of the nodes in the Dat network, so even if none of the peers are online, at least Hashbase is still there. That's what I gathered from a quick scan.

The Dat network seems interesting and is new to me, but like you said, needing a central node to cache the data seems like it breaks the p2p story.


Hashbase is a convenience feature, bridging the gap, so to speak, between p2p and centralised systems. It's not required at all for dat:// to work.

The main benefit is that users of http browsers can visit your dat:// site.


I find that Beaker doesn't cut through corporate/university firewalls as well as IPFS does. I'd be interested to hear others' experiences.


Why Google Drive if you can Dropbox? Or whatever.

This comment is just advertising for the Beaker browser. It could at least be open about it and say: "Also check out Beaker, with similar functionality".

It's not nice or intelligent to say "don't do this in a different way because we are already doing it in our own way".


Never having used or looked into Beaker, dat, ipfs, etc., I have some immediate questions about the security implications:

1) Someone accesses your "site" directly on your computer via ipfs/dat whatever it is. This is static content I guess? And it's like your computer is just running a static webserver (kinda/sorta? if not, or if it's something more dynamic, please advise). So, are there security implications here? Is there an attack surface that a client/viewer of your content could leverage?

2) Sort of opposite question as in 1. You are the client/viewer of the "site," directly connected to some dude's computer via ipfs/dat or whatever. Obviously I should use standard security measures and not just go clickity on anything/everything I see, but beyond that, are there any other security implications for the client? I could download a virus I'm sure by clicking or downloading a malicious file, but beyond that, could I get hacked by the target in a more dynamic way than that?

3) How do I "navigate" in ipfs/dat-land? How do I know what URL to go to, and if this "URL" might be a safe site, or like what the hell is this URL if it's just some kind of hash? Is there a "google" of ipfs/dat sites/content?

Stuff like this. :) Any feedback would be muchly appreciated!


1) Yep, exactly. There shouldn't be security implications, as long as the IPFS daemon is secure.

2) Again, as long as the IPFS daemon is secure, standard security practices apply. It's just static files.

3) It's all just static files. You get linked to from other files. Same as you navigate the rest of the web. It's not hard to wrap your head around it, really, just imagine that the web was all static files. That's it.
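
For example, publishing a static site with the ipfs CLI is roughly this (hash placeholders abbreviated):

    ipfs add -r ./my-site                # add every file; the last line printed
                                         # is the hash of the directory root
    ipfs name publish /ipfs/<dir-hash>   # optionally point your IPNS name at it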


For number 3: literally the web of 20 (30?) years ago.


We definitely had dynamic websites in 1998: guestbooks, forums, etc., mostly via CGI. But yes, it's very close to that.


I mean, sort of. We had some files that weren't expected to change, but nearly nothing was truly static.

Content-addressable content is truly unchanging. Which, imo, is something the modern web is lacking.


> 1) Someone accesses your "site" directly on your computer via ipfs/dat whatever it is. This is static content I guess? And it's like your computer is just running a static webserver (kinda/sorta?

It's like static content served by bittorrent. The content might be served directly by your computer, or by other nodes in the ipfs/dat network which are caching your content.

> So, are there security implications here? Is there an attack surface that a client/viewer of your content could leverage?

An attacker could try to exploit vulnerabilities in the IPFS/DAT network. This would be similar to exploiting vulnerabilities in bittorrent.

Or they could try to steal the private key used to publish your content, and publish malicious updates.

If access to your content is restricted (ie with a secret hash or url), an attacker might try to get access to it.

Overall the security model is quite straightforward, with a smaller attack surface than typical dynamic web stacks.

> 2) Sort of opposite question as in 1. You are the client/viewer of the "site," directly connected to some dude's computer via ipfs/dat or whatever. Obviously I should use standard security measures and not just go clickity on anything/everything I see, but beyond that, are there any other security implications for the client? I could download a virus I'm sure by clicking or downloading a malicious file, but beyond that, could I get hacked by the target in a more dynamic way than that?

It's pretty much the same security model as regular web content, as you mention. The only exception might be a vulnerability in the ipfs/dat code, since your computer might be running a local node.

> 3) How do I "navigate" in ipfs/dat-land? How do I know what URL to go to, and if this "URL" might be a safe site, or like what the hell is this URL if it's just some kind of hash? Is there a "google" of ipfs/dat sites/content?

IPFS has a naming system called IPNS, where addresses are based on a unique crypto keypair. Addresses look like `/ipns/<hash>`. You can use specialized ipfs software, like the `ipfs` cli, or navigate your regular browser to an http gateway. For example, with the official gateway you can navigate to `https://ipfs.io/ipns/<hash>`. IPFS also has a facility to easily alias a DNS domain to an IPNS key, so you can navigate to `/ipns/<mydomain.com>`.
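
To illustrate the DNS aliasing: you publish a "DNSLink" TXT record, something like the following (hash placeholder), and `/ipns/mydomain.com` then resolves through it:

    mydomain.com.  IN  TXT  "dnslink=/ipns/<hash>"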

Dat I believe has a similar system. The main difference is that, thanks to Beaker, you can bypass the gateway system - just navigate to `mydomain.com` and, if it's dat-enabled, the browser will lookup the corresponding key and fetch the content via dat instead of http.

There's no reason IPFS couldn't work the same way - you just need a browser to implement ipfs support directly in the same way. Personally I hope Beaker will find the time to support both!

> Is there a "google" of ipfs/dat sites/content?

Not that I know of. But I think it's inevitable!


> It's pretty much the same security model as regular web content, as you mention.

I can't say much about IPFS, but this is not quite true of Dat. The Beaker team is working on a complementary set of APIs to read and write to Dat archives, so you could have, say, a TiddlyWiki-like site that can update itself.

It's always been a big problem that the security of file:// URIs is unstandardized. I can publish a repo, for example, that includes some tests/runner.html. You could open it in Firefox to run the automated tests, but Chrome will choke on it because Chrome doesn't allow XHR for file:// URIs. The only way around this is to either put it online or run a local webserver (e.g., `python -m SimpleHTTPServer`) and access it through localhost. This is perverse. Dat is solving this in a way that's very similar to what would happen if file:// got first-class attention from the browser makers. If it's in a Dat, it doesn't actually matter if it's on your computer or hosted somewhere. All that matters is that you have the content.
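
To make that concrete, here's a rough sketch of the DatArchive API (it only works inside Beaker, and the file path is just an example):

    // inside an async function, on a page served over dat://
    var archive = new DatArchive(window.location.href); // the archive this page lives in
    var text = await archive.readFile('/notes.txt');    // read a file as utf8
    await archive.writeFile('/notes.txt', text + '\nupdated'); // allowed if you own the archive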

This might not be exciting to HN types who are comfortable and willing to go rent some DigitalOcean instance and put up with administering it, but this will have a huge impact on the long tail of businesses—both small and large—that are quietly chugging along on little more than Excel and email (maybe the odd SharePoint installation).

On the other hand, developers should be excited about this, too, because it means they can start building apps that are _actually_ serverless (compared to Amazon and Google's definition). We're back to where we were in the 90s: you can go write some neat client software and share it with me, without having to think about the headache of needing to run a server in perpetuity just so that it can continue being useful.


I completely agree that the file access APIs in Beaker are extremely cool, and solve a real developer problem.

From a security point of view, it does slightly expand the attack surface, since there could be a vulnerability in that code which isn't present in other browsers. But given the simplicity of the APIs, I think it's a very acceptable risk given the benefits.

The biggest downside IMO of these additional APIs is that they only work in Beaker. If I were the Beaker team I would prioritize making them available to mainstream browsers in some way: javascript library, browser extension, webassembly... It would really help adoption if they found a way to do that.


No, you still need to host it yourself or pay somebody else to, otherwise it still has a chance of disappearing... but I agree that it's much closer to the literal interpretation of "serverless" (I personally don't mind the term because it seems rather well defined at this point - an idiomatic label).


I'm not getting this. Fine if you want to go back to the early 90s, when sites were all static, but what does this model offer for dynamic sites, and in particular sites which are intended to scale beyond a single home-user box? How do I migrate my Rails app to dat/ipfs?


My understanding is that you can't - the use case isn't dynamic websites, it's static content. You could migrate your Angular/React SPA (whose front end is built from static content) and continue to use the REST / WS services you already have, but I'm not sure what the point of that would be.

I think it's better to think of this more as a data store than web sites/apps.


This is not true. See the remarks about the DatArchive APIs in my original comment. You can also look at Fritter[1], which is a prototype Twitter clone built as a Dat web app.

It's true that there are some types of web apps that exist on the traditional (HTTP) web but that cannot be made as a Dat app, but it's also true that there is a class of apps that could be built with Dat but that cannot be built on the traditional web.

1. dat://fritter.hashbase.io


IPFS is awesome. It's more like BitTorrent. You have hashes generated based on files. The same hash always points to the same file, provided at least one IPFS node is mirroring it. You don't automatically mirror content with IPFS unless you elect to.

When you update your site, the hash must change, and thus the link changes as well.
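
For example (hashes shortened to placeholders):

    $ echo "v1" > index.html && ipfs add index.html
    added <hash-v1> index.html
    $ echo "v2" > index.html && ipfs add index.html
    added <hash-v2> index.html    # different content, different hash, different link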

Security stuff... yeah, it can be used maliciously just like BitTorrent can, but IPFS hashes can also be accessed via ipfs.io/hash, and I haven't heard of any problems with malware or anything of the sort; things get filtered out pretty quickly, it seems.

IPFS is awesome. Read the white paper... It's such a nice middle layer for projects wanting to use a distributed file network without building out their own torrent network.


Fantastic. The web falling into the AmaGooBookSoft silos has been convenient, but it hasn't been good for censorship resistance or personal sovereignty. Making it push-button easy to publish to IPFS is a great way to return editorial control and ownership to individuals, without the downsides of needing to run and maintain servers.

I hope Hearth or someone like them expands beyond websites to self-publishing of store front ends, crypto payment, social feeds, real time video, and all the other services we expect AmaGooBookSoft to provide today. And to integrate Tor as necessary.


> I hope Hearth or someone like them expands beyond websites to self-publishing of store front ends, crypto payment, social feeds, real time video, and all the other services we expect AmaGooBookSoft to provide today. And to integrate Tor as necessary.

Agreed, but I hope we take super small baby steps and get what the above is accomplishing right the first time, without complicating the use cases just yet.


> but it hasn't been good for censorship resistance or personal sovereignty.

On this subject—

I'm going to be that guy this time because I haven't yet seen a good solution to the problem, and I ask with sincerity:

What happens when somebody decides they're going to start an illicit website/content of some kind? (let's avoid semantics because I'd rather not go down that path)

Everybody then has a forced part in hosting it, and there are just some subjects that do not have any redeeming aspects.

Is there a resolution built in for those scenarios?


I'm not sure why you'd say that everyone has a forced part in hosting it, since IPFS only pulls files you request to your computer.


After your comment I did a quick bit of reading and realized I'd made a fundamental mistake regarding how I thought I knew IPFS worked (vs. other decentralized file system formats).

The mistake I'd made was thinking it worked more or less similar to Bitcoin where every node holds the whole chain, where it really works more like torrents in that you download the set of files and then reseed. Thanks for pointing out the difference.

Because of my error, I guess the question changes. It sounds like it may be even more damning for those who regularly view illicit content, while at the same time making it harder to shut down caches of the same illicit content. I recognize this is already a problem with torrents, but then my question becomes: how do we not repeat the mistake?

Is there a mode of resolution outside chasing down every single bearer of the node to try and remove the content?


This is the same question that applies to any other medium, context, or means of interaction: who decides what is illicit?

Just because something CAN be used to commit a crime (say, the expectation of privacy in your own home), we don’t eliminate it to avoid that edge case.

Not all technology needs a law enforcement backdoor, in my view.


I agree with you. Not only that, but society progresses precisely because not all crime can be caught. Ask any interracial couple a century ago, or any gay person fifty years ago, or any marijuana smoker last year. These are all things that would never have become legal if every single person who practiced them was caught right away.


> medium, context, or means of interaction

The first analog that is at all comparable is hypertext and the web interface to the internet. This is much the same except it's harder to clean up.

Who decides what is illicit? Like I said, I'm not trying to start a semantics discussion. I'm not talking about morally debatable subjects. There are some blatantly disgusting things out there that have ruinous effects on people who've made no decision to take part. A platform like IPFS allows that to gain a new root, and deeper this time.

Hence my posed question.


When it comes to imposing your views on others, everything becomes a matter of morals; what you and I may find irredeemable is acceptable to others. There are many activities we engage in that have ruinous effects on others yet are accepted as necessary or even celebrated by society, so negative effect on others isn't a meaningful criterion for moderation.

Without determining who decides what is illicit and what isn't, talking about moderation is meaningless.

My expectation is that moderation will be done by law enforcement, as is the case with other protocols like BitTorrent.


> ... at the same time making it harder to shut down caches of the same illicit content. I recognize this is already a problem with torrents, but then my question becomes: how do we not repeat the mistake?

The mistake to avoid here would be trying to design censorship into an otherwise open and decentralized content distribution system.

Don't try to shut down the content. If someone was harmed in the making of the content, then use it as evidence to deal with the root of the issue; otherwise it's none of your concern.


> If someone was harmed in the making of the content, then use it as evidence to deal with the root of the issue; otherwise it's none of your concern.

I think you're missing the larger picture, though. It's never isolated to singular instances. If the content is decentralized, then so can the problem be.

You wouldn't say smallpox shouldn't be completely eradicated because "it never hurt me, so it's none of my concern."

Your solution is reactive. The problem with reactive solutions is that they don't solve the problem, they can only continually try to keep up with it.

Your comment inspires another question: what is everybody so concerned about having censored? It's not exactly a problem [in North America or online].


> If the content is decentralized, then so can the problem be.

Sure, but the content is not the problem, and eliminating the content does not eliminate the problem. The content is merely evidence of a problem. It may even help you locate the actual parties responsible. Dealing with the root cause would be harder if the content were more difficult to find.

> You wouldn't say smallpox shouldn't be completely eradicated because "it never hurt me, so it's none of my concern."

What I said was that "it's none of your concern" unless "someone was harmed in the making of the content". Smallpox actually hurts people. Protecting people from smallpox is a worthwhile concern. We do that mainly by making people resistant to smallpox so that it doesn't get a chance to infect people and spread. Censorship is not like vaccination, much less eradicating the smallpox virus; it's more like eliminating all knowledge of how smallpox works. It doesn't address the problem, makes actual solutions harder, and has far-ranging side effects.

> Your solution is reactive.

Taking down content you've identified subjectively as "undesirable" is a reactive response. Worse, it's reacting to and addressing only the surface symptoms, which means it isn't a solution to the real problem, that someone was harmed to create the content. You aren't going to stop people from being harmed in the real world by redacting the evidence.

On the other hand, designing a system to be resistant to censorship from the outset is a proactive approach to dealing with the threat of censorship.

> what is everybody so concerned about having censored? It's not exactly a problem [in North America or online].

You have cause and effect reversed. It is (mostly) not a problem in North America or online because everybody is so concerned about it. It is a problem in places where censorship is treated as a normal and routine practice—for example, China.


The content becomes a part of the problem when it is liable to spread the "disease" (to continue the allegory).

I get the feeling we're discussing this on differing levels of severity, here.

I'm not talking about dissenting political opinions or pseudoscience or conspiracy theories or counter-culture or drugs or anything silly like that, but things more impactful and corrosive to humanity.

The persistence of certain content that isn't proliferated as a study has [arguably] the effect of normalizing itself, and whatever it resulted from.

---

I've been avoiding semantics, but let's go with a relatively specific example:

A person was raped. It was recorded. The gruesome story was detailed as entertainment for a sick (yeah, letting my bias out here) group of fans and interested or curious parties. The guilty party is found and apprehended, sentenced, and punished according to local law. Justice was served according to society's mandate. We consider this good.

Now for the persistent content: it's in the hands of hundreds to tens or hundreds of thousands of people globally. Curious parties become interested parties, interested parties become fans, fans become culprits.

And all the while the victims get to live, not only with the experience, but with the knowledge that there are swathes of people enjoying the repeat viewing of the situation and the knowledge that it will never go away.

---

It's a big 'what if'. I could say the same about your position that if we don't gain total and open ability to publish whatever information whenever and have it persist forever that we will succumb to evil, oppressive forces. It's speculative.

I don't want to come across as if I'm coming down on the IPFS platform, or the interface Stavros ingeniously developed for it. Quite the opposite. I think that not enough thought goes into how powerful the platform really has the potential of being.

Software and network developers have a lot of inherent power to move large ideas with relatively few resources, and it should be recognized more often. And it should be discussed, honestly, whether one implementation or another is the best we can do before releasing it on society.

I'm of the opinion that ethical questions should be asked of engineers, and that empathy needs a larger role in the process of [not necessarily research, invention and development, but] implementation.

---

"We must always take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented."

Elie Wiesel

---

Feynman on a Buddhist saying:

“To every man is given the key to the gates of heaven. The same key opens the gates of hell.

And so it is with science.”


Comparing wrongthink to disease has a sorry history.

If you don't want to see bad things on IPFS, don't access them.

If you wish to force other people to not see them either, the process is similar to anything else on the Internet.

There's no need for moral panic.


Who's panicking?

I was posing the question as to how the community could deal with something horrific. At this point in time the power is in the hands of the community developing the technology. Surely there were lessons to learn from the implementation of the web.

I posed it again because each time there is no discussion of potential improvements, only responses crying censorship in pseudo-Orwellian lingo.

There's a line someplace. Nobody is preemptively stopped from producing snuff, but we don't support it by allowing it to be stocked at the local library in the name of anti-censorship.


Exactly, and this applies for all "levels of severity". There is no content so "bad" that censorship becomes an appropriate solution.


How is this not another silo? A big one called "IPFS"


You're making a category error. IPFS (and friends, like Dat) are analogous to HTTP. Would you call HTTP a silo? No; services like Facebook and Google Drive are silo-y, but HTTP is not. The people working on the P2P web are just trying to make some design choices at the protocol level that facilitate its use in a more decentralized way than the services that HTTP has encouraged to flourish.


Content under HTTP can be thought of as siloed if the user doesn't have an HTTP client. Same effect, no?


I don't think so. Everything would be a silo by that definition. A better way to think about it is that a silo is something whose owner can cut you off. HTTP is a protocol where ownership does not apply.


How does Hearth achieve the "decentralized" status? I assume Hearth is offering a small-volume pinning service on the side as a charity right now? If that's true, it's essentially no more "decentralized" than a normal website.

What am I missing?


IPFS


IPFS doesn't actually distribute hosting unless someone else goes through the trouble of doing pinning. Some gateways may keep local copies but they're also free to destroy local copies.

In the typical deployment where nodes refer to their own local IPFS services, it's often the case that the primary IPFS pin serves the majority of traffic.

If we're going to call that "distributed" then so are normal webpages with random caching options.


We call BitTorrent distributed, so I think IPFS counts too.


Bittorrent has a lot fewer single points of failure than IPFS.


Does it, though? I thought IPFS only has gateways because browsers don't speak it...

Which I guess makes you right.

Let me revise my opinion to say that if there was native browser support for IPFS then we could call it distributed like bittorrent.


No, because the hosting problem really isn't solved and FileCoin was something of a massive train wreck.

When the most charitable critique of your ICO is, "We don't think this is a scam because the people involved aren't famous scammers but it certainly looks scammy" you've failed to solve your problem.


What are the points of failure in IPFS vs BitTorrent? I'm curious now, because it seems like you're dissing IPFS because you dislike FileCoin (which I don't care for either).

Going back to the IPFS/BitTorrent analogy, I don't understand how the BitTorrent hosting problem is any more solved than the IPFS hosting problem.

I bring this up because my original point was that I think IPFS counts as distributed, because BitTorrent counts as distributed. So far you haven't really convinced me that they're all that different.


Many IPFS nodes only serve local clients, never serve remote clients or the larger network, and don't expect to be throttled by peers.

In short, IPFS is basically an alternative content addressed routing system that tends to have some slight endpoint caching.

Bittorrent at least heavily penalizes nodes that don't play ball rather quickly. So it rewards nodes that disseminate info as they acquire it, and make it trivial for storage to participate.

I don't see IPFS as solving distributed storage problems at all. Neither do the creators, which is why they started a related project called FileCoin to help with that. Too bad about that.

If you think they're not "all that different" then I refer you to the white papers. I'm disinclined to play a longer adversarial lecture game.


OK. Now I get where you're coming from. I didn't realize that FileCoin was started by the creators of IPFS. That is too bad. You really don't need crypto as a vehicle for paying someone to mirror content.

With regards to BitTorrent... I guess where you and I differ is that I think BitTorrent is just as broken as IPFS when it comes down to it. Or maybe the same type of broken, but less so than IPFS. While the seeding incentives certainly help BitTorrent, I don't think they fundamentally make it better than IPFS. I know that when I've torrented things, I've always seeded less than I leeched. Long-lived torrents are either a) pretty popular or b) kept alive intentionally by some people... which is what I would expect in IPFS too.

I will have to go read those white papers. If I'm reading your comment right, it sounds like you consider the brokenness of IPFS to be so much worse than that of BitTorrent that you think it deserves its own category.

Or perhaps that the expectations of IPFS, compared to its own brokenness, put it in its own category. I think of BitTorrent and IPFS as distributed distribution, not storage. And while BitTorrent people see BitTorrent as distributed distribution (caching), IPFS markets itself as distributed storage. That makes sense given the use cases IPFS seems to want to fill - it wants to replace static HTTP stuff, whereas BitTorrent tends to serve smaller, more canonical files, stuff that's already identical independent of the source, like ISOs or other things that people were generally sending each other directly anyway.

It took until your last comment for all of the above thoughts to congeal. And I'm sorry it came off as adversarial. Thanks for taking the time to respond.


> I guess where you and I differ is that I think BitTorrent is just as broken as IPFS when it comes down to it

With all due respect...

> I will have to go read those white papers.

Yes.


I need to leave my Mac running (or install IPFS on a VPS) so that if anyone wants to read my personal site, there is a source online that can provide the content, right?


Although not IPFS, Dat[1] is another protocol (associated with Beaker Browser, linked in another comment here), and there's a group trying to build the P2P web on that. In the case of Dat, you can create an account on Hashbase[2], and that will give you a permaseed for your content, even if your device is offline. And then, of course, if you have enough fans they're able to relay your content amongst themselves and any other interested parties who show up.

1. https://datproject.org

2. https://hashbase.io


That's the reasoning behind IPFS's Filecoin project: you pay a small fee for people to permanently pin your IPFS content, with incentives for storage providers to have space available and to duplicate data. It's set up so that storage providers themselves are 'miners' in the blockchain, which drives them to compete for the lowest possible storage price, benefiting users.


Not necessarily. IPFS is sort of like Bittorrent on steroids. You can think of yourself as the first seeder of your website, but as long as there are other network participants that have a full copy of your site, your home PC doesn't need to be online for people to access it.


Right, but when you want to publish something on the internet, wondering (and hoping) whether there is at least one copy out there in the ether is the last thing you want to have to worry about: your site has to be accessible all the time.

In practice, it does mean you need an origin available all the time, and it's either you providing it, or some other service you pay to do it. I don't think there's any magic.


Agreed. It does do some magic for you when your site gets posted on Hacker News, but in practice you still want to maintain at least one permanent host for anything semi-important.


Exactly. They do need to get a copy of your site first, though, and they do that by accessing it (so someone else needs to have visited your site first).

Otherwise, you can use https://www.eternum.io/ (which is a service we also made) to basically seed your site for a few cents a month.


I'm a layman in this stuff: can IPFS sites be accessible over HTTP?


Yes, through any of the publicly available IPFS gateways. ipfs.io is one of the more common official ones, but since they're all connected to the same network, in theory the content should be available from any of them. You can also host your own gateway (locally or publicly) and point your browser at it.

If you want to put DNS in front of it to mask the long IPFS content hashes, you can do that too; there's a good overview of the process in their examples:

https://ipfs.io/docs/examples/example-viewer/example#../webs...
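
Concretely, for a given content hash, either of these works (the second assumes a local node with `ipfs daemon` running; 8080 is the default gateway port):

    https://ipfs.io/ipfs/<hash>          # public gateway
    http://127.0.0.1:8080/ipfs/<hash>    # your own node's local gateway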


As much as email, Usenet, IRC, and Jabber/XMPP can be accessed over HTTP(S) - there are gateways.

(XMPP might be a special case here - I'm uncertain of the underlying transport for Jabber; it could be that JS+HTTP(S) is enough to simulate an actual Jabber client?)


IPFS is its own standard, so your homepage might look like ipfs://blog.iagovar.co

But since most browsers don't support IPFS natively yet there are IPFS gateways that allow you to access IPFS sites through them.


Through gateways, yes. There are a bunch of public ones, or you can run your own.


Can this be used for simple IPFS file hosting with pinned files?

I'm building a small web app and want to use IPFS for image storage. In testing I was able to create the files, but after a few months of not running the IPFS daemon the images were lost, as they were no longer pinned.

Edit: Just saw https://www.eternum.io/ in other comments. Perfect!


To answer your question, yes, it can. Just put the files in there, it'll pin them.
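
And if you're running your own node, pinning manually is just:

    ipfs pin add <hash>            # keep the content through garbage collection
    ipfs pin ls --type=recursive   # list what's pinned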


Looks interesting, but I can't get it to work.

I've added some files to my Hearth folder; the menu bar icon changed to the "syncing" state and then seems to have gotten stuck like that. The Finder integration doesn't work either, so I can't copy the Hearth link.


Hey there, the ongoing 'syncing' state means that the Hearth daemon is still processing your changes.

How many files have you tried to add and how big were they?

If that doesn't stop after a while, consider restarting the Hearth app.

Regarding the Finder integration: if neither the context menu item nor the Hearth toolbar button is shown, make sure the Hearth Finder extension is enabled (visit System Preferences > Extensions > Finder and enable Hearth), then restart Finder by option+right-clicking the Finder icon in your Dock and selecting 'Relaunch'.

Hope that helps!


Hmm, that's odd. You're running the app but aren't getting the icon in Finder?


IPFS seems to be really taking off lately in terms of people building useful things on top of it.


Peerweb¹ will be much more user-accessible, since it works with normal browsers and retains your current URL structure (even for user-generated content). Additionally, our optional resource-sharing permission manager app (which supports Windows, macOS, and Linux) enables fully decentralized websites with far better performance than IPFS.

ETA later this year

1. https://peerweb.net


Please elucidate the timing.

Your link redirects to Twitter, and I can't find anything about it following links from https://oswg.oftn.org/ or on your own website. So it looks completely non-existent, which makes me skeptical of "soon".


It's described in the first tweet on that twitter link. It will be out this year.


Your first tweet is what makes everybody skeptical. Your post here makes it sound like you're just polishing things up. Your tweet makes it sound like it might not have even started.



