Strategically, this (advertising IPFS as an anti-censorship tool and publishing censored documents on it and blogging about them) doesn't seem like a great idea right now.
Most people aren't running IPFS nodes, and IPFS isn't seen yet as a valuable resource by censors. So they'll probably just block the whole domain, and now people won't know about or download IPFS.
We saw this progression with GitHub in China. They were blocked regularly, perhaps in part for allowing GreatFire to host there, but eventually GitHub's existence became more valuable to China than blocking it was. That, I think, was the point at which, if you're GitHub, you can start advertising openly about your role in evading censorship, if you want to.
But doing it here at this time in IPFS's growth just seems like risking that growth in censored countries for no good reason.
Agreed. Also, if IPFS is not the preferred way to go about this kind of endeavor, there are the guys at https://lunyr.com/ who are building a Wikipedia on a blockchain architecture. About time someone actually attempted this. Additionally, the guys at Everipedia have the entirety of English Wikipedia imported and forked very nicely, so there is at least one other large-scale effort with a robust dump of Wikipedia already in production.
Lots of different attempts at keeping the vision of "sum total of human knowledge" on the web. :)
I like Wikipedia, but I always felt like it was missing crucial pieces of information to make it the "sum total of human knowledge." For example, I can look at the articles for ISO_31-11, Spheres, and Sphere Eversions, but I don't think that information alone is enough for me to go from "post-apocalyptic society that lost how to do math" to "we can prove sphere eversion is possible once again." I chose this example because I'm shit at math and never learned much of it anyway, so I can maybe be considered a Mad Max post-apocalyptic uneducated peon? Anyway, the point is, I think there's a crucial "teaching" step missing.
Perhaps with a collection of a great deal more articles together, the result "prove eversion is possible" can be taught, but missing for example the ISO_31-11 article alone would make it monumentally difficult, I feel.
I believe, at some point, Wikipedia was talking about partnering with Archive.org to turn all their citation links into Wayback Machine snapshots—including the ones for journal PDFs.
Presuming such a thing ever gets done, an archival copy of Wikipedia + the Wayback Machine pages Wikipedia references would then make a pretty good "sum total of all human knowledge."
There's a lot that's not written down. Didn't NASA try to coax Apollo engineers out of retirement for their new big space stuff? The engineers had knowledge that wasn't written down, wasn't accessible, or was simply lost.
Sure, with sufficient time you could recreate the knowledge with brand new engineers but you'd be rediscovering lost information.
I think sum total is probably overstating it, but Wikipedia provides more than enough of a scaffolding to massively accelerate rediscovery of math. It's less about teaching and more about giving inquiring minds a roadmap with some answers filled in. How far do you think Euclid would have gotten if he could've read Wikipedia for the entirety of his life?
> How far do you think Euclid would have gotten if he could've read Wikipedia for the entirety of his life?
Much less far than he actually went, given that Euclid is known solely for compiling existing work. There's not so much reason to do that when it's already been done and you're happily reading the compilation.
In Euclid's day, "compilation" meant taking disparate and sometimes halfheartedly proved theorems with wildly different terminology and framing and laying them out in one consistent chain of deductions. So I have trouble writing him off as a trumped up textbook author.
It's like saying the formalization of continuity by Weierstrass and Bolzano was compilation because prior mathematicians had already been working with the concepts.
Did you look into Lunyr much at all? They are just using IPFS on the backend (or plan to use, per their ICO communication to raise money), and using an Ethereum-based ERC20 token to 'incentivize' people to contribute and edit.
Blocking the domain is not an effective step. Every censorship circumvention tool's domain is blocked, except for ones that aren't popular and so are essentially unknown. As with many other areas, half the battle is just entering the arena. Until you do that it's hard to know what your real challenges will be. The IPFS team should be commended for entering the arena, and we should all help however we can.
My god Chris, you're seriously interpreting this action as a business strategy? If being docile in the face of evil is Keybase's business strategy, I strongly urge you to re-prioritize, because if the government goes rogue, it's going to kick down your door looking for access to encrypted communications from dissidents, whether you're polite about it or not. How many John Does have you waffled on because it was a "better business strategy" so far?
The OP and GP provide an example of the age-old dichotomy between deontological (kyledrake) and consequentialist (cjbprime) viewpoints. Which is more correct has been debated for centuries and is unlikely to be resolved anytime soon, as it's highly subjective.
Should YouTube have refrained from using copyrighted material that contributed to its popularity, given that it greatly increased its chances of success compared to the other nascent video-sharing sites at the time?
Was reddit.com wrong for using sock-puppet accounts to kickstart content when it was starting out?
Some will say it's always wrong to use 'x' in a strategy no matter what. Others would argue that while 'x' might be a bit evil, it's necessary to ensure survival, and will result in a greater good (i.e. the service being useful to millions vs. extinction) in the long run.
GP suggested that IPFS might not want to openly circumvent a specific country's censorship system, and blog about it, and you're comparing not doing that to the use of fake accounts by reddit? The issue is not as complicated as you're making it.
Well, yeah. The consequentialist viewpoint is if IPFS fails to succeed because it jumped too early, then ZERO people benefit from it. That's arguably the greater sin.
There's a saying about cutting off the nose to spite the face. That could apply here.
The greater sin is not failing, as failing is expected on the road to success. The greater sin would be to not jump when the opportunity presents itself.
Maybe look at it from a different point of view: if it makes a difference to someone in Turkey trying to access Wikipedia, then it's already a success.
It's a question of where you want to draw the line, and maybe the consequentialist drawing the line at IPFS taking over the world and getting 3 billion daily users is far-fetched.
YouTube is probably not a sound example here, as posting a trove of copyrighted content was a deliberate strategy in the larger scheme of getting YouTube bought by a giant actor for a hefty sum of money, run by a group of people from the PayPal mafia with experience in this shady business.
From the get-go, the exploitation of illegal content (and other shady tricks) was done for the sole purpose of personal profit, with no intent of making the world a better place.
In other words, YouTube has been evil from the start, with evil intent, using evil tricks as part of an evil agenda. There was no room for ethics there.
Unless proven otherwise, IPFS seems to have no such nefarious purpose and even works towards a greater good; by its nature it will hardly ever be put up for sale, and it has to leverage different mechanisms, being a protocol rather than a website in need of registrations.
Wow, that's ascribing a lot of motivation to things which have multiple interpretations.
Napster was rarely categorised as 'evil' except by 'big content'. In the music industry there was compulsory licensing, which enabled its successor, Spotify. There was nothing similar in video.
This stuff can lead to race-to-the-bottom issues. Consequences are different on a micro or macro scale. The optimal answer is to figure out how to stop others from doing the dishonest approaches. But in a macro context where everyone else is cheating, it's a lot more fuzzy and complex whether it is right or wrong to cheat as well in order to compete…
It's not just about business strategy. Pretty much all censorship circumvention that I am aware of leverages "collateral blocking" to achieve their goal. A determined enough state can block anything they want if they are willing to accept the collateral damage it causes. Turkey could try to shut down the entire internet inside their country in order to prevent the spread of information, but it would have serious economic consequences. Therefore, Turkey probably wouldn't block the whole internet.
Those trying to circumvent censorship leverage the fact that most countries can be modeled as at least somewhat rational actors. For a rational actor, censorship is a delicate balancing act. But this also means those interested in circumvention are also in a balancing act.
I've downvoted you because I feel the tone of this comment has upended what could have been a constructive, well-thought conversation and turned it into somewhat of a shouting match.
Ignoring your tone, it seems to me that you're drawing a false comparison (however to be clear, this isn't the reason why I'm downvoting). Chris's comment recognizes the importance of critical mass of adoption in network effects. He's pointing out that at this stage in its development, censoring governments would have no need to strong-arm IPFS. Simply blocking their domain is much lower profile and as such, much lower risk.
In this scenario there's no point at which "being docile" or "being strong" comes into play. If you think otherwise, then I'd argue that you may be mistaking the primary goal of the effort in question to be an open protest, rather than providing residents of Turkey access to censored information. I think you'd agree that the primary goal is for IPFS to give the people of Turkey Wikipedia, and while it might be nice if they also gave the Turkish government a big bold middle finger in the process, that's not in service of their primary goal.
If you accept that premise, then I'm not sure how you can take this idle musing from Chris and construe it as evidence of how he'd behave when being legally strong-armed by a government while at the helm of Keybase. The situation upon which that conclusion would be predicated simply isn't represented in his original comment.
Finally I'd argue that if this is of critical concern to you, then why are you trusting a third party with your key storage in the first place?
I absolutely do need to know that Keybase is going to take privacy and censorship resistance seriously, and posts like this are not helpful in that "strategy". I depend on them for key distribution and so do a lot of other important, likely-to-be-targeted people, many of whom probably live in Turkey. When the Turkish dictatorship starts demanding information on political opponents from Keybase, what is Keybase going to do? Send the info to avoid being blocked? Wait until they're "strong enough" before fighting the important fights?
We're in a dark political period right now; this stuff is deadly serious. There are 110,000 people who have been detained in Turkey. They're not waiting for a future resistance, they need one right now. Stop spreading FUD about censorship resistance.
He(?) said that drawing attention to yourself while vulnerable is risky. Nowhere did he say anything about being less secure. Arguably it's the same security with more utility - better to be secure and available only to a few (because of limited advertising) than to be equally secure, and available to none because you got blocked early.
Now that last point is definitely arguable, and I think that's what you want to do, but that argument has no connection to the security of the information. Even if you think vocal advertising as an anti-censorship platform is the greater good, it doesn't make the information any more secure.
So what am I missing that explains your vehemence?
I think the problem is that pushing the anti-censorship angle lets more people avoid censorship today, whereas government censoring of IPFS is a problem for the future. And compromising on the present for fear of a future problem that may or may not actually happen, is a typical consequentialist trap that deontology is essentially designed to fix.
> If GP values adoption as the path to censorship resistance - then maybe their product will include backdoors to "stay compliant".
It's quite a leap to assume that - the poster said nothing of the sort, and saying "I don't want to get shut down so I can help people against censorship" is very different than "I care more about staying open than helping people against censorship"
I'm not saying it's impossible, but I certainly wouldn't assume it
Self-censorship is the only way the book burners can really triumph. So long as humanity's collective ingenuity continues to exceed the sum of our malevolence, I ain't too worried.
P.S. Keybase is awesome :)
So nobody should have done anything and we collectively should have just let Turkey censor Wikipedia with zero alternatives, that's the better solution here?
"Ignore the plight of a few websites now, to protect millions of websites in five years," is sorta-kinda the point everyone is making. You have to let Luke Skywalker get old enough to actually have a chance of defeating Darth Vader before giving him a lightsaber and letting him hare off across the galaxy; otherwise you just get a dead Luke and the Empire wins.
But that's not even the true compromise in this case, because you could have used any other, less network-fragile tool to accomplish the same goal. You could have called up Obi Wan (Freenet) or Yoda (Tor), but instead you handed the job to the ten-year-old kid.
What a poor choice of analogy. This is not a Hollywood movie; this is the real world, with actual people in an emergency situation right now.
Your house is on fire and the firemen tell you they will not come to help because, if they do, they may not be able to deal with potential house fires in a not-so-distant hypothetical future?
That is like 100% not what was said. I will re-explain it in simpler terms.
We don't currently have a censorship-resistant way to get Wikipedia to Turkey, at all. IPFS is not it, because there's no reason for Turkey not to censor all of IPFS. There is no alternative to IPFS that Turkey won't censor, either.
If IPFS wants to become it (which is a good goal that we should pursue), it first needs to become used / useful enough that Turkey won't be able to censor all of IPFS without significant economic damage to the country.
There are people in Turkey replicating this snapshot right now. People browsing locally get it from there. Even if all connections outside the country get shut down, it can still be distributed locally.
But it won't grow unless others can download the IPFS software to continue hosting it, and there is no reason to believe Turkey could not simply block that internally as well to prevent it spreading.
As said before, it needs to be used by and useful to many normal people (like the earlier GitHub example) before blocking it requires too much political will.
Installing software from a USB drive is within the reach of most normal teens and young adults in the developing world. So, sneakernet -> IPFS -> Wikipedia is a viable path so long as blocking the IPFS protocol, over all reasonable transports, is hard enough to do.
Responding to 0xCPM (sibling comment). IPFS was not designed to hide the sources, but thanks to the versatility of transports (you can even run it on cjdns or a mesh network) and the possibility of using it offline, it is not easy to censor.
The person I was responding to was talking about people getting the IPFS software via IPFS or in person delivery of media. This is not a promising method for distributing anything to normal people, even if it later lets you view Wikipedia.
What prevents Turkish ISPs from blocking peer-to-peer IPFS connections within the country?
(I'm having trouble figuring out exactly how IPFS presents itself at the network layer, and whether an IPFS connection over TCP/IP is noticeably different from some other normal connection type, so I genuinely don't know the answer to this question. It looks like it uses SPDY over TLS, but maybe something in the certificate gives it away?)
I was wondering the same, but knowing the kind of DPI equipment that has been sold to Syria, Libya and other countries, I assume it is possible to detect, monitor, tamper with and block pretty much any kind of traffic, including IPFS.
I don't know how to make this work with IPFS' P2P approach: a request for www.google.com with a destination IP of some residential Turkey customer looks awfully suspicious. I suppose it's workable if App Engine has a colo inside Turkey.
There's a circuit switching relay protocol [1] in the works which will allow multi-hop connections. This is generally useful for situations where two nodes can't directly connect to each other, be it because of NAT, censorship, or simply because they don't have a transport protocol in common (e.g. js-ipfs in the browser).
That means nodes can soon use the Websockets transport to connect to a domain-fronted node (this part already works), which then acts as a relay.
Connections over the libp2p-tcp transport are trivial to spot, but there are more transports available (Websockets, WebRTC, uTP), and even more in the works (Onion, QUIC, FlyWeb).
No, he's just saying they probably announced this at a point where their network is (presently) too weak to achieve what they want, so they may be shooting themselves in the foot a little bit.
> their network is (presently) too weak to achieve what they want
That's certainly a plausible hypothesis, but without evidence, definitions of words like "weak" and a proper wash through the scientific method, I'm not convinced we can make conclusions like that at this point in time.
That's not a conclusion, it's a claim by a human in a discussion. It is certainly not the case that you need a double-blind study before you can have any opinion on a subject (and, I'd argue, a complete misunderstanding of the scientific method).
It's certainly a more defensible claim than the completely irrelevant ad hominem you made about Keybase being "docile in the face of evil". Why was the commenter's employer relevant to the discussion?
You not being convinced is no excuse for derailing the discussion.
Behavior like this makes people think twice about even trying to engage with the crypto-loving, privacy conscious, free-speech market segment, because even trying to do anything in that direction is met with abuse.
I think the original comment was more about the style and wording of the announcement, not about whether this ipfs group should or should not have done this.
In this case, business and political strategy unify in their mutual interest in IPFS's survival. Screaming and being outraged doesn't produce change. Methodical, contemplated action does. Flying under the radar until you're strong enough for conflict is a sensible strategy.
You should really keep your outrage in check until you have a better grasp of the issues.
Blind ideology-driven attacks without any kind of strategy may make your hind brain feel all tingly, but they are routinely squashed like a bug by the censorship apparatus, which has plenty of resources to block IPFS at the protocol level.
Rational strategy is not "being docile in the face of evil". It's just not openly marching into battle against a vastly stronger adversary who was only dimly aware of your existence.
Given the exposure and that this is Turkey, the IPFS project may be making the right call here, but strategy, not blind ideology, is the way to make those calls if you want to win.
This idea isn't even a particularly new or unusual one. When faced with a superior enemy, fighting opportunistically is an old as dirt strategy. People have continued to use it for literally millennia because it's effective.
Force is tilting the balance of power to your side by gathering advantages. Warfare is the Way of deception. Therefore, if able, appear unable; if active, appear inactive; if near, appear far; if far, appear near.
If your enemies have advantage, bait them; if they are confused, capture them; if they are numerous, prepare for them; if they are strong, avoid them; if they are angry, disturb them; if they are humble, make them haughty; if they are relaxed, toil them; if they are united, separate them. Attack where your enemies are not prepared; go to where they do not expect.
This strategy leads to victory in warfare, so do not let the enemy see it. - Sun Tzu, The Art of War
Relatively speaking, I'd say 'our enemies' are extremely strong. Probably best avoided (for the moment) if the overall objective is victory.
He said nothing about "business". And I'd hope Keybase's strategy is to be effective in the face of evil rather than just performatively non-docile in the face of evil.
Speaking as someone who watched this come together on IRC, it definitely was NOT a business decision! This is just a problem being solved by smart people with great tools! That's how it should be!
The technical factors are actually key to the political decision. Content addressing will occur at the browser level in order to improve caching behaviours; this will be implemented by organisations that are part of a political sphere (Google, Mozilla, Apple) whose influence is largely determined by their technical performance. It is then a minor matter for it to move forward into a larger addressing scheme (likely to occur at the same time).
IPFS is approaching the issue from the opposite direction of course, but the two are so tightly coupled that I doubt there will be much risk.
IPFS is most likely to fail for any technical problems it may have.
Correct, but over time, Turkey would find their internet is progressively more broken as people switch over to IPFS.
More importantly, I doubt the censorship-efficiency gains would be improved much by banning IPFS, since nobody really uses it yet, and I suspect selling the "anti-censorship" angle will get more non-anti-censorship adopters (i.e. people who just use it for performance gains) in non-Turkey places, compared to staying silent on the anti-censorship angle of IPFS. And more widespread non-Turkey adoption means more pressure on Turkey to allow IPFS.
So China blocked github, and then eventually relented because it was too useful for people in China. Why wouldn't the same thing possibly happen to IPFS?
Because it's not useful? Make IPFS useful - only then can it be used against censorship. Otherwise it ends up like Tor: Something governments can block without consequence.
Maybe you have an example of how IPFS could be so useful to China that it would revert a block, but it seems to me that the reason for existence and purpose of IPFS go against the tide of the Chinese government, preventing it from ever being useful to them.
The government has limits on how much it can defy economics - if it inflicts too much unnecessary economic damage on its corporate powerbase, they'll go support other factions, who promise to rectify this.
So basically you're saying "do not circumvent censorship and tell the world about it, or your domain might get censored too", right?
How does that make any sense? ipfs is still available to install from GitHub, distribution repositories and other places. IIANM, blocking ipfs.io doesn't block the protocol.
Moreover, the Streisand effect shows that attempts at online censorship have the opposite effect and actually draw more attention, making your point dubious.
> So basically you're saying "do not circumvent censorship and tell the world about it, or your domain might get censored too", right?
No, that is not what was said.
What was said is, if you're going to claim to be able to circumvent censorship, actually be able to circumvent censorship first. There are connections (HTTPS to GitHub was a specific example that was given) which cannot be censored without significant collateral damage to the GDP of the country. IPFS is not yet one of them. IPFS should become one of them, and then make these sorts of claims.
These things are usually an ongoing race between the government and the people fighting censorship. The first move here was Turkey blocking Wikipedia, next is Wikipedia on IPFS, then maybe later Turkey blocks IPFS, then IPFS gets on tor, which I understand is already coming pretty soon, and so on.
Yeah, the Streisand effect is important. If Turkey blocked IPFS, it would get publicity in places like the NY Times, and all sorts of people who had never heard of IPFS would learn about it, and many would get intrigued and look into it.
I'm not sure it's ever advisable. Most users will figure it out and get the word out anyway. Best for the company to focus its advertising on the other ways it provides value.
There's a lot to be said for letting the government save face when they choose to back down, and shouting about how you defied them isn't always a great plan.
It's open source software. Can't we just upload the source to a new site when the old one goes down? It's different than GitHub, which is a proprietary, centralized system.
Of course, you could even work around the domain blocking in a variety of ways: using uncensored DNS, a proxy, mirrors, the IP address directly, sneakernet, and so on.
The usual ways of working around domain blocking, which have been demonstrated countless times to be ineffective.
GitHub is a centralized service, so I don't know if the comparison is fair. Perhaps a better comparison is Bitcoin's adoption, given the rhetoric of early adopters. It seems to me the whole point of decentralized services is to evade control. If IPFS isn't decentralized and censorship-resistant, then what exactly is the point of it?
The point of decentralization in the case of BitTorrent is offloading file hosting to users. That is how Debian and Blizzard use it. It is better to advertise BitTorrent as such, but then use it to avoid control.
The same point was raised about Tor changing its mission (https://twitter.com/torproject/status/635856569201246208), because it makes it vulnerable in states where "human rights", democracy, etc. are associated with US propaganda.
It is better to advertise IPFS, GNUnet and Freenet as a cache layer instead of as censorship resistance. CDNs are usually used because of their domain-fronting application. Censors have a hard time trying to convince CDNs to block particular sites and services (Signal), and they can't just block 30% of the internet because of one "violation". But any CDN that advertises itself as a VPN or some sort of censorship-resistance service is easy to block.
So basically you're suggesting making censorship-resistance software invisible to people looking to resist censorship, as a way to make it more useful and easy to find?
I would never have found and supported Freenet if it were advertised as a cache layer, and I would be surprised if the larger population would even comprehend what a cache layer is and how it could help them defeat censorship.
For example, BitTorrent probably advertises itself as a peer-to-peer data distribution protocol, but not as a way to get pirated content. But you and I know through some other way that you can install a BT client and get pirated content, right?
Changing Tor to explicitly say "human rights" makes it a red flag in oppressive countries, where they may claim these words mean "Western propaganda!". Previously, anyone caught with Tor could say they're a network researcher or whatever; now anyone caught with it can be charged with "spreading Western propaganda".
Just like if a torrent client advertised itself as a way to get pirated content: anyone caught with the software could probably be easily convicted of copyright infringement, because they actually have software that advertises itself as copyright-infringement software.
Correct me if I'm wrong, but if accessing some content through IPFS makes you a provider of that content, doesn't that mean you are essentially announcing to the world that you accessed the content, which in turn can be used for targeting you by those who do not want you to access it?
In other words, if someone from Turkey (or China or wherever) uses IPFS to bypass censored content, wouldn't it be trivial for the Turkish/Chinese/etc government to make a list with every single person (well, IP) that accessed that content?
Well, there are more differences, but the most important one is that with Freenet you get content pushed to you; you help the entire network host whatever is there. Someone can push content to your node that you might not want to host.
With IPFS, the only data shared from your node is data you explicitly agreed to share. So data is not implicitly available on the network; people have to agree to help share it.
As far as I understand the protocol itself is detectable and censorable both passively and actively, so it's not real anti-censorship in any way. Just a tool that hasn't been added to censoring DPIs yet.
IPFS can work just fine peer-to-peer over Websockets though, which pretty much look like a regular HTTPS connection. Other applications can make use of this too thanks to go-libp2p and js-libp2p.
I wouldn't call this trivial, but yes, it's certainly possible. Well, it's possible for any traffic traveling over the public internet in the first place. One of the main benefits of IPFS is that it's decentralized. If your friend has the content, and you connect directly to his machine and access it, the record of that access never left those two devices. A government would have to inspect them manually to detect the content in that case.
In simpler terms, it's every bit as traceable as traffic over the network it travels. Its primary advantage is, thus, not being tied to any particular network.
It is trivial: the command ipfs dht findprovs <hash> will list all the peer IDs that have the content in their repo, so unless you blacklist everyone but your friend (and he does the same), the fact that you have accessed <hash> is public knowledge. You could also GC your repo as soon as you have downloaded the content, but both of these methods defeat the whole point of IPFS.
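To make the privacy concern concrete, here's a minimal Python sketch of how provider records behave, using a single global index as a stand-in for IPFS's actual Kademlia DHT; the peer names are made up for illustration:

```python
import hashlib

class ToyDHT:
    """Toy model of IPFS provider records: a global index mapping
    content hashes to the peer IDs announcing they hold that content.
    (Real IPFS distributes this index across a Kademlia DHT.)"""
    def __init__(self):
        self.providers = {}  # content hash -> set of peer IDs

    def announce(self, peer_id, content):
        # Content addressing: identical bytes always hash to the same key.
        h = hashlib.sha256(content).hexdigest()
        self.providers.setdefault(h, set()).add(peer_id)
        return h

    def findprovs(self, h):
        # Anyone can query this -- including a censor.
        return self.providers.get(h, set())

dht = ToyDHT()
article = b"censored wikipedia article"
h = dht.announce("publisher-peer", article)

# A reader fetches the article and, by default, re-announces it,
# so their peer ID is now publicly listed as a provider:
dht.announce("reader-in-turkey", article)
assert dht.findprovs(h) == {"publisher-peer", "reader-in-turkey"}
```

The toy version makes the tradeoff visible: re-announcing is what lets the swarm scale, and it is also exactly what exposes readers.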
Ironically, I've just discovered that https://ipfs.io/ has a certificate signed by StartCom, known for being a source of fake certificates for prominent domains[1]. So in order to work around censorship, I have to go to a site which, to establish trust, relies on a provider known for providing fake certificates. D'oh.
Even funnier: there are individuals out there trying to help others, and HN's top replies are sarcastic and critical. I hope the poor devs don't see this thread today. If you do: thanks so much for the awesome technology!
I'm not sure how pointing out a security flaw contradicts helping others. Do you think if people try to help others, nobody should point out their mistakes? Are you also against submitting bug reports to projects that you consider good and only send them to the most evil ones?
I'd be really curious to hear more about how Goal 2 (a full read/write wikipedia) could work.
IIRC, writing to the same IPNS address is (or will be?) possible with a private key, so allowing multiple computers to write to files under an IPNS address would require distributing the private key for that address?
Also, I wonder how abuse could be dealt with. I've got to imagine that graffiti and malicious edits would be much more rampant without a central server to ban IPs. It seems like a much easier (near-term) solution would be a write-only central server that publishes to a (read-only) IPNS address, where the load could be distributed over IPFS users.
You can write to the same IPNS entry with multiple different computers, but I'm not sure that would be the desired way of updating this.
One way I would imagine it could be done is using CRDTs in combination with some global immutable log (not necessarily a blockchain). Then there needs to be some concept of identity to help avoid spam and trolls. I really like the idea of a web of trust, and that could very likely work here. I'd love to see more widespread applications of CRDTs.
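For anyone who hasn't met CRDTs before, here's a toy sketch of one (a last-writer-wins element set) in Python. The class and names are my own illustration, not any real IPFS or wiki API; the point is just that replicas can accept edits independently and then merge in any order and still converge:

```python
import time

class LWWSet:
    """Last-writer-wins element set: a state-based CRDT.

    Each replica keeps (element -> timestamp) maps for adds and
    removes. Merging two replicas is a pointwise max over those
    maps, so the result is the same no matter the merge order.
    """
    def __init__(self):
        self.adds = {}     # element -> latest add timestamp
        self.removes = {}  # element -> latest remove timestamp

    def add(self, elem, ts=None):
        ts = time.time() if ts is None else ts
        self.adds[elem] = max(self.adds.get(elem, 0), ts)

    def remove(self, elem, ts=None):
        ts = time.time() if ts is None else ts
        self.removes[elem] = max(self.removes.get(elem, 0), ts)

    def __contains__(self, elem):
        # Present if the most recent operation on it was an add.
        return self.adds.get(elem, 0) > self.removes.get(elem, 0)

    def merge(self, other):
        for elem, ts in other.adds.items():
            self.adds[elem] = max(self.adds.get(elem, 0), ts)
        for elem, ts in other.removes.items():
            self.removes[elem] = max(self.removes.get(elem, 0), ts)
```

Two offline editors can each call add/remove locally, then exchange state and merge; both end up with identical sets without any central server arbitrating. Real collaborative-text CRDTs are far more involved, but this is the core idea.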
I think the real problem with a decentralized read/write Wikipedia is conflict resolution (conflicting edits). Having a centralized site to handle those is extremely useful.
An alternative is to eschew conflict resolution altogether, and have various edits co-exist. That would basically be like the fork model on GitHub. The main issue then is how to help the user choose between competing edits. That's where metrics based on upvotes or the social network topology can help.
That sort of forking model really doesn't work for prose. Making a fork is easy, but merging those changes with other forks to combine improvements rapidly becomes impossible -- especially if anyone tries to perform any sort of large change to an article.
There doesn't need to be a merge. Just pick the most promising version and edit it to create a new version. The old version sticks around. An add-only data structure.
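A minimal sketch of that add-only model, in Python with stdlib hashing (my own illustration, not how IPFS objects are actually encoded): every edit appends a new content-addressed version that points at the version it came from, and nothing is ever deleted or merged.

```python
import hashlib
import json

class ArticleHistory:
    """Add-only store of article versions, keyed by content hash.

    Editing never overwrites anything: it appends a new version
    that records which version it was derived from, so the old
    version sticks around and remains addressable.
    """
    def __init__(self):
        self.versions = {}  # hash -> {"text": ..., "parent": ...}

    def add_version(self, text, parent=None):
        record = {"text": text, "parent": parent}
        blob = json.dumps(record, sort_keys=True).encode()
        h = hashlib.sha256(blob).hexdigest()
        self.versions[h] = record
        return h

    def lineage(self, h):
        """Walk back from a version to the original."""
        while h is not None:
            yield h
            h = self.versions[h]["parent"]
```

"Picking the most promising version" then just means choosing which hash to link to; competing edits coexist as sibling versions with a common parent.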
How do you determine "most promising", though? Especially when many edits may not individually look like much (e.g., copyediting, fixing typos, etc.), but help improve the overall quality of an article.
What this is likely to turn into, in practice, is a forest of dead forks. Division of effort, without a combination of progress.
I presume that "use ethereum" would imply "the viewpoint with the most hashing power wins," which might not align well with how most of us would like to determine truth.
There'd be no voting. By using contracts you could simply lock an article while it is being edited, much like it happens on Wikipedia already.
Ethereum computes all "code" all the time on all nodes and makes sure that all nodes come to the same result. Thus, you can write one Wikipedia that runs in a distributed manner while still acting like one centralised application.
I never edit articles on Wikipedia, so I just looked up how it is done there. The editing users are currently merging the changes. The same could be done with Ethereum and a contract. (Although it would be a bit tricky right now because the computational cost of calculating a diff for complex articles would make the operation expensive).
However, there're a couple of outstanding Ethereum Updates that would make this process easier and faster. I'd suppose that in around one year it should be possible to implement this in a fast and cheap way.
Alternatively, one could use 'Whisper', which is Ethereum's 0MQ alternative, and let multiple users work on the same document at the same time (like Google Docs or a multitude of other editors).
Ethereum is still a very young project, but it would allow implementing a truly decentralized Wikipedia.
> I never edit articles on Wikipedia, so I just looked up how it is done there.
I think you've misconstrued how edit conflict resolution works. There is no "locking" involved.
If two editors both open the same revision of a page and save changes, the second editor to submit changes may encounter an edit conflict. In most cases, conflicts are resolved automatically by the wiki engine. If this isn't possible, the second editor is prompted to merge their changes manually, or to reapply their changes.
This is not a locking mechanism. Having an article open for editing does not lock it for future changes; it just records the revision that you started with to help resolve a conflict, should one arise.
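To make that concrete, here's a toy line-based three-way merge in the spirit of how wiki engines auto-resolve conflicts (my own simplified sketch, not MediaWiki's actual algorithm; real engines use a proper diff3 and handle inserted/deleted lines, while this assumes equal line counts):

```python
def three_way_merge(base, mine, theirs):
    """Merge two edits against a common base revision.

    Lines changed by only one editor merge cleanly; a line
    changed differently by both editors is a genuine conflict
    and is kicked back for manual resolution.
    """
    merged = []
    lines = zip(base.splitlines(), mine.splitlines(), theirs.splitlines())
    for b, m, t in lines:
        if m == t:
            merged.append(m)   # both agree (or neither changed it)
        elif m == b:
            merged.append(t)   # only "theirs" changed this line
        elif t == b:
            merged.append(m)   # only "mine" changed this line
        else:
            raise ValueError("edit conflict: line changed by both editors")
    return "\n".join(merged)
```

This is why recording the starting revision matters: without the base, you can't tell which of two differing lines is the edit and which is the original.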
What happens when there are merge conflicts, as in git? A fully decentralized app would have to resolve those conflicts on its own, and there's a very low chance it would do so perfectly every time.
Some additional information may help in the duty vs prudence debate. It's true that IPFS gateways can be blocked. But as noted, anyone can create gateways, IPFS works in partitioned networks, and content can be shared via sneakernet. Content can also be shared among otherwise partitioned networks by any node that bridges them.
For example, it's easy to create nodes on both the open Internet and the private Tor OnionCat IPv6 /48. That should work for any overlay network. And once nodes on such partitioned networks pin content, outside connections are irrelevant. Burst sharing is also possible. Using MPTCP with OnionCat, one can reach 50 Mbps via Tor.[0,1]
How is Wikipedia censored in Turkey? Are providers threatened to be punished if they resolve DNS queries for wikipedia.org? Or are they threatened to be punished if they transport TCP/IP packets with IPs that belong to Wikipedia?
Wouldn't both be trivial to go around? For DNS, one could simply use a DNS server outside Turkey. For TCP/IP packets, one could set up a $5 proxy on any provider from around the world.
Yes, it would be trivial to go around. But that's not the point.
It used to be the case that people with technical knowledge got around DNS blocks by simply changing their DNS servers.
Today you cannot access Wikipedia even with, say, OpenDNS, because some popular DNS providers are now being "hijacked" in Turkey [1]. Yes, hijacking.
The key concern is the fact that an Internet service provider employs an illegal hacking technique, under pressure from a totalitarian government, to censor the largest and most collaborative information repository in all of human history, just to cover up one article mentioning the truth about that totalitarianism.
Do not think it is too far of a dystopia for the "more advanced" countries like U.S., especially with Trump.
That's probably why the Turkish gov't blocked Wikipedia - to keep people uneducated and thus retain its grip on power. It doesn't really make sense otherwise.
They claim the reason is that Wikipedia refused to take down articles the Turkish government is "sensitive" about. These include evidence suggesting governmental support for ISIL.
What I don't understand is why do they block ALL versions of wikipedia?
They could have at least provided us with a "safe version" of wikipedia under a governmental domain like wikipedia.diyanet.gov.tr
As a citizen residing in Turkey, I'm concerned, afraid, but mostly frustrated.
This probably happens a lot. The people who have the technical knowledge, and probably are educated and know what's going on, they use VPN or a different DNS-server. This has happened before, and Turkey has blocked 8.8.8.8 and 8.8.4.4 in the past, just for this.
But I bet this is not about the people that can circumvent this. It's for the people outside of the cities, the ones who don't have this knowledge. It's meant to make sure they don't know what's going on. That's more important than convincing the people that can't be convinced.
It is the same in China. The government does not clamp down too hard on VPNs in Shanghai, Beijing, etc. They are worried about the people in the midland. The same goes for Erdogan - he will never control Istanbul and Izmir. He is worried about his base - the pious, hard-working, conservative people in the countryside - getting the wrong ideas.
The moment a place has a big international airport, it is lost to cosmopolitanism. There is a different Tehran lurking under the hijab. And, a wild guess, Jeddah and the Saudi Aramco headquarters.
So dictators grudgingly cede cultural control in the cities that are gates to the outside world, trade, and riches, as long as they can control the heartlands.
These distributed file systems are really interesting. I'm curious to know if there is anything in the works to also distribute the compute and database engines required to host dynamic content. Something like combining IPFS with Golem (GNT).
If I may ask, do those 4 cents include bitcoin's transaction fee? How well have bitcoin transactions worked for you so far?
I've been pondering starting a cheap service that accepts bitcoin (too cheap for CCs and not worth the hassle of paypal), but wasn't sure about the viability of relying on bitcoin as the primary transaction currency.
The 4 cents don't include the transaction fee; however, payments to the service can be slow and don't require inclusion in the first block, so they can be very, very cheap.
Bitcoin transactions in testing have worked perfectly, and the display is zero-conf, which means you can pay and see the payment registered against the hash within the minute.
Customer uptake has been lacking for my project, but I believe that's because I've created something that works really well for two very niche markets (which fortunately have a lot of overlap). Depending on what your project is, you can always add a credit system whereby people can deposit $5 or so via credit card, and you allocate that to them to spend as and when they need.
Thank you for the detailed response. I don't have any working knowledge of how bitcoin is practically used as a currency, so your comment provides insight that's very valuable to someone like me. Zero-confirmation transactions seem like the perfect solution for an infrequent, low-value service that can absorb more than a couple of fraudulent transactions.
You have my thanks!
A note on ipfsstore: while ipfsstore provides a very valuable service for those who know why such a service would be used, the website provides no real information for those who don't already have that prior knowledge. What little information it has is tucked away on a separate page. I think if you clearly advertised the primary value that your service provides (offsite data redundancy/replication) on your landing page, you might see more uptake.
If IPFS and bitcoin ever hit critical mass, I think you'll end up with a surprisingly lucrative venture on your hands.
Thanks for creating and sharing this; I think it is really cool. I'm curious: what is your plan to deal with the possibility (inevitability?) of hosting copyrighted or otherwise illegal content?
I'm happy to comply with requests from legal entities as and when they arrive; however, considering how uptake has been, I don't see that being a problem any time soon.
Distributed storage still works alright when everyone doing the hosting just has puny home-network connections (see: BitTorrent.) Trustworthy distributed computation is a lot harder if you want to include residential users; you end up having to do highly-redundant work to ensure correctness, like Folding@Home does.
I've considered a strategy like this where the host is "distributed" across thousands of commodity VPS providers, though. There would need to be VPS providers in the country in question for such a thing to work, which isn't always the case.
But Wikipedia allows user edits, and so is inherently censorable. You don't need to block the site, you can just sneak in propaganda a little at a time.
That's a casual reductionist definition of censorship.
Community edited wiki != Censorship
Centrally administered information channels == Censorship
The state has equal power as citizens to make edits. Meanwhile the state has exclusive power to shut down network access, persecute people for publishing edits or providing access to censored information, chilling effects, etc. Pretty big difference.
That really depends on the community and the nature of their editing. What if Westboro Baptist Church members decided to do nothing but en-masse gradually edit their Wikipedia page? You would probably categorize their actions as an attempt at propaganda and censorship. (On a short term, of course.) It ultimately wouldn't work, since they are such a small group. However, if a large enough segment of the populace decided on a program like this, which also included an infiltration of the higher-level of Wikipedia's organization, such a program would probably work.
Content addressing means that all versions of the page remain available so long as one node hosts it, and they are easily differentiated.
On the internet 1000 voices don't speak 1000 times louder than one, it would be relatively easy for a small group to widely advertise that there is a discrepancy.
Effective propaganda would only be possible where any contradictions are unable to provide evidence, causing an appeal to authority to become one of the most viable means of resolving the dispute. This is where the majority - but not all - of our current propaganda sits.
IPFS would help. It wouldn't solve disinformation but it would tidy up the game.
Yes. A lie can run round the world before the truth has got its boots on.
For the second case, this is why I talk about the nature of effective propaganda. Evidence is the only reliable way to fight against an authority; since the larger group is significantly less able to censor evidence, they are forced to use unverifiable propaganda - which is much less effective for them.
This highlights an important failure of total laissez-faire: by restricting or distorting access to information, you can effect a nefarious result without violence or even protests. It's like slowly boiling a frog. And it works especially well in a democracy, where political outcomes are (at least in principle) determined by those who are most susceptible to information control.
"Fire in a crowded theater" comes to mind. In reality, the judge making that comparison later rescinded it, and the guy won his case (about distributing anti-war pamphlets) anyways.
I do not claim that it is only a problem with laissez-faire. Governments are of course capable of the same manipulation. I only claim that laissez-faire does not solve the problem either, and indeed may be worse, since the people believe they are free and are thus more complacent.
There are many articles on Wikipedia that have been deleted that I can no longer find. A few years back, some editors decided to do a huge porn star purge and removed the articles for many prominent adult actors/actresses due to notability.
IPFS with snapshots can help people go back to those moments in time and look at articles that are no longer around.
In certain times, people find that any system which allows exploitation, will be exploited. Furthermore, it will be exploited by groups of people who feel less powerful, led by demagogues who exploit their attention and outrage for their own benefit. Rules of etiquette and decorum which once protected such systems will be ignored by people who feel sufficiently "justified" in their outrage. Intellectual commons once supported by common values and a love of knowledge will become polluted.
The above describes Ancient Rome prior to the change of the Republic into the Empire. It also describes the world 1/6th of the way through the 21st century.
I'm not sure this thought makes sense, but just putting it out there for rebuttals and to understand what is really possible:
I assume IPFS networks can be disrupted by a state actor and only thing that a state actor like the US may have some trouble with is strong encryption. I assume it's also possible that quantum computers, if and when they materialize at scale, would defeat classical encryption.
So my point in putting forward these unverified assumptions is to question whether ANY technology can stand in the way of a determined, major-world-power-type state actor. Personally, I have no reason to believe that's realistic, and all these technologies are toys relative to the billions of dollars in funding that the spy agencies receive.
One important anti-censorship measure employed by IPFS is that it doesn't have to rely on the Internet backbone to exchange data. Well, it's not so much an active measure, it's part of the core design.
You can be in the same room with no internet connection whatsoever and still exchange data.
Network transports get added kinda regularly, so far there's TCP, UTP, Websockets, WebRTC, and a couple of work-in-progress transports: Onion, QUIC, FlyWeb
Probably not, but why make it cheap and easy to do so? Also, there are other bad actors with fewer resources and less determination than a nation-state.
Compare and contrast with content-neutral networks, for example; we all like 'Net neutrality, right? But IPFS isn't content-neutral. Or content-oblivious networks; we all like Tor and I2P, right? But IPFS isn't content-oblivious.
IPFS has all of the tools required for censorious node owners to choose to block content, and the protocol doesn't have any underlying mitigations. It's not hard to imagine, especially in places where there's a monopoly of ISPs, like across large parts of the USA and the Middle East, that IPFS nodes might be easily isolated and vulnerable to relatively straightforward censorship.
Nothing obligates you to use a node which censors. At the same time, a node you own can't unintentionally (as with Freenet) end up with illegal or undesirable content.
A tradeoff has to be made. Yes, I censor when I don't pin someone's porn and serve it off my node. At the same time, I'm not preventing anyone else from accessing that information.
Another comparison would be to Bittorrent. Given x.torrent, my refusal to share its contents does, in a sense, make me a censor. But my refusal (unless I'm the sole possessor of its contents) doesn't prevent anyone else from sharing it. So the final contents remain uncensored (in the whole).
Hell, even freenet "censors" in that infrequently accessed content will eventually stop being replicated within the network.
You didn't say "IPFS", which made me realize that your post is largely repeating the same points in favor of censorship on Usenet.
This is only increasing my confidence that networks which are aware of their underlying contents are inherently unable to effectively counter censorship, because individual nodes can always be pressured to drop content, and the Pareto principle guarantees that this censorship will be effective as long as the pressure is put on the most popular nodes.
I hear and am sympathetic to your point of view. Google takes a similar stance with email, Gmail, and spammers; consider, "Yes, Google censors when we don't relay someone's spam and serve it off our servers. At the same time, we're not preventing any other mail relays from forwarding that mail." (I'm not speaking for Google, merely making a rhetorical argument.) This is widely considered a good thing.
I am merely disappointed that IPFS, which has a lot of backing and is growing in popularity, may become both the dominant content-addressable distributed object system, and also remain lacking in terms of anonymity and availability.
(Also, while I am no fan of Freenet, you are conflating censorship with cache expiration. One is done by people and one is done by a content-oblivious algorithm. Tahoe-LAFS's garbage collection works in the same way and is also not censorship.)
In IPFS the user's computer connects to whatever node has the content, popular or not. This means that even if the 100 most popular nodes in the world have enough pressure put on them that they refuse to host a file, I can still host the file on my laptop and everyone with an internet connection will be able to access it.
Compare it with DNS with HTTP(s) where simple DNS blacklist and/or IP blacklist (DPI in some cases) is all you need.
In case of IPFS you can ship HDDs behind the firewall and distribute content internally via local internet, intranets, mesh networks or just by moving files around.
It's about as censorable as bittorrent with magnet links. Which is to say, if you are sharing something that someone with power finds really, really objectionable, they'll have no trouble bringing down the hammer. So yes, totally censorable.
Currently I'd advise against directly running IPFS over Tor or I2P, because it'll likely leak your IP addresses. OpenBazaar have successfully made it work as an onion service, and their work is going to be upstreamed into go-ipfs soon.
Censorship is suppression or prohibition of content. Putting wikipedia on IPFS makes it strongly resistant to many forms of censorship because it uses content-addressing. This means that suppressed content can be redistributed through alternate channels using the same cryptographically verifiable identifier. It also means that you have clarity about which version of the content you're viewing, so if some entity publishes a censored version of your content you have a way to distinguish between the two versions.
If you suppress it in one place, people can put it up somewhere else.
If you block one path, people can make the content available through another path.
If you modify it, people know that you modified it, have clear ways to distinguish between your copy and the unmodified copy, and can request the unmodified version without wondering which version they're getting.
If you destroy all the copies on the network, people can add new copies later and all of the existing links will still work.
Etc...
IPFS can't protect people from a government physically tracking down every copy of the censored content and destroying it -- that requires other efforts external to the protocol (ie. move copies outside their jurisdiction). It does, however, make it possible to move many copies of the content around the world, passing through many hands, serving it through a broad and growing range of paths, without the content losing integrity.
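The integrity property described above is easy to demonstrate. Here's a simplified stand-in in Python (real IPFS CIDs are multihash-encoded, not bare SHA-256 hex, so treat this as a sketch of the principle rather than the actual format):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Simplified stand-in for a CID: the address *is* a hash of
    the content, so a link names exactly one byte sequence."""
    return hashlib.sha256(data).hexdigest()

def fetch_verified(expected_address: str, untrusted_bytes: bytes) -> bytes:
    """Accept bytes from any peer, mirror, or sneakernet drive;
    reject them if they don't hash to the requested address."""
    if content_address(untrusted_bytes) != expected_address:
        raise ValueError("content does not match its address")
    return untrusted_bytes
```

Because verification depends only on the bytes and the link, it doesn't matter which path or which stranger the copy arrived through - a silently modified copy simply fails to resolve as the thing you asked for.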
Doesn't the scope of your definition also cover academic journals rejecting low-quality work? They are in essence censoring mere low quality, not even falsehoods.
An overly broad scope means that censorship loses its moral oomph.
When browsing the content, how does linking work? I mean, don't they kinda have to link to a hash? But how can they know the hash of a page when the links of that page depend on the other pages, and this may be circular?
It's just simple relative links. If you're in /ipfs/QmPage/wiki/Page.html, then a link to './OtherPage.html' will result in /ipfs/QmPage/wiki/OtherPage.html.
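That resolution is just ordinary path arithmetic, which is why no page ever needs to embed another page's hash. A small Python sketch of what the browser effectively does (illustrative only; the real logic lives in the browser's URL resolver):

```python
import posixpath

def resolve_link(current_page: str, href: str) -> str:
    """Resolve a relative href against the page it appears on.

    Every resolved path stays under the same immutable
    /ipfs/<root-hash>/ prefix, so intra-snapshot links never
    mention any hash but the root's.
    """
    base_dir = posixpath.dirname(current_page)
    return posixpath.normpath(posixpath.join(base_dir, href))
```

So only the single root hash identifies the whole snapshot; everything beneath it links relatively, which neatly sidesteps the circular-hash problem.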
Bingo. Then you have IPNS, which uses a public/private key system to broadcast updates to a mutable hash value. So you can take the IPNS public key for so-and-so's Wikipedia mirror, query the network for the latest hash signed by the corresponding private key, and use that hash to fetch content from IPFS.
Maybe a very dumb question, but why didn't they build anonymity into it rather than advise users to route it over Tor? My guess is it may have something to do with the Unix philosophy. It's still a great tool regardless.
Building anonymity into a tool is not trivial. Doing it right is very hard. Doing it right also has significant drawbacks in terms of performance for people who don't need anonymity. Tor is a great, well-studied and well-supported way for people to browse anonymously, so yeah, definitely a bit of the Unix philosophy in there. That said, we plan on building anonymity tooling into IPFS in the future; it's just non-trivial.
@whyrusleeping is right, and adding to that, there's a work-in-progress Onion transport which is already being used by OpenBazaar in their fork of go-ipfs.
The respective work on making IPFS's routing work safely in an anonymous use case will be upstreamed soon.
mDNS traffic is very easy to pick out and filter. Probably you're thinking of the DHT, but even then, you can block the primed node list or just ask any other client which IPs are on the network and block those.
This is not the average sysadmin we're talking about; it's a state actor with access to all local ISPs' resources.
Yes: DHT, mDNS (for local nets), and manual connections via `ipfs swarm connect`. You can get a peer's address through any other open means of communication, connect to them, and then, through them, get other peers' information. All it takes is one good node.
We wanted to switch to Let's Encrypt a long time ago; unfortunately, it isn't that simple when you have tens of domains and subdomains distributed among 5-10 servers.
StartCom and WoSign (StartCom is now owned by WoSign) certificates signed after a certain date are not trusted by some browsers by default, due to the back-dating issue last year.
Because of the same trust issues, some organisations have their OS and browser installations configured not to trust any certificate signed by StartCom or WoSign (or any certificate that chains to one of theirs).
Perhaps, but it also seems to be a test. IPFS allows the Turkish version of Wikipedia to be available even if there's no access to the internet backbone.
That's a game-changer for residents of repressive regimes; they just need a snapshot of the content they care about and can distribute it via ad hoc networks and even sneakernet, making it nearly impossible for the government to stop.
The bottleneck wasn't accessing the data once it had been archived, but archiving it in the first place.
Both source and capture bandwidth are finite resources. Several people with considerable infrastructure (a friend in Finland working at a major network provider is among the archivists) were supporting the effort. The limit is how quickly they can peel data off the source.
A few station wagons (or SUVs) full of SATA or Blu-ray drives might be even more useful.
As far as I know there are people interested in putting this data on IPFS; it hasn't happened yet, as right now the focus is to rescue, save and archive the data. Redistributing it comes after it is safe and sound.