Love the ZeroNet project! Been following them for a year and they've made great progress. One thing that's concerning is the use of Namecoin for registering domains.
Little known fact: a single miner has close to 65% or more of the mining power on Namecoin, as reported in this USENIX ATC'16 paper: https://www.usenix.org/node/196209. For this reason, some other projects have stopped using Namecoin.
I'm curious what the ZeroNet developers think about this issue and how their experience with Namecoin has been so far.
What is the point of Namecoin and having a central domain registrar at all?
It seems like a publisher-addressable network (where documents are identified by a publisher's public key) or a content-addressable network (where documents are identified by a file hash) would be good enough by itself, so long as the protocol had built-in support for distributed document searching and ranking.
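To make the distinction concrete, here's a toy Python sketch (not any particular project's scheme): a content address is just a hash of the document's bytes, while a publisher address is derived from a public key whose owner signs whatever they publish.

    import hashlib

    # Toy illustration, not any real project's addressing scheme.
    document = b"<html>hello distributed web</html>"

    # Content-addressable: the identifier is derived from the bytes themselves,
    # so anyone can verify they fetched the right document by re-hashing it.
    content_address = hashlib.sha256(document).hexdigest()

    # Publisher-addressable: the identifier is (derived from) a public key; the
    # publisher signs each document, so the address stays stable across updates.
    # Signature details omitted; any scheme (e.g. Ed25519) would do.
    publisher_pubkey = bytes.fromhex("ab" * 32)  # placeholder key
    publisher_address = hashlib.sha256(publisher_pubkey).hexdigest()

    print("content address:  ", content_address)
    print("publisher address:", publisher_address)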
Casual internet users on the regular internet do not seem to be using domain names to locate resources anymore. They are using Google to locate resources, and only looking at the domain name to verify identity. If the primary purpose of the domain name is not to locate a resource but to verify identity, then it seems like this could be accomplished with a web of trust without a central name registrar.
Also, if you ever lose control of a namecoin domain you can say goodbye to it forever. A squatter will take it instantly and hold on to it forever unless you buy it from them for actual money.
Has squatting gotten worse on Namecoin? Squatting is fairly hard to handle in decentralized naming systems in general. Namecoin got a lot of squatting issues mostly because of the pricing function (price of names dropped over x years, and now it's almost free to register names). Here is another paper from WEIS'15 that studied squatting in Namecoin: http://randomwalker.info/publications/namespaces.pdf
Depends on the top-level suffix. For instance, .fr (France) domains have a "no taking" period after the expiration date, during which nobody can take them from their previous owner. The owner can then take the domain back, but it won't be re-activated for a couple of weeks, I believe. So the punishment for screwing up is a temporary blackout of your domain name.
.com, .net, .org domains are handled differently, and may be easier to lose permanently.
Has the code quality improved since I was told to screw off for bringing up security?
* 2 years out of date gevent-websocket
* Year-old Python-RSA, which has had some worrying security bugs fixed in that time [0] (vulnerable to side-channel attacks on decryption and signing).
* PyElliptic is both out of date and actually an unmaintained library. But it's okay, it's just the OpenSSL wrapper!
* 2 years out of date Pybitcointools, missing a few bug fixes around confirming things are actually signed correctly.
* A year out of date pyasn1, the ASN.1 type library. Not as big a deal, but the missed releases cover some constraint verification bugs. [1]
* opensslVerify is actually up to date! That's new! And exciting!
* CoffeeScript is a few versions out of date: 1.10 vs the current 1.12, which includes moving away from methods deprecated in NodeJS, fixes for path handling under Windows, and compiler enhancements. Not as big a deal, but something that shouldn't be happening.
Then of course, we have the open issues that should be high on the security priority list, but don't get a lot of attention.
Like:
* Disable insecure SSL cryptos [3]
* Signing fail if Thumbs.db exist [4]
* ZeroNet fails to notice broken Tor hidden services connection [5]
* ZeroNet returns 500 server error when received truncated referrer [6] (XSS issues)
* port TorManager.py to python-stem [7], i.e. stop using out-of-date, unsupported libraries (rough stem sketch below)
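For reference, doing this with stem (the supported library) is only a few lines. A hedged sketch, assuming a local Tor control port on 9051 and a placeholder local port to forward to, not actual ZeroNet code:

    from stem.control import Controller

    # Connect to the local Tor control port and publish an ephemeral hidden
    # service that forwards onion port 80 to a local service (port assumed).
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()  # cookie authentication by default
        service = controller.create_ephemeral_hidden_service(
            {80: 43110}, await_publication=True
        )
        print("Serving at %s.onion" % service.service_id)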
I gave up investigating at this point. Doubtless there's more to find.
As long as:
a) The author(s) continue to use outdated, unsupported libraries by copying them directly into the git repository, rather than using any sort of package management;
b) The author(s) continue to simply pass security problems on to the end user;
... ZeroNet is unfit for use.
As simple as that.
People have tried to help. I tried to help before the project got as expansive as it is.
But then, and now, there is little or no interest in actually fixing the problems.
ZeroNet is an interesting idea, implemented poorly.
That's a pretty deep and well thought out security audit. Are they at least making progress? For a lot of open source projects that are labours of love, it's all about getting the time and funding to work on them.
That was OpenSSL's attitude. It resulted in harm to many more users who would've been better off with something else or with its own developers actually trying to prevent security vulnerabilities. A project advertising something to be "uncensorable" based on "crypto" or whatever should be baking security in from the start everywhere it goes. Or it's just a fraud.
Let me quote one of the ZeroNet team members when questioned about potential hacking.
> I wasn't aware of any hackers. The only problem I have since I have been running ZeroNet for a year, is the minor problem of file size mismatch, simply because not all peers in the network have the latest version of a file.
At best, that's an unhelpful attitude. It leads to things like: [0]
Nobody is going to use ZeroNet in the first place if it's not secure. "Users before security" makes no sense at all if the product you're selling is security.
We talked it over, decided I would do the test suite.
I started, found the bad practices, and showed how I could turn it into a fully automated system that new versions could be tested against and that, if the tests passed, could output binaries for every system.
The response was, 'No don't do that. I like doing it manually. Means I can check for breakage.'
Followed by my PRs and issues being closed, and my emails bouncing.
We need more projects like these. Whether this particular project solves the problem of a truly distributed Internet* is beside the point. What we need is a movement, a big cognitive investment towards solving the Big Brother problem.
*I am referring to the concentrated power of the big players here, country-wide firewalls, and bureaucracy around how and what we use.
We need multiple internets, a big confusion; governments can't handle confusion, but if everything is standardized on Facebook and WhatsApp it's easy for them.
Well. Look, even if you have multiple internets, decentralized everything, distributed all systems, no more Google, no more Facebook: what do the communication patterns in such a system look like? Do you use the system after work, before going to sleep? Your usage patterns, and everyone else's, can be analyzed from the traffic. Many of the endpoints would be honeypots run by spooks, revealing even more of what you are up to and giving you a false sense of safety, while the spooks could run the entire decentralized inter-network.
So your system would have to fake it somehow: fake requests for some hashes here and there, fake posting of comments and follows. Otherwise, the social data available once the spooks join your social network, even if it is distributed like Patchwork on Scuttlebot, defeats its purpose.
That is what Bitmessage does, but then you pay in high bandwidth costs. And you can't just do random shit; randomness can easily be filtered out, so you need a more advanced method of generating fake social relations and using those to produce fake data that actually conceals what you and everyone else are doing on the interweb.
EDIT: I'm not saying "give up"; it's a very worthy cause. It's just that the problem is harder and enters the social space quite fast. The problem is the same as "we are all nice developers and hackers", yet 99% seem to be employed by the NSA/similar services/Google and think they are doing great James Bond-type jobs, while they are actually anti-hacker and anti-developer, in fact anti-society.
ZeroNet is created for fast, dynamic websites; IPFS is more like a storage solution.
It's already possible for them to cooperate, e.g. IPFS for big, static files and ZeroNet for dynamic user content.
The project looks very promising but relies on running a lot of javascript from untraceable sources in the browser.
Given the long history of vulnerabilities in browsers, trusting JS from a well-known website might be OK; trusting JS from ZeroNet is unreasonable.
If ZeroNet could run with JS generated only by the local daemon, or without JS at all, it would be brilliant.
Chrome added a feature a long while back I really wanted for ages. The ability to specify the checksum of a linked asset, so that it can be verified as it's downloaded (and untrusted/discarded if not). I just can't find the docs for it. :( My Google-fu is not strong.
It's kind of a shame they didn't let their imagination fly with that one... I wish integrity were a global attribute, because I could totally see using it for things like images and audio/video.
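For what it's worth, that feature shipped as Subresource Integrity: the integrity attribute on script/link tags carries a base64-encoded digest that the browser checks before executing the asset. Generating the value is trivial; a small Python sketch, assuming a local file app.js:

    import base64
    import hashlib

    # Compute a Subresource Integrity value for a local asset (app.js assumed).
    with open("app.js", "rb") as f:
        digest = hashlib.sha384(f.read()).digest()

    integrity = "sha384-" + base64.b64encode(digest).decode()
    print(integrity)
    # The value goes into the markup, e.g.:
    # <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>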
This is why native clients (real native clients, not browsers-in-cans) are so important: they enable one to be more secure against targeted attacks, and they enable many eyes to review code and hence make one more secure against untargeted attacks.
Frankly, given much of the history of successful Internet tools & protocols, I'd love to see some text-UI clients for ZeroNet.
They have different goals. Freenet is about total anonymity. In the Freenet world, everyone helps serve small pieces of all the data, yet no one knows what data is in the portion stored on their local drive. Things like JavaScript are also disabled on Freenet.
ZeroNet uses the torrent protocol and serves up the content you've chosen to view. You know what you're serving.
JavaScript is disabled through the Freenet proxy but it's possible to use the Freenet API to access the raw data and have a proxy that doesn't disable JavaScript. It would be possible to do something very ZeroNet like but with Freenet as the backend.
ZeroNet doesn't really use the torrent protocol. It has its own file sharing service that it runs to receive requests from other users for files. It uses the torrent trackers to map site addresses to IP or onion address.
You know what you're serving initially but the site author can add any files they choose and you'll start serving them if you've visited the site and are seeding it. You have no control over malicious sites that decide to store arbitrary data.
As much as I dislike Java (ML outclasses it as a language), that's probably much better than the obvious alternatives (C and C++): there aren't nearly as many undefined behaviours, and that eliminates a whole class of potential security vulnerabilities.
If it were written now, Rust could be a viable alternative: just as safe, potentially faster, with fewer dependencies (at least as far as the compiled binaries are concerned).
>ML outclasses it
you people are why nobody will take Haskell or OCaml seriously.
As a developer who works primarily with Haskell: it is not a panacea. Stop being so snobbish.
I did not say ML was a panacea. I said it was better than Java. As a language, to be more precise: I disregarded tooling, libraries, and community.
Now, as a language, I maintain that ML is better than Java on pretty much every count. It has sum types (algebraic data types), a safer type system (without null), better generics (that weren't bolted on after the fact), a fine module system, easier lambdas… OCaml in particular even has a class system, though it is almost never used: ML hardly ever needs inheritance, so I count that as a negligible disadvantage. And of course, polymorphism (the OO kind) is easily obtained with modules or plain higher-order functions.
Yes, yes, Java has an enormous community, loads of tools, and so many libraries that whatever you want to do has probably already been done. Yes, yes, it means that many projects would be cheaper and faster to write in Java, thanks to those libraries, communities, and plain available programmers. The JVM itself is pretty amazing, with a tunable garbage collector, and very fast execution once the JIT has warmed up.
While important, none of those advantages come from the language. They come from Sun, the staggering amount of working Java devs, and the heap of work they already accomplished. Network effects, mostly. A similar comparison could be made between C++ and Rust, though I don't know Rust well enough to have an actual opinion.
---
Also, "you people" should also admit that a language can be better than another, even if it's only Java8 vs Java7, or C++11 vs C++98. You should also realise that it's important to distinguish the language from the rest (community, tooling, libraries). If you don't, the older language always wins! Of course you wouldn't start a sizeable project in Rust in a C++ shop. Throwing away all the internal libraries and framework, all the training and painstakingly acquired experience? Of course not.
Still, one must not lose sight of the long-term picture. And that picture is provided by the language, because ultimately everything stems from the syntax and semantics of the language.
I think he was using the word "sane" loosely to mean common people, not a niche group. I agree though that it's hypocritical to label some sexualities as a disease or insanity but object to doing the same for other minority ones.
> Anonymity: Full Tor network support with .onion hidden services instead of ipv4 addresses
How does this track with the Tor Project's advice to avoid using BitTorrent over Tor [1]? I can imagine that a savvy project is developed with awareness of what the problems are and works around them, but I don't see it addressed.
Tor Project doesn't like people pushing HD video through its relays, because that degrades performance for other users. Torrent clients are very good at saturating links.
This project is about hosting generally. But if it were used for HD video streaming, Tor Project would be just as unhappy.
Yeah, but you can deal with that, if you know what you're doing. If you use Whonix, or roll your own Tor gateway, leaks around Tor aren't an issue. UDP is the hardest thing to deal with. I mean, with proper Tor/userland isolation, leaks don't happen. So all UDP just gets dropped. If you want UDP, you need to use OnionCat or tunnel a VPN through Tor.
ZeroNet doesn't use the torrent protocol for distributing files. It uses its own TCP service for that, so it avoids the issues of tunnelling UDP over TCP. Its use of "bittorrent" technology is limited to the protocol for mapping ZeroNet site addresses to IP/onion addresses.
So will ZeroNet map addresses to the imminent Tor onion addresses, which are much longer? That change will screw OnionCat, sadly enough.
Also, I wonder if MPTCP would play nice with ZeroNet. MPTCP works very well with OnionCat. I could create TCP streams with hundreds of subflows over all possible combinations of multiple OnionCat addresses.
I'm also suspicious, since they say that your blockchain address is used for authentication - couldn't colluding websites track your public key and use it to track you between websites?
Seems like that's only for publishing new content, not for merely browsing.
Though I guess unless you create a new identity for every site you want to post a comment on, your comments on one site could be proven to be posted by the same person as your comments on another site.
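One mitigation, purely as an illustration (this is not how ZeroNet actually handles identities), is to derive an unlinkable keypair per site from a single master secret:

    import hashlib
    import hmac

    def per_site_secret(master_seed: bytes, site_address: str) -> bytes:
        """Derive an unlinkable 32-byte secret for one site from a master seed."""
        return hmac.new(master_seed, site_address.encode(), hashlib.sha256).digest()

    # Hypothetical usage: each derived secret would seed a separate signing key
    # (e.g. a secp256k1 private key via a library such as coincurve), so the
    # public key you comment with on site A can't be correlated with site B.
    master = hashlib.sha256(b"my wallet seed phrase").digest()  # placeholder seed
    key_a = per_site_secret(master, "1SiteAddressA")
    key_b = per_site_secret(master, "1SiteAddressB")
    assert key_a != key_b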
ZeroNet doesn't use the torrent protocol for distributing files. It uses its own file service that is exposed via a port to receive file requests and send files. It uses torrent trackers for mapping ZeroNet site addresses to IP or Onion addresses.
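For the curious, the tracker side is just an ordinary BitTorrent HTTP announce with a 20-byte key standing in for the usual info_hash. The key derivation and tracker URL below are assumptions for illustration, not lifted from ZeroNet's source:

    import hashlib
    import urllib.parse
    import urllib.request

    import bencodepy  # third-party bencode parser for the tracker response

    site_address = "1HelloSiteAddress"  # hypothetical ZeroNet site address
    info_hash = hashlib.sha1(site_address.encode()).digest()  # 20-byte key (assumed)

    params = {
        "info_hash": info_hash,
        "peer_id": b"-ZN0001-123456789012",  # arbitrary 20-byte peer id
        "port": 15441,  # the port the local file service listens on
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
        "compact": 1,
    }
    url = "http://tracker.example.org/announce?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        peers = bencodepy.decode(resp.read()).get(b"peers", b"")
    # 'peers' is a compact blob of (IP, port) pairs you then connect to directly.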
Neither argument has been tested, but the defense would be that you were acting as an ISP with dumb pipes.
Which logically leads to an unrelated question -- if ISPs are doing DPI on every packet, they at least theoretically 'know' whether you're transmitting 'illegal' content. If I were a rights holder, I'd be making that argument against ISPs. I don't know how I'd sleep at night, maybe, but I wouldn't let ISPs have their cake (valuable user data) and eat it too (immunity based purely on their status as an ISP).
It's been tested for Freenet. LEA adversaries can participate, and identify peers. Judges issue subpoenas. Many defendants have accepted plea bargains. Plausible deniability doesn't work. What works is using Tor.
Yes, the FBI exploited a Firefox vulnerability to drop NIT malware on Playpen users. And said malware phoned home to FBI servers, bypassing Tor.
However, any Whonix users would not have been affected, for two reasons. One, this was Windows malware, and Whonix is based on Debian. Two, Whonix comprises a pair of Debian VMs, a Tor-gateway VM and a workstation VM. Even if the malware had pwned the workstation VM, there is no route to the Internet except through Tor.
Wait, did they reveal how their exploit worked? I thought they had already dropped two cases rather than reveal the internals of the NIT? Like Tor Browser could still be unpatched for this?
Right, they didn't reveal the Firefox bug or the details of the NIT. And yes, Tor Browser could still be vulnerable.
You must isolate the Tor process and userland in separate VMs, or even on separate physical devices. Even if the browser gets pwned and the NIT gets dropped, you'll be OK, because the Internet is reachable only through Tor. Whonix is an easy-to-use implementation.
I've been ragging on Tor Project about this for years. But they don't want to frighten people by making Tor too complicated to use. You could be cynical, and say that they want the cannon fodder for their government masters. Or you could say that they think it's more important to protect the most people, rather than to most strongly protect technically competent people. I have no clue what the truth is. Maybe there's a range of opinion.
If Tor is too difficult to use, people won't use it. Edward Snowden and Laura Poitras had to dedicate a significant amount of time to get Glenn Greenwald to just use TAILS, a plug-and-play Tor operating system. Someone like that is not going to use Whonix, even if maybe they should.
Yeah, I get that. And I realize that I've gone off the deep end. It's hard to imagine anymore how easily people's eyes glaze over. I've written guides that lay everything out, step by step. And many people still can't seem to get it.
But Whonix really is trivial. You install VirtualBox. You download the Whonix gateway and workstation appliances. You import them in VirtualBox. You start them. You work in the workstation VM. There's nothing to configure. That literally should be enough information to use Whonix. Plus there's a wiki and a support forum.
The workstation VM has no route to the home router except through the Tor gateway VM. With Whonix, the gateway VM isn't even a NAT router. Plus there are iptables rules that block everything except Tor. The gateway VM only exposes Tor SocksPorts to the workstation VM. You'd need to break the network stack in the gateway VM in order to bypass Tor.
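From the workstation's point of view, applications simply speak SOCKS to the gateway. A minimal sketch with PySocks, assuming Whonix's default internal gateway address 10.152.152.10 and a SocksPort of 9050:

    import socks  # PySocks

    # All workstation traffic has to go via the gateway's Tor SocksPort;
    # a plain socket to anything else simply has no route out.
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "10.152.152.10", 9050)  # assumed Whonix defaults
    s.connect(("check.torproject.org", 80))
    s.sendall(b"GET / HTTP/1.1\r\nHost: check.torproject.org\r\nConnection: close\r\n\r\n")
    print(s.recv(4096).decode(errors="replace"))
    s.close()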
Right, so can't I just add one then? In most VM setups I might have a default route to the other VM running Tor, but I can still talk to e.g. 192.168.0.1 even if I'm not putting traffic through it.
Is this some kind of VM-specific virtual network which can't talk on the real LAN? Is that implemented in the hypervisor?
Yes, it depends on VirtualBox. But there are versions for KVM, and for Qubes. More of a nonstarter, though. Or you can even use physical devices, such as a Raspberry Pi or Banana Pi.
Years ago, I created a LiveDVD with VirtualBox plus Whonix gateway and workstation VMs. I had to hack at both Whonix VMs to reduce size and RAM requirements. But I got a LiveDVD that would run with 8GB RAM. It took maybe 20 minutes to boot, but was quite responsive.
In theory breaking properly-configured Whonix would require a VM escape, pretty much the holy grail of exploits (a few have happened recently). The alternative is a complete break of Tor, which has proven unlikely.
I read up on this a while ago, but didn't keep links. There was some discussion on /r/Freenet. For example: https://www.reddit.com/r/Freenet/comments/5tnx81/freenet_use... Missouri police developed a custom Freenet client that logged everything. But I don't remember the name :(
Just like an ISP has to take your request and transmit it and the response. No difference in theory. In practice, I would worry whether courts would ignore theory.
Yeah, it's not the most documented repository. In the absence of that, you can check out the following document and implementation for some better understanding:
I quit giving TED my clicks long ago. They occasionally have some good talks, but many more that are pseudoscience garbage. Don't even get me started on TEDx. I hope ZeroNet finds a better stage for their talk. Perhaps an organizer could contact them.
There is a single point of failure: kill the tracker and you kill the whole network.
You can get from the tracker all the IPs that are visiting a certain site, so it's not so secure if someone is not using Tor.
I don't see how this could decentralize web applications though. Wouldn't each client have to be running the server software? Someone has to pay for that, too.
Yeah, every client has to run the software, or you use a proxy. If you have a site that has spread to many peers you don't need a running instance of your own. But if you have an unknown site you would have to run a little server permanently.
If you have previously visited a page then the response time will be limited by your computer's ability to locate and open the correct HTML document.
If you haven't previously visited a page then the response time will be limited by how many peers are available, and then by your connection speed.
Several years ago I had Tor running on a server at home. It was a regular Tor node, not an exit node. Later I was put on a blacklist because of this. What is the risk of using this?
Presumably you only download the site you want when you visit it. If that's the case then can you view revisions of the web sites or do you only have the current copy?
If you click on "How does it work?" you get redirected to a short and sweet presentation[0]. According to the presentation, when you, as the site owner, push an update, content.json gets updated, the peers get a notification (using the WebSocket API) that a new content is available, and then they download the new version of content.json that contains the sitemap of the updated version of the website. Cleverly thought out!
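To make that concrete, here is a rough sketch of the verification step a client performs against an updated content.json. The field names ("files", "sha512", "size") follow ZeroNet's published format as I understand it; the real client may truncate the digest and also checks the Bitcoin-style signature over content.json itself, which is omitted here:

    import hashlib
    import json

    def verify_site_files(content_json_bytes: bytes, fetch_file) -> list:
        """Return the files listed in content.json that fail their hash check.

        `fetch_file` is any callable returning a file's bytes for a relative
        path (e.g. something that requests it from a peer). Sketch only.
        """
        content = json.loads(content_json_bytes)
        bad = []
        for rel_path, meta in content.get("files", {}).items():
            data = fetch_file(rel_path)
            digest = hashlib.sha512(data).hexdigest()  # real client may truncate this
            if len(data) != meta["size"] or not digest.startswith(meta["sha512"]):
                bad.append(rel_path)
        return bad  # mismatching files get re-fetched from another peer or rejected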
IPFS is more low-level, in the sense that IPFS is a protocol (in reality a collection of protocols) for P2P data transfer. Together with IPLD, you get a full suite of protocols and data structures for creating fully distributed P2P applications.
ZeroNet is an application for distributing P2P applications, using BitTorrent for the P2P layer. In theory, ZeroNet could leverage IPFS to get a better and more modular stack for the actual connectivity and transferring.
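As a concrete point of comparison, talking to a local go-ipfs daemon is just an HTTP API call. A small sketch, assuming the default API port 5001 and the requests library:

    import requests

    API = "http://127.0.0.1:5001/api/v0"

    # Add a file to the local IPFS node; the daemon returns its content hash.
    with open("hello.txt", "rb") as f:
        added = requests.post(API + "/add", files={"file": f}).json()
    cid = added["Hash"]

    # Anyone can now fetch the same bytes by hash from whichever peer has them.
    data = requests.post(API + "/cat", params={"arg": cid}).content
    print(cid, data)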
The ZeroNet client is itself hosted on the ZeroNet network, so you can keep it updated without touching the outside net. The update site is owned by ZeroNet's dev, nofish, and is kept in sync with the GitHub project. The site is: http://127.0.0.1:43110/zeroupdate.bit
No comment about ZeroNet itself, but am I alone in the opinion that this website takes grid layout too far? It looks outright cluttered and overloaded.
It might be possible with the DHT alone, but I think for what the grandparent poster wants it would depend on the ability to query the DHT. Both in general and by popularity and insert date.
That might be possible, but given the prevalence of magnet links rather than everyone using that, I just assumed not.
"I thought it was pretty easy to disrupt / censor torrents, hasn't that been going on for a while?"
Not torrents themselves, only torrent search engines. Torrents are distributed by design, but traditional torrent directories/aggregators/search engines are centralized, thus easy targets for DMCA takedowns, ISP blocks, trials, etc.
Yup, torrent search engines are the weak link when it comes to protecting the public's access to arbitrary large files, and also the front lines in the battle between the media industries and an uncensored internet.
ZeroNet is perhaps not enough on its own to solve this problem, though, since a good torrent search engine suffers from the same limitation as a good forum, which is the need to have some form of community-based moderation. If people can't remove spam search results, and spam comments, then the medium can be too easily exploited (using Sybil attacks, etc.) and become useless.
The missing piece which is holding back so many decentralised technology projects is a lack of a decentralised trust platform. A necessary step towards this would be a decentralised (and privacy-preserving) identity platform, which would have the added benefit of removing the "Log in with Facebook/Google" problem from the web.
We're moving forward as always. The latest features include distributed pubsub, filestore (allows you to add files without duplicating them) and interop between the browser and desktop nodes. Any specific part you're looking at?
1) What's the status of (supported as a real feature, not just manually changing the bootstrap nodes and hoping everyone else does too) private IPFS networks? If it's there already, how stable is its configuration (i.e. if I get my friends on a private IPFS network will I likely have to get them all to update a bunch of config in 6 months or a year)?
2) Does filestore also let you store, say, newly pinned files in your regular file tree? That is, can you pin a hash for a file (or tree) you don't already have and provide an ordinary file system location where it should go when it's downloaded? Or do you have to copy it out of IPFS' normal repo manually, then re-add it in the new location? Also: how does filestore behave if files are moved/deleted?
3) What rate of repo changes requiring upgrades can we expect for the future? That is, how stable is the current repo structure expected to be? Is the upgrade process expected to improve and/or become automated any time soon?
4) Is there a table of typical resource requirements somewhere? I'm looking for "if you want to host 10TB and a few tens of thousands of files, you need a machine with X GB of memory. If you want to host 500MB, you only need Y GB of memory. If you have 2TB but it's in many, many small files, you need Z GB of memory", or else a formula for arriving at a best guess for that. For that matter, how predictable is that at this point?
The use case I've been excited to use IPFS for since I found out about it is a private, distributed filesystem for my friends and family. Easy automated distributed backups/integrity checking on multiple operating systems, access your files at someone else's house easily, that sort of thing. Filestore finally landed, which was a big piece of the puzzle (the files have to remain accessible to ordinary tools and programs or I'll never get buy-in from anyone else), so that's exciting. Now I'm just waiting for docs to improve (so I'm not searching through issue notes to learn which features exist and how to use them) and for a sense that it's stable enough that I won't be fixing brokenness on everyone's nodes several times a year.
A little known fact: the Namecoin blockchain's cost-adjusted hashrate [1] is the third highest in the world, after Bitcoin and Ethereum, making it unusually secure given its relative obscurity (e.g. its market capitalisation is only $10 million).
[1] Hashrates can't be compared directly, because different hashing algorithms have different costs for producing a hash.
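In other words, the comparison weights each network's raw hashrate by an estimate of what one hash of that algorithm costs to produce. The numbers below are made-up placeholders purely to show the arithmetic, not real measurements:

    # Made-up placeholder figures, NOT real measurements.
    networks = {
        # name: (raw hashrate in hashes/s, estimated cost per hash in USD)
        "chain_a": (1e18, 1e-16),
        "chain_b": (5e14, 5e-13),
    }

    for name, (hashrate, cost_per_hash) in networks.items():
        # Cost-adjusted hashrate: roughly what an attacker would have to spend
        # per second to match the honest network's proof of work.
        print(name, hashrate * cost_per_hash, "USD/s equivalent")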
Namecoin has a number of innovations. It was the first "alt-coin" fork of Bitcoin, and it pioneered the technique of merged mining, where a miner can do proof of work on both the Bitcoin chain and the Namecoin chain simultaneously. A lot of mining pools implemented merged mining. Even though the alt-coin space has become much more crowded and noisy, Namecoin retains that early hashing advantage. It's a very secure chain.
IIRC, that's one mining pool, not one miner. The power of mining pools is relatively limited. If the workers see that the pool is attacking Namecoin and devaluing their NMC (not to mention ruining a cool project), they're liable to switch to a different pool.