BitTorrent Everything?
12 points by engxover on May 6, 2012 | 9 comments
Q: Could/should the internet be converted into a P2P network model? By this I mean individual web pages are seeded/"leeched" as needed, instead of downloaded from a single source for every individual browser. How would this be accomplished? Would this have any advantages or disadvantages?

A: I can't answer the question, but I'd like your help with it.

Firstly, forgive my lack of knowledge on this subject. I'm a mechanical engineer by trade, but I've recently seen the tides in my career opportunities shifting more and more toward the tech end of the spectrum. While I'm not too worried about my job right now, I'd be appalled if I became obsolete, which is why I'm currently putting myself through the pain of acquiring computing skills, just in case. So treat this as a teaching exercise: I'm just looking for knowledge, and hopefully someone here can enlighten me and perhaps point me in the right direction for more information. This seems like one of the more intelligent/mature places I've come across on the web, so I hope this question is as "deeply interesting" for you as it is for me.

In my line of work, if we have an excellent solution to a problem (in this case the problem is data transfer, and the solution, at least for large chunks of data, is P2P file sharing), we go to that solution first and see if it applies to the task at hand. Torrents seem to me like the fastest/best way to handle large files, so why couldn't they handle websites themselves?

Again, I'm quite ignorant of the inner workings of the protocols that actually run the web and organize the traffic inside it, so you'll have to forgive me if my questions betray that ignorance. Links to articles, research, textbooks, etc. related to my question would be greatly appreciated, and if you believe this is a question worthy of discussion, please add to that as well. Thank you.




I think BitTorrent works well for big chunks of static data. Web pages are made up of lots of tiny pieces of data that change often and differ from user to user.

Most sites these days use some sort of CDN, which probably gets the data to you more efficiently than a P2P network would.


About two years ago, before I started programming, I posted this on the WHATWG mailing list about a real use case where this would be financially useful in reducing costs:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-Jan...

Back then it wasn't really technically possible, but technologies like WebRTC may change things since they make browser-to-browser communication possible.
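To make that concrete, here's a rough sketch (nothing from the mailing-list post, and every detail is illustrative): two RTCPeerConnections wired together in the same page, with one handing a piece of a cached static page to the other over a data channel. A real system would need a signalling server and some way to discover peers.

    // Two in-page peers standing in for a "seeder" browser and a "leecher" browser.
    // In practice the offer/answer/ICE exchange would go through a signalling server.
    const seeder = new RTCPeerConnection();
    const leecher = new RTCPeerConnection();

    // Forward ICE candidates between the two local peers.
    seeder.onicecandidate = e => { if (e.candidate) leecher.addIceCandidate(e.candidate); };
    leecher.onicecandidate = e => { if (e.candidate) seeder.addIceCandidate(e.candidate); };

    // The seeder opens a channel and sends a piece of a cached static page once it's open.
    const channel = seeder.createDataChannel("page-pieces");
    channel.onopen = () => channel.send("<p>piece of a cached static page</p>");

    // The leecher receives the piece; before using it, it would verify a checksum
    // from a trusted source (see the hash discussion further down the thread).
    leecher.ondatachannel = e => {
      e.channel.onmessage = msg => console.log("received piece:", msg.data);
    };

    // Standard offer/answer dance, done entirely in-page for this sketch.
    async function connect() {
      const offer = await seeder.createOffer();
      await seeder.setLocalDescription(offer);
      await leecher.setRemoteDescription(offer);
      const answer = await leecher.createAnswer();
      await leecher.setLocalDescription(answer);
      await seeder.setRemoteDescription(answer);
    }
    connect();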

Check out btapp.js for an implementation of BitTorrent in the browser:

http://pwmckenna.github.com/btapp/docs/btapp.html

Besides that, you may want to look at this HTML5 FileSystem API polyfill: http://ericbidelman.tumblr.com/post/21649963613/idb-filesyst...

At the end of the day, something like this is only really going to be useful for file types that can't easily be modified to be malicious, e.g. CSS and images.

I know of a few people looking into stuff like this. Personally, I think it's only a matter of time before we see it applied to more things. At this point I don't really think it's a matter of the technology being available; it's more a matter of someone building it. After all, it took almost five years from the time the Mosaic web browser was introduced before blogging made an appearance in 1999.


I imagine there are many technical reasons why this is a bad idea. A few from the top of my head:

• Very few sites could allow their database to be distributed.

• Pages could be altered by nodes in the network for nefarious reasons.

• Anything with a session would probably be very hard work.

Shame really, as it'd make it harder for ISPs to shape traffic.


I'll admit I wasn't thinking of databases when I posed the question, and you're absolutely right about sessions. Anything with a log-in would be quite difficult, and out of my league to even imagine.

As for alteration of pages, what about hashes? Isn't the point of a hash sum to make sure the torrent's contents are correct? My thinking on how this process would work:

1. You browse the internet as usual, except with a browser add-on that checks links to see whether they're available in torrent form.

2. As you click on a link, your browser either loads the webpage in torrent form, or saves the site in torrent form for seeding.

3. More users = more sites in torrent form = faster browsing speeds.

Again, this wouldn't work for sites with log-ins, but for something static like Wikipedia, wouldn't it be incredibly helpful for cutting costs, reducing server load, etc.? Whenever a page is updated, Wikipedia simply uploads the new version with its new hash for browsers to check against, something like the sketch below.
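
Something like this is what I have in mind; I'm hand-waving here, and every name in the sketch (the lookup service, the swarm client) is made up just to make the steps concrete:

    // Hand-wavy sketch of the add-on. The lookup endpoint and swarm client are
    // hypothetical; they just stand in for "a torrent index" and "a torrent client".
    declare const swarm: {
      download(infoHash: string): Promise<string>;
      seed(infoHash: string, body: string): void;
    };

    async function loadPage(url: string): Promise<string> {
      // 1. Ask an index whether a torrent form of this URL exists.
      const lookup = await fetch("https://torrent-index.example/lookup?url=" + encodeURIComponent(url));
      if (!lookup.ok) {
        return (await fetch(url)).text();     // no torrent form: plain HTTP as usual
      }
      const { infoHash, pageHash } = await lookup.json();

      // 2. Fetch the page from peers, then check it against the hash the
      //    publisher (e.g. Wikipedia) uploaded alongside the latest version.
      const body = await swarm.download(infoHash);
      if (await sha256Hex(body) !== pageHash) {
        return (await fetch(url)).text();     // tampered copy: fall back to the origin
      }

      // 3. Keep seeding it so the next user gets it from peers, not the server.
      swarm.seed(infoHash, body);
      return body;
    }

    async function sha256Hex(text: string): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
      return Array.from(new Uint8Array(digest)).map(b => b.toString(16).padStart(2, "0")).join("");
    }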

I hope that clarifies my thinking. So is this a good idea at all? Or still fraught with downsides?


• Pages could be altered by nodes in the network for nefarious reasons.

Not if you use a BitTorrent model. You'd get checksums from a known source and discard any chunks you receive from nodes that don't match the expected checksum. Not impossible to spoof/alter, but exceedingly unlikely.
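
In code terms, the check is roughly this (a sketch using browser crypto APIs, not anything from a particular client): hash each piece that arrives from an untrusted peer and compare it against the piece hash taken from the trusted metainfo; anything that doesn't match gets thrown away and re-requested from another peer.

    // Sketch only: verify a piece received from an untrusted peer against the
    // SHA-1 piece hash taken from a trusted source (the .torrent metainfo).
    async function acceptPiece(piece: Uint8Array, expectedSha1Hex: string): Promise<boolean> {
      const digest = await crypto.subtle.digest("SHA-1", piece);
      const actualHex = Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, "0"))
        .join("");
      return actualHex === expectedSha1Hex;   // false => discard and request from another peer
    }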


Similar to our trust in the 13 DNS root servers, then.

I guess it all depends on who you trust.



There's also

https://gnunet.org/

I gather one difference is that GNUnet uses an economic model (like BitTorrent) to allocate resources, whereas Freenet does not.


It could, it should, and it's a really hard problem.

http://www.youtube.com/watch?v=8Z685OF-PS8

http://www.ccnx.org/



