Introducing Tent - the decentralized social web (tent.io)
469 points by Titanous on Aug 22, 2012 | 221 comments



My first thought was "Well, Diaspora with another name". But after reading and understanding it (it's not very well explained there), it seems like a much more abstract thing. They want to build an abstraction layer for the social web, not a social network.

The idea seems pretty good. It's just the basics: you follow people and receive their content (text, images, whatever), and people follow you and you share content with them. The most amazing thing is that, just using these simple concepts, the possibilities are infinite. As they said, every social network out there can be implemented this way. Twitter? It's trivial: just write the format description and you're ready. Facebook? The only thing you would need to specify is that the relationships are symmetric (you can follow me only if I decide to follow you too, that is, we are mutual friends).

To me, the idea seems absolutely great. The problem will be execution: what apps get built using this protocol. I also wonder whether apps will be interoperable. Example: I build a Twitter-like app named Foo and another guy builds another Twitter-like app named Bar. Both use similar formats, so can a user on Foo see the content posted with Bar? I imagine this will be possible as long as they share the same post format, but I'm not sure.

Anyway, good work. I would really like to see Tent expand and grow.


I got four words into that site, found "protocol", and knew someone was finally doing this right. They have defined the protocol and are working on an implementation. That made it immediately obvious that they were already miles ahead of Diaspora.

Good luck to them!


We are putting together a wiki-style solution for community-curated post types. Common types are expected to be standardized so that apps are interoperable. For example status/microblog would be a standardized post type. We will launch with common types specified, so that everyone doesn't reinvent the wheel. After that, community managers will help keep the devs in line.
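To make the interoperability point concrete, here is a minimal Python sketch of publishing such a standardized status post; the type URI, endpoint path, and field names are illustrative assumptions, not the final schema:

    # Sketch of posting a standardized "status" type. The type URI,
    # endpoint path, and fields are assumptions for illustration; the
    # real schema comes from the community-curated post-type registry.
    import json
    import requests  # third-party HTTP library

    status_post = {
        "type": "https://tent.io/types/post/status/v0.1.0",  # assumed type URI
        "content": {"text": "Hello from a Tent app!"},
        "permissions": {"public": True},
    }

    # Any app that understands this post type can render it,
    # regardless of which app created it.
    resp = requests.post(
        "https://example.entity.tld/tent/posts",  # hypothetical endpoint
        data=json.dumps(status_post),
        headers={"Content-Type": "application/json"},
    )
    print(resp.status_code)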


One issue I can see is that you only need one friend who uses a third-party tentd with questionable privacy policies, and now everyone they're connected to has their data exposed. Would it be possible to implement it such that you can say 'yes, you can have this data, but only if your server is not run by anyone on my blacklist'? (You would have to assume the requesting server doesn't lie about its identity.)

On the positive side, I can see this catching on because it's easy for existing social networks to integrate; it could almost become do-or-die forevermore if a network that does integrate catches on.


Maybe this could be done with an IP/DNS blacklist on your API side. So you maintain a list on your "exit node" so that everyone who requests your data needs to be "a good guy". If this part were done by the requester it could be spoofed easily, so it has to happen at the exit point.
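A minimal sketch of such an exit-point check in Python; the function and list names are hypothetical, not part of any real API:

    # Sketch of a server-side ("exit node") blacklist check, assuming
    # a hypothetical handler where we see the requester's IP. All names
    # here are illustrative.
    import socket

    BLACKLISTED_HOSTS = {"tracker.example.net"}
    BLACKLISTED_IPS = {"203.0.113.7"}

    def requester_allowed(remote_ip):
        if remote_ip in BLACKLISTED_IPS:
            return False
        try:
            # Reverse-resolve and forward-confirm, so a spoofed PTR
            # record alone isn't enough to dodge the hostname list.
            host, _, _ = socket.gethostbyaddr(remote_ip)
            if host in BLACKLISTED_HOSTS:
                return False
            if remote_ip not in socket.gethostbyname_ex(host)[2]:
                return False  # PTR doesn't match forward lookup
        except (socket.herror, socket.gaierror):
            pass  # no reverse DNS; fall back to the IP check only
        return True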


Maybe you should get in touch with the OpenSocial movement: http://www.opensocial.org/

Combine forces.


It sounds like a very nice idea. I'm putting together a "multi-protocol social client". I'd love to be able to use Tent. Am I correct in assuming there isn't currently a reference implementation?


There is currently no reference implementation, but it will be on Github within a week or so.


You might want to have a look at federated wiki for this:

http://wardcunningham.github.com/


Isn't this something Activity Streams could be very useful for? Activity Streams is also JSON-friendly.


I feel like the success of this protocol will in large part depend on the specific implementations of the various Tent service providers. I see two important (out of many possible) cases here: either one service provider nails the front-end implementation (e.g. UX) and becomes the dominant Tent service provider (bad case), or we see users setting up their own Tent servers for themselves (good case, truly decentralized). However, the latter might only be possible with some fee incurred on users for the setup.

Definitely a good idea, and if the setup for a Tent server isn't too cumbersome I'll make sure to set one up for testing.


I had emailed the Diaspora guys back in the day, before Diaspora was released, begging them to create exactly this, an encrypted, secure social framework and associated protocol, so people could easily build their own social apps on it.

They never replied, I guess they were well out of their depth as it was...


Inter-app protocols will emerge spontaneously.


A few years back, I started getting interested in a similar idea. I'm definitely intrigued to see how well it goes. Good luck to you.

Of course, as ideas go, I allowed myself to think further into the possibilities, and found some interesting avenues.

For instance, why allow the Facebooks, Twitters, etc. to own our content? Let people store their own data, and offer API endpoints giving Facebook, Twitter, etc. access. They essentially become front ends and search engines for our shared content. We get control of our own data (and the privacy therein); they get to provide an interface to that data in a way that fits what they're trying to offer their "customers".

And then if you take that even further, why allow anyone control over your data? Why not store all my purchase data and credit info on my own servers, and allow authorized companies access as needed? Census time? A popup shows up on my phone asking if I'd like to give the government access to some of my data for the census; I pick what data is allowed, and it's done.

Electric company's system automatically logs in to get my electric usage. Phone provider does the same. Publishing a book literally allows access by readers to your own servers. Releasing an album - same deal. We still have "stores", but those stores are merely search engines offering a service to both the content creators and consumers.

It went further, and weirder (in interesting ways). I'm not sure such a system would truly be beneficial, but I love the idea of allowing people to Truly Own their own data.

Apologies for the tangent. Good luck to you. I'm a fan of the idea as it's presented and I hope you're successful.


I have also thought about this idea that users of a social network could keep all their data and just let different clients (Facebook, Twitter) access it. I think it would be a great idea!

If I want to change my email address? Change it in one place. If I want to let my friends know something? Update it in one place. It's very DRY you know?

But then I think, why even let Facebook use my data? If we have these data stores let's build a peer to peer network to let trusted friends access our information. In real life, if I want to tell my friend something I don't pass it through a 3rd party first, I tell them directly! This can let us have really fine grained control over who we share with by authenticating the users that request the information. And with the authentication can come encryption.

This move to the cloud is, frankly, annoying. Why should we have to trust our data to all these people we don't know? They pretend that they give us "free" service, while actually using that data for profit. Good UI shouldn't have to come at such a cost.

I have been thinking about this for a while, if there are other people interested in building such a web I would love to know.


I've been thinking about this for a while too, and I agree that the peer-to-peer element is essential. So far, it appears that all the would-be open social media apps/protocols/whatever rely on you having your own server. That, frankly, is a deadly failing. Per Metcalfe's Law, social networks are only valuable when they are maximally inclusive, and the requirement to have your own server is something that will always exclude the majority of (non-technical) users. The need for easy non-technical access means that there are only two viable paths for any social media application. Either:

1.) Go to web page and use application, or 2.) Download app and use application

This is how Facebook / Twitter / etc. do it, and they're clearly doing it right. But there's no law which says that the application needs to keep its data on a third-party server. Why not have it run a peer-to-peer service?

In fact, I think that peer-to-peer is especially suitable for social networking, given that social networks are typically highly clustered (e.g., people who are my friends are much more likely to be each other's friends as well). This means that if every client application is able to redistribute copies of friends' status updates as well as its own, then a sync between any given pair of clients is likely to bring a large subset of their friends' statuses up to date.

Of course, if you're rebroadcasting your friends' status updates as a P2P host, then the question of spoofing/authentication becomes paramount. Obviously that would have to be solved by transmitting all status updates in encrypted form and signing them, so that recipients can authenticate each update against the sender's public key.
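A sketch of the signing half, using the PyNaCl library (an assumption for illustration; any Ed25519 implementation would do):

    # Sketch of signing status updates so P2P rebroadcasts can't be
    # spoofed. PyNaCl is an assumed choice, not prescribed by anyone here.
    from nacl.signing import SigningKey

    # Each user generates a keypair once; the verify key is published.
    signing_key = SigningKey.generate()
    verify_key = signing_key.verify_key

    # The author signs each update before handing it to peers.
    signed_update = signing_key.sign(b"Status: having coffee")

    # Any peer that relayed the update can be checked against the
    # author's public verify key; forgery raises BadSignatureError.
    original = verify_key.verify(signed_update)
    assert original == b"Status: having coffee"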

Oh, and once you've got dual-key authentication embedded in widely-used social trust networks, you could probably solve a lot of other authentication problems while you're at it.

All of this would need to be completely transparent to the user, of course. If it's any more difficult to set up and use than standard Facebook / Twitter, it will fail. But I don't see any fundamental obstacles to making it that easy.

Unfortunately I don't have time to do anything about this!


And even if it is just as easy, or easier, to set up as Facebook or Twitter, it still might fail because of the chicken-and-egg problem.

As much as I would love to use and have a network like this, until all the people I keep in touch with through facebook use it, I'll still have a facebook.

I wonder if it's plausible, or even possible, to have all those P2P benefits (encryption, authentication, privacy) and somehow also have interoperability with Facebook?


Can you make a Tent (or more ambitious social p2p) client which is also an FB client? Of course.

FB and Twitter have APIs. Of course, they can revoke your key. But they have web clients, so they can be scraped.

Faced with a genuinely distributed opponent, there's no way the existing behemoths can keep your data in their silos.

For instance, FB can block tent.is at the API level or even at the IP level. But if Tent hosts pop up all around the internets, and if they are general-purpose enough that users can install their own scraping gateways, which can't be attacked centrally using technical or legal means... it's game over. To me this is one of the less recognized advantages of a distributed service.


I've been thinking about this idea a lot recently. I'm wondering if you would like to have a discussion? If so, please email me at maxim.hn.4418904@vekslers.org

Maxim.


" Let people store their own data, and offer API endpoints giving facebook, twitter, etc access."

That is the goal of unhosted: http://unhosted.org/ I really wish their remoteStorage protocol would gain traction.

The Locker Project is heading in the same direction, although with a different approach: posting content to Facebook et al. and then recollecting it into your locker. http://www.lockerproject.org/


Thank you for the shout-out! :) There is also an unhosted social network now; check it out at http://friendsunhosted.com (my username is jan@5apps.com, feel free to add me)


I'd be all over this if you'd support the Opera browser :)


It's a great concept, but here's where it breaks down: once data is given out, it can never be retracted. Whether it's an author's novel or a shopper's purchase history, once the data is released to someone in a readable format, the sender can't stop the recipient from doing something they don't like with it.

Granted, the issue can be mitigated with trust networks, social conventions, or laws, but it comes down to the same issues faced by DRM media formats: if someone can consume the content, they can find a way to duplicate it.


Just because you cannot prevent all threat models does not mean you throw privacy out the window. For example, if Alice wants to send Bob a message, in the current system:

1) Alice tells Facebook she wants to friend Bob

2) Facebook tells Bob, who accepts

3) Alice sends Facebook the message addressed to Bob (I think in plaintext)

4) Facebook sends the message to Bob

5) The message is still unencrypted on Facebook's servers, where it is available to Facebook, advertisers, hackers, and government agencies

Using known cryptographic methods, we can construct a system that works as follows:

1) Alice looks up Bob's public key (or gets it from Bob, to avoid needing a trusted key service)

2) Using Bob's public key, Alice encrypts a message

3) Alice sends the encrypted message into the network, addressed to Bob

4) Using his private key, Bob decrypts the message

In the second example, the recipient (Bob) is the only one who can release the data. In the first example, a number of potentially malicious parties have access to the data without cooperation from Alice or Bob.

I used a direct message for simplicity, but we do have cryptographic methods to encrypt a message such that an arbitrary group of people can decrypt it.
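A minimal sketch of that second flow, using the PyNaCl library (an assumption; the comment doesn't name an implementation):

    # Minimal sketch of the end-to-end flow above, assuming PyNaCl.
    from nacl.public import PrivateKey, Box

    # 1) Bob publishes his public key; Alice has her own keypair.
    bob_private = PrivateKey.generate()
    alice_private = PrivateKey.generate()

    # 2) Alice encrypts using her private key and Bob's public key.
    alice_box = Box(alice_private, bob_private.public_key)
    ciphertext = alice_box.encrypt(b"Hi Bob, it's Alice")

    # 3) The ciphertext travels through any number of untrusted
    #    servers; none of them can read it.

    # 4) Bob decrypts with his private key and Alice's public key.
    bob_box = Box(bob_private, alice_private.public_key)
    assert bob_box.decrypt(ciphertext) == b"Hi Bob, it's Alice"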


> we do have cryptographic methods to encrypt a message such that an arbitrary group of people can decrypt it.

This doesn't solve the problem you're responding to: "once data is given out, it can never be retracted". That remains the case. If you encrypt your photo so that each of your 770 friends can decrypt it, but then you unfriend Bob for being a jerk, he's still got the key and the encrypted data. So he still sees that photo.

In contrast, although you could theoretically save every photo and update anyone ever lets you see on Facebook, it would be difficult enough that in practice no one does.


> "once data is given out, it can never be retracted". That remains the case. If you encrypt your photo so that each of your 770 friends can decrypt it, but then you unfriend Bob for being a jerk, he's still got the key and the encrypted data. So he still sees that photo.

That particular photo, yes. But Bob could also have saved that photo as soon as he saw it or maybe he's got eidetic memory, so crypto can't help with that anymore.

However, any new photos shared through the same channel can't be seen by Bob anymore because I assume the key changes as the members of the access group change.
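A sketch of that key-rotation idea, again assuming the PyNaCl library (my choice for illustration): content is encrypted under a fresh symmetric key, which is then wrapped for each current member, so an unfriended member simply never receives the next key.

    # Sketch of "the key changes with the group": encrypt content once
    # with a fresh symmetric key, wrap that key for each *current*
    # member. Unfriending Bob means his copy of the next key is never
    # produced. PyNaCl is an assumption.
    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox
    from nacl.utils import random

    members = {name: PrivateKey.generate() for name in ["alice", "carol"]}  # Bob removed

    group_key = random(SecretBox.KEY_SIZE)  # fresh key per membership change
    photo_ct = SecretBox(group_key).encrypt(b"<photo bytes>")

    # Wrap the group key for each remaining member's public key.
    wrapped = {
        name: SealedBox(key.public_key).encrypt(group_key)
        for name, key in members.items()
    }

    # Carol recovers the key with her private key, then the photo.
    carol_key = SealedBox(members["carol"]).decrypt(wrapped["carol"])
    assert SecretBox(carol_key).decrypt(photo_ct) == b"<photo bytes>"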

Still, while you can't solve the original problem (Bob can leak everything), you can still do slightly better than that by properly implementing Off-The-Record messaging[1]. This adds perfect forward secrecy, as well as deniability: The latter is very interesting because in OTR it means that both parties can authenticate received messages to be certain of the sender's identity, but they can also FORGE any message to look like it's been signed by the other. This means that even if Bob decides to publish your private messages to him, he can never prove you were the one that wrote and signed them, because he could have forged them himself.

Two problems with that: I'm not sure how the OTR protocol extends to multiple recipients (but I bet there's some research on it), and while this "deniability" might be enough for private (text) messages, it's not much use for photographs in many cases: if Bob decides to publish an embarrassing photo of you that he once had access to, it's not going to be much use to argue "ha, but you can't prove I sent you that photo!".

Still, for textual communication, it adds a (thin) layer of extra security even though you can never beat Eidetic Bob.

[1] http://www.cypherpunks.ca/otr/index.php#docs and in particular http://www.cypherpunks.ca/otr/otr-codecon.pdf


I addressed the "Bob could have saved that photo" point: "although you could theoretically save every photo and update anyone ever lets you see on Facebook, it would be difficult enough that in practice no one does." Facebook has a sane default of not saving everything that you view on your computer. Yes, maybe someone saved one or two photos, but they probably don't have them all.

OTR is pointless for a medium that's mostly about photo sharing. For the Facebook use case, what's far more important than crypto is the set of social norms the site establishes via what's easy to do (share) and what's hard, e.g., archiving everything your friends share as it comes in. Not only would that be tricky to do without getting detected as a bot, but 99.9% of users would never think to try it. That kind of archiving is inherently easier in a distributed system, and that could be bad.


> In contrast, although you could theoretically save every photo and update anyone ever lets you see on Facebook, it would be difficult enough that in practice no one does.

Actually, Facebook does, and that information is (potentially) available to third parties. The intent of my system is that for information to leak, one of your trusted recipients would have to be compromised.


If you're thinking in terms of Facebook showing deleted updates to law enforcement to track down activists — or some rogue employee at Facebook HQ taking a peek — then we're talking about totally different levels of paranoia. Sure, that is a legitimate concern for the relative few who have something to fear from powerful governments.

For the rest of us, though, someone we unfriend (an ex-partner, for example) being able to archive and still access all of our old photos/updates is a much bigger concern. I argue that distributed social software should follow Facebook's lead and offer a sane default of not having such an archiving feature, and not going out of its way to make it particularly easy to add one, either. Otherwise the "creepy" aspect of not being able to rescind access to your photos/updates is going to be a serious downside to this new network, and could keep people from using it.

Edited to add:

Also, what's more secure in theory isn't always more secure in practice. Moxie Marlinspike observed that the original version of Cryptocat (which did end-to-end encryption in JS) was potentially less safe than Gchat. Why? Because if you compromised Cryptocat's server, you could make it serve JS with a hidden backdoor. And Cryptocat, being a one-man shop, would likely be a softer target than a company like Google, which has had plenty of time and expertise hardening their systems. How much do you trust the guy behind Cryptocat, versus Google with their reputation to protect?

Along the same lines, imagine you use a hosted Tent server for social networking — you don't have time to bother running your own server, but you've heard Tent keeps your data safe from Facebook, so it'll protect your privacy better, right? But then the random guy hosting your Tent account turns around and leaks your info. Or he's running old software with a vulnerability in it, and gets pwned. Suddenly hackers have all your data. Would that have happened on Facebook?

When I was curious about signing up with Diaspora, one of the open community pods had a cheeky note from the server admin saying basically that he would peek in and read your stuff if he felt like it. So this is not entirely hypothetical. I'm a huge fan of efforts like Tent, but let's not forget there are upsides to Facebook and Google's stewardship, and an alternative system can easily have as many privacy/security cons as it does pros. Tread carefully.


Just FYI: with Facebook now (for quite some time) integrating/implementing Jabber/XMPP, OTR (Off-the-Record Messaging) works fine with Facebook messages right now, if both users connect to Facebook chat with something like Pidgin.

I guess the deniability part is sort of out the window, seeing as how facebook will be logging the traffic pattern(s) -- but you might at least claim that "no, that wasn't what I said".


I think security of data is where a lot of the open-source networks have fallen down (I'm thinking of Diaspora and their SSL-encrypted nodes).

You shouldn't be trying to close the analog hole, because it's essentially impossible (just look at sites like Failbook).

Instead you should be trying to be about as secure as email, and allow other people to secure it if that's what they want to try and do.


I've discussed this in the past with tent's authors, and that is a characteristic they not only tolerate but encourage (for better or worse).

Operating with these assumptions means that the things that you make public are totally and incontrovertibly public. It is, in a manner of speaking, playing for keeps.


This may be naive but isn't this just an endemic problem for all social networks. Anything I share on a social network can be copied by people allowed to view it and dispersed. Doesn't it just fall on trusting those you share with? If not, what is the difference?


> It's a great concept, but here's where it breaks down: once data is given out, it can never be retracted.

Isn't Twitter the same though? Deleted tweets have a way of staying around... :P


I think the issue of everyone storing data in their own custom format would kill it well before that surfaced.


You mean like using Gmail as an email client, similar to Thunderbird and Outlook (you know, where you add a POP3/SMTP address ...).

Why not? It will get to that point.


The applicability of this is limited. Why? Imagine trying to get this to work for your grandmother.


Similar to what personal.com is trying to accomplish


Random thought: it might be helpful to give Tent-the-protocol a different name than Tent-the-server-implementation. In other words, "Tent" means either the protocol you're specifying, or the server software you're planning on releasing, but not both.

An analogue would be the naming distinction between HTTP, the protocol, and httpd, the first Web server (http://en.wikipedia.org/wiki/CERN_httpd). That naming split made it easier for people to understand what part of the system others were talking about, and helped make it clear that the two pieces were not tightly coupled to each other.

Maybe you're already planning on doing this when you release the server; it's not clear from the web site. If that's the case, feel free to ignore...


Definitely -- the first "Tent server implementation" will have its own name, TBD. We spent a lot of time on nomenclature and still aren't 100% settled on any of the terms; they're effectively all in flux until v1.0. The wackier issue is whether users will call their "Tent servers" "Tents". If you have any suggestions for this or any other terms, we'd love to hear them!

edit: when the hosted version launches it will be on a different domain, to make clear tent.io is the protocol alone and we aren't monopolizing the hosting space (we're doing it more as a community service than as a business).


A collection of Tents could be called a "Camp".


A cluster of camps, perhaps related to a specific topic, could be called a 'Jamboree'.


Jamboree sounds a little too campy.


I think you should call your server TBD and then make up what it stands for :) It's harder than I thought, but the "T" is obviously Tent...


Tent Bridge Directory?


Cool stuff! I like that you mentioned mesh compatibility.

Yurt, Caravan, Convoy?


Kind of a usability-meets-marketing problem. The good thing is that the word "tent" is a versatile one – maybe the server could become "Tentpole" or something along those lines.


Hey, I thought of tentpole too (that's what's getting the tent up and running).


I honestly kind of like "tentd" as a server name.


Why is it that every new protocol seems to want to piggyback on HTTP? It seems to me that maintaining state would be a useful feature for a social protocol.

Another issue is that this assumes that the web will be the client of choice in the future... with mobile apps being as big as they are in the social space, this seems a bit shortsighted.

Don't get me wrong, I like the idea behind having a "social server", but I don't necessarily think that starting with HTTP is the way to go.

I don't have any particular argument with using JSON for data transfer though... I think that is probably a good choice. Also using SSL for all connections is probably a good call too.


HTTP accomplishes all of our design constraints via REST and long-lived streaming connections. It is a very reliable, mature protocol, and the security and performance are very well understood. Also, if we invented a new binary protocol, users wouldn't be able to host their Tent servers on easy-to-use platforms like Heroku.
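A quick sketch of those two interaction styles (REST plus a long-lived streaming connection) with the Python requests library; the endpoint paths are hypothetical, since the spec was still in flux:

    # Sketch of the two interaction styles over plain HTTP. The base
    # URL and paths are made up for illustration.
    import requests

    BASE = "https://example-tent-host.tld/tent"

    # REST: fetch a user's recent posts as JSON.
    posts = requests.get(BASE + "/posts").json()

    # Long-lived streaming: hold the connection open and consume new
    # posts as the server emits them (chunked transfer encoding).
    with requests.get(BASE + "/posts?stream=true", stream=True) as feed:
        for line in feed.iter_lines():
            if line:
                print(line.decode())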

Developers may implement other protocols in the future, but we are targeting HTTP as an accessible starting point.

We are definitely anticipating heavy mobile use, both through mobile web apps as well as native. There will eventually be iOS and Android frameworks to handle all of the communication with Tent servers.


HTTP is a huge, hefty, inefficient and complex protocol whose only advantage is that HTML/JS supports it by default. Arguments that 'websockets' solve this are ridiculous in the face of the fact that we can just use 'sockets', like we always have. WebSockets are a work-around for the constraints of the browser.

As a mobile/desktop/server engineer, I would love the opportunity to work with other server-side teams that aren't wedded to the web/HTTP via historical accident and thus don't force us to use HTTP.


HTTP implements an architectural style which ensures reliability, scalability, decoupling of systems and support for hypermedia for a complex network of disparate, unreliable systems and networks.

Do you have any suggestion that provides the same features, or should we forgo them because HTTP is "hefty"?


> Do you have any suggestion that provides the same features

Message passing.

That's all HTTP really is, but it's dressed up in a bunch of historical complexity and inefficiency centered around supporting web browsers.


But do you have any concrete suggestions of protocols, or are you criticizing the choice based on a hypothetical protocol that would be very similar to but incompatible with HTTP and all its existing tools (millions of tested and deployed caching servers, load balancers, etc.), and for which whole new libraries would have to be written, just so you can make it somewhat more efficient?


I think you're grossly over-estimating the difficulty of defining a protocol. It's no more difficult than defining the protocol for which you'll use HTTP as transport.

Load balancers know how to load balance straight TCP. HTTP caching servers are an HTTP-centric idea.

The 'libraries' you'll need can be much, much smaller when all you need is a bit of framing and serialization, instead of a complete complex RFC compliant HTTP client stack.


> I think you're grossly over-estimating the difficulty of defining a protocol.

It's not writing the protocol that I find the most difficult. It's reimplementing everything that uses the protocol.

> Load balancers know how to load balance straight TCP.

Which is only useful if all the nodes are exactly the same, but that prevents you from distributing the data across them based on the user profiles, and then load balance according to the user id, as (if I'm not mistaken) Netflix does. Since they're using subdomains as user identifiers, you'd get that for free using an existing, well-tested HTTP load balancer.

> HTTP caching servers are an HTTP-centric idea.

That's a tautology. The question is: are they a useful idea? Is being able to take advantage of existing and deployed solutions like CDNs useful? Seems to me like it would be.

> The 'libraries' you'll need can be much, much smaller when all you need is a bit of framing and serialization, instead of a complete complex RFC compliant HTTP client stack.

I think you underestimate the advantages that some of the core HTTP concepts provide.


> Which is only useful if all the nodes are exactly the same, but that prevents you from distributing the data across them based on the user profiles, and then load balance according to the user id, as (if I'm not mistaken) Netflix does. Since they're using subdomains as user identifiers, you'd get that for free using an existing, well-tested HTTP load balancer.

I'm not sure what you think makes that complicated to implement without HTTP, or why you consider it 'free'. Netflix had to write custom code to support that, and could have just as easily done so on top of a message passing architecture ala ZeroMQ or even AMQP.

> That's a tautology. The question is: are they a useful idea? Is being able to take advantage of existing and deployed solutions like CDNs useful? Seems to me like it would be.

Not really, no -- neither a tautology nor are they particularly useful for API implementation. Their primary value is in caching resources for HTTP requests in a way that meshes well with the complexity of HTTP.

If you need geographically distributed resource distribution, then HTTP may be a good idea simply because:

- There's widespread standardized support for HTTP resource distribution.

- Its inefficiencies are easily outweighed by the simple transit costs of a large file transfer.

We're largely talking about server "API", however.

> I think you underestimate the advantages that some of the core HTTP concepts provide.

No, the core concepts are more-or-less fine. It's the stack that's inefficient and grossly complex, largely due to browser constraints and historical limitations.


> I'm not sure what you think makes that complicated to implement without HTTP, or why you consider it 'free'. Netflix had to write custom code to support that, and could have just as easily done so on top of a message passing architecture ala ZeroMQ or even AMQP.

It's free because it already exists. Load balancers for hypothetical protocols don't.

> Not really, no -- neither a tautology nor are they particularly useful for API implementation. Their primary value is in caching resources for HTTP requests in a way that meshes well with the complexity of HTTP.

> If you need geographically distributed resource distribution, then HTTP may be a good idea simply because:

> - There's widespread standardized support for HTTP resource distribution.

> - Its inefficiencies are easily outweighed by the simple transit costs of a large file transfer.

> We're largely talking about server "API", however.

Isn't the whole point of this system to transfer people's content - posts, pictures, videos, etc - between servers? I would think pure API "calls" would be a small part of the whole traffic.

> No, the core concepts are more-or-less fine. It's the stack that's inefficient and grossly complex, largely due to browser constraints and historical limitations.

But to implement them, you need more than "a bit of framing and serialization".


> But to implement them, you need more than "a bit of framing and serialization".

I posit you're still grossly overestimating complexity based on your own experience with HTTP, coupled with grossly underestimating the complexity, time costs, and efficiency costs of the stack HTTP weds you to.

A TCP stream is simple. It's as simple as it gets. Load balancing it requires a few hundred lines of code, at a maximum. It only gets complicated when you start layering on a protocol stack that is targeted at web browsers, grew over the past 20 years, requires all sorts of hoop-jumping for efficiency (keep-alive, websockets, long-polling), requires a slew of text parsing and escaping (percent-escapes, URL encoding, base64 HTTP basic auth, OAuth, ...), cookies, MIME parsing/encoding, so on and so forth.

All this complexity is targeted at web browsers, introduces significant inefficiencies, and requires huge libraries to make it accessible to application/server engineers.

What's the gain? Nothing other than familiarity, as evidenced by your belief that the core of what HTTP provides is so incredibly complicated, and you couldn't possibly replace it.

No -- it's the complexity of HTTP that's complicated, not the concepts that underly it. Drop the HTTP legacy and things get a heckuvalot simpler.


It's not so much that you can't replace HTTP, as that you can't replace all the thousands of tools and packages that already work with HTTP, and that can be very useful for a project like this. And you can't easily replace the knowledge that people have to HTTP either. (Claiming that OAuth is part of HTTP doesn't help with your credibility either, I'm afraid.)

Furthermore, I think that even if the developers of this project could replace the required tools and forgo the rest, I doubt it'd make sense.

Frankly, you'd need a working prototype to convince me of the contrary, so I guess we'll have to leave it at that. I'm a stubborn man ;)


Yes, but who cares? You're just sending json down the wire. Fuck it.


If you really want a particular concrete protocol to compare to, how about AMQP?


> While iMatix was the original designer of AMQP and has invested hugely in that protocol, we believe it is fundamentally flawed, and unfixable. It is too complex and the barriers to participation are massive. We do not believe that it's in the best interest of our customers and users to invest further in AMQP. Specifically, iMatix will be stepping out of the AMQP workgroup and will not be supporting AMQP/1.0 when that emerges, if it ever emerges.

By the way, the AMQP spec is roughly the same size as the HTTP spec, and the latter spends a lot of pages listing just status codes.

And of course, AMQP uses a model based on sessions, which is great if the components of the system are static, but not that great if you're talking to a lot of nodes who come and go, since you'll end up with uneven load distribution on your servers. Regardless of HTTP as a particular implementation, I think statelessness makes perfect sense in an unreliable network of nodes.


WebSockets also have the advantage that they pass through corporate firewalls and open wifi networks, as well as many proxies, since they masquerade as HTTP traffic. And they have a nicer framing mechanism than a raw socket, something I love. (Nowhere near as low-level, but for me it's essentially stateful UDP that's reliable, i.e. TCP except with datagrams.)


Corporate firewalls are the general boogieman, but in reality, I haven't seen evidence that they're much more than that.

To test this, we implemented fallback-to-HTTPS behavior in a very widely used previously non-HTTP client. We then observed the number of clients that failed to connect via our custom protocol, but succeeded in falling back to HTTPS.

The numbers were negligible.

It's ridiculous that we'd seriously believe we can't trust that TCP works on the internet. We joke about it being the "interweb", but I see no reason to sow fear, uncertainty, and doubt, and thus actually turn the interweb into reality.


Perhaps, but open wifi often only allows 443 and 80.


That also breaks IMAP(S), SMTP(S), Jabber, AIM, and a slew of other applications.

I don't see that we should model the internet architecture on bad technical choices made on a limited number of open wifi networks.

Or, we just frame our standard protocol over websockets as an (unfortunate) fallback, if it ever is revealed to be a real problem.


Port numbers are not protocols.


Yes, but many open wifi hotspots at commercial institutions only have 80 and 443 open.


I believe his point is that you can generally carry whatever protocol you want over port 443 (and often port 80).

Given how many other things are broken by networks that foolishly only open port 80 and 443, and their (in my experience) relative rarity, I'd suggest that it's not worth bothering with, except possibly as a fall-back to measure the actual number of people trying to use your service behind such a network.


Who says port 80 has to be http?


I think 443 will be a better example because (I think) it's harder for a middle party to profile https and see that it is indeed https and not something else.


I don't think so; even with HTTPS, you still have the handshake, which middle parties can see.


> HTTP is a huge, hefty, inefficient and complex protocol

It's starting to sound like you've never used CORBA.


No, I just have worse things to say about CORBA and the notion of distributed objects in general.


You can use DSNP, but I cannot, because shared hosts don't provide that much support. I wish everybody could use it.


You wouldn't have to invent a new binary protocol, a simple text protocol can work wonders (see: memcached or Redis). I'd personally look more towards SMTP for inspiration (not security wise, of course). Since you're going to mainly be passing messages back and forth, not resources.

You really should also think about two separate protocols... one server-to-server, the other server-to-client.

At least, that's how I've been working on it :)


Actually, we do support passing resources back and forth and for a lot of use cases doing so saves quite a few round trips in the long run.

Our protocol has two distinct parts, one server-server, the other server-app. It's just easier to manage a single "API" than having some support server-server but not server-app--we're hoping to avoid ecosystem fragmentation into those who don't want to support apps or vice versa.


I upvoted this as it's good to question the assumption and it's easy to imagine this working better without HTTP. In practice though, I feel the project is orders of magnitude more likely if it's built with HTTP and REST. A social product needs to reach critical mass and any hurdles you're putting in front of users and developers need to be seriously justified. In this case, workarounds like cookies are Good Enough.


I'd argue that -- for native client authors -- HTTP and REST are actually larger barriers than a simple non-HTTP protocol.

I also don't understand the concern over "inventing" a new binary protocol. It's not like it's any more complicated or difficult than "inventing" a non-binary protocol. The framing rules just don't use ASCII/UTF-8 field delimiters. There are plenty of existing encodings to use, no more difficult than JSON.


Only if you're using C/C++ and only native OS libraries (or, on Windows, are afraid to use WinHttp for fear of lack of SP1 support).


Marshalling back and forth between HTTP requests and in-app models via the like of NSURLConnection is far more ugly to do cleanly and efficiently than a simple state machine modeled on message passing over a socket.


That's only for Objective-C, it's nicer in Go.


People build things on top of HTTP because HTTP is a pretty general, reasonable, easy to implement protocol; most alternative protocols tend to re-invent something like it or not become popular. Most protocols resemble or can be implemented on top of (Action, URI, Headers, Body) => (Status, Headers, Body). Most of the improvements suggested to HTTP tend to come in the form of encoding messages more efficiently at the cost of extensibility. Reasonably efficient HTTP servers exist for every platform.

I'm not sure exactly what you mean by maintaining state, but if you mean long-lived connections: this already exists in HTTP with chunked-encoding and the new web socket extensions. Stateful protocols tend to be more difficult to write programs for and load balance.


> Why is it that every new protocol seems to want to piggy back on HTTP?

For the same reason cars all want to drive on the same roads.


My thoughts are similar. With all due respect, this is a brittle approach and very narrowly targeted.

Relying on HTTP, SSL, Ruby, and JSON does not inspire confidence. When I see "Ruby 100%" on the GitHub page for something that aims to move us out of centralisation, it makes me shudder. You need to think beyond the web and get lower level (Tor hidden services, being outside the centralised web, are the right idea, but Tor has a certain stigma). You also need to get below the level of partisan languages, for a number of reasons, one of which is to make it open and easy for anyone to build on, not just web developers and people who know certain languages. That's a given. Get people connected (stateful) and all the rest will follow.

Something more like cjdns. It's the connection that's important, not the web or whatever else you chose to use the connection for. A webserver is one of many things that can be offered over a connection. I don't want to make the connection through an all-purpose application like a web browser, written by some company (Chrome in all its complex glory is trying to supplant your OS's DNS fer chrissakes, wake up- in the end, it's all about control). I want a very small and simple open source app that handles the connection to my peers and which works via the OS, not "the web". It makes the connection, keeps it alive and otherwise stays out of the way.

I think these kids need to go back to the drawing board. But anyone working on stuff like this should not give up!

Whatever the successor to centralised social networking is, it will largely be a matter of timing. The best solution might not prevail. Instead it might be the one that hits the news at exactly the right time and catches on for some unexplainable reason.

There will be more to come. That's a promise.


What centralized web? Barring DNS - which is not mandatory for the web - there's no such thing. The web is completely decentralized. Using the web essentially means, "when making a request, use a client-server model and communicate using HTTP", nothing more, really.

> Tor hidden services, being outside the centralised web, are the right idea

But most Tor hidden services are as much the web as normal web services! They talk in a server-client model using HTTP - that's the web, right there.

> I don't want to make the connection through an all-purpose application like a web browser

Using the web doesn't mean using a web browser.

> written by some company (Chrome in all its complex glory is trying to supplant your OS's DNS fer chrissakes, wake up- in the end, it's all about control).

...and it certainly doesn't mean using Chrome or another browser written by some big company; there's Amaya, Arora, Camino, Dillo, Dooble, ELinks, Flock, Galeon, GNU IceCat, K-Meleon, Konqueror, Links, Lynx, Midori, Firefox, SeaMonkey, Shiira, Uzbl, Luakit and more.

> I want a very small and simple open source app that handles the connection to my peers and which works via the OS, not "the web". It makes the connection, keeps it alive and otherwise stays out of the way.

Which is completely feasible using HTTP - the web - as a core protocol.


"centralised web" was a poor choice of words. And I thought about that but let it go. You caught it. You're paying attention. That's good.

But the point is the same - the control is concentrated in middlemen (e.g. DNS, Hosting - why do I need them?[1]) - and you articulated this correctly: client-server. Calf-cow. Not peer-to-peer. That's what I'm keen to get past.

1. That's my point about Tor's hidden services. You need Tor's help, of course, but those services are free from the need for DNS and hosting - free from the middlemen that control "the web" (not to mention controlling email - does anyone send email using IP numbers anymore? The spam-fighting fanatics think everything revolves around DNS and domain names; your mail might well get rejected because you lack a "domain name"). I must confess I've never actually used the hidden services. I only read the docs and source code.

"Using the web doesn't mean using a web browser"

Not sure what you mean here. You need to use HTTP. So you need an HTTP client. Chances are you'll be fed heaps of HTML and other garbage. Parsing it is a PITA. And eventually, if you want to view tables and such, you'll be using a "browser".

Then there's the matter of state. So you're saying HTTP but not in a RESTful way? You just like HTTP headers, chunking and whatnot? HTTP is popular but it is not exactly unique. There are hundreds of other protocols in the RFCs, all of which would probably work just as well. HTTP is aimed at the client-server concept. That's fine. But it's a limited use for a network with so much potential. Something like a telephone conversation is not "client-server".

I'm talking about making connections that are application agnostic. Like Ethernet. If you're suggesting tunneling everything in HTTP I think that's unnecessary. There are ways to deal with firewalls. HTTP tunneling is a last resort.

So why HTTP? Why does it have to be at that layer? Why something so tied to specific applications that presumes so much about what I want to do?

I want freedom from applications. I can write my own apps, thanks.

I want freedom to create new protocols, just for me and my friends. I want my own network, that we control. [This is possible using stuff that's been around for many years, and I have a working prototype. You folks are a bit too cynical to be beta testers, it's command line driven. Maybe someday.]

I've tried nearly all those "browser" options you mentioned, believe it or not. All except for one suck.

And if I had my way I'd extract the html table parser from it and have that as a standalone filter. I'd make it a UNIX utility.

The whole "browser" concept is outdated. People want to watch video, listen to audio, look at photos and read plain TEXT (with great fonts of course). I don't need html to do any of those things. And I don't need html or other HTTP junk to do search to find video, audio or images. I need a tcp (or other) connection and a video player/audio player/image viewer/typesetter, as the task requires.

All that said, hypertext is neat. But it's not world-changing. I can still do great research without "hyperlinks". Hypertext is the great benefit of HTTP. But at this stage, it is so weighed down with cruft and used in so many silly ways, making everything dependent on a monstrous abomination of a program called a "browser" (Firefox is freakin HUGE), it has become more of a burden than a benefit. It is a limitation, not a path to the future.


> But the point is the same - the control is concentrated in a small number of points (e.g. DNS, Hosting) - and you articulated this correctly: client-server. Calf-cow. Not peer-to-peer. That's what I'm keen to get past.

But the client-server split is only for a single request; it's not a static property of a node. You can be both a client and a server simultaneously, making and accepting requests at all times.

> Not sure what you mean here. You need to use HTTP. So you need an HTTP client.

Yes.

> Chances are you'll be fed heaps of HTML and other garbage.

No, HTML is definitely not necessary. Think of what are commonly called "web services"; they're often only available using structured data-encoding formats like JSON, and they serve a whole lot of applications that are not web browsers: native mobile apps, for example.

HTML is just one of the many formats that can be transported by HTTP; nothing forces anyone to use it in order to use the web.

> Then there's the matter of state. So you're saying HTTP but not in a RESTful way?

REST doesn't prevent state. It just prevents session/context state from being stored on the server. You can still store it on the client (which, again, would just be one of the roles of a node) and permanent state on both.

> If you're suggesting tunneling everything in HTTP I think that's unnecessary.

If you mean tunneling as in SOAP, definitely not. I'm suggesting using HTTP as it's supposed to be used. Something similar would have to be reimplemented anyway, and HTTP is already there and supported by plenty of tools, services, etc.

> Why does it have to be at that layer? Why applications? (...) I want freedom from applications.

I don't get what you're saying.

> And if I had my way I'd extract the html table parser from it and have that as a standalone filter.

> The whole "browser" concept is outdated. People want to watch video, listen to audio, look at photos and read TEXT. I don't need html to do any of those things. And I don't need html or other HTTP junk to do search to find video. I need a tcp (or other) connection and a video player.

Again, no HTML is needed. And no, a TCP connection and a video player are definitely not enough. You need some way to identify the video you want to watch, to request the part of the video you want (e.g. if you already watched the first half), to know whether that video still exists on the server, possibly some way to authenticate yourself (I don't want to share my personal videos with the world), and it would be nice if the video player could tell the server what formats it supports, so that the server could send the right version or say that none exists, without you having to download it.

HTTP is well known, ubiquitous, provides all that, and it certainly doesn't need HTML for it.
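As a small illustration of the "what part of the video" point, resuming at an offset is a one-header affair in plain HTTP; the URL here is made up:

    # What "resume at the second half" looks like with plain HTTP:
    # a standard Range request. The URL is made up for illustration.
    import requests

    resp = requests.get(
        "https://videos.example.com/holiday.mp4",
        headers={"Range": "bytes=52428800-"},  # skip the first 50 MiB
    )
    # Servers that honor Range reply 206 Partial Content.
    print(resp.status_code, len(resp.content), "bytes")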


They describe it as a 'protocol' on top of HTTP; I don't think that assumes much about the 'client of choice'. Most apps I've worked with already use HTTP and JSON anyway.


[deleted]


These aren't meant to be run from personal workstations behind firewalls, they are meant to run on servers. Firewalls in this instance shouldn't be an issue.

If firewalls were actually an issue, you could always run the service on port 443. I don't know how you'd tell anyone else that you're running on a non-standard port, but it would be possible. That's actually another potential problem... this is a server that is masquerading as an HTTPS server... I hope that it plays nice with non-Tent HTTP clients. It isn't nice to hijack ports like that.

They seem to be assuming a web-based client, so that traffic (which does need to worry about firewalls) should pass through a firewall without issue.

Edit: parent got deleted, it was

    One word: Firewalls


HTTP gets let through firewalls. Other protocols don't.


This would be the "killer protocol" for the freedombox, if combined with some smart dyndns management.

Here is a use-case scenario I am imagining. I define two servers for myself: home.me.com and cloud.me.com, where home.me.com is a dyndns pointer to the freedombox. Dyndns being unreliable, if a Tent message cannot get to my home server, the message is sent to cloud.me.com and then pushed to home.me.com when it comes back online (think POP mail).

The Facebook killer, then, is a hosted service like cloud.me.com for non-tech people, with a seamless transition to the hosted-at-home service as soon as you buy a freedombox. This way you have the best of both worlds: your face in the cloud, and long-term storage at home.
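A sketch of that fallback logic in Python; the hostnames come from the comment above, the endpoint path is hypothetical:

    # Sketch of the POP-style fallback: try the freedombox at home
    # first, fall back to the always-on cloud relay. Hostnames are
    # from the comment; the endpoint path is made up.
    import requests

    SERVERS = ["https://home.me.com", "https://cloud.me.com"]

    def deliver(post_json):
        for server in SERVERS:
            try:
                r = requests.post(server + "/tent/posts",
                                  json=post_json, timeout=5)
                r.raise_for_status()
                return server  # delivered; cloud later syncs to home
            except requests.RequestException:
                continue  # dyndns flaked; try the next server
        raise RuntimeError("no server reachable")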

Other app wishlist: Tent-to-SMTP and SMTP-to-Tent adapters, for Gmail killing.


Hosted service within one month. Email apps are VERY high on our to-do list. We've been watching FreedomBox for a while; we'll reach out to them after the initial release.


Since you mention FreedomBox -- did you also check out ownCloud, as well as the unhosted movement with the remoteStorage protocol? There is a proof-of-concept social network built on unhosted: http://friendsunhosted.com

Do you have an IRC channel? Feel free to also join our channel at #unhosted. :)


I am not convinced that this needs a new protocol, as they claim. Facebook-style functionality can be done on top of Activity Streams, PubSubHubbub, Salmon, WebFinger, et al. They indicate they have investigated existing systems and found them lacking. I would rather have something like this built on protocols that a bunch of people have discussed out in the open first. That said, I am interested to see more details as they arise, as this type of thing is needed.


Or at least some hard details about what those existing alternatives are lacking. That could lead to further improvement, as whatever communities have already formed around those alternatives could debate the questions and possibly improve their protocols.

As it is the FAQ reads like "those are old and busted, we wanted something new and hot," which gives off an aura of NIH syndrome.


We take NIH very seriously and originally began by attempting to revise existing protocols. We have some very specific complaints about existing federated web protocols:

• no support for private messages (PubSubHubbub, anything Atom-based)

• inability to move relationships when changing services

• no standard API for application interaction

By leaving each of these (and others) out of scope (see: http://ostatus.org/sites/default/files/ostatus-1.0-draft-2-s...) they have created an ecosystem unfriendly to developers (who have to approach each provider separately to work out auth schemes and APIs) and likely to lead to vendor lock-in (because relationships can't be transferred and basic features are implemented differently in each system).


This is valuable -- I'd recommend adding a page to your site that includes these types of critiques, it'd be useful for those working on or around those other protocols. And it'd give people who are interested in your work a URL they could send around when asked "why not just use salmon/webfinger/ostatus/whatever."


Thanks, that's a good note. We didn't want it to seem like we were attacking the other protocols (all of which have had success, have clear use cases, and we owe some debt to for paving the way), but we should make our reasoning clearer to the community. Will do.


Please, this! There are so many [Dropbox/Facebook/Twitter] clones that claim to be different but reinvent things and use poorly designed in-house protocols; I eventually got so tired of them that I mostly ignore them now.


Hey, so, I'm one of the co-authors of OStatus.

First, welcome to the community of people working on this important problem. I highly recommend you join this group to stay informed about what's going on elsewhere:

http://www.w3.org/community/fedsocweb/

It's an "issue" group, not dedicated to any one protocol, service, or software package. Please, make sure you're a part of it!

A couple of notes:

1. We're working on including private messages in PubSubHubbub 0.4 and thus into the next version of OStatus. Understood that it's a big deal.

2. You're right, there's no standard API. ActivityPub is an attempt at that; see here: http://www.w3.org/community/activitypub/

Thanks for considering. Let's make sure we interoperate!


And what about XMPP? It has private messages. If run in a similar way to tent (single user per server/domain?) then switching service provider is exactly the same. "No standard API" ... isn't the protocol an API?


I've yet to go read the op page -- but:

  * for messaging why would you not use xmpp and/or smtp?
As for using http -- I'd say http makes good sense for getting, sending and receiving documents -- like text and images. It makes it fairly easy to implement a REST-like architecture.

As for things that have been tried before, we have pingbacks -- which IMNHO never really worked. And there's Diaspora, which has yet to come up with a stable protocol -- and has an implementation that is pretty badly broken.

It's also a good illustration of going the "full HTTP" route: publishing becomes easy; interaction (server to server) becomes hard if you want to have any kind of security in place.


And yet, where NIH would be useful, they reuse the inappropriate HTTP.


We considered the protocols you're mentioning. There are a few huge problems for our use cases: PSH is built on Atom, which means all feeds are public. That's fine for your public feed a la Twitter, but unacceptable for private messages. We also think it's critical for users to be able to take their relationships with them when they move to another server/provider. OStatus and PSH don't make that possible. They also don't have a standard application access API, so developers have to work around different providers' specifications, if they exist at all.

The federated social net was a great first step, but the lack of support for access control/private messages and account portability means that you end up with lots of proprietary implementations of basic features which can create vendor lock-in.


How does using atom make things public? Just use client certs over https -- and it's as private and secure as anything over http(s) ever is?

Alice gets her updates from Bob via something like:

https://bob.example.com/private/alice/<type>.rss
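A sketch of what Alice's side of that fetch could look like with the Python requests library; the file names are illustrative, and "status" stands in for the <type> placeholder:

    # Sketch of Alice fetching her private feed from Bob's server with
    # a client certificate. All file names and the concrete feed type
    # are illustrative.
    import requests

    feed = requests.get(
        "https://bob.example.com/private/alice/status.rss",
        cert=("alice.crt", "alice.key"),  # client cert presented in the TLS handshake
        verify="ca-bundle.pem",           # CA(s) we trust for Bob's server cert
    )
    print(feed.text)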

You'd of course need to synchronize access passwords/keys/tokens somehow -- but that could be part of "friending" someone?

Integrate with something like cacert.org so you don't have to manage certs (as part of this project). A friend request includes the requester's cert (could be self-signed, or via a trusted authority like cacert), encrypted with the public cert of whomever the request is sent to.

When a friendship is accepted on the other end, store the cert, and use that for authentication. Add your own authorization rules (Alice is a close friend).

It might be a benefit to set it up as follows: everyone has a personal cert. They generate and sign a proxy cert for their tent server. The public "top" cert is used for user management and federation -- numerous such "downstream" certs could be generated, along with revocation certs.
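
To make that concrete, here's a rough Ruby sketch (URL and file names purely illustrative, not part of Tent) of Alice fetching Bob's private feed with mutual TLS, using the certs exchanged at friending time:

  require 'net/http'
  require 'openssl'
  require 'uri'

  uri = URI('https://bob.example.com/private/alice/status.rss')

  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  # Alice presents the client cert Bob stored when he accepted her.
  http.cert = OpenSSL::X509::Certificate.new(File.read('alice.crt'))
  http.key  = OpenSSL::PKey::RSA.new(File.read('alice.key'))
  # Verify Bob's server against the cert saved during friending.
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER
  http.ca_file     = 'bob.crt'

  response = http.get(uri.request_uri)
  puts response.body if response.is_a?(Net::HTTPSuccess)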


This looks great guys. I'll definitely put up a server and hook up the content I traditionally expose through my personal website.

Question: what features that are taken for granted on today's popular social networks are difficult/impossible in this kind of distributed system? For example, I suspect something like "friend suggestions" might be difficult, since you only have access to a part of the network. Auto-friend tagging in pictures would be tough too. I'm seeing a lot of upsides listed, but there must be some things you just can't do. A candid discussion of the drawbacks would be helpful.


There are a few "standard features" of centralized social networks that are more challenging to implement, but we haven't found anything impossible yet. Search, especially real-time search, requires an external search engine. But it means Google (et al) needs to subscribe and essentially ask your permission for real-time updates. Friend search is a bit easier if your followers/people you're following are public. Likely there'll be an app for that.


Oh, that's beautiful. The minute you get any nontrivial amount of data flowing, Google et al will want to crawl/index/search you. If they subscribe like any other client it is a win-win: You get a search service, they get real-time data feeds.


A couple of things I don't see on a quick scan of the docs:

(1) I don't see how one Tent entity contacts another proactively - it looks like A can't message B unless B has already chosen to follow A. If this is so, is it an anti-spam measure? Given that StatusNet etc. are infested with spam, it seems like a very wise one :-) On the other hand, it is rather limiting as compared to centralized platforms, don't you think?

(2) I don't see how you maintain the promise of a portable identity when your identities are hosted URIs. Eg, if my identity is tent.is/foobar, and I want to move to a different host, how do I do that? I can download my data, sure. It looks like I can even bounce from tent.is to another data server. But unless I want to break all my social connections, don't I remain at the mercy of tent.is? This strikes me as a rather unsolvable problem, but it would be reassuring to clarify that you're not solving it :-)


1) This isn't clear in the docs right now, but unauthenticated notifications are allowed, and eventually we'll add signing (think Domain Keys).

2) Every piece of data in the system will be available via the API, including negotiated app and follow credentials. Moving will consist of authorizing an importer app to have read access to everything, then pushing a post that tells all the servers to check the profile again for updated entity and server details. It should be a very simple process.


(1) Have you thought about spam, then?

After all, there's a reason social services are centralized on today's Internets. The reason (IMHO) is that the Internets since 1992 or so have been an antisocial network, and anything worth attacking that lacks a centralized defense command is rapidly overrun by digital Huns. For instance, SMTP exists today because it existed before eternal September, and being valuable was (barely) defended; but if it didn't exist as a legacy from the old, social Internet, it would be very difficult to create it in the new antisocial one. If not impossible.

I mean, it's certainly not that some of us rotting old neckbeards weren't using finger and talk on the firewall-free Internet in 1989. So we know how cool it would be if some bright young whippersnapper could solve teh problem...

(2) This is useful but inevitably imperfect, as forcing every interlocutor to equate the old and new names is of course impossible. Eg, HTTP redirects make it possible to change your DNS identity -- but the process is hardly trivial, even though the redirect itself is.

And of course it's a process that your existing host could easily frustrate, though that would be very ill-mannered. Not saying there are any perfect solutions here.


(not associated in any way with tent.io)

1. Just because it is allowed by the protocol doesn't mean any given client needs to pay any attention. Just like email: I can filter out any messages from people not in my contacts, or I may choose not to and instead run each of those messages through a spam filter. In this respect it really seems no different than email. Individual clients/servers can choose to be as strict as they like (but servers are servers, and they are sitting on the internet, so spammers can see them and send messages that will be ignored, if they like).

2. Since connections with most of your contacts are theoretically maintained so you can push out new data, updating is more akin to propagating a new IP through the DNS system than using a redirect. Yes, a DNS server can misbehave, but that can only screw up a network of well-behaved servers for so long.


1. People learned (grudgingly) to use spam filters with their email because they had an existing service which had achieved large-scale network effect in a spam-free environment. A new service which develops a spam problem before it achieves critical mass is much more likely to be abandoned.

There must be some reason we haven't seen successful new decentralized service protocols on the Internet since the early '90s. I don't know of a more obvious one.

You can see the issues with StatusNet and spam:

https://www.google.com/search?q=statusnet+spam

2. The problem is that contact names propagate outward from the master state where a push will update them. For instance, they get written down on business cards. They also get cached, imprudently but inevitably, in forms that are still digital but don't update properly.

Imagine a protocol that you could use to update your email address this way, and you'll see the problem. In theory, you could design a special SMTP message that would cause all clients to update their address books. In reality this would scale quite poorly and be quite unreliable, leading people to avoid it, leading it to be even more unreliable, etc. Of course, your chances are much better with a bright, shiny new protocol... but still.


RE 2: I agree, since some of the servers you're trying to push your new address to may already have changed theirs. What happens then? Do my friends inform me? What if that unreachable server is not connected to my friends in any way?


> (2) ... And of course it's a process that your existing host could easily frustrate, though that would be very ill-mannered. Not saying there are any perfect solutions here.

This would be especially bad if the server you're using is hacked and they stop everyone from leaving :/


Maybe anything that requires knowledge of the whole network would be pretty difficult. For example, showing the full list of the photos that are being posted. You can't just follow every user that posts photos.

The best solution that comes to mind now is building a server with a database of all the users and their respective servers. But, of course, that would break the decentralized purpose of Tent. It's an interesting problem to solve.

EDIT: Another big problem: how do I search for users? Would a DNS-like solution be feasible?


A DNS-like system requires people to agree about who owns a certain identifier, which would be problematic in a decentralized system (though there are proposals like Namecoin).

I think a better system could be publishing a triple (name, keyfingerprint, current-server) on a shared datastore (e.g. DHT). The user doing the search would still need to find some out-of-protocol way to identify the right person, though.

EDIT: Kademlia in eMule is a working implementation of a system similar to what I described, but for files instead of users.
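
To illustrate the triple idea in Ruby (a Hash standing in for the DHT's put/get; all names invented here):

  require 'digest'
  require 'json'

  DHT = {} # stand-in for a real distributed hash table

  def publish(name, fingerprint, server)
    DHT[Digest::SHA256.hexdigest(name)] =
      { 'fingerprint' => fingerprint, 'server' => server }.to_json
  end

  def lookup(name)
    record = DHT[Digest::SHA256.hexdigest(name)]
    record && JSON.parse(record)
  end

  publish('alice', 'A1B2C3...', 'https://alice.example.com')
  p lookup('alice')
  # => {"fingerprint"=>"A1B2C3...", "server"=>"https://alice.example.com"}

Moving servers would just mean republishing the record (signed by the same key, so peers can check that the fingerprint didn't change).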


Yes. The "keyfingerprint" might be the server's public key for purposes of end-to-end security of the connection. (No more SSL! There are better libraries now.)

cjdns has a simple approach: your tcp6 address is your public key. This is the right idea. Leverage what you already have that is unique (not to imply an IP address is necessarily truly unique, e.g., anycast).

But in general, to be on the network you have at least one unique item: your network address. So leverage that to make other unique identifiers.

To encrypt communications, you may need to maintain encryption keys. So leverage them to be part of each peer's unique identifier on the network.

Connecting to strangers, and only using the network to get each and every bit of information from the outside world, is all fine and good, but this sort of peer-to-peer networking is much more valuable with peers you can identify in person, without using a computer. You can exchange all the above numbers (identifiers) on business cards.


Personally, I don't like the network address as an id, for two reasons. Firstly, you don't control it; your network provider does (whoever that is -- the server or ISP). So if that entity wants, it can boot you off and pretend to be you.

Secondly, it ties you to them: what happens if you switch servers, or ISPs, or whatever? I mean, right now I'm planning on switching to a cheaper, faster offer from another ISP, but I wouldn't do it if I were to lose all access to my accounts on the different services.

I think the id should be both controlled by the user and portable; that mostly leaves public keys and their fingerprints as ids.


> So if that entity wants, they can boot you off and pretend to be you.

They could shut you down, but I don't think they could pretend to be you, since they don't have your private key.


That's my point; it's why I prefer public/private keys as ids instead of network addresses.


Telehash would be pretty neat for the messaging, too. ;)

http://telehash.org


Telehash discussion here: http://news.ycombinator.com/item?id=4420634 - hopefully :)


I wonder about the rationale for using HTTPS for everything, especially the SSL part. It seems that by choosing SSL over a web of trust (WOT [1]) approach, one imports the problems of certificates and CAs into the protocol. In particular, it would be possible for a server with a root CA to impersonate other people. On the other hand, a social network is about social relations, which could also directly serve to sign and validate public keys in a WOT. This could then serve as authentication against the same social network that is stored. For example, a chat application could show Alice that the person she is chatting with is indeed Charlie, the friend of Bob (Bob signed Charlie's key).

[1] https://en.wikipedia.org/wiki/Web_of_trust
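
For illustration, a toy Ruby sketch of that check (structure invented here; a real WOT would use OpenPGP certifications):

  require 'openssl'

  bob     = OpenSSL::PKey::RSA.new(2048)
  charlie = OpenSSL::PKey::RSA.new(2048)

  # Bob endorses Charlie by signing Charlie's public key.
  endorsement = bob.sign(OpenSSL::Digest::SHA256.new, charlie.public_key.to_der)

  # Alice already trusts Bob's key, so she can authenticate Charlie
  # by checking Bob's signature over Charlie's key.
  p bob.public_key.verify(OpenSSL::Digest::SHA256.new, endorsement,
                          charlie.public_key.to_der) # => true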


I'm damn excited about this. I've been mulling this concept over for the last couple of years, because I really think we need something post-blogs that isn't the walled-garden facebook / twitter / G+ / et al model.

For me the ultimate social network would be just blogs, RSS and a feed reader, with people either managing the blog themselves or using a third party to do it for them -- the point is it doesn't matter.

The problem is that blogging is complicated, anything with multiple options is complicated, and discovery is complicated. I know where to look to find a friend on facebook, I don't know where to look to find his blog.

I don't have time right now (work) to look into Tent in more detail, but it sounds like it's a definite step in the right direction.


It seems like this will use up a crapton of bandwidth with its 'push' style notifications. If someone with a million Facebook followers makes a new post, one entry is made in the database and then users pull it down as they visit their own pages.

If a million people decide to 'camp in my tent' (?), my server is suddenly pushing out gigs of data every time I make a post.


Status.net/OStatus uses a similar PUSH model; basically it scales with the number of followers, and yes, if you have thousands of followers the load will be significant. As a data point, I've been running a status.net instance since last December, during which time I've had 50-80* followers, have made an average of 12 posts/day, have followed 100-190* others, and my instance has transferred a grand total of 10,279MB* over that entire period.

* Followers and followeds gradually increased over the duration. I have this number handy since the connection is reverse-proxied via pagekite.net, which is metered. Over the last few months, I've had all static content offloaded to another server, which reduced the bandwidth used; I'm not sure by how much.


You definitely seem like what I would consider a light user then. My wife has 1000+ Facebook friends, and posts 15-20 times a day. If I round to 1GB/mo in your case, then she would probably use around 20GB/mo... which is actually not bad.
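
Rough arithmetic behind that guess, for anyone checking (the 9 months is my assumption from "since last December"):

  months    = 9                   # Dec through Aug
  total_gb  = 10_279 / 1024.0     # ~10 GB transferred in total
  per_month = total_gb / months   # ~1.1 GB/month
  # ~15x the followers, ~1.5x the post rate:
  puts per_month * 15 * 1.5       # ~25 GB/month -- same ballpark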

Thank you for the numbers!


Even worse are pictures and videos. Rather than including a URL to it, the 'push' includes the base64 encoded binary data. Hmm.


Posts have different "views", one of which is "meta"; that view would just have a URL pointing to the photo instead of including it in the post. See: http://tent.io/docs/post-types
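
Hypothetically (field names invented here -- see the post-types docs for the real schema), the same photo post in its two views might look like:

  full_view = { type: 'photo', view: 'full',
                content: { data: '<base64-encoded bytes>' } }
  meta_view = { type: 'photo', view: 'meta',
                content: { url: 'https://alice.example.com/photos/42.jpg' } }

so followers' servers receive only the small meta record and fetch the bytes on demand.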


Ah ha, I missed that. Good work!


Good, I was going to say, after evaluating PuSH I would have assumed that some things can be heavy pushes and other things can be light pushes.


We anticipate follower counts similar to the levels seen on Twitter today. Managing a few hundred followers isn't too difficult; thousands can be managed by cranking up the dynos on Heroku. But yeah, Justin Bieber is going to need a hosted solution to handle that kind of load (like he has now with Twitter, YouTube, etc).

We also have a setting in subscriber settings that controls which types of posts are pushed in their entirety to different followers vs just a notification being sent vs nothing at all. So blog entries and status updates might get pushed to all 1M, but you probably wouldn't push an HD video update to all of them without some more serious server architecture.


What arguments do you have against a 'pull' style for notifications? Something like a GET to /latest that passes in the last time the client checked for updates and only gets back the URIs of the last 10 or so posts since then. An HD video post could contain simply a URI that points to the video, so the video isn't even downloaded until the 'app' within Tent asks for it.

This way you could have a path with all the videos on it that you could proxy off to a server with more bandwidth. It would give the user a lot more control over how their content is accessed.
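
Concretely, something like this (endpoint and parameter names are just my sketch, not Tent's API):

  require 'net/http'
  require 'json'
  require 'uri'

  # Ask a server only for what's new since we last checked.
  def latest_posts(server, since_epoch)
    uri = URI("#{server}/latest?since=#{since_epoch}&limit=10")
    JSON.parse(Net::HTTP.get(uri)) # => list of post URIs
  end

  # Heavy media stays behind its URI until an app actually wants it.
  latest_posts('https://alice.example.com', Time.now.to_i - 3600)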

Edit: And what if my server is down when one of my friends makes a post? Will I never see it?


Pull-style requests actually use up quite a bit more bandwidth than push notifications, because every client and server has to query every few seconds instead of just getting notified when there is a new post.
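
Back-of-envelope on the polling cost (numbers assumed for illustration):

  polls_per_day  = 24 * 60 * 60 / 10   # polling every 10s => 8640 requests/day
  bytes_per_poll = 500                 # assumed headers + empty response
  puts polls_per_day * bytes_per_poll / 1024.0 / 1024
  # ~4 MB/day per idle subscriber pair even when nothing is new;
  # a push costs traffic only when a post actually exists.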

Posts have 'views', so a video or photo post would be pushed out with the 'meta' view which would include a URL to the content instead of the content inline.


I don't think so, considering that not all of my followers are going to be using the service all of the time. If the server just blindly downloaded everything always, then yes, it would be more bandwidth. But with proper caching strategies and servers only requesting new posts when clients are active, I think you could save a lot of bandwidth by not pushing everything.


While the example of Justin Bieber makes it seem sensible to push the cost of following to the followed rather than the follower, I am not at all sure this makes sense in the general case. It puts the user at the mercy of spammers & bots, and ISTM making the follower bear the cost of following would disincentivize this kind of bad behaviour.


What about allowing a publisher to delegate a subscription endpoint via something like PubSubHubbub? Then the publisher just has to push to the aggregator, and everybody does their pulls from it.


That only moves the load problem, it doesn't solve it -- in a decentralised system you're likely to be responsible for the load on the aggregator server as well.


Well, in a decentralized social network you'd have to make sure the tentpole you are living under is able to hold the weight.

This is decentralized. In my expectation this network is not about a few celebs having millions of followers. I'd love to see it become peer to peer on the human level. Friends who are actually real friends, or at least people you have met and have had a real human interaction with.


This is the Twitter replacement Gotham needs.

Repo starred, eagerly awaiting runnable stuff.


I would be interested to read a 'big-picture' description of what Tent is. Forgetting for the moment how it compares with competitors, what is the idea? I read the introduction but I feel that I don't understand it.

From what I can tell, the idea is to create a standard set of objects and rules for interacting with these objects. Of course that is how protocols tend to look.

What are some of the new objects/concepts proposed by Tent? For example, is there a distinction between "home" and "users" akin to server/client? Are there several types of messages, compared to email? Is there a standard cookie-like object? What is the conceptual model for sharing? Any insight would be appreciated.


Awesome. I've wanted this forever, glad someone is finally picking up the ball and running with it. I'll be happy to beta test!


How do you feel about alpha? Please join the mailing list (at the bottom of http://tent.io) so we can let you know.


"His server sends a notification to every server following him or mentioned in the post with permission to see the post."

The protocol seems to have some fundamental limitations.

For my money I'd rather go with FETHR (see http://dsandler.org/brdfdr/ and this paper: http://dsandler.org/brdfdr/doc/iptps-fethr/) and its implementation - which has code available right now (https://bitbucket.org/dsandler/brdfdr/).


Too bad that project never caught on, it seems really interesting and useful.


"Tent servers can also be run as Tor hidden services to create a social darknet for at-risk organizers and activists."


So can anything, but it's awesome that they are promoting this. Speaking of which, does anyone know if there's a hidden service Twitter clone? This would be fun.



Rad... thank you!


Thanks guys. Not a moment too soon. Humbly quoting the motivation:

> What is wrong with other social services? Centralized Social Service Providers limit what you can share and who you can share with. They only allow users to interact with other users on the same network. Because their products are centralized and maintained by a company, users are left in the cold when the company changes its products or shuts down. There's nothing wrong with a company offering users social services. But users shouldn't be limited by those companies. Imagine if you could only email other customers of your Internet Service Provider. Unfortunately Centralized Social Service Providers have done just that. You can only communicate directly with other users of their closed network.

> If you don't like a bank you can withdraw your money and deposit it somewhere else, including your own home. You could even start a new bank where you and your friends felt safe. You can still pay your bills and maintain your financial relationships, just tell them about your new account. We aren't talking about money. Your data is far more valuable– your family and friends' photos, locations, and private communications. You should be able to store them somewhere you trust, move them when you want, control who can and can't see them.


We've been building something like this for a while at Qbix.

It's not an easy problem to solve when it comes to privacy and security: http://www.faqs.org/patents/app/20120110469#b

Eventually to arrive at this: http://myownstream.com


Looks ambitious in liberating the user and their data, but it seems you've taken too long and have stumbled upon substantive problems of your own. The screenshot of the platform seems old-fashioned. More importantly, by liberating the user/data you are making each of them stand on their own island, hence there will be no "networking" or connections in the true sense, as currently happens on a single platform like Facebook or Twitter. By attempting to set the data and user free in such a manner, and by eliminating the single cohesive platform, we risk shutting ourselves out.


What do you mean?


What took you so long? I've been waiting for someone to finally get around to building this for a long time.


Appleseed, OneSocialWeb, StatusNet/OStatus, Diaspora...


"How is Tent licensed? Tent will be completely free and open and treated as a standard. To prevent fragmentation before launch, the original authors currently retain copyright. This is a temporary situation which will be remedied immediately after a governance model is chosen. We decided it would be best to share what we could as soon as possible, releasing early and often. We are entirely committed to free and open software and protocols with open governance models leading to a ratified standard. Tent will be released under an open license in the immediate future."

I am curious as to how retaining copyright will help them prevent fragmentation. Can they not elect themselves as project leaders of the open-source project and prevent fragmentation that way?


Retaining copyright means that you can't fork their code to easily establish a working but slightly incompatible version, which is what fragmentation is.

Project leader is not a formalized position (or even a meaningful concept, really) in free software and doesn't come with the power to prevent forks or fragmentation. I guess their thinking is once a community of users and developers is brought together, they can be trusted to establish a model that retains compatibility since it is in everybody's interest. While at the moment, an incompatible fork would have the same "network effect" as the original.

Either way, the copyright only covers the software, not the protocol.


Retaining copyright isn't what matters. The license they choose on the copyright they (naturally) retain does.

It is possible that they could craft a license such that protocol-incompatible changes are not allowed, but I don't think that would be in their best interest. This is a starting point. The risk isn't fragmentation, it's crickets (i.e. nobody cares) and an untested protocol that may need multiple versions to get right.


PHP support is vital.


really?


I like how they support Tor hidden services. Though I wish they would go a step further and support I2P.

It's really nice to see people are working on ways to sort of "replace" the current centralized services out there.

Let us hope they are attractive enough to developers and users.


Sounds almost too good to be true. Looking forward to being able to download some software.


We'll have a reference server implementation and a couple of sample apps up in the next 1-2 weeks for those who want to self-host, and a hosted version within a month for those who aren't so inclined.


It looks like they've got a GitHub repo they're working out of:

https://github.com/tent/tent.io


Argghhh. Ruby.


Only the site generator is in Ruby. While most of our core team are Rubyists, someone is working on a C++ implementation, too. It's a protocol, which means you can write an implementation in any language you want. We hope to see many server implementations, with different optimal use cases, in different languages.

For libraries, Ruby and Javascript are our top priorities with Java and Obj-C/iOS to follow. We'd love help if there are other languages you'd like to write libs for.


Okay, but there are more PHP users out there. If you had a PHP server I would start running it right now. My (and many others') shared hosting does not provide Ruby, Java, or C++ support.


Confirming that I intend to write one for (gasp) PHP.

NB: I am the technical cofounder of elgg.org, and I believe software like Tent is (part of) the future of the social web.


But how does this differ from StatusNet/OStatus? If we have several similar solutions to a problem, it makes it more difficult for one to gain traction. Seems like reinventing the wheel, or am I missing something?


I hope this crew is looking at WebRTC. The true killer app for a distributed social network is going to be a global decentralized video chat network. Imagine Google Hangouts without Google. Even just implementing voice-only chat without a centralized system would be pretty phenomenal.

Also, I really think they're making a mistake by not using secure WebSockets for their protocol. Plain HTTP has too much overhead for what needs to be an efficient messaging protocol, and there's a potential need for persistent connections.


How does this compare with Diaspora?


Diaspora* is a federated open source social network. It does not have a protocol specification or an API for app support/integration. Diaspora* is not intended to be run as one server/user, but by clustering users on pods with pod administrators, etc.


Reading the API docs, it seems that every user needs their own URI to be uniquely identified. I wonder how this will work for the average user -- just an IP address? What about users with shared internet?


You're correct -- we anticipate users either registering their own domains (e.g. http://danielsiders.com) or using a hosted service that provides subdomains (http://danielsiders.tent.io). If you're hosting at home (the dream solution here is plug computers), then dynamic DNS pointing to your home IP/computer.


PageKite (pagekite.net) is another solution for self-hosting from a household. (plug)


This was the mistake OpenID made, which they later realised complicated things way too much for the average user. Regular users want to identify with an email address, not a URI.


Diaspora is mentioned in the article in the "What about the federated social web?" section. Apparently it is more decentralised than Diaspora. Although I can't see what that means at the moment as implementation details are fairly sparse.


Diaspora and some other platforms are federated communities - in other words, they federate between collections of multiple people. Here, the decentralized social activity occurs between individuals. It's decentralized all the way down.


I'm thinking about every smartphone as a Tent server.

A P2P camping site could form while you're waiting for the bus, sitting through a boring meeting, or camping on a river bank. I tried to use Wi-Fi Direct/Bluetooth, but I found that iPhone and Android set devices to non-discoverable by default for security reasons. But I did find a lot of Nokia/LG phones in discoverable mode on the subway.

I hope Tent will be successful -- an application to help strangers get to know each other and get people outdoors.


How does this compare to Mr Social? : http://mobisocial.stanford.edu/papers/mrprivacy.pdf


I believe this is a great idea, but I disagree with some points of your implementation. I actually started writing a lengthy comment about this, and after about two hours of writing I decided it might look better as a blog post, so there you go:

http://www.nicollet.net/2012/08/tent-the-right-goals-the-wro...


I find your post interesting, and it goes some way to answering my own questions. However, your post is based on the premise

"There are four issues with decentralized, large-scale sharing of content between non-technical people..."

and you claim those four issues are 1. Identity Persistence 2. Data Persistence 3. Publishing 4. Access Restrictions

I do not see why this is so. But even if it is, it is not even clear if the main point, at this stage, is to identify the atomic components of the "social web" -- perhaps what is sought after now is a slightly higher-level conceptual framework which puts these together.

For example, "publishing" may be missing the point - perhaps we need a finer grained concept: different kinds of collaboration, delegation of authority, distribution of tasks, self-communication, electronic cloud prayer...


This looks really neat. How are you going to get people to start using it? Anything more active than hoping that an app/user ecosystem develops?


We are starting with an easy-to-use hosted service that will launch in a few weeks. We are catering to the development community by offering a simple, standardized API that will be easy to use. Privacy is a first-class feature, and Tent is specifically designed to be usable by high-risk activists[1] and people in countries that block other social networks.

[1] http://liberationtech.tumblr.com/post/13377461578/how-the-ne...


I don't mean this in a cruel way (I'd really like this to succeed), but that's a list of features, not a plan for getting adoption.

Figuring out how to bootstrap network effects is critical.


You're right -- our plan is to get the protocol and reference server functioning for general use. We're mindful of the notes made here: http://liberationtech.tumblr.com/post/13377461578/how-the-ne.... We have some thoughts on adoption tactics and strategy, but the master plan is still under development. All insights are welcome.

edit: wow, wrong link.


Twitter and LinkedIn are successful in distinct niches from Facebook. Companies and privacy-conscious people apart from activists (Germans?) may also be interested in a social network that lets them own their information.


I think you had the wrong URL in your clipboard :)


A hosted service, extremely simple APIs and libraries, and some really great end-user clients will go a long way. Make it easy, in other words, and don't make it technical (except for the people who want it to be).


Having a good product is necessary, but far from sufficient.


In my opinion, there isn't much point in decentralizing servers -- it's been tried and failed a number of times. What we need is decentralized clients, e.g. a true P2P social network. Essentially, imagine that your contacts app could exchange messages over the Internet directly with any -- or all, or any combination -- of the contacts stored within.


Where's the donate button when I really need it?


"Not Invented Here" syndrome is manifested as an unwillingness to adopt an idea or product because it originates from another culture - Wikipedia

The culture of the decentralized web doesn't bode well -- not enough capitalism -- which might in turn affect the quality of the product.


If anyone is looking to get involved, I have (unofficially) started a project to write a test suite against the protocol. Contributions would be appreciated.

https://github.com/lucaspiller/tent-testsuite


> Tent is specifically designed to be usable by high-risk activists and people in countries that block other social networks.

I can't help but read that and think, "terrorists." Then again, there will always be that tradeoff and you are probably on the right side.


Osama denied my friend request! :-( (Sad terrorist)


Think Sea Shepherd.


> OStatus [..] stopped short of actual decentralization.

How so? It looks quite decentralized to me.


Why not use the OpenPGP standard? PGP public and private keys seem to be the natural solution for both,

1. signing data so that the source can be verified

2. encrypting messages so that they can only be read by particular recipients
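
A minimal sketch of both properties, using Ruby's openssl stdlib for brevity (real OpenPGP would go through gpg/gpgme):

  require 'openssl'

  alice = OpenSSL::PKey::RSA.new(2048) # sender's keypair
  bob   = OpenSSL::PKey::RSA.new(2048) # recipient's keypair

  post = 'hello, decentralized world'

  # 1. Sign so the source can be verified.
  sig = alice.sign(OpenSSL::Digest::SHA256.new, post)
  p alice.public_key.verify(OpenSSL::Digest::SHA256.new, sig, post) # => true

  # 2. Encrypt so only the intended recipient can read it.
  cipher = bob.public_key.public_encrypt(post)
  p bob.private_decrypt(cipher) # => "hello, decentralized world"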


To a normal person, this article is just a collection of fuzzy, well-meaning ideas wrapped around a bunch of technical network jargon. The fact that there is no working demo is rather damning.


Instead of a subdomain, an email address would have been a better choice.

user@provider.tld

Where provider.tld provides a specification/API -- something like a robots.txt-style tent.json -- that would specify the actual API endpoints for a given user.
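
A sketch of that lookup (everything here is hypothetical, not an existing Tent mechanism):

  require 'net/http'
  require 'json'
  require 'uri'

  # Resolve user@provider.tld to that user's API endpoints by fetching
  # a well-known document from the provider, webfinger-style.
  def discover(address)
    user, host = address.split('@', 2)
    uri = URI("https://#{host}/.well-known/tent.json?user=#{user}")
    JSON.parse(Net::HTTP.get(uri)) # => e.g. {"api_root" => "https://..."}
  end

  discover('alice@example.com')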


I think both are unfortunate, since they tie your identity to your current provider. I'd much rather see a truly decoupled system, even if it didn't provide a "friendly" user id but only a hidden fingerprint. People can just search by name and other data anyway.


This exists: The Discovery Protocol Stack http://hueniverse.com/2009/11/the-discovery-protocol-stack-r...


Friendica does that already. No one cares. Potato chips are potato chips. No one cares about the brand, but everyone loves them, even though they are not free.


Who's behind Tent, does anyone know?

I can swear I saw a section with names on the site, but can't seem to find it now. It looks like it was taken out.


How is this different from identi.ca? Isn't this pretty much what their premise is/was? (Ignoring implementation differences.)



Who is building this? Who designed the protocol, and will publish/veto the standard?


What happens when my server is down when you try to post a notification?


From the docs:

"If the app does not respond with 2XX, then the server should try again later."

"Shirley has her client in maintenence mode; jerrold.me will attempt to deliver the notification later using an exponential backoff algorithm."


I'm digging this!


Please make more social networks.


Every day I wake up and I think 'If only we had more social networks!'


Website is down?


"Every user decide which other users can follow them and what information will be shared with each of their followers."

This effectively makes each person an island unto him/herself, and hence the social-web model breaks down. It wouldn't work, and people know it.

In order to liberate the data, you're throwing the baby out with the bath water.


I'm not following your rationale. Isn't that Facebook's default model? I explicitly choose who is allowed to see my content, and I have control over how much of that content they can see through privacy settings / lists. The problem with Facebook is that I simply don't trust them with my data.


Yes, that's correct; hence Facebook's user feed is not really based on, or helping, the social-web part of the user's life. The user feed or stream is there as an advertising product -- something to sell to products and services (it's sort of a loop, isn't it: you are selling yourself to the one who is also selling to you).

The real social-web aspect of the Facebook is in their Groups, or Pages, where users collaborate, converge "together" on a single entity (of any particular Group or Page).

Do you follow my logic now?

I am against the idea of making users stand independently on their own, because they then become islands unto themselves. That's fine if you are making them a product, a brand, or a portfolio, but none of it helps authentic social interaction. You don't need a brand or portfolio to be social; that's not the original purpose. The original purpose of the user (an individual human being) is to communicate, be social, and create communities.



