This protocol is a successor to OStatus (GNU Social), and mostly made by the same people.
From what I remember, Mastodon switched from OStatus to ActivityPub a few releases ago. In the past they had to extend the OStatus protocol for private messages, which meant some clients might not honor the private status of these posts. ActivityPub has private messages defined properly in the spec.
Last summer I tried untangling the OStatus/pump.io/ActivityPub/ActivityStreams nest hoping to piece together a big picture mental image based mostly on what Wikipedia has to say, and found it mostly frustrating.
For anyone who's similarly curious or confused, LWN published a pretty great overview last month—their "Federation in social networks" article:
I'm the author of the LWN article, so I'm glad to hear the positive feedback. I wrote it because I run a Mastodon instance and had been getting pretty confused about how these pieces fit together. ActivityPub is a very exciting direction and I'm hoping to see Mastodon implement it completely (right now their support is only partial). OStatus comes with a lot of historical baggage - down to the name itself - and that's bumping up poorly against modern expectations like privacy.
no, we really don't. Not every tool needs to be equipped for private discussions. ActivityPub and OStatus are used for Twitter-style communication. Those aren't high-security communication services.
They turn into security problems in aggregate. For example, the threat to me from actors slurping up social media data to nudge/manipulate people at large scale is much greater than the threat of someone reading my group chats with my friends, or a lot of other personal info that is generally considered more private and more in need of high security.
If only my own twitter-style communication got out, that would be less damaging to me than my more private messages leaking; but it is worse for me personally if all the twitter-style communication gets out than if only my more private messages do.
You have to account for manipulative big data risks in your analysis, thinking only about your personal data is an outdated approach.
That's probably a fair consideration. I and others worked on an intermediate system called pump.io, and the ActivityPub interface is very similar to the pump.io client-to-server API and federation protocol.
The biggest advantages of ActivityPub over OStatus are: JSON instead of Atom XML; a defined client-to-server API; and private distribution, including to contact lists (like Diaspora's aspects).
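To make the private-distribution point concrete, here is a sketch of what an addressed ActivityPub activity looks like. The actor and collection URLs are invented for illustration; the `to`/`cc` addressing fields themselves are what the spec defines.

```python
# A sketch of an ActivityPub Create activity addressed to a followers
# collection rather than to the public. URLs are invented for
# illustration; fully public posts would instead list the special
# "as:Public" collection in "to" or "cc".
import json

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.com/users/alice",
    # Only the actors/collections listed here receive the post.
    "to": ["https://example.com/users/alice/followers"],
    "cc": [],
    "object": {
        "type": "Note",
        "content": "Visible to my followers only",
    },
}

print(json.dumps(activity, indent=2))
```

Because privacy is expressed in the addressing itself, a receiving server knows exactly who a post is for, instead of inferring it from an out-of-band extension as with OStatus.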
As someone who has worked on this type of standard for over a decade, I can say that it's the best federation standard I've ever seen, and I've seen them all.
Yes, and this was known to the main developer. There were easy ways to make it at least private if the other server was trusted, but they chose to not implement them.
this doesn't make much sense. The very easy way to add AP-level privacy to OStatus would have been to use a different salmon endpoint for private messages. That way, messages would never have federated by accident to servers that don't respect privacy settings (if a server leaks on purpose, that's a different story).
This solution was discussed at length with the Mastodon devs before private messages were implemented. It was ignored. Now we have a situation where Mastodon is likely to switch off OStatus soon, leaving behind all those projects that don't have the dev resources to rewrite their core federation systems every few years.
The OStatus/AP dual stack is also pretty hacky and not even valid according to the AP spec, although it's getting better all the time.
Social media platforms are used for one-to-one communications and should support that. Furthermore, social media platforms should support sharing something with a group and only that group of people.
Really, the only time I should expect the content I share on a social media platform to be public is if I shared it as public content.
[Note that I am talking about technical capabilities and privacy here. Of course somebody with whom I shared private information can re-post it publicly, but solving that is out of scope for a technical solution - and is also true outside of social media platforms]
To expand on that... If social media is orthogonal to privacy, then there should be no conflict in combining the two concepts to create (privacy + social media).
If they are incompatible, they aren't orthogonal, they are opposed.
I disagree with that, though you might want to think strongly on what you mean by "privacy".
For some people, this is strictly limited to the technical capability to prevent (or allow) the flow of information. I'd long subscribed to this point of view, but have found it inadequate, if only because capabilities have been changing so dramatically on this point.
Rather (and this is something of a personal definition, though I'm not certain it's specific or original to me), privacy is the ability to define and defend limits and boundaries on information sharing.
You might ask "why use that definition?"
My principal answer is that if it isn't possible to share information in a limited context, then something immense and profound is lost. Two people, or twenty, or even several tens of thousands (say: a large company or government bureaucracy) might need to share information without disclosure.
I've been following the ICIJ's multiple projects on various data leaks, which are particularly poignant in revealing both sides of the argument. On the one hand, the ICIJ itself farms out its data to several hundred journalists from numerous organisations, and keeps a lid on the project until an agreed-on release date. (The group has produced a video showing just this process.)
On the other, ICIJ's stock in trade have been leaks of information from other organisations whose privacy has been breached.
The question of "which of these is right?" should very much arise. And it's not an easy question to answer, though I believe there are some guides which can be used.
As such, privacy is not an inviolable right, but it is a crucial element of social organisation.
Your definition is not unusual at all, in fact, it is pretty close to the German definition of "Datenschutz" (data protection): Individuals have the right to know who has what information about them and the right to determine to what extent what information may be shared.
The definition draws in part on Jeffrey Rosen's The Unwanted Gaze, published in the late 1990s, and apparently quite underappreciated and/or little known. Though it's been a while since I've read it.
Privacy on the level of AP would have been very easy to add, by just using a different salmon endpoint for private messages. This was discussed at length back then, but Mastodon still chose to implement the leaky-by-default changes. There's nothing in AP that can't be done using OStatus, with very very minor extensions.
In OStatus, 'salmons' are messages sent from one server to another that contain posts. They are signed, so the receiving server knows whether it can trust them.
If person peter@example.com mentions bob@differentsite.org, a salmon is sent from example.com to differentsite.org containing the message. The endpoint that this is posted to is the 'salmon endpoint'.
Using a second endpoint for privacy-enhanced messages would have been the way to go. Old servers would never have received private messages, while new servers that understood the extension could have kept 90% of their old code and infrastructure.
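The routing rule being proposed can be sketched in a few lines. All names and endpoint keys here are invented for illustration; the point is simply that a private salmon is only ever delivered to a server that advertises the privacy-aware endpoint.

```python
# Hypothetical sketch of the "second salmon endpoint" idea: private
# salmons go to a separate endpoint, so servers that never advertise
# it simply never receive private messages. Field names are invented.

def salmon_endpoint(remote_server, private):
    """Pick the endpoint to deliver a salmon to, or None if the
    remote server doesn't support privacy-aware delivery."""
    if not private:
        return remote_server["salmon"]
    # Only servers advertising the new endpoint (e.g. in their
    # webfinger/host-meta data) ever see private messages.
    return remote_server.get("private-salmon")

old_server = {"salmon": "https://old.example/salmon"}
new_server = {"salmon": "https://new.example/salmon",
              "private-salmon": "https://new.example/private-salmon"}

# An old server never receives the private salmon at all:
assert salmon_endpoint(old_server, private=True) is None
assert salmon_endpoint(new_server, private=True) == \
    "https://new.example/private-salmon"
```

The failure mode becomes "the message is not delivered" rather than "the message is delivered to a server that treats it as public."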
Oh that's nice, thanks for the explanation. I'm currently working on a social network and I like the idea of being able to connect with other ones via open standards. I need to do more research on implementation.
there's #social on freenode, although it's not very active. You might have more luck asking your questions in #pleroma on freenode (Pleroma is another OStatus/AP server) or on the Mastodon Discord.
From my quick read of the spec, server-to-server federation assumes that servers can be trusted. That need not be the case; unless a message is signed it should not be assumed to have come from a specific user. I don't know if there's an easy way to share keys in a decentralized way, but that'd be an interesting problem to solve (some blockchain maybe).
The other question is whether we want server-to-server federation at all instead of a P2P network like bittorrent. With the decreasing cost of compute, I am also optimistic of everyone being able to run a little sandbox on the cloud to exchange data with others, instead of routing it via FB/Twitter.
> I don't know if there's an easy way to share keys in a decentralized way, but that'd be an interesting problem to solve (some blockchain maybe).
This is exactly what namecoin tried to solve. Buy an identifier, put whatever you want in it. Example: https://nameid.org/?name=rakoo
> The other question is whether we want server-to-server federation at all instead of a P2P network like bittorrent.
We want something in between. The best model I know is the one of scuttlebutt (http://scuttlebot.io/more/protocols/secure-scuttlebutt.html): each peer is identified by an asymmetric keypair, and writes posts/comments/photos to a local ledger. Everything is signed, and potentially encrypted if the message is meant to be read only by select people. Diffusion follows the peers' connections: stuff is sent from peer to peer as they connect, friends can be used as a third leg (i.e. a common friend can carry stuff even if it's not for them), and you also have pubs where more people can connect and get more stuff faster. This system is different from bittorrent in that everything depends on the human connections: information spreads along human acquaintances, names aren't globally unique, they depend on how your friends agree to name you, etc.
It's better than naive server-to-server federation because pubs can be simple, stupid message forwarders yet still have all the advantages of being always-up servers.
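The signed local ledger described above is essentially a hash-chained append-only log. Here is a toy stdlib-only sketch of that structure; real SSB additionally signs every entry with the author's ed25519 key, which this sketch omits.

```python
# Toy scuttlebutt-style append-only log: each entry names the hash of
# the previous one, so a feed can be carried by untrusted peers and
# verified on arrival. Real SSB also ed25519-signs each entry.
import hashlib
import json

def append(log, content):
    prev = log[-1]["id"] if log else None
    entry = {"prev": prev, "seq": len(log), "content": content}
    entry["id"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    prev = None
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "id"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["id"] != digest or entry["prev"] != prev:
            return False
        prev = entry["id"]
    return True

feed = []
append(feed, "hello from an offline peer")
append(feed, "carried along by a mutual friend")
assert verify(feed)
```

Because each entry commits to its predecessor, a pub or mutual friend can relay the feed without being able to reorder, drop, or alter entries undetected.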
Scuttlebutt is, so far, the most interesting decentralized social media tool I've seen. Even Mastodon and Matrix still require a server of some form, while SSB will continue to work offline and via truly local peer-to-peer connections.
The only issue is client support and the fact that I don't have a good sense of the state of development/stability/soundness for SSB.
If you (or anyone) has experience actually using SSB or similar true P2P platforms, I'd be very interested to hear what did or didn't work well.
SSB is rock-solid in that the community encourages the use of diverse client and server implementations rather than having a monolithic interface that's meant to do everything.
On the other hand, the underlying libraries are opinionated in that they usually use depject and a handful of other tools/techniques that aren't super familiar to those outside the SSB community, which makes it harder to write brand new clients.
I'd use SSB exclusively if they offered a web client (as I'm on a Chromebook), but we're slowly getting there.
FWIW I'm 80% convinced that dat is the future of SSB. It has the technological foundations to do everything SSB needs and more. There is however one huge difference in philosophy, that can be solved with technology but still needs to be done: SSB gossips content through the human network and public places, while dat needs direct connectivity between peers. The former is better exactly for the reason that was mentioned earlier: Peers need to have some re-hosting if they want to have their content disseminated while they're offline, so the discussion can still happen. Dat will need to have the equivalent of SSB's pubs (no-registration, no-config, just connect and send).
There is a discussion about that somewhere but I can't find it anymore
I think there are a few major reasons servers are still the way to go, albeit numerous much smaller servers.
- Mobile users' content should remain available when they are disconnected.
- Participating in a federated environment still involves plenty of technical challenges; if we want non-tech-savvy users to participate, we need admins.
I personally would like to see servers for web applications, be they Sandstorm.io-type app platforms or Mastodon-type social networks, in a family-and-friends scale operation. I'd happily host resources for family and friends, for instance. While most people may not be able to run their own server, almost everyone is either family or friends with someone who can, and corporate offerings will fill in the gaps.
Currently ActivityPub servers in practice use HTTP Signatures and Linked Data Signatures, so there's a certain amount of proof of the origin of messages there. But in moving towards a much more peer to peer system, we can do even better by stripping out SSL Certificate Authorities and DNS altogether. The paper linked above discusses one path to accomplishing that in ActivityPub using DIDs. Hope that's interesting to you!
Seems to me that they _did_ consider those concerns to some extent, but the specification doesn't really address them:
> Unfortunately at the time of standardization, there are no strongly agreed upon mechanisms for authentication. Some possible directions for authentication are laid out in the Social Web Community Group Authentication and Authorization best practices report.
That need not be the case, but realistically it is the case for Mastodon & co for now. People are looking into various signature schemes, potentially as extensions of the protocol, but it would have moved the entire thing into very untested waters, which is bad for a new standard.
Same with federation: It's what you need right now for "normal" users (e.g. compare user numbers of Mastodon etc to purely p2p alternatives). While I personally self-host a lot of stuff and would be willing to run a Mastodon instance for friends and family, I can't expect or even recommend they run something themselves individually. Thanks to federation, nothing is stopping those that can to run their own single-user software speaking the protocol, or for instances to only serve very small groups.
As a counter-example, there are millions of non-technical users using the BitTorrent protocol. They are running a server on their machines without being aware of the technical complexity that lies underneath. I am not saying it's an easy UX problem to solve, but it can be done.
Sure, I think the problem with people running e.g. Mastodon on their own machine would be what should happen when they turn off their computer.
I guess there could be some sort of "super node" solution, where a known Mastodon server would just forward you the "what happened while I was asleep" stuff and I guess people could encrypt their messages to you with your advertised public key, so that you don't have to trust the super nodes, but I am not sure about the additional complexity involved in implementing that vs current Mastodon using ActivityPub, nor about the usability challenges involved.
That's basically how Secure Scuttlebutt works; the super nodes are called pubs.
It's actually remarkably easy to onboard to SSB with Patchwork. The problems set in when you start considering mobile devices or multi-device support (the latter is currently possible but risky).
BitTorrent only works well (i.e. in a true peer-to-peer fashion) if many people are serving up the same files, so that some sources are always online. Outside of piracy that is rarely the case.
It's intended that server-to-server messages be signed with the private key that matches the public key baked into the user's profile data. I'm not sure whether the final specification requires this, or merely recommends it; it may have gotten pushed off to a future update in order to get the spec out on time.
One of the authors of this standard has talked about decentralized identity, potentially using a blockchain, and has also mentioned that p2p is where things are headed. I have a feeling these will be the next topics explored in the standardization effort.
ActivityPub is like JWT: simple on the outside but complex inside. For example, I wanted to implement a simple server to connect to Mastodon. JSON via HTTP, what could be simpler? Actually it's JSON-LD, not JSON. But that doesn't matter, since JSON-LD is JSON, right? No: signing requires normalizing the JSON-LD, which pulls in all of RDF canonicalization. Of course the standard does not strictly require it [0], but it's akin to unencrypted HTTP/2: theoretically it exists, practically it doesn't.
What's interesting is that ActivityPub in practice requires signing the data twice: first at the HTTP request level, then the JSON-LD itself. It seems like the designers tried to make it simple (let's use JSON-LD and HTTP), but after several edge cases the standard got out of control.
[0]: "Linked Data Notifications also supports a number of RDF serializations which are not required for ActivityPub implementations. However, ActivityPub implementations which wish to be more broadly compatible with Linked Data Notifications implementations may wish to support other RDF representations." source: https://www.w3.org/TR/activitypub/
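To illustrate why canonicalization comes into play at all: the same JSON document can be serialized as different bytes, so a naive sign-the-bytes scheme breaks as soon as a relay re-serializes the payload. The real LD-signatures spec uses full RDF canonicalization (URDNA2015); this sketch substitutes sorted-keys JSON purely to show the idea.

```python
# Two serializations of the same data produce different bytes, so a
# naive hash/signature over the raw bytes differs. Canonicalizing
# first (here: sorted-keys JSON as a stand-in for RDF
# canonicalization) makes the digest serialization-independent.
import hashlib
import json

doc_a = '{"type": "Note", "content": "hi"}'
doc_b = '{"content": "hi", "type": "Note"}'

# Same data, different bytes: naive digests disagree.
assert hashlib.sha256(doc_a.encode()).hexdigest() != \
       hashlib.sha256(doc_b.encode()).hexdigest()

def canonical_digest(raw):
    canon = json.dumps(json.loads(raw), sort_keys=True,
                       separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()

# After canonicalization, both serializations digest identically.
assert canonical_digest(doc_a) == canonical_digest(doc_b)
```

RDF canonicalization is much heavier than this because JSON-LD allows the same graph to be expressed in structurally different documents, not just different key orders, which is exactly where the implementation pain comes from.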
HTTP signatures and LD-signatures serve different purposes. HTTP sigs authenticate server-to-server requests, while LD-signatures allow forwarding verified messages through the network. The forwarding part is a nice extra for UX purposes, but it's not necessary for your implementation; you can just stick to HTTP sigs, which are really simple. I agree that LD-signatures are very complicated to implement. If you avoid LD-signatures then you don't have to mess with JSON-LD canonicalization either; you can just treat the JSON as simple JSON (though it's still a bit fiddly, because a value can often be either a URI string, an array of URI strings, an object, or an array of objects).
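For a sense of how simple the HTTP signature side is, here is a rough sketch of building the signing string and Signature header. Python's stdlib has no RSA, so an HMAC stands in for the real RSA-SHA256 signature; the key ID and hosts are invented for illustration.

```python
# Sketch of the HTTP Signatures scheme used for server-to-server
# ActivityPub requests: build a "signing string" from selected
# headers, sign it, and send the result in a Signature header.
# An HMAC stands in for the real RSA-SHA256 signature here.
import base64
import hashlib
import hmac

def signing_string(method, path, headers, signed_headers):
    lines = []
    for name in signed_headers:
        if name == "(request-target)":
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)

headers = {"host": "remote.example",
           "date": "Tue, 09 Jan 2018 18:00:00 GMT"}
signed = ["(request-target)", "host", "date"]
to_sign = signing_string("POST", "/inbox", headers, signed)

# Stand-in for: base64(rsa_sha256_sign(private_key, to_sign))
fake_key = b"not-a-real-rsa-key"
sig = base64.b64encode(hmac.new(fake_key, to_sign.encode(),
                                hashlib.sha256).digest()).decode()

headers["signature"] = (
    'keyId="https://local.example/users/alice#main-key",'
    'headers="(request-target) host date",'
    f'signature="{sig}"')
```

The `keyId` points at the sending actor's published public key, which is how the receiving server knows which key to verify against.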
I know what the signatures are for but I wonder if there's a simpler way to achieve these objectives.
Server authentication: For example XMPP uses client certificates to authenticate servers to other servers (TLS server certificates usually can be used for client authentication too).
Message authentication: maybe exchanging OpenPGP messages would be easier? They already are signed and the payload can be anything. Of course it's not as easy to list a collection of these messages as it is with JSON (just collect into an array).
> If you avoid LD-signatures then you don't have to mess with JSON-LD canonicalization either,
I want to be interoperable with existing software, so avoiding it is not a practical option (although allowed by the ActivityPub spec). Just like non-encrypted HTTP/2 is not a practical option (although allowed in the spec).
Look in particular at that cool video in it. Someone posts a video to Peertube, which gets federated to Mastodon. People comment on Mastodon and the comment gets federated back into Peertube. People reply on Peertube, and the comment goes back to Mastodon. Kinda nifty:
The thought process behind ActivityPub seems fanciful to me. To paraphrase:
Here is a spec for something no one currently uses, please take a risk and invest your time to understand ActivityPub, build an impl of ActivityPub, and then build your thing on top of it. Then, if many others also do this, it might have a chance at success.
Heed the lesson of OStatus. Open source has killed RFC-style development. If the backers of ActivityPub genuinely want people to use it, they should publish a reference gRPC microservice porcelain on Docker so people can use it from any language without thinking about it. Otherwise the cost to implement ActivityPub is just too high.
Did you miss the part where a million people use ActivityPub via Mastodon? There are already numerous implementations, such as Mastodon, Hubzilla, Pleroma, PeerTube, Kroeg, and more.
Perhaps I should have said "that no one uses directly".
I tried out Mastodon but didn't become a regular user. So I'm just curious, are you claiming a million monthly active users or a million accounts created since inception?
A million a month is roughly what serverfault.com gets. If you have 1M MAU, that's a nice achievement, though it's still 1/130th the market share of Twitter.
I hope Mastodon keeps growing to level out the playing field. That's why I suggested a way to make it easier for other sw engineers to support it.
Alternative paraphrase: Here is a spec based on our experiences building federated social networks, here are several implementations of it working together; if you want to join your users to this existing network, you can implement it or adapt one of the existing solutions.
If your goal is actually different implementations and not just "an open source project", standards are important. The space was quite fragmented, ActivityPub is a chance to have more stuff talk with each other.
It's well thought out, and saves us a lot of trouble. It's easier to discuss architecture with something on hand to refer to.
As for the investment of time.. yes, but we figure it'll be worth it. We're not satisfied with any of the current platforms. We want to be able to collaborate without distractions, and we figure others will join us along the way.
Anything web-based is a dead end at this point. There are many, many reasons why any web-based solution will degenerate into a centralized system. The people behind Urbit got it right. I don't know if Urbit is going to succeed, but they have pointed out the issue brilliantly.
My only question is: is this a Twitter or a Facebook replacement? Because if it's only about someone's status, and you can't post something on someone's wall/timeline, then it will never truly compete with Facebook.
It actually contains sufficient primitives to implement either. Mastodon uses it in a Twitter like way, but there are projects being worked on that are more similar to Facebook.
I have been in touch with Tim Berners-Lee and his Solid project since late 2016. Since then, I've run across many people who work at W3C and in the protocol space, and it's inspired me to open up and publish protocols derived from our platform (eg https://github.com/Qbix/auth)
But I believe that, at the end of the day, mass adoption can only happen when there is a popular platform. Whether it's closed source like Facebook or Twitter, or open source like Qbix, it needs to be actually used. That's how, for instance, OAuth came to be so widely adopted, among many others.
Mastodon is used by a million people across a lot of instances and supported ActivityPub before it was a W3C recommendation, so I'd say it is definitely being used.
Adoption and low friction is the most important thing. Unfortunately, copyright makes it illegal to parse web pages without the parent entity's consent, and the CFAA makes it illegal to download them. This means that a low-friction mechanism to break the corporate stranglehold on communication is extremely unlikely, since it can't do much to ease transition between providers.
If we didn't have these restrictions, this would be a purely academic debate, because there are already many practical ways to extract information from a resource and transmit it to other computers.
"Federation protocols" are a way of volunteering this information in a common language, but there's no real technical reason that's necessary to get "decentralized social media" or other such things. They're only considered potential solutions because effective scraping is not allowed.
Decentralized / democratized information access is not an issue that can be solved technically. It needs to be solved socially (primarily because the necessary technical infrastructure already exists, and is just held back by legal anti-solutions).
Reminds me of my own attempt to design a decentralized networking protocol, FOSP (https://github.com/maufl/fosp-specification). I now think that a peer-to-peer solution would be more user-friendly; it's sometimes difficult to explain to some of my peers that there can be a difference between an app and a service.
We need P2P, not federated. Federated still requires trusting a server, which is not good. We've tackled this problem and produced an insanely simple API for creating fully P2P apps; even the cliche Twitter/todo app can be built in 4 minutes: https://scrimba.com/c/c2gBgt4
Does anyone know: if you have Facebook levels of activity, wouldn't the blockchain be on the order of petabytes to download? I'm sure you may want to know your history all the way back, or is the protocol smart enough to know how much you need to download?
Yeah, let's send my social media data to a bunch of servers automatically through a distributed protocol with no practical means to enforce compliance with privacy settings. But that's okay, nobody ever posts anything on social media that they end up regretting, right?
I'm reasonably secure in my feeling that centralisation isn't the problem so much as people far too freely giving up information about themselves online. Decentralisation can work as an anti-censorship measure, but I wouldn't expect something I send out over BitTorrent (for example) to remain my personal private info.
That's not true. XMPP can do far more than instant messaging, and it's definitely not complicated (well, for a decentralized network and for a technical audience, of course).
For the record, I'm the main developer of an XMPP-based "social network" project (Salut à Toi), and we have been able to communicate natively with other XMPP projects (e.g. Movim) for years (we can share comments like in the video; it's the basis of a common standard).
The "many extensions" thing is commonly misunderstood by people outside XMPP, who think it makes software more complicated or harder to maintain. But that's not true: extensions are a strength of XMPP, allowing you to concentrate on one feature at a time, evolve it, and change it if needed, and there is a very good negotiation mechanism. Software evolves, and it's normal that different clients/servers have different features, but even so they can still communicate together.
The case is common, actually: websites in your browser can test for implemented JavaScript features, for instance, before activating this or that.
XMPP is not a single technology but a base to support many coherent technologies, for many different use cases.
To come back to ActivityPub: while I'm a bit annoyed that nobody tried to contact us to join our efforts on XMPP, resulting in yet another standard, at least if it's adopted by some platforms it may simplify the creation of gateways. I've looked at the specs; I don't think that putting "like" and "followers" front and center as the main features of a social network is a good idea. At first sight, it doesn't seem too difficult to translate to XMPP.
The problem I've always had with XMPP is that any time I try to do something with it, I get lost in an endless sea of RFCs with no idea of which are important for my use case (most RFC titles seem fairly jargon-heavy to me). Then, when I go to read an actual RFC, because each RFC interacts with all the other RFCs, they're fairly impenetrable. Is there a document somewhere collecting the most essential XMPP stuff you need to be aware of for different applications (e.g. chat client, XMPP server, XMPP social network client, bridge between XMPP and other protocols, etc.) and presenting it in a concise way that doesn't require navigating a vast sea of documents? Do you have some other recommendation for how to get into the XMPP world?
It depends on what you want to do. In most cases, I would recommend using an existing client/library; that would make things easier for you (you don't have to start from scratch, and something is likely already made in your favorite language).
It's progressively more functional as you implement each of the two RFCs. If you implement the first (RFC 6120: Core) you'll be able to process XML streams. The second (RFC 6121) brings IM, a contact list, and presence. You can then implement XEPs as you need them.
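For a sense of scale, the basic unit you end up handling is an XML stanza. A minimal message stanza along the lines of RFC 6121 can be built like this (the addresses are invented for illustration):

```python
# Building a minimal XMPP <message/> stanza: RFC 6120 gives you the
# XML streams these travel over, RFC 6121 defines message, presence,
# and roster semantics on top. Addresses are invented.
import xml.etree.ElementTree as ET

msg = ET.Element("message", {
    "from": "peter@example.com/laptop",
    "to": "bob@differentsite.org",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello over a federated stream"

print(ET.tostring(msg, encoding="unicode"))
```

Most XEPs work by adding namespaced child elements to stanzas like this one, which is why feature negotiation between differently capable clients is practical.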
If your development workflow uses sprints in any form, the format of XMPP documentation is perfect. If you want to implement everything at once, good luck.
I'd recommend looking at the Compliance Suites which were designed to address this (disclaimer: I've been the author for the last few years, although I just transitioned out of that role):
I know there's a big scary "Rejected" warning at the top, but in this case it's okay to disregard that. It was rejected for bike-sheddy reasons about it not being perfect yet and is still a good starting place. The warning should go away sometime in the next few weeks (the problems that it was rejected for have been addressed, so the next time the council meets it will likely be accepted).
Actually, I never did any XMPP coding, but from what I believe it is more difficult to get it 'complete' than to get it 'right'. I mean, you get it 'right' even with a minimal/core implementation, but 'complete' only with about the 50 most common extensions. Getting it 'right' seems to be a problem when a XEP definition is open to interpretation.
Nevertheless, I agree that it would not fit the XMPP ecosystem to be used as a micro blogging service. It's like using e-mail for IM: Possible, but not recommended.
> I never did any XMPP coding, but from what I believe it is more difficult to get it 'complete' than to get it 'right'.
I don't think there is a point in getting it complete. You choose the XEPs you want by looking at what your client/server goals are.
Also if you need help in choosing, there are pointer XEPs, like the Compliance Suites, or the mailing list, or the chatrooms. See https://xmpp.org/community/
> Getting it 'right' seems to be a problem when the XEP definition is open to interpretation.
Fair enough, 'complete' might be the wrong word here too (as you do not have to support all XEPs), but currently my biggest problem is that Conversations (an Android XMPP client) works pretty well (something I would consider 'complete' in terms of messaging), but I could not find any desktop client that works at a similar feature level.
For example, how many desktop clients support OMEMO, MAM, and Carbon copies? Those are just 3 XEPs, all related to secure multi-device communication. Yes, they haven't reached recommended status yet, but I think that is more a symptom than the root of the problem.
Desktop client support is lacking, but I think Dino (a relative newcomer) does a pretty good job of being compatible with Conversations and being a "modern" (for some handwavey value of "modern") messenger based on XMPP:
Last time I tried Dino was 4th August, but I can't remember what made me stick with Pidgin. But hey, if Sam Whited suggests Dino I will try it again :-D
I love the XMPP and Golang stuff you are doing.
And far as I can see it Dino seems to have pretty good XEP collection nowadays:
> Last time I tried Dino was 4th August, but I can't remember what made me stick with Pidgin.
Dino's still relatively new software so it may not be all that stable yet (I don't actually use it enough to know). I hope it works out for you though; it looks nice if nothing else!
> I love the XMPP and Golang stuff you are doing.
Thanks! It's still very early days and I don't get a lot of time to work on it, but I'm glad someone has seen it and found it useful.
The exact methods of authentication are simply "out of scope" for the ActivityPub standard itself. As far as I understand, that's simply for bureaucratic reasons. In reality there are already agreed-upon mechanisms for doing this.
Each actor has an RSA keypair. Server-to-server requests are authenticated using HTTP signatures (that's a different spec), so you know a delivery is legit and on behalf of which actor it is. For messages that are passed directly between origin and destination, there is no reason to believe the sender has tampered with the message (SSL is a prerequisite anyway). For messages that are about other messages, such as reblogs, you would not trust the sender; you would look up the message by its ID on its origin server instead. This is always possible because ActivityPub prescribes that IDs be resolvable URIs. There's also another method to verify the authenticity of a passed message, called Linked Data Signatures (that's a different spec). It's a way of signing JSON inside JSON; it's a bit of a bother to implement, though (to make sure the signature works regardless of how the payload is re-formatted or re-ordered in transit, the JSON is "canonicalized" using RDF).
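The look-it-up-by-ID rule described above can be sketched like this. The function names are mine, and the fetcher is injectable so the example runs without a network; a real server would fetch over HTTPS with an `application/activity+json` Accept header.

```python
# Sketch of "don't trust the relayed copy, fetch from the origin":
# for activities about other messages (e.g. reblogs), re-fetch the
# object by its ID, which ActivityPub requires to be a resolvable URI.
import json
from urllib.request import Request, urlopen

def fetch_json(url):
    req = Request(url, headers={"Accept": "application/activity+json"})
    with urlopen(req) as resp:
        return json.load(resp)

def verify_relayed(relayed_object, fetch=fetch_json):
    """Return the authoritative copy of a relayed object."""
    origin_copy = fetch(relayed_object["id"])
    if origin_copy.get("id") != relayed_object["id"]:
        raise ValueError("origin does not vouch for this object")
    return origin_copy

# Simulated origin server: the relaying sender tampered with the
# content, but the copy resolved from the ID is what we keep.
origin = {"https://origin.example/notes/1":
          {"id": "https://origin.example/notes/1", "content": "real"}}
tampered = {"id": "https://origin.example/notes/1", "content": "fake"}
good = verify_relayed(tampered, fetch=lambda url: origin[url])
assert good["content"] == "real"
```

This is why resolvable IDs matter: the origin server itself acts as the source of truth, so no extra signature is needed for this case.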
The real-world example as used by Mastodon uses signing to protect content, but the process is kind of complicated (normalization of JSON to RDF triples, etc.).
I've considered that, as well. I enjoy Mastodon, and am working on another ActivityPub project. But in a lot of ways it's not a significant advantage over Usenet.
Mail over UUCP and Usenet over UUCP are optimized for short(-ish) messages, and deal with packet loss, dynamic IP addresses on remote connections, partial transfers, and synchronization scheduling as part of their implementation.
It would seem to be a good base to build something like ActivityPub on top of.
It seems PeerTube (https://github.com/Chocobozzz/PeerTube) will also use ActivityPub.