Scuttlebot: Peer-to-peer database, identity provider, and messaging system (scuttlebot.io)
361 points by goranmoomin on April 18, 2020 | 116 comments



Scuttlebutt is a neat concept, burdened by a bad protocol. Signing a message involves serializing a json object, signing it, adding the signature as a field on that json object, and then serializing it again. To verify, you deserialize the message into an object, remove the signature field, and then reserialize it, and verify the signature against that new serialization. This means that all the clients have to have a json serialization that 100% matches the node.js serialization that the majority (totality?) of current clients use. Any other implementation (I wrote one in Go) becomes a nightmare of chasing rarer and rarer differences that prevent verification, and if you miss one before you serialize your own message with it, suddenly you've poisoned your own journal...
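For anyone who hasn't seen it, here's a minimal sketch of that round-trip in Node (using Node's built-in ed25519 support; the real SSB format has more rules around the exact JSON.stringify(obj, null, 2) layout and signature encoding, so this is illustrative, not wire-compatible):

    // Sketch: sign by serializing, then re-serializing with the signature
    // attached; verify by stripping the field and re-serializing again.
    const crypto = require('crypto');
    const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

    function sign(msg) {
      const unsigned = JSON.stringify(msg, null, 2);           // serialize #1
      const sig = crypto.sign(null, Buffer.from(unsigned), privateKey);
      return JSON.stringify(                                   // serialize #2
        { ...msg, signature: sig.toString('base64') }, null, 2);
    }

    function verify(raw) {
      const { signature, ...rest } = JSON.parse(raw);          // drop the signature field
      const reserialized = JSON.stringify(rest, null, 2);      // must match #1 byte-for-byte
      return crypto.verify(null, Buffer.from(reserialized), publicKey,
                           Buffer.from(signature, 'base64'));
    }

The fragile part is that last re-serialization: any other implementation must reproduce those bytes exactly, or verification fails.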

All in all, it's something to study and learn from, but I strongly recommend not becoming involved unless you are 100% happy with being tied into node.js.


After reading about the protocol I came to a similar conclusion as you. Although it needs to be noted that the JSON serialization is defined as JSON.stringify as defined in ECMA-262 6th Ed. plus some more.

To me it's worse that key order must be preserved, which, as I understand it, this standard does not specify.

Source: https://ssbc.github.io/scuttlebutt-protocol-guide/#message-f...
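For illustration, JSON.stringify emits string keys in insertion order (per ES2015+ property enumeration order), so two semantically equal objects can serialize, and therefore hash, differently:

    // Same data, different insertion order, different bytes:
    JSON.stringify({ a: 1, b: 2 })   // '{"a":1,"b":2}'
    JSON.stringify({ b: 2, a: 1 })   // '{"b":2,"a":1}'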


> In brief, the rules are:

- Two spaces for indentation.

- Dictionary entries and list elements each on their own line.

- etc...

This is so weird to me. Why does a protocol need a strict, opinionated format of JSON? If they really need a very specific format of JSON, why do they even bother with JSON? There are better options like protobuf.

This seems the worst example of "Use JSON for everything".


Just a guess, but maybe so they can reliably hash it?


This is terrible. It's basically not JSON anymore. It's some custom text protocol with similar value escaping...

If they wanted both the descriptions to be visible and the order to be preserved, they could use:

    [ ["previous", "..."], ["author", "..."], ...


lvh at Latacora wrote a blog post about this problem.

https://latacora.micro.blog/2019/07/24/how-not-to.html


This might actually improve in the near future: https://github.com/ssbc/ssb-spec-drafts/blob/master/drafts/d...


As @cel pointed out further down the tree [1], that's not entirely correct any longer. Especially the "100% tied to node.js" part has not been true for a while. There are now alternative implementations in Rust and Go.

Now, the feed format is indeed a practical problem if you try to make a client from scratch and it is annoying. Luckily, using one of the already existing libraries will handle this for you.

Changing that feed format for everyone is not possible, simply because there's already an existing social network built on the old one that we very much want to preserve, since we actually... well... hang out there. Changing the feed format thus involves adding a new feed format and making sure clients can handle and link together both, with the benefit of abstracting away the feed format and being able to iterate on it.

[1]: https://news.ycombinator.com/item?id=22912075


Indeed, json maps are not supposed to be ordered so doing anything that depends on the order is bound to fail.

This is the exact reason bencode (https://en.wikipedia.org/wiki/Bencode) was invented, and I still believe we could replace all uses of JSON with bencode and be better off for it, because it solves these all too common issues:

- bencoded maps are in lexicographical order of the keys, so no confusion is possible when hashing/signing (a torrent id is the hash of a bencoded map; see the toy encoder below)

- bencoding is binary friendly; in fact it must be, because it stores the piece hashes of the torrent
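A toy encoder makes the canonical-ordering property concrete (a sketch covering only ints, strings, lists, and dicts; real bencode sorts keys as raw bytes, which for ASCII keys matches JavaScript's default sort):

    // Toy bencoder: dict keys are always emitted in sorted order, so two
    // semantically equal dicts encode (and hash) identically.
    function bencode(v) {
      if (typeof v === 'number') return `i${v}e`;
      if (typeof v === 'string') return `${Buffer.byteLength(v)}:${v}`;
      if (Array.isArray(v)) return `l${v.map(bencode).join('')}e`;
      return `d${Object.keys(v).sort()
        .map((k) => bencode(k) + bencode(v[k])).join('')}e`;
    }

    bencode({ b: 2, a: 'hi' }) === bencode({ a: 'hi', b: 2 })  // 'd1:a2:hi1:bi2ee'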

Why don't we use bencoding everywhere?


Interesting that your experience with Bencode was this positive. I've implemented a Bencode serializer/deserializer in Rust and here are a few things I've noticed:

* It is very easy to parse/produce Bencode

* It's probably fast

* The specification is really bad

* No float type

* No string type, just byte sequences. This is especially bad because most/all dictionary keys will be utf-8 strings in practice but you can't rely on it

* Integers are arbitrary-length; most implementations just ignore this

I think Bencode is an ok format for its use case, but I don't think it should be used instead of json.


When I was faced with this (signing a structure), I serialized the JSON and base64-encoded it, then put that base64 string as a value (along with the MAC) into a new JSON document. It of course increases deserialization overhead (parse JSON, verify, un-base64, parse inner JSON) but sidesteps this issue.
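Roughly like this, in Node terms (a sketch of the wrap-and-MAC idea with a hypothetical shared HMAC key, not the poster's actual code):

    const crypto = require('crypto');
    const key = crypto.randomBytes(32);  // hypothetical shared MAC key

    function wrap(obj) {
      const payload = Buffer.from(JSON.stringify(obj)).toString('base64');
      const mac = crypto.createHmac('sha256', key).update(payload).digest('hex');
      return JSON.stringify({ payload, mac });  // signed bytes travel as an opaque string
    }

    function unwrap(doc) {
      const { payload, mac } = JSON.parse(doc);
      const expected = crypto.createHmac('sha256', key).update(payload).digest('hex');
      if (!crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected)))
        throw new Error('bad MAC');
      return JSON.parse(Buffer.from(payload, 'base64').toString());
    }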

I thought about sorting keys and other approaches like that, but the dozen edge cases and potential malleability issues dissuaded me, given the compatibility issues mentioned above.

How have others solved it?


Use Bencoding, like bittorrent does: https://en.wikipedia.org/wiki/Bencode

As I said in another comment, a torrent id is the hash of a map where one of the keys contains binary data. Bencoding solved that decades ago already.


It looks similar to stackish and BON (binary object notation) https://github.com/bon-org/bon-doc/blob/master/README.asciid...

BON is compatible with JSON and Erlang data types; in particular, it allows any data type as a map key. JSON only allows strings as map keys.


Why bencoding and not BSON or CBOR or any of the other serialization options?


Binary protocols. Or out-of-band signing.


Do you mind going deeper here?

Which protocols? Can you point to examples?

Because I was about to design a signing Json solution but based on the comments here it is a bad idea.


The signing schemes I've seen used in binary protocols fall into two categories:

1. Canonicalize and sign: the format has a defined canonical form; convert to that before doing cryptographic operations. If the format is well designed around it, this is doable, whereas JSON doesn't really have this, and with many libraries it's hard to control the output to the degree that you'd need.

2. Serialize and sign: serialize the data as flat bytes, and then your signed message just has a "bytes" field that is the signed object. This is conceptually not far off from the base64 solution above, except that there's no extra overhead, since with a binary protocol you'll have a length prefix instead of having to escape stuff.
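A sketch of option 2, with a hypothetical framing of [4-byte length][payload][ed25519 signature]; the point is that the exact signed bytes are carried and verified, never re-serialized:

    const crypto = require('crypto');
    const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

    function pack(payload) {               // payload: Buffer of already-serialized data
      const len = Buffer.alloc(4);
      len.writeUInt32BE(payload.length);
      const sig = crypto.sign(null, payload, privateKey);
      return Buffer.concat([len, payload, sig]);
    }

    function unpack(frame) {
      const n = frame.readUInt32BE(0);
      const payload = frame.subarray(4, 4 + n);
      const sig = frame.subarray(4 + n);
      if (!crypto.verify(null, payload, publicKey, sig))
        throw new Error('bad signature');
      return payload;                      // hand the exact signed bytes to the decoder
    }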


Being able to separate the object and signature saves tons of trouble https://latacora.micro.blog/2019/07/24/how-not-to.html


Protobuf, Cap'n Proto, and MessagePack are a few I've seen before.


It sounds like base64 was unnecessary there since a JSON string can contain serialized JSON.

Personally I'll just concatenate the values in a defined order and sign/hash that.


JSON-RPC “args” can vary in name, number, and type.


Sure, but where you're doing

    {"msg": "eyJhIjogMTAsICJjIjogInRlc3RpbmciLCAiYiI6ICJoZWxsbyJ9", "h": "..."}
you could just as well be doing

    {"msg": "{\"a\": 10, \"c\": \"testing\", \"b\": \"hello\"}", "h": "..."}
and skip base64 altogether.

If you mean this as a point against the second part of my post: of course, it's only in some limited circumstances that you can simply dump the values in a defined order and be done with it. To make it general, you have to have delimiters for at least arrays, objects, and numbers, canonicalize number representations, and probably output the keys as well, at which point you've invented your own complete serialization protocol.


I just use JWTs whenever I have to pass signed messages.


This is how Cosmos (https://cosmos.network/) deals with signing transactions. It caused _major_ headaches for me trying to figure out why my signatures were invalid when sending them from Node.js.


I did something similar for a project that had clients in Go and Node. I solved it by flattening the object key paths to strings and sorting the keys, basically. You need to build that into the client serialization/deserialization; it feels clunky, but I've had zero issues and it has been working smoothly for a long while now.
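Something like this, I imagine (my guess at the flatten-and-sort scheme, not the poster's actual code):

    // Flatten nested keys to dotted paths, sort, and join; sign/hash the result.
    function canonicalize(obj, prefix = '') {
      const lines = [];
      for (const [k, v] of Object.entries(obj)) {
        const path = prefix ? `${prefix}.${k}` : k;
        if (v !== null && typeof v === 'object')
          lines.push(...canonicalize(v, path));   // recurse into objects/arrays
        else
          lines.push(`${path}=${JSON.stringify(v)}`);
      }
      return prefix ? lines : lines.sort();       // sort once, at the top level
    }

    canonicalize({ b: 1, a: { c: 2 } }).join('\n')  // 'a.c=2\nb=1'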


Any signing done over structured data has this problem. You always need a canonical representation.


It would be simpler to work out a canonical reduction for JSON. This should be reasonably easy since there are so few elements to it.

A simple proposal for actual security guys to rip to shreds:

Strings are represented as utf-8 blobs, and hashed.

Numbers should probably be represented using a proper decimal format and then hashed. If you're reading the JSON in and converting it to floats, you could get slight disagreement in some cases.

Arrays are a list of hashes, which itself is hashed.

For objects, convert the keys to utf-8 and append the value (which will always be the hashed representation) and then sort these entries bitwise. And then hash all that.

Or, better, it'd be great to have an order-independent hash function that's also secure. I doubt XORing all the pairs would be good enough. Update: a possible technique[1]

[1]: https://crypto.stackexchange.com/questions/51258/is-there-su...
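A sketch of the proposal above (Node, SHA-256; numbers hashed via their decimal string form with a type prefix for domain separation, booleans/null omitted for brevity):

    const crypto = require('crypto');
    const sha256 = (buf) => crypto.createHash('sha256').update(buf).digest();

    function hashJson(v) {
      if (typeof v === 'string') return sha256(Buffer.from(v, 'utf8'));
      if (typeof v === 'number') return sha256(Buffer.from('num:' + String(v)));
      if (Array.isArray(v)) return sha256(Buffer.concat(v.map(hashJson)));
      // objects: key bytes + value hash per entry, entries sorted bitwise
      const entries = Object.keys(v)
        .map((k) => Buffer.concat([Buffer.from(k, 'utf8'), hashJson(v[k])]))
        .sort(Buffer.compare);
      return sha256(Buffer.concat(entries));
    }

Note the object rule makes key order irrelevant by construction, since entries are sorted after hashing.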


Could it use an HTTP-header-style system instead, to avoid serializing the JSON back and forth?


Just serialize, sign, and pack the signature along with the raw bytes of the serialization. It doesn't matter how; you just have to pack the raw bytes and not futz with them.


You can even have a JSON object containing base64 fields for message and signature if you feel it really needs to be JSON all the way down for whatever reason.


This is typically the kind of case that would benefit from a function compiled to WebAssembly. "Write once, run everywhere."

The protocol helper functions would be compiled to a WebAssembly library, and you would reuse them in Go, Python, the browser, etc.

Of course, it's no justification for using their protocol (rewriting a protocol in another language is a good test of the protocol specification), but it would be a use case for WebAssembly.


Once someone has written a library to do this correctly in Go/Rust/whatever, isn't this problem solved? Everyone building Scuttlebutt apps in that language can use that library. It doesn't seem like this protocol is changing.


I tried, and tried, and tried to make that library for Go. I failed because of serialization issues too involved for it to be worth it to me (it got to the point where I'd have had to write my own JSON implementation).

Just giving people the heads-up that JS is seemingly the only blessed language for Scuttlebutt.



There is work on C, Go, and Rust versions. It runs on iOS and Android. So, yeah, not so much "tied into node.js". Yes, that's where the majority of the work is, but...



I spent quite a bit of time with XML digital signatures, which is a similar situation, but with even more surface area to get wrong. Months to implement, over a year to harden.


I’ve had similar, albeit easy to solve, problems with key formats when signing emails.

But that isn’t a reason why scuttlebutt isn’t more popular. It only takes one Go implementation and the problem is solved permanently.


Warning: A bit of a ramble about my experience trying to use SSB for my "application".

I really enjoy SSB, conceptually. In reality I've had a fair amount of trouble getting started writing an application in it. Notably, I have difficulty knowing what patterns are good or bad in SSB.

As far as I can tell, SSB effectively uses a commit ledger for user data. This is fine, but some specifics of my application mean that I feel the need to create many sub-identities, so that revisions to a piece of content don't bloat the parent identity when syncing to mobile/etc.

Strangely, I've got a fair bit of experience with these types of designs, as I've written my own distributed storage for the application in question in many different forms (various ledgers, CRDTs, etc). Yet my issue is that I'm unsure what is considered "bad form" by the SSB crowd. I don't want to write an application for SSB that turns out to be built on some SSB anti-pattern.

Combine this with SSB not having well-established[1] implementations in my language of choice (Rust), and, as much as I want to use SSB as the foundation for my app, the protocol is currently just a bit of a heavy burden. I care what the SSB community thinks because I don't want to just "use" the SSB protocol; I want my application to join the SSB hubs/networks and share data.

I've looked a lot into this, but I still struggle to conceive exactly how my application can fit into the SSB ecosystem, if at all. This isn't a complaint post, just sharing some personal thoughts and my confusing history with SSB.

Love the project, and I especially love the design of gossip, person to person based applications in this day and age.

[1]: Rust does have a handful of SSB libs, but I think I'd be looking for a full, batteries included library. Everything I've seen is quite low level SSB.


I've been tinkering with SSB recently as well, and I'm also finding it difficult to figure out how to interface with the wider SSB network in the 'correct' way. The communication protocol is well documented and easy to implement if you have libsodium available. Building applications on it should be straightforward, considering it's just an eventually consistent event store.

I am having a few issues though. The 'api' that Patchwork and other parts of the network use on top of the communication protocol seems not to be documented at all outside of the 'post' message. I'm not sure if it's undocumented or if I just haven't discovered it yet. For example, I'm having to dive into the code to see how a 'vote' message is formatted. I'm not sure how all of the various clients are staying compatible. None of the message formats seem to be versioned in any way either.

I'm also not sure what are the 'correct' ways to introduce my own message types. Should I use a vendor prefix in the type or something? Is it bad form to pollute the network with your own message types that other clients can't understand?

I've just started diving in to all this so maybe I just haven't gotten deep enough yet. I wrote a little toy SSB implementation last weekend to learn the protocol. I've really enjoyed playing around with it so far but I'm still not sure if it's the right fit for the applications I want to build yet.


Huge fan of Scuttlebutt and think it could (should) be the future of the social internet. I recently quit my job to write about and work on decentralized tech full time with scuttlebutt being my primary focus. I've written about why I think it's so important here: https://adecentralizedworld.com/2020/03/what-is-scuttlebutt/


> Decentralized social networks have been tried before; the two most well-known are Diaspora and Mastodon. With these services there [is still moderation]... Scuttlebutt is how I believe the social web should function in the future.

Even the Hacker News community, which leans towards decentralization more than the general public does, would still argue there is value in moderation.


Decentralized doesn't mean unmoderated. Since it is decentralized there is no "global feed", and there is a lot of great discussion about moderation tools and processes on Secure Scuttlebutt. In particular, check the section on the follow graph: https://ssbc.github.io/scuttlebutt-protocol-guide/#follow-gr...


I'm the author of one of the clients (https://patchfox.org). All clients support blocking people, either visibly with a public message, or privately with an encrypted message only readable by you (which causes your client to block that feed). In Patchfox, I've added other forms of moderation, which boil down to self-moderation and selective listening. You can mute or blur feeds, keywords, and channels. Even though this is client-side only, it provides some tools to keep a safe space around how you interact with the larger world. It doesn't affect replication and doesn't carry over into other clients (as I'm still trying to find the best patterns and practices for this).

In the end, I'm seeing lots of comments here by people who probably don't actually use SSB. It is a quite fun and lovable place.


Moderation feeds can be published into decentralized networks just like the content messages themselves. Then readers can choose their set of preferred moderators (author/item killfile-stream publishers) to apply clientside.


For sure, any network without any moderation is going to quickly fall into chaos. I'm working on a post on decentralized personalized moderation at the moment which should be out in the next day or two :)


I think this right here is a killer app for decentralized social networks and is something I've been thinking about a lot. I'm looking forward to your post!


It's possible (and important) for moderation to be implemented in a decentralized community. Aether[0] has a really interesting take on it, where moderation is essentially democratic.

[0] https://getaether.net/


> there is value in moderation

Of course. Everything in moderation, including moderation.


Without an identity scheme that prevents creating unlimited free identities, the spam, scam, and troll problem gets out of hand.

China has a strong identity provider scheme. They tie everyone's phone to their government issued citizen ID.


It was a good read, thank you!

I also really appreciated your other post[0]. This is what allowed me to understand that SSB is not really a social network but more a p2p protocol to exchange messages where applications can be built on.

[0] https://adecentralizedworld.com/2020/03/a-decentralized-plat...


What other decentralized tech did you find promising (e.g. Dat, AP Fediverse, Solid, IPFS)?


Hey, this page is outdated. It is basically a list of APIs from ssb-server and some other related plugins, but there are entries there that are old and haven't been kept in sync with the actual modules.

https://scuttlebutt.nz (née https://ssb.nz) is a better link for all things SSB. The protocol guide at https://ssbc.github.io/scuttlebutt-protocol-guide/ is your friend in understanding our little garden.

Be aware that SSB grew much like a garden. It is not a protocol and ecosystem designed by committee with a cold and effective process. It grew from simple stuff into more complex stuff, and yes, we all understand some of the challenges piled upon all of us due to bad decisions in the past.

There is a lot to love in SSB. Instead of going "npm install, meh", you should try it out. You don't need npm or node.js to try SSB; you can just pick any of the clients listed on the first page I linked.

I develop one of those clients, patchfox, but it is not a full client so you'll need to bring your own ssb-server.


Here's a getting started with scuttlebutt tutorial I wrote last year for beginners https://miguelmota.com/blog/getting-started-with-secure-scut...


There's an unstated assumption in all these things, that you already know some people in the network. If not, you can see all sorts of interesting things, but nobody knows you're replying to them, and it's a very lonely experience.

If there's a way around that, I haven't figured it out, and all these tutorials aimed at people who've never heard of the network seem misplaced -- if you don't already have a personal invite, the network is useless to you, but if you already have a personal invite, the article is useless to you.


A thread from 2017: https://news.ycombinator.com/item?id=14409187

It points to https://scuttlebutt.nz/, which is related.


This HN post links to the protocol docs. Scuttlebutt.nz is about the social network built on top of them. More importantly, it's the most significant thing that uses the protocol. The "apps" that use the protocol are, in practice, just plugins that work in one particular (somewhat geeky) app, and that particular app is primarily a client for the social network.


This is really cool. I dig the API and the mechanisms to exchange data. I've been working on a DBaaS (like Firebase) that works via message passing between users, and now I'm thinking Scuttlebot could be a great tool to integrate with it.

I wish there was more info on how the replication worked...



So the homepage of this project is basically documentation. Where others introduce use-cases, you actually teach us how it's done. I love it.


Except for those of us who have no idea really what it is, how it would be used, or what the features really do.


Then you probably want https://scuttlebutt.nz/ instead, which is more of a user introduction.


It can run directly in the browser as well: https://github.com/arj03/ssb-browser-demo


Looks very cool. But to the author: you'll have to write some introductory "what is this" documentation, because the single sentence on the landing page isn't quite enough.

"It's a magofinisticks peer-to-peer storage functionomatic" is how it might read to some people. So what it is, what it does, the value of it, some general overview of the features and how they work would go a long way.

Looks cool though.


I've used Patchwork before, but can someone comment on the other applications of the Scuttlebutt network and where this Scuttlebot fits in?


Scuttlebot is the core server and plugins for Scuttlebutt. The GitHub repo is now called ssb-server. Patchwork uses it for replication and adds a GUI and new plugins on top of it.


Yes, there is a decentralized git: https://git.scuttlebot.io/%25n92DiQh7ietE%2BR%2BX%2FI403LQoy...

There are also a decentralized book review system and decentralized chess.


SSB is great conceptually, but it is plagued with bad design decisions, and with JS. I gave up on it after trying it a couple of times.

If someone took it, and did it right, it might be a huge hit.


Awesome similar DWeb projects for those interested/inspired:

Ceptr/Holochain: http://developer.holochain.org/

DAT protocol: https://datprotocol.github.io/how-dat-works/


Why does the users' data have to be stored in the form of a linked list (or "blockchain")? Couldn't every peer in the network just hold their own database full of messages and blobs and sign them with their private key when requested by an invited follower?

Edit: typo


Basically, Scuttlebot is a collection of linked lists (or blockchains, if you will) because you only have to discover one message to be able to fetch the rest of the related content, since it's all linked as a DAG.


Yeah, so this is the design decision that I don't really understand. Each message on the blockchain contains the author's public key representing their identity. If I already know the identity of the author, I could just go ahead and befriend them, and later ask them or other peers who are their friends to share their "feed" with me. If the feed lived in a relational DB, I could then use SQL to ask for particular messages, like:

"select * from posts order by created_at desc" or: "select * from posts order by created_at asc where created_at > <timestamp>" or: "select * from blobs where type = 'photo'" etc.

In other words, I could query the feeds the way I want to. Also, this would allow the author to delete content, although that would also require sending DELETE requests to peers mirroring the author's feed, and those requests would have to be respected, but that's another story.


The idea behind this is that it's streams all the way through, so there could be something more efficient to use for these "rolling" databases that are basically streams of data. So instead of SQL, it's map-filter-reduce, a query engine that works over streams. https://github.com/ssbc/ssb-query/blob/master/README.md - https://github.com/dominictarr/map-filter-reduce
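Based on the ssb-query README linked above, a query looks roughly like this (assuming an ssb-server handle `sbot` with the query plugin loaded; the exact operator set may have evolved, so treat this as a sketch):

    // Stream the 10 latest post texts via map-filter-reduce operators.
    sbot.query.read({
      query: [
        { $filter: { value: { content: { type: 'post' } } } },
        { $map: { author: ['value', 'author'], text: ['value', 'content', 'text'] } }
      ],
      limit: 10,
      reverse: true
    })

The result is a pull-stream source, consistent with everything else in the Scuttlebot stack being pull-stream based.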

It's probably a result of going with pull-streams and basically architecting all Scuttlebot tools/libs around pull-streams, so you want something that works over streams.

And yeah, Scuttlebot is not built with SQL as its query language, but you could probably easily write something that understands both SQL queries and the file format Scuttlebot stores the messages in. The thing is, you don't want to send these queries to other peers (they are all local), because the system is set up to be offline-first and only reaches out to the network when it has to, not just for listing the latest messages.

In general, you want all the fetching/syncing of data to happen exactly when you want it to happen, and when you're done, nothing more happens until the next "sync". So everything you do is local.

But then, if "query the feeds the way I want to" is the reason you don't like it, then so be it. But if it's just that you don't understand it, then reading through and digging into the different repositories will give you a better viewpoint. Start at the ssb-query and look what's using it and what it's using.


I didn't mean to be provocative, I just wondered what the advantage of that particular setup is. I guess your comment and the peer's explain it pretty well. Also, thanks for the links to the repos. I'll start from there.

Is there any dev forum on ssb or other place I could catch you guys at if I want to learn more or contribute?


I think they want to guarantee an unbroken chain of entries because a malicious client could otherwise omit some entries as a means of tampering with the message that was intended to be conveyed, or to censor some messages.


The protocol guide is quite beautiful: https://ssbc.github.io/scuttlebutt-protocol-guide/


I recently discovered SSB and was really intrigued.

I would love to hear an experience report from actual users like what is working great, and what is bad.


It's been a while, but I used to be an active SSB user.

I hosted SSB pubs and used to post on patchwork semi-regularly.

I thought it worked pretty well as a social network. I discovered new and interesting ideas from folks that I don't see much on mainstream social media.

I haven't followed the space much recently, and I'm curious about how it has evolved over the last year or so.

My favorite memories on SSB:

Someone promoted a book that they had written, and we arranged a sales transaction by talking purely over the network. I sent them some amount of Bitcoin, and they sent me the PDF of their book. It felt very personal to work with the author directly and sidestep payment processors.

I loved taking my laptop out on the train or to a coffee shop, and replying to threads and publishing a post to SSB while offline. Something about reading other people's ideas while disconnected, then writing my thoughts, and having them automatically sync to the network when I got back on my WiFi at home, gave me a different perspective on ways to use technology.


> I loved taking my laptop out on the train or to a coffee shop, and replying to threads and publishing a post to SSB while offline. Something about reading other people's ideas while disconnected, then writing my thoughts, and having them automatically sync to the network when I got back on my WiFi at home, gave me a different perspective on ways to use technology.

You can do this with Usenet and most BBS's. Most native IM apps will also do this for you, plus of course there's email.


One cool feature of Scuttlebot is that if you and your friend are already following each other, you only need a P2P connection to each other to be able to exchange messages. So if you're on a train with ad-hoc WiFi connecting you to each other, you can still proceed as usual and sync stuff.

I don't think this feature exists in Usenet and BBS's, where there is a central server that masterminds the sync everyone is doing. Same with email: it requires a server (local or remote) to send/receive stuff, while in SSB both local and remote are usually the same machine.


For BBSs, you're correct. But Usenet (and email) used UUCP, which I think is actually much closer in concept.

UUCP is a store-and-forward mechanism, not dependent on a real-time connection to a particular server. I used to run a node, connected to a guy I'd met who worked for an ISP. He had, gasp, a full-time network connection via ISDN; pretty magical in those days of dial-up.

So, Usenet feeds were configured on my own little system, essentially subscribing to the newsgroups I wanted. Periodically, it would dial out to the other gent, upload any new posts from me, and download anything new on those newsgroups. My email came and went the same way. Naturally, what I got was a subset of what he had accessible.

While I never used this functionality, I could have had others call up to me, and I would just be an intermediate link in the chain. RFC 976 (https://tools.ietf.org/html/rfc976) describes how this works for email, including SMTP over UUCP.


Interesting, I didn't know that (BBS and Usenet were before my time), so thank you for sharing.

That does sound a lot like how Scuttlebot treats feeds as well.


Thank you!

The lack of multi-device support wasn't too cumbersome?


The lack of multi-device support was a constraint that gave me a different perspective of interactions on the web.

My SSB keypair was on a work laptop, so when I changed jobs and had to give my laptop back, I lost my keypair. Now, I could have exported the keypair and continued to use my "account" on my new laptop. The network would have synced on my new device, and I'd have gotten all my posts and pictures back. But I decided to embrace the constraint instead.

When I rejoin the network, I'll have a new keypair, and no post history. I think this can have an interesting effect on how we view our attachment to data.


Creating a new account from scratch also means rotating your keys, which is good practice. As the last few discussions on PGP have shown, the model of having a long-term identity key is more dangerous than it seems, because a single mistake (by you or by the application developer) means so much content can be leaked. It's probably easier to let the natural connections between people be the vector of long-term trust, which, ironically, SSB emphasizes.


I love SSB, in principle. The protocol itself is very well documented[1]. The community tends to center lofty ideals around accessibility, anti-authoritarianism, and social responsibility[2][3] which I'm all about.

Unfortunately, I've found the software implementations maintained by the SSBC to be "barely working" at best, with pretty scant and out-of-date documentation (to the point where code in "Getting Started" sections doesn't actually work) for most libraries/tools; PRs and issues languish for many months without a response; and I've noticed a disappointing tendency among the SSB community and maintainers to be a bit condescending to newcomers and less technical users (not to mention cliqueish), in a way that seems in tension with some of the ideals they pay lip service to.

That said, I'm aware that we're all human, and my experience here is as more of an observer and tinkerer than an active participant, so it should be taken with a grain of salt.

[1] https://ssbc.github.io/scuttlebutt-protocol-guide/

[2] https://www.theatlantic.com/technology/archive/2017/05/meet-...

[3] https://www.zdnet.com/article/manyverse-and-scuttlebutt-a-hu...


I've been using SSB for about a year and a half, and love it. The ideas behind the dweb do not get the credit they deserve, IMO: p2p data sharing, where your data is sent directly to your friends rather than centralized on a know-it-all server, is what the world needs, for anyone concerned about privacy.

If I had to say what I don't like about SSB, it would probably be that it's not easy to write an application for it. After trying for a while, I turned to Dat and the Beaker browser, which let me write frontend applications the usual way, with just an API to manage the p2p archives.

Both Dat and SSB are JavaScript projects, so if you want to write your dweb applications using another language, you're out of luck. I heard there is a Rust implementation of Dat being worked on that would expose an ABI, and thus allow writing bindings for other languages, if it comes to fruition.

Also, for all dweb stuff, the main current problem IMO is privacy. You can have private discussions on SSB, and you can add an encryption layer on top of Dat/Beaker (I made myself a library using libsodium; that's good enough), but the main focus is publishing things for the world, like you publish blog posts.

All in all, on my free time, I'm now more interested to work on the dweb than on the web, so I can only encourage people to toy with it.


Thank you!

The lack of multi-device support isn't too cumbersome?

Also, do you not fear, due to the gossip protocol, that your private messages may be stored forever by peers, and that one day your private key leaks and all your conversations are publicly exposed?


You're welcome :)

> The lack of multi-device support isn't too cumbersome?

For SSB, it's not a problem for me, because I only use it on my laptop. There's a mobile app, Manyverse, but you have to make a separate account for it, so usually SSB users will have a "john_doe" account and a "john_doe_mobile" one. I guess that's good enough.

For Dat, yes, it's been a major problem for me for a while, because I mainly use it to make my own "p2p cloud", so I want my data on mobile as well. There is the Bunsen browser on Android, quite experimental but able to load dat urls. Sadly for me, localStorage doesn't work in it, and that's what I use to store my encryption keys. I thought it was hopeless for a while, until I started using Termux (which basically provides a POSIX environment on Android). From there, I start dat processes to replicate my data, and I wrote a small server to serve them on 127.0.0.1, which then allows me to use the app in any browser on mobile. Completely hackish, and I can't recommend that any sane person do that, of course :)

> Also, do you not fear, due to the gossip protocol, that your private messages may be stored forever by peers, and that one day your private key leaks and all your conversations are publicly exposed?

Yes indeed, this is a real risk for any p2p data. I _think_ it's still better than having it unencrypted in big databases known for snooping, but we'll have to deal with that at some point. I guess the best would be some sort of encryption capable of self-destructing past a given age? That's a challenge for cryptographers, especially given that it must not be bypassable by simply setting the clock back. Well, I hope the world will surprise me once again :)

On the other hand, when I thought about it, I considered that it may actually be a good thing, depending on how many years it takes to break the encryption or find the keys. If I'm long dead, I'm fine with my data being decrypted, because otherwise we'd make the work of future historians impossible, with data that is sparse and heavily encrypted so they can't access it.


How do you get connected to the scuttleverse? Did you already know people who were also using it?


Nope, I used a "pub", one of the bot accounts that give you an invitation and auto-follow you to bootstrap you into the network. The way to use them is described here: https://scuttlebutt.nz/get-started/

It's a bit cumbersome, but the purpose is to avoid having any central authority, if I understood correctly (well, except the server running the webpage on which pubs are listed :] ).


I've been on SSB for some 3 years (with some breaks when I'd had enough of npm). Once you're onboarded it works like a charm: exchange of data between peers works swiftly and efficiently, to the extent that you can even use it for realtime chat the way IRC works. The community is colourful and friendly, and the signal-to-noise ratio is high. I've learned a lot about fermentation and growing mushrooms and living off-grid while reading posts on SSB.

My biggest frustration has been that all usable clients are written in nodejs. I recently took a seven months break to cool off from rage over npm (and yarn, for that matter), but now I'm back again.

Onboarding can be tricky because there are no central servers — it's 100% p2p — but I guess it's easier these days than it was in the beginning. And if you know somebody who is already onboard it shouldn't pose a problem at all.

A better site to start with is probably https://scuttlebutt.nz/


Thank you!

The lack of multi-device support isn't too cumbersome?

Also, do you not fear, due to the gossip protocol, that your private messages may be stored forever by peers and one day, your private key leaks, and all your conversations are publicly exposed?


It would be sweet if we had multi-device support, but it doesn't bother me too much: All my IDs are mutually following each other, and I have my “hops” set to 3, so I see virtually the same timeline on each device. What can be frustrating, though, is that notifications and private messages to a given ID are only visible from the device with that ID.

I am not overly concerned about leaks of my private key, although the risk is certainly there.

One thing many people have to get used to, though, is that the log is append-only: once you've published a message there is no way to delete or undo it — it is there for “all eternity”. The positive side is that you're more conscious about what you post and why, because there is no way of taking it back.


One of my wishes is that it would support a hardware token like the YubiKey for storing the private key, to make leaks less likely (although it might not be super performant).


Not as much as I fear this from Facebook


In addition, Manyverse is a social network built on SSB. Each user has a chain, and they sync and relay their friends' chains.


npm install

sigh...


Then again, JavaScript can be thought of as the language of the distributed web...


Yeah, this is where I lost interest too.


I don’t understand why npm install is considered so bad here. It’s just a package manager. It’s not hard to write your own implementation without npm if you prefer.


It is. I tried with Haskell and gave up when I noticed I couldn't easily get JSON serialization to be in the same order as the JavaScript implementation, and therefore couldn't verify or sign messages interoperably with the current clients.


> I tried with Haskell and gave up when I noticed I couldn't easily get JSON serialization to be in the same order as the JavaScript implementation, and therefore couldn't verify or sign messages interoperably with the current clients.

This gets you most of the way there:

> With the Aeson library, the order of keys in objects is undefined due to objects being implemented as HashMaps. To allow user-specified key orders in the pretty-printed JSON, encodePretty' can be configured with a comparison function.

- https://hackage.haskell.org/package/aeson-pretty-0.8.7/docs/...

I'm not sure whether that messes with spacing or not, though, or whether you have specific space-preserving requirements to interoperate with current clients.

Even if not, you can probably look at the source of encodePretty' and work out how not to mess with spacing, or keep it all on one line.

I've wanted a version of Aeson for a while that preserves order and all formatting, for things like linters that are very unobtrusive. It seems your use case would have benefited from that as well.


Well, the problem is replicating the JavaScript order, which isn't lexicographical (it's insertion order). :/


This gives you the ability to order arbitrarily via a user-defined list (the first argument) and then to order by some comparison function, for instance if a new key were added (length in this example).

Given the example:

    {
      "baz": ...,
      "bar": ...,
      "foo": ...,
      "quux": ...,
    }
Then listing all of the keys in your desired order:

    comp :: Text -> Text -> Ordering
    comp = keyOrder ["foo","bar","baz","quux"] `mappend` comparing length
Note that if you list all of the keys in the first argument of keyOrder, it never falls through to comparing length. There is probably a cleaner way to denote that this is the case, but I can't be bothered to figure it out atm.

You can use this function:

    encodePretty' defConfig { confCompare = comp } YourType
And it will produce:

    {
      "foo": ...,
      "bar": ...,
      "baz": ...,
      "quux": ...,
    }
Can you give a sample of the JavaScript output in the order necessary? Is this from npm? They have to use some comparison function to figure out how to sort their keys, I'd imagine, so you could just copy that in Haskell.


I think npm is barely a package manager.

Yarn is a bit better, but that's also my experience with most of the stuff in the JS ecosystem.


I think the quality of X doesn't determine what X is.

npm is a package manager, it manages packages for you. Hard to escape that fact. It's a bit rubbish, but it doesn't make it _not_ a package manager.

Same with the USA, might be a shitty country, doesn't mean it's barely a country


It depends, I think. If you make a package manager, but the only function it has is text editing, is it a package manager?

If you get too far from what people aim to use your tool for, are you still a tool for x?


> If you make a package manager, but the only function it has is text editing

If it quacks like a duck and looks like a duck, it's a duck. If it doesn't quack and doesn't look like a duck, it's not a duck. So in your example, if npm just provides text editing, I'd call it a text editor, not a package manager. If it manages packages, I'd call it a package manager. This is judging things based on their functionality, rather than on what people want to use them for.

I don't think the intended purpose matters too much in what something is. It is what it is; not what it was aimed to be, but what the current state of it is.

And AFAIK, people _mostly_ use npm for managing packages, other use cases are not nearly as popular (like distributing assets).


Are these new systems created just to get a fun name out there?


Scuttlebot is a pun on scuttlebutt, which is the cask used to serve water on a ship, and the protocol was invented by a guy who lives on a boat.

Think of it as “water cooler”: a place where people can meet and smalltalk.


Uh, for something touting e2e encryption and security, it would be better if the site did not serve over plain HTTP by default.


HTTPS adds nothing to a page when the only traffic is server -> client.


Of course it does. HTTP is never only server -> client: from preventing a passive eavesdropper from seeing what pages are being browsed on the server, to cookies and UA fingerprinting, to active content modification in transit.


Pages are still visible in the TLS handshake, and there are no cookies on this page (that would be client -> server traffic). But yeah, good point about the fingerprinting and content modification.


No, they aren't - just the hostname (domain name).



