Ruma, a Matrix homeserver written in Rust (ruma.io)
284 points by type0 on July 4, 2016 | 119 comments



Author of Ruma here. I can answer any questions people have. Ruma is in the very early stages of development, in case that's not clear from the website, so it can't actually be used for its intended purpose yet, but we're getting there. :}


I don't know if you're involved in Matrix's development, but what's the advantage compared to XMPP? I feel like it's bringing a new protocol that, like XMPP, is unlikely to break the massive silos that are Hangouts, Messenger, WhatsApp, etc.

Or am I wrong, and if so what did I miss?

Edit: Don't get me wrong, I'm dreaming of seeing the end of that fragmentation, and I don't want to discourage anyone from addressing that problem; I'm simply wondering if anything is fundamentally different from XMPP.

Edit2: Found a few answers here: https://matrix.org/docs/guides/faq.html#what-is-the-differen...


I answered the same question elsewhere in the thread so I will just link to it: https://news.ycombinator.com/item?id=12028977

Happy to follow up with additional questions, though!


How does end-to-end encryption compare to Tox? Tox uses a Tox ID (a huge hexadecimal number) as the encryption key and a DHT to find peers. Once you connect to a friend you know their IP, and all messages, A/V, and file transfers are encrypted. I find a lot of the documentation about it is missing. I'd like to know whether Tox or Matrix will be better for my next project. Tox also doesn't expose any metadata.


End-to-end encryption is still in the draft phase in terms of the Matrix specification—there hasn't yet been a published version of the spec that includes it. Synapse, the reference homeserver implementation, and Vector, the premier web client, are still developing the implementation and will be deploying it sometime soon, likely followed by a new version of the spec that formalizes it.

The documents I know of that provide details about it are here:

* https://matrix.org/jira/browse/SPEC-162 (the tracking issue for the feature in Matrix's issue tracker)

* https://matrix.org/git/olm (C implementation of the protocol formerly known as axolotl v2 developed for use in Matrix)

* https://matrix.org/speculator/spec/drafts%2Fe2e/client_serve... (draft of the spec for E2E)


Thanks, I will try to find some time to join your team and help with development. I lost hope in Tox despite it being much more mature than Matrix.


Depends on your definition of mature. The security was mature, but the UX was always garbage, there was never consensus in the ecosystem, and huge swathes of critical features (like usable DNS) were never wholly adopted.

Matrix is lacking its E2E encryption but already has everything else. You can use Vector today to replace Hangouts or Skype completely.


Encryption in Matrix is still very much a work in progress[0]. It will use Olm for encryption, which is, AFAIK, just an Axolotl implementation (the protocol behind Open Whisper Systems).

There's no concept of finding "peers" in Matrix. It's more akin to XMPP: each user is attached to a homeserver, which can choose to federate with one or multiple other HSes. Users may then create rooms on their own HS (or the HS they federate with) and other users may join those rooms, similar to IRC.

[0] http://matrix.org/speculator/spec/drafts%2Fe2e/client_server...


It should be noted that rooms are not "on" a homeserver the way IRC channels are on a network. The server is just a convenient reference point. If it goes down, the room still exists, can still be used, and can even be given a new name.


The name of the project means "ugly" in Finnish. I wouldn't sweat it though, no one cares. Just thought I'd let you know since it's mildly humorous.


I learned that recently and also thought it was pretty amusing. Cheers. :}


There's a pair of bars called Ruma in Finland: http://www.ruma.fi/


Wait till you hear what rust(y) means/connotes in English!

(Seriously, I know it seems minor, but I've always hated the name they picked; it's almost like they named it "filth".)


Anecdotally, it is named after a certain multi-stage lifecycled fungus. Much better :)


Maybe so, but if they had named it "filth" and explained that it was this Uber-geeky reference, I'd still say it's a bad name for the same reason.


I also tend to think of thermite before uncleanliness when I hear rust. Rust is just iron oxide, not anything dirty.


When I hear "rust", I get images of rusty pipes, old rusty nails, faucets where the water is tainted with rust, and people's skills getting rusty. I feel disgust, and it's why I didn't feel particularly eager to pursue it further.


That's one of the various reasons for the name; there's no single reason Rust is named "Rust."


What is the underlying protocol that Ruma uses for communication? Is it HTTP, XMPP or something else?


In terms of OSI layer 7 protocol, Matrix uses HTTP, though in the future it may specify other application protocols.


The problem with HTTP is that it drains 11 times more battery on a mobile phone than something like MQTT.


As Perceptes says, Matrix is designed to have modular transports. So if you want to use MQTT or CoAP or WebSockets or whatever for the client/server API, then go for it :) PRs welcome!


Can you give any evidence to support that statement?


[sources required]


Would love to know how you're testing it... can you describe your rig?


The basic functionality of the homeserver is only just starting to land, so we're not testing any of the federation aspects yet. The current test suite is just your standard web app integration testing. Take a look at the repository's README[1] for some high-level details, and src/test.rs[2] for details of how it's actually achieved in Rust using the iron-test and diesel crates. The PostgreSQL database used for testing is managed with Docker and Docker Compose.

[1] https://github.com/ruma/ruma [2] https://github.com/ruma/ruma/blob/04e5af143136c026c1b7449921...
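
To give a flavour of what that looks like (this is a simplified sketch with a hypothetical handler and route, not code from the Ruma repository), an iron-test based integration test is roughly:

    extern crate iron;
    extern crate iron_test;

    use iron::prelude::*;
    use iron::{status, Headers};
    use iron_test::{request, response};

    // Hypothetical handler standing in for a real API endpoint.
    fn versions(_: &mut Request) -> IronResult<Response> {
        Ok(Response::with((status::Ok, r#"{"versions":["r0.0.1"]}"#)))
    }

    #[test]
    fn get_versions() {
        let response = request::get(
            "http://localhost:3000/_matrix/client/versions",
            Headers::new(),
            &versions,
        ).unwrap();

        let body = response::extract_body_to_string(response);
        assert!(body.contains("r0.0.1"));
    }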


It's worth noting that the Matrix.org project itself provides an integration test harness called sytest which can be pointed at other HS implementations to check their behaviour: github.com/matrix-org/sytest


Slightly off-topic, but, as far as Rust goes, I've completely given up on it after realizing (after much study) that it is unsuitable for event-driven (callback-based) design. In short, its idea of ownership, borrowing and references is incompatible with "lower" objects knowing about their "parents" and invoking callbacks to them.

I know there are ways to get around it, but I don't see any which is sufficiently good for me (e.g. low boilerplate, scalable/composable, no compromise of performance). Some details are in my question on StackOverflow[1].

One particular example that C and C++ allow is to have an object "self-destruct" itself in a callback (e.g. after a TCP disconnect, the EventLoop calls back into the Socket, which calls back into MyClient, which directly destroys MyClient, including the Socket). Yes, it is indeed valid to call a (virtual) function on a class and for that class to proceed to delete itself [2]. This is very nice because it removes the need for special cleanup code all around and/or half-dead states. In Rust, this seems impossible by design (i.e. it would need unsafe, and it's unclear if the compiler itself allows it anyway).

[1] http://stackoverflow.com/questions/36952894/event-driven-des...

[2] http://stackoverflow.com/questions/3150942/c-delete-this


Rust is very, very explicit, whereas every event-driven framework is (usually) a full-throttle magical land of monkey patching or wrapping everything, plus the not-so-well-advertised but very forbidden dark marshes full of nasty blocking I/O goblins. Usually this means there's no proper API to use the async parts without the magic of the event loop.

In Rust you'd have to either make a few (unsafe) global ( https://github.com/rust-lang-nursery/lazy-static.rs ) structures to keep track of the event-driven state, or pass them into every callback, or make every callback somehow derive (or be derived from) a common structure that does this bookkeeping.

So far the language doesn't have syntactic sugar for this, but I think it'll be there in a few years. The compiler is up to the task (as you can already see a few 3rd party macro-driven solutions for similar "magic" things - such as serde's custom macros).

But since every event-driven thing can be implemented as a queue and a consumer thread pool (as it's implemented under the event-driven hood), I don't think Rust is a non-starter for event-driven solutions. Though I'm inclined to agree that the extra care needed to satisfy the type system to get easy callbacks is annoying.
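
To make the global-structure option concrete, a minimal (hypothetical) sketch using lazy_static might look like this:

    #[macro_use]
    extern crate lazy_static;

    use std::collections::HashMap;
    use std::sync::Mutex;

    lazy_static! {
        // Hypothetical global registry of per-connection state,
        // guarded by a Mutex so callbacks on any thread can use it.
        static ref CONNECTIONS: Mutex<HashMap<u64, String>> =
            Mutex::new(HashMap::new());
    }

    // Callbacks can record or look up state without owning anything themselves.
    fn on_connect(id: u64, peer_addr: &str) {
        CONNECTIONS.lock().unwrap().insert(id, peer_addr.to_string());
    }

    fn on_disconnect(id: u64) {
        CONNECTIONS.lock().unwrap().remove(&id);
    }

    fn main() {
        on_connect(1, "10.0.0.2:443");
        on_disconnect(1);
    }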


I have to disagree about the magical monkey patching. Event-driven systems can be made to be very simple and easy to understand, and such that much boilerplate is avoided. It's just that most programmers are not capable of doing that, and the frameworks that become popular usually aren't subsequently "fixed".

A few hints how to do it:

- no shared_ptr / ref counting, make object ownership clear

- design to allow destroying an object at any time even from a callback

- don't make callbacks harder than they need to be (use virtual functions or simple macro hackery to reduce boilerplate of using function pointers; don't introduce "signals and slots")


Signals and slots seem like a way of introducing an extra layer of type safety and explicitness: you cannot accidentally pass the wrong function pointer to a handler, and it's easier to extract your wiring graph. They don't make things all that much harder either; they add a slight bit of ceremony, but it's worth it. (Mostly based on my experience using Qt.)


At least in the Qt implementation, signals and slots make it easy to forget to connect essential signals, and to understand which signals are essential. I also feel like they tend to encourage making the interface more complex than it needs to be.

On the other hand, with function pointers or virtual functions, you can easily ensure a required callback is provided, by requiring the user to pass it in the constructor.

I don't see any difference related to "accidentally passing the wrong function". In either case you need to specify which function to call and on which object; for virtual function callbacks it's even easier and harder to fail. Type checking can also be done by the compiler in either case (even for function pointers at least in C++ it is possible to make a type-safe wrapper, see my implementation of lightweight callbacks [1]).

[1] https://github.com/ambrop72/aprinter/blob/master/aprinter/ba...


I'm just now completing a rather large event-driven system in Rust. It's also callback-based and has to interoperate with C.

I do struggle with the ownership rules sometimes but it's pretty much always because I'm clashing with the bad habits I learned coding in C/C++, or I'm shooting my future-self in the foot because my limited cognitive capabilities can't always keep all code paths in memory perfectly all the time.

The borrow checker came with an upfront cognitive cost that ultimately saved me from multithreaded async I/O madness later on. I can sleep at night knowing my multithreaded async system is more sound and easier to safely extend/maintain than it would have been had I created it in C/C++.

I can see why some people might give up on Rust. In C++ you can take the easy way out (use mutable aliasing etc.) and you can quickly get things 'working'. You don't have it so easy in Rust because you have to carefully think about end-end ownership.

Also in Rust I found that when I did make bad design decisions it was sometimes a lot more work (redesigning structs, shifting ownership responsibility) than it would have been in C/C++ (because I probably would have laid land-mines for my future self by mutably aliasing things etc.).


It would probably help if you described how you build a large, event-driven system with callbacks in Rust. Then, the parent and anyone else doubting it would see specifically how to go about it without problems they describe. Also, any links you have to posts specifically on good style or structure for these in Rust would be helpful if you couldn't give details on your own work for whatever reason.


That's inaccurate.

You can do what you describe as long as you use Rust's reference counting support (Rc and Arc), which includes weak references that can be used for parent pointers, plus RefCell and Mutex for mutation.

To have an object "self-destruct", have it remove all reference counted pointers to itself.

You can pass an &Rc<Self> (or &Rc<RefCell<Self>> or &Arc<Mutex<Self>>) as the self parameter if you want to let an object create references to itself. If you want an object to be able to drop itself immediately in that case, use an Rc<Self> instead of an &Rc<Self>, and call drop(self) in the method (this also works if you are not using reference counting and just pass Self, of course).
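
Here's a minimal sketch of that idea (the Parent/Child types are hypothetical): the child keeps a Weak pointer to its parent to avoid a cycle, and "self-destructs" by removing every strong reference to itself.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Parent {
        children: RefCell<Vec<Rc<Child>>>,
    }

    struct Child {
        parent: Weak<Parent>, // weak parent pointer, no strong cycle
        id: u32,
    }

    impl Child {
        // "Self-destruct" from inside a callback: remove the parent's strong
        // reference, then drop the one we were handed.
        fn self_destruct(this: Rc<Child>) {
            if let Some(parent) = this.parent.upgrade() {
                parent.children.borrow_mut().retain(|c| c.id != this.id);
            }
            drop(this); // the last Rc goes away; the Child is freed here
        }
    }

    fn main() {
        let parent = Rc::new(Parent { children: RefCell::new(Vec::new()) });
        let child = Rc::new(Child { parent: Rc::downgrade(&parent), id: 1 });
        parent.children.borrow_mut().push(child.clone());

        Child::self_destruct(child);
        assert!(parent.children.borrow().is_empty());
    }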

You should try to not do this kind of thing though, because it adds overhead and the compiler cannot statically check that you are not leaking reference counted cycles or deadlocking on mutexes or refcells (which is not a Rust limitation, it's just impossible without having the programmer write a machine-checkable proof).

If you do it in C++ the compiler also cannot check whether you are referencing freed memory or incorrectly concurrently modifying the same object.


I think Rc<RefCell<T>> falls into the "high boilerplate workarounds" category that ambrop7 mentioned they were trying to avoid.


Rc<RefCell<T>> is not high boilerplate at all, and it's a super common Rust pattern. If you think this is too much boilerplate then I'm really not sure what you were expecting. It's just composition of reference counting memory management and mutability, a great example of modular design. What more did you want?


Rc means dynamic memory allocation, which means every such element of your application will end up in its own dynamically allocated memory block. This is inefficient and highly undesirable for resource-restricted / embedded applications.


Right, you shouldn't reflexively use Rc everywhere. That seems like a completely different topic, though: what I was responding to was the assertion that composition of Rc and RefCell was "boilerplate".


Using an Rc<RefCell<T>> comes down to one method call, .borrow_mut() - it's not exactly magic, but it's far from significant boilerplate either. Additionally, it's not a "workaround", it's an inherent part of most useful Rust code. Servo contains 79 uses of borrow_mut.


Note that Servo has a GC (the spidermonkey one) since it deals with a managed DOM, so it is an atypical example. Most Rust code I've seen has far less RefCell usage; but yes, RefCell is pretty idiomatic and the boilerplate is minimal.


There's a lot of stuff in Rust that makes such types manageable. Also, such types should always be expressed as:

    struct MyFancySharedThing<T> { thing: Rc<RefCell<T>> }

Rc and RefCell are implementation details and should _always_ be hidden.
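
For example, a small (hypothetical) wrapper that keeps Rc and RefCell out of the public API could look like:

    use std::cell::RefCell;
    use std::rc::Rc;

    // Hypothetical wrapper; callers never see Rc or RefCell.
    struct Shared<T> {
        inner: Rc<RefCell<T>>,
    }

    impl<T> Shared<T> {
        fn new(value: T) -> Shared<T> {
            Shared { inner: Rc::new(RefCell::new(value)) }
        }

        fn handle(&self) -> Shared<T> {
            Shared { inner: self.inner.clone() } // another handle to the same value
        }

        // Borrow checking still happens, but only inside this method.
        fn with_mut<R, F: FnOnce(&mut T) -> R>(&self, f: F) -> R {
            f(&mut *self.inner.borrow_mut())
        }
    }

    fn main() {
        let counter = Shared::new(0);
        let alias = counter.handle();
        alias.with_mut(|n| *n += 1);
        counter.with_mut(|n| assert_eq!(*n, 1));
    }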


> You should try to not do this kind of thing though

Is there a recommendation on what to do instead?


I have a protocol implementation in development which looks somewhat like this (minus the "delete this" part: I just keep my Peers in a bunch of vectors and remove them from those; Rc then drops the peer when the function ends and nobody is capable of accessing it). Turns out it's relatively easy - it's what the Rc<RefCell<T>> type is for.

I'm coming from Python, so the odd simple integer/boolean check (which is what Rc and RefCell come down to) isn't an issue for me - it might be for you, depending on what you're writing, I suppose, although 99% of the time they're not going to be what you need to optimise.

However - Rust is not an object-oriented language. This sort of design isn't necessarily what you want to be using. In your particular case, what you'd probably want is the EventLoop owning the Socket and MyClient, calling back into MyClient directly, and allowing MyClient's callback to return a value describing whether to destroy it or not. Libraries like rotor[0] do exactly this.

[0] https://github.com/tailhook/rotor
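
A rough sketch of that style (hypothetical types, not rotor's actual API): the event loop owns the clients outright, and each callback returns a value telling the loop whether to keep or destroy the client.

    enum Action {
        KeepGoing,
        Destroy, // ask the event loop to drop this client (and its socket)
    }

    trait Client {
        fn on_readable(&mut self, data: &[u8]) -> Action;
    }

    struct MyClient;

    impl Client for MyClient {
        fn on_readable(&mut self, data: &[u8]) -> Action {
            if data.is_empty() {
                Action::Destroy // e.g. the TCP peer disconnected
            } else {
                Action::KeepGoing
            }
        }
    }

    struct EventLoop<C: Client> {
        clients: Vec<C>, // the loop owns the clients
    }

    impl<C: Client> EventLoop<C> {
        fn dispatch(&mut self, idx: usize, data: &[u8]) {
            if let Action::Destroy = self.clients[idx].on_readable(data) {
                // Nothing else holds a reference, so dropping is trivially safe.
                self.clients.swap_remove(idx);
            }
        }
    }

    fn main() {
        let mut el = EventLoop { clients: vec![MyClient] };
        el.dispatch(0, b""); // empty read: client destroyed here
        assert!(el.clients.is_empty());
    }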


Well yes, I do want to avoid any unnecessary overhead, and specifically I want to avoid any dynamic memory allocation. Consider a design for a hard real-time system and/or microcontrollers where you want to have all memory allocated statically. In C++ this is pretty easy to do if you want it (hint: don't use shared_ptr).


I'd agree that Rust doesn't manage to entirely avoid dynamic memory allocation unless you use unsafe code... but as you say, you can't use the safer parts of C++ without running into the same issues.

There's also a chance that you might be able to encapsulate your unsafe code behind safe abstractions, and Rust can help prove the rest is memory-safe.


Right now I'm working on something where the choice was between Nim and Rust. Since I required integration with lots of C/C++ code and there are callbacks from C that execute my code, Nim was the clear choice.

The main app is C that links all my libs together, but the actual C code just calls into Nim to execute the code. I talk to all the C libs using Nim.

I'm very happy with that decision. It was a breeze to setup and is great to work with.


> I required integration with lots of C/C++ code and there are callbacks from C that execute my code, Nim was the clear choice.

I don't follow - Rust integrates with C quite well, too.


If it did as well as Nim, the person I'm replying to would not have the issues he is describing.


Could you elaborate with more specifics here? How is Nim's interfacing with C code nicer than Rust's?


Thanks for the hint, I must look at Nim sometime :)


Have you considered that maybe event-driven programming is problematic?

As I see it, it's basically letting you ignore the context the code is called in, but the issue with that is, well, IME failing to understand that properly is where nearly all bugs come from.

It's true that Rust's rules about aliasing are frequently, well, annoying, but it's hard for me to think that poor support for event-driven programming is that bad of a thing.

... FWIW, you should possibly take a look at Servo. Browsers have to support some level of eventing, so it seems like they probably have a system for this.


No, I long ago came to the conclusion that event-driven programming is the right way to program large-scale network applications. The problem is that it is often poorly supported in languages/libraries, and for some reason it seems to be looked upon as "elitist". I'm not sure if it's just the culture or if it really is harder for people to grasp.


I think "totally unsuitable" is quite a strong statement, but I also came to the conclusion that Rust plus callbacks is not a match made in heaven while trying to implement a networking library (although that was pre-1.0).

Wrapping everything in Rc<RefCell<>> and lots of dereferencing was one turn-off (yes - C++ also needs shared_ptr<>) from a syntactic and ergonomic point of view. That might be better now due to some automatic derefing. The biggest issue I had was that reference-counted 'interfaces' (trait objects in Rust) were not working in the required way at all (no up- and downcasting of boxed trait objects was possible). I don't know how this has changed since then.

If I needed to perform the task again, I would probably try to model everything with synchronous/blocking API calls, as this seems to fit much better with Rust's ownership model, even though it would sometimes require lots of threads.


You don't need shared_ptr in C++, and I prefer to never use it (or reference counting in general). See my other comment: https://news.ycombinator.com/item?id=12031739


Yeah but if you don't use shared_ptr and friends you will make exactly the sort of mistakes that Rust prevents.


You may be able to prevent some specific issues using shared_ptr, but in my experience it is better long-term to design the application so its use is unnecessary. One issue that pops up is reference cycles, so then you also need the weak_ptr boilerplate.

Specifically, I've found that is possible to prevent any callback-related use-after-free and related callback hazards by following these simple rules:

- Callbacks should always originate from the lower layers (the event loop), and should never be invoked synchronously as part of a call from the upper layers. If you need to call back soon, use a facility of the event loop that makes the event loop call you soon.

- When invoking a callback, "return" immediately afterward. If you have the desire to do something after invoking a callback, do so by first queuing up a call to you from the event loop using the same facility mentioned above. Actually, callbacks typically return void; I usually call them by "returning" them, i.e. return callback();

- Make good use of destructors or a related facility (if your language doesn't literally have destructors) to ensure that when an object is destroyed, it will not receive any more callbacks from the objects that it used to own.

I think that a programming language could even enforce these rules at compile time (and the logic for this is much simpler than Rust's ownership system).
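
A minimal sketch of the first two rules in Rust (hypothetical event loop API, not from any real library): instead of calling back synchronously, upper layers queue a closure and the loop invokes it at a safe point.

    use std::collections::VecDeque;

    // Hypothetical event loop with a "call me soon" queue.
    struct EventLoop {
        soon: VecDeque<Box<dyn FnOnce()>>,
    }

    impl EventLoop {
        fn new() -> EventLoop {
            EventLoop { soon: VecDeque::new() }
        }

        // Upper layers never get called back synchronously; the work is
        // queued and run from the loop at a safe point.
        fn call_soon<F: FnOnce() + 'static>(&mut self, f: F) {
            self.soon.push_back(Box::new(f));
        }

        fn run_once(&mut self) {
            while let Some(callback) = self.soon.pop_front() {
                callback(); // the loop holds no other borrows while the callback runs
            }
        }
    }

    fn main() {
        let mut el = EventLoop::new();
        el.call_soon(|| println!("invoked from the event loop, not the caller"));
        el.run_once();
    }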


Why do you want to use object semantics for resource management? It sounds like you are dealing with a dependency graph - wouldn't it be much simpler to just use a graph data structure?


I don't understand what you mean by "use a graph data structure". I want to have a framework / set of patterns for building large-scale event-driven applications using composable components.


`Unsafe` doesn't mean you shouldn't do it, it just means you have to tell the compiler "I know what I'm doing, trust me" - and be careful, because it will provide fewer guarantees since it trusts you.


Indeed, you beat me to the punch. You can totally use unsafe. It just means that the compiler is essentially unable to _prove_ that your code is memory safe. That doesn't mean the author can't. Documenting the unsafe points in your code also tells you where to look if you do start seeing memory-related problems, which can be helpful for debugging.


You should try rotor for event-driven programming - it uses composable state machines to define network protocols.


I haven't had a chance to work on a real-world problem with Rust (I mostly use Python, but have started learning Rust for fun), so this is genuine curiosity speaking: is this (https://github.com/ruma/ruma/blob/master/src/main.rs#L60) idiomatic Rust? If I understand correctly, lines 63-76 are using chaining because Rust doesn't support optional / named parameters? And lines 78-107 are basically just "run server if no error"? Seems like an awful lot of code for what it does... Could it be made nicer, or is that level of verbosity normal in Rust?

EDIT: again, not trying to put anybody down, I would just like to learn if this code is representative of bigger Rust apps.


Yes, all of that is more or less idiomatic Rust.

The first section of code you mention is using what's referred to as the "builder pattern" in Rust, where you have a dedicated type that chains method calls to mutably construct an object which is produced at the end. You're not alone in thinking this pattern is ugly and verbose (though I've grown to like it, personally.) There is a lot of discussion[1] about providing a terser approach via keyword arguments, but nothing concrete is planned at this point.
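
For anyone unfamiliar with the pattern, a bare-bones builder looks something like this (hypothetical types, not Ruma's actual configuration API):

    struct Server {
        host: String,
        port: u16,
    }

    struct ServerBuilder {
        host: String,
        port: u16,
    }

    impl ServerBuilder {
        fn new() -> ServerBuilder {
            // Defaults that the chained calls below can override.
            ServerBuilder { host: "localhost".to_string(), port: 8080 }
        }

        fn host(mut self, host: &str) -> ServerBuilder {
            self.host = host.to_string();
            self
        }

        fn port(mut self, port: u16) -> ServerBuilder {
            self.port = port;
            self
        }

        fn build(self) -> Server {
            Server { host: self.host, port: self.port }
        }
    }

    fn main() {
        let server = ServerBuilder::new().host("0.0.0.0").port(3000).build();
        println!("{}:{}", server.host, server.port);
    }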

The second section of code you mention is a match statement (essentially a fancy, type-based switch statement) that inspects the arguments presented to Ruma from the command line and dispatches to the appropriate code. This supports two subcommands currently, `run` and `secret`, which are the two arms of the match. The `run` arm loads the user's configuration from the configuration file into a Rust structure, then starts the web server, which blocks the process until it is killed, reporting any errors that occurred along the way. Rust is very explicit about code paths that are optional or may produce errors, so you often see the "some" and "none" (or "ok" and "error") cases handled explicitly in code, which will certainly look verbose coming from languages like Python. I came from Ruby, myself, and what I've learned is that the places where Rust feels weighty are almost always making me accountable in places where I would've otherwise left a corner case unhandled or introduced a bug in a Ruby program. There are some constructs in the language to make handling optional and error cases a little less verbose (e.g. `unwrap_or`, `try!`, etc.) but in many cases I prefer the longer form for simplicity.

[1] https://github.com/rust-lang/rfcs/issues/323


Lines 63-76 are the builder pattern. I think it's the standard idiom: https://aturon.github.io/ownership/builders.html which is familiar from C++ or Java.

Lines 78-107 are "do stuff or handle error". You could do "do stuff or crash" instead more tersely via thing_that_might_fail.unwrap().

I think lines 80-87 for example are idiomatic. Lines 89-100 seem a little strange to me in that they embed one error case in an ok case. I would probably do one error per block, returning the server and then using it in a separate block. But I'm new to Rust so don't take my word as gospel.

In any case, I think it's true in any language that when you explicitly handle every error in a unique way, your code is longer. I suppose one way to shorten would be to make each error-handling case more similar: just print the error without a unique prefix. Then make a function for each subcommand that returns Result<(), Error> (nothing on success, error on failure). Then the functions could use the try! macro to make the error-handling implicit (the macro automatically propagates error to the caller), and main() could handle all the errors with just one path. Something like:

    fn run() -> Result<(), Error> {
        let config = try!(Config::from_file());
        let server = try!(Server::new(&config));
        server.run()
    }

    fn secret() -> Result<(), Error> {
        let key = try!(generate_macaroon_secret_key());
        println!("{}", key);
        Ok(())
    }

    fn main() {
        ...
        let result = match matches.subcommand() {
            ("run", Some(_)) => run(),
            ("secret", Some(_)) => secret(),
            _ => Err(SomeErrorType::new("no such subcommand")),
        };
        if let Err(e) = result {
            println!("error: {}", e);
            // or to stderr via the ugly:
            // writeln!(&mut stderr, "error: {}", e).unwrap();
            process::exit(1);
        }
    }
EDIT: I suppose you could also make some variation of the try! macro which takes another string to prepend to the error. (Calling or_else with a function to prepend, maybe.) Then you can duplicate the original error messages in this slightly more terse form.


Is it that verbose, though? I have a hard time seeing how it could be made any shorter in other languages, without sacrificing functionality.

EDIT: I think what gives the code its length here is the explicit error handling. If you do the same in Python you'll end up with the same amount of code. In Rust, you could unwrap() everything and lose nice error messages and probably slash the LOC in half.


With the built-in error handling macro, you can generally do:

    let value = try!(fn_that_may_error());
Rather than having to muck around with explicit matching on Ok vs Err.

The try! macro would make the linked function much less verbose... Except that what try! does is bubble the error further up the stack by returning an Err if the called function returns an Err — and since this is the main function, there's nowhere up the stack to go (main() has a return type of "Unit," which is Rust's version of void or null; for functions that pass errors up the stack you'd have a return type of Result<ReturnType, ErrorType>, which allows errors to be returned), so try! can't work.

I agree that some of that error handling in 79-101 is pretty awkward looking, but luckily it's not super representative of idiomatic Rust: usually you can make Rust much more concise by using try!, with the exception of the main() function which can't use try! and has to actually handle the error.


I'd prefer to use `expect` in this case, or if I wanted to present a non-scary error message, write a custom macro to print the error and exit.


That's nice, but is Matrix going anywhere? There are so many of these federated social systems now (Diaspora, Urbit, GNU Social, etc.) with few users. (And they don't interoperate with each other, which is a shortcoming.)

Maybe we just need some federated system that's a directory only. All it does is help you find people and resources, after which they speak end to end.


It's worth noting that Matrix is the only federated system which sets out from the outset to bridge together all the others (hence the name Matrix). Whilst we don't have bridges to diaspora/urbit/gnu social yet, it's only a matter of time. PRs welcome!

In terms of the community size - looking at just the Matrix.org homeserver, there are around 300K messages a day, 250K users (of which about 200K are bridged), and 30K rooms. Meanwhile there are at least 500 openly federated homeserver installations we can see from the Matrix.org homeserver.

Synapse is relatively mature as a homeserver (although it still has performance challenges; it is very cool to see Ruma progressing!) - meanwhile Vector, as a flagship web/iOS/Android client, is in late public beta and very usable too - especially with end-to-end encryption on the horizon.

I may be biased (being project lead for Matrix), but I'd say that it's going somewhere :)


This sounds promising.

Is anyone already using Matrix as an event server/queue for interactive environments and/or sensor networks (similar to the Stanford Event Heap [1])?

What is the minimum latency for Synapse/Ruma?

[1] https://graphics.stanford.edu/papers/eheap/


We've done some stuff with sensor networks - eg our FOSDEM 2015 demo was hooking up cars via OBD2 ports to stream their engineering telemetry into Matrix for visualisation/analytics etc: https://archive.fosdem.org/2015/schedule/event/deviot04/

However, this is still fairly PoC. Our latency is deliberately high at the moment (given all events are persisted on all participating servers, and signed etc) - typically around the 100-300ms mark depending on server performance involved. If you want lower latency stuff (eg VoIP or MIDI) the architecture is that you use Matrix to be a signalling layer to negotiate the realtime protocol (eg RTP).


> It's worth noting that Matrix is the only federated system which sets out from the outset to bridge together all the others

Nope, Friendica did that first (started in 2010): http://friendica.com/


Fair enough, although I guess I think of Friendica as being more social-network-based rather than realtime chat/messaging/VoIP like Matrix.


"Especially with end-to-end encryption on the horizon": https://matrix.org/docs/spec/client_server/r0.1.0.html#end-t... ... unfortunately there is nothing there yet ;-/ ... This is a sign that encryption will be placed on top, which will weaken the idea, as it is an optional add-on, not a first-class citizen.


The Matrix spec is still in development and E2E has been in progress since the beginning. It is not an "optional addon", it will be part of the core when merged. You're looking at the wrong branch: https://matrix.org/speculator/spec/drafts%2Fe2e/client_serve... is the in-progress spec, and https://matrix.org/jira/browse/SPEC-162 tracks all the progress. Meanwhile, it's even landed on the develop branch of Vector already (at least for the 1:1 ratchet).


What you're asking is pretty hard to answer, since you're essentially asking if a particular technology will become popular. If anyone had the ability to answer such a question, they'd make a lot of money. :} Matrix is still a new technology and most of its specifications are not yet stable. Most of the implementations of Matrix clients and servers are very young, too. Even Vector, the feature-rich web client, which is developed by the Matrix team, only recently entered public beta. The development of the spec and of the tools must come first before the system can be used by the masses. But that's what's happening now. I'd call that "going somewhere!"


Good luck then.


Would you say the Matrix specification is intended to compete more with messaging (Slack, Skype, etc), email, or both? What do you think people would use this for, and what's the value proposition?


From a technology perspective, it could be used for either instant messaging or email-like conversations. The common use cases right now place it more in line with realtime messaging like Slack. We just launched some documentation about Matrix[1] on the Ruma site that explains pretty thoroughly why someone would want to use Matrix, why someone would want to run their own homeserver, and quite literally "Why Matrix Matters"[2]. In short, the value is being able to (eventually, once the software matures) use a single client application to chat with all your contacts across all the various chat networks, and to maintain ownership and control of your own data.

[1] https://www.ruma.io/docs/matrix/ [2] https://www.ruma.io/docs/matrix/why/


The "Why" page comes across to me as a bit of an activist pitch. You can be right (and you probably are) about limited consumer choice, and the dangers of corporate consolidation, and the lack of privacy, but none of those things constitute a selling point to the average consumer. If they did, then the market would at least be in the process of solving these problems for you, and I don't see any sign of that at all.

These things do matter to businesses though. I've seen multiple companies adopting Slack express discomfort over the idea that the communications are hosted and archived by Slack. Email is broken and horrible, but businesses still use it for a lot of the same reasons they might choose Matrix. Slack fixes the horrible, but lacks (and its business model is likely incompatible with) a federated protocol. There could be an opportunity there.

That wouldn't be one company though, that'd be hundreds of different companies all providing their own clients and hosting solutions.


All good points. It will probably be helpful to have a bit more context on the goals of Ruma as a project. It is a passion project, and has no intention of ever having direct financial backing or a business model. The "Why" page sounding like activism is somewhat intentional—it explains the motivations for the project, but it is not specifically intended to "sell" it to a person who is not already sympathetic to the problems it tries to address.

Perhaps a good comparison is the public reaction to Snowden's leaks about the NSA. They haven't been super big news outside of tech and certain political circles. The average person values convenience and a good user experience over any more philosophical or political beliefs related to privacy and security. The idea of a centralized service or company controlling data just doesn't matter to a lot of people. You're absolutely right that the business motivation is not quite there, because there isn't a strong public demand for it.

Matrix itself does have some financial incentives—specifically, its development is funded in part by a company called OpenMarket, who are also the developers of Vector, a web-based client for Matrix that will at some point have a commercial product offering.

Edit: In other words, Vector is planning to do exactly what you describe: provide a product that competes with Slack, but building it on a protocol where someone could create a competitor using the same protocol, allowing them to be interoperable. Ruma does not have a financial stake in the game, and I've chosen to build on Matrix because it is well aligned with my values.


> that will at some point have a commercial product offering

Good -- if someone demonstrates that it's possible to make money with federated messaging systems, then we can see better investments in them.


Matrix itself is a fairly flexible building block for decentralised communication of any kind where you care about decentralising the conversation history. You can use it for messaging, VoIP, IoT data - and once threading lands it'll be good for email and forums too.

Right now most of the clients are geared up for the messaging and VoIP use cases.

The value prop is to provide a meta-network which connects together all the existing communication silos, so users can talk to folks on other platforms (eg Slack, IRC) without caring what service or client they are using. Ideally someone on Slack could end up talking to someone on IRC, bridged via a decentralised Matrix room, without even realising they are using Matrix. Alternatively you could use a native Matrix client like Vector.im or the Matrix API to get at the data. The key thing is that no single silo ends up owning or controlling the data - it is replicated over all the participants, who thus own the conversation equally.

The end goal is to let users select which communication apps and services they want to trust with their data (including running their own, if they so desire), rather than the communication and contacts being fragmented over hundreds of different apps.


On forums, here's how I'd like to have it work:

https://roamingaroundatrandom.wordpress.com/2014/06/01/a-dec...

It isn't very technically detailed, but it should be clear enough. Your system sounds like a good match for what I imagine. The TL;DR: The messages are in focus, not the servers. Content-addressing using hashes, threads managed through messages referencing prior messages directly by their hashes. When creating forums, servers are just (optionally) defined to simplify routing / delivery.

One could build discussion servers on top of the Matrix homeserver, and discussion clients to match.


Heh, this is awesome and pretty much precisely what we have in mind with Matrix. All that is missing is threading and editable message support, which we are working on shortly. Help would be very very much appreciated, especially in building bridges to forums!!


Does this imply that one could build a distributed GitHub-style issue tracker/pull-request manager on top of Matrix? That sounds pretty awesome, if true.


Yes, you could, although it's not a perfect fit right now as the only data structures you get in matrix are a timeline of arbitrary objects (events) for a room, and a key-value map of arbitrary "state" events. This is flexible enough for many purposes (and you could probably use them for issue tracking and PR management), but we can't efficiently store arbitrary object graphs or trees yet. It'd be great to get there though; meanwhile there are other projects like IPFS and IPDB which are active in that space too.


Looking for Matrix vs Jabber. Is there any comparison document?

Also, I was just trying to implement our own Telegram server to take advantage of the free Telegram clients available.

Can anyone compare Telegram with Matrix (except for the federated part)?


Jabber (XMPP) has two fundamental differences from Matrix, as I see it: XMPP is a spec for a system for exchanging messages. It has very small, granular "extensions," most of which are optional. Matrix is a spec for a system that stores arbitrary data ("events") and a way to synchronize and resolve conflicts across federated servers. It doesn't have extension specifications like XMPP does, and includes a much more complete set of features in its core, to ensure that all client implementations have compatible features. This is a problem in XMPP, where many features can't be used effectively because the servers cannot assume that all clients understand many of the extensions.

More detail in Matrix's FAQ: https://matrix.org/docs/guides/faq.html#what-is-the-differen...


Also, it uses HTTP + JSON rather than TCP + XML, which is an instant win.


XMPP doesn't have to use raw TCP. BOSH has existed for years: https://xmpp.org/extensions/xep-0124.html


https://docs.google.com/spreadsheets/d/1tv5QTuoS29YwApP7i2OK... is a fairly comprehensive (crowd-sourced) comparison of different FOSS messaging clients, including Telegram.


Nice little project, really clean code - super easy to understand after reading some of the Matrix spec.


How does Matrix compare to the Wave protocol?


They are related to some extent. The big differences were that Wave was built on XMPP, driven largely by and for Google, and put higher emphasis on collaborative editing and threading and less on bridging. Matrix is non-profit, open source, and focuses on defragmentation and bridging from the outset. Matrix also provides many more clients than were ever available for the Wave protocol, some of which have subjectively better UX.

If Wave had bridged with protocols like email or IM, provided a clearer UX for newbies, and nurtured an open community and network rather than focusing primarily on Google's use cases, Matrix probably wouldn't be needed today.


So how usable is this at the moment? Can it be used with Vector yet to set up a fully working homeserver?


No. It's in very, very early stages of development and is not usable for its stated purpose yet. All the attention on HN is appreciated but unfortunately premature in terms of being able to promote it as a working program.


Awesome! I was selecting a Matrix implementation, but not anymore, I think. Thank you.


Matrix (the protocol) seems very interesting, I wasn't aware of it.


What is a "Matrix homeserver"?


Literally the first paragraph answers this.


Yes. "Matrix is an open specification for an online communication protocol. It includes all the features you'd expect from a modern chat platform"


But what is a "homeserver"?


For a lot more detail, including a better explanation of homeservers, take a look at the Introduction to Matrix guide in the documentation: https://www.ruma.io/docs/matrix/


apparently 'homeserver' = 'P2P node' running as a service on your system


Yes, missed that somehow. Thanks and sorry.


Somehow, I was expecting a virtual world simulator before opening up the link.


Relevant XKCD: https://imgs.xkcd.com/comics/standards.png

Neat project though.


I didn't downvote your comment, but I do want to provide a response for anyone else reading:

This comic gets linked quite a lot when discussing new systems, and I and many others see it as a fairly low-effort jab. I'd venture a guess that anyone reading HN has seen this comic before, and the creators of systems like Matrix are certainly not oblivious to it as common criticism.

While there is danger of becoming yet another variation that doesn't "win" or solve problems in a meaningful way, the folks developing systems like Matrix think they can do more to help by approaching these problems in a new way than they can by throwing up their hands and living with the problems, and I think that effort deserves respect.

It'd be more constructive to do some reading about the protocol, become more familiar with it, and share more specific criticisms that concern you than to drop a link to the comic everyone has seen before.


The problem is that Matrix is positioned right in the middle of the train wreck of unfulfilled promises left behind by Google Wave, XMPP and (for those of enough to remember it) Chandler[0].

It's something that a lot of people (including myself) really, really want to succeed, but all previous attempts have failed (even when only e-mail was around; now we have the data silos of Twitter and Facebook as additional obstacles).

Even after almost a decade, the well is still poisoned for everyone from Google's botched Wave rollout; if even Google (with the tremendous leverage of Gmail) was unable to pivot communication in this direction, who will?

[0] https://en.wikipedia.org/wiki/Chandler_(software)


I definitely understand the skepticism. If you search my HN comment history, you'll even find me saying that I'm skeptical that Matrix will achieve widespread success, too. But I still think the way to overcome that is to keep trying. The main edge that Matrix has over previous systems like Google Wave and XMPP is that it's been designed from the start to be bridged to other systems, rather than trying to fight them for mindshare. This means that Matrix can grow and be useful even if people don't "switch" from their existing systems to Matrix. That aspect of Matrix does make me hopeful and optimistic about its future.


Wasn't that exactly the idea of XMPP as well? (e.g. gateways to ICQ etc.)

https://en.wikipedia.org/wiki/XMPP#Connecting_to_other_proto...


Yeah, but XMPP never really did that. It just had the potential.

In the same way, yes, you CAN link IRC to Slack and XMPP, but how many people have?


Then my using ICQ and MSN transports with XMPP 10 years ago was just a fantasy, and all the transports installable for ejabberd do not exist? It's all just potential?


We have all been bitten by the unfulfilled promises of Wave, XMPP, Chandler (yes, I tried to use it), Psyc, SIP, etc etc

That doesn't mean that we should give up. One technology will finally get the right balance and take off, by learning from the mistakes of the precursors. Maybe that will be Matrix; maybe we will be another stepping stone in the journey - who knows.

One thing for sure is that big commercial for-profits like Google are absolutely not the best positioned to fix problems of cross-industry collaboration and defragmentation. Which is why Matrix attempts it as a neutral non-profit.


Doesn't everybody think they can do better? The xkcd people also knew about the 14 previous standards which they could have tried improving. But they had a new approach, never before attempted, to finally solve the problem for real.


My criticism is not of the comic itself (which is obviously very relevant and a legitimate concern), it's of the glib way the commenter I was replying to simply dropped a link to it as their only input on the conversation.


Hey man, that's what I thought... ;-)




