JMAP: A better way to email (2014) (fastmail.com)
253 points by robert_foss on May 7, 2017 | 51 comments



Just as an update since then: a JMAP working group was chartered at the IETF a few months ago, and is working towards bringing the spec to RFC status: https://datatracker.ietf.org/wg/jmap/about/


Thanks for posting that, was about to say the same thing. Am one of the co-chairs. AMA.


After reading the "Why is JMAP better than IMAP?" section on http://jmap.io/, what it seems to boil down to is small efficiency gains and an easier time for people writing clients. It doesn't look to me like users of clients will really gain much from their client using JMAP instead of IMAP. There's a comment about not requiring a persistent connection being better for mobile clients, but I've been using IMAP push with K-9 on my phone for a long time now, and I've never noticed a degraded battery life from using it.

From a quick look at the protocol, it seems that in order to get a list of mailboxes, you have to ask for them all. In IMAP, you can say: show me the top-level mailboxes, show me the direct child mailboxes of mailbox x, etc. In a previous role, I was responsible for working on a mail archiving solution which had a large hierarchy of folders (into the thousands IIRC) in a single account. Access to the archive was via IMAP. It looks to me like JMAP would require the client to potentially fetch megabytes of data just to start showing a list of folders, where the solution we had with IMAP probably took a couple of kilobytes at most to get started.
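For comparison, here is a minimal sketch in Python (using imaplib; the host, credentials, and folder names are placeholders) of how an IMAP client can walk a hierarchy lazily, one level at a time:

    import imaplib

    # Connect to a hypothetical IMAP server (host and credentials are placeholders).
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "app-password")

    # Fetch only the top-level mailboxes: "%" matches a single hierarchy level.
    status, top_level = conn.list('""', "%")

    # Fetch only the direct children of one mailbox, e.g. "Archive/2016".
    status, children = conn.list('"Archive/2016"', "%")

    conn.logout()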

It does make me think that the protocol may have been designed with less thought for uncommon use cases than IMAP was, but I've only spent a few minutes looking at the protocol, so I could be wrong.


You're very welcome to comment on the IETF mailing list if you think you can make a compelling argument for having getMailboxes also take a filter allowing you to build a tree with "parentId": [null] and then use the IDs from each of those responses to create a "parentIds": [x, y, z] query.
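To make the hypothetical concrete, such a filtered query might look something like this (written as Python literals standing in for the JSON; getMailboxes takes no filter in the current draft, so the shape is purely illustrative):

    # First request: roots only (None stands in for JSON null).
    roots_call = [["getMailboxes", {"filter": {"parentId": [None]}}, "#0"]]

    # Second request: direct children of mailboxes already on screen.
    children_call = [["getMailboxes", {"filter": {"parentIds": ["x", "y", "z"]}}, "#1"]]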

Of course nothing stops you storing thousands of mailboxes in a non-hierarchical layout and making IMAP have large LIST fetches too.

(I do wonder - did your solution via IMAP still fetch everything each time, because in that case all you're doing is spreading that megabyte out over multiple round trips)


> You're very welcome to comment on the IETF mailing list if you think you can make a compelling argument for having getMailboxes also take a filter allowing you to build a tree with "parentId": [null] and then use the IDs from each of those responses to create a "parentIds": [x, y, z] query.

I'm not invested enough to sign up to a mailing list and try to convince people to change a protocol. I'm only invested enough to fire off a comment on a forum I am already signed up to. I explained my use case above. Whether or not it is compelling is up to the reader to decide. Maybe I'm the only person in the world who has dealt with large hierarchies of mailboxes.

> Of course nothing stops you storing thousands of mailboxes in a non-heirarchical layout and making IMAP have large LIST fetches too.

Yes. I would issue the LIST command, and as the IMAP server started streaming the results to the client, I would immediately process each mailbox as soon as it arrived, rather than waiting for them all to be retrieved. I fear that with a JSON-based protocol, most server and client implementations wouldn't deal with streaming (although they technically could), and would just deal with completed blobs.

Does the Cyrus implementation construct the entire blob of JSON before it begins sending it to the client, or does it start constructing and streaming the JSON as soon as possible, before it has even fetched the entire list of mailboxes from whatever the backend data store is? Streaming JMAP seems like a more complicated thing for developers to implement than streaming IMAP.
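To illustrate the difference, a rough Python sketch - the JMAP response shape here is my reading of the 2014 draft (a [name, args, tag] triple whose args carry a "list" of mailbox objects), so treat it as illustrative:

    import json

    def render(entry):
        print(entry)

    # With a JSON body, the client typically cannot show anything until the whole
    # response has been received and parsed.
    def handle_complete_body(body: bytes):
        name, args, tag = json.loads(body)[0]
        for mailbox in args["list"]:
            render(mailbox)

    # An IMAP LIST reply, by contrast, arrives one untagged line per mailbox, so a
    # client can render each entry as soon as its line comes off the socket.
    def handle_imap_lines(sock_file):
        for line in sock_file:
            if line.startswith(b"* LIST"):
                render(line)
            elif line.startswith(b"A001 "):  # tagged completion ends the command
                break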

> (I do wonder - did your solution via IMAP still fetch everything each time, because in that case all you're doing is spreading that megabyte out over multiple round trips)

People were accessing this mailbox with many different clients. How each of those clients worked under the hood, I do not know. I did build a web frontend, and that web frontend worked efficiently, fetching only the folders that were being displayed to the user, and their immediate children. A user clicked a folder to expand it, and they would immediately see the pre-fetched child folders, and a new request to fetch those folders' children was then triggered.


Sadly protocols are made by those who show up and put the effort in.

I'll pass a link to this comment on to the list and see if others chime in. The downside of allowing filtered getMailboxes is more protocol complexity, but maybe it's worth it.

On to the next topic:

Cyrus creates the entire blob. Even when it's streaming a LIST reply it still batches a lot in memory. You'll discover pretty fast that if you don't do that, you might wind up holding a lock for a LONG time if you're trying to build consistent replies and the remote end is on a slow network link or just consuming slowly. So we batch all replies right now.

If I was building a brand new product I might use an engine which supported MVCC (like Postgres, for example) and hence have flexibility. Having said that - even then you might find that a server talking to slow clients would want to buffer to a temp file and free up memory and locks while it streamed the data.

With HTTP we get around this by having nginx handle all the mess, so I guess that's what will happen with JMAP too, nginx buffering large bodies to disk.


>Sadly protocols are made by those who show up

This is a very high burden to place on casual feedback from somebody trying to be nice.


I'm not quite sure how to respond to this. The IETF is aware that there's already a high burden on people, that decisions tend to be made mostly by the people who can get to meetings and talk in the rooms where standards are hashed out.

Having said that, I'm not sure I agree that it's an unreasonable burden to ask somebody to join a mailing list and defend an opinion if they want to influence a standard which will hopefully be useful for tens of years and implemented by hundreds or thousands of people. Any additional complexity for server and client implementers becomes quite a lot of work multiplied by all those people, and the additional burden of implementing a bigger standard may even be enough to discourage people from using it at all.

And it's a fact: the decisions are made by those who show up and put the work in. Wishing it were otherwise doesn't change the reality that we need to make tradeoffs and that the IETF favours rough consensus and working code. Both of which take work.

I will forward this discussion to the mailing list; that's about all I can do as chair. I won't proxy all the IETF work over to comments on HN!


That makes sense. I think you jumped too quickly to interpreting the original comment as an intent to influence the standard. I read it as just "here is a thought that might be useful/relevant to the people who are making the thing, so that they can use it as they see fit when making the standard".

No big deal either way!


> Sadly protocols are made by those who show up and put the effort in.

Yes. And like everybody, I have my own priorities, and will not apologise for JMAP not being one of them. Somebody like yourself who is actually invested in making JMAP a success should welcome people pointing out real-life use cases where the protocol you're trying to improve upon actually works better than the one you've come up with. I would hope you're spending significant effort trying to get people to look over your spec and point out flaws.


[flagged]


> I'm pretty sure it was welcomed, and you were invited to contribute.

I was invited to join a mailing list and try to convince people that my protocol change was important. There is nothing unreasonable about saying no to this request.

> (You're not coming off super well in this thread. "I have feedback and time to engage a HN subthread, but getting involved with the actual thing I'm criticizing is a burden I choose not to bear, and this is your problem and you should change to accommodate me.")

"I have useful feedback and I am offering it for free"

"Sign up to this list and try to convince people to do work"

"No thanks"

Wow, I'm such a bastard.


> Yes. And like everybody, I have my own priorities, and will not apologise for JMAP not being one of them.

That could have been said better considering he was taking your comment for possible action. If you don't think that comment is a bit snarky, so be it. I do.


Is this trend of replacing raw TCP/IP protocols with HTTP becoming more common, or does it just look that way to me? I get the feeling that HTTP is being stretched well beyond what it was designed for, and that it results in messy designs.


The most important thing is that JavaScript in the browser can only use HTTP, so if you want to build a web-based mail client, you won't have to build an HTTP proxy service; you can just implement the protocol on the client. Another important thing is proxies: many organizations don't allow anything but HTTP, so it's easier to just use it.

HTTP is not that bad, anyway. You don't have to use every quirk.


I agree with your comments about web-based e-mail clients, but:

> many organizations don't allow anything but HTTP, so it's easier to just use it.

Remember when we allowed for newer protocols to have their own ports and didn't blacklist everything but HTTP? Why are we doing this again, now? Please don't say security/firewalls or else I'm going to ram an ice pick in my eye.

If your organisation is blocking ports, they're probably filtering HTTP. Companies that care about filtering deny access to most personal e-mail on work machines. So basically you're saying "let's abuse HTTP so people can violate company policy."

You may or may not also be saying, "Let's build a brand new awesome shiny latest-and-greatest protocol... and permanently tie it to the technical-debt wreckage of HTTP/2."

Oh, and Google could just fix their terrible broken IMAP implementation.


(HTTPS, actually.)

HTTPS (when combined with JSON) provides requests and responses, without harsh size limits, with nice addressing. TCP provides a byte stream. A lot of people are finding it easier to express their upper-level protocols in terms of requests and responses than in terms of a byte stream.

Having to cope with NAT middleboxes breaking the byte stream doesn't help, of course.


There are benefits to using HTTP as a substrate, like request bundling on mobile to reduce radio and battery usage. JMAP is transport agnostic in its core form, but JSON for structure over HTTPS for transport is the current best choice for simplicity and developer friendliness.
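As a concrete example of that bundling, here is a sketch of one HTTP POST carrying two method calls (Python; the endpoint URL is a placeholder, and the call format follows the 2014 jmap.io draft where each call is a [name, arguments, client-tag] triple, so the details are illustrative rather than definitive):

    import json
    import urllib.request

    # A hypothetical JMAP endpoint; one round trip bundles several operations.
    JMAP_URL = "https://jmap.example.com/jmap"

    batch = [
        ["getMailboxes", {}, "#0"],
        ["getMessageList", {"limit": 20}, "#1"],
    ]

    request = urllib.request.Request(
        JMAP_URL,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        method_responses = json.loads(response.read())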


We can probably blame misguided corporate IT policies for starting this trend, by blocking most of the outgoing ports and migrating many newer protocols to use TLS over port 443 (although it doesn't really have to be HTTP to bypass corporate firewalls).

This is still the most often cited reason, but I don't think it's a good one anymore (as said before, nothing prevents you from using TLS).

If you have a stateless protocol dealing with autonomous requests, HTTP is quite frankly the best general-purpose transport layer for you. The only other well-supported transport layers are the message-based UDP, the stream-based TCP, TLS (which adds encryption), and SSH (which adds encryption and multiplexing).

If you've got a stateless request-based protocol, you want a message-response based transport, but UDP doesn't quite offer the same abstraction and lacks reliability guarantees and large-message support. With a stream-based transport, on the other hand, you're bound to end up with an ad-hoc, informally-specified, bug-ridden, slow implementation of half of HTTP [1].

Yeah, you probably don't need all of the features of HTTP for your protocol, but being able to use existing HTTP infrastructure solutions (such as proxies, load balancers, caches, testing tools, endpoint monitoring tools) is a great boon that you would never get with your own homegrown protocol.

[1] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule


How would you implement something like push in HTTP, for example? I remember that XMPP had to come up with some hacks (BOSH [1]) to do it over HTTP.

1. https://en.wikipedia.org/wiki/BOSH_(protocol)


Streaming (server trickles out a response that never ends) or long-polling (server intentionally waits before sending a full response) work very well for pushing data.

BOSH uses long-polling, and as someone from the XMPP community I'll admit it's a bit of a hack around HTTP, mainly because it's stateful.

However, it's entirely possible to have RESTful/stateless long-polling, for example in Dropbox's API: https://www.dropbox.com/developers/documentation/http/docume...
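A minimal stateless long-polling loop looks something like this (Python; the endpoint and timeout are made up):

    import urllib.error
    import urllib.request

    # The server holds each request open until it has something new (or a
    # timeout passes); the client simply re-issues the request each time.
    POLL_URL = "https://mail.example.com/events?timeout=55"

    def poll_forever(handle_event):
        while True:
            try:
                with urllib.request.urlopen(POLL_URL, timeout=60) as response:
                    body = response.read()
                    if body:             # an empty body means "nothing happened"
                        handle_event(body)
            except urllib.error.URLError:
                continue                 # network hiccup or timeout: just retry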


By the way, can't WebSockets be used to solve this? It would still use HTTP for initial connection, but essentially can work without relying on streaming and polling hacks. And it's not really limited to usage in browsers.


Yup, WebSockets could certainly be used for push.

I think the choice of HTTP vs WebSockets comes down to architecture style. WebSockets are optimized for bidirectional communication with a specific remote server. In other words, something about the service makes it better if client I/O is sticky, maybe due to an in-memory session.

HTTP can make more sense in a stateless architecture, where GETs and POSTs could potentially go to different servers in the cluster. Of course, one could use WebSockets statelessly too (have no messages depend on past messages for context).


I think that HTTP-based applications are "easier", because they're less likely to be clobbered by a firewall (which is fairly inherently hostile to Apple-grade users). In other words, the layer below (TCP/IP) is judged inadequate so everyone just jumps up a layer (HTTP) and deals with it there, even when it's a terrible idea.


I think maybe it's time to embrace many of the upgraded specs available in HTTP/2, which you can upgrade HTTP connections to anyway.

As for existing solutions that have been around for a while, HTTP keepalives implemented in a smart manner, and exposing APIs through a CDN, are huge difference makers.


I found this observation interesting:

"Our servers are in New York, and our developers are in Australia. Even on a good day, the ping times are over 230 milliseconds. Anything more than the absolute minimum number of round trips is felt very keenly."

I wonder if some web apps might be built better if developers were bandwidth throttled and latency hindered artificially on their own machines, for say, one day a week.


I think Facebook does/used to do something like this[1]

[1] https://blogs.wsj.com/digits/2015/10/27/facebook-slows-the-i...


This is a useful built-in feature of Chrome's dev tools, so for devs who care about their users it's trivial to use.

I think moving towards progressive web apps (PWA) should make the web much faster because good performance over poor bandwidths is one of the cornerstones of PWA.


Yes, this thing in chrome: https://developers.google.com/web/tools/chrome-devtools/netw...

PWA should help somewhat, but there is still quite a bit of javascript bloat and "too many assets" to combat. When my phone runs out of the included 4G and drops to 3G, many sites are unbearably slow.


Don't forget that developers make decisions within the boundaries prescribed by the release bookends, meaning it can certainly be valuable to the developer but not valuable to the team or larger organization. This very situation reminds me of the quote "forgive the length of this letter, for I didn't have time to make it shorter." Taking steps to optimize load times or page size requires effort, and then maintaining constant diligence against future laziness.


Thanks HN - while I'm away at Mt Gambier supporting my kid in http://generationsinjazz.com.au/ you post not one, but TWO things about my hobby horses. Turns out 6000 people in a field tends to overload the local cell, so I've had very spotty coverage.

Sitting in a bus with unreliable phone connection on my way back home trying to reply to everything now. Cheers.


"Turns out 6000 people in a field tends to overload the local cell"

Next time, try dropping down to 2G and you should make it through fine.


Good idea :) Though putting the phone away and cheering on our kids was pretty worthwhile too.


Email protocols are very old and network APIs have improved a lot during the last two decades. So, when FastMail (I’m a happy customer) introduced JMAP I was excited about it. But even then I didn’t anticipate much success for it. Big email providers (Gmail, iCloud, Outlook, …) don’t care about these things. And it’s a shame that email providers and clients are still wrestling with ancient protocols to provide a decent experience for saving drafts, search, folders/labels, and other basic things.


I hope it gains more traction. In the meantime, I've switched to Fastmail, and I couldn't be happier. It really is much faster than Gmail; I was very surprised, because I didn't think email could be so fast. I guess Gmail had become the new normal.


Except for search, really. I love FastMail but they need to work on the search speed. At least it's faster than Outlook.com though; that search speed is beyond unacceptable.


Yeah, because we fill out the whole count rather than returning just the first page, a search that returns tons of messages will be slower than gmail. You can limit the set of returned messages with time ranges.

One thing that's become a bit of a cliché in our Slack messages is "JMAP will fix it" - in this case, a lot of the cost is generating the message counts by iterating every match, and that's not going to be cheaper in JMAP. We don't have data structures that allow sorted partial search responses, so we can't generate the first page until we've resolved the search for every message :(


If you open a support ticket I can look at your search issues, ask for Bron. It shouldn't be slow unless you're deliberately bypassing the Xapian indexes by using substr searches.


How long is search supposed to take? Entering something common (like "gmail") takes around 5 seconds for me with 32k results. My old google mail account takes 1.5s to display the first page.


Looking at the Cyrus code, the server doesn't even start sending search results to the client until it has completed running the search. And due to the response being JSON, and there being no pagination in the protocol, presumably the Fastmail client won't start displaying the results until it has retrieved them all either. So I'm not surprised it's slow.


Are you looking at JMAP getMessageList here? It paginates the response (offset/anchor and count), but yes - it has to build the complete list. Not that we're using that in FastMail production yet; we're still using XCONVMULTISORT. It's roughly the same algorithm though.

The biggest expense is all the index.c stuff to support IMAP protocol requirements. There's a lot of cost in there which will go away with JMAP, and which will go away even more when we replace the separate cyrus.index files with a structured database per user. That's still got a ton of work to do though... one of the goals for long term Cyrus improvements.


Does this allow the client to say, "give me all results matching x", and the server to reply with the first N results plus an indication that there are potentially more, so the client can then ask for more in multiple requests until it has them all, progressively showing results until they're all retrieved?

That's how I would expect a JSON based protocol to work for pretty much all types of requests that provide potentially large lists in a response. I would expect getMailboxes to work this way too to be honest.
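For what it's worth, the paging described above (position/offset and count) supports roughly that pattern. A hedged sketch of such a loop in Python - the endpoint URL, the "position"/"limit" arguments, and the "messageIds"/"total" response fields follow the 2014 draft as I read it, so take the exact names as illustrative:

    import json
    import urllib.request

    JMAP_URL = "https://jmap.example.com/jmap"   # placeholder endpoint
    PAGE_SIZE = 50

    def fetch_all_message_ids(search_filter):
        ids, position = [], 0
        while True:
            call = [["getMessageList",
                     {"filter": search_filter, "position": position, "limit": PAGE_SIZE},
                     "#0"]]
            request = urllib.request.Request(
                JMAP_URL,
                data=json.dumps(call).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(request) as response:
                name, args, tag = json.loads(response.read())[0]
            page = args["messageIds"]
            ids.extend(page)          # a real client would render this page now
            position += len(page)
            if not page or position >= args["total"]:
                break
        return ids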


More recent info at http://jmap.io


Why not improve on IMAP with extensions? I can't see how the "extending IMAP only makes it worse" argument holds. How would adding an additional protocol (JMAP) to a buggy client make the situation any better? I can see how it's easier for Fastmail as a company to get its changes out by making another protocol, but wouldn't we benefit more by trying to improve on IMAP? It's very unlikely this will ever replace IMAP and SMTPS; historically the internet has always tried to iterate on and improve standards rather than replace them completely. There has been great effort to improve SMTP and encryption lately - what can we learn from that?


If you want to try out JMAP on a provider that is not FastMail, you can use a proxy:

https://proxy.jmap.io/

I've not seen it deployed anywhere else yet as a production thing, and I don't know how much progress has been made on standardisation of it, but it does seem to offer vast improvements over IMAP.


The proxy is a pretty crappy Perl JMAP server layered on top of a middling-quality IMAP sync layer. The CalDAV/CardDAV bits are a bit better because it has nicer synchronisation primitives, but it's still code I slapped together over a couple of weeks of evenings. It's fine for playing with, but I wouldn't try to use it for large mailboxes. There's not much optimisation in there; it's doing pretty basic sqlite queries against a fairly normalised database, so there's no denormalisation for efficiency.


Do Fastmail themselves let you use JMAP to connect now? They didn't a while back but I haven't checked recently.


Not yet, though it's being used internally for a few things already.

The IETF working group and the Cyrus IMAP source code are the most up to date right now. I'll be updating the proxy (Perl, open source) soon.


I was looking for a way a week ago and couldn't find it, but maybe it's just hidden really well.


What about encrypting/signing? Does it address that too?


Explicitly not. Key management is a far from solved problem. We're not trying to fix everything.


Happy to see this get more attention again. I highlighted it in last week's cron.weekly [1] too.

I'm still unsure of the viability at this point, but time will tell.

[1] https://www.cronweekly.com/issue-78/



