The initial assumption with these types of services is always that an attacker, having compromised their servers, could simply inject malicious code into the browser.
From their docs:
> Normally, when hackers get access to your server, they can change the code that gets sent to customers. For example, they could make the code say "send your password to us". Then, even though they can't read documents immediately, passwords start coming in and they can soon read them.
> To solve this, we're using a relatively new web technology (Service Workers) to install some code which can't be changed without setting off a warning to you. That code then keeps taps on all other code, and checks that it matches the publicly available version on GitHub.
The Service Worker doesn't simply check a signature from the developer; it checks the source code against the version on GitHub. So, to know whether it contains a backdoor, read the code on GitHub.
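Roughly, the mechanism looks something like this. This is only a minimal sketch of the general idea, not Airborn's actual code; the repo URL and the way the warning is surfaced are placeholders:

```ts
// Runs in the Service Worker global scope. Sketch only: the GitHub repo URL
// and the warning mechanism are placeholders, not Airborn's real implementation.
const GITHUB_RAW = 'https://raw.githubusercontent.com/example-org/example-repo/master';

self.addEventListener('fetch', (event: any) => {
  const url = new URL(event.request.url);

  // Only verify same-origin application code; let everything else pass through.
  if (url.origin !== self.location.origin || !url.pathname.endsWith('.js')) {
    return;
  }

  event.respondWith((async () => {
    // Fetch what the server actually served and what GitHub says it should be.
    const [served, reference] = await Promise.all([
      fetch(event.request).then(r => r.text()),
      fetch(`${GITHUB_RAW}${url.pathname}`).then(r => r.text()),
    ]);

    if (served !== reference) {
      // A real implementation would warn the user visibly; here we just refuse
      // to hand the mismatched code to the page.
      return new Response('Code does not match the published version on GitHub.', { status: 502 });
    }
    return new Response(served, { headers: { 'Content-Type': 'application/javascript' } });
  })());
});
```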
It's similar to the keybase.io identity model (using popular services as de facto authorities). It might be overkill, but it could be nice to supplement that with a check of the repo mirrored on e.g. Bitbucket and GitLab.
Yes, definitely. GitLab's API is very similar to GitHub's, except that it doesn't support CORS. If that's fixed it should be pretty simple to add support for GitLab.
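For what it's worth, GitHub's contents API already sends CORS headers, so the GitHub half of such a check can run entirely in the browser today; a GitLab or Bitbucket mirror could be added the same way once CORS is available there. A rough sketch, with owner, repo, and path as placeholders:

```ts
// Fetch a file's contents from GitHub's REST API directly from the browser.
// The API returns the file base64-encoded; owner/repo/path here are placeholders.
async function fetchFromGitHub(owner: string, repo: string, path: string): Promise<string> {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/contents/${path}`, {
    headers: { Accept: 'application/vnd.github.v3+json' },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const body = await res.json();
  return atob(body.content.replace(/\n/g, ''));
}

// Hypothetical multi-host check: require the served code to match GitHub, and
// (once CORS allows) a GitLab or Bitbucket mirror as well.
async function matchesPublishedSource(path: string, servedCode: string): Promise<boolean> {
  const github = await fetchFromGitHub('example-org', 'example-repo', path);
  return servedCode === github;
}
```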
I wonder what they do about the Service Worker updates that the browser normally performs every day or so. Do they somehow block install and activate events? If not, the attacker just needs to update the worker and history repeats itself.
There's no way to block activate events, but there's a way to delay them. In the meantime, you can check the new Service Worker file against GitHub, and if it doesn't match, warn the user.
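Concretely, the page can listen for update notifications and compare the waiting worker's script against GitHub before it takes over. A sketch of the idea (the URLs are placeholders, and the "warning" here is just an alert):

```ts
// Page-side sketch: when the browser finds an updated Service Worker, compare the
// new script against the published version before it activates. URLs are placeholders.
const GITHUB_SW_URL =
  'https://raw.githubusercontent.com/example-org/example-repo/master/sw.js';

async function watchForServiceWorkerUpdates(): Promise<void> {
  const registration = await navigator.serviceWorker.register('/sw.js');

  registration.addEventListener('updatefound', async () => {
    // The updated worker is installing; fetch its source and the GitHub copy.
    const [served, reference] = await Promise.all([
      fetch('/sw.js', { cache: 'no-store' }).then(r => r.text()),
      fetch(GITHUB_SW_URL).then(r => r.text()),
    ]);

    if (served !== reference) {
      // Activation can only be delayed, not blocked, so warn loudly before it happens.
      alert('Warning: the updated application code does not match the version on GitHub.');
    }
  });
}
```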
I like how they don't flaunt crypto terms all over the place. Calling encrypted content "gibberish" is fun.
And I think it's a genius use-case for Service Workers. From their security page[0]: "we're using a relatively new web technology (Service Workers) to install some code which can't be changed without setting off a warning to you. That code then keeps taps[sic] on all other code, and checks that it matches the publicly available version on GitHub."
Combine this with HSTS, and you can be certain the code running hasn't been modified by a third party.
To be precise: If it works as described, it makes it (a little? substantially? orders of magnitude?) more difficult for third parties to modify the code.
Very true. You probably still need to trust that the developers' GitHub accounts aren't compromised. I was looking at their repo[0] for this Service Worker verification, and their "So what's the problem this solves?" section confuses me, as it doesn't explain the how. :/
While a hacker gaining access to the developers' GitHub account would be bad, they would still have to actually push the malicious code to GitHub before they can serve it from airborn.io. So, if people pay attention to pushes to GitHub, this attack could still be detected (but not prevented). For prevention, one possibility would be to require all commits to have been on GitHub for at least 24h or so. Then, the devs would have some time to try and get their accounts back. We don't implement that today, though.
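If one did want to enforce that, the client could check the age of the latest commit through GitHub's API before trusting newly served code. A purely hypothetical sketch (owner/repo are placeholders, and as noted above, this isn't implemented today):

```ts
// Hypothetical check (not implemented): only trust code whose latest commit has
// been public on GitHub for at least `hours` hours. Owner/repo are placeholders.
async function latestCommitOlderThan(owner: string, repo: string, hours: number): Promise<boolean> {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/commits?per_page=1`);
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const [latest] = await res.json();
  const committedAt = new Date(latest.commit.committer.date).getTime();
  return Date.now() - committedAt >= hours * 60 * 60 * 1000;
}

// Usage: refuse freshly pushed code until it has been publicly visible for a day.
// latestCommitOlderThan('example-org', 'example-repo', 24).then(old => { /* ... */ });
```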
That section attempts to explain how web apps work today, if you don't use that library. Reading the entire thing back, I agree that the how is never explained very well, although https://www.airborn.io/docs/security does explain it.
That’s ironic because I have about 500 emails from this Patrick guy spamming my inbox from 2014-2016. I’m quite sure I never opted in; I just searched my inbox for a subscription confirmation email and could not find it.
Seems like Patrick is in this thread, so sorry if I’m off-base, but I really would like to know what I did to deserve hundreds of emails from “Patrick at Price Intelligently,” because I’m pretty sure I never subscribed to his list.
(It’s not just him, btw - the worst are the GitHub profile scrapers that somehow manage to make it into my “primary” inbox)
Hey there - this doesn't sound good (and definitely not our intention). We ported over some contacts here and there from different marketing migrations, but always made sure to filter out any unsubs and the like. I just looked in our DB for the email address in your HN profile and I'm not seeing it anywhere, so I'm assuming it's a different email address.
Ironically, could you send the email address to patrick[at]priceintelligently[dot]com and I'll investigate. If it slipped in or you ended up actually opting in, I'll definitely take it out. If it's a larger issue, I'll definitely solve it. Obviously, apologies either way.
It was probably my school email (which still works). Either way, the emails stopped in 2016. So no worries, it’s not worth looking into, but I sincerely appreciate your responding. Good luck with the business.
(Porting contacts between lists seems dangerous btw. IANAL but pretty sure subscribing to one list does not count as opting-in to another.)
Yea - found you actually, and can share meta details privately if you'd like. Fortunately it wasn't the DB port.
On that front we automatically just don't touch any of the unsubs for the new system, so was worried that was messed up. Doesn't look like it though. Not a lawyer either, but from my understanding it's the same list even if you change marketing automation/email products. GDPR is making this fun, too. :)
Airborn needs to charge its business users more. If there's one common lesson on Indie Hackers, it's that everyone (including customers) stands to gain from charging more.
If the Airborn creators are on this thread: how did you think about getting people to pay while making the source available? It doesn't seem to be commonly done.
We're basically banking on medium-size businesses finding it more convenient to use a hosted service, especially when it offers the same or a higher level of security.
As usual, the discussion about whether something is secure enough should start from describing the various attack vectors (which will be different for different classes of users). AFAIK, the Airborn product tries to eliminate attacks on its own servers / data stored on servers and in transit. While that is a good thing, by itself it doesn't necessarily make all users of the product secure. In particular, they ignore attacks on the client itself, injection on the network, and several other "offline" attack vectors (e.g. taking pictures of the user's screen). I understand the desire to make marketing simple for end users, but I really don't think that it helps people make rational decisions about security.
- When you create a collaboration link, it contains an encryption key. (You can additionally also set a password.) All messages sent to the server by collaborators are then encrypted with that key (and password, if set); there's a rough sketch of the general idea after this list. The collaboration algorithm is currently quite simple, and paragraphs are locked while they're being edited. In the future we'd like to use a P2P algorithm, e.g. using Y.js. [1]
- When you sign up, the page downloads a file to your PC which contains your username, and your password encrypted with a "password recovery key". If you lose your password, we send you your password recovery key and you can decrypt your password with it.
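To make the first point concrete, here is a minimal sketch of a key-in-the-link scheme using the Web Crypto API. It's a generic illustration, not Airborn's actual key format or wire protocol:

```ts
// Generic sketch of a key-in-the-link scheme with the Web Crypto API.
// Not Airborn's actual format; the part after '#' is never sent to the server.

async function createShareLink(docId: string): Promise<{ url: string; key: CryptoKey }> {
  const key = await crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 }, true, [
    'encrypt',
    'decrypt',
  ]);
  const raw = new Uint8Array(await crypto.subtle.exportKey('raw', key));
  const encoded = btoa(String.fromCharCode(...raw));
  // Putting the key after '#' keeps it out of every request the browser makes.
  return { url: `https://example.invalid/share/${docId}#${encodeURIComponent(encoded)}`, key };
}

async function encryptForCollaborators(key: CryptoKey, message: string): Promise<Uint8Array> {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = new Uint8Array(
    await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, new TextEncoder().encode(message))
  );
  // Send iv + ciphertext to the server; the server only ever sees this blob.
  const payload = new Uint8Array(iv.length + ciphertext.length);
  payload.set(iv);
  payload.set(ciphertext, iv.length);
  return payload;
}
```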
Login being broken is weird, do you see any errors?
Certainly - although the JS can also just read the document and submit that. To make sure that it isn't doing either, you'd have to read the code on GitHub.
Great way to comply with GDPR! Also presumably reduces your data management overhead slightly, although it seems you're still storing data on your own systems[1]. This reminds me of SpiderOak's "No Knowledge" claim[2].
(Disclaimer: I don't know much at all about the current standards of encryption)
Isn't this kind of security already standard, and practiced by companies like Dropbox? As a side note, I like the collaboration aspect, and one-person-per-paragraph is a pretty smart idea!
Edit: To those downvoting, sorry, I'm just curious and I think some of us were unaware of the differences between Airborn and other services, which I've now learned can still view your data
Most "encrypted communications" means that no unauthorized third parties can view the data as a result of the encryption. However, even if Dropbox stores your files encrypted, they still have the keys to those files and so could be coerced into decrypting (them through a warrant, for example).
Have you considered increasing your pricing, and adding more premium features? As a supporter of client-side encryption in cloud apps, I really want to make sure you have a sustainable business. https://training.kalzumeus.com/newsletters/archive/saas_pric...
A Premium Enterprise support Platinum Plan for $500/mo can be a big seller and allegedly the minimum expense report you can file in some large orgs [1].
I just tried using this app with one of my friends and I found that editing in real time between two people wasn't working as intended. When someone else wrote text, the text I wrote (in a separate paragraph) would disappear.
Is anyone else seeing this? It needs a bit more polish before I could use it for encrypted collaborative editing.
Like others, I also like the novel Service Worker solution (mitigation might be more accurate) to client-side code integrity.
I’m more interested in how you handle the encryption keys. Specifically, how does a user share a document with another? You mentioned that the “share group” has its own private key; where is this stored?
The encryption key is in the share link, and also stored encrypted in all collaborating users' accounts (if they have one - an account is not required to collaborate).
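On the receiving side, and assuming the key lives in the URL fragment (the part after '#', which the browser never includes in the HTTP request), the key can be read back out of the link without the server ever seeing it. A rough sketch, again generic rather than Airborn's exact scheme:

```ts
// Sketch: recovering the key on a collaborator's machine. Everything after '#'
// stays in the browser, so the server serving the page never sees the key.
async function importKeyFromLink(): Promise<CryptoKey> {
  const encoded = decodeURIComponent(location.hash.slice(1));
  const raw = Uint8Array.from(atob(encoded), c => c.charCodeAt(0));
  return crypto.subtle.importKey('raw', raw, { name: 'AES-GCM' }, false, ['encrypt', 'decrypt']);
}
```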
So the user is responsible for securely transmitting the share URL? i.e. Secrets are shared out-of-band?
It’s pretty clever, I like it. Definitely not the holy grail of client side browser-based multiparty encryption, but you’ve found some innovative techniques. Bravo, good luck with it.
So I remember trying this before, and it had a 'desktop' and an app store where you could add more apps, but right now the demo seems to be limited to documents and presentations.
Is that a change in focus or a limitation of the demo?
Yeah. I've been focusing on office apps rather than "any apps", and I removed the window manager based on feedback here and elsewhere that it felt too "heavy" and unnecessary. Also, the app store was actually the Firefox OS marketplace, which was shut down. (And to be honest, the apps in it weren't super useful for Airborn to begin with. There weren't very many apps for "serious work".)
Under the hood, it's theoretically still super easy to add/install apps in Airborn, though, and Firefox OS's marketplace server is open source, so maybe in the future?
I can agree with that; the window manager was really neat (as I've always indicated, these sorts of cloud platforms are like virtual OSes), but it probably didn't add much to usability.
Performance seems really smooth, I'm glad to see this is still coming along quite nicely.
Even if it isn't the best UI, I think you really do still need to get something like EtherCalc on here. Even for personal use, I love spreadsheets for storing structured information, and for business users it's almost definitely a need.
Interesting strategy to list updating the software as a premium feature. Will you make old versions of the software permanently available to customers to install on new devices? Will you support your v1 API forever?
Non-business accounts update to the latest version automatically. If you log in to an old, non-updated account using a new device, you get the old version. We might eventually notify old accounts that they have to update, but we haven't needed to do so yet. If you pay close attention to the requests made by https://www.airborn.io/demo, you can see "v2" :)
You can create a blockquote element by pressing Tab, although it won't have any special styling. If you want, you could add some CSS in Raw view. I'll look into making a control for this specifically, although IIRC browser support for blockquotes in contenteditable is a bit inconsistent.
Regarding inline styles, I don't remember why, sorry. It might again have been to fix some inconsistency between browsers.
Thanks for the information. FWIW, my main issue with inline styles is that my use case would be taking the HTML into another system, and the more pervasive the inline styles are, the more likely they are to override the desired styling contained in stylesheets produced by that system. They may help your product work better purely on its own terms, but they are (IMO) highly suboptimal when HTML leaves your system and travels.
This made the rounds on Reddit a few months ago, and it was called airbornOS back then. Glad to see they dropped the "OS" part, as it really isn't an OS.
Interested to see how robust it is...