It's amusing to see that it got a security overview _after_ it shut down its business operations. I am curious whether it got a security review when it was a proper company, given that their USP is security.
I am still sad that the business didn't work out :( I thought there was going to be a follow-up blog post on where they ended up getting acqui-hired?
We actually didn't get acquihired -- Sandstorm is still an independent company, and I'm still the CEO and majority shareholder, but we no longer have paid employees. We're continuing to work on Sandstorm, and have even pushed some major new features in the last couple weeks (e.g. the Powerbox UI for connecting apps to each other and to the outside world with explicit user permission).
What did happen is we're all getting new full-time jobs elsewhere, since we aren't making enough money to pay ourselves. Most of us actually haven't started our new jobs yet (taking a little break) but we'll have a blog post when it happens in a couple weeks.
Probably not -- finding a company willing to hire someone to deploy Sandstorm for them sounds at least as hard as finding a company willing to buy Sandstorm as a product and pay Sandstorm Inc. for support, which is what we had trouble with. I'm sure they exist but enterprise sales are non-trivial. :/
That said, my new employer is a big user of Cap'n Proto, a sub-project of Sandstorm.
SSRF is an extremely bad vulnerability; it's usually game-over on penetration tests. The zip-file path validation bug is also bad.
I'm pretty ambivalent about these "we got a security review, they said we're good" updates, even when they include the actual contents of the report (the final contents of the reports you actually see are almost always negotiated between the client and the testers).
It is a real problem for the industry that there's no clarity to be had about what it means to have had an assessment, what the different assessors' capabilities are, how engagements are scoped, &c. I tend to mistrust organizations that use audit results to claim a clean bill of health --- or anything like that --- but more and more projects do that now, so I don't know how valuable that rule of thumb will remain.
I'm not sure this blanket statement -- probably derived from the world of SaaS -- is necessarily helpful in the context of Sandstorm. Keep in mind that Sandstorm is meant to host internal-facing services. One doesn't normally expect that an external attacker will have authority to create a full user account and install their own apps, which is necessary to exploit this particular vulnerability. (It's actually the app, not Sandstorm itself, making the requests; Sandstorm failed to prevent apps from making requests to the private network.)
On Sandstorm Oasis, the service we run which does allow arbitrary visitors to create full user accounts (possibly the only Sandstorm server worldwide that does this), the SSRF did not provide access to anything sensitive.
I'm of course not saying it wasn't a problem -- I described the severity as "high" in the post.
> I'm pretty ambivalent about these "we got a security review, they said we're good" updates
To be clear, I never made any such claim. The post reports the facts: a security review occurred, and some pretty tricky-to-find bugs were found and fixed. I'm sure there are other bugs to be found.
I'd very much like to receive further reviews from other parties.
> Keep in mind that Sandstorm is meant to host internal-facing services.
If the goal is not to run internet facing services, why is the project so focused on security? In the enterprise, there is already F5, NIDS etc so nobody can get in. Is sandstorm trying to prevent employees from hacking the company or something?
We don't think monolithic firewall-based security has been very successful at preventing hacks. Our goal is to create an environment that involves much more fine-grained separations, and enforces security properties at the platform level so that bugs in apps are largely mitigated. We want you to be able to deploy apps without having to security-review them first, which means the platform itself must provide guarantees.
Arguably an app-driven SSRF is a pretty big problem in that threat model. I think we missed it earlier because we imagine a future world where people don't expose unauthenticated services on the internal network and rely on their firewall to protect them. Of course, we need to keep in mind that the existing world isn't going to go away when people deploy Sandstorm and so we need to handle both worlds gracefully.
Another point to make is that we do envision use cases where someone sets up a personal server and invites their friends to it to chat and collaborate -- usually as "visitors" (can't install apps), but sometimes as full users sharing one server. Typically you'd only invite trusted friends to be "full users", though, unless you are running a hosting service. Hosting services (like ours) ought to be extra-careful with multiple layers of security.
That's an argument people have been making for at least 10 years, and it falls apart pretty quickly: how secure do you think most companies would be if you opened up all their AWS security groups to the world?
Right. As I said, it's not the case today, for most companies (with Sandstorm ourselves, as a company, being an exception). With most infrastructure people use today, leaving services unauthenticated makes life easier, so people are going to do it.
One of the goals of Sandstorm is to make it easy to connect services to each other where desired without making them open to the world, with the goal of solving this sort of problem.
> In the enterprise, there is already F5, NIDS etc so nobody can get in.
Completely aside from the fact that there are thousands of breaches that prove that statement wrong, one must also be concerned with insiders, credential theft attacks, and tons of other threats that a monolithic 'build a wall' security model doesn't solve.
Sandstorm's model is close to what I call MSaaS, or managed software as a service: services that are managed the way SaaS is, yet hosted on-prem inside the company's own network. The equivalence comes from companies selling software "services" by deploying whatever SaaS "packages" are required, which can then be accessed as a whole.

I think it's extremely important to highlight the security requirements for this type of deployment, and I don't think the focus on security should go away just because the code runs in a private location.
> Keep in mind that Sandstorm is meant to host internal-facing services.
This is not really obvious from any of your marketing copy or documentation, nor would it be a realistic expectation if it were. I think you need to secure like your users don't know or understand your intentions.
Unfortunately, we've had trouble expressing what Sandstorm is in web page format, because it's so different from anything else out there. People tend to try to pattern-match it to something else and get the wrong idea. This has been a constant struggle. But once you actually try it, I think it becomes a lot clearer.
There are literally two Sandstorm servers in the world that allow self-service creation of full user accounts (one of which is run by us). The rest are by invite only, which means that to launch an attack, you'd first have to trick the server admin into giving you an invite. That's certainly not impossible, but it is a significant barrier.
That said, again, I do agree this was a real problem -- we do think it's bad if invited users can compromise the server or its network. I'm not trying to claim otherwise, I'm just trying to put everything into full perspective and avoid hyperbole.
That's true only to the extent that anything that can trigger SSRF is CSRF-safe. I'm not familiar with Sandstorm, but I assume you're saying that's the case here.
Sandstorm mitigates most CSRF in apps by hosting each grain session on a randomly-generated subdomain. That is, every time a user opens an app instance, they talk to it on a new unguessable subdomain.
Granted, a network-level attacker who can sniff the victim's DNS resolutions could discover the hostname and launch a CSRF attack. But, that's a much, much higher barrier than normal. (And the app's own internal CSRF protection still applies.)
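To make the mechanism concrete, here's a minimal sketch of the idea -- illustrative only, not our actual implementation; the wildcard host and helper name here are made up:

```typescript
// Hypothetical illustration: each time a user opens a grain, the platform
// mints a random, unguessable hostname for that session. A CSRF attacker who
// doesn't know the hostname has nothing to target.
import { randomBytes } from "crypto";

const WILDCARD_HOST = "*.example-sandstorm-host.com"; // assumes wildcard DNS + TLS cert

function newSessionHost(): string {
  // 128 bits of entropy; hex encoding keeps it a valid DNS label.
  const token = randomBytes(16).toString("hex");
  return WILDCARD_HOST.replace("*", `ui-${token}`);
}

// Every call yields a fresh, unguessable origin for the grain session,
// e.g. "ui-3f9c1a...example-sandstorm-host.com".
console.log(newSessionHost());
```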
(We're also working on some other tricks to mitigate CSRF for defense-in-depth, e.g. relying on Chrome's use of the "Origin" header, though there are currently some issues blocking rolling this out.)
And more generally Sandstorm's fine-grained containerization model, which among other things mitigates many app vulnerabilities: https://sandstorm.io/how-it-works
Understanding the importance and impact of pivot points in long term penetration seems to undermine "it's internal so it doesn't matter". Not that you're alone in that view but the "hard crunchy shell, soft chewy center" security model needs to die...
Agreed. It is difficult to properly set expectations for assessment results due to the cultural demand for a clean bill of health. No one wants to be sold "meaningful insight" into their security posture, they want to be sold a report that says no vulns are present after they're fixed.
Fundamentally, I believe the security consulting industry is due for a radical shift, probably instigated and led by Hackerone and Bugcrowd. Unfortunately there is a lot of inefficiency in the industry that allows consulting firms to exist as they do now.
For the most part, my clients come to me for an assessment because they have a measurable business need - lucrative customer A is demanding an external third party assessment. This is the primary use case for which I feel comfortable - my time at Accuvant (now Optiv) left me deeply uncomfortable with the rote way that security assessments could be nosebleed expensive for frankly questionable work (e.g. $10k/week/assessment for reviewing brochure websites for large companies - for the most part employees knew what they were doing, it was just overpriced and unnecessary).
In a lot of ways security assessments are inflated in price because they're somewhat like insurance. Truly exceptional vulnerability researchers could and probably should be earning half a million to a million a year. Watching them work is a beautiful blend of art and science. They are underpaid. On the other hand, merely competent or outright mediocre "penetration testers" are overpaid by way of de facto rent collecting.
If I were to run a productized software firm now and no particular customer demanded a third party assessment, I'd honestly never commission one. Instead, I'd open a bug bounty program and dial the rewards up, then welcome specific people to come find vulnerabilities (people like Frans Rosen of Detectify, Jack Witton of Facebook, Egor Homakov a competitor of mine and in this very thread ;) or Bitquark at Tesla - not sure of his real name off the top of my head).
I have utter confidence that for essentially everything but cryptanalysis, a generously priced bug bounty is plainly superior to any given firm's commissioned assessment in raw results. It's not quite as turnkey or comforting, but it's effective. Hackerone and Bugcrowd even field reports for managed programs these days. I believe this wholeheartedly enough that I would (and have in the past!) advise potential new clients against the interests of my firm in this direction if they didn't require the assessment for an external third party or regulatory compliance.
Once they really perfect the researcher signal/noise rating system, Hackerone and Bugcrowd are going to take the top 100-1000 researchers on either platform and wrap their current activities into a neat layer of turnkey abstraction, call it a formal assessment and legitimately disrupt the pricing of the security consulting industry.
Worth noting that in the case of this security review of Sandstorm, the customer of the review was a Sandstorm user, namely the government of Taiwan, which probably means incentives were aligned better than if we had commissioned the review directly.
Many of these problems were in third-party libraries. It would be cool if Sandstorm were written with capability-safe languages like E or Monte or Pony which encode Sandstorm's security properties into the structure of the language.
Sandstorm is of course a huge fan of capabilities, but unfortunately the available capability-safe languages out there do not currently have the kind of ecosystem needed to be really productive. Instead, Sandstorm compromises by using a capability-based RPC layer, Cap'n Proto, which is heavily based on E's CapTP.
With that said, I don't think it's really true that a capability-based programming language would have avoided these problems.
1. For the Nodemailer problem, no ambient authority was used to split the email into two addresses. A capability-based implementation could have done the same thing. This is more of a langsec issue in that the API was a bit foot-shooty.
2. If the zip implementation were completely rewritten in a capability language, then sure, this vulnerability could have been avoided. It also could have been avoided if zip accepted NUL-delimited filenames rather than newline-delimited (a minimal sketch of that failure mode follows this list). It's not really practical to rewrite the world in another language, unfortunately.
3. SSRF can be avoided using capabilities -- forcing the attacker to present a capability, not just an address, to any third-party server they wish to access (a rough sketch of what that looks like also follows this list). Ironically, though, this is a networking issue, not an in-process issue, so what we really need is stricter application of capabilities at the network layer, rather than a capability programming language. Sandstorm is actually willing to push capabilities at the network layer. The trouble is, the network is often used to talk to the rest of the world, which isn't usually capability-based. Hence, we have to make compromises.
4. The Linux kernel bug would maybe have been avoided if the kernel were written in a capability language, but that's a pretty enormous undertaking. Alternatively, it could have been avoided if we forced all our apps to be written in capability languages, but that would mean that no existing codebase could be ported to Sandstorm, which is far too large a cost. That said, I would like to have some special support for apps written in capability languages someday, e.g. to let the user know that this app is extra-safe.
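For point 2, here's a hedged sketch of the failure mode and a minimal guard -- illustrative only, not the actual fix that went in:

```typescript
// Hypothetical illustration: if a tool takes its file list as newline-delimited
// text, a filename containing "\n" smuggles in an extra entry. A minimal guard
// is to refuse such names before building the list at all.
function buildFileList(filenames: string[]): string {
  for (const name of filenames) {
    if (name.includes("\n") || name.includes("\r")) {
      throw new Error(`refusing filename with embedded newline: ${JSON.stringify(name)}`);
    }
  }
  return filenames.join("\n");
}

// A NUL-delimited protocol (as with `find -print0` / `xargs -0`) avoids the
// problem entirely, since NUL can't appear in a path at all.
```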
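And for point 3, a very rough sketch of what "present a capability, not an address" can look like at the API level. The names and the token scheme here are assumptions for illustration, not how the Powerbox actually works:

```typescript
// Hypothetical illustration: the app never gets to name raw network addresses.
// Instead, the user grants it an opaque token that the platform maps to one
// specific endpoint, so there's no arbitrary address for an attacker to abuse.
import { randomUUID } from "crypto";

type CapToken = string;

const grantedEndpoints = new Map<CapToken, URL>(); // populated when the user approves a request

function grantEndpoint(url: URL): CapToken {
  const token = randomUUID(); // unguessable handle standing in for the endpoint
  grantedEndpoints.set(token, url);
  return token;
}

async function fetchWithCapability(token: CapToken, path: string): Promise<Response> {
  const base = grantedEndpoints.get(token);
  if (!base) throw new Error("no such capability");
  const target = new URL(path, base);
  // Reject absolute URLs that would escape the granted endpoint.
  if (target.origin !== base.origin) throw new Error("path escapes granted endpoint");
  // The app can only reach hosts the user explicitly granted; internal
  // services it was never granted simply aren't reachable.
  return fetch(target);
}
```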
Put simply, going all-capabilities is just not practical today, and we have to make compromises in order to make meaningful progress.
I'm honestly not sure. I haven't been able to read the original report (which is not in English). My guess is that it did not receive direct attention.
I note that Cap'n Proto did receive some scrutiny from security guru (and personal friend) Ben Laurie in the past: