
It's astonishing to me that I hit Ctrl-F and searched for "sandbox" and "software compartmentalization" and there are no hits in that doc.

Chrome-like sandboxing seems to be the current state of the art, and it's complementary to all the techniques mentioned. There will always be vulnerabilities, but forcing attackers to chain several of them to get into the system is what actually has the effect of "dramatically reducing" successful attacks.

I don't see any numbers in that paper either, which seems like a big oversight.

Chrome has proven to be more difficult than other browsers to exploit, and this is with hundreds of thousands of dollars on the line. I think the Pwn2Own escape in 2016 required the attacker to chain 4 exploits. I don't think people hacking servers need 4 exploits today.

https://en.wikipedia.org/wiki/Pwn2Own

The browser operates in a more extreme environment than a server: it contains tens of millions of lines of C++ code, and it is exposed to attackers (web sites) by billions of people continuously throughout the day.

This is also the solution that doesn't require rewriting hundreds of millions of lines of code, which is an economic impossibility in the short term.

There is some current research on how to help programmers split programs into trusted and untrusted parts, e.g.:

http://dl.acm.org/citation.cfm?id=2813611




>It's astonishing to me that I hit Ctrl-F and searched for "sandbox" and "software compartmentalization" and there are no hits in that doc.

They use the term container. It's a subheading:

2.2.1 Operating System Containers

"...Container-based isolation can clearly reduce the impact of software vulnerabilities if the isolation is strong enough."


Yeah I just noticed (see my other comment). But I still think the paper is weirdly unbalanced and divorced from the realities of software development.

I think current events show the urgency around cybersecurity in government. But the paper sounds more research-oriented, whereas I would expect it to be more about accelerating current practice (and also debunking technologies like Docker, which are sloppy with regard to security).

Maybe I'm misunderstanding the purpose of this paper though.


It seems to me to be a bit of a guideline or general list of things to be aware of when developing. It's fairly short, at least compared to many government reports, so it can be digested quickly and put in the ol' toolbox.


People hacking servers don't need 4 exploits, but hacking servers is still difficult relative to hacking clientside code, because servers (a) have a more limited attack surface than clients and (b) are less performance-sensitive and thus can rely on memory-safe languages.

Docker and seccomp-bpf are techniques for sandboxing server applications, and they're useful, but they run into the same problem as sandboxing any complex system: the privilege to do many of the things the application ordinarily does --- writing a row in a database table, for instance, or creating a new queue entry --- can unpredictably result in privilege escalation.

The techniques you need to apply to make sure that basic application actions don't escalate privileges are largely the same as the ones you'd take to ensure the application doesn't have bugs.

Don't get me wrong: I think more and more applications will take advantage of serverside sandboxes, to good effect. But I don't think it's going to be a sea-change for security, and I don't think it's valid to compare the "4-exploit chain" needed for Chrome to the situation in a complicated web application.


I don't know that Docker should be considered security sandboxing. From https://docs.docker.com/engine/security/security/ :

    Running containers (and applications) with Docker
    implies running the Docker daemon. This daemon 
    currently requires root privileges, and you should 
    therefore be aware of some important details.


Docker definitely doesn't follow least privilege. If it were least privilege, then the Docker daemon WOULDN'T EVEN EXIST.

For example, Chrome's sandboxing tool (minijail I think) and systemd-nspawn are NOT DAEMONS. They just set the process state and exec().
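To illustrate the pattern (this is a minimal sketch, not minijail's or nspawn's actual code): a tiny launcher sets up the restricted state in its own process and then exec()s the target, so no privileged daemon stays resident. Error handling is trimmed, and the chroot path and UID/GID below are placeholders; it has to start as root, and a real tool would bind-mount the target's dependencies before confining it:

    /* launcher.c: set restricted process state, then exec().
       Illustrates the no-daemon launcher pattern, not any
       specific tool. Must start as root. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s cmd [args...]\n", argv[0]);
            return 1;
        }
        /* Fresh mount and network namespaces for this process. */
        if (unshare(CLONE_NEWNS | CLONE_NEWNET) != 0) {
            perror("unshare"); return 1;
        }
        /* Confine the filesystem view, then drop root for good.
           /var/empty and 65534 (nobody) are placeholder choices;
           a real tool bind-mounts what the target needs first. */
        if (chroot("/var/empty") != 0 || chdir("/") != 0) {
            perror("chroot"); return 1;
        }
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop ids"); return 1;
        }
        /* All state is set; replace ourselves with the target. */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 1;
    }

Everything above runs once and disappears into the exec(); there is no long-lived root process left to attack afterwards.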

Docker is sloppy as hell from a security perspective. It is indeed embarrassing that they mention it in this paper.

Docker has also enshrined the dubious practice of dumping an entire Linux image into a container, and then installing a bunch of packages from the network on top of that.

And now you need a "container security service" as mentioned in the article. How about you just understand what your application depends on, rather than throwing the whole OS plus an overestimation of dependencies in there?


I see your point, but I would say two things:

1) You might be underestimating how far a typical server configuration is from least privilege. It's true that you need to write to database tables or a queue service for the app to work, but a typical configuration has the capability to do hundreds or thousands of other things that are not necessary for the app.

If you have a PHP app running inside an Apache process, how many system calls are needed? What about PHP as a FastCGI process under nginx? Linux has 200+ system calls, and I'm sure the number needed is a small fraction of that.
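To make that concrete, here is roughly what a default-deny syscall allowlist looks like with libseccomp. The allowed calls below are illustrative guesses, not a measured profile for PHP or Apache; you'd derive the real list by tracing the application (e.g. with strace):

    /* allowlist.c: default-deny seccomp filter via libseccomp.
       Build with: cc allowlist.c -lseccomp
       The allowed calls are illustrative, not a measured profile. */
    #include <seccomp.h>
    #include <unistd.h>

    int main(void) {
        /* Any syscall not explicitly allowed kills the process. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
        if (!ctx) return 1;

        int allowed[] = { SCMP_SYS(read),  SCMP_SYS(write),
                          SCMP_SYS(close), SCMP_SYS(brk),
                          SCMP_SYS(mmap),  SCMP_SYS(munmap),
                          SCMP_SYS(exit_group) };
        for (unsigned i = 0; i < sizeof(allowed)/sizeof(allowed[0]); i++)
            seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0);

        if (seccomp_load(ctx) != 0) return 1;
        seccomp_release(ctx);

        write(1, "still alive\n", 12);   /* allowed */
        /* An open() or socket() here would now kill the process. */
        return 0;
    }

Even an over-broad list like this cuts the kernel attack surface from a few hundred entry points down to a handful.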

What about the privileges for connecting to local daemons? Most people are dumping huge Linux images into Docker and then installing tons of packages by default. Last time I checked, apt-get install will START daemons for you.

It takes some work to figure out what privileges a given application needs, and that's probably why people don't do it. But I would contend it's much less hard, and much more bang for the buck, than a lot of the techniques mentioned in this paper.

2) You might be underestimating the diversity of privileges needed by different application components ("microservices" if you will). For example, in distributed applications I've dealt with, the auth code doesn't need to talk to all the same back ends that the normal application logic does. If it lives in a separate process, which is a good idea for a number of other reasons, it can have different privileges than the rest of the application.
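As a sketch of that split, here's the classic privilege-separation pattern: two processes over a socketpair, where the auth side would keep only credential-store access. The protocol and the "alice" check are made up purely for illustration:

    /* privsep.c: hypothetical auth/app split over a socketpair.
       The auth child retains only credential-store access; the
       app parent talks to its back ends but never holds auth
       privileges itself. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 1;

        pid_t pid = fork();
        if (pid == 0) {                 /* auth process */
            close(sv[0]);
            /* Drop everything except credential-store access here. */
            char user[64];
            ssize_t n = read(sv[1], user, sizeof(user) - 1);
            if (n <= 0) return 1;
            user[n] = '\0';
            /* Made-up check; a real one consults the store. */
            const char *verdict = strcmp(user, "alice") ? "no" : "ok";
            write(sv[1], verdict, strlen(verdict));
            return 0;
        }
        close(sv[1]);                   /* app process */
        write(sv[0], "alice", 5);
        char buf[8] = {0};
        read(sv[0], buf, sizeof(buf) - 1);
        printf("auth says: %s\n", buf);
        waitpid(pid, NULL, 0);
        return 0;
    }

A compromise of the app process then yields at most an ask-the-oracle interface, not the credential store itself.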

In any case I view this as a kind of "walk before you run" type of thing. If you can't even figure out what privileges your application needs, how are you going to get programmers to add pre- and post-conditions, and apply symbolic execution and techniques like that? The paper sounds absurd and divorced from the realities of software development.

Now that I read the paper, they do devote just 2 pages to "containers and microservices" (which are weirdly buzzword-y terms for a traditional practice). It still seems unbalanced to me.


The OP Secure Browser invented the process-isolation-with-kernel model for browsers in 2008. It was based on what high-assurance security had been doing since the 70's. Native Client watered its security down a bit in 2009 to make it faster and to work with an unsafe language. The result, plus some regular stuff, was the Chrome sandbox. Far from the cutting edge of high-assurance security for browser architecture or isolation in 2016.

Lots of potential for improvement using tech developed since then that does way better, or older tech that does the job a little better.


As you said, the goal with OP [1] (and in the initial Chrome [2] and MS research [3] work) was really to apply what we had learned about isolation from operating-system-level security (over the last several decades) to how we build web browsers. Dropping privileges and routing calls via a smaller kernel should not be cutting edge today.

edit adding references: [1] https://wkr.io/public/ref/grier2008op.pdf [2] https://seclab.stanford.edu/websec/chromium/chromium-securit... [3] https://www.microsoft.com/en-us/research/publication/the-mul...


To add to it, in browsers we have things like Illinois Browser OS, ExpressOS as a mobile example, and just running instances on a separation kernel in a safer language or with mitigations, like INTEGRITY-178B, GenodeOS, etc.

https://www.usenix.org/legacy/event/osdi10/tech/full_papers/...

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9D1...

On the language side, there are things like SVA-OS for automating it if they don't want to do rewrites in a safer language.

http://safecode.cs.illinois.edu/sva.html

Capability security was used to combine usability with higher security, esp. if they got rid of the Java crap underneath E. I can't recall whether this niche has made any recent improvements on effortless software security with reasonable overhead.

http://www.combex.com/tech/darpaBrowser.html

On hardware, we have tagged CPUs like SAFE; even better, a C-compatible one with capability security running FreeBSD, called CHERI; those designed for safe languages, like jHISC; and those that protect software and fight malicious peripherals with confidentiality & integrity at the page level within the memory subsystem.

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/

All of these have at least prototypes, with a subset deployed commercially by interested parties. The separation kernel approach has the most suppliers, with CodeSEAL being one processor + compiler approach that saw some adoption outside of small stuff for smartcards. CodeSEAL was confidentiality + integrity of pages + control-flow enforcement with access controls for execution paths. Quite a few companies are also building stuff using tools like Astree & SPARK that prove the absence of critical errors in code written in a restricted style. One just commercialized a combination of QuickCheck with C programs. It always happens when a small company with decent funds is willing to do it.

Lots of shit going on. I haven't even gotten into formal methods as applied to software (down to machine instructions) or hardware (down to gates). I haven't mentioned all the advances in automated or spec-driven testing with tools backing them. Hell, there's so much to draw on now for anyone who has studied high-assurance security for a long time that my current problem is where to start on tackling future problems. I have to filter, rather than find, ways to knock out classes of vulnerabilities or show correctness. I about need to stop learning most of this shit just to properly sort out all the newest findings, but improvements are coming at a phenomenal pace compared to the 90's or early 2000's. :)

Note: This is all possibly part of a larger theory I have about how knowledge gets siloed and/or forgotten in IT due to different cultures with different teachings passed on. My idea was a curated resource collecting as many things that got results as possible, with open discussions from people in the many groups. Can't build it right now, but I mention it for feedback periodically.

Note 2: One of the best ways I find real (aha, HA) security versus mainstream when looking for new papers or tech is to put this in quotes: "TCB". All high-assurance engineers or researchers care about the TCB, with its LOC usually listed. They also tend to list their assumptions (aka possible screwups). This filter, plus the words browser and security, took me to Gazelle, OP2, and IBOS last run. To be clear, Gazelle didn't have TCB in the document but described it & was in another's related work. That's all it took, given most efforts at hardening browsers apparently don't know what a TCB is or don't care. Works well with other things, too. Sadly.


OK so I take it you agree that these techniques are not well-represented in the NIST report?


In the NIST report, in mainstream security recommendations... all kinds of places. Yeah. The only credit I'll give the NIST report is that it has some techniques that were used to build some of what's on my list.


> Chrome-like sandboxing seems like the current state of the art

Where do you get that from? Ever heard of the macOS Seatbelt framework? Jails in BSD?



